Good series of posts about A.I (posting the first)
I do think there is a lot of validity to what she's saying. A tremendous amount actually.
Remember, recently many PhDs across fields like Computer Science, Physics, Math, Engineering, and Biology, from Stanford, Georgia Tech, Caltech, M.I.T, U.I.C, etc., have said A.I probably won't truly reach the level people are shilling it at for another 15-30 years, possibly another 50, if it ever even does. This is what insanely smart people, working with other insanely smart people from other fields, have to say. That's the thing: no one knows. We need better tech breakthroughs, and when will those happen? Nobody knows. It took centuries for Calculus to be invented once we had Algebra. The discovery can be right in our face while we glance over it for centuries, or it can be far off and take centuries to reach. It's like Terence Tao once said: it's not up to any one person or group of people to make a scientific breakthrough. When nature presents the opportunity, the time comes. You can't force scientific breakthroughs. This isn't something you brute-force or optimize your way into; it's most likely going to be something novel and orthogonal, so nobody really knows.
Think about it: if we produce a sentient A.I, that means we'll have a clue about how we came about as humans. Do you think we're close to finding that out?
Chances are it's not going to happen in the next 2-3 years like every shill on social media, every Twitter echo chamber, and every solo brogrammer working on their own little something says.
Unfortunately, this Google news gets that latter group of people going. A significant number of people are affected, unfortunately, and that's legit news. However, Google is usually on point, and chances are they took a really hard look at their entire company and brought in reliable minds before making a decision like that. I know how that company works, and it definitely ain't on a whim: their test suite alone is batshyt insane, with world-class coverage where nearly everything has tests. That's how cautious they are, and that's the infrastructure they've had for decades. Most likely there have been red flags with that department for a while, for one reason or another.
However, like I've told other people as well: many CEOs have made, and will keep making, the big mistake of overestimating A.I and letting go of talent critical to what they're doing, a mistake that'll be highly costly or even irreversible. They ain't Google; even the other big dogs are not on Google's level.
Companies have literally laid off entire departments because of early versions of ChatGPT, and now they're down bad for it, if they're even still around.
This A.I hype has caused managers to cut corners, and more often than not it's only made them pay for it.
That said, it's still good we're jumping on this issue now rather than 20 years from now. That's the one positive thing the overblown hype has done: as we see A.I Executive Orders being issued, it has actually opened up legit, demanding discussion regarding UBI. Which is funny, because the same people who used to call UBI "handouts" are now freaking out about their jobs being jeopardized because CEOs are overestimating shytty A.I, which in turn is putting a focus on the shytty CEOs they used to defend.