Sam Altman claims “deep learning worked”, superintelligence may be “a few thousand days” away, and “astounding triumphs” will incrementally become ...

Professor Emeritus

Have you tried using their most recent model, o1?

While it isn't perfect, I see how in a few iterations it could surpass human experts in things like mathematics and scientific research. And with agents, it'll soon be like having a thousand PhD students working on whatever issue you hand it. It's hard to see how this won't be revolutionary and contribute massively to scientific progress.

I agree that Sam isn't trustworthy, but I don't think he's just overhyping A.I. to sell his product.


The only aspect of "mathematical and scientific progress" that I quoted was his claim that AI would solve all physics. And we don't even know that physics is solvable, much less that AI could do it. It very well could be that the information necessary to solve physics isn't even available to us because we lack the physical capacity to measure it. And so far we don't know that AI can solve ANYTHING in physics other than brute-force questions (which might make it rather similar to a thousand PhD students lol).


Besides that, everything else I quoted consisted of wild claims about social progress. And it's quite debatable whether AI will be responsible for ANY social progress, much less the wild utopian progress that Altman claims. This and his other quotes on the matter suggest that Altman knows nothing about social issues - literally nothing, as a potential sociopath in addition to not being in that world - so why would he be the expert on what would solve them?
 

bnew

Sam Altman is not responsible for creating OpenAI's products, and I don't see how their current products justify those claims.


the underlying method used to train the recent models, and the models they have in-house, has shown significant improvement and led them to believe they will eventually achieve artificial superintelligence; solving reasoning is a big part of it. it's also entirely possible he could be underselling the timeline too.
 

Professor Emeritus

I think it's kinda justified if you assume automated artificial intelligence will help solve problems at an unprecedented rate, which will lead to new innovations and new discoveries: finding cures to illnesses, building on the science we have now to the point where it shaves decades or centuries off human research and scientific advancement.


Except those sorts of discoveries rarely lead to significant social progress. Most sick people die early of diseases we've already cured. Most poor people lack resources we already have available. Technology of whatever sort won't make the Israelis free Gaza or Saudi/Iran stop their proxy war in Yemen, it won't remove the Putins and the Jinpings from power, it won't wrest India or Burma or Cambodia or Afghanistan away from genocide cults or get warring groups in Burkina Faso, Central African Republic, Congo, Ethiopia, Nigeria, Somalia, South Sudan and Sudan to throw down their arms.

In fact, current observations suggest that technological advancements have made fascism easier, not harder.

I'm not so sure about the resource wars, because that seems predicated on the assumption that we'll rely on the same resources we do now in greater quantities, when I think advancements in materials science will make resource wars less warranted.

We're already depleting resources at an unsustainable rate, despite most of the world being far, far poorer than Americans. Considering that even middle-class AMERICANS feel they're resource-deprived and want more more more, exactly how much more are we going to have to supply when EVERYONE wants to be as wealthy as an upper-middle-class American?

Right now we're projecting towards something like 4x the ecological footprint that the Earth can sustain. So that means you'd have to count on materials science reducing our use of every resource by 75% or more... and that would just be to break even at "average American" levels. If you want to stop resource wars, you have to do far better than break even.
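As a rough sketch of that arithmetic (the 4x overshoot figure is the projection quoted above, not a measured value):

```python
# Back-of-the-envelope check on the overshoot claim above.
# Assumption: humanity's projected ecological footprint is ~4x what the Earth
# can sustain (the figure used in the post, not data computed here).
projected_overshoot = 4.0

# Fraction by which resource use would have to fall just to break even
required_reduction = 1 - 1 / projected_overshoot

print(f"required reduction to break even: {required_reduction:.0%}")  # -> 75%
```

The point is just that a 4x overshoot already implies a 75% cut to break even, before anyone else's consumption rises toward American levels.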
 

Professor Emeritus

Among all the other issues, one I never see Altman's type look at is what the technology will do to the mental and emotional strength of humans.

I work with the teenagers who grew up in the smartphone era. They're a fukking wreck. In terms of mental skills, they're way behind because they've always asked their phones to do everything for them. (Test scores have been dropping for 10 years, and that doesn't even come close to reflecting the real issue, because they're being taught to game the tests, and the skills that standardized tests don't measure are even worse.) In terms of socioemotional skills, they're terrible at relating to people and would rather spend time on their phones than talk to friends. In terms of grit and work ethic, that's probably the very worst - all they want to do is be on their phones, getting the dopamine hits of swiping through endless tiktoks, and they put damn near zero effort towards anything that's not social media.

What happens when we get to the next generation, who grows up on ChatGPT and has always asked it to do EVERYTHING for them? The generation who has never written a paragraph, much less an essay. The generation who has never tried to memorize anything, but just asked ChatGPT for the info. The generation that doesn't even think about how they want to reply to their friend, but just asks ChatGPT to write the reply.

Even the richest man in the world is completely addicted to social media and has become a flaming train wreck as a result. And he didn't have social media until well into adulthood. How much worse will it be for teenagers, especially teenagers in poor socioeconomic situations?

Don't think about what AI will do for the top 0.1% who do scientific research, or even the top 10% who know how to use it ideally. Think about the effect AI is going to have on the mental and social and emotional processing of the other 90% of the population.


The most likely scenario I see is for the mass of humanity to degrade in both intellectual reasoning and moral strength and become even that much more easily controlled and manipulated by bad actors than they already are.
 
Yall love talking doom and gloom. Especially when it comes to AI.

You really think a machine is smarter than you? I’m not even talking about processing information faster, I’m talking real intelligence.

That’s like people claiming artificial flavors are better than the real thing.
this presumes that men aren’t, in some measure, creatures whose brains operate according to computational algorithms far in advance of what other animals are capable of.

if we’re able to create an entity who can process information millions of times faster than we can (and even that staggering number may be underselling it), then it stands to reason these machines will possess an intelligence that is orders of magnitude greater than our own.

:francis:
 
