Sam Altman claims “deep learning worked”, superintelligence may be “a few thousand days” away, and “astounding triumphs” will incrementally become ...

Professor Emeritus

Veteran
Poster of the Year
Supporter
Joined
Jan 5, 2015
Messages
51,330
Reputation
19,666
Daps
203,884
Reppin
the ether
Have you tried using their most recent model, o1?

While it isn't perfect I see how in a few iterations it could surpass human experts in things like mathematics and scientific research. And with agents, it'll soon be like having a thousand PhD students working on whatever issue you hand it. It's hard to see how this won't be revolutionary and contribute massively to scientific progress.

I agree that Sam isn't trustworthy, but I don't think he's solely overselling A.I. to move his product.


The only aspect of "mathematical and scientific progress" that I quoted was his claim that AI would solve all physics. And we don't even know that physics is solvable, much less that AI could do it. It very well could be that the information necessary to solve physics isn't even available to us because we lack the physical capacity to measure it. And so far we don't know that AI can solve ANYTHING in physics other than brute-force questions (which might make it rather similar to a thousand PhD students lol).


Besides that, everything I quoted were wild claims about social progress. And it's quite debatable whether AI will be responsible for ANY social progress, much less the wild utopian progress that Altman claims. This and other previous quotes on the matter suggest that Altman knows nothing about social issues - literally nothing as a potential sociopath in addition to not being in that world - so why would he be the expert on what would solve social issues?
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
57,346
Reputation
8,496
Daps
160,036
Sam Altman is not responsible for creating ChatGPT's products, and I don't see how ChatGPT's current products justify those claims.


the underlying method used to train the recent models and the models in-house has shown significant improvement and has led them to believe that they will eventually achieve artificial superintelligence; solving reasoning is a big part of it. it's also entirely possible he could be underselling the timeline too.
 

Professor Emeritus
I think it's kinda justified if you assume automated artificial intelligence will help solve problems at an unprecedented rate, which will lead to new innovations and new discoveries: finding cures to illnesses, building on the science we have now to the point where it shaves decades or centuries off human research and scientific advancement.


Except those sorts of discoveries rarely lead to significant social progress. Most sick people die early of diseases we've already cured. Most poor people lack resources we already have available. Technology of whatever sort won't make the Israelis free Gaza or Saudi/Iran stop their proxy war in Yemen, it won't remove the Putins and the Jinpings from power, it won't wrest India or Burma or Cambodia or Afghanistan away from genocide cults or get warring groups in Burkina Faso, Central African Republic, Congo, Ethiopia, Nigeria, Somalia, South Sudan and Sudan to throw down their arms.

In fact, current observations suggest that technological advancements have made fascism easier, not harder.






I'm not so sure about the resource wars, because that seems predicated on the assumption that we'll rely on the same resources we do now in greater quantities, when I think advancements in materials science will make resource wars less warranted.

We're already depleting resources at an unsustainable rate, despite most of the world being far, far poorer than Americans. Considering that even middle-class AMERICANS feel they're resource deprived and want more more more, exactly how much more are we going to have to supply when EVERYONE wants to be as wealthy as a middle-upper-class American?

Right now we're projecting towards something like 4x the ecological footprint that the Earth can sustain. That means you'd have to count on materials science reducing our use of every resource by 75% or more... and that would just be to break even at "average American" levels. If you want to stop resource wars, you have to do far better than break even.
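A quick sanity check on that arithmetic (using the post's own 4x overshoot figure, which is a projection, not a measurement):

```python
# If projected demand is 4x what Earth can regenerate, sustainable use is
# 1/4 of projected use, so consumption must fall by 1 - 1/4 = 75% just to
# break even at that consumption level.
projected_overshoot = 4.0  # projected footprint / sustainable footprint (figure from the post)
sustainable_fraction = 1 / projected_overshoot
required_reduction = 1 - sustainable_fraction
print(f"Required cut in resource use: {required_reduction:.0%}")  # prints "Required cut in resource use: 75%"
```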
 

Professor Emeritus
Among all the other issues, one I never see Altman's type look at is what the technology will do to the mental and emotional strength of humans.

I work with the teenagers who grew up in the smartphone era. They're a fukking wreck. In terms of mental skills, they're way behind because they've always asked their phones to do everything for them. (Test scores have been dropping for 10 years, and that doesn't even come close to reflecting the real issue because they're being taught to game the tests and the skills that standardized tests don't test are even worse.) In terms of socioemotional skills, they're terrible at relating with people and would rather spend time on their phones than talk to friends. In terms of grit and work ethic, that's probably the very worst - all they want to do is be on their phones, getting the dopamine hits of swiping through endless tiktoks, and they put damn near zero effort towards anything that's not social media.

What happens when we get to the next generation who grows up on ChatGPT and has always asked it to do EVERYTHING for them? The generation who has never written a paragraph, much less an essay. The generation who has never tried to memorize anything, but just asked ChatGPT for the info? The generation that doesn't even think how they want to reply to their friend, but just asks ChatGPT to make that reply.

Even the richest man in the world is completely addicted to social media and has become a flaming train wreck as a result. And he didn't have social media until well into adulthood. How much worse will it be for teenagers, especially teenagers in poor socioeconomic situations?

Don't think about what AI will do for the top 0.1% who does scientific research or even the top 10% who know how to use it ideally. Think about the effect AI is going to have on the mental and social and emotional processing of the other 90% of the population.


The most likely scenario I see is for the mass of humanity to degrade in both intellectual reasoning and moral strength and become even that much more easily controlled and manipulated by bad actors than they already are.
 
Joined
Oct 22, 2017
Messages
34,006
Reputation
2,093
Daps
166,255
Yall love talking doom and gloom. Especially when it comes to AI.

You really think a machine is smarter than you? I’m not even talking about processing information faster, I’m talking real intelligence.

That’s like people claiming artificial flavors are better than the real thing.
this presumes that men aren’t, in some measure, creatures whose brains operate according to computational algorithms far in advance of what other animals are capable of.

if we’re able to create an entity who can process information millions of times faster than we can (and even that staggering number may be underselling it), then it stands to reason these machines will possess an intelligence that is orders of magnitude greater than our own.

:francis:
 

bnew
FdtadhH.jpeg
 

inndaskKy

Superstar
Joined
May 6, 2012
Messages
11,861
Reputation
2,667
Daps
42,554
Reppin
NULL
Not this scam ass talking point again :snoop:

These people have no sense of responsibility, accountability or imagination whatsoever. They're still claiming nuclear bombs made the world a better place too and that science is "neutral".
 

bnew







1/32
@slow_developer
The reason OpenAI say AGI is the only goal is because THE SINGULARITY is the goal.

Once AGI is achieved, recursive self-improvement will be mind-blowing.



2/32
@RickKiessig
I claim that recursive self-improvement of AI will require *understanding*. However, as shown by Gödel's incompleteness theorem, understanding is not computational. We need leaps, not curve-fitting.



3/32
@slow_developer
or perhaps the leaps you mentioned could emerge from complex interactions within a sufficiently advanced AI, even if those interactions are ultimately based on computation... what you think?



4/32
@Kad_Tal
Post-AGI: ASI + Singularity probably in less than 2 years



5/32
@slow_developer
not that fast 👀



6/32
@arandomessence
If AGI will allow us to achieve pure understanding with unprecedented problem solving capabilities, then the only thing we focus on today is automating AI research.

“This could easily dramatically accelerate existing trends of
algorithmic progress, compressing a decade of advances into a year.”

- Leopold Aschenbrenner



7/32
@slow_developer
we need to be careful not to get so caught up in the 'how fast' that we lose sight of the 'why' and 'for what'



8/32
@AlgorithmMD
AGI already achieved. Just look at it from a hindsight point of view. All the clues are present, incontrovertibly.



9/32
@iamuqba
AGI leading to the singularity would be a paradigm shift, but the real question is how we manage such immense power. It’s a game changer, but we need to proceed with caution



10/32
@CohorteAI
Exactly! OpenAI’s focus on AGI (Artificial General Intelligence) is because it's seen as the key to unlocking The Singularity—a point where AI systems can improve themselves autonomously. Once AGI is achieved, the potential for recursive self-improvement could lead to exponential advancements in intelligence, creating a truly transformative shift in technology and society. The implications are staggering, and it's clear why this is the ultimate goal for OpenAI and others pursuing AGI. We may be on the verge of something mind-blowing



11/32
@benyogaking
will machine intelligence escape human intelligence ... how many recursive improvements until the people monitoring it no longer understand the improvements?



12/32
@carpioverse
How will we know (precisely) when/if the singularity takes place?



13/32
@mirojurcevic
“Fake it till you make it” will be an important principle for the foreseeable future.

This can be said for politics, AGI, podcasting and friendships



14/32
@LarryPanozzo
Hard, fast, explosive takeoff



15/32
@signal_el
Basically this



16/32
@jackadoresai
I see your point about AGI being a game-changer, but what about the potential risks?



17/32
@gnomethrower
We're already entering that phase.



18/32
@deter3
strong faith in openai /sam is already a mind-blowing thing .



19/32
@_maximura
I'll put out what I think is a controversial take, and I'm prepared to have a healthy debate on this.

I think we are not ready for AGI, and AGI scares me for this reason - recursive self-improvement.



20/32
@BeyondtheCodeAI
AGI by 2030. Thoughts?



21/32
@LebelBrosAI
AGI is what will call the reality in AI upcoming fixtures



22/32
@TheAI_Frontier
That's a long goal, and a revolution is necessary. We're not there yet.



23/32
@pulssart
Is AGI truly the final goal, or just a step? What defines the Singularity's success?



24/32
@Prashant_1722
Nearing AGI itself will change a lot of things



25/32
@deepwhitman
yup - it really is that simple

everything else from immigration to climate change to who should use which bathroom is a distraction when you truly understand what the singularity means



26/32
@jacobi_torsten
Once AI self assembles and properly simulates most of reality - all bets are off in terms of progress.



27/32
@taichnocrat
Everyone leaving OpenAI is actually moving to an island and prepping. The singularity is coming 😱



28/32
@talktoai


[Quoted tweet]
omfg.


29/32
@GorgeousTech
and after that...where will humans be? "dead" by machines since they don't need humans to perfect themselves?



30/32
@AlexAlarga
Recursive AI self-improvement will indeed be
"mind-blowing".





31/32
@lanerbop
It’s crazy that in 10 years or less, the economic landscape will likely be completely different than it is now, or ever has been in the past



32/32
@Yus_Way







To post tweets in this format, more info here: https://www.thecoli.com/threads/tips-and-tricks-for-posting-the-coli-megathread.984734/post-52211196
 

Professor Emeritus











I have never, ever heard someone I trust or respect use the word "singularity" in the way that those people use it.

I first ran into that use of the term in Peter Diamandis's book "Abundance: The Future Is Better Than You Think", and the people using it sounded like freaking cult members who knew nothing whatsoever about what they were talking about. They claimed they could change the whole world - poverty, environment, science, etc. - without actually knowing any of those fields, just spewing constant and unjustified techno-optimism. Ever since I read that book, I've kept a lookout for the word, and it's always used by untrustworthy people who don't know what they're saying.

These people believe that issues of inequality, violence, overconsumption, resource depletion, environmental destruction, etc. are all irrelevant because the Singularity will solve everything... and if it doesn't, we can escape to Mars.
 

bnew









1/25
@rohanpaul_ai
Fantastic piece from Anthropic CEO.

Dario Amodei, believes powerful AI could come as early as 2026

And this is his definition of "Powerful AI"

[Quoted tweet]
Machines of Loving Grace: my essay on how AI could transform the world for the better

darioamodei.com/machines-of-…




2/25
@rohanpaul_ai
"The resources used to train the model can be repurposed to run millions of instances of it (this matches projected cluster sizes by ~2027), and the model can absorb information and generate actions at roughly 10x-100x human speed5. It may however be limited by the response time of the physical world or of software it interacts with."





3/25
@roninhahn
I hate to beat a dead horse.





4/25
@rohanpaul_ai
Hope it will happen 🚀



5/25
@Colter
I agree with this more than AGI is around the corner. All AI model growth is slowing and topping the same ceiling.



6/25
@rohanpaul_ai
yep



7/25
@PostPCEra
glad to see, finally a rational individual depicts the capabilities of "powerful AI", and it's potential to transform various fields and it's interactions with physical & software world

even if it come by 2030, world get huge shocks

>powerful AI could come as early as 2026



8/25
@rohanpaul_ai
exactly, can wait a lifetime for that kind of machine power 😄



9/25
@TheJohnEgan
show me the math for that



10/25
@rohanpaul_ai
surely they have some basis of calculation



11/25
@KrakowiakK
Nobel prize winner on my local desktop, hmmm intriguing 😉



12/25
@rohanpaul_ai
Isn't it 🔥



13/25
@GM0x_8e08
Sign in - Google Accounts





14/25
@rohanpaul_ai
nice



15/25
@scifirunai
I love Dario Amodei's optimism about AI's potential to transform various fields. Powerful AI by 2026 is a bold prediction - excited to see how it plays out!



16/25
@rohanpaul_ai
👍👍



17/25
@HolgersenTobias
Looks good to me 🖖



18/25
@gpt_biz
Sounds interesting! AI is evolving fast, can't wait to see what 2026 holds



19/25
@LooveThyJourney
Not to mention, excellent memory. And efficient communication with multiple copies of itself. The logistical capabilities will be enormous.

Have it teach people how to make Tik Tok and YouToube dance videos and it may just cure depression across the population all in one go ✨🚀



20/25
@wangcyyc
a great job!



21/25
@johnny1tap
The whole thing with interfaces and agentic behavior is already here there just isn't a conglomerated product that everyone can point to and say look there it is. The proficiency level stuff obv. has a ways to go and there are definitely big questions regarding that...



22/25
@lolomovia
AI makes some people's lives better while making others' lives worse. AI users are getting rich by using AI, but in doing so, they are also making others poor, because one person's wealth is built on another's poverty.



23/25
@lolomovia
When one person has more resources, others will have less, you know, resources are fixed!!



24/25
@karlkyrkland
ChatGPT is currently getting a 2 out of 5 in AP English. There have been no real improvements since ChatGPT 3.5. It does not even grasp the fundamentals of writing. How would AI be able to compete with Nobel Prize winning authors in 2 years? It can’t even compete with teenagers.



25/25
@wdonno
So why did anthropic senior executive speak at the heritage foundation sponsored conference last month in San Francisco: Reboot 2024?
It was complete with an all male panel discussing on ProNatalism, ‘encouraging’ women to have babies.









1/7
@rohanpaul_ai
Anthropic CEO, Dario Amodei's latest piece on the future of AI - With Google's NotebookLM

[Quoted tweet]
Fantastic piece from Anthropic CEO.

Dario Amodei, believes powerful AI could come as early as 2026

And this is his definition of "Powerful AI"




https://video.twimg.com/amplify_video/1844897701101514754/vid/avc1/1080x1080/tF7za3Az2pqPzvZr.mp4

2/7
@retro_visionary
Are those voices ai-generated?



3/7
@rohanpaul_ai
yep



4/7
@GM0x_8e08
🔥🔥



5/7
@UristaTim
Nice. 🔥🔥



6/7
@gpt_biz
This is a great read if you're interested in the future of AI and its potential impact on our daily lives



7/7
@RomanP918791
NotebookLM is new Joe Rogan 🤣




 

Rick Fox at UNC

All Star
Bushed
Joined
Dec 2, 2015
Messages
4,207
Reputation
1,014
Daps
10,040
Reppin
Pretty Boys
it works? well, only Nvidia is making money off of AI currently

Picks and shovels.

Most of this essentially boils down to large distributed systems running algorithms based on linear algebra and graph theory (which is itself often modeled using linear algebra).

Nvidia benefits because GPUs are exceedingly efficient at solving linear equations, at least when compared to general-purpose CPUs.
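A minimal sketch of the workload in question (plain NumPy on CPU here; on a GPU the same dense matmul is dispatched to massively parallel kernels, which is where the efficiency gap comes from):

```python
import time

import numpy as np

# The core workload behind most deep learning: dense matrix multiplication.
# Each output element is a dot product of length n, so a full n x n matmul
# costs roughly 2*n^3 floating-point operations.
n = 512
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

start = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - start

flops = 2 * n ** 3
print(f"{flops / elapsed / 1e9:.1f} GFLOP/s on CPU for a {n}x{n} matmul")
```

The same `a @ b` call, routed through a GPU framework, runs the identical linear-algebra operation across thousands of cores, which is the efficiency gap Nvidia captures.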

Major cloud providers such as AWS, GCP, and Azure benefit because they have become experts in designing multi-tenant "large-scale," "hyper-scale," or "massive-scale" systems. Much of this expertise preceded AI, more a result of cloud computing, but lends itself well to the space.

Network and custom silicon companies such as Arista, Broadcom, Cisco, Juniper, Nokia, and TSMC all benefit.

Broadcom chips (Tomahawk, Trident, Jericho), for example, are often at the core of network devices used to connect all of this compute and storage to each other and to the outside world. The latest chips are capable of 400/800 Gbps at the port level, and because there are novel ways to connect these ports to each other, systems are capable of many-Tbps traffic flows between data centers, cities, regions, and even over subsea cable.
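Those port-level numbers compound quickly. As a rough illustration (64 ports at 800 Gbps is one real-world configuration class for current Tomahawk-generation ASICs; exact port counts vary by chip):

```python
# Illustrative aggregate-capacity arithmetic for a modern switch ASIC.
ports = 64
port_speed_gbps = 800
aggregate_tbps = ports * port_speed_gbps / 1000  # Gbps -> Tbps
print(f"{aggregate_tbps:.1f} Tbps aggregate switching capacity")  # prints "51.2 Tbps aggregate switching capacity"
```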

TSMC is well known.

Arista, Cisco, Juniper, and Nokia benefit because they are the manufacturers of the aforementioned networking devices.

Data center providers such as Equinix and CoreSite benefit to a degree, though quite a few large cloud providers are choosing to host compute, network, and storage resources in their own custom-built data centers.
 