Sam Altman claims “deep learning worked”, superintelligence may be “a few thousand days” away, and “astounding triumphs” will incrementally become ...

Dorian Breh

Veteran
Joined
Jan 14, 2016
Messages
22,373
Reputation
13,653
Daps
111,981
 

mitter

All Star
Joined
Jan 24, 2013
Messages
3,577
Reputation
7
Daps
9,819
Reppin
NULL
Imagine AI figuring out how to deal with climate change. How to master nuclear fusion for unimaginable amounts of relatively clean energy. Making custom vaccines for cancer patients. How to regrow or extend telomeres or other biological techniques so we could live forever.

AI cannot do any of those things.
 

Fresh

SOHH Vet
Joined
May 2, 2013
Messages
9,146
Reputation
5,486
Daps
21,895

The Intelligence Age


September 23, 2024

[Header image: a vibrant, impressionistic landscape of a winding path stretching toward the horizon, lined with colorful fields]


In the next couple of decades, we will be able to do things that would have seemed like magic to our grandparents.

This phenomenon is not new, but it will be newly accelerated. People have become dramatically more capable over time; we can already accomplish things now that our predecessors would have believed to be impossible.

We are more capable not because of genetic change, but because we benefit from the infrastructure of society being way smarter and more capable than any one of us; in an important sense, society itself is a form of advanced intelligence. Our grandparents – and the generations that came before them – built and achieved great things. They contributed to the scaffolding of human progress that we all benefit from. AI will give people tools to solve hard problems and help us add new struts to that scaffolding that we couldn’t have figured out on our own. The story of progress will continue, and our children will be able to do things we can’t.

It won’t happen all at once, but we’ll soon be able to work with AI that helps us accomplish much more than we ever could without AI; eventually we can each have a personal AI team, full of virtual experts in different areas, working together to create almost anything we can imagine. Our children will have virtual tutors who can provide personalized instruction in any subject, in any language, and at whatever pace they need. We can imagine similar ideas for better healthcare, the ability to create any kind of software someone can imagine, and much more.

With these new abilities, we can have shared prosperity to a degree that seems unimaginable today; in the future, everyone’s lives can be better than anyone’s life is now. Prosperity alone doesn’t necessarily make people happy – there are plenty of miserable rich people – but it would meaningfully improve the lives of people around the world.

Here is one narrow way to look at human history: after thousands of years of compounding scientific discovery and technological progress, we have figured out how to melt sand, add some impurities, arrange it with astonishing precision at extraordinarily tiny scale into computer chips, run energy through it, and end up with systems capable of creating increasingly capable artificial intelligence.

This may turn out to be the most consequential fact about all of history so far. It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there.

How did we get to the doorstep of the next leap in prosperity?

In three words: deep learning worked.

In 15 words: deep learning worked, got predictably better with scale, and we dedicated increasing resources to it.

That’s really it; humanity discovered an algorithm that could really, truly learn any distribution of data (or really, the underlying “rules” that produce any distribution of data). To a shocking degree of precision, the more compute and data available, the better it gets at helping people solve hard problems. I find that no matter how much time I spend thinking about this, I can never really internalize how consequential it is.
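(A note on "got predictably better with scale": this regularity is usually modeled in the scaling-laws literature as a power law. The Chinchilla-style form below is one common version, added here as a gloss rather than anything from the essay itself:

L(N, D) ≈ E + A/N^α + B/D^β

where L is the model's loss, N its parameter count, D the number of training tokens, and E, A, B, α and β constants fitted to experiments. Loss falls smoothly and predictably as more compute buys larger N and D, which is what makes "better with scale" a quantifiable claim rather than a slogan.)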

There are a lot of details we still have to figure out, but it’s a mistake to get distracted by any particular challenge. Deep learning works, and we will solve the remaining problems. We can say a lot of things about what may happen next, but the main one is that AI is going to get better with scale, and that will lead to meaningful improvements to the lives of people around the world.

AI models will soon serve as autonomous personal assistants that carry out specific tasks on our behalf, like coordinating medical care. At some point further down the road, AI systems are going to get so good that they help us make better next-generation systems and make scientific progress across the board.

Technology brought us from the Stone Age to the Agricultural Age and then to the Industrial Age. From here, the path to the Intelligence Age is paved with compute, energy, and human will.

If we want to put AI into the hands of as many people as possible, we need to drive down the cost of compute and make it abundant (which requires lots of energy and chips). If we don’t build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people.

We need to act wisely but with conviction. The dawn of the Intelligence Age is a momentous development with very complex and extremely high-stakes challenges. It will not be an entirely positive story, but the upside is so tremendous that we owe it to ourselves, and the future, to figure out how to navigate the risks in front of us.

I believe the future is going to be so bright that no one can do it justice by trying to write about it now; a defining characteristic of the Intelligence Age will be massive prosperity.

Although it will happen incrementally, astounding triumphs – fixing the climate, establishing a space colony, and the discovery of all of physics – will eventually become commonplace. With nearly-limitless intelligence and abundant energy – the ability to generate great ideas, and the ability to make them happen – we can do quite a lot.

As we have seen with other technologies, there will also be downsides, and we need to start working now to maximize AI’s benefits while minimizing its harms. As one example, we expect that this technology can cause a significant change in labor markets (good and bad) in the coming years, but most jobs will change more slowly than most people think, and I have no fear that we’ll run out of things to do (even if they don’t look like “real jobs” to us today). People have an innate desire to create and to be useful to each other, and AI will allow us to amplify our own abilities like never before. As a society, we will be back in an expanding world, and we can again focus on playing positive-sum games.

Many of the jobs we do today would have looked like trifling wastes of time to people a few hundred years ago, but nobody is looking back at the past, wishing they were a lamplighter. If a lamplighter could see the world today, he would think the prosperity all around him was unimaginable. And if we could fast-forward a hundred years from today, the prosperity all around us would feel just as unimaginable.

not gon lie, I didn't read the ENTIRE article, but I'm smart and got the basic premise.

But what if this superintelligent AI views us as a threat, or views some humans as threats, or views all of humanity as a threat?

You gotta understand, I'm not a young dude anymore. I grew up on Terminator 1 and 2, both were classics btw, but it's all good until Skynet becomes self-aware and starts tripping, lol.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
59,720
Reputation
8,867
Daps
165,020







1/11
@victor207755822
Unbelievable results, feels like a dream—our R1 model is now #1 in the world (with style control)! 🌍🏆 Beyond words right now. 🤯 All I know is we keep pushing forward to make open-source AGI a reality for everyone. 🚀✨ #OpenSource #AI #AGI #DeepSeekR1

[Quoted tweet]
Breaking News: DeepSeek-R1 surges to the top-3 in Arena🐳!

Now ranked #3 Overall, matching the top reasoning model, o1, while being 20x cheaper and open-weight!

Highlights:
- #1 in technical domains: Hard Prompts, Coding, Math
- Joint #1 under Style Control
- MIT-licensed

A massive congrats to @deepseek_ai for this incredible milestone and gift to the community! More analysis below 👇




2/11
@_TobiasLee
Amazing!!!



3/11
@victor207755822
thank you bro!



4/11
@teortaxesTex
You all are my heroes. Like, comic book tier heroes. Take care.



5/11
@victor207755822
Grateful for your unwavering support since day one! You've been our biggest cheerleader since our very first model. 🙌 #ThankYou #Support



6/11
@NaturallyDragon
It deserves all the accolades it gets! Amazing work!



7/11
@mmaunder
Benchmarks are fun, but this kind of outright censorship is incredibly problematic long term for DeepSeek. I've included OpenAI's o1 response to the exact same questions. o1 also addresses the moral problems around the US causing mass civilian casualties.



[Attached: screenshots of DeepSeek-R1's and o1's responses to the same questions]


8/11
@Simply_sunil41
Impressive



9/11
@christiancooper
I cannot believe what I can get this model to do. I can get (nearly) one shot animation of any complex physics or math concept I can imagine.

Racing to fine tune the last few steps now…

Congrats!

[Quoted tweet]
Some text spacing issues but a nice intro for an undergrad complete with matching scene notes

All from #R1

This was not a one shot, a few framing issues I had to fix.

Calculated and rendered in under 5 minutes. It's getting faster 🤔

raw.githubusercontent.com/Ha…


https://video.twimg.com/ext_tw_video/1882708260366770176/pu/vid/avc1/1920x1080/Vwolza2kMxbTPWRW.mp4

10/11
@sanjaykalra
Wow - that's really impressive.



11/11
@rudiranck
Great job 👊




 

bnew

Veteran
Joined
Nov 1, 2015
Messages
59,720
Reputation
8,867
Daps
165,020



Anthropic chief says AI could surpass “almost all humans at almost everything” shortly after 2027


Amodei: "I think progress really is as fast as people think it is."

Benj Edwards – Jan 22, 2025 3:33 PM



A photo of co-founder and CEO of Anthropic, Dario Amodei, dated May 22, 2024. Credit: Chesnot via Getty Images

On Tuesday, Anthropic CEO Dario Amodei predicted that AI models may surpass human capabilities "in almost everything" within two to three years, according to a Wall Street Journal interview at the World Economic Forum in Davos, Switzerland.

Speaking at Journal House in Davos, Amodei said, "I don't know exactly when it'll come, I don't know if it'll be 2027. I think it's plausible it could be longer than that. I don't think it will be a whole bunch longer than that when AI systems are better than humans at almost everything. Better than almost all humans at almost everything. And then eventually better than all humans at everything, even robotics."

Amodei co-founded Anthropic in 2021 with his sister, Daniela Amodei, and five other former OpenAI employees. Not long after, Anthropic emerged as a strong technological competitor to OpenAI's products, such as GPT-4 and ChatGPT. Most recently, its Claude 3.5 Sonnet model has remained highly regarded among some AI users and highly ranked on AI benchmarks.

During the WSJ interview, Amodei also spoke about the potential implications of highly intelligent AI systems once those models can control advanced robotics.

"[If] we make good enough AI systems, they'll enable us to make better robots. And so when that happens, we will need to have a conversation... at places like this event, about how do we organize our economy, right? How do humans find meaning?"

He then shared his concerns about how human-level AI models and robotics that are capable of replacing all human labor may require a complete re-think of how humans value both labor and themselves.

"We've recognized that we've reached the point as a technological civilization where the idea, there's huge abundance and huge economic value, but the idea that the way to distribute that value is for humans to produce economic labor, and this is where they feel their sense of self worth," he added. "Once that idea gets invalidated, we're all going to have to sit down and figure it out."

The eye-catching comments, similar to comments about AGI made recently by OpenAI CEO Sam Altman, come as Anthropic negotiates a $2 billion funding round that would value the company at $60 billion. Amodei disclosed that Anthropic's revenue multiplied tenfold in 2024.



Amodei distances himself from “AGI” term


Even with his dramatic predictions, Amodei distanced himself from “artificial general intelligence” (AGI), the term Altman favors for this advanced, labor-replacing AI, calling it a marketing term in a separate CNBC interview at the same event in Switzerland.

Instead, he prefers to describe future AI systems as a "country of geniuses in a data center," he told CNBC. Amodei wrote in an October 2024 essay that such systems would need to be "smarter than a Nobel Prize winner across most relevant fields."

On Monday, Google announced an additional $1 billion investment in Anthropic, bringing its total commitment to $3 billion. This follows Amazon's $8 billion investment over the past 18 months. Amazon plans to integrate Claude models into future versions of its Alexa speaker.
 

papa pimp

All Star
Joined
Mar 11, 2022
Messages
4,459
Reputation
454
Daps
10,659
the jostling at the top for best LLM will continue for a while, but everyone who wants a better mankind should hope open source wins out vs. someone like Altman, who is on record saying some very questionable stuff about society.
 

IIVI

Superstar
Joined
Mar 11, 2022
Messages
12,374
Reputation
2,944
Daps
43,106
Reppin
Los Angeles
A few thousand days away gives it a good 10 years or so out. I’ll pretty much be done working by then anyways.
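(Checking that conversion: 3,000 days ÷ 365 ≈ 8.2 years and 4,000 days ÷ 365 ≈ 11 years, so "a few thousand days" does land roughly a decade out.)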
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
59,720
Reputation
8,867
Daps
165,020

The open letter was signed by AI experts, practitioners and thinkers including Sir Stephen Fry. Photograph: Theophile Bloudanis/AFP/Getty Images



AI systems could be ‘caused to suffer’ if consciousness achieved, says research


Experts and thinkers signed open letter expressing concern over irresponsible development of technology

Dan Milmo Global technology editor

Mon 3 Feb 2025 15.07 EST

Artificial intelligence systems capable of feelings or self-awareness are at risk of being harmed if the technology is developed irresponsibly, according to an open letter signed by AI practitioners and thinkers including Sir Stephen Fry.

More than 100 experts have put forward five principles for conducting responsible research into AI consciousness, as rapid advances raise concerns that such systems could be considered sentient.

The principles include prioritising research on understanding and assessing consciousness in AIs, in order to prevent “mistreatment and suffering”.

The other principles are: setting constraints on developing conscious AI systems; taking a phased approach to developing such systems; sharing findings with the public; and refraining from making misleading or overconfident statements about creating conscious AI.

The letter’s signatories include academics such as Sir Anthony Finkelstein at the University of London and AI professionals at companies including Amazon and the advertising group WPP.

It has been published alongside a research paper that outlines the principles. The paper argues that conscious AI systems could be built in the near future – or at least ones that give the impression of being conscious.

“It may be the case that large numbers of conscious systems could be created and caused to suffer,” the researchers say, adding that if powerful AI systems were able to reproduce themselves it could lead to the creation of “large numbers of new beings deserving moral consideration”.

The paper, written by Oxford University’s Patrick Butlin and Theodoros Lappas of the Athens University of Economics and Business, adds that even companies not intending to create conscious systems will need guidelines in case of “inadvertently creating conscious entities”.

It acknowledges that there is widespread uncertainty and disagreement over defining consciousness in AI systems and whether it is even possible, but says it is an issue that “we must not ignore”.

Other questions raised by the paper focus on what to do with an AI system if it is defined as a “moral patient” – an entity that matters morally “in its own right, for its own sake”. In that scenario, it asks whether destroying the AI would be comparable to killing an animal.

The paper, published in the Journal of Artificial Intelligence Research, also warned that a mistaken belief that AI systems are already conscious could lead to a waste of political energy as misguided efforts are made to promote their welfare.

The paper and letter were organised by Conscium, a research organisation part-funded by WPP and co-founded by WPP’s chief AI officer, Daniel Hulme.

Last year a group of senior academics argued there was a “realistic possibility” that some AI systems will be conscious and “morally significant” by 2035.

In 2023, Sir Demis Hassabis, the head of Google’s AI programme and a Nobel prize winner, said AI systems were “definitely” not sentient currently but could be in the future.

“Philosophers haven’t really settled on a definition of consciousness yet but if we mean sort of self-awareness, these kinds of things, I think there’s a possibility AI one day could be,” he said in an interview with US broadcaster CBS.
 