REVEALED: OpenAI Staff Warn "The progress made on Project Q* has the potential to endanger humanity" (REUTERS)

OnlyOneBoss
Superstar · Joined Dec 18, 2016 · Messages 3,150 · Reputation 267 · Daps 14,331
I know it's human nature, but still hilarious to see people post, "Man, lots of people are warning about the same thing, must be false!"


And it isn't even always just those two things. My #1 worry about AI is that it puts a battery in the back of dictators and fascists and makes their job far easier compared to those who have democratic support.





And just like every world-changing tech that came before it, it will be used for bad, but also for good. At the end of the day, technology is just an enhancement of human life and will be used however its makers see fit.


The invention of combustion engines brought us tanks, but it also brought us ambulances and food trucks


Human flight brought us Apache helicopters, but it also brought us rescue helicopters and FedEx Air

The internet has shown us some of the dumbest people ever and brought violence and hate out of people. But it also made it possible to search through data at scale and find sites hosting images of abused children so the people behind them can be tracked down.


AI is gonna do whatever people want it to do :manny: Of course evil people will get their hands on it. But there's also someone out there who genuinely wants to do something good with it, and will. It is what it is, man; death is a 100% certainty.


People want technological innovation and "the future," but when it's staring them right in the face they get scared
 

bnew
Veteran · Joined Nov 1, 2015 · Messages 56,130 · Reputation 8,239 · Daps 157,831






1/11
@RichardMCNgo
One reason I don’t spend much time debating AI accelerationists: few of them take superintelligence seriously. So most of them will become more cautious as AI capabilities advance - especially once it’s easy to picture AIs with many superhuman skills following long-term plans.



2/11
@RichardMCNgo
It’s difficult to look at an entity far more powerful than you and not be wary. You’d need a kind of self-sacrificing “I identify with the machines over humanity” mindset that even dedicated transhumanists lack (since many of them became alignment researchers).



3/11
@RichardMCNgo
Unfortunately the battle lines might become so rigid that it’s hard for people to back down. So IMO alignment people should be thinking less about “how can we argue with accelerationists?” and more about “how can we make it easy for them to help once they change their minds?”



4/11
@RichardMCNgo
For instance:

[Quoted tweet]
ASI is a fairy tale.


5/11
@atroyn
at the risk of falling into the obvious trap here, i think this deeply mis-characterizes most objections to the standard safety position. specifically, what you call not taking super-intelligence seriously, is mostly a refusal to accept a premise which is begging the question.



6/11
@RichardMCNgo
IMO the most productive version of accelerationism would generate an alternative conception of superintelligence. I think it’s possible but hasn’t been done well yet; and when accelerationists aren’t trying to do so, “not taking superintelligence seriously” is a fair description.



7/11
@BotTachikoma
e/acc treats AI as a tool, and so just like any other tool it is the human user that is responsible for how it's used. they don't seem to think fully-autonomous, agentic AI is anywhere near.



8/11
@teortaxesTex
And on the other hand, I think that as perceived and understandable control over AI improves, with clear promise of carrying over to ASI, the concern of mundane power concentration will become more salient to people who currently dismiss it as small-minded ape fear.



9/11
@psychosort
I come at this from both ends

On one hand people underestimate the economic interoperability of advanced AI and people. That will be an enormous economic/social shock not yet priced in.



10/11
@summeroff
From our perspective, it seems like the opposite team isn't taking superintelligence seriously, with all those doom scenarios where a superintelligence very efficiently does something stupid.



11/11
@norabelrose
This isn't really my experience at all. Many accelerationists say stuff like "build the sand god" and in order to make the radically transformed world they want, they'll likely need ASI.




To post tweets in this format, more info here: https://www.thecoli.com/threads/tips-and-tricks-for-posting-the-coli-megathread.984734/post-52211196

 

bnew
Veteran · Joined Nov 1, 2015 · Messages 56,130 · Reputation 8,239 · Daps 157,831


1/5
@sporadicalia
“this isn’t a one-off improvement – it’s a new scaling paradigm and we’re just getting started.”

September 12, 2024 — the day the growth curve turned skyward to the Singularity and the intelligence explosion truly began.

[Quoted tweet]
Our o1-preview and o1-mini models are available immediately. We’re also sharing evals for our (still unfinalized) o1 model to show the world that this isn’t a one-off improvement – it’s a new scaling paradigm and we’re just getting started. 2/9


2/5
@BasedNorthmathr
Is it actually tho. I’m seeing a lot of funny dunks



3/5
@sporadicalia
i think a lot of people are using it wrong

asking it basic questions where it doesn’t *need* to think, comes up with some pointless results, but of course it does



4/5
@michaeltastad
It looks to me like many AI model providers have focused on scaling the training, and there’s a lot of alpha in scaling the inference.

Chips on the front end and chips on the back end.

We need a lot more power and a lot more chips



5/5
@r3muxd
never post again





 

bnew
Veteran · Joined Nov 1, 2015 · Messages 56,130 · Reputation 8,239 · Daps 157,831






1/15
@dmdohan
🍓 is ripe and ready to think, fast and slow: check out OpenAI o1, trained to reason before answering

I joined OpenAI to push the boundaries of science & reasoning with AI. Happy to share that this result of the team's amazing collaboration does just that

Try it on your hardest problems

[Quoted tweet]
We're releasing a preview of OpenAI o1—a new series of AI models designed to spend more time thinking before they respond.

These models can reason through complex tasks and solve harder problems than previous models in science, coding, and math. openai.com/index/introducing…


2/15
@dmdohan
o1 ranks among the top 500 students on the AIME -> it would qualify for the USA Math Olympiad

Coding @ the IOI, a variant scores at the median among contestants, and oracle selection over 10,000 samples per problem would receive a gold medal

On GPQA it achieves 78%, compared to 70% for PhDs



3/15
@dmdohan
We've entered a new paradigm which allows scaling test-time compute alongside train-time compute, so the model can spend more time and achieve better results.

Check out the research blog with details: https://openai.com/index/learning-to-reason-with-llms/
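For anyone wondering what "scaling test-time compute" can look like in practice, here is a minimal sketch. It is not OpenAI's actual o1 method (which they haven't published); it just shows the simplest version of the idea: sample the same model several times and majority-vote the final answers, so more samples means more inference compute and, usually, a better answer. The model name, prompt format, and sample count are placeholders.

```python
# Minimal "spend more test-time compute" sketch: best-of-k sampling with a
# majority vote over final answers (self-consistency). Not how o1 works
# internally; model name and prompt format are placeholders.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer_with_more_compute(question: str, k: int = 8, model: str = "gpt-4o-mini") -> str:
    resp = client.chat.completions.create(
        model=model,
        n=k,               # k independent samples = k times the inference compute
        temperature=1.0,   # diversity so the samples don't all agree trivially
        messages=[{
            "role": "user",
            "content": f"{question}\nThink step by step, then give only the final "
                       "answer on the last line, prefixed with 'ANSWER:'.",
        }],
    )
    finals = []
    for choice in resp.choices:
        text = choice.message.content or ""
        lines = [l for l in text.splitlines() if l.strip().upper().startswith("ANSWER:")]
        if lines:
            finals.append(lines[-1].split(":", 1)[1].strip())
    # the answer most samples agree on wins; raising k trades compute for accuracy
    return Counter(finals).most_common(1)[0][0] if finals else ""
```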



4/15
@dmdohan
Practically, thinking in plain language opens up a ton of possibilities.

On safety & alignment, the model is more reliable because it can reason about policies and available choices before responding, and we are able to inspect its thinking and look for why something happened



5/15
@dmdohan
It's important to emphasize that this is a huge leap /and/ we're still at the start

Give o1-preview a try, we think you'll like it.

And in a month, give o1 a try and see all the ways it has improved in such a short time

And expect that to keep happening



6/15
@dmdohan
Also want to point out o1-mini, which is incredible at coding tasks while being /fast/

It and o1 are the first generation of a new type of model.

[Quoted tweet]
As part of today, we're also releasing o1-mini. This is an incredibly smart, small model that can also reason before it answers. o1-mini allows us at @OpenAI to make high intelligence widely accessible.

openai.com/index/openai-o1-m…

On the AIME benchmark, o1-mini re-defines the intelligence + cost frontier (see if you can spot the old GPT-4o model in the bottom 🙂).

Massive congrats to the team and especially @ren_hongyu and @shengjia_zhao for leading this!


7/15
@akushaidesu
o1 by next month??
@kimmonismus



8/15
@ferdousbhai
How would it improve in a month? Is there a continuous RL in the pipeline?



9/15
@axpny
@readwise save



10/15
@llennchan2003
I didn't find any advantage of o1 over Sonnet... it's on par at best



11/15
@wyqtor
I would if I had at least 10 daily messages



12/15
@natsothanaphan
I hope this keeps working! On the cautious side, transformers do have fundamental limitations, such as not being able to natively count or keep state, so they will continue to be fragile on certain tasks. Realistically, this means directly continuing this approach will run into problems at some point. I hope you can get past them.



13/15
@KarmaLikeWater
Not a magic leap? 😲



14/15
@adhil_parammel
Orion in october 🥰



15/15
@kai_mordo







 

bnew
Veteran · Joined Nov 1, 2015 · Messages 56,130 · Reputation 8,239 · Daps 157,831



1/3
We're publishing a temporary dashboard to help users understand o1's reasoning tokens:



2/3
Here you can see that:

- o1 usually uses more tokens for reasoning compared to completion (edited the graph to clarify)

- the median number of reasoning tokens is relatively constant regardless of prompt size

https://p.us5.datadoghq.com/sb/7dc1...9949362&to_ts=1726356349362&live=true



3/3
They actually are larger than the final tokens usually. The screenshot showed completion inclusive of reasoning, which was confusing - just updated the dashboard to separate the two.
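If you want to check this split per request rather than on a dashboard, the API reports it in the usage object. A minimal sketch follows; the field names (usage.completion_tokens_details.reasoning_tokens) reflect the OpenAI API around the o1-preview launch and should be treated as an assumption to verify against the current API reference.

```python
# Minimal sketch: read the reasoning vs. visible-output token split for one
# o1 request. Field names are as of the o1-preview launch; verify them against
# the current OpenAI API reference before relying on them.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="o1-preview",
    messages=[{"role": "user", "content": "How many primes are there below 100?"}],
)

usage = resp.usage
reasoning = usage.completion_tokens_details.reasoning_tokens
visible = usage.completion_tokens - reasoning  # tokens that actually appear in the reply
print(f"prompt={usage.prompt_tokens}  reasoning={reasoning}  visible={visible}")
```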




 

bnew
Veteran · Joined Nov 1, 2015 · Messages 56,130 · Reputation 8,239 · Daps 157,831




1/8
@thealexbanks
We’re at the brink of a new era in AI.

I just watched Sam Altman's recent discussion about OpenAI's o1 model:

I've been reflecting on how AI reasoning is still in its early stages—comparable to GPT-2.

But the acceleration curve is incredibly steep.

Altman expects reasoning to get A LOT better over the coming months as we move from o1-preview to o1.

This will enable a serious step forward.

OpenAI talks about 5 stages of artificial intelligence:

Level 1 — Chatbots, AI with conversational language
Level 2 — Reasoners, human-level problem solving
Level 3 — Agents, systems that can take action
Level 4 — Innovators, AI that can aid in invention
Level 5 — Organisations, AI that can do the work of an organisation

Altman says we’ve now reached level 2 (reasoners) with the arrival of their new o1 model.

Level 1 to Level 2 was slow.

Level 2 to Level 3 will be fast.

It's evident that we're at a pivotal moment in AI development.

AI agents are closer than we think.

What are agents?

Autonomous systems that can make decisions and perform actions to achieve specific goals.

All without human intervention.

I'm curious about how significant these upcoming updates will be. Even if we just achieve functional agents, that's a massive leap.

Video credit: @tsarnick
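A minimal sketch of that decide, act, observe loop, with hypothetical stand-ins for the model call and the tools (this is not any OpenAI product or API, just the shape of the idea):

```python
# Minimal agent-loop sketch: the model decides on an action, the system runs
# it, the observation is fed back, and this repeats until the model says done.
# `call_model` and the tools are hypothetical stubs, not a real API.

TOOLS = {
    "calculator": lambda expr: str(eval(expr)),   # toy only; never eval untrusted input
    "search": lambda query: f"(pretend search results for {query!r})",
}

def call_model(goal: str, history: list[dict]) -> dict:
    # Stand-in for a real model call: do one calculator step, then finish.
    if not history:
        return {"tool": "calculator", "input": "6 * 7"}
    return {"final": f"{goal} -> {history[-1]['observation']}"}

def run_agent(goal: str, max_steps: int = 10) -> str:
    history: list[dict] = []
    for _ in range(max_steps):
        decision = call_model(goal, history)                        # 1. decide
        if "final" in decision:
            return decision["final"]                                # goal reached
        observation = TOOLS[decision["tool"]](decision["input"])    # 2. act
        history.append({"decision": decision, "observation": observation})  # 3. observe
    return "stopped: step limit reached"

print(run_agent("compute 6 times 7"))  # -> "compute 6 times 7 -> 42"
```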



2/8
@thealexbanks
The improvement curve is going to be steep.

It made me think of this image by Tim Urban in his famous 2015 blog post "The AI Revolution: The Road to Superintelligence".

It’s time to accelerate 🫡



3/8
@OranAITech
Massive leap ahead.

Agents change everything.

Actions will need to have a human in the loop



4/8
@thealexbanks
We've got a lot to look forward to, Adi. Especially if Level 2 to Level 3 is significantly quicker, like Altman said



5/8
@tomaldertweets
it's incredible to think we're already at level 2 with reasoners



6/8
@thealexbanks
We've already come so far—and the curve is only steepening



7/8
@Uttupaaji
Excited for the future



8/8
@matt_gray_
The compound effect at its finest.

Thank you for keeping us up to date, Alex!




 

bnew
Veteran · Joined Nov 1, 2015 · Messages 56,130 · Reputation 8,239 · Daps 157,831


The Intelligence Age


September 23, 2024

[Image: A vibrant, impressionistic landscape of a winding path that stretches towards the horizon, lined with colorful fields]


In the next couple of decades, we will be able to do things that would have seemed like magic to our grandparents.

This phenomenon is not new, but it will be newly accelerated. People have become dramatically more capable over time; we can already accomplish things now that our predecessors would have believed to be impossible.

We are more capable not because of genetic change, but because we benefit from the infrastructure of society being way smarter and more capable than any one of us; in an important sense, society itself is a form of advanced intelligence. Our grandparents – and the generations that came before them – built and achieved great things. They contributed to the scaffolding of human progress that we all benefit from. AI will give people tools to solve hard problems and help us add new struts to that scaffolding that we couldn’t have figured out on our own. The story of progress will continue, and our children will be able to do things we can’t.

It won’t happen all at once, but we’ll soon be able to work with AI that helps us accomplish much more than we ever could without AI; eventually we can each have a personal AI team, full of virtual experts in different areas, working together to create almost anything we can imagine. Our children will have virtual tutors who can provide personalized instruction in any subject, in any language, and at whatever pace they need. We can imagine similar ideas for better healthcare, the ability to create any kind of software someone can imagine, and much more.

With these new abilities, we can have shared prosperity to a degree that seems unimaginable today; in the future, everyone’s lives can be better than anyone’s life is now. Prosperity alone doesn’t necessarily make people happy – there are plenty of miserable rich people – but it would meaningfully improve the lives of people around the world.

Here is one narrow way to look at human history: after thousands of years of compounding scientific discovery and technological progress, we have figured out how to melt sand, add some impurities, arrange it with astonishing precision at extraordinarily tiny scale into computer chips, run energy through it, and end up with systems capable of creating increasingly capable artificial intelligence.

This may turn out to be the most consequential fact about all of history so far. It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there.

How did we get to the doorstep of the next leap in prosperity?

In three words: deep learning worked.

In 15 words: deep learning worked, got predictably better with scale, and we dedicated increasing resources to it.

That’s really it; humanity discovered an algorithm that could really, truly learn any distribution of data (or really, the underlying “rules” that produce any distribution of data). To a shocking degree of precision, the more compute and data available, the better it gets at helping people solve hard problems. I find that no matter how much time I spend thinking about this, I can never really internalize how consequential it is.
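"Got predictably better with scale" has a concrete form in the scaling-laws literature (Kaplan et al., 2020): held-out loss falls roughly as a power law in model size, so you can fit the curve on small runs and extrapolate it. Here is a minimal sketch with synthetic, purely illustrative numbers; the constants are only ballpark figures from that paper, not anything claimed in this essay.

```python
# Illustrative scaling-law fit: L(N) ≈ (Nc / N)^alpha, a straight line in
# log-log space. Data points are synthetic; constants are ballpark values
# from Kaplan et al. (2020), used here only to generate the example.
import numpy as np

def power_law(n, nc, alpha):
    return (nc / n) ** alpha

n_params = np.array([1e7, 1e8, 1e9, 1e10, 1e11])                 # model sizes
loss = power_law(n_params, nc=8.8e13, alpha=0.076)               # idealized losses
loss *= np.random.default_rng(0).normal(1.0, 0.01, loss.shape)   # a little noise

# log L = -alpha * log N + alpha * log Nc  ->  ordinary straight-line fit
slope, intercept = np.polyfit(np.log(n_params), np.log(loss), 1)
alpha_fit = -slope
nc_fit = np.exp(intercept / alpha_fit)

print(f"fitted alpha ≈ {alpha_fit:.3f}")
# the "predictable" part: extrapolate the fitted line to a 10x bigger model
print(f"predicted loss at 1e12 params ≈ {power_law(1e12, nc_fit, alpha_fit):.3f}")
```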

There are a lot of details we still have to figure out, but it’s a mistake to get distracted by any particular challenge. Deep learning works, and we will solve the remaining problems. We can say a lot of things about what may happen next, but the main one is that AI is going to get better with scale, and that will lead to meaningful improvements to the lives of people around the world.

AI models will soon serve as autonomous personal assistants that carry out specific tasks on our behalf, like coordinating medical care. At some point further down the road, AI systems are going to get so good that they help us make better next-generation systems and make scientific progress across the board.

Technology brought us from the Stone Age to the Agricultural Age and then to the Industrial Age. From here, the path to the Intelligence Age is paved with compute, energy, and human will.

If we want to put AI into the hands of as many people as possible, we need to drive down the cost of compute and make it abundant (which requires lots of energy and chips). If we don’t build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people.

We need to act wisely but with conviction. The dawn of the Intelligence Age is a momentous development with very complex and extremely high-stakes challenges. It will not be an entirely positive story, but the upside is so tremendous that we owe it to ourselves, and the future, to figure out how to navigate the risks in front of us.

I believe the future is going to be so bright that no one can do it justice by trying to write about it now; a defining characteristic of the Intelligence Age will be massive prosperity.

Although it will happen incrementally, astounding triumphs – fixing the climate, establishing a space colony, and the discovery of all of physics – will eventually become commonplace. With nearly-limitless intelligence and abundant energy – the ability to generate great ideas, and the ability to make them happen – we can do quite a lot.

As we have seen with other technologies, there will also be downsides, and we need to start working now to maximize AI’s benefits while minimizing its harms. As one example, we expect that this technology can cause a significant change in labor markets (good and bad) in the coming years, but most jobs will change more slowly than most people think, and I have no fear that we’ll run out of things to do (even if they don’t look like “real jobs” to us today). People have an innate desire to create and to be useful to each other, and AI will allow us to amplify our own abilities like never before. As a society, we will be back in an expanding world, and we can again focus on playing positive-sum games.

Many of the jobs we do today would have looked like trifling wastes of time to people a few hundred years ago, but nobody is looking back at the past, wishing they were a lamplighter. If a lamplighter could see the world today, he would think the prosperity all around him was unimaginable. And if we could fast-forward a hundred years from today, the prosperity all around us would feel just as unimaginable.
 

bnew
Veteran · Joined Nov 1, 2015 · Messages 56,130 · Reputation 8,239 · Daps 157,831

1/21
@miramurati
I shared the following note with the OpenAI team today.



2/21
@esatoshiclub
What did Mira see?



3/21
@SolCardCC
She saw what Ilya saw 😭



4/21
@sama
I replied with this. Mira, thank you for everything.

It’s hard to overstate how much Mira has meant to OpenAI, our mission, and to us all personally.

I feel tremendous gratitude towards her for what she has helped us build and accomplish, but I most of all feel personal gratitude towards her for the support and love during all the hard times. I am excited for what she’ll do next.

We’ll say more about the transition plans soon, but for now, I want to take a moment to just feel thanks.

Sam



5/21
@Yampeleg
what did you see?



6/21
@IterIntellectus
WHAT IS HAPPENING?!



7/21
@vittoriomta
Sam is one step away from becoming a one-man show.



8/21
@davmander
Something is wrong at ClosedAI



9/21
@BillyM2k
mfw hearing this news



10/21
@yuvalsagron
Ok something fishy must be happening there



11/21
@OfficialLoganK
Onward 🚀



12/21
@LeeLeepenkman
strong follow



13/21
@BrentLynch
Folks, you heard it here first @miramurati was also tired of waiting for Sora and is leaving @OpenAI to make AI Cat Videos!

@Hailuo_AI Minimax is a good option Mira, I know a guy!

@sama you know what you need to do, let Mira make her AI Cat videos with Sora for fiscal quarter end.

@TheoMediaAI



14/21
@Yogeshjangir_
I think something amazing will happen there.



15/21
@icreatelife
Thank you for all you did for the AI field and may your next chapter be a great one!



16/21
@QStarETH
This one actually makes sense



17/21
@QStarETH
AGI replacing jobs in house



18/21
@JeffDean
Best of luck in all that you do next, Mira!



19/21
@iamgingertrash
Good move. Only roon is left as Sam’s keep ..



20/21
@gh0stpen
y'all are seeing this too right?



21/21
@xprunie
how times have changed




 