REVEALED: OpenAI Staff Warn "The progress made on Project Q* has the potential to endanger humanity" (REUTERS)

OnlyOneBoss

Superstar
Joined
Dec 18, 2016
Messages
2,925
Reputation
201
Daps
13,283
I know it's human nature, but still hilarious to see people post, "Man, lots of people are warning about the same thing, must be false!"


And it isn't even always just those two things. My #1 worry about AI is that it puts a battery in the back of dictators and fascists and makes their job far easier compared to those who have democratic support.





And just like every world-changing tech that has come before it, it will be used for bad, but also for good. At the end of the day, technology is just an enhancement of human life and will be used however the makers of it see fit


The invention of combustion engines brought us tanks, but it also brought us ambulances and food trucks


Human flight brought us Apache helicopters, but it also brought us rescue helicopters and FedEx Air

The internet has shown us some of the dumbest people ever, and brought violence and hate out of people. But it also made it possible to do deep searches, comb through data, and find sites hosting pics of children so we can track the people behind them down.


AI is gonna do whatever people want it to do :manny: Of course evil people will get their hands on it. But there’s also someone out there who genuinely wants to do something good with it, and will. It is what it is, man. Death is a 100% certainty.


People want technological innovation and “the future,” but when it’s staring them right in the face they get scared
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,871
Reputation
7,926
Daps
148,940






1/11
@RichardMCNgo
One reason I don’t spend much time debating AI accelerationists: few of them take superintelligence seriously. So most of them will become more cautious as AI capabilities advance - especially once it’s easy to picture AIs with many superhuman skills following long-term plans.



2/11
@RichardMCNgo
It’s difficult to look at an entity far more powerful than you and not be wary. You’d need a kind of self-sacrificing “I identify with the machines over humanity” mindset that even dedicated transhumanists lack (since many of them became alignment researchers).



3/11
@RichardMCNgo
Unfortunately the battle lines might become so rigid that it’s hard for people to back down. So IMO alignment people should be thinking less about “how can we argue with accelerationists?” and more about “how can we make it easy for them to help once they change their minds?”



4/11
@RichardMCNgo
For instance:

[Quoted tweet]
ASI is a fairy tale.


5/11
@atroyn
at the risk of falling into the obvious trap here, i think this deeply mis-characterizes most objections to the standard safety position. specifically, what you call not taking super-intelligence seriously, is mostly a refusal to accept a premise which is begging the question.



6/11
@RichardMCNgo
IMO the most productive version of accelerationism would generate an alternative conception of superintelligence. I think it’s possible but hasn’t been done well yet; and when accelerationists aren’t trying to do so, “not taking superintelligence seriously” is a fair description.



7/11
@BotTachikoma
e/acc treats AI as a tool, and so just like any other tool it is the human user that is responsible for how it's used. they don't seem to think fully-autonomous, agentic AI is anywhere near.



8/11
@teortaxesTex
And on the other hand, I think that as perceived and understandable control over AI improves, with clear promise of carrying over to ASI, the concern of mundane power concentration will become more salient to people who currently dismiss it as small-minded ape fear.



9/11
@psychosort
I come at this from both ends

On one hand people underestimate the economic interoperability of advanced AI and people. That will be an enormous economic/social shock not yet priced in.



10/11
@summeroff
From our perspective, it seems like the opposite team isn't taking superintelligence seriously, with all those doom scenarios where a superintelligence very efficiently does something stupid.



11/11
@norabelrose
This isn't really my experience at all. Many accelerationists say stuff like "build the sand god" and in order to make the radically transformed world they want, they'll likely need ASI.




To post tweets in this format, more info here: https://www.thecoli.com/threads/tips-and-tricks-for-posting-the-coli-megathread.984734/post-52211196

 

bnew


1/5
@sporadicalia
“this isn’t a one-off improvement – it’s a new scaling paradigm and we’re just getting started.”

September 12, 2024 — the day the growth curve turned skyward to the Singularity and the intelligence explosion truly began.

[Quoted tweet]
Our o1-preview and o1-mini models are available immediately. We’re also sharing evals for our (still unfinalized) o1 model to show the world that this isn’t a one-off improvement – it’s a new scaling paradigm and we’re just getting started. 2/9


2/5
@BasedNorthmathr
Is it actually tho. I’m seeing a lot of funny dunks



3/5
@sporadicalia
i think a lot of people are using it wrong

asking it basic questions where it doesn’t *need* to think, comes up with some pointless results, but of course it does



4/5
@michaeltastad
It looks to me like many AI model providers have focused on scaling the training, and there’s a lot of alpha in scaling the inference.

Chips on the front end and chips on the back end.

We need a lot more power and a lot of chips



5/5
@r3muxd
never post again




 

bnew






1/15
@dmdohan
🍓is ripe and is ready to think, fast and slow: check out OpenAI o1, trained to reason before answering

I joined OpenAI to push the boundaries of science & reasoning with AI. Happy to share this result of the team's amazing collaboration, which does just that

Try it on your hardest problems

[Quoted tweet]
We're releasing a preview of OpenAI o1—a new series of AI models designed to spend more time thinking before they respond.

These models can reason through complex tasks and solve harder problems than previous models in science, coding, and math. openai.com/index/introducing…


2/15
@dmdohan
o1 ranks in the top 500 students for AIME -> would qualify for the USA Math Olympiad

Coding @ the IOI, a variant scores at median among contestants, and an oracle among 10,000 samples per problem would receive a gold medal

On GPQA it achieves 78%, compared to 70% for PhDs



3/15
@dmdohan
We've entered a new paradigm which allows scaling test-time compute alongside train-time compute, so the model can spend more time and achieve better results.

Check out the research blog with details: https://openai.com/index/learning-to-reason-with-llms/
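One intuitive way to picture "scaling test-time compute" is best-of-n sampling with a majority vote: spend more compute at inference time by drawing more candidate answers, then aggregate. This is only an illustrative sketch, not OpenAI's actual o1 training or inference method, and `noisy_solver` is a hypothetical stand-in for a model:

```python
import random
from collections import Counter

def noisy_solver(correct_answer: int, accuracy: float, rng: random.Random) -> int:
    """Toy stand-in for a model: right with probability `accuracy`, else a nearby wrong answer."""
    if rng.random() < accuracy:
        return correct_answer
    return correct_answer + rng.choice([-2, -1, 1, 2])

def majority_vote(correct_answer: int, n_samples: int, accuracy: float = 0.6, seed: int = 0) -> int:
    """Spend more test-time compute (more samples) and aggregate by majority vote."""
    rng = random.Random(seed)
    votes = [noisy_solver(correct_answer, accuracy, rng) for _ in range(n_samples)]
    # The most common answer among the samples wins.
    return Counter(votes).most_common(1)[0][0]
```

With a solver that is right only ~60% of the time per sample, a 25-sample majority vote is right far more often: accuracy climbs with inference compute even though the underlying "model" is unchanged, which is the basic shape of the test-time scaling claim.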



4/15
@dmdohan
Practically, thinking in plain language opens up a ton of possibilities.

On safety & alignment, the model is more reliable because it can reason about policies and available choices before responding, and we are able to inspect its thinking and look for why something happened



5/15
@dmdohan
It's important to emphasize that this is a huge leap /and/ we're still at the start

Give o1-preview a try, we think you'll like it.

And in a month, give o1 a try and see all the ways it has improved in such a short time

And expect that to keep happening



6/15
@dmdohan
Also want to point out o1-mini, which is incredible at coding tasks while being /fast/

It and o1 are the first generation of a new type of model.

[Quoted tweet]
As part of today, we’re also releasing o1-mini. This is an incredibly smart, small model that can also reason before it answers. o1-mini allows us at @OpenAI to make high intelligence widely accessible.

openai.com/index/openai-o1-m…

On the AIME benchmark, o1-mini re-defines the intelligence + cost frontier (see if you can spot the old GPT-4o model in the bottom 🙂).

Massive congrats to the team and especially @ren_hongyu and @shengjia_zhao for leading this!


7/15
@akushaidesu
o1 by next month??
@kimmonismus



8/15
@ferdousbhai
How would it improve in a month? Is there a continuous RL in the pipeline?



9/15
@axpny
@readwise save



10/15
@llennchan2003
I didn't find any advantage of o1 over Sonnet... it's on par at best



11/15
@wyqtor
I would if I had at least 10 daily messages



12/15
@natsothanaphan
I hope this keeps working! On the cautious side, transformers do have fundamental limitations, such as being unable to natively count or keep state. So they will continue to be fragile on certain tasks. Realistically, this means directly continuing this approach will run into problems at some point. I hope you can get past them.



13/15
@KarmaLikeWater
Not a magic leap? 😲



14/15
@adhil_parammel
Orion in october 🥰



15/15
@kai_mordo






 

bnew



1/3
We're publishing a temporary dashboard to help users understand o1's reasoning tokens:



2/3
Here you can see that:

- o1 usually uses more tokens for reasoning compared to completion (edited the graph to clarify)

- the median number of reasoning tokens is relatively constant regardless of prompt size

https://p.us5.datadoghq.com/sb/7dc1...9949362&to_ts=1726356349362&live=true



3/3
The reasoning tokens actually are usually larger than the final answer tokens. The screenshot showed completion inclusive of reasoning, which was confusing; I've just updated the dashboard to separate the two.
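The point above, that the reported completion count includes the hidden reasoning tokens, can be sketched as a small helper. The field names here (`completion_tokens`, `completion_tokens_details.reasoning_tokens`) follow the shape of the OpenAI API's `usage` object, but treat the exact payload structure as an assumption, and the `example_usage` numbers are made up for illustration:

```python
def split_completion_tokens(usage: dict) -> tuple[int, int]:
    """Split a usage payload into (reasoning_tokens, final_answer_tokens).

    The completion_tokens total includes the hidden reasoning tokens,
    so the visible answer is the difference.
    """
    total = usage["completion_tokens"]
    reasoning = usage.get("completion_tokens_details", {}).get("reasoning_tokens", 0)
    return reasoning, total - reasoning

# Hypothetical payload mirroring the dashboard's point: reasoning
# usually exceeds the final answer.
example_usage = {
    "prompt_tokens": 50,
    "completion_tokens": 900,
    "completion_tokens_details": {"reasoning_tokens": 700},
}
```

On this example, 700 of the 900 completion tokens are reasoning and only 200 are the visible answer, which is why the original graph looked confusing before the two were separated.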




 

bnew




1/8
@thealexbanks
We’re at the brink of a new era in AI.

I just watched Sam Altman's recent discussion about OpenAI's o1 model:

I've been reflecting on how AI reasoning is still in its early stages—comparable to GPT-2.

But the acceleration curve is incredibly steep.

Altman expects reasoning to get A LOT better over the coming months as we move from o1-preview to o1.

This will enable a serious step forward.

OpenAI talks about 5 stages of artificial intelligence:

Level 1 — Chatbots, AI with conversational language
Level 2 — Reasoners, human-level problem solving
Level 3 — Agents, systems that can take action
Level 4 — Innovators, AI that can aid in invention
Level 5 — Organisations, AI that can do the work of an organisation

Altman says we’ve now reached level 2 (reasoners) with the arrival of their new o1 model.

Level 1 to Level 2 was slow.

Level 2 to Level 3 will be fast.

It's evident that we're at a pivotal moment in AI development.

AI agents are closer than we think.

What are agents?

Autonomous systems that can make decisions and perform actions to achieve specific goals.

All without human intervention.

I'm curious about how significant these upcoming updates will be. Even if we just achieve functional agents, that's a massive leap.

Video credit: @tsarnick



2/8
@thealexbanks
The improvement curve is going to be steep.

It made me think of this image by Tim Urban in his famous 2015 blog post "The AI Revolution: The Road to Superintelligence".

It’s time to accelerate 🫡



3/8
@OranAITech
Massive leap ahead.

Agents change everything.

Actions will need to have a human in the loop



4/8
@thealexbanks
We've got a lot to look forward to, Adi. Especially if level 2 to level 3 is significantly quicker, like Altman said



5/8
@tomaldertweets
it's incredible to think we're already at level 2 with reasoners



6/8
@thealexbanks
We've already come so far—and the curve is only steepening



7/8
@Uttupaaji
Excited for the future



8/8
@matt_gray_
The compound effect at its finest.

Thank you for keeping us up to date, Alex!




 