Sam Altman is a habitual liar and can't be trusted, says former OpenAI board member

Wargames

One Of The Last Real Ones To Do It
Joined
Apr 1, 2013
Messages
24,310
Reputation
4,165
Daps
90,647
Reppin
New York City
Basically the board was right: he wants to be a billionaire. I'm surprised so many left, though; if they had stayed, they could have become billionaires too.

To me it looks like he wants investment in the company and money in the data centers. I also don't think it's a coincidence they released their best coding version right before doing this.
 

Professor Emeritus

Veteran
Poster of the Year
Supporter
Joined
Jan 5, 2015
Messages
50,869
Reputation
19,586
Daps
202,403
Reppin
the ether

Wargames

One Of The Last Real Ones To Do It
Joined
Apr 1, 2013
Messages
24,310
Reputation
4,165
Daps
90,647
Reppin
New York City
This survey of fusion power projections suggests that most experts expect the first power-generating demo plant around 2040 and the first commercially viable plant around 2040-2050. The lowest projection is still in the 2030s. China has a plant that's supposed to be running by the 2030s, but it's not expected to produce more energy than it consumes.


I will believe it when I see it…. :stopitslime:
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
52,329
Reputation
7,979
Daps
150,079
This survey of fusion power projections suggests that most experts expect the first power-generating demo plant around 2040 and the first commercially viable plant around 2040-2050. The lowest projection is still in the 2030s. China has a plant that's supposed to be running by the 2030s, but it's not expected to produce more energy than it consumes.



he has a different timetable because he's betting his AI will become intelligent enough to solve the physics necessary for nuclear fusion to work. it's a reasonable assumption that if you have an actual artificial intelligence, then the timetable for everything moves up significantly.


 

Professor Emeritus

Veteran
Poster of the Year
Supporter
Joined
Jan 5, 2015
Messages
50,869
Reputation
19,586
Daps
202,403
Reppin
the ether
he has a different timetable because he's betting his AI will become intelligent enough to solve the physics necessary for nuclear fusion to work. it's a reasonable assumption that if you have an actual artificial intelligence, then the timetable for everything moves up significantly.




Yes, I realize that's what he's "betting", but I've pointed out in the past that he knows literally nothing about physics. What makes his wild predictions any more meaningful than Musk's predictions?

At some point y'all gonna realize that Altman and Musk are the same guy.....
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
52,329
Reputation
7,979
Daps
150,079
Yes, I realize that's what he's "betting", but I've pointed out in the past that he knows literally nothing about physics. What makes his wild predictions any more meaningful than Musk's predictions?

At some point y'all gonna realize that Altman and Musk are the same guy.....

he doesn't need to know physics. he knows they can train narrow models on known physics, generate hundreds of thousands if not millions of synthetic textbooks on the subject, and task any number of AI agents with tackling physics problems.







1/11
@denny_zhou
What is the performance limit when scaling LLM inference? Sky's the limit.

We have mathematically proven that transformers can solve any problem, provided they are allowed to generate as many intermediate reasoning tokens as needed. Remarkably, constant depth is sufficient.

[2402.12875] Chain of Thought Empowers Transformers to Solve Inherently Serial Problems (ICLR 2024)



2/11
@denny_zhou
Just noticed a fun youtube video for explaining this paper. LoL. Pointed by @laion_ai http://invidious.poast.org/4JNe-cOTgkY



3/11
@ctjlewis
hey Denny, curious if you have any thoughts. i reached the same conclusion:

[Quoted tweet]
x.com/i/article/178554774683…


4/11
@denny_zhou
Impressive! You would be interested at seeing this: [2301.04589] Memory Augmented Large Language Models are Computationally Universal



5/11
@nearcyan
what should one conclude from such a proof if it’s not also accompanied by a proof that we can train a transformer into the state (of solving a given arbitrary problem), possibly even with gradient descent and common post training techniques?



6/11
@QuintusActual
“We have mathematically proven that transformers can solve any problem, provided they are allowed to generate as many intermediate reasoning tokens as needed.”

I’m guessing this is only true because as a problem grows in difficulty, the # of required tokens approaches ♾️



7/11
@Shawnryan96
How do they solve novel problems without a way to update the world model?



8/11
@Justin_Halford_
Makes sense for verifiable domains (e.g. math and coding).

Does this generalize to more ambiguous domains with competing values/incentives without relying on human feedback?



9/11
@ohadasor
Don't fall into it!!

[Quoted tweet]
"can solve any problem"? Really?? Let's read the abstract in the image attached to the post, and see if the quote is correct. Ah wow! Somehow he forgot to quote the rest of the sentence! How is that possible?
The full quote is "can solve any problem solvable by boolean circuits of size T". This changes a lot. All problems solvable by Boolean circuits, of any size, is called the Circuit Evaluation Problem, and is known to cover precisely polynomial time (P) calculations. So it cannot solve the most basic logical problems which are at least exponential. Now here we don't even have P, we have only circuits of size T, which validates my old mantra: it can solve only constant-time problems. The lowest possible complexity class.
And it also validates my claim about the bubble of machine learning promoted by people who have no idea what they're talking about.


10/11
@CompSciFutures
Thx, refreshingly straightforward notation too, I might take the time to read this one properly.

I'm just catching up and have a dumb Q... that is an interestingly narrow subset of symbolic operands. Have you considered what happens if you add more?



11/11
@BatAndrew314
Noob question: how is this related to the universal approximation theorem? Meaning, can transformers solve any problem because they are neural nets, or is it some different property of transformers and CoT?




To post tweets in this format, more info here: https://www.thecoli.com/threads/tips-and-tricks-for-posting-the-coli-megathread.984734/post-52211196



[Submitted on 20 Feb 2024 (v1), last revised 23 May 2024 (this version, v3)]


Chain of Thought Empowers Transformers to Solve Inherently Serial Problems


Zhiyuan Li, Hong Liu, Denny Zhou, Tengyu Ma



Comments: 38 pages, 10 figures. Accepted by ICLR 2024
Subjects: Machine Learning (cs.LG); Computational Complexity (cs.CC); Machine Learning (stat.ML)
Cite as: arXiv:2402.12875 [cs.LG]
(or arXiv:2402.12875v3 [cs.LG] for this version)
[2402.12875] Chain of Thought Empowers Transformers to Solve Inherently Serial Problems


Submission history

From: Zhiyuan Li [view email]

[v1] Tue, 20 Feb 2024 10:11:03 UTC (3,184 KB)
[v2] Tue, 7 May 2024 17:00:27 UTC (5,555 KB)
[v3] Thu, 23 May 2024 17:10:39 UTC (5,555 KB)
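To make the paper's claim concrete, here's a toy sketch (my own illustration, not code from the paper): parity of a bit string is the classic "inherently serial" problem that shallow circuits struggle with, yet a single constant-size update, repeated once per emitted intermediate token, computes it easily. The scratchpad here stands in for chain-of-thought tokens.

```python
# Toy illustration (assumption: parity as the stand-in serial problem).
# A fixed, constant-size step applied once per generated "token" can
# carry out a computation that cannot be done in one shallow pass.

def step(state, bit):
    """One constant-size update, analogous to one decoding step."""
    return state ^ bit

def parity_with_scratchpad(bits):
    """Apply the same tiny step once per input bit, recording each
    running result as an intermediate 'reasoning token'."""
    scratchpad = []
    state = 0
    for b in bits:
        state = step(state, b)
        scratchpad.append(state)  # intermediate token
    return state, scratchpad

result, trace = parity_with_scratchpad([1, 0, 1, 1])
print(result)  # parity of three 1-bits -> 1
print(trace)   # running parities: [1, 1, 0, 1]
```

The point of the analogy: the per-step computation stays constant-depth; all of the serial work lives in the length of the generated trace, which is exactly the trade-off the paper formalizes.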


 

Professor Emeritus

Veteran
Poster of the Year
Supporter
Joined
Jan 5, 2015
Messages
50,869
Reputation
19,586
Daps
202,403
Reppin
the ether
he doesn't need to know physics. he knows they can train narrow models on known physics, generate hundreds of thousands if not millions of synthetic textbooks on the subject, and task any number of AI agents with tackling physics problems.


Read the reddit and twitter comments on the article you posted to see how little that paper's result matters in a practical sense. Without knowing physics, you literally have no clue what they can and cannot solve.
 

Wargames

One Of The Last Real Ones To Do It
Joined
Apr 1, 2013
Messages
24,310
Reputation
4,165
Daps
90,647
Reppin
New York City
At this point I believe them. He feels like a young Elon Musk back when everyone was ignoring the signs that he was a full of shyt con man and thinking he was a genius.

Most of the tech people who actually figured out how to create AI have left in protest, and people are talking about Altman's dream for AI instead.
 