Sam Altman is a habitual liar and can't be trusted, says former OpenAI board member

DrBanneker

Space is the Place
Joined
Jan 23, 2016
Messages
5,696
Reputation
4,566
Daps
19,670
Reppin
Fighting Borg at Wolf 359
Sam Altman's goal, I think, is to be some grand historic figure who creates Artificial General Intelligence for humanity and thus propels us into some divine age of transhumanism, space travel, etc. All the ethics and safety shyt is just an obstacle to that, and he thinks that even if he played by the rules, it would only let someone else like China, Google or Elon scoop him.

It's an arms race at this point; the safety and ethics shyt is just window dressing to make it look like they give a fukk.
 

Professor Emeritus

Veteran
Poster of the Year
Supporter
Joined
Jan 5, 2015
Messages
51,330
Reputation
19,666
Daps
203,884
Reppin
the ether

DrBanneker
"Some creative jobs maybe will go away, but maybe they shouldn't have been there in the first place," the CTO said of AI's role in the workplace. "I really believe that using it as a tool for education, creativity, will expand our intelligence."




These people are real fukking sociopaths.

If you want to understand their misanthropy, read these books by Douglas Rushkoff

Team Human


Survival of the Richest: Escape Fantasies of the Tech Billionaires
 

Consigliere

Superstar
Supporter
Joined
Jun 15, 2012
Messages
10,501
Reputation
1,814
Daps
36,882
These folks most likely have military and CIA types pushing them in a certain direction.

Not trying to be a conspiracy theorist.
 

Professor Emeritus

Veteran
Poster of the Year
Supporter
Joined
Jan 5, 2015
Messages
51,330
Reputation
19,666
Daps
203,884
Reppin
the ether
I think they got a scaling "predictor" for the intelligence of the models they train, and eventually this might be our new reality.


How would you predict what level of intelligence it would take to "solve all of physics"? We don't even know if we have the necessary information to do that in the first place.

I don't think Altman has any clue what that would entail or even what that means.
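
For what it's worth, the scaling "predictor" being referenced is basically just a curve fit: train a bunch of small models, fit a power law relating compute to eval loss, and extrapolate it to the big run. Here's a minimal sketch of that idea with completely made-up numbers and units (nothing here is OpenAI's actual data or code), and note it predicts loss on a benchmark, not "intelligence" and definitely not "solving all of physics":

```python
# Hedged toy sketch of a scaling "predictor": fit a power law
# loss ~ a * compute^(-b) to small-run results and extrapolate.
# All numbers below are made up for illustration only.
import numpy as np

# Hypothetical (training compute, eval loss) pairs from small runs
compute = np.array([1.0, 10.0, 100.0, 1000.0])
loss = np.array([3.2, 2.6, 2.1, 1.7])

# A power law is a straight line in log-log space: log(loss) = log(a) - b*log(C)
slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
a, b = np.exp(intercept), -slope

def predicted_loss(c):
    """Extrapolated loss for a run with compute c (same made-up units)."""
    return a * c ** (-b)

print(f"fit: loss ~ {a:.2f} * C^(-{b:.3f})")
print(f"predicted loss at C = 100,000: {predicted_loss(1e5):.2f}")
```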
 

Geek Nasty

Brain Knowledgeably Whizzy
Supporter
Joined
Jan 30, 2015
Messages
30,673
Reputation
4,786
Daps
115,505
Reppin
South Kakalaka
How would you predict what level of intelligence it would take to "solve all of physics"? We don't even know if we have the necessary information to do that in the first place.

I don't think Altman has any clue what that would entail or even what that means.
How much do power requirements scale with all this theoretical intelligence? AI is already stressing the grid, we're supposed to come up with power for retiring fossil fuels AND AI farms too?
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
57,344
Reputation
8,496
Daps
160,030
How much do power requirements scale with all this theoretical intelligence? AI is already stressing the grid, we're supposed to come up with power for retiring fossil fuels AND AI farms too?
maybe AI will be used to solve its energy needs :ld:
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
57,344
Reputation
8,496
Daps
160,030
How would you predict what level of intelligence it would take to "solve all of physics"? We don't even know if we have the necessary information to do that in the first place.

I don't think Altman has any clue what that would entail or even what that means.

This is the way I see it possibly going: it'll be tasked with going from point A, where we currently are in our understanding of physics, to point B, where we have a complete and consistent theory of everything. The journey from A to B will likely involve an AI system that can learn from the vast amounts of existing research, identify patterns and connections that we may have missed, and generate new hypotheses and predictions that can be tested and validated. Maybe a world model, or something that surpasses it, that can do reasonably accurate simulations. My point is, I think every task it's given will be broken down into subtasks to fill in any knowledge gaps necessary to get to the end goal. That's basically how I think it would come up with novel solutions.
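
Roughly, that "break everything into subtasks until the knowledge gaps are filled" loop could look like the toy sketch below. Everything in it is a made-up placeholder (the decompose/answer/combine steps stand in for whatever model or world-model simulation would actually do the work); it's just to make the recursive shape of the idea concrete.

```python
# Very loose sketch of the "break every task into subtasks to fill knowledge
# gaps" idea from the post above. The planner/solver functions are toy
# stand-ins, not anything OpenAI has described.
from dataclasses import dataclass

@dataclass
class Task:
    goal: str
    depth: int = 0

def try_direct_answer(task: Task) -> str | None:
    # Toy rule: pretend anything past depth 2 is within current knowledge.
    return f"answered: {task.goal}" if task.depth >= 2 else None

def decompose(task: Task) -> list[Task]:
    # Toy planner: split a goal into two narrower knowledge gaps.
    return [Task(f"{task.goal} / gap {i}", task.depth + 1) for i in (1, 2)]

def combine(task: Task, results: list[str]) -> str:
    # Toy synthesis step: fold subtask findings back into the parent goal.
    return f"{task.goal} <= [{'; '.join(results)}]"

def solve(task: Task) -> str:
    direct = try_direct_answer(task)
    if direct is not None:
        return direct
    return combine(task, [solve(sub) for sub in decompose(task)])

print(solve(Task("current physics -> consistent theory of everything")))
```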
 

Professor Emeritus

Veteran
Poster of the Year
Supporter
Joined
Jan 5, 2015
Messages
51,330
Reputation
19,666
Daps
203,884
Reppin
the ether
This is the way I see it possibly going: it'll be tasked with going from point A, where we currently are in our understanding of physics, to point B, where we have a complete and consistent theory of everything. The journey from A to B will likely involve an AI system that can learn from the vast amounts of existing research, identify patterns and connections that we may have missed, and generate new hypotheses and predictions that can be tested and validated. Maybe a world model, or something that surpasses it, that can do reasonably accurate simulations. My point is, I think every task it's given will be broken down into subtasks to fill in any knowledge gaps necessary to get to the end goal. That's basically how I think it would come up with novel solutions.


That's possible, but we don't know if it would work and it's definitely not what Altman said.

The main reason we don't know if it would work is that we don't know if we even have the experimental data necessary to develop the theories to explain physics. It's possible that we will never have that data, there could be shyt going on at a level we can't even physically detect, and without that information we'll never be able to realize what is actually going on.
 

Professor Emeritus

Veteran
Poster of the Year
Supporter
Joined
Jan 5, 2015
Messages
51,330
Reputation
19,666
Daps
203,884
Reppin
the ether
How much do power requirements scale with all this theoretical intelligence? AI is already stressing the grid, we're supposed to come up with power for retiring fossil fuels AND AI farms too?


Not to mention that, with the hyper-profit focus, OpenAI and company appear obsessed with mass distributing shytty AI before they develop any next-level AI. Which means they'll flood the world with power-sucking shyt that is hardly contributing anything other than taking people's jobs long before they build anything that will "solve global issues" (not that that will necessarily ever happen).





maybe AI will be used to solve its energy needs :ld:


Maybe, but there isn't a great history of corporate tech solving significant issues.
 


bnew

Veteran
Joined
Nov 1, 2015
Messages
57,344
Reputation
8,496
Daps
160,030
That's possible, but we don't know if it would work and it's definitely not what Altman said.

The main reason we don't know if it would work is that we don't know if we even have the experimental data necessary to develop the theories to explain physics. It's possible that we will never have that data, there could be shyt going on at a level we can't even physically detect, and without that information we'll never be able to realize what is actually going on.

That's the gap in knowledge AI can be used to fill by "thinking" outside the box: make new connections or develop new science in pursuit of that goal/sub-goal, recursively solving a problem/mystery to ultimately fulfill the original objective. It's like how NASA made a number of scientific advancements and discoveries in pursuit of the goal of getting to space.
 