Sam Altman is a habitual liar and can't be trusted, says former OpenAI board member

Professor Emeritus

Veteran
Poster of the Year
Supporter
Joined
Jan 5, 2015
Messages
51,330
Reputation
19,656
Daps
203,845
Reppin
the ether

The Cult of Failing Upwards

Ed Zitron walks you through how career manager Adam Mosseri pushed out Instagram's original founders, turning it into an ultra-profitable app that barely works, and how Sam Altman, the so-called hero of the AI boom, is a lobbyist dressed as a technologist, best known for being an absent, self-obsessed demagogue.

 

bnew

Veteran
Joined
Nov 1, 2015
Messages
56,046
Reputation
8,229
Daps
157,717
Will Republicans help us deal with the issues tech is presenting, particularly pertaining to black issues?

they creating AI images of black trump supporters and got bots online to sow discord, so no, republicans won't be looking to do anything positive for black people in any regard.
 

Micky Mikey

Veteran
Supporter
Joined
Sep 27, 2013
Messages
15,840
Reputation
2,820
Daps
88,132

It's amazing how Sam is destroying his own company with his lies, lack of transparency, and obsessive need for power. I don't think he can help himself. I got the sense that something was off about him when listening to his interviews. He can never seem to give a straight and direct answer about anything.
 

Cynic

Superstar
Joined
Jan 7, 2013
Messages
16,155
Reputation
2,269
Daps
34,923
Reppin
NULL
I think they got a scaling "predictor" for the intelligence of the models they train and eventually this might be our new reality.

He’s full of sh!t. Patchwork guard rails on LLMs aren’t going to produce anything innovative.

How can you trust something with fewer neurons than a cockroach?
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
56,046
Reputation
8,229
Daps
157,717


OpenAI's brain drain isn't a great look for Sam Altman


Jordan Hart


OpenAI will have to weather the storms of being the first company to take generative AI mainstream. Justin Sullivan/Getty Images; Chelsea Jia Feng/BI


  • OpenAI is losing key members, including cofounder John Schulman, who left for Anthropic.

  • Despite leading the market with ChatGPT, OpenAI faces big challenges in the fledgling AI world.

  • Trust issues and whistleblower complaints add to Sam Altman's obstacles as CEO.

OpenAI is losing key members during a pivotal time in the artificial intelligence market, and it's not a good look.

On Monday, cofounder John Schulman announced that he would be leaving OpenAI to work at rival Anthropic. He joins two other high-level executives — though president Greg Brockman said he's taking an extended leave of absence — and several former employees in what appears to be an exodus from the ChatGPT maker helmed by Sam Altman.

"It's not good at all for OpenAI," said tech analyst Jacob Bourne at Emarketer, a sister company to Business Insider.

Although OpenAI got ahead in the AI arms race when it released its chatbot in a surprise move in November 2022, being the first may not be enough to keep it at the top of the leaderboard as other, bigger companies build and release their own AI and key executives depart.

"OpenAI has no 'moat,'" Mike Gualtieri, vice president and principal analyst at Forrester, said.

In business, a moat refers to a significant advantage that keeps a company more or less untouchable from its rivals and helps it maintain its market share.

Gualtieri told BI that Big Tech companies, Google and Meta in particular, already had generative AI tech at the same time as OpenAI.

"They were just afraid to release because hallucinations, etc, could impact their reputation and business," Gualtieri said.

Just last week, Meta released a statement addressing a hallucination that prompted its MetaAI chatbot to say that the July 13 assassination attempt on former President Donald Trump didn't happen. The story drew a lot of attention and criticism.

"I think we're going to see more of this kind of scrutiny, and it's not going to be just directed at OpenAI," Bourne said

OpenAI — and Sam Altman — are also under a lot of other scrutiny. On July 1, whistleblowers at OpenAI contacted the Securities and Exchange Commission, calling for it to investigate the company for rule violations around NDAs. Weeks before that, nine current and former OpenAI employees signed an open letter pointing out the risks of generative AI. And the company's management appears split between pressing ahead with AI development and taking a more cautious approach.

"I decided to leave OpenAI because I lost hope that they would act responsibly, particularly as they pursue artificial general intelligence," Daniel Kokotajlo, a former OpenAI employee who signed the letter, previously said in a statement.

Tech companies are spending heavily on AI endeavors, but trust remains one of the key factors in how much their investments will pay off.

Emarketer's Bourne said, "It's kind of this perfect storm for the emergence of this kind of concern around profits over safety that we're seeing."

And, Bourne said, as a young company with an "unusual government structure," OpenAI will continue to be under a magnifying glass — possibly even more than well-established rivals.

Meantime, it looks like OpenAI's brain drain to competitors could put the company at a further disadvantage despite its first-mover advantage and Apple partnership.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
56,046
Reputation
8,229
Daps
157,717

1/1
The singularity is literally starting right now and 99% of people have no idea



AI-generated explanation:

The tweet by an OpenAI developer is saying that some changes (or updates) to the OpenAI codebase were made entirely by a specific AI system called "o1." Here's a breakdown:
  1. PRs: This stands for "Pull Requests." In software development, a pull request is when someone suggests changes to the code and asks others to review and approve those changes.
  2. OpenAI codebase: This refers to the collection of code that makes up the OpenAI system.
  3. Authored solely by o1: This means that these changes were written and proposed entirely by an AI system named "o1," without any human intervention.

Simplified Version

The developer is saying that an AI system named "o1" has independently made and suggested some updates to the OpenAI codebase, which is a significant achievement because it shows the AI's capability to contribute directly to software development without human help.
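
To make the "authored solely by o1" claim concrete: opening a pull request is ultimately just an API call, so an AI agent can do it the same way any bot does. Below is a minimal Python sketch against GitHub's real REST endpoint (POST /repos/{owner}/{repo}/pulls). The repo, branch, and token names are hypothetical, and nothing here is OpenAI's actual tooling, just an illustration of the mechanism.

import os
import requests

def open_pull_request(owner, repo, head, base, title, body):
    """Propose merging the `head` branch into `base` via a pull request."""
    # Real GitHub REST API endpoint; everything else below is illustrative.
    url = f"https://api.github.com/repos/{owner}/{repo}/pulls"
    headers = {
        # GITHUB_TOKEN is an assumed environment variable holding an access token.
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    }
    payload = {"title": title, "head": head, "base": base, "body": body}
    resp = requests.post(url, headers=headers, json=payload)
    resp.raise_for_status()
    return resp.json()  # response includes the new PR's number and URL

# Hypothetical usage -- an agent proposing a change for human review:
# open_pull_request("example-org", "example-repo",
#                   head="o1/fix-tokenizer-edge-case", base="main",
#                   title="Fix tokenizer edge case",
#                   body="Change authored by an automated agent; please review.")

The key point of the workflow is that the review step stays human: the agent can only propose a change, and someone still has to approve and merge it.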
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
56,046
Reputation
8,229
Daps
157,717


The Intelligence Age


September 23, 2024

A vibrant, impressionistic landscape of a winding path that stretches towards the horizon, lined with colorful fields


In the next couple of decades, we will be able to do things that would have seemed like magic to our grandparents.

This phenomenon is not new, but it will be newly accelerated. People have become dramatically more capable over time; we can already accomplish things now that our predecessors would have believed to be impossible.

We are more capable not because of genetic change, but because we benefit from the infrastructure of society being way smarter and more capable than any one of us; in an important sense, society itself is a form of advanced intelligence. Our grandparents – and the generations that came before them – built and achieved great things. They contributed to the scaffolding of human progress that we all benefit from. AI will give people tools to solve hard problems and help us add new struts to that scaffolding that we couldn’t have figured out on our own. The story of progress will continue, and our children will be able to do things we can’t.

It won’t happen all at once, but we’ll soon be able to work with AI that helps us accomplish much more than we ever could without AI; eventually we can each have a personal AI team, full of virtual experts in different areas, working together to create almost anything we can imagine. Our children will have virtual tutors who can provide personalized instruction in any subject, in any language, and at whatever pace they need. We can imagine similar ideas for better healthcare, the ability to create any kind of software someone can imagine, and much more.

With these new abilities, we can have shared prosperity to a degree that seems unimaginable today; in the future, everyone’s lives can be better than anyone’s life is now. Prosperity alone doesn’t necessarily make people happy – there are plenty of miserable rich people – but it would meaningfully improve the lives of people around the world.

Here is one narrow way to look at human history: after thousands of years of compounding scientific discovery and technological progress, we have figured out how to melt sand, add some impurities, arrange it with astonishing precision at extraordinarily tiny scale into computer chips, run energy through it, and end up with systems capable of creating increasingly capable artificial intelligence.

This may turn out to be the most consequential fact about all of history so far. It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there.

How did we get to the doorstep of the next leap in prosperity?

In three words: deep learning worked.

In 15 words: deep learning worked, got predictably better with scale, and we dedicated increasing resources to it.

That’s really it; humanity discovered an algorithm that could really, truly learn any distribution of data (or really, the underlying “rules” that produce any distribution of data). To a shocking degree of precision, the more compute and data available, the better it gets at helping people solve hard problems. I find that no matter how much time I spend thinking about this, I can never really internalize how consequential it is.
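
For what it's worth, "got predictably better with scale" does have a quantitative form in the published literature, even though the essay gives no formula. A rough sketch of the compute scaling law reported by Kaplan et al. (2020), with the caveat that the exponent is an empirical fit from that paper, not anything the essay itself states:

% Empirical compute scaling law (Kaplan et al., 2020) -- imported from the
% literature as an assumption, not a formula from the essay itself:
% test loss L falls as a power law in training compute C.
L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}, \qquad \alpha_C \approx 0.05
% Consequence: doubling compute multiplies loss by 2^{-0.05} \approx 0.966,
% a predictable ~3.4% drop -- which is what "predictably better" means here.

That smooth power-law behavior is why labs can forecast a bigger model's performance before training it, and why "dedicate increasing resources to it" was a rational bet rather than blind faith.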

There are a lot of details we still have to figure out, but it’s a mistake to get distracted by any particular challenge. Deep learning works, and we will solve the remaining problems. We can say a lot of things about what may happen next, but the main one is that AI is going to get better with scale, and that will lead to meaningful improvements to the lives of people around the world.

AI models will soon serve as autonomous personal assistants who carry out specific tasks on our behalf, like coordinating medical care. At some point further down the road, AI systems are going to get so good that they help us make better next-generation systems and make scientific progress across the board.

Technology brought us from the Stone Age to the Agricultural Age and then to the Industrial Age. From here, the path to the Intelligence Age is paved with compute, energy, and human will.

If we want to put AI into the hands of as many people as possible, we need to drive down the cost of compute and make it abundant (which requires lots of energy and chips). If we don’t build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people.

We need to act wisely but with conviction. The dawn of the Intelligence Age is a momentous development with very complex and extremely high-stakes challenges. It will not be an entirely positive story, but the upside is so tremendous that we owe it to ourselves, and the future, to figure out how to navigate the risks in front of us.

I believe the future is going to be so bright that no one can do it justice by trying to write about it now; a defining characteristic of the Intelligence Age will be massive prosperity.

Although it will happen incrementally, astounding triumphs – fixing the climate, establishing a space colony, and discovering all of physics – will eventually become commonplace. With nearly-limitless intelligence and abundant energy – the ability to generate great ideas, and the ability to make them happen – we can do quite a lot.

As we have seen with other technologies, there will also be downsides, and we need to start working now to maximize AI’s benefits while minimizing its harms. As one example, we expect that this technology can cause a significant change in labor markets (good and bad) in the coming years, but most jobs will change more slowly than most people think, and I have no fear that we’ll run out of things to do (even if they don’t look like “real jobs” to us today). People have an innate desire to create and to be useful to each other, and AI will allow us to amplify our own abilities like never before. As a society, we will be back in an expanding world, and we can again focus on playing positive-sum games.

Many of the jobs we do today would have looked like trifling wastes of time to people a few hundred years ago, but nobody is looking back at the past, wishing they were a lamplighter. If a lamplighter could see the world today, he would think the prosperity all around him was unimaginable. And if we could fast-forward a hundred years from today, the prosperity all around us would feel just as unimaginable.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
56,046
Reputation
8,229
Daps
157,717






1/6
How it started vs how it’s goin.

[Quoted tweet]
I replied with this. Mira, thank you for everything.

It’s hard to overstate how much Mira has meant to OpenAI, our mission, and to us all personally.

I feel tremendous gratitude towards her for what she has helped us build and accomplish, but I most of all feel personal gratitude towards her for the support and love during all the hard times. I am excited for what she’ll do next.

We’ll say more about the transition plans soon, but for now, I want to take a moment to just feel thanks.

Sam


2/6
😂 it's almost nothing now



3/6
oh, maybe Matt hired Mira????



4/6
right. fully automated company



5/6
he hasn't left, but if a co-founder takes a long sabbatical, it usually ends up badly



6/6
my guess is AGI achieved internally, so soon they don't need a CEO





 

bnew

Veteran
Joined
Nov 1, 2015
Messages
56,046
Reputation
8,229
Daps
157,717





Exclusive: OpenAI to remove non-profit control and give Sam Altman equity


By Krystal Hu and Kenrick Cai

September 25, 2024, 4:35 PM EDT (updated 4 hours ago)

Sam Altman, CEO of OpenAI, attends the 54th annual meeting of the World Economic Forum, in Davos, Switzerland, January 18, 2024. REUTERS/Denis Balibouse/File Photo


  • OpenAI plots to restructure into for-profit benefit corporation
  • Non-profit board no longer controls for-profit when done
  • CEO Sam Altman to receive equity in OpenAI for the first time

SAN FRANCISCO, Sept 25 (Reuters) - ChatGPT-maker OpenAI is working on a plan to restructure its core business into a for-profit benefit corporation that will no longer be controlled by its non-profit board, people familiar with the matter told Reuters, in a move that will make the company more attractive to investors.

The OpenAI non-profit will continue to exist and own a minority stake in the for-profit company, the sources said. The move could also have implications for how the company manages AI risks in a new governance structure.

Chief executive Sam Altman will also receive equity for the first time in the for-profit company, which could be worth $150 billion after the restructuring as it also tries to remove the cap on returns for investors, sources added. The sources requested anonymity to discuss private matters.

"We remain focused on building AI that benefits everyone, and we’re working with our board to ensure that we’re best positioned to succeed in our mission. The non-profit is core to our mission and will continue to exist," an OpenAI spokesperson said.

The details of the proposed corporate structure, first reported by Reuters, highlight significant governance changes happening behind the scenes at one of the most important AI companies. The plan is still being hashed out with lawyers and shareholders and the timeline for completing the restructuring remains uncertain, the sources said.

The restructuring also comes amid a series of leadership changes at the startup. OpenAI's longtime chief technology officer Mira Murati abruptly announced her departure from the company on Wednesday. Greg Brockman, OpenAI's president, has also been on leave.

Founded in 2015 as a non-profit AI research organization, OpenAI added the for-profit OpenAI LP entity in 2019 as a subsidiary of its non-profit, securing capital from Microsoft (MSFT.O) to fund its research.

The company captured global attention with the launch of ChatGPT in late 2022, a generative AI app that spits out human-like responses to text queries and has become one of the fastest-growing applications in history with over 200 million weekly active users, igniting a global race to invest in AI.

Along with ChatGPT's success, OpenAI's valuation has skyrocketed from $14 billion in 2021 to $150 billion in the new convertible debt round under discussion, attracting investors such as Thrive Capital and Apple (AAPL.O).


AI SAFETY


The company’s unusual structure, which gives full control of the for-profit subsidiary to the OpenAI nonprofit, was originally set up to ensure the mission of creating "safe AGI that is broadly beneficial," referring to artificial general intelligence that is at or beyond human intelligence.

The structure came into focus last November during one of the biggest boardroom dramas in Silicon Valley, where members of the non-profit board ousted Altman over a breakdown in communication and loss of trust. He was reinstated after five days with overwhelming support from employees and investors.

Since then, OpenAI's board has been refreshed with more tech executives, chaired by Bret Taylor, former Salesforce co-CEO who now runs his own AI startup. Any corporate changes need approval from its nine-person non-profit board.

The removal of non-profit control could make OpenAI operate more like a typical startup, a move generally welcomed by its investors who have poured billions into the company.

However, it could also raise concerns in the AI safety community about whether the lab still has enough governance to hold itself accountable in its pursuit of AGI, given that it dissolved its superalignment team, which focused on the long-term risks of AI, earlier this year.

It’s unclear how much equity Altman will receive. Altman, already a billionaire from his multiple startup investments, has previously stated that he chose not to take an equity stake in the company because the board needed a majority of disinterested directors with no stake in the company. He has also said he has enough money and is doing it because he loves the work.

The new structure of OpenAI would resemble that of its major rival Anthropic and Elon Musk's xAI, which are registered as benefit corporations, a form of for-profits that aim to promote social responsibility and sustainability in addition to making profits.
 