Sam Altman is a habitual liar and can't be trusted, says former OpenAI board member

bnew


1/2
@jaminball
If you add the current 4o-mini pricing to the chart, the drop in token pricing is 99.7% over the last 18 months!

For everyone nervous about the real-time audio pricing being ~$120 / 1m tokens, take comfort in the below chart. This price will undoubtedly come down significantly!

[Quoted tweet]
OpenAI has shared this chart a few times, but important to call out. Inference pricing on GPT4 family of models has dropped nearly 90% since its release!

This trend will continue. Business models that don't make sense now will in the future. Latent demand will be unlocked.
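For a sense of what declines like "nearly 90%" or "99.7% over 18 months" actually mean, here is a minimal sketch of the arithmetic (plain Python; the prices below are illustrative placeholders, not OpenAI's actual price list):

```python
# Illustrative sketch only: the prices are assumed for the example, not quoted from OpenAI.
old_price = 60.00   # assumed $ per 1M tokens when the older model launched
new_price = 0.15    # assumed $ per 1M tokens for the newer, cheaper model
months = 18         # window over which the drop is measured

total_drop = 1 - new_price / old_price
monthly_decline = 1 - (new_price / old_price) ** (1 / months)

print(f"Total drop: {total_drop:.1%}")                              # ~99.8% with these assumed prices
print(f"Implied compound monthly decline: {monthly_decline:.1%}")   # ~28% per month
```

The point of the chart is the compounding: under these assumptions, a roughly 28% monthly decline sustained for a year and a half wipes out more than 99% of the starting price.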




2/2
@BrianBrechbuhl
This was before raising $7B though!




 

Professor Emeritus

You got Nobel Prize winners in AI shytting on Sam Altman now






“I was particularly fortunate to have many very clever students – much cleverer than me – who actually made things work,” said Hinton. “They’ve gone on to do great things. I’m particularly proud of the fact that one of my students fired Sam Altman.”

"So OpenAI was set up with a big emphasis on safety. Its primary objective was to develop artificial general intelligence and ensure that it was safe," Hinton said on Tuesday.

"And over time, it turned out that Sam Altman was much less concerned with safety than with profits. And I think that's unfortunate," he added.
 

bnew


1/11
@johnschulman2
I shared the following note with my OpenAI colleagues today:

I've made the difficult decision to leave OpenAI. This choice stems from my desire to deepen my focus on AI alignment, and to start a new chapter of my career where I can return to hands-on technical work. I've decided to pursue this goal at Anthropic, where I believe I can gain new perspectives and do research alongside people deeply engaged with the topics I'm most interested in. To be clear, I'm not leaving due to lack of support for alignment research at OpenAI. On the contrary, company leaders have been very committed to investing in this area. My decision is a personal one, based on how I want to focus my efforts in the next phase of my career.

I joined OpenAI almost 9 years ago as part of the founding team after grad school. It's the first and only company where I've ever worked, other than an internship. It's also been quite a lot of fun. I'm grateful to Sam and Greg for recruiting me back at the beginning, and Mira and Bob for putting a lot of faith in me, bringing great opportunities and helping me successfully navigate various challenges. I'm proud of what we've all achieved together at OpenAI; building an unusual and unprecedented company with a public benefit mission.

I am confident that OpenAI and the teams I was part of will continue to thrive without me. Post-training is in good hands and has a deep bench of amazing talent. I get too much credit for ChatGPT -- Barret has done an incredible job building the team into the incredibly competent operation it is now, with Liam, Luke, and others. I've been heartened to see the alignment team coming together with some promising projects. With leadership from Mia, Boaz and others, I believe the team is in very capable hands.

I'm incredibly grateful for the opportunity to participate in such an important part of history and I'm proud of what we've achieved together. I'll still be rooting for you all, even while working elsewhere.



2/11
@sama
Thank you for everything you've done for OpenAI! You are a brilliant researcher, a deep thinker about product and society, and mostly, you are a great friend to all of us. We will miss you tremendously and make you proud of this place.

(I first met John in a cafe in Berkeley in 2015. He said something like "on one hand it seems ridiculous to be talking about AGI right now, but on the other hand I think it's very reasonable and here is why and also here is why I think it's important to be talking about it" and then laid out a significant fraction of what became OpenAI's initial strategy. That took about 15 minutes and then we awkwardly chatted for another 45 :smile: )



3/11
@Kat__Woods
Can you clarify? Sounds like you're saying that you're leaving the safety team so you can focus on safety.

But also, you say that you got support for researching safety?



https://video.twimg.com/ext_tw_video/1820626304045371392/pu/vid/avc1/720x720/_TxdtgItXWQX4chn.mp4

4/11
@LearnAI_MJ
GPT-5: I also quit 👋





5/11
@iamgingertrash
And thus it was foretold - the inevitable demise of OpenAI as the elves left for greener pastures





6/11
@ChrisUniverseB
"To be clear, I'm not leaving due to lack of support for alignment research at OpenAI. On the contrary, company leaders have been very committed to investing in this area. My decision is a personal one, based on how I want to focus my efforts in the next phase of my career."

I heard that 2 months ago leadership repeatedly denied access to compute resources which are very critical in developing new AI safety schemes so idkkkkk



7/11
@psychosort
Congratulations, and all the best going forward!



8/11
@alexalbert__
Welcome to the team, John!



9/11
@AISafetyMemes






10/11
@yacineMTB
anthropic is great! I love the research and the people are great. Good luck and work hard 🙏



11/11
@janleike
Very excited to be working together again!






1/11
@Luke_Metz
I'm leaving OpenAI after over 2 years of wild ride.

Alongside @barret_zoph , @LiamFedus , @johnschulman2 , and many others I got to build a “low key research preview” product that became ChatGPT. While we were all excited to work on it, none of us expected it to be where it is today, 100s of millions of users in a historically short amount of time. It was truly a privilege to witness its growth.

I learned so much through the process. Thank you so much @sama , @gdb , @miramurati , @ilyasut, @bobmcgrewai for giving us a chance. OpenAI is a special place.

Now, onto new things!



2/11
@chipro
What a ride! Can't wait to see what you'll build next



3/11
@nearcyan
thank you for the amazing work, looking forward to the future!



4/11
@stevenheidel
I always learned something new every time we had lunch, best of luck with what’s next!



5/11
@dhruv2038
OpenAI is nothing without its people.



6/11
@austinvhuang
appreciated your onboarding tips back when i was starting at brain and you were on your way to oai.

good luck with what's next.



7/11
@ericjang11
Excited to see what you’ll do next!



8/11
@victor_explore
How much of the reason was burnout?



9/11
@stallion1319
First of all Good luck Luke and you guys did an incredible achievement, now why is everyone leaving ?🥸



10/11
@horsperg
Thanks, and may the wind be on your back!



11/11
@keshab001
Everyone leaving to launch their own company that will earn them more money




 


bnew


Moore's Law for Everything​

by Sam Altman · March 16, 2021

My work at OpenAI reminds me every day about the magnitude of the socioeconomic change that is coming sooner than most people believe. Software that can think and learn will do more and more of the work that people now do. Even more power will shift from labor to capital. If public policy doesn’t adapt accordingly, most people will end up worse off than they are today.

We need to design a system that embraces this technological future and taxes the assets that will make up most of the value in that world–companies and land–in order to fairly distribute some of the coming wealth. Doing so can make the society of the future much less divisive and enable everyone to participate in its gains.

In the next five years, computer programs that can think will read legal documents and give medical advice. In the next decade, they will do assembly-line work and maybe even become companions. And in the decades after that, they will do almost everything, including making new scientific discoveries that will expand our concept of “everything.”

This technological revolution is unstoppable. And a recursive loop of innovation, as these smart machines themselves help us make smarter machines, will accelerate the revolution’s pace. Three crucial consequences follow:

  1. This revolution will create phenomenal wealth. The price of many kinds of labor (which drives the costs of goods and services) will fall toward zero once sufficiently powerful AI “joins the workforce.”
  2. The world will change so rapidly and drastically that an equally drastic change in policy will be needed to distribute this wealth and enable more people to pursue the life they want.
  3. If we get both of these right, we can improve the standard of living for people more than we ever have before.

Because we are at the beginning of this tectonic shift, we have a rare opportunity to pivot toward the future. That pivot can’t simply address current social and political problems; it must be designed for the radically different society of the near future. Policy plans that don’t account for this imminent transformation will fail for the same reason that the organizing principles of pre-agrarian or feudal societies would fail today.

What follows is a description of what’s coming and a plan for how to navigate this new landscape.

Part 1

The AI Revolution​


On a zoomed-out time scale, technological progress follows an exponential curve. Compare how the world looked 15 years ago (no smartphones, really), 150 years ago (no combustion engine, no home electricity), 1,500 years ago (no industrial machines), and 15,000 years ago (no agriculture).

The coming change will center around the most impressive of our capabilities: the phenomenal ability to think, create, understand, and reason. To the three great technological revolutions–the agricultural, the industrial, and the computational–we will add a fourth: the AI revolution. This revolution will generate enough wealth for everyone to have what they need, if we as a society manage it responsibly.

The technological progress we make in the next 100 years will be far larger than all we’ve made since we first controlled fire and invented the wheel. We have already built AI systems that can learn and do useful things. They are still primitive, but the trendlines are clear.

Part 2

Moore's Law for Everything​


Broadly speaking, there are two paths to affording a good life: an individual acquires more money (which makes that person wealthier), or prices fall (which makes everyone wealthier). Wealth is buying power: how much we can get with the resources we have.

The best way to increase societal wealth is to decrease the cost of goods, from food to video games. Technology will rapidly drive that decline in many categories. Consider the example of semiconductors and Moore’s Law: for decades, chips became twice as powerful for the same price about every two years.

In the last couple of decades, costs in the US for TVs, computers, and entertainment have dropped. But other costs have risen significantly, most notably those for housing, healthcare, and higher education. Redistribution of wealth alone won’t work if these costs continue to soar.

AI will lower the cost of goods and services, because labor is the driving cost at many levels of the supply chain. If robots can build a house on land you already own from natural resources mined and refined onsite, using solar power, the cost of building that house is close to the cost to rent the robots. And if those robots are made by other robots, the cost to rent them will be much less than it was when humans made them.

Similarly, we can imagine AI doctors that can diagnose health problems better than any human, and AI teachers that can diagnose and explain exactly what a student doesn’t understand.

“Moore’s Law for everything” should be the rallying cry of a generation whose members can’t afford what they want. It sounds utopian, but it’s something technology can deliver (and in some cases already has). Imagine a world where, for decades, everything–housing, education, food, clothing, etc.–became half as expensive every two years.
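As a rough sketch of what "half as expensive every two years" compounds to, assuming the halving held exactly (the essay presents this as an aspiration, not a forecast):

```python
# Sketch of the essay's hypothetical: a cost that halves every two years.
def relative_cost(years, halving_period=2.0):
    """Fraction of today's price remaining after `years` of halving."""
    return 0.5 ** (years / halving_period)

for years in (2, 4, 10, 20):
    print(f"After {years:>2} years: {relative_cost(years):.3%} of today's price")
# After 10 years a good would cost ~3.1% of today's price; after 20 years, ~0.1%.
```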

We will discover new jobs–we always do after a technological revolution–and because of the abundance on the other side, we will have incredible freedom to be creative about what they are.

Part 3

Capitalism for Everyone​


A stable economic system requires two components: growth and inclusivity. Economic growth matters because most people want their lives to improve every year. In a zero-sum world, one with no or very little growth, democracy can become antagonistic as people seek to vote money away from each other. What follows from that antagonism is distrust and polarization. In a high-growth world the dogfights can be far fewer, because it’s much easier for everyone to win.

Economic inclusivity means everyone having a reasonable opportunity to get the resources they need to live the life they want. Economic inclusivity matters because it’s fair, produces a stable society, and can create the largest slices of pie for the most people. As a side benefit, it produces more growth.

Capitalism is a powerful engine of economic growth because it rewards people for investing in assets that generate value over time, which is an effective incentive system for creating and distributing technological gains. But the price of progress in capitalism is inequality.

Some inequality is ok–in fact, it’s critical, as shown by all systems that have tried to be perfectly equal–but a society that does not offer sufficient equality of opportunity for everyone to advance is not a society that will last.

The traditional way to address inequality has been by progressively taxing income. For a variety of reasons, that hasn’t worked very well. It will work much, much worse in the future. While people will still have jobs, many of those jobs won’t be ones that create a lot of economic value in the way we think of value today. As AI produces most of the world’s basic goods and services, people will be freed up to spend more time with people they care about, care for people, appreciate art and nature, or work toward social good.

We should therefore focus on taxing capital rather than labor, and we should use these taxes as an opportunity to directly distribute ownership and wealth to citizens. In other words, the best way to improve capitalism is to enable everyone to benefit from it directly as an equity owner. This is not a new idea, but it will be newly feasible as AI grows more powerful, because there will be dramatically more wealth to go around. The two dominant sources of wealth will be 1) companies, particularly ones that make use of AI, and 2) land, which has a fixed supply.

There are many ways to implement these two taxes, and many thoughts about what to do with them. Over a long period of time, perhaps most other taxes could be eliminated. What follows is an idea in the spirit of a conversation starter.

We could do something called the American Equity Fund. The American Equity Fund would be capitalized by taxing companies above a certain valuation 2.5% of their market value each year, payable in shares transferred to the fund, and by taxing 2.5% of the value of all privately-held land, payable in dollars.
 

bnew

All citizens over 18 would get an annual distribution, in dollars and company shares, into their accounts. People would be entrusted to use the money however they needed or wanted—for better education, healthcare, housing, starting a company, whatever. Rising costs in government-funded industries would face real pressure as more people chose their own services in a competitive marketplace.

As long as the country keeps doing better, every citizen would get more money from the Fund every year (on average; there will still be economic cycles). Every citizen would therefore increasingly partake of the freedoms, powers, autonomies, and opportunities that come with economic self-determination. Poverty would be greatly reduced and many more people would have a shot at the life they want.

A tax payable in company shares will align incentives between companies, investors, and citizens, whereas a tax on profits does not–incentives are superpowers, and this is a critical difference. Corporate profits can be disguised or deferred or offshored, and are often disconnected from share price. But everyone who owns a share in Amazon wants the share price to rise. As people’s individual assets rise in tandem with the country’s, they have a literal stake in seeing their country do well.

Henry George, an American political economist, proposed the idea of a land-value tax in the late 1800s. The concept is widely supported by economists. The value of land appreciates because of the work society does around it: the network effects of the companies operating around a piece of land, the public transportation that makes it accessible, and the nearby restaurants, coffeeshops, and access to nature that makes it desirable. Because the landowner didn’t do all that work, it’s fair for that value to be shared with the larger society that did.

If everyone owns a slice of American value creation, everyone will want America to do better: collective equity in innovation and in the success of the country will align our incentives. The new social contract will be a floor for everyone in exchange for a ceiling for no one, and a shared belief that technology can and must deliver a virtuous circle of societal wealth. (We will continue to need strong leadership from our government to make sure that the desire for stock prices to go up remains balanced with protecting the environment, human rights, etc.)

In a world where everyone benefits from capitalism as an owner, the collective focus will be on making the world “more good” instead of “less bad.” These approaches are more different than they seem, and society does much better when it focuses on the former. Simply put, more good means optimizing for making the pie as large as possible, and less bad means dividing the pie up as fairly as possible. Both can increase people’s standard of living once, but continuous growth only happens when the pie grows.

Part 4

Implementation and Troubleshooting​


The amount of wealth available to capitalize the American Equity Fund would be significant. There is about $50 trillion worth of value, as measured by market capitalization, in US companies alone. Assume that, as it has on average over the past century, this will at least double over the next decade.

There is also about $30 trillion worth of privately-held land in the US (not counting improvements on top of the land). Assume that this value will roughly double, too, over the next decade–this is somewhat faster than the historical rate, but as the world really starts to understand the shifts AI will cause, the value of land, as one of the few truly finite assets, should increase at a faster rate.

Of course, if we increase the tax burden on holding land, its value will diminish relative to other investment assets, which is a good thing for society because it makes a fundamental resource more accessible and encourages investment instead of speculation. The value of companies will diminish in the short-term, too, though they will continue to perform quite well over time.

It’s a reasonable assumption that such a tax causes a drop in value of land and corporate assets of 15% (which will only take a few years to recover!).

Under the above set of assumptions (current values, future growth, and the reduction in value from the new tax), a decade from now each of the 250 million adults in America would get about $13,500 every year. That dividend could be much higher if AI accelerates growth, but even if it’s not, $13,500 will have much greater purchasing power than it does now because technology will have greatly reduced the cost of goods and services. And that effective purchasing power will go up dramatically every year.
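The ~$13,500 figure can be checked with a short back-of-the-envelope calculation using only the assumptions stated above (today's values, a doubling over the decade, a roughly 15% reduction from the new tax, the 2.5% annual levy, and 250 million adults):

```python
# Back-of-the-envelope check of the essay's "~$13,500 per adult per year" figure.
companies_today = 50e12   # ~$50T of US company market capitalization today
land_today      = 30e12   # ~$30T of privately held US land today
growth          = 2.0     # both assumed to roughly double over the next decade
tax_haircut     = 0.15    # ~15% drop in asset values from introducing the tax
levy            = 0.025   # 2.5% annual tax on each asset class
adults          = 250e6   # ~250 million American adults

taxable_base = (companies_today + land_today) * growth * (1 - tax_haircut)
annual_fund  = taxable_base * levy
print(f"${annual_fund / adults:,.0f} per adult per year")  # ≈ $13,600, matching the essay's estimate
```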

It would be easiest for companies to pay the tax each year by issuing new shares representing 2.5% of their value. There would obviously be an incentive for companies to escape the American Equity Fund tax by off-shoring themselves, but a simple test involving a percentage of revenue derived from America could address this concern. A larger problem with this idea is the incentive for companies to return value to shareholders instead of reinvesting it in growth.

If we tax only public companies, there would also be an incentive for companies to stay private. For private companies that have annual revenue in excess of $1 billion, we could let their tax in equity accrue for a certain (limited) number of years until they go public. If they remain private for a long time, we could let them settle the tax in cash.

We’d need to design the system to prevent people from consistently voting themselves more money. A constitutional amendment delineating the allowable ranges of the tax would be a strong safeguard. It is important that the tax not be so large that it stifles growth–for example, the tax on companies must be much smaller than their average growth rate.

We’d also need a robust system for quantifying the actual value of land. One way would be with a corps of powerful federal assessors. Another would be to let local governments do the assessing, as they now do to determine property taxes. They would continue to receive local taxes using the same assessed value. However, if a certain percentage of sales in a jurisdiction in any given year falls too far above or below the local government’s estimate of the property’s values, then all the other properties in their jurisdiction would be reassessed up or down.

The theoretically optimal system would be to tax the value of the land only, and not the improvements built on top of it. In practice, this value may turn out to be too difficult to assess, so we may need to tax the value of the land and the improvements on it (at a lower rate, as the combined value would be higher).

Finally, we couldn’t let people borrow against, sell, or otherwise pledge their future Fund distributions, or we won’t really solve the problem of fairly distributing wealth over time. The government can simply make such transactions unenforceable.

Part 5

Shifting to the New System​


A great future isn’t complicated: we need technology to create more wealth, and policy to fairly distribute it. Everything necessary will be cheap, and everyone will have enough money to be able to afford it. As this system will be enormously popular, policymakers who embrace it early will be rewarded: they will themselves become enormously popular.

In the Great Depression, Franklin Roosevelt was able to enact a huge social safety net that no one would have thought possible five years earlier. We are in a similar moment now. So a movement that is both pro-business and pro-people will unite a remarkably broad constituency.

A politically feasible way to launch the American Equity Fund, and one that would reduce the transitional shock, would be with legislation that transitions us gradually to the 2.5% rates. The full 2.5% rate would only take hold once GDP increases by 50% from the time the law is passed. Starting with small distributions soon will be both motivating and helpful in getting people comfortable with a new future. Achieving 50% GDP growth sounds like it would take a long time (it took 13 years for the economy to grow 50% to its 2019 level). But once AI starts to arrive, growth will be extremely rapid. Down the line, we will probably be able to reduce a lot of other taxes as we tax these two fundamental asset classes.
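The essay doesn't spell out how the gradual transition to the 2.5% rates would be scheduled; one simple reading is a rate that scales with cumulative GDP growth since passage and caps at 2.5% once the 50% target is hit. A minimal sketch under that assumption (the linear ramp is an illustration, not something the essay prescribes):

```python
# Hypothetical phase-in: the essay only says the full 2.5% rate arrives once GDP has
# grown 50% from passage; the linear ramp below is an illustrative assumption.
def equity_fund_rate(gdp_growth_since_passage, full_rate=0.025, growth_target=0.50):
    """Tax rate given cumulative GDP growth since the law passed."""
    progress = min(max(gdp_growth_since_passage / growth_target, 0.0), 1.0)
    return full_rate * progress

for g in (0.0, 0.10, 0.25, 0.50, 0.80):
    print(f"GDP +{g:.0%}: rate = {equity_fund_rate(g):.2%}")
```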

The changes coming are unstoppable. If we embrace them and plan for them, we can use them to create a much fairer, happier, and more prosperous society. The future can be almost unimaginably great.

Thanks to Steven Adler, Daniela Amodei, Adam Baybutt, Chris Beiser, Jack Clark, Ryan Cohen, Tyler Cowen, Matt Danzeisen, Steve Dowling, Tad Friend, Lachy Groom, Chris Hallacy, Reid Hoffman, Ingmar Kanitscheider, Oleg Klimov, Matt Knight, Aris Konstantinidis, Andrew Kortina, Matt Krisiloff, Scott Krisiloff, John Luttig, Erik Madsen, Preston McAfee, Luke Miles, Arvind Neelakantan, David Oates, Cullen O’Keefe, Alethea Power, Raul Puri, Ilya Sutskever, Luke Walsh, Caleb Watney, and Wojciech Zaremba for reviewing drafts of this, and to Gregory Koberger for designing it.
 

bnew





1/11
@burkov
- Theranos: In a few months, our machine that does 3 tests will be able to do 250 tests.
- Tesla: Next year, our cars will drive 100% without a driver.
- Tesla: Our humanoid robots will soon live with you.
- OpenAI: We will achieve AGI in several thousand days.
- Anthropic: Our LLM that uses computers isn’t perfect yet but will be in a few months.

Why would all of them lie, and why would influencers amplify their lies?

It’s symbiosis. Tech companies lie because lies bring in billions of dollars—billions with a 'b'. Who wouldn’t like that? Influencers do it for likes. If you repeat these lies, you get five times more likes compared to calling out the bullshyt. So, influencers are like the remora fish that live in a symbiotic relationship with sharks, cleaning their teeth.





2/11
@michael_kove
Theranos lied with absolutely nothing behind their claim.

At least other companies shipped something tangible even if not fully functional



3/11
@burkov
They learned from Theranos' mistake.



4/11
@Hochmeister
Setting ambitious goals, even if you know they are unlikely to be met, is not a lie. Saying you achieved the thing when you have not is the lie, which is what Theranos did.



5/11
@burkov
Lend me $100. My ambitious goal is to pay you $110 back. If I only pay you $5, you should understand, it wasn't a lie. Just my ambitions were too ambitious.



6/11
@FeatureCrewPod
I mean it's not fraud, it's an early demo

[Quoted tweet]
New #AI Agent from @AnthropicAI can now...
📧 Delete emails
🗂️ Manage files
🎨 Try to draw
Watch the full video:


https://video.twimg.com/amplify_video/1848941581199400960/vid/avc1/1920x1080/NGOul_PPaOzxRNk9.mp4

7/11
@burkov
They said that in several months it will become perfect. It's a lie.



8/11
@ICannot_Enough
Are you proud of influencing #TSLA investors to sell in May of 2020 (at a split-adjusted ~$55 per share), talking them out of making a 4x or better profit?





9/11
@62fidla
I like that: "influencers are like the remora fish that live in a symbiotic relationship with sharks, cleaning their teeth."



10/11
@sudobashman
I think being optimistic is fine, but it has to be grounded in science, not false optimism. Quite often you get a Eureka moment, but then lots of hard slog and dev to get the thing into a shape that works/is functional. Lying is pretending you've had the Eureka moment when you haven't. In that scenario, saying you have/it'll be fine is the same as lying.



11/11
@Terje4Liberty
This comparison is utterly ridiculous. Sure, Theranos made big promises regarding its future capabilities just like others do. However, it also outright lied about its current technology. The core product, a blood-testing device called the "Edison," was described as being able to run a wide array of medical tests using just a few drops of blood. However, the technology never worked. And they engaged in deceit to make people think it did.

Theranos CEO Elizabeth Holmes was charged with wire fraud and conspiracy, found guilty on four counts and sentenced to 11 years and 3 months in prison.




 

bnew


Photo illustration by Danielle Del Plato for Bloomberg Businessweek; Background illustration: Chuck Anderson/Krea, Photo: Bloomberg


Sam Altman on ChatGPT’s First Two Years, Elon Musk and AI Under Trump​


An interview with the OpenAI co-founder.

By Josh Tyrangiel

January 5, 2025 at 5:00 PM EST

On Nov. 30, 2022, traffic to OpenAI’s website peaked at a number a little north of zero. It was a startup so small and sleepy that the owners didn’t bother tracking their web traffic. It was a quiet day, the last the company would ever know. Within two months, OpenAI was being pounded by more than 100 million visitors trying, and freaking out about, ChatGPT. Nothing has been the same for anyone since, particularly Sam Altman. In his most wide-ranging interview as chief executive officer, Altman explains his infamous four-day firing, how he actually runs OpenAI, his plans for the Trump-Musk presidency and his relentless pursuit of artificial general intelligence—the still-theoretical next phase of AI, in which machines will be capable of performing any intellectual task a human can do. Edited for clarity and length.

Featured in Bloomberg Businessweek, February 2025.

Your team suggested this would be a good moment to review the past two years, reflect on some events and decisions, to clarify a few things. But before we do that, can you tell the story of OpenAI’s founding dinner again? Because it seems like the historic value of that event increases by the day.

Everyone wants a neat story where there’s one moment when a thing happened. Conservatively, I would say there were 20 founding dinners that year [2015], and then one ends up being entered into the canon, and everyone talks about that. The most important one to me personally was Ilya 1 and I at the Counter in Mountain View [California]. Just the two of us.

1 Ilya Sutskever is an OpenAI co-founder and one of the leading researchers in the field of artificial intelligence. As a board member he participated in Altman’s November 2023 firing, only to express public regret over his decision a few days later. He departed OpenAI in May 2024.

And to rewind even back from that, I was always really interested in AI. I had studied it as an undergrad. I got distracted for a while, and then 2012 comes along. Ilya and others do AlexNet. 2 I keep watching the progress, and I’m like, “Man, deep learning seems real. Also, it seems like it scales. That’s a big, big deal. Someone should do something.”

2 AlexNet, created by Alex Krizhevsky, Sutskever and Geoffrey Hinton, used a deep convolutional neural network (CNN)—a powerful new type of computer program—to recognize images far more accurately than ever, kick-starting major progress in AI.

So I started meeting a bunch of people, asking who would be good to do this with. It’s impossible to overstate how nonmainstream AGI was in 2014. People were afraid to talk to me, because I was saying I wanted to start an AGI effort. It was, like, cancelable. It could ruin your career. But a lot of people said there’s one person you really gotta talk to, and that was Ilya. So I stalked Ilya at a conference, got him in the hallway, and we talked. I was like, “This is a smart guy.” I kind of told him what I was thinking, and we agreed we’d meet up for a dinner. At our first dinner, he articulated—not in the same words he’d use now—but basically articulated our strategy for how to build AGI.

What from the spirit of that dinner remains in the company today?

Kind of all of it. There’s additional things on top of it, but this idea that we believed in deep learning, we believed in a particular technical approach to get there and a way to do research and engineering together—it’s incredible to me how well that’s worked. Usually when you have these ideas, they don’t quite work, and there were clearly some things about our original conception that didn’t work at all. Structure. 3 All of that. But [believing] AGI was possible, that this was the approach to bet on, and if it were possible it would be a big deal to society? That’s been remarkably true.

3 OpenAI was founded in 2015 as a nonprofit with the mission to ensure that AGI benefits all of humanity. This would become, er, problematic. We’ll get to it.

One of the strengths of that original OpenAI group was recruiting. Somehow you managed to corner the market on a ton of the top AI research talent, often with much less money to offer than your competitors. What was the pitch?

The pitch was just come build AGI. And the reason it worked—I cannot overstate how heretical it was at the time to say we’re gonna build AGI. So you filter out 99% of the world, and you only get the really talented, original thinkers. And that’s really powerful. If you’re doing the same thing everybody else is doing, if you’re building, like, the 10,000th photo-sharing app? Really hard to recruit talent. Convince me no one else is doing it, and appeal to a small, really talented set? You can get them all. And they all wanna work together. So we had what at the time sounded like an audacious or maybe outlandish pitch, and it pushed away all of the senior experts in the field, and we got the ragtag, young, talented people who are good to start with.

How quickly did you guys settle into roles?

Most people were working on it full time. I had a job, 4 so at the beginning I was doing very little, and then over time I fell more and more in love with it. And then, by 2018, I had drunk the Kool-Aid. But it was like a Band of Brothers approach for a while. Ilya and Greg 5 were kind of running it, but everybody was doing their thing.

4 In 2014, Altman became the CEO of Y Combinator, the startup accelerator that helped launch Airbnb, Dropbox and Stripe, among others.

5 Greg Brockman is a co-founder of OpenAI and its current president.

It seems like you’ve got a romantic view of those first couple of years.

Well, those are the most fun times of OpenAI history for sure. I mean, it’s fun now, too, but to have been in the room for what I think will turn out to be one of the greatest periods of scientific discovery, relative to the impact it has on the world, of all time? That’s a once-in-a-lifetime experience. If you’re very lucky. If you’re extremely lucky.

In 2019 you took over as CEO. How did that come about?

I was trying to do OpenAI and [Y Combinator] at the same time, which was really hard. I just got transfixed by this idea that we were actually going to build AGI. Funnily enough, I remember thinking to myself back then that we would do it in 2025, but it was a totally random number based off of 10 years from when we started. People used to joke in those days that the only thing I would do was walk into a meeting and say, “Scale it up!” Which is not true, but that was kind of the thrust of that time period.
 

bnew

The official release date of ChatGPT is Nov. 30, 2022. Does that feel like a million years ago or a week ago?

[Laughs] I turn 40 next year. On my 30th birthday, I wrote this blog post, and the title of it was “The days are long but the decades are short.” Somebody this morning emailed me and said, “This is my favorite blog post, I read it every year. When you turn 40, will you write an update?” I’m laughing because I’m definitely not gonna write an update. I have no time. But if I did, the title would be “The days are long, and the decades are also f---ing very long.” So it has felt like a very long time.

OpenAI senior executives at the company’s headquarters in San Francisco on March 13, 2023, from left: Sam Altman, chief executive officer; Mira Murati, chief technology officer; Greg Brockman, president; and Ilya Sutskever, chief scientist. Photographer: Jim Wilson/The New York Times

As that first cascade of users started showing up, and it was clear this was going to be a colossal thing, did you have a “holy s---” moment?

So, OK, a couple of things. No. 1, I thought it was gonna do pretty well! The rest of the company was like, “Why are you making us launch this? It’s a bad decision. It’s not ready.” I don't make a lot of “we’re gonna do this thing” decisions, but this was one of them.

YC has this famous graph that PG 6 used to draw, where you have the squiggles of potential, and then the wearing off of novelty, and then this long dip, and then the squiggles of real product market fit. And then eventually it takes off. It’s a piece of YC lore. In the first few days, as [ChatGPT] was doing its thing, it’d be more usage during the day and less at night. The team was like, “Ha ha ha, it’s falling off.” But I had learned one thing during YC, which is, if every time there’s a new trough it’s above the previous peak, there’s something very different going on. It looked like that in the first five days, and I was like, “I think we have something on our hands that we do not appreciate here.”

6 Paul Graham, the co-founder of Y Combinator and a philosopher king-type on the subject of startups and technology.

And that started off a mad scramble to get a lot of compute 7—which we did not have at the time—because we had launched this with no business model or thoughts for a business model. I remember a meeting that December where I sort of said, “I’ll consider any idea for how we’re going to pay for this, but we can’t go on.” And there were some truly horrible ideas—and no good ones. So we just said, “Fine, we’re just gonna try a subscription, and we’ll figure it out later.” That just stuck. We launched with GPT-3.5, and we knew we had GPT-4 [coming], so we knew that it was going to be better. And as I started talking to people who were using it about the things they were using it for, I was like, “I know we can make these better, too.” We kept improving it pretty rapidly, and that led to this global media consciousness [moment], whatever you want to call it.

7 In AI, “compute” is commonly used as a noun, referring to the processing power and resources—such as central processing units (CPUs), graphics processing units (GPUs) and tensor processing units (TPUs)—required to train, run or develop machine-learning models. Want to know how Nvidia Corp.’s Jensen Huang got rich? Compute.

Are you a person who enjoys success? Were you able to take it in, or were you already worried about the next phase of scaling?

A very strange thing about me, or my career: The normal arc is you run a big, successful company, and then in your 50s or 60s you get tired of working that hard, and you become a [venture capitalist]. It’s very unusual to have been a VC first and have had a pretty long VC career and then run a company. And there are all these ways in which I think it’s bad, but one way in which it has been very good for me is you have the weird benefit of knowing what’s gonna happen to you, because you’ve watched and advised a bunch of other people through it. And I knew I was both overwhelmed with gratitude and, like, “F---, I’m gonna get strapped to a rocket ship, and my life is gonna be totally different and not that fun.” I had a lot of gallows humor about it. My husband 8 tells funny stories from that period of how I would come home, and he’d be like, “This is so great!” And I was like, “This is just really bad. It’s bad for you, too. You just don’t realize it yet, but it’s really bad.” [Laughs]

8 Altman married longtime partner Oliver Mulherin, an Australian software engineer, in early 2024. They’re expecting a child in March 2025.

You’ve been Silicon Valley famous for a long time, but one consequence of GPT’s arrival is that you became world famous with the kind of speed that’s usually associated with, like, Sabrina Carpenter or Timothée Chalamet. Did that complicate your ability to manage a workforce?

It complicated my ability to live my life. But in the company, you can be a well-known CEO or not, people are just like, “Where’s my f---ing GPUs?”

I feel that distance in all the rest of my life, and it’s a really strange thing. I feel that when I’m with old friends, new friends—anyone but the people very closest to me. I guess I do feel it at work if I’m with people I don’t normally interact with. If I have to go to one meeting with a group that I almost never meet with, I can kind of tell it’s there. But I spend most of my time with the researchers, and man, I promise you, come with me to the research meeting right after this, and you will see nothing but disrespect. Which is great.

Do you remember the first moment you had an inkling that a for-profit company with billions in outside investment reporting up to a nonprofit board might be a problem?

There must have been a lot of moments. But that year was such an insane blur, from November of 2022 to November of 2023, I barely remember it. It literally felt like we built out an entire company from almost scratch in 12 months, and we did it in crazy public. One of my learnings, looking back, is everybody says they’re not going to screw up the relative ranking of important versus urgent, 9 and everybody gets tricked by urgent. So I would say the first moment when I was coldly staring at reality in the face—that this was not going to work—was about 12:05 p.m. on whatever that Friday afternoon was. 10

9 Dwight Eisenhower apparently said “What is important is seldom urgent, and what is urgent is seldom important” so often that it gave birth to the Eisenhower Matrix, a time management tool that splits tasks into four quadrants:

  • Urgent and important: Tasks to be done immediately.
  • Important but not urgent: Tasks to be scheduled for later.
  • Urgent but not important: Tasks to be delegated.
  • Not urgent and not important: Tasks to be eliminated.

Understanding the wisdom of the Eisenhower Matrix—then ignoring it when things get hectic—is a startup tradition.

10 On Nov. 17, 2023, at approximately noon California time, OpenAI’s board informed Altman of his immediate removal as CEO. He was notified of his firing roughly 5 to 10 minutes before the public announcement, during a Google Meet session, while he was watching the Las Vegas Grand Prix.

When the news emerged that the board had fired you as CEO, it was shocking. But you seem like a person with a strong EQ. Did you detect any signs of tension before that? And did you know that you were the tension?

I don’t think I’m a person with a strong EQ at all, but even for me this was over the line of where I could detect that there was tension. You know, we kind of had this ongoing thing about safety versus capability and the role of a board and how to balance all this stuff. So I knew there was tension, and I’m not a high-EQ person, so there’s probably even more.
 

bnew

A lot of annoying things happened that first weekend. My memory of the time—and I may get the details wrong—so they fired me at noon on a Friday. A bunch of other people quit Friday night. By late Friday night I was like, “We’re just going to go start a new AGI effort.” Later Friday night, some of the executive team was like, “Um, we think we might get this undone. Chill out, just wait.”

Saturday morning, two of the board members called and wanted to talk about me coming back. I was initially just supermad and said no. And then I was like, “OK, fine.” I really care about [OpenAI]. But I was like, “Only if the whole board quits.” I wish I had taken a different tack than that, but at the time it felt like a just thing to ask for. Then we really disagreed over the board for a while. We were trying to negotiate a new board. They had some ideas I thought were ridiculous. I had some ideas they thought were ridiculous. But I thought we were [generally] agreeing. And then—when I got the most mad in the whole period—it went on all day Sunday. Saturday into Sunday they kept saying, “It’s almost done. We’re just waiting for legal advice, but board consents are being drafted.” I kept saying, “I’m keeping the company together. You have all the power. Are you sure you’re telling me the truth here?” “Yeah, you’re coming back. You’re coming back.”

And then Sunday night they shock-announce that Emmett Shear was the new CEO. And I was like, “All right, now I’m f---ing really done,” because that was real deception. Monday morning rolls around, all these people threaten to quit, and then they’re like, “OK, we need to reverse course here.”

OpenAI’s San Francisco offices on March 10, 2023. Photographer: Jim Wilson/The New York Times

The board says there was an internal investigation that concluded you weren’t “consistently candid” in your communications with them. That’s a statement that’s specific—they think you were lying or withholding some information—but also vague, because it doesn’t say what specifically you weren’t being candid about. Do you now know what they were referring to?

I’ve heard different versions. There was this whole thing of, like, “Sam didn’t even tell the board that he was gonna launch ChatGPT.” And I have a different memory and interpretation of that. But what is true is I definitely was not like, “We’re gonna launch this thing that is gonna be a huge deal.” And I think there’s been an unfair characterization of a number of things like that. The one thing I’m more aware of is, I had had issues with various board members on what I viewed as conflicts or otherwise problematic behavior, and they were not happy with the way that I tried to get them off the board. Lesson learned on that.

Can I offer a theory?

Sure.

You recognized at some point that the structure of [OpenAI] was going to smother the company, that it might kill it in the crib. Because a mission-driven nonprofit could never compete for the computing power or make the rapid pivots necessary for OpenAI to thrive. The board was made up of originalists who put purity over survival. So you started making decisions to set up OpenAI to compete, which required being a little sneaky, which the board—

I don’t think I was doing things that were sneaky. I think the most I would say is, in the spirit of moving really fast, the board did not understand the full picture. There was something that came up about “Sam owning the startup fund, and he didn’t tell us about this.” And what happened there is because we have this complicated structure: OpenAI itself could not own it, nor could someone who owned equity in OpenAI. And I happened to be the person who didn’t own equity in OpenAI. So I was temporarily the owner or GP 11 of it until we got a structure set up to transfer it. I have a different opinion about whether the board should have known about that or not. But should there be extra clarity to communicate things like that, where there’s even the appearance of doing stuff? Yeah, I’ll take that feedback. But that’s not sneaky. It’s a crazy year, right? It’s a company that’s moving a million miles an hour in a lot of different ways. I would encourage you to talk to any current board member 12 and ask if they feel like I’ve ever done anything sneaky, because I make it a point not to do that.

11 General partner. According to a Securities and Exchange Commission filing on March 29, 2024, the new general partner of OpenAI’s startup fund is Ian Hathaway. The fund has roughly $175 million available to invest in AI-focused startups.

12 OpenAI’s current board is made up of Altman and:

  • Bret Taylor: (chairman): Former co-CEO of Salesforce Inc. and co-founder of FriendFeed.
  • Adam D’Angelo: Co-founder and CEO of Quora Inc.
  • Lawrence Summers: A secretary of the Treasury under Bill Clinton and former president of Harvard University.
  • Sue Desmond-Hellmann: Former CEO of the Bill & Melinda Gates Foundation.
  • Nicole Seligman: Former executive vice president and general counsel at Sony Corp.
  • Fidji Simo: CEO and chair of Instacart.
  • Paul Nakasone: Former director of the National Security Agency (2018-24).
  • Zico Kolter: Computer scientist specializing in machine learning and AI safety.

I think the previous board was genuine in their level of conviction and concern about AGI going wrong. There’s a thing that one of those board members said to the team here during that weekend that people kind of make fun of her for, 13 which is it could be consistent with the mission of the nonprofit board to destroy the company. And I view that—that’s what courage of convictions actually looks like. I think she meant that genuinely. And although I totally disagree with all specific conclusions and actions, I respect conviction like that, and I think the old board was acting out of misplaced but genuine conviction in what they believed was right. And maybe also that, like, AGI was right around the corner and we weren’t being responsible with it. So I can hold respect for that while totally disagreeing with the details of everything else.

13 Former OpenAI board member Helen Toner is reported to have said there are circumstances in which destroying the company “would actually be consistent with the mission” of the board. Altman had previously confronted Toner—the director of strategy at Georgetown University's Center for Security and Emerging Technology—about a paper she wrote criticizing OpenAI for releasing ChatGPT too quickly. She also complimented one of its competitors, Anthropic, for not “stoking the flames of AI hype” by waiting to release its chatbot.

You obviously won, because you’re sitting here. But just practicing a bit of empathy, were you not traumatized by all of this?

I totally was. The hardest part of it was not going through it, because you can do a lot on a four-day adrenaline rush. And it was very heartwarming to see the company and kind of my broader community support me. But then very quickly it was over, and I had a complete mess on my hands. And it got worse every day. It was like another government investigation, another old board member leaking fake news to the press. And all those people that I feel like really f---ed me and f---ed the company were gone, and now I had to clean up their mess. It was about this time of year [December], actually, so it gets dark at like 4:45 p.m., and it’s cold and rainy, and I would be walking through my house alone at night just, like, f---ing depressed and tired. And it felt so unfair. It was just a crazy thing to have to go through and then have no time to recover, because the house was on fire.

When you got back to the company, were you self-conscious about big decisions or announcements because you worried about how your character may be perceived? Actually, let me put that more simply. Did you feel like some people may think you were bad, and you needed to convince them that you’re good?

It was worse than that. Once everything was cleared up, it was all fine, but in the first few days no one knew anything. And so I’d be walking down the hall, and [people] would avert their eyes. It was like I had a terminal cancer diagnosis. There was sympathy, empathy, but [no one] was sure what to say. That was really tough. But I was like, “We got a complicated job to do. I’m gonna keep doing this.”

Can you describe how you actually run the company? How do you spend your days? Like, do you talk to individual engineers? Do you get walking-around time?
 

bnew

Let me just call up my calendar. So we do a three-hour executive team meeting on Mondays, and then, OK, yesterday and today, six one-on-ones with engineers. I’m going to the research meeting right after this. Tomorrow is a day where there’s a couple of big partnership meetings and a lot of compute meetings. There’s five meetings on building up compute. I have three product brainstorm meetings tomorrow, and I’ve got a big dinner with a major hardware partner after. That’s kind of what it looks like. A few things that are weekly rhythms, and then it’s mostly whatever comes up.

How much time do you spend communicating, internally and externally?

Way more internal. I’m not a big inspirational email writer, but lots of one-on-one, small-group meetings and then a lot of stuff over Slack.

Oh, man. God bless you. You get into the muck?

I’m a big Slack user. You can get a lot of data in the muck. I mean, there’s nothing that’s as good as being in a meeting with a small research team for depth. But for breadth, man, you can get a lot that way.

You’ve previously discussed stepping in with a very strong point of view about how ChatGPT should look and what the user experience should be. Are there places where you feel your competency requires you to be more of a player than a coach?

At this scale? Not really. I had dinner with the Sora 14 team last night, and I had pages of written, fairly detailed suggestions of things. But that’s unusual. Or the meeting after this, I have a very specific pitch to the research team of what I think they should do over the next three months and quite a lot of granular detail, but that’s also unusual.

14 Sora is OpenAI’s advanced visual AI generator, released to the public on Dec. 9, 2024.

We’ve talked a little about how scientific research can sometimes be in conflict with a corporate structure. You’ve put research in a different building from the rest of the company, a couple of miles away. Is there some symbolic intent behind that?

Uh, no, that’s just logistical, space planning. We will get to a big campus all at once at some point. Research will still have its own area. Protecting the core of research is really critical to what we do.

Protecting it from what?

The normal way a Silicon Valley company goes is you start up as a product company. You get really good at that. You build up to this massive scale. And as you build up this massive scale, revenue growth naturally slows down as a percentage, usually. And at some point the CEO gets the idea that he or she is going to start a research lab to come up with a bunch of new ideas and drive further growth. And that has worked a couple of times in history. Famously for Bell Labs and Xerox PARC. Usually it doesn’t. Usually you get a very good product company and a very bad research lab. We’re very fortunate that the little product company we bolted on is the fastest-growing tech company maybe ever—certainly in a long time. But that could easily subsume the magic of research, and I do not intend to let that happen.

We are here to build AGI and superintelligence and all the things that come beyond that. There are many wonderful things that are going to happen to us along the way, any of which could very reasonably distract us from the grand prize. I think it’s really important not to get distracted.

As a company, you’ve sort of stopped publicly speaking about AGI. You started talking about AI and levels, and yet individually you talk about AGI.

I think “AGI” has become a very sloppy term. If you look at our levels, our five levels, you can find people that would call each of those AGI, right? And the hope of the levels is to have some more specific grounding on where we are and kind of like how progress is going, rather than is it AGI, or is it not AGI?

What’s the threshold where you’re going to say, “OK, we’ve achieved AGI now”?

The very rough way I try to think about it is when an AI system can do what very skilled humans in important jobs can do—I’d call that AGI. There’s then a bunch of follow-on questions like, well, is it the full job or only part of it? Can it start as a computer program and decide it wants to become a doctor? Can it do what the best people in the field can do or the 98th percentile? How autonomous is it? I don’t have deep, precise answers there yet, but if you could hire an AI as a remote employee to be a great software engineer, I think a lot of people would say, “OK, that’s AGI-ish.”

Now we’re going to move the goalposts, always, which is why this is hard, but I’ll stick with that as an answer. And then when I think about superintelligence, the key thing to me is, can this system rapidly increase the rate of scientific discovery that happens on planet Earth?

You now have more than 300 million users. What are you learning from their behavior that’s changed your understanding of ChatGPT?

Talking to people about what they use ChatGPT for, and what they don’t, has been very informative in our product planning. A thing that used to come up all the time is it was clear people were trying to use ChatGPT for search a lot, and that actually wasn’t something that we had in mind when we first launched it. And it was terrible for that. But that became very clearly an important thing to build. And honestly, since we’ve launched search in ChatGPT, I almost don’t use Google anymore. And I don’t think it would have been obvious to me that ChatGPT was going to replace my use of Google before we launched it, when we just had an internal prototype.

Another thing we learned from users: how much people are relying on it for medical advice. Many people who work at OpenAI get really heartwarming emails when people are like, “I was sick for years, no doctor told me what I had. I finally put all my symptoms and test results into ChatGPT—it said I had this rare disease. I went to a doctor, and they gave me this thing, and I’m totally cured.” That’s an extreme example, but things like that happen a lot, and that has taught us that people want this and we should build more of it.

Your products have had a lot of prices, from $0 to $20 to $200—Bloomberg reported on the possibility of a $2,000 tier. How do you price technology that’s never existed before? Is it market research? A finger in the wind?

We launched ChatGPT for free, and then people started using it a lot, and we had to have some way to pay for it. I believe we tested two prices, $20 and $42. People thought $42 was a little too much. They were happy to pay $20. We picked $20. Probably it was late December of 2022 or early January. It was not a rigorous “hire someone and do a pricing study” thing.

There’s other directions that we think about. A lot of customers are telling us they want usage-based pricing. You know, “Some months I might need to spend $1,000 on compute, some months I want to spend very little.” I am old enough that I remember when we had dial-up internet, and AOL gave you 10 hours a month or five hours a month or whatever your package was. And I hated that. I hated being on the clock, so I don’t want that kind of a vibe. But there’s other ones I can imagine that still make sense, that are somehow usage-based.

What does your safety committee look like now? How has it changed in the past year or 18 months?

One thing that’s a little confusing—also to us internally—is we have many different safety things. So we have an internal-only safety advisory group [SAG] that does technical studies of systems and presents a view. We have an SSC [safety and security committee], which is part of the board. We have the DSB 15 with Microsoft. And so you have an internal thing, a board thing and a Microsoft joint board. We are trying to figure out how to streamline that.

15 The Deployment Safety Board, with members from OpenAI and Microsoft, approves any model deployment over a certain capability threshold.

And are you on all three?

That’s a good question. So the SAG sends their reports to me, but I don’t think I’m actually formally on it. But the procedure is: They make one, they send it to me. I sort of say, “OK, I agree with this” or not, send it to the board. The SSC, I am not on. The DSB, I am on. Now that we have a better picture of what our safety process looks like, I expect to find a way to streamline that.

Has your sense of what the dangers actually might be evolved?
 