Why I’m Leaving OpenAI and What I’m Doing Next
Miles Brundage
Oct 23, 2024
I’ve been excited about OpenAI as an organization since it was first announced in December 2015. After the announcement, I stayed up all night writing down thoughts on the significance of it. Even before that, around 12 years ago, I decided to devote my life to something in the rough vicinity of OpenAI’s mission (ensuring AGI benefits all of humanity).
So leaving what is essentially my dream job – advising OpenAI executives and the OpenAI board on AGI readiness – was not an easy decision, and I’m sure some folks will have questions about it. I answer several such questions below in the hope that it leads to more accurate discussion of my departure and various related topics.
The TL;DR is:
- I want to spend more time working on issues that cut across the whole AI industry, to have more freedom to publish, and to be more independent;
- I will be starting a nonprofit and/or joining an existing one and will focus on AI policy research and advocacy, since I think AI is unlikely to be as safe and beneficial as possible without a concerted effort to make it so;
- Some areas of research interest for me include assessment/forecasting of AI progress, regulation of frontier AI safety and security, economic impacts of AI, acceleration of beneficial AI applications, compute governance, and overall “AI grand strategy”;
- I think OpenAI remains an exciting place for many kinds of work to happen, and I’m excited to see the team continue to ramp up investment in safety culture and processes;
- I’m interested in talking to folks who might want to advise or collaborate on my next steps.
Who are you/what did you do at OpenAI?
Until the end of day this Friday, I’m a researcher and manager at OpenAI. I have been here for over six years, which is pretty long by OpenAI standards (it has grown a lot over those six years!). I started as a research scientist on the Policy team, then became Head of Policy Research, and am currently Senior Advisor for AGI Readiness. Before that I was in academia, where I got my PhD in Human and Social Dimensions of Science and Technology from Arizona State University and then did a post-doc at Oxford, and I also worked for a bit in government at the US Department of Energy.
The teams I’ve led (Policy Research and then AGI Readiness) have, in my view, done a lot of really important work shaping OpenAI’s deployment practices, e.g., starting our external red teaming program and driving the first several OpenAI system cards, and publishing a lot of influential work on topics such as the societal implications of language models and AI agents, frontier AI regulation, compute governance, etc.
I’m incredibly grateful for the time I’ve been at OpenAI, and deeply appreciate my managers over the years for trusting me with increasing responsibilities, the dozens of people I’ve had the honor of managing and from whom I learned so much, and the countless brilliant colleagues I’ve worked with on a range of teams who made working at OpenAI such a fascinating and rewarding experience.
Why are you leaving?
I decided that I want to impact and influence AI's development from outside the industry rather than inside. There are several considerations pointing to that conclusion:
- The opportunity costs have become very high: I don’t have time to work on various research topics that I think are important, and in some cases I think they’d be more impactful if I worked on them outside of industry. OpenAI is now so high-profile, and its outputs are reviewed from so many different angles, that it’s hard for me to publish on all the topics that are important to me. To be clear, while I wouldn’t say I’ve always agreed with OpenAI’s stance on publication review, I do think it’s reasonable for there to be some publishing constraints in industry (and I have helped write several iterations of OpenAI’s policies), but for me the constraints have become too much.
- I want to be less biased: It is difficult to be impartial about an organization when you are a part of it and work closely with people there every day, and people are right to question policy ideas coming from industry given financial conflicts of interest. I have tried to be as impartial as I can in my analysis, but I’m sure there has been some bias, and certainly working at OpenAI affects how people perceive my statements as well as those from others in industry. I think it’s critical to have more industry-independent voices in the policy conversation than there are today, and I plan to be one of them.
- I’ve done much of what I set out to do at OpenAI: Since starting my latest role as Senior Advisor for AGI Readiness, I’ve begun to think more explicitly about two kinds of AGI readiness: OpenAI’s readiness to steward increasingly powerful AI capabilities, and the world’s readiness to effectively manage those capabilities (including via regulating OpenAI and other companies). On the former, I’ve already told executives and the board (the audience of my advice) a fair amount about what I think OpenAI needs to do and what the gaps are, and on the latter, I think I can be more effective externally.
It’s hard to say which of the bullets above is most important and they’re related in various ways, but each played some role in my decision.
So how are OpenAI and the world doing on AGI readiness?
In short, neither OpenAI nor any other frontier lab is ready, and the world is also not ready.
To be clear, I don’t think this is a controversial statement among OpenAI’s leadership, and notably, that’s a different question from whether the company and the world are on track to be ready at the relevant time (though I think the gaps remaining are substantial enough that I’ll be working on AI policy for the rest of my career).
Whether the company and the world are on track for AGI readiness is a complex function of how safety and security culture play out over time (for which recent additions to the board are steps in the right direction), how regulation affects organizational incentives, how various facts about AI capabilities and the difficulty of safety play out, and various other factors.
As a sidenote, I think that AGI is an overloaded phrase that implies more of a binary way of thinking than actually makes sense. One of the things my team has been working on lately is fleshing out the “levels of AI” framework referenced here. I hope that OpenAI and I will be able to publish a related paper before long. But for now I’d just note that when I say “ready for AGI,” I am using this as shorthand for something like “readiness to safely, securely, and beneficially develop, deploy, and govern increasingly capable AI systems.”
Should people work at OpenAI?
Working at OpenAI is one of the most impactful things that most people could hope to do, so in many/most cases, yes. Of course people have different skills and opportunities available to them and it’s hard to make sweeping statements, but I think every role at OpenAI, and the care with which it is done, matters, as does every contribution to OpenAI’s organizational culture. Every product deployment on the path to more capable AI matters by influencing norms, how AI is perceived and regulated, etc. I’m sad to be losing some of the opportunities for impact I have here, though I hope to make up for it by influencing the larger ecosystem to a greater extent.
I think it’s particularly impactful to work at OpenAI on safety, security, and ensuring broad benefit from the technology we’re building, or to work on capabilities research and product while using your influence to push things in a safer, more secure, and more widely beneficial direction. Anyone working at OpenAI should take seriously the fact that their actions and statements contribute to the culture of the organization, and may create positive or negative path dependencies as the organization begins to steward extremely advanced capabilities. Culture is important at any organization, but it is particularly important in the context of frontier AI since much of the decision-making is not determined by regulation, but rather up to the people at the company.
As I’ve mentioned above and as my decision suggests, I think some kinds of policy research are better done externally. I think this is likely more often the case for policy research than, e.g., safety and security work, though there do need to be some people internally at OpenAI pushing for good policy stances, and it’s also valuable for there to be independent safety and security research. For more on my general AI policy career advice, see here.
What’s happening to the AGI Readiness team?
The Economic Research team, which until recently was a sub-team of AGI Readiness led by Pamela Mishkin, will be moving under Ronnie Chatterji, OpenAI’s new Chief Economist. The remainder of the AGI Readiness team will be distributed among other teams, and I’m working closely with Josh Achiam on transfer of some projects to the Mission Alignment team he is building.
What are you doing next?
I plan to start a new nonprofit (and/or join an existing nonprofit) and will work on AI policy research and advocacy. I will probably do some mix of research and advocacy but the details and ratios are TBD, and this will be informed by conversations with potential cofounders and collaborators in the coming months. My interests are global but I may focus on countries I’m more familiar with like the US, policy openings that are likely to be particularly impactful like early EU AI Act implementation, and multilateral efforts of various kinds which I think will be increasingly important.
What do you think about AI policy?