Why I’m Leaving OpenAI and What I’m Doing Next

bnew

Why I’m Leaving OpenAI and What I’m Doing Next


Miles Brundage

Oct 23, 2024

I’ve been excited about OpenAI as an organization since it was first announced in December 2015. After the announcement, I stayed up all night writing down thoughts on the significance of it. Even before that, around 12 years ago, I decided to devote my life to something in the rough vicinity of OpenAI’s mission (ensuring AGI benefits all of humanity).

So leaving what is essentially my dream job – advising OpenAI executives and the OpenAI board on AGI readiness – was not an easy decision, and I’m sure some folks will have questions about it. I answer several such questions below in the hope that it leads to more accurate discussion of my departure and various related topics.

The TL;DR is:

  • I want to spend more time working on issues that cut across the whole AI industry, to have more freedom to publish, and to be more independent;
  • I will be starting a nonprofit and/or joining an existing one and will focus on AI policy research and advocacy, since I think AI is unlikely to be as safe and beneficial as possible without a concerted effort to make it so;
  • Some areas of research interest for me include assessment/forecasting of AI progress, regulation of frontier AI safety and security, economic impacts of AI, acceleration of beneficial AI applications, compute governance, and overall “AI grand strategy”;
  • I think OpenAI remains an exciting place for many kinds of work to happen, and I’m excited to see the team continue to ramp up investment in safety culture and processes;
  • I’m interested in talking to folks who might want to advise or collaborate on my next steps.

Who are you/what did you do at OpenAI?

Until the end of day this Friday, I’m a researcher and manager at OpenAI. I have been here for over six years, which is pretty long by OpenAI standards (it has grown a lot over those six years!). I started as a research scientist on the Policy team, then became Head of Policy Research, and am currently Senior Advisor for AGI Readiness. Before that I was in academia, where I earned my PhD in Human and Social Dimensions of Science and Technology from Arizona State University and then worked as a post-doc at Oxford, and I also worked for a bit in government at the US Department of Energy.

The teams I’ve led (Policy Research and then AGI Readiness) have, in my view, done a lot of really important work shaping OpenAI’s deployment practices, e.g., starting our external red teaming program and driving the first several OpenAI system cards, and publishing a lot of influential work on topics such as the societal implications of language models and AI agents, frontier AI regulation, compute governance, etc.

I’m incredibly grateful for the time I’ve been at OpenAI, and deeply appreciate my managers over the years for trusting me with increasing responsibilities, the dozens of people I’ve had the honor of managing and from whom I learned so much, and the countless brilliant colleagues I’ve worked with on a range of teams who made working at OpenAI such a fascinating and rewarding experience.

Why are you leaving?

I decided that I want to influence AI’s development from outside the industry rather than from inside it. Several considerations pointed to that conclusion:

  • The opportunity costs have become very high: I don’t have time to work on various research topics that I think are important, and in some cases I think they’d be more impactful if I worked on them outside of industry. OpenAI is now so high-profile, and its outputs reviewed from so many different angles, that it’s hard for me to publish on all the topics that are important to me. To be clear, while I wouldn’t say I’ve always agreed with OpenAI’s stance on publication review, I do think it’s reasonable for there to be some publishing constraints in industry (and I have helped write several iterations of OpenAI’s policies), but for me the constraints have become too much.
  • I want to be less biased: It is difficult to be impartial about an organization when you are a part of it and work closely with people there every day, and people are right to question policy ideas coming from industry given financial conflicts of interest. I have tried to be as impartial as I can in my analysis, but I’m sure there has been some bias, and certainly working at OpenAI affects how people perceive my statements, as well as those from others in industry. I think it’s critical to have more industry-independent voices in the policy conversation than there are today, and I plan to be one of them.
  • I’ve done much of what I set out to do at OpenAI: Since starting my latest role as Senior Advisor for AGI Readiness, I’ve begun to think more explicitly about two kinds of AGI readiness – OpenAI’s readiness to steward increasingly powerful AI capabilities, and the world’s readiness to effectively manage those capabilities (including via regulating OpenAI and other companies). On the former, I’ve already told executives and the board (the audience of my advice) a fair amount about what I think OpenAI needs to do and what the gaps are, and on the latter, I think I can be more effective externally.

It’s hard to say which of the bullets above is most important and they’re related in various ways, but each played some role in my decision.

So how are OpenAI and the world doing on AGI readiness?

In short, neither OpenAI nor any other frontier lab is ready, and the world is also not ready.

To be clear, I don’t think this is a controversial statement among OpenAI’s leadership, and notably, that’s a different question from whether the company and the world are on track to be ready at the relevant time (though I think the gaps remaining are substantial enough that I’ll be working on AI policy for the rest of my career).

Whether the company and the world are on track for AGI readiness is a complex function of how safety and security culture play out over time (for which recent additions to the board are steps in the right direction), how regulation affects organizational incentives, how various facts about AI capabilities and the difficulty of safety play out, and various other factors.

As a sidenote, I think that AGI is an overloaded phrase that implies more of a binary way of thinking than actually makes sense. One of the things my team has been working on lately is fleshing out the “levels of AI” framework referenced here. I hope that OpenAI and I will be able to publish a related paper before long. But for now I’d just note that when I say “ready for AGI,” I am using this as shorthand for something like “readiness to safely, securely, and beneficially develop, deploy, and govern increasingly capable AI systems.”

Should people work at OpenAI?

Working at OpenAI is one of the most impactful things that most people could hope to do, so in many/most cases, yes. Of course people have different skills and opportunities available to them and it’s hard to make sweeping statements, but I think every role at OpenAI, and the care with which it is done, matters, as does every contribution to OpenAI’s organizational culture. Every product deployment on the path to more capable AI matters by influencing norms, how AI is perceived and regulated, etc. I’m sad to be losing some of the opportunities for impact I have here, though I hope to make up for it by influencing the larger ecosystem to a greater extent.

I think it’s particularly impactful to work at OpenAI on safety, security, and ensuring broad benefit from the technology we’re building, or to work on capabilities research and product while using your influence to push things in a safer, more secure, and more widely beneficial direction. Anyone working at OpenAI should take seriously the fact that their actions and statements contribute to the culture of the organization, and may create positive or negative path dependencies as the organization begins to steward extremely advanced capabilities. Culture is important at any organization, but it is particularly important in the context of frontier AI since much of the decision-making is not determined by regulation, but rather up to the people at the company.

As I’ve mentioned above and as my decision suggests, I think some kinds of policy research are better done externally. I think this is likely more often the case for policy research than, e.g., safety and security work, though there do need to be some people internally at OpenAI pushing for good policy stances, and it’s also valuable for there to be independent safety and security research. For more on my general AI policy career advice, see here.

What’s happening to the AGI Readiness team?

The Economic Research team, which until recently was a sub-team of AGI Readiness led by Pamela Mishkin, will be moving under Ronnie Chatterji, OpenAI’s new Chief Economist. The remainder of the AGI Readiness team will be distributed among other teams, and I’m working closely with Josh Achiam on transfer of some projects to the Mission Alignment team he is building.

What are you doing next?

I plan to start a new nonprofit (and/or join an existing nonprofit) and will work on AI policy research and advocacy. I will probably do some mix of research and advocacy but the details and ratios are TBD, and this will be informed by conversations with potential cofounders and collaborators in the coming months. My interests are global but I may focus on countries I’m more familiar with like the US, policy openings that are likely to be particularly impactful like early EU AI Act implementation, and multilateral efforts of various kinds which I think will be increasingly important.

What do you think about AI policy?
 

I think the upsides of AI are already big and could be dramatically bigger, as are the downsides. As someone who has worked in this field for longer than most, I have found it very sad to see increasing polarization along the lines of whether people focus on one side of the cost/benefit ledger or the other, or have different risk priorities, etc. My view is that there is a lot to worry about and a lot to be excited about, that we don’t have to choose one thing to care about, and that we should find common ground where it exists.

I think AI and AGI benefiting all of humanity is not automatic and requires deliberate choices by decision-makers in governments, non-profits, civil society, and industry, informed by robust public discussion. Notably, this is true not just for risk mitigation but also for ensuring equitable distribution of the benefits, as was the case with, e.g., electricity and modern medicine. This is true for a few reasons, including, non-exhaustively, collective action problems, various unpriced negative externalities, and unequal starting positions in digital infrastructure access, wealth, etc. that affect who benefits and who is harmed by default, and to what degree. As with railroads, electricity, etc., corporate and government policies will be critical to ensuring safe and fair outcomes.

I think AI capabilities are improving very quickly and policymakers need to act more urgently. This is one of the areas where I am most excited to be independent, since claims to this effect are often dismissed as hype when they come from industry. By going outside of industry, I will lose access to some information about the state-of-the-art, but I think there is plenty of information in the public domain to justify a sense of urgency and I have some ideas about how to pull together the narrative more effectively than has been done so far. A few years ago, I argued to effective altruists (who are quite interested in how long it may take for certain AI capabilities to exist) that forecasting isn’t necessarily the best use of their time. Many of the policy actions that should be taken are fairly independent of the exact timeline. However, I have since changed my mind and think that most policymakers won’t act unless they perceive the situation as urgent, and insofar as that is actually the case or could be in the future, it needs to be credibly explained why that’s the case. See: COVID and the many warnings about pandemic preparedness before that.

I think quantitative evaluations of AI capabilities and extrapolations thereof, in combination with analysis of the impacts of certain policies, will be critical in truthfully and persuasively demonstrating that urgency. There’s great work happening on measuring frontier models from a safety perspective, measuring trends over time in AI, and a growing body of work assessing the labor market implications of AI, but more is definitely needed.
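
As a purely illustrative sketch of what “extrapolation” can mean here (and not a claim about any real benchmark), the snippet below fits a linear trend to made-up scores and projects it forward; the years, scores, and the choice of a linear fit are all assumptions for illustration.

```python
# A minimal sketch of capability-trend extrapolation. The benchmark scores are
# hypothetical placeholders, not real measurements; serious analyses would use
# real evaluation data, consider saturation effects, and report uncertainty.
import numpy as np

years = np.array([2020, 2021, 2022, 2023, 2024], dtype=float)
scores = np.array([22.0, 35.0, 51.0, 68.0, 80.0])  # hypothetical accuracy (%) on some benchmark

slope, intercept = np.polyfit(years, scores, deg=1)  # naive linear trend

for year in (2025, 2026):
    projected = min(slope * year + intercept, 100.0)  # cap at 100%
    print(f"{year}: projected score {projected:.1f}% (naive linear extrapolation)")
```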

I think we don’t have all the AI policy ideas we need, and many of the ideas floating around are bad or too vague to be confidently judged. This is particularly true of international competition over AI, where I find the existing proposals to be especially bad (e.g. “race against [competing country] as quickly as possible”) and vague (e.g. “CERN for AI”), although it’s encouraging to see a growing trend towards more nuanced discussion of some of these ideas. There are also many aspects of frontier AI safety and security that will require creative solutions.

On the other hand, I think we do have some of the ideas we need, and should act quickly in some of those areas. This is why I don’t want to limit myself to just research – in some cases awareness or political will are all that is missing. For example, Congress should robustly fund the US AI Safety Institute, so that the government has more capacity to think clearly about AI policy, as well as the Bureau of Industry and Security, so that someone in the government will have some idea of what happens to advanced AI chips after they are exported.

What kind of topics are you interested in researching in the future?

I’m sure my opinions will change after I’ve had some time to process the past six years a bit more, but currently I’m particularly interested in the following interrelated six topics. This is not intended to be exhaustive of all the important issues, and likely I will focus more over time (e.g., going deep on just one or two of these after an exploratory period in the coming months).

Assessment and forecasting of AI progress

This is one of the key foundations for thinking clearly about the rest of the topics below. As noted above, there’s some great work happening here already, but it is skewed in various ways by different incentives (including both commercial and academic ones). I think there’s a huge opportunity for more work on this topic in the non-profit sector.

At the same time, I’m conscious that we may ultimately get AI assessment and forecasting to a very rigorous state and yet still see relatively slow policy action. So in addition to working with people to better understand AI progress, I’m also interested in helping better communicate the pace of progress in a way that resonates. I have a strong sense that “feeling the AGI” is easier to achieve in industry, but I don’t know why exactly, given that there isn’t actually a large gap between what capabilities exist in labs and what is publicly available to use. I have a few hypotheses that I’m interested in exploring. And I’m interested in learning from historical cases as to why some policy issues are and aren’t seen as urgent – e.g., COVID and pandemic policy generally, World War II and the rise of Hitler, climate change, etc.

Regulation of frontier AI safety and security

I think that improving frontier AI safety and security is quite urgent, given the number of companies (dozens) that will soon (next few years at most) have systems capable of posing catastrophic risks. Given that that is not much time to set up entirely new institutions, I’m particularly interested in opportunities for action under existing legal authorities, as well as shaping the implementation of already-approved legislation such as the EU AI Act.

As noted above, and explained in more detail in this paper and similar work, companies and governments will not necessarily give AI safety and security the attention it deserves by default (this is not a comment specifically about OpenAI, as discussed above). There are many reasons for this, one of which is a misalignment between private and societal interests, which regulation can help reduce. There are also difficulties around credible commitments to and verification of safety levels, which further incentivize corner-cutting: people assume others are going to cut corners to gain an advantage and can’t tell what the ground truth is, or think they will change their minds later. Corner-cutting occurs across a range of areas, including prevention of harmfully biased and hallucinated outputs as well as investment in preventing the catastrophic risks on the horizon. There are, to be clear, some ways in which commercial incentives encourage safety, though I think it would be irresponsible to assume that those incentives will be sufficient, particularly for ambiguous, novel, diffuse, and/or low-probability/high-magnitude safety risks.
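
As a toy illustration of the incentive problem just described (not a model of any actual company), here is a minimal sketch of a two-firm “safety investment” game with made-up payoffs: without an external penalty, cutting corners is each firm’s best response; with one, investing in safety is.

```python
# A toy two-firm game illustrating why corner-cutting can be privately rational.
# All payoff numbers are hypothetical placeholders chosen only for illustration.
def payoff(my_safe: bool, other_safe: bool, penalty: float) -> float:
    base = 10.0
    safety_cost = 3.0 if my_safe else 0.0                       # cost of investing in safety
    speed_bonus = 2.0 if (not my_safe and other_safe) else 0.0  # edge from cutting corners
    fine = penalty if not my_safe else 0.0                      # regulation internalizes the externality
    return base - safety_cost + speed_bonus - fine

def invests_in_safety(other_safe: bool, penalty: float) -> bool:
    return payoff(True, other_safe, penalty) >= payoff(False, other_safe, penalty)

for penalty in (0.0, 6.0):
    always_safe = all(invests_in_safety(other, penalty) for other in (True, False))
    print(f"penalty={penalty}: safety investment is the best response regardless of rival? {always_safe}")
```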

I’m excited about understanding how companies can credibly demonstrate safety while protecting valuable and potentially misusable IP. The difficulty of demonstrating compliance without compromising sensitive information is a major barrier to arms control agreements, and addressing it requires innovation. This issue is also at the core of effective domestic regulation. I’m excited to collaborate with people working on this and other related technical AI governance questions.
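
Purely as an illustration of one small building block in this design space (not a description of any scheme actually in use), the sketch below shows a salted hash commitment: a lab could publish a commitment to an internal evaluation report now and reveal the report to an auditor later, proving it was not altered in the meantime. Real approaches (zero-knowledge proofs, trusted hardware, third-party audits) are far more involved.

```python
# A minimal salted hash commitment, as one ingredient of "demonstrating without
# disclosing": publish the digest now, reveal the document and salt to an
# auditor later. This is only a sketch of the idea, not a complete protocol.
import hashlib
import secrets

def commit(document: bytes) -> tuple[str, bytes]:
    salt = secrets.token_bytes(16)  # prevents brute-forcing short documents from the digest
    digest = hashlib.sha256(salt + document).hexdigest()
    return digest, salt             # publish digest; keep salt and document private for now

def verify(document: bytes, salt: bytes, published_digest: str) -> bool:
    return hashlib.sha256(salt + document).hexdigest() == published_digest

report = b"hypothetical internal safety evaluation results"
digest, salt = commit(report)
print("published commitment:", digest)
print("auditor verification:", verify(report, salt, digest))  # True
```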

While some think that the right approach to the global AI situation is for democratic countries to race against autocratic countries, I think that having and fostering such a zero-sum mentality increases the likelihood of corner-cutting on safety and security, an attack on Taiwan (given its central role in the AI chip supply chain), and other very bad outcomes. I would like to see academics, companies, civil society, and policymakers work collaboratively to find a way to ensure that Western AI development is not seen as a threat to other countries’ safety or regime stability, so that we can work across borders to solve the very thorny safety and security challenges ahead.

Even if, as I think is very likely, Western countries continue to substantially outcompete China on AI, there is more than enough “gas in the tank” of computing hardware and algorithmic progress in autocratic countries for them to build very sophisticated capabilities, so cooperation will be essential. I realize many people think this sounds naive but I think those people haven’t thought through the situation fully or considered how frequently international cooperation (enabled by foresight, dialogue, and innovation) has been essential to managing catastrophic risks.
 

Economic impacts of AI

I think it’s likely that in the coming years (not decades), AI could enable sufficient economic growth that an early retirement at a high standard of living is easily achievable (assuming appropriate policies to ensure fair distribution of that bounty). Before that, there will likely be a period in which it is easier to automate tasks that can be done remotely. In the near-term, I worry a lot about AI disrupting opportunities for people who desperately want work, but I think it’s simultaneously true that humanity should eventually remove the obligation to work for a living and that doing so is one of the strongest arguments for building AI and AGI in the first place. Likely some will continue to work in the long-term but the incentive to do so might be weaker than before (whether this is true depends on a variety of cultural and policy factors). That is not something we’re prepared for politically, culturally, or otherwise, and needs to be part of the policy conversation. A naive shift towards a post-work world risks civilizational stagnation (see: WALL-E), and much more thought and debate about this is needed.

There is also a need for innovative analysis of the tax policy implications of AI. For example, some have suggested a “robot tax,” but what would the impacts of that be exactly? How would it compare to a compute tax, or an AI-value-added-tax, etc.? Should we not do anything specifically targeted at AI but instead just be aware that AI makes other kinds of taxation more urgent (e.g., if the returns to capital and labor diverge dramatically)?
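
As a back-of-the-envelope illustration of how one might begin comparing these options (every number below is a hypothetical placeholder, not an estimate), the same nominal rate applied to different bases yields very different revenues:

```python
# Hypothetical tax bases and a common rate, purely to illustrate that the choice
# of base (robots, compute, AI value added) matters as much as the rate itself.
scenarios = {
    "robot tax":      50e9,   # hypothetical value of installed robots (USD)
    "compute tax":    200e9,  # hypothetical annual AI compute spending (USD)
    "AI value-added": 500e9,  # hypothetical AI-attributable value added (USD)
}
rate = 0.05

for name, base in scenarios.items():
    print(f"{name:>14}: {rate:.0%} on ${base/1e9:.0f}B -> ${base * rate / 1e9:.1f}B/year")
```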

Acceleration of beneficial AI applications

While some benefits from AI will emerge and are emerging “automatically” (through the hard work of many people) without much government involvement or support, I don’t think this is happening to a sufficient degree. It’s quite apparent to people working in industry just how far we are from the ideal here, and how much things like limited AI literacy and limited resources are preventing the full spread of AI’s benefits.

I think that even to someone who is (nearly) exclusively focused on mitigating risks from AI, it makes sense to consider beneficial AI applications seriously, since they may change the perception of shared interests among different competing parties and thus affect the likelihood of corner-cutting, and beneficial applications may also provide a greater margin for error in terms of risk mitigation (e.g., by making society resilient to a higher level of AI being deployed or abused).

It’s also likely that there will, by default, be a growing gap between free and paid AI capabilities. There was a brief period where these were exactly equivalent (other than rate limits), namely when 4o was the best available paid model and was also available to users for free. But that era has passed and will soon be a distant memory as companies compete to leverage test-time compute for those willing to pay more. I’m not saying that there should be no such disparities, but that we should be thoughtful about what the right level is and whether we might be sleepwalking into a world of cognitive haves and have-nots.

The “AI for good” landscape seems very well-intentioned but inefficient to me at the moment, largely due to insufficient technical expertise in civil society and governments, and the lack of a good normative and theoretical foundation – e.g., it’d be good to see more analysis of the problems for which we should and shouldn’t expect solutions to arise via markets, when it makes sense to build bespoke narrow AI solutions for specific problems vs. improving general AI systems’ ability to handle those problems, when the right approach is subsidizing existing technologies vs. creating net new ones, etc.

There are certainly some exciting new ideas and framings being floated, but it’d also be good to see much more debate around these larger questions (in addition to one-off initiatives to foster a particular beneficial application), and it’d be good to see more cross-pollination between the “AI for good” community, the more theoretically oriented group of people thinking about ideas like risk-sensitive innovation, “d/acc,” and societal resilience to AI’s impacts (there is some cross-pollination but not enough), and those working to close the digital divide.

Compute governance

Compared to software, data, and talent, computing hardware has unique properties that make it an important focal point for AI policy: “it is detectable, excludable, and quantifiable, and is produced via an extremely concentrated supply chain” (quoted from this paper I worked on). This makes it worrying that the part of the US government responsible for overseeing what happens when that compute is shipped overseas is severely understaffed and underfunded, and that more generally there is little serious policy discussion of what the endgame is here (besides occasionally tightening export controls and requiring companies to report their big datacenters and training runs).

To the extent that there is serious analysis of compute governance happening in the academic literature, it generally lags behind developments in industry by a fair amount – e.g., to those within frontier AI companies, it has become increasingly clear in recent years that scaling up inference, not just training, can enable higher performance, but public analysis of the policy implications of this has only begun in earnest relatively recently. Ideas for distributing computing power (and the associated benefits of AI) more widely, such as via the government providing greater compute for academics, are generally too little too late and neglect issues specific to the developing world, which is in a quite different situation.
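
To make the inference-scaling point concrete under one simple (and assumed) model: if each independent attempt solves a task with probability p and a reliable verifier can select a correct attempt, best-of-n sampling succeeds with probability 1 - (1 - p)^n. The p values below are illustrative only, and real test-time-compute methods (search, long reasoning traces) are more complex than this.

```python
# Best-of-n success probability under independent attempts and a reliable
# verifier: 1 - (1 - p)^n. Illustrates why more inference compute can raise
# performance; the per-attempt probabilities here are purely illustrative.
def best_of_n(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

for p in (0.2, 0.5):
    row = ", ".join(f"n={n}: {best_of_n(p, n):.2f}" for n in (1, 4, 16, 64))
    print(f"per-attempt success {p:.0%} -> {row}")
```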

Fortunately, there are many exciting research questions to explore further, including some of the policy ideas briefly broached in the paper cited above, and in recent papers on technical AI governance generally and hardware-based AI governance specifically. But I think there are likely many more exciting ideas to explore here, including ones that would address head on the key tradeoffs in AI policy discussed below.

Overall “AI grand strategy”

There needs to be more debate about the big picture of how humanity will ensure AI and AGI benefit all of humanity. As noted above, I don’t find the options currently on offer very compelling, and many are too vague to actually evaluate. Research explicitly framed as “AI grand strategy” may turn out to be vacuous compared to work that builds up such a vision piecemeal, but I at least want to first assess the lay of the land and go from there.

Examples of key questions in AI grand strategy include:

  • How can we resolve or alleviate the tradeoff between the benefits of decentralized AI development (e.g. reducing the likelihood of harmful concentration of power) and the downsides of it (e.g. reducing the amount of safety and security talent applied to each “unit” of AI development)?
  • What policy actions by companies and countries make sense in a world where a treaty or grand bargain on AI is possible, or if it’s impossible? Likewise for Congressional action or the lack thereof?
  • What are the various options on the spectrum from nationalization or internationalization of AI research to today’s laissez-faire approach that are worth considering?
  • What does a serious dialogue about the ideal pace of AI progress look like, beyond “e/acc vs. Pause AI”, that takes into account the real risks as well as the real benefits of continued development, and that is informed by a consideration of the real policy options available?
  • In domains like energy and climate policy, there are (contested) quantitative frameworks for distilling the overall picture of impacts into a single number, e.g. the social cost of carbon, which can then inform net-benefit analyses. Could there be similar frameworks for AI (e.g., “the social cost of silicon”)?
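
On the last question above, and purely as an illustration of what such a framework might look like rather than a proposal, a discounted net-benefit calculation with a hypothetical “social cost of silicon” (SCS) per unit of some AI-activity proxy C(t), benefits B_a(t) from application a, and discount rate r could take a form like:

```latex
% Illustrative only: every symbol here is an assumption, by analogy with
% social-cost-of-carbon net-benefit analyses.
\[
  \mathrm{NB} \;=\; \sum_{t} \frac{1}{(1+r)^{t}}
  \left( \sum_{a} B_{a}(t) \;-\; \mathrm{SCS} \cdot C(t) \right)
\]
```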

I think there are exciting intersections between AI grand strategy and other topics discussed above. For example, verifiably secure, hardware-implemented multilateral veto power over large uses of compute (for training and/or inference) could allow the best of both worlds of benefiting from economies of scale while distributing effective power. We need exciting high-level visions to merit technical research on such methods, and we need a better menu of such methods in order to help calibrate on which high-level visions are feasible.
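
As a sketch of the approval logic only (the hard parts, hardware enforcement and unforgeable attestation, are exactly what the technical research above would need to supply), here is a hypothetical k-of-n check gating large compute jobs; the threshold, party names, and required approvals are all made-up parameters.

```python
# Toy k-of-n multilateral approval check for large compute jobs. This shows only
# the policy logic; secure hardware enforcement and attestation are out of scope.
# The threshold, required approvals, and party names are hypothetical.
from dataclasses import dataclass

@dataclass
class ComputeJob:
    description: str
    flops: float  # total compute requested for the job

REVIEW_THRESHOLD_FLOPS = 1e26  # hypothetical trigger for multilateral review
REQUIRED_APPROVALS = 3         # k in a k-of-n scheme
PARTIES = {"party_a", "party_b", "party_c", "party_d", "party_e"}  # the n parties

def may_proceed(job: ComputeJob, approvals: set[str]) -> bool:
    if job.flops < REVIEW_THRESHOLD_FLOPS:
        return True                      # small jobs are unrestricted
    return len(approvals & PARTIES) >= REQUIRED_APPROVALS

job = ComputeJob("hypothetical large training run", flops=5e26)
print(may_proceed(job, {"party_a", "party_c"}))             # False: only 2 approvals
print(may_proceed(job, {"party_a", "party_c", "party_e"}))  # True: 3 of 5 approved
```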

____

To be clear, these research interests aren’t totally new: I’ve already published on all of these topics before and during my time at OpenAI, both in the context of standalone papers as well as many system cards and OpenAI blog posts. And some are quite related to interests OpenAI has expressed, e.g., here. But I think that the particular angle I will bring to them, and the speed with which I will share my findings, will be quite different from what’s possible in industry.

How will you maintain independence?

I’d like this effort to be (and be perceived as) independent in its research and recommendations, and I’d like to be and be seen that way myself, though it’s not trivial to do so in practice while remaining close to the state-of-the-art in AI. I’ll continue to engage constructively with sectors with different perspectives on AI, including industry, academia, civil society, and government, and people with different ideologies.

OpenAI has offered to provide funding, API credits, and early model access to the new effort and has assured me that there would be no pre-publication review associated with this support, which I appreciate. Still, I haven’t yet made any final decisions re: accepting these offers given that they may compromise the reality and/or perception of independence. I will make any decisions jointly with the people I work with going forward (e.g., weighing real or perceived hits to independence against greater ability to do certain forms of work).
 

Regarding ideological independence, I will aim (as I have always done) to work with people who have a range of views on the risks and opportunities of AI, and to present my own views in a way that makes clear that I am not beholden to any particular ideology. That being said, I recognize that some will try to pigeon-hole me and my work in various ways no matter what – for example, given historical ties between the effective altruism, AI safety, and AI policy communities, I have lots of connections to the effective altruism community. And although those who know me know that I have also been a vocal critic of the community and don’t identify as “an EA,” I’m sure some will use these connections to dismiss me. Similarly, some may think I am too sympathetic to industry given my time there, or that I am otherwise biased in various ways – or they may draw similar conclusions based on who my future cofounders are. But I’d like to at least make such pigeon-holing be obviously unfair to anyone paying close attention, and I think being outside of industry will help towards that end.

How can I help?

I have never started a nonprofit, so I really need help on thinking through how to do so (if I go that direction). I’m interested in talking to people who might want to cofound a new effort in AI policy, as well as organizations that might want to host a semi-autonomous new effort as a component of their existing organization.

Please consider filling out this form if my research and advocacy interests above sound interesting to you, and especially (but not exclusively) if you:

  • Have a background in nonprofit management and operations (including fundraising),
  • Have expertise in economics, international relations, or public policy,
  • Have strong research and writing skills and are interested in a position as a research assistant across various topics,
  • Are an AI researcher or engineer, or
 

Hood Critic

I get that they sign NDAs and that leaving OpenAI is the best way for them to signal danger/protest, but these vague statements they leave the public with have the opposite effect of what they intend.
 