The board of directors of OpenAI, Inc., the 501(c)(3) that acts as the overall governing body for all OpenAI activities, today announced that Sam Altman will depart as CEO and leave the board of directors. Mira Murati, the company’s chief technology officer, will serve as interim CEO, effective immediately.

A member of OpenAI’s leadership team for five years, Mira has played a critical role in OpenAI’s evolution into a global AI leader. She brings a unique skill set, understanding of the company’s values, operations, and business, and already leads the company’s research, product, and safety functions. Given her long tenure and close engagement with all aspects of the company, including her experience in AI governance and policy, the board believes she is uniquely qualified for the role and anticipates a seamless transition while it conducts a formal search for a permanent CEO.

Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.

In a statement, the board of directors said: “OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission. We are grateful for Sam’s many contributions to the founding and growth of OpenAI. At the same time, we believe new leadership is necessary as we move forward. As the leader of the company’s research, product, and safety functions, Mira is exceptionally qualified to step into the role of interim CEO. We have the utmost confidence in her ability to lead OpenAI during this transition period.” [emphasis added]





Found this on Reddit: Anxious_Bandicoot126 comments on Sam Altman is leaving OpenAI:

I feel compelled as someone close to the situation to share additional context about Sam and company.

Engineers raised concerns about rushing tech to market without adequate safety reviews in the race to capitalize on ChatGPT hype. But Sam charged ahead. That's just who he is. Wouldn't listen to us.

His focus increasingly seemed to be fame and fortune, not upholding our principles as a responsible nonprofit. He made unilateral business decisions aimed at profits that diverged from our mission.

When he proposed the GPT store and revenue sharing, it crossed a line. This signaled our core values were at risk, so the board made the tough decision to remove him as CEO.

Greg also faced some accountability and stepped down from his role. He enabled much of Sam's troubling direction.

Now our former CTO, Mira Murati, is stepping in as CEO. There is hope we can return to our engineering-driven mission of developing AI safely to benefit the world, and not shareholders.

Obviously just speculation for now, but seems plausible. The moment the GPT store was released I thought:

"wow that's really good for business ... wow that's really bad for alignment"

I'm skeptical.

I've read their other comments. The initial comment sounded somewhat plausible, but their other comments sounded less like what I'd expect someone in that position to sound like.

Jonas V
This seems the most plausible speculation so far, though probably also wrong:
Lorenzo Buonanno
If you think it's more plausible than misalignment with OpenAI's mission, you could make some mana on 

Worth noting that of the 4 remaining board members, 2 are associated with EA: Helen Toner (CSET) and Tasha McCauley (EV UK board member)

This is a critically important point to hold in mind if the reason for the move seems to be due to safety concerns as opposed to personal malpractice/deceiving the board[1]

I don't know what the hell happened. I guess further clarifications on the decision-making process and corporate landscape will be known tomorrow or, more likely, early next working week.

I've voiced concerns before that EA is unaware that it can be drawn into 'one-way fights' sometimes, and this feels like another such moment. The Silicon Valley tech-twitter scene[2] has exploded over this, and so far EA is not coming out well in their eyes from what I can see. I think the days of "e/acc" being a meme movement are rapidly drawing to a close, and EA might find itself in a hostile atmosphere in what used to be one of the most EA-friendly places in the world.

Again, early speculations, but be careful out there Bay-Area EAs. Keep your wits about you.

  1. ^

    Really strange that, while this looks like the most likely reason, it's not really reflected in the language.

  2. ^

    Perhaps one of the few cases where Twitter might be an accurate representation of thoughts on the ground

Ironically, this particular set of comments is doing the rounds on Twitter with some banal commentary.


Yeah, this is one of the few times where I believe the EAs on the board likely overreached, because they probably didn't give enough evidence to justify their excoriating statement that Sam Altman was dishonest, and he might be coming back to lead the company.

I'm not sure how to react to all of this, though.

Edit: My reaction is just WTF happened, and why did they completely play themselves? Though honestly, I just believe that they were inexperienced.

I'm not sure how to react to all of this, though.

Kudos for being uncertain, given the limited information available.

(Not something one can say about many of the other comments to this post, sadly.)

Yeah, the tech scene really seems to have come down on the side of Sam Altman already. Let's hope the board had good grounds and will be able to demonstrate evidence of dishonesty soon.
Jelle Donders
I've shared very similar concerns for a while. The risk of successful narrow EA endeavors that lack transparency backfiring in this manner feels very predictable to me, but many seem to disagree.
There's some related discussion here on LW.
Ben Chancey
Do these explanations seem at odds to you for some reason? The language used in the statement does not say anything about personal malpractice/deception, just that he was "not consistently candid in his communications with the board". It seems entirely possible to me, and indeed probably most likely given what else we now know, that the board is alleging dishonesty re: safety-related commitments he made, or something like this. 

Adam D'Angelo also worked at Facebook with Moskovitz from 2004 to 2008 (incl. as CTO 2006-2008) and is on the board of Asana

Twitter is full of people laying into EA for being behind Sam Altman's firing. However, if it's true that this happened because the board thought Altman was trying to take the company in an 'unsafe' direction then I'm glad they did this. And I'm glad that for the time being considerations other than 'shareholder value' are not the defining motivation behind AI development.

This is incredibly short-sighted. The board’s behavior was grossly unprofessional and the accompanying blog post was borderline defamatory. And Altman is one of the most highly-connected and competent people in the Bay Area tech scene. Altman can easily start another AI company; in fact, media outlets are now reporting that he's considering doing just that, or might even return to OpenAI by pressuring the board to resign. 

In fact, Manifold is at 50% that Altman will return as CEO, and at 38% that he'll start another AI company. It seems that the board was unable to think even just two steps ahead if they thought this would end well.

Altman starting a new company could still slow things down a few months. Which could be critically important if AGI is imminent. In those few months perhaps government regulation with teeth could actually come in, and then shut the new company down before it ends the world.
You had no evidence to justify that claim back when you made it, and as new evidence is released, it looks increasingly likely that the claim was not only unjustified but also wrong (see e.g. this comment by Gwern).

Latest (48 hours in): OpenAI Board Stands by Decision to Force Sam Altman Out of C.E.O. Role
After 48 hours of furious negotiations, the A.I. company said Mr. Altman would not return to his job and that former Twitch C.E.O. Emmett Shear would be its interim boss. 

The board of directors at OpenAI, the high-flying artificial intelligence start-up, stood by its decision to push out its former chief executive Sam Altman, according to an internal memo sent to the company’s staff on Sunday night.

OpenAI named Emmett Shear, a former executive at Twitch, as the new interim chief executive, pushing aside Mira Murati, a longtime OpenAI executive who was named interim chief executive after Mr. Altman’s ouster. The board said Mr. Shear has a “unique mix of skills, expertise and relationships that will drive OpenAI forward,” according to the memo viewed by The New York Times.

“The board firmly stands by its decision as the only path to advance and defend the mission of OpenAI,” said the memo, referring to Mr. Altman’s ouster on Friday. It was signed by each of the four directors on the company’s board: Adam D’Angelo, Helen Toner, Ilya Sutskever, and Tasha McCauley.

“Put simply, Sam’s behavior and lack of transparency in his interactions with the board undermined the board’s ability to effectively supervise the company in the manner it was mandated to do,” the memo said.

Oh wow, that last paragraph seems like a good sign that they have good grounds for these statements they're not walking back.

It seems odd for them to say that given that there were relatively credible rumours that the board was negotiating with Sam about a potential return (which we can assume broke down as they looked for an alternative CEO). [I've retracted the above, as it seems inaccurate with the new hiring of Shear and reports that the board just went silent in response to pressure from investors and Microsoft]

Can they not share some of the reasoning though? Like, sure, some of it may involve corporate proprietary knowledge and NDAs, but part of the reason there was such a blowback to the decision was that it seemed to come out of nowhere. People assumed another shoe was going to drop because of the manner of the board's decision, and then it just hasn't?

The new CEO has literally just promised to: So he's accepted the position without even knowing why they did what they did at a high level. [seems false, see Joshua's reply below]

While the board probably have the right to do what they did via the OpenAI Charter, the fact that they are not sharing the reasons for doing so, at either a high or low level, internally or externally, means that they have lost and are continuing to lose a lot of credibility and legitimacy, regardless of the legal facts of the case.

Why do you think that the rumors that the board was negotiating with Sam was "relatively credible?" At this point, seems more likely than not to be false, eg either random fake news or a PR spin by pro-Altman VCs. 

I mean, I definitely agree that there's a fog-of-war situation going on. Given some new updates here, I've retracted that paragraph. Some original points were:

* Things like this - yes, distrust the media etc., but it seemed the main state of play
* Altman's photo wearing the guest pass - seems like an obvious "I'm coming back to return as CEO or not at all" implication. He was obviously in the OpenAI offices for some reason; it seems weird for it not to be negotiations with the board over something, as opposed to collecting his belongings
* Roon had a now-deleted tweet along the lines of "crossed the Rubicon, troops marching on Rome", which again implies there was an internal OpenAI move to get Sam back

I still find the board silence pretty weird, and the big missing piece here. I stand by my current belief that the radio silence is currently damaging for the perception and support of the AI Safety cause.

Update on point 2: It seems that the board wasn't present when he visited. I guess what seemed to be going on were two different factions:

1) Mira Murati, as interim CEO, was trying to find some way to get Altman and Brockman back
2) The board was trying to find its own new CEO choice asap to foreclose any chance of Sam returning to the position

I think you are over-responding when we basically have no good information, as illustrated by the fact that you keep having to walk back claims you made only a short time before.

I take your point here John. There's a lot that's still to come out about the events of the weekend, and I've probably been a bit trigger-happy with responses. I'm going to step back from this thread and possibly the Forum as a whole for a little bit.

I do want to note that I picked up a somewhat hostile/adversarial tone to your comment (I'm not saying this was intentional). To 'keep having to walk back claims' seems a bit of an implied overclaim to me, especially as from my PoV it only happened twice - once seeing Ashlee Vance's updated reporting, and the other with Joshua's comment.

'Walking back' also seems more adversarial than just 'corrected mistakes' (compare 'you keep having to walk back claims' vs 'you made corrections twice'). In any case, while the reporting has changed, a lot of my intuitions and feelings haven't shifted much. I still find the board's complete silence strange, and think this could be a precarious moment for AI Safety.

I don't think this is correct, from the same statement:
Thanks for this, have retracted that sentence. It feels like some version of the reasoning should be made available to investors/Microsoft/the public in some short-term timeframe though? I feel like that would do a fair amount to quell some of the reactions.
I would like that; however, how much they care about external reactions is unclear to me.
Ben Chancey
How on earth does one reconcile this with the fact that Ilya has now publicly tweeted that he deeply regrets his involvement in the board’s actions, and that he has signed the open letter threatening to quit unless the board resigns?

An open letter from 500 of ~700 OpenAI employees to the board, calling on them to resign (also on The Verge).

Suggests there's an enormous amount of bad feeling about the decision internally. It also seems like a bad sign that the board was unwilling to provide any 'written evidence' of wrongdoing, though maybe something will appear in the coming days.

But all told it looks pretty bad for EA. Seems like there's an enormous backlash online - initially against OpenAI for firing everyone’s favourite AI CEO, and now against “EA” “woke” “decelerationist” types.[1][2]

It’s also seemed to trigger a flurry of tweets from Nick Cammarata, saying that EAs are overwhelmingly self-flagellating and self-destructive and that EA caused him and his friends enormous harm. I think his claims are flatly wrong (though they may be true for him and his friends), and some of the replies seem to agree, but it has 500K views as I publish.

Seems like the whole episode (combined with at least one prominent EA seemingly saying it’s emblematic of something dreadful and toxic) has the potential to cause a lot of reputational damage, especially if the board chooses not to clarify its actions (although it's possibly too late for t...

Kevin Lacker
It is a disaster for EA. We need the EAs on the board to explain themselves, and if they made a mistake, just admit that they made a mistake and step down.

"Effective altruism" depends on being effective. If EA is just putting people in charge of other people's money, and they make decisions that seem like bad decisions, never explain why, and refuse to change their mind whatever happens... that's no better than existing charities! This is what EA was supposed to prevent! We are supposed to be effective. Not to fire the best employees and destroy a company that is putting an incredible amount of effort into doing responsible things.

I might as well give my money to the San Francisco Symphony. At least they won't spend it ruining things that I care about. Please, anyone who knows Helen or Tasha, ask them to reconsider.

I don't think that they owe the EA community an explanation (it would be nice, but they don't have to). The only people who have a right to demand that are the people who appointed them there and the OAI staff.

>I might as well give my money to the San Francisco Symphony. At least they won't spend it ruining things that I care about.

It is your right, but I don't know how this is related. How have they spent EA donors' money? If you are referring to the Open Phil $30M grant, Open Phil doesn't take donations, so they can donate to whomever they want and don't need to explain themselves. It would have been different if OpenAI was spending GiveWell's money.

I make this speculative comment with no inside information

There may be a world in which this is net positive. If EAs have been wrong the whole time about the best approach being the "narrow" or "inside" game, this might force EAs into being mostly adversarial vs. tech accelerationists and many in Silicon Valley in general. This could be more effective at stopping or slowing doom in the medium to long term than trying to force safety from the inside against strong market forces.

It could even help the EA AI risk crowd come more alongside the sentiment of the general public, after the initial reputational loss simmers down.

I'm not saying this is even likely, it's just a different take.

FYI — lots of relevant links collected here: OpenAI: The Battle of the Board  and OpenAI: Facts from a Weekend 

Very interested to find out some of the details here:

  • Why now?  Was there some specific act of wrongdoing that the board discovered (if so, what was it?), or was now an opportune time to make a move that the board members had secretly been considering for a while, or etc?
  • Was this a pro-AI-safety move that EAs should ultimately be happy about (ie, initiated by the most EA-sympathetic board members, with the intent of bringing in more x-risk-conscious leadership)?  Or is this a disaster that will end up installing someone much more focused on making money than on talking to governments and figuring out how to align superintelligence?  Or is it relatively neutral from an EA / x-risk perspective?  (Update: first speculation I've seen is this cautiously optimistic tweet from Eliezer Yudkowsky)
  • Greg Brockman, president of the board, is also stepping down.  How might this be related, and what might this tell us about the politics of the board members and who supported/opposed this decision?

Side note: Greg held two roles: chair of the board, and president. It sounds like he was fired from the former and resigned from the latter role.

Jackson Wagner
Nice!  I like this a lot more than the chaotic multi-choice markets trying to figure out exactly why he was fired.

From this article:

Brad Lightcap, an OpenAI executive, told employees on Saturday morning that the company had been talking with the board to “better understand the reason and process behind their decision,” according to an internal message I obtained.

“We can say definitively that the board’s decision was not made in response to malfeasance or anything related to our financial, business, safety or security/privacy practices,” he wrote. “This was a breakdown in communication between Sam and the board.”

If this is true, then I think the board has made a huge mess of things. They've taken a shot without any ammunition, and not realised that the other parties can shoot back. Now there are mass resignations, Microsoft is furious, seemingly all of silicon valley has turned against EA, and it's even looking likely that Altman comes back.

It seems like they didn't think they had to act like the boards of other billion dollar companies (notifying your partners of big decisions, being literal instead of euphemistic when discussing reasons for firing, selling your decisions with PR, etc). But often norms and customs happen for a reason, and corporate governance seems to be no exception. 

I think it's premature to judge things based on the little information that's currently available. I would be surprised if there weren't reasons for the board's unconventional choices. (I'm not ruling out, though, that what you say ends up being right.)

If this is true, then I think the board has made a huge mess of things. They've taken a shot without any ammunition, and not realised that the other parties can shoot back. Now there are mass resignations, Microsoft is furious, seemingly all of silicon valley has turned against EA, and it's even looking likely that Altman comes back.

How much of this is "according to anonymous sources"?

The board was deeply aware of intricate details of other parties' will and ability to shoot back. Probably nobody was aware of all of the details, since webs of allies are formed behind closed doors and rearrange during major conflicts, and since investors have a wide variety of retaliatory capabilities that they might not have been open about during the investment process.


What is your current view given how things have developed? Why do you keep putting forward strong views that are based on very bad information?

Jelle Donders
The board must have thought things through in detail before pulling the trigger, so I'm still putting some credence on there being good reasons for their move and the subsequent radio silence, which might involve crucial info they have and we don't. If not, all of this indeed seems like a very questionable move.
Kevin Lacker

"OpenAI’s ouster of CEO Sam Altman on Friday followed internal arguments among employees about whether the company was developing AI safely enough, according to people with knowledge of the situation.

Such disagreements were high on the minds of some employees during an impromptu all-hands meeting following the firing. Ilya Sutskever, a co-founder and board member at OpenAI who was responsible for limiting societal harms from its AI, took a spate of questions.

At least two employees asked Sutskever—who has been responsible for OpenAI’s biggest research breakthroughs—whether the firing amounted to a “coup” or “hostile takeover,” according to a transcript of the meeting. To some employees, the question implied that Sutskever may have felt Altman was moving too quickly to commercialize the software—which had become a billion-dollar business—at the expense of potential safety concerns."

Kara Swisher also tweeted:

"More scoopage: sources tell me chief scientist Ilya Sutskever was at the center of this. Increasing tensions with Sam Altman and Greg Brockman over role and influence and he got the board on his side."

"The developer day and how the store was introduced was an inflection moment of...

Not sure how important this is: Judging from the behavior of Satya Nadella during OpenAI's dev day 12 days ago, Microsoft quite likely didn't see that coming at that moment.

Thought this was a good article on Microsoft's power:

It is unclear if OpenAI could continue as a going concern without continual cash inflows from Microsoft. While OpenAI is, according to reports, making about $80 million per month currently and may be on track to make $1 billion in revenue in 2023—ten times more than it anticipated when it secured an additional $10 billion funding commitment from Microsoft in January—it is not known if the company is profitable or what its burn rate is. But it is likely to be high. The company lost $540 million in 2022 on revenue of less than $30 million for the entire year, according to documents seen by Fortune. If its costs have also ramped up in line with revenues, the company would need continual support from Microsoft just to keep operating.

Furthermore, OpenAI is entirely dependent on Microsoft’s cloud computing datacenters to both train and run its models. The global shortage of graphic processing units (GPUs), the specialized computer chips needed to train and run large AI models, and the size of OpenAI’s business, with tens of millions of paying customers dependent on those models, mean that the San Francisco AI company cannot easily port its business to another cloud service provider.

It seems like the board did not fire Sam Altman for safety reasons, but for other reasons instead. Utterly confusing, and IMO it demolishes my previous theory, though a lot of other theories also lost out.

Sources below, with their archive versions included:

This is mere speculation, but another group I'm on posited this might be part of it:
Sam Altman's sister, Annie Altman, claims Sam has severely abused her

This doesn't seem impossible given the timing, but I'd still be very surprised if this was what the board's decision was about. (I'm especially skeptical that it would be exclusively about this.) For one thing, the board announcement uses the wording "hindering [the board's] ability to exercise its responsibilities." This doesn't seem like the wording someone would choose if their decision was prompted by investigating events that happened more than twenty years ago and which don't directly relate to beneficial use of AI or running a company. (Even in the unlikely case where the board decided to open an investigation into abuse allegations and then caught Sam Altman lying about details related to that, it's not apparent why they would describe these hypothetical lies as "hindering [the board's] ability to exercise its responsibilities," as opposed to using wording that's more just about "lost the board's trust.") Besides, I struggle to picture board members starting an investigation solely based on one accusation from when the person in question was still a teenager. I'm not saying that these accusations are for sure unimportant – in fact, I said the opposite on that LW comment thr...

Yarrow B.
Wasn't that just a throwaway joke on Reddit?
I very much doubt he was fired over the allegations. However, if the allegations are true, it would raise the likelihood that he engaged in other sketchy or unethical behaviour that we don't know about.  "not consistently candid" seems to be an implication that he was deceptive to the board about something, at least. It could have just been about strategy, or it could have involved personal misbehaviour as well.

Yeah, now that more information has come to light, it seems to be clearly about disagreements about how to pursue the OpenAI mission. I wonder if the board can point to at least one objectively outrageous thing that Altman was deceptive about, or whether it was more subtle stuff that added up but is hard to convey to outsiders. For instance, I could imagine that they got "empty promises" vibes from Altman where he was placating the most safety-concerned voices at OpenAI by saying he'll take such and such precautions later in the future, but then kept doing things that are at odds with taking safety seriously, until people had enough and felt deceived and like they could no longer trust his assurances. In this scenario, it's going to be difficult for the board and for Sutskever to convey that their decision wasn't some overreaction. (FWIW, I think it can be totally justifiable to fire someone over weasel-like assurances about mission alignment that never led to any visible actions – it's just tricky that there's always some plausible deniability where the CEO can say "I was going to take action later, like I said; it's just that you people are insufficiently pragmatic and don't have experience dealing with investors like Microsoft; and anyway, the tech isn't risky enough yet and you all are freaking out.")

It would seem like a bad move to openly say the "not consistently candid" and "hindering responsibilities" thing if there was no objective deception they could point to. Even if they don't state what happened publicly, the board has to be able to defend its actions to its employees and to its partners at Microsoft.

My impression is that this type of public admonishment is rather rare for the ousting of a CEO, and it would be more typical to talk about a "difference of vision" or something similarly bland. I think either they have a clear cut case against him, or the board has mishandled the situation. 

We are at a critical time as we stand: either we have the board yielding to the plea/threat of the workers, or we have inexperienced actors at the helm of the driving force in AI. What do you think organizations like EA can do in this regard? Should we just sit and watch, or should we regard the threat as non-existent? Because to me, having this sort of people managing the AI space is a ticking time bomb.

Interesting. The press release defines the board's governance mission as "ensure that artificial general intelligence benefits all humanity," and then asserts that Sam hindered that mission.

I suppose one could interpret that as a shift towards greater caution and governance in the name of AI safety, or a shift towards greater speed/open-sourcing if the board views their mission through a lens of accelerationism and accessibility. 

Or something entirely different... we're digging into Talmudic nuance here, and all of these are near-wild guesses.

It could...

Not too long an unemployment period, at five days; and on the other hand, not a bad endorsement.

The reinstatement of Altman as head of OpenAI took place under truly revolutionary circumstances. Reportedly, 650 employees threatened to leave immediately and investors threatened legal action against the ChatGPT creator. Unsurprisingly, Microsoft, the largest investor, owning 49% of the shares and pumping huge amounts of money into the company, had the most at stake. It was the tech giant that first expressed great dissatisfaction with Altman's dismissal and even offered him the creation of an AI division within Microsoft, should OpenAI's board of directors not relent.

Just saw this on Hacker News as a response to Sam Altman Exposes the Charade of AI Accountability. The damage to EA's reputation is hard to estimate but perhaps real.

I think people have yet to realize that this whole AI Safety thing is complete BS. It's just another veil, like Effective Altruism, to get good PR and build a career around. The only people who truly believe this AI safety stuff are those with no technical knowledge or expertise.

Here’s a Bloomberg article with a few more details.

Odd anon
Wow, that article is seriously dishonest and misleading throughout. What a mess.

Apropos of nothing, I'm reminded of this old update from CEA.

Can someone who downvoted explain why they downvoted? 

Seemed not relevant enough to the topic, and too apt to be highly inflammatory, to be worthwhile to bring up. 

What’s the lore behind that update? This was before I followed EA community stuff

My understanding, though I'm not sure the board ever publicly confirmed this, was that they decided Larissa was acting on behalf of Leverage Research, and hence contrary to the best interests of CEA, and they wanted to stop the entryism.

IIRC the official reason (or at least the thing that caused stuff to come to a head) was that Larissa and Kerry had been dating for multiple months but had never told the rest of leadership or the board about it.


If Holden or other folks in EA blew up OpenAI, that ain't gonna be good for the movement... fr fr

Is this AI safety related?
