The board of directors of OpenAI, Inc., the 501(c)(3) that acts as the overall governing body for all OpenAI activities, today announced that Sam Altman will depart as CEO and leave the board of directors. Mira Murati, the company’s chief technology officer, will serve as interim CEO, effective immediately.
A member of OpenAI’s leadership team for five years, Mira has played a critical role in OpenAI’s evolution into a global AI leader. She brings a unique skill set, understanding of the company’s values, operations, and business, and already leads the company’s research, product, and safety functions. Given her long tenure and close engagement with all aspects of the company, including her experience in AI governance and policy, the board believes she is uniquely qualified for the role and anticipates a seamless transition while it conducts a formal search for a permanent CEO.
Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.
In a statement, the board of directors said: “OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission. We are grateful for Sam’s many contributions to the founding and growth of OpenAI. At the same time, we believe new leadership is necessary as we move forward. As the leader of the company’s research, product, and safety functions, Mira is exceptionally qualified to step into the role of interim CEO. We have the utmost confidence in her ability to lead OpenAI during this transition period.” [emphasis added]
Found this on Reddit: Anxious_Bandicoot126 comments on Sam Altman is leaving OpenAI (reddit.com)
Obviously just speculation for now, but seems plausible. The moment the GPT store was released I thought:
"wow that's really good for business ... wow that's really bad for alignment"
I'm skeptical.
I've read their other comments. The initial comment sounded somewhat plausible, but their other comments sounded less like what I'd expect someone in that position to sound like.
Worth noting that of the 4 remaining board members, 2 are associated with EA: Helen Toner (CSET) and Tasha McCauley (EV UK board member)
This is a critically important point to hold in mind if the reason for the move seems to be due to safety concerns as opposed to personal malpractice/deceiving the board[1]
I don't know what the hell happened. I guess further clarifications on the decision-making process and corporate landscape will come tomorrow or, more likely, early next week.
I've voiced concerns before that EA is unaware that it can be drawn into 'one-way fights' sometimes, and this feels like another such moment. The Silicon Valley tech-twitter scene[2] has exploded over this, and so far EA is not coming out well in their eyes from what I can see. I think the days of "e/acc" being a meme movement are rapidly drawing to a close, and EA might find itself in a hostile atmosphere in what used to be one of the most EA-friendly places in the world.
Again, early speculations, but be careful out there Bay-Area EAs. Keep your wits about you.
Really strange that, while this looks like the most likely reason, it's not really reflected in the language
Perhaps one of the few cases where Twitter might be an accurate representation of thoughts on the ground
Ironically, this particular set of comments is doing the rounds on Twitter with some banal commentary. https://twitter.com/tobi/status/1726132247227740623?t=Qu5UR4QKDz5anypwmuANwQ&s=19
Yeah, this is one of the few times where I believe the EAs on the board likely overreached, because they probably didn't give enough evidence to justify their excoriating statement that Sam Altman was dishonest, and he might be coming back to lead the company.
I'm not sure how to react to all of this, though.
Edit: My reaction is just WTF happened, and why did they completely play themselves? Though honestly, I just believe that they were inexperienced.
Kudos for being uncertain, given the limited information available.
(Not something one can say about many of the other comments to this post, sadly.)
Adam D'Angelo also worked at Facebook with Moskovitz from 2004 to 2008 (incl. as CTO 2006-2008) and is on the board of Asana
Twitter is full of people laying into EA for being behind Sam Altman's firing. However, if it's true that this happened because the board thought Altman was trying to take the company in an 'unsafe' direction then I'm glad they did this. And I'm glad that for the time being considerations other than 'shareholder value' are not the defining motivation behind AI development.
This is incredibly short-sighted. The board’s behavior was grossly unprofessional and the accompanying blog post was borderline defamatory. And Altman is one of the most highly-connected and competent people in the Bay Area tech scene. Altman can easily start another AI company; in fact, media outlets are now reporting that he's considering doing just that, or might even return to OpenAI by pressuring the board to resign.
In fact, Manifold is at 50% that Altman will return as CEO, and at 38% that he'll start another AI company. It seems that the board was unable to think even just two steps ahead if they thought this would end well.
Latest (48 hours in): OpenAI Board Stands by Decision to Force Sam Altman Out of C.E.O. Role
After 48 hours of furious negotiations, the A.I. company said Mr. Altman would not return to his job and that former Twitch C.E.O. Emmett Shear would be its interim boss.
Oh wow, that last paragraph seems like a good sign that they have good grounds for these statements they're not walking back
Why do you think the rumors that the board was negotiating with Sam were "relatively credible"? At this point, it seems more likely than not to be false, e.g. either random fake news or PR spin by pro-Altman VCs.
I think you are over-responding when we basically have no good information, as illustrated by the fact that you keep having to walk back claims you made only a short time before.
I take your point here John. There's a lot that's still to come out about the events of the weekend, and I've probably been a bit trigger-happy with responses. I'm going to step back from this thread and possibly the Forum as a whole for a little bit.
I do want to note that I picked up a somewhat hostile/adversarial tone to your comment (I'm not saying this was intentional). To 'keep having to walk back claims' seems a bit of an implied overclaim to me, especially as from my PoV it only happened twice - once seeing Ashlee Vance's updated reporting, and the other with Joshua's comment.
'Walking back' also seems more adversarial than just 'corrected mistakes' (compare 'you keep having to walk back claims' vs 'you made corrections twice'). In any case, while the reporting has changed, a lot of my intuitions and feelings haven't shifted much. I still find the board's complete silence strange, and think this could be a precarious moment for AI Safety.
An open letter from 500 of ~700 OpenAI employees to the board, calling on them to resign (also on The Verge).
Suggests there's an enormous amount of bad feeling about the decision internally. It also seems like a bad sign that the board was unwilling to provide any 'written evidence' of wrongdoing, though maybe something will appear in the coming days.
But all told it looks pretty bad for EA. Seems like there's an enormous backlash online - initially against OpenAI for firing everyone’s favourite AI CEO, and now against “EA” “woke” “decelerationist” types.[1][2]
It’s also seemed to trigger a flurry of tweets from Nick Cammarata, saying that EAs are overwhelmingly self-flagellating and self-destructive and that EA caused him and his friends enormous harm. I think his claims are flatly wrong (though they may be true for him and his friends), and some of the replies seem to agree, but it has 500K views as I publish.
Seems like the whole episode (combined with at least one prominent EA seemingly saying it’s emblematic of something dreadful and toxic) has the potential to cause a lot of reputational damage, especially if the board chooses not to clarify its actions (although it's possibly too late for t...
I don't think they owe the EA community an explanation (it would be nice, but they don't have to). The only people with a right to demand one are the people who appointed them there and the OAI staff.
https://forum.effectivealtruism.org/posts/zuqpqqFoue5LyutTv/the-ea-community-does-not-own-its-donors-money
>I might as well give my money to the San Francisco Symphony. At least they won't spend it ruining things that I care about.
It is your right, but I don't see how this is related. How have they spent EA donors' money? If you are referring to the Open Phil $30M grant, Open Phil doesn't take donations, so they can donate to whoever they want and don't need to explain themselves. It would have been different if OpenAI were spending GiveWell's money.
I make this speculative comment with no inside information
There may be a world in which this is net positive. If EAs have been wrong the whole time about the best approach being the "narrow" or "inside" game, this might force EAs into being mostly adversarial vs. tech accelerationists and much of Silicon Valley in general. This could be more effective at stopping or slowing doom in the medium to long term than trying to force safety from the inside against strong market forces.
It could even help the EA AI risk crowd come more alongside the sentiment of the general public, after the initial reputational loss simmers down.
I'm not saying this is even likely, it's just a different take.
FYI — lots of relevant links collected here: OpenAI: The Battle of the Board and OpenAI: Facts from a Weekend
Very interested to find out some of the details here:
Side note: Greg held two roles: chair of the board, and president. It sounds like he was fired from the former role and resigned from the latter.
Regarding the second question, I made this prediction market: https://manifold.markets/JonasVollmer/in-a-year-will-we-think-that-sam-al?r=Sm9uYXNWb2xsbWVy
From this article:
If this is true, then I think the board has made a huge mess of things. They've taken a shot without any ammunition, and not realised that the other parties can shoot back. Now there are mass resignations, Microsoft is furious, seemingly all of Silicon Valley has turned against EA, and it's even looking likely that Altman comes back.
It seems like they didn't think they had to act like the boards of other billion dollar companies (notifying your partners of big decisions, being literal instead of euphemistic when discussing reasons for firing, selling your decisions with PR, etc). But often norms and customs happen for a reason, and corporate governance seems to be no exception.
I think it's premature to judge things based on the little information that's currently available. I would be surprised if there weren't reasons for the board's unconventional choices. (I'm not ruling it out though, that what you say ends up being right)
How much of this is "according to anonymous sources"?
The Board was deeply aware of intricate details of other parties' will and ability to shoot back. Probably nobody was aware of all of the details, since webs of allies are formed behind closed doors and rearrange during major conflicts, and since investors have a wide variety of retaliatory capabilities that they might not have been open about during the investment process.
What is your current view given how things have developed? Why do you keep putting forward strong views that are based on very bad information?
"OpenAI’s ouster of CEO Sam Altman on Friday followed internal arguments among employees about whether the company was developing AI safely enough, according to people with knowledge of the situation.
Such disagreements were high on the minds of some employees during an impromptu all-hands meeting following the firing. Ilya Sutskever, a co-founder and board member at OpenAI who was responsible for limiting societal harms from its AI, took a spate of questions.
At least two employees asked Sutskever—who has been responsible for OpenAI’s biggest research breakthroughs—whether the firing amounted to a “coup” or “hostile takeover,” according to a transcript of the meeting. To some employees, the question implied that Sutskever may have felt Altman was moving too quickly to commercialize the software—which had become a billion-dollar business—at the expense of potential safety concerns."
Kara Swisher also tweeted:
"More scoopage: sources tell me chief scientist Ilya Sutskever was at the center of this. Increasing tensions with Sam Altman and Greg Brockman over role and influence and he got the board on his side."
"The developer day and how the store was introduced was an inflection moment of...
Thought this was a good article on Microsoft's power: https://archive.li/soZMQ
It seems like the board did not fire Sam Altman for safety reasons, but for other reasons. Utterly confusing, and IMO it demolishes my previous theory, though a lot of other theories also lost out.
Sources below, with their archive versions included:
https://twitter.com/norabelrose/status/1726635769958478244
https://twitter.com/eshear/status/1726526112019382275
https://archive.is/dXRgA
https://archive.is/FhHUv
This is mere speculation, but another group I'm on posited this might be part of it:
Sam Altman's sister, Annie Altman, claims Sam has severely abused her
This doesn't seem impossible given the timing, but I'd still be very surprised if this was what the board's decision was about. (I'm especially skeptical that it would be exclusively about this.) For one thing, the board announcement uses the wording "hindering [the board's] ability to exercise its responsibilities." This doesn't seem like the wording someone would choose if their decision was prompted by investigating events that happened more than twenty years ago and which don't directly relate to beneficial use of AI or running a company. (Even in the unlikely case where the board decided to open an investigation into abuse allegations and then caught Sam Altman lying about details related to that, it's not apparent why they would describe these hypothetical lies as "hindering [the board's] ability to exercise its responsibilities," as opposed to using wording that's more just about "lost the board's trust.") Besides, I struggle to picture board members starting an investigation solely based on one accusation from when the person in question was still a teenager. I'm not saying that these accusations are for sure unimportant – in fact, I said the opposite on that LW comment thr...
Yeah, now that more information has come to light, it seems to be clearly about disagreements about how to pursue the OpenAI mission. I wonder if the board can point to at least one objectively outrageous thing that Altman was deceptive about, or whether it was more subtle stuff that added up but is hard to convey to outsiders. For instance, I could imagine that they got "empty promises" vibes from Altman where he was placating the most safety-concerned voices at OpenAI by saying he'll take such and such precautions later in the future, but then kept doing things that are at odds with taking safety seriously, until people had enough and felt deceived and like they could no longer trust his assurances. In this scenario, it's going to be difficult for the board and for Sutskever to convey that their decision wasn't some overreaction. (FWIW, I think it can be totally justifiable to fire someone over weasel-like assurances about mission alignment that never led to any visible actions – it's just tricky that there's always some plausible deniability where the CEO can say "I was going to take action later, like I said; it's just that you people are insufficiently pragmatic and don't have experience dealing with investors like Microsoft; and anyway, the tech isn't risky enough yet and you all are freaking out.")
It would seem like a bad move to openly say the "not consistently candid" and "hindering responsibilities" thing if there was no objective deception they could point to. Even if they don't state what happened publicly, the board has to be able to defend its actions to its employees and to its partners at Microsoft.
My impression is that this type of public admonishment is rather rare for the ousting of a CEO, and it would be more typical to talk about a "difference of vision" or something similarly bland. I think either they have a clear cut case against him, or the board has mishandled the situation.
We are at a critical time as we stand: either we have the board yielding to the plea/threat of the workers, or we have inexperienced actors at the helm of the driving force in AI. What do you think organizations like EA can do in this regard? Should we just sit and watch, or should we regard the threat as non-existent? To me, having this sort of people managing the AI space is a ticking time bomb.
Interesting. The press release defines the board's governance mission as "ensure that artificial general intelligence benefits all humanity," and then asserts that Sam hindered that mission.
I suppose one could interpret that as a shift towards greater caution and governance in the name of AI safety, or a shift towards greater speed/open-sourcing if the board views their mission through a lens of accelerationism and accessibility.
Or something entirely different... we're digging into talmudic nuance here, and all of these are near-wild guesses.
It could...
Not too long an unemployment period at 5 days, and on the other hand, not a bad endorsement.
The reinstatement of Altman as head of OpenAI took place under truly revolutionary circumstances. Reportedly, 650 employees threatened to leave immediately and investors threatened legal action against the ChatGPT creator. Unsurprisingly, Microsoft, the largest investor, owning 49% of the shares and pumping huge amounts of money into the company, had the most at stake. It was the tech giant that first expressed great dissatisfaction with Altman's dismissal and even offered him the creation of an AI division within Microsoft, should OpenAI's board of directors not relent.
Just saw this on Hacker News as a response to Sam Altman Exposes the Charade of AI Accountability. The damage to EA's reputation is hard to estimate but perhaps real.
Here’s a Bloomberg article with a few more details.
https://archive.ph/sv8SH
Apropos of nothing, I'm reminded of this old update from CEA.
It seemed not relevant enough to the topic, and too likely to be highly inflammatory, to be worth bringing up.
My understanding, though I'm not sure the board ever publicly confirmed this, was they decided that Larissa was acting on behalf of Leverage Research, and hence contrary to the best interests of CEA, and they wanted to stop the entryism.
IIRC the official reason (or at least the thing that caused stuff to come to a head) was that Larissa and Kerry had been dating for multiple months but had never told the rest of leadership or the board about it.
If Holden or other folks in EA blew up OpenAI, that ain't gonna be good for the movement... fr fr
Is this AI safety related?