This is an alternate take on the OpenAI governance transition, complementing Zvi’s and Garrison’s (also excellent!) overviews. I wanted to focus more on the concrete responsibilities OpenAI will have following the transition, the prospective effects on existential risk and the AI safety field, and what corporate structures like this actually tend to look like and allow. I believe there’s a tendency to focus on these things less than we should: they define the incentive structures that companies seeking to dramatically shape the future will likely have to play by.
Summary
On October 28th, 2025, OpenAI announced its transition from its previous capped-profit structure to a for-profit public benefit corporation (PBC) model, capping a months-long endeavor to change OpenAI’s governance form. The company sought a structure that could be capitalized and invested in more easily than the old capped-profit arrangement, which was somewhat arcane and seemed to make investors nervous. The pressure to convert is illustrated by SoftBank’s April 2025 $40 billion investment in OpenAI, part of which was made conditional on this or a similar transition occurring.
The transition was allowed only conditional on several concessions extracted by the California and Delaware Attorneys General, including a governing nonprofit foundation that retains the power to appoint all board members of the for-profit, and a safety and security committee that can veto any model release. Furthermore, on safety and security issues the for-profit board will be required to put the mission of the nonprofit above all for-profit motives.
This creates a massive new nonprofit with plans to fund health and AI safety work, potentially shaking up and increasing funding for the AI safety space in particular. It also carries potential risks for governance and safety, while likely marking an end to Sam Altman’s complete dominance of OpenAI’s board.
The previous model
Previously, OpenAI was incorporated as three entities. The first was a nonprofit parent entity, OpenAI Inc. This entity maintained full control over OpenAI GP, a partnership which had 51% economic control of a “capped-profit” child entity, OpenAI LP. Microsoft held the other 49% economic interest in the capped-profit entity and retained several other privileges as a condition of its deal. However, OpenAI Inc. maintained actual corporate control over OpenAI LP via OpenAI GP. All investments in OpenAI LP were made on the condition that all returns above 100x would go entirely to the nonprofit, and they conferred no control of the company.
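To make the 100x cap concrete, here’s a minimal sketch of the mechanics. The numbers and the flat multiple are my own illustration; the actual cap reportedly varied by investment round, and the precise terms were never fully public.

```python
def split_exit_proceeds(invested: float, stake_value: float,
                        cap_multiple: float = 100.0) -> tuple[float, float]:
    """Split an investor's proceeds under a capped-profit structure.

    Hypothetical illustration: the investor keeps returns up to
    `cap_multiple` times their investment; everything above that
    flows to the nonprofit.
    """
    cap = invested * cap_multiple
    investor_share = min(stake_value, cap)
    nonprofit_share = max(stake_value - cap, 0.0)
    return investor_share, nonprofit_share

# A $10M investment whose stake ends up worth $5B:
investor, nonprofit = split_exit_proceeds(10e6, 5e9)
print(f"investor: ${investor / 1e9:.1f}B, nonprofit: ${nonprofit / 1e9:.1f}B")
# -> investor: $1.0B, nonprofit: $4.0B
```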
This structure was tested in the 2023 board shakeup, in which Sam Altman was ousted for not being “consistently candid” with the board but was later reinstated, with most of the board replaced. The episode shows the roles of the entities described above: the board of OpenAI Inc., the nonprofit, directed OpenAI GP, the controlling entity, to oust Altman from his CEO position at OpenAI LP, the capped-profit entity. It also showed that the board’s control wasn’t final even under the previous structure; it was inherently conditional on the support of the employees and stakeholders.
Why it was changed
By the end of this structure’s lifetime, the nonprofit’s profit share had been significantly diluted as OpenAI sold more capped-profit shares to investors and gave them to employees, with the retained share reportedly only 2%. This meant that although the nonprofit still had theoretical rights to the profits from AGI, the company believed it couldn’t raise enough capped-profit investment to actually reach AGI, given the perceived capital intensity required.
Part of the reason could also be that OpenAI’s governance structure was genuinely weird and unstable. As the amount of description I’ve had to do here shows, it was quite complex, which makes governance harder. Furthermore, a nonprofit board’s ability to instantly oust the CEO or any executive at any time was unprecedented. The conversion was an opportunity to solve both problems.
What it was changed to
OpenAI LP, the capped-profit entity, is being converted into a normal Delaware PBC, with shares held by the company’s various investors and employees as well as the nonprofit. Microsoft was granted a 27% share, the nonprofit a 26% share, and other employees and investors the remaining 47%. The nonprofit’s share is set to grow by an unspecified amount if OpenAI’s valuation increases more than 10x, with the grant scaling with the amount beyond that threshold.
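As a quick sanity check on scale: assuming the roughly $500 billion valuation reported around the time of the transition (my assumption; the announcement itself doesn’t state one), the 26% stake lines up with the Foundation’s reported asset value below.

```python
valuation = 500e9        # valuation reported around the transition (assumption)
nonprofit_stake = 0.26   # the Foundation's share of the PBC
print(f"${valuation * nonprofit_stake / 1e9:.0f}B")  # -> $130B
```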
OpenAI Inc., the nonprofit, was renamed the OpenAI Foundation, becoming the second-most capitalized nonprofit in the world after the Novo Nordisk Foundation, with assets valued around $130 billion. It is already planning to grant $25 billion. This initial commitment focuses on two areas: health and AI resilience. The Foundation plans to fund open-sourced frontier health datasets, support scientific research, and develop technical infrastructure to strengthen AI systems’ security and reliability.
Microsoft
Microsoft maintains a significant role as well. While under the previous agreement it had IP rights only until OpenAI declared it had achieved AGI, it now maintains access to post-AGI models through 2032. Microsoft also accepts that the revenue-sharing agreement lasts only until AGI is achieved.
A mechanism for declaring AGI was also added: the declaration must now be confirmed by an independent panel. OpenAI and Microsoft’s revenue-sharing agreement, the details of which are private, will last until this declaration is confirmed. And while Microsoft loses its right of first refusal for OpenAI’s cloud business, that loss is accompanied by a $250 billion commitment from OpenAI to buy Azure cloud services.
Analysis
Effectively, OpenAI is selling its future uncapped upside for the ability to take more investment. The biggest change is the removal of the 100x caps; in exchange, current investors, particularly Microsoft, take a significant hit on present profits. The deal also recapitalizes the nonprofit and allows the for-profit to take investments from outside partners more easily. Furthermore, the plan to grant the nonprofit additional equity later preserves some of OpenAI’s future upside in the case of AGI, though of course not all of it, as the original system would have. This is purely speculative, since these details haven’t been made public, but I would guess that the additional allocation might follow a sliding scale between 5% and 25% depending on what the market cap becomes, with OpenAI (or the AGs) perhaps insisting that in a true AGI scenario of astronomic proportions the nonprofit is at least symbolically the main beneficiary.1
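To illustrate what such a sliding scale could look like, here’s a sketch. To be clear, every parameter in it (the 5% floor, 25% ceiling, and the 100x point at which the full grant vests) is my own invention, not a disclosed term of the deal:

```python
import math

def extra_nonprofit_equity(valuation_multiple: float,
                           floor: float = 0.05,
                           ceiling: float = 0.25,
                           full_at: float = 100.0) -> float:
    """Purely hypothetical sliding scale for the Foundation's extra equity.

    Nothing vests below a 10x valuation increase; above that, the grant
    starts at `floor` (5%) and scales log-linearly up to `ceiling` (25%)
    at a `full_at` (here 100x) increase.
    """
    if valuation_multiple <= 10.0:
        return 0.0
    progress = (math.log10(valuation_multiple) - 1.0) / (math.log10(full_at) - 1.0)
    return floor + min(progress, 1.0) * (ceiling - floor)

# At a 100x increase the Foundation would hold 26% + 25% = 51%,
# making it at least the symbolic majority owner (see footnote 1).
print(f"{extra_nonprofit_equity(100.0):.2f}")  # -> 0.25
```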
Speaking of the AGs (and where EAs come in)
Most of what I’ve described so far would likely have happened under OpenAI’s original plan anyway. However, the transition also required the approval of the Delaware and California Attorneys General.
The AGs negotiated many concessions from the company, some of them quite substantial.
Attorney General Concessions
These governance protections, extracted as conditions for approval, fall into four key categories:
1. Nonprofit Control & Corporate Governance
The nonprofit retains ultimate authority over the PBC:
- Board Control: The nonprofit has sole power to appoint and remove all PBC board members as long as it holds its special controlling shares
- Mission Alignment: The PBC’s mission must remain identical to the nonprofit’s mission
- Approval Rights: The nonprofit must approve major corporate actions including governance changes, mission amendments, mergers/acquisitions, and new share issuances (this effectively means the nonprofit must maintain control of the PBC)
- Safety-First Mandate: On safety and security issues, PBC directors must consider only the mission, not shareholder profits
2. Safety & Security Committee Powers
An independent Safety and Security Committee (SSC) with real teeth:
- Nonprofit Committee: The SSC remains a committee of the nonprofit, not the PBC
- Veto Authority: The SSC can halt model releases even when risk thresholds would otherwise permit them
- Leadership: Chair Zico Kolter (CMU professor) must serve on the nonprofit board but not the PBC board
- Full Transparency: SSC Chair gets complete observation rights to all PBC board meetings and access to all safety-related information
3. Board Independence Requirements
- Within one year, at least one additional nonprofit director (beyond the SSC Chair) cannot serve on the PBC board
- The PBC board must have a majority of independent directors
- The PBC must provide the nonprofit with access to personnel, research, IP, AI models, and platforms without compensation
4. Attorney General Oversight
- Advance Notice: 21 days’ notice required for changes of control, mission changes, or amendments reducing the nonprofit’s rights (allowing time for AG review and a potential veto)
- Expert Review: AGs can retain experts at OpenAI’s expense to review major transactions
State-Specific Provisions
California:
- Both nonprofit and PBC headquarters must remain in California
- 21 days’ advance notice required for any relocation outside California (again, 21 days’ notice is basically a veto)
Delaware:
- Formal audit, risk, compensation, and nominating/governance committees required at the PBC level
- Each committee must be composed solely of independent directors
The public pressure and its sources
Several groups sent letters to the AGs urging them to block this conversion, or at least to include adequate safeguards, including:
- “Not for Private Gain”, a coalition including AI researchers, former OpenAI employees, and several legal scholars and economists, among them multiple Nobel Prize winners
- A California nonprofit coalition
This might also have included you! I made a comment on the EA Forum encouraging people to send letters to the AGs to block the transition, especially residents of those states or anyone with a particular interest in the conversion.
That message was later signal-boosted by Astral Codex Ten and reached at least tens of thousands of people. I know of several letters that were sent as a direct result, and I hope there were many more I didn’t hear about.
While there’s no evidence that this or any other campaign directly affected the concessions the AGs extracted, I’m hopeful that it had some impact, considering their magnitude. The concessions are genuinely abnormal within corporate governance and appear to be meaningful increases in safety.
How the new structure compares to competitors'
This actually leaves OpenAI’s governance structure fairly similar to Anthropic’s. If the boards of the nonprofit and the PBC are fully separated, it will leave a nonprofit with full board control of the company, with corporate interests still profiting from their shares but without any guaranteed control. As with Anthropic, everything is uncertain here, but in principle, if either company reached AGI under this structure and it turned out to be a winner-takes-all situation, a large portion of the profits would likely end up donated. The exact proportion will depend on the specifics of corporate, nonprofit, and trust governance, but I think in truly astronomical scenarios (>$1 trillion profit per year) we can expect donations of at least 25% as the bare minimum needed to avoid lawsuits, if not much more depending on board willingness and further investment.
The independent Safety and Security Committee, under the control of the nonprofit, is also unique. One could argue it’s more necessary at OpenAI than it would be somewhere like Anthropic, but it’s also a stronger structure than anything Anthropic is currently mandated to have in place (they may have stronger structures internally). Furthermore, the mandated formal audit and risk committees are a substantial governance concession in their own right.
To see where that leaves OpenAI, here’s a general picture of what governance currently looks like across the major AI companies.
| Company | Corporate Structure | Special Considerations |
| --- | --- | --- |
| OpenAI | Delaware PBC, mission to ensure AGI benefits all of humanity | Partially owned by nonprofit with control over board, also has independent commission able to block model releases |
| Anthropic | Delaware PBC, mission to develop safe and beneficial AI | Has Long-Term Benefit Trust, a separate board made of nonprofit leaders that will eventually appoint the majority of the primary corporate board, also various internal controls |
| xAI | Nevada C Corporation, for-profit (formerly PBC) | Standard corporate structure with Elon Musk as majority owner; no special safety governance structures publicly disclosed |
| Alphabet/Google/DeepMind | Delaware C Corporation (Alphabet parent), DeepMind operates as subsidiary | DeepMind has internal AI ethics board and safety teams; founders Larry Page and Sergey Brin together hold a voting majority of Alphabet via dual-class shares |
| Meta | Delaware C Corporation | Dual-class share structure gives Mark Zuckerberg majority voting control; has Responsible AI team and external oversight board for content (not specifically for AI releases) |
| Chinese Labs (e.g., Baidu, Alibaba, ByteDance, MoonshotAI) | Various structures under Chinese law, mix of public and private companies | Subject to Chinese government oversight and regulations including algorithm registration requirements; state influence through special shares ("golden shares") in some cases; must comply with Chinese AI governance framework |
What it means for AI safety more broadly
The initial implications for AI safety have to do with the amounts the OpenAI Foundation may commit to the space. OpenAI as a company already works with organizations like Apollo on its model releases, and I would expect sponsorship of similar organizations as part of the $25 billion initial grant. It could mean more capacity for AI safety researchers, though I’m uncertain about the current bottlenecks in this area.
The obvious worry is that the removal of immediate nonprofit control will mean less focus on safety at OpenAI proper. However, I think this change is likely a net positive even for AI safety at the purely research level. A mandated semi-independent committee with the power to block model releases is more than existed at OpenAI before, and if a truly independent board is eventually achieved, which I think is likely (>50%), it will have different incentives from the current board, which I worry is far too loyal to Sam Altman. That isn’t to say they’re going to bring in members who will attempt another coup, but independence from the PBC board is required by the AGs, and that will likely mean less loyalty to Altman personally.
Furthermore, while the previous profit structure was an interesting experiment with the potential to be incredible if it worked, OpenAI really did appear to be running out of runway. Perhaps it would have been for the best if OpenAI had ended up capital-constrained, but it seems the previous structure couldn’t last, and OpenAI’s board had put itself in a position where, intentionally or not, some kind of conversion became the only option.
Conclusion
Conditional on this conversion happening at all, this was probably about the best possible outcome. A nonprofit board, likely to be separated from Sam Altman’s interests, will gain power within the organization and hopefully act as a real check on OpenAI’s leadership.
Perhaps equally importantly, this shows that the state AGs have a real interest in maintaining the governance of these companies. These will be the same people enforcing controls on the governance structures of these organizations as they develop into corporations with considerable influence over our future. It's encouraging that they already appear to be taking this responsibility seriously.
1. i.e., 26% + 25% of additional equity would be 51%.
