Listen to the full podcast

Helen Toner: "For years, Sam had made it really difficult for the board to actually do that job by withholding information, misrepresenting things that were happening at the company, in some cases, outright lying to the board.

At this point, everyone always says, What? Give me some examples. I can't share all the examples, but to give a sense of the thing that I'm talking about, it's things like when ChatGPT came out, November 2022, the board was not informed in advance about that. We learned about ChatGPT on Twitter.

Sam didn't inform the board that he owned the OpenAI Startup Fund, even though he constantly was claiming to be an independent board member with no financial interest in the company.

On multiple occasions, he gave us inaccurate information about the small number of formal safety processes that the company did have in place, meaning that it was basically impossible for the board to know how well those safety processes were working or what might need to change.

Then a last example that I can share because it's been very widely reported relates to this paper that I wrote, which has been, I think, way overplayed in the press. The problem was that after the paper came out, Sam started lying to other board members in order to try and push me off the board.

It was another example that just really damaged our ability to trust him, and actually only happened in late October last year when we were already talking pretty seriously about whether we needed to fire him.

There are more individual examples. For any individual case, Sam could always come up with some innocuous-sounding explanation of why it wasn't a big deal, or it was misinterpreted, or whatever.

But the end effect was that after years of this kind of thing, all four of us who fired him came to the conclusion that we just couldn't believe things that Sam was telling us. That's a completely unworkable place to be in as a board, especially a board that is supposed to be providing independent oversight of the company, not just helping the CEO to raise more money."


From the interview:

When ChatGPT came out, November 2022, the board was not informed in advance about that. We learned about ChatGPT on Twitter.

Several sources have suggested that the ChatGPT release was not expected to be a big deal. Internally, ChatGPT was framed as a “low-key research preview”. From The Atlantic:

The company pressed forward and launched ChatGPT on November 30. It was such a low-key event that many employees who weren’t directly involved, including those in safety functions, didn’t even realize it had happened. Some of those who were aware, according to one employee, had started a betting pool, wagering how many people might use the tool during its first week. The highest guess was 100,000 users.

If that's true, then perhaps it wasn’t ex ante above the bar to report to the board.

Andrew Mayne points out that “the base model for ChatGPT (GPT 3.5) had been publicly available via the API since March 2022”.
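
To make that point concrete, here's a minimal sketch (mine, not from the thread) of what "publicly available via the API" means: anyone with an API key could already query the underlying model family through OpenAI's completions endpoint before the ChatGPT interface existed. The model name below is a present-day stand-in (in March 2022 the endpoint served models like text-davinci-002), and the client setup assumes the current openai Python package; treat the details as illustrative.

```python
# Illustrative only: querying a GPT-3.5-family base model directly via the
# public completions API, with no ChatGPT web interface involved.
# Assumptions: the modern `openai` Python package (v1+) and an API key set
# in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.completions.create(
    model="gpt-3.5-turbo-instruct",  # stand-in; in March 2022 this would have been e.g. text-davinci-002
    prompt="Summarise, in one sentence, what a corporate board is for.",
    max_tokens=60,
)
print(response.choices[0].text.strip())
```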

I think what would be more helpful for me is knowing what else was discussed in board meetings. Even if ChatGPT was not expected to be a big deal, if the board was (to take a hyperbolic example) discussing whether to have a coffee machine at the office, then not mentioning ChatGPT would be striking. On the other hand, if they only met once a year and only discussed e.g. whether the company was financially viable, then not mentioning ChatGPT makes more sense. And maybe even this is not enough: it would also be concerning if some board members wanted more information but did not get it. If a board member had requested more detail on product development and ChatGPT still was not mentioned, that would also look bad. I think the context and the particulars of this particular board matter.

Sam didn't inform the board that he owned the OpenAI Startup Fund, even though he constantly was claiming to be an independent board member with no financial interest in the company.

To me, this is the most damning element. Pulling this off would have required some amount of active deceit; indeed, the current website reads:

The fund’s investors include Microsoft and other OpenAI partners, although OpenAI itself is not an investor.

In particular, this revelation makes it look like the main reason the fund was started wasn't to create a developer ecosystem around OpenAI (as claimed), but to personally tie Sam to the success of OpenAI.

(The reason this is a serious problem is that having a serious financial stake in the success of AI technology disincentivises you from caring about the negative externalities of growing that business; preventing exactly that conflict of interest is what OpenAI's byzantine structure was designed to accomplish.)

Bret Taylor and Larry Summers (members of the current OpenAI board) have responded to Helen Toner and Tasha McCauley in The Economist.

The key passages:

Helen Toner and Tasha McCauley, who left the board of OpenAI after its decision to reverse course on replacing Sam Altman, the CEO, last November, have offered comments on the regulation of artificial intelligence (AI) and events at OpenAI in a By Invitation piece in The Economist.

We do not accept the claims made by Ms Toner and Ms McCauley regarding events at OpenAI. Upon being asked by the former board (including Ms Toner and Ms McCauley) to serve on the new board, the first step we took was to commission an external review of events leading up to Mr Altman’s forced resignation. We chaired a special committee set up by the board, and WilmerHale, a prestigious law firm, led the review. It conducted dozens of interviews with members of OpenAI's previous board (including Ms Toner and Ms McCauley), OpenAI executives, advisers to the previous board and other pertinent witnesses; reviewed more than 30,000 documents; and evaluated various corporate actions. Both Ms Toner and Ms McCauley provided ample input to the review, and this was carefully considered as we came to our judgments.

The review’s findings rejected the idea that any kind of AI safety concern necessitated Mr Altman’s replacement. In fact, WilmerHale found that “the prior board’s decision did not arise out of concerns regarding product safety or security, the pace of development, OpenAI's finances, or its statements to investors, customers, or business partners.”

Furthermore, in six months of nearly daily contact with the company we have found Mr Altman highly forthcoming on all relevant issues and consistently collegial with his management team. We regret that Ms Toner continues to revisit issues that were thoroughly examined by the WilmerHale-led review rather than moving forward.

Ms Toner has continued to make claims in the press. Although perhaps difficult to remember now, OpenAI released ChatGPT in November 2022 as a research project to learn more about how useful its models are in conversational settings. It was built on GPT-3.5, an existing AI model which had already been available for more than eight months at the time.

For context:

  1. OpenAI claims that while Sam owned the OpenAI Startup Fund, there was “no personal investment or financial interest from Sam”.
  2. In February 2024, OpenAI said: “We wanted to get started quickly and the easiest way to do that due to our structure was to put it in Sam's name. We have always intended for this to be temporary.”
  3. In April 2024 it was announced that Sam no longer owns the fund.

On (1): it's very unclear how ownership could be compatible with no financial interest.

Maaaaaybe (2) explains it. That is: while ownership does legally entail financial interest, it was agreed that this was only a pragmatic stopgap measure, such that in practice Sam had no financial interest.

Thanks for the context!

Obvious flag that this still seems very sketchy. "the easiest way to do that due to our structure was to put it in Sam's name"? Given all the red flags that this drew, both publicly and within the board, it seems hard for me to believe that this was done solely "to make things go quickly and smoothly."

I remember Sam Bankman-Fried used a similar argument around registering Alameda; in that case, I believe it later led to him having a lot more power.
 

Sam didn't inform the board that he owned the OpenAI Startup Fund, even though he constantly was claiming to be an independent board member with no financial interest in the company.

Sam has publicly said he has no equity in OpenAI. I've not been able to find public quotes where Sam says he has no financial interest in OpenAI (does anyone have a link?).

It would be hard to imagine he has no interest; I would say even a simple bonus scheme, whether stock, options, cash, etc., would count as "interest". If the company makes money, then so does he.

He said this during that initial Senate hearing iirc, and I think he was saying this line frequently around then (I recall a few other instances but don't remember where).

Oh also fwiw, I believe this was relevant because the OpenAI nonprofit board was required (by its structure) to have a majority of board members without financial interest in the for-profit. Sam was working towards having majority control of the board, which would have been much harder if he couldn't be on it.

The stated behaviour sounds like grounds for 

  • opening an investigation, 
  • ensuring they got written statements from Altman on the concerns they thought he might be dishonest about, comparing those statements to the actual facts, and then giving him concrete requirements to improve his behaviour, 
  • and perhaps (if it's compatible with an investigation) publicly expressing concerns and calling out Altman for his behaviour. 

If none of that worked, they could publicly call for his resignation, and if he didn't give it, then make the difficult decision of whether to oust him on nonspecific grounds or collectively resign as the board.

Choosing instead to fire him, to the complete shock of other employees and the world at large, still seems like such a deeply counterproductive path that it inclines me towards scepticism of her subsequent justification, and towards the interpretation of bad faith that Peter presented in this comment.
