

I am a lawyer and policy researcher interested in improving the governance of artificial intelligence. I currently work as Director of Research at the Institute for Law & AI. I previously worked in various legal and policy roles at OpenAI.

I am also a Research Affiliate with the Centre for the Governance of AI and a VP at the O’Keefe Family Foundation.

My research focuses on the law, policy, and governance of advanced artificial intelligence.

You can share anonymous feedback with me here.


Law-Following AI
AI Benefits


Topic contributions


I think many people are tricking themselves into being more intellectually charitable to Hanania than warranted.

I know relatively little about Hanania other than what has been brought to my attention through EA drama and some basic "know thy enemy" reading I did on my own initiative. I feel pretty comfortable in my current judgment that his statements on race are not entitled to charitable readings in cases of ambiguity.

Hanania by his own admission was deeply involved in some of the most vilely racist corners of the internet. He knows what sorts of messages appeal to and mobilize those people, and how such racists would read his messages. He “know[s] how it looks” not just to left-wing people but to racists.

More recently, he has admitted that he harbors irrational animus (mostly anti-LGBT stuff, from what I know), which seems like a much better explanation for his policy positions than any attempt at beneficence from egalitarian first principles. If you just read his recent policy stances on racial issues, they are shot through with an underlying contempt, lack of empathy, and broad-strokes painting, all consistent with what I think can fairly be called a racist disposition towards Black people in particular.

Charitable interpretation of statements can be a sensible disposition in many settings. But giving charitable interpretations to people with this sort of history seems both morally and epistemically unwise.

The prior on “person with a white supremacist history still engaged in right wing racial politics still has a racist underlying psychology” should be very high. Right-wing racists also frequently engage in dogwhistles to signal to each other while maintaining plausible deniability. Reading that statement (and others of his) with those priors+facts in mind, I feel very comfortable not giving Hanania any benefit of the doubt here.

There’s also a textual case that I think supports the racist reading. Woke people walking around “in suits” is not at all a common trope—I’ve literally never heard of someone talking about a woke person wearing a suit as some sort of significant indicator of anything. But racists judging Black people by what they wear—e.g., purporting to be willing to be nicer to Black people if only they dressed more appropriately—is a huge trope in American race discourse. This sort of congruence between racist tropes and Hanania’s language similarly applies to “in subways” and “animals.” These are racist tropes consistently used about Black people, not woke people.

Yeah, fair; I should have considered that more.

Example: They crammed three cosmonauts into a capsule initially designed for one person. But due to the size constraints, the cosmonauts couldn't wear proper spacesuits; they had to wear leisure suits!

Pretty wild discussion in this podcast about how aggressively the USSR cut corners on safety in their space program in order to stay ahead of the US. In the author's telling of the history, this was in large part because Khrushchev wanted to rack up as many "firsts" (e.g., first satellite, first woman in space) as possible. This seems like it was most proximately for prestige and propaganda rather than any immediate strategic or technological benefit (though of course the space program did eventually produce such bigger benefits).

Evidence of the following claim for AI: people may not need a reason to cut corners on safety because the material benefits are so high. They may do so just because of the prestige and glory of being first.


It could be the case that the board would reliably fail in all nearby fact patterns but that market participants simply did not know this, because there were important and durable but unknown facts about e.g. the strength of the MSFT relationship or players' BATNAs.

I agree this is an alternative explanation. But my personal view is also that the common wisdom that it was destined to fail ab initio is incorrect. I don't have much more knowledge than other people do on this point, though.

I think it would be fair to describe some Presidents as being effectively powerless with regard their veto yes, if the other party control a super-majority of the legislature and have good internal discipline.

(Emphasis added.) I think this is the crux of the argument. I agree that the OpenAI board may have been powerless to accomplish a specific result in a specific situation. Similarly, in this hypo, the President may be powerless to accomplish a specific result (vetoing legislation) in a specific situation.

But I think this is very far away from saying a specific institution is "powerless" simpliciter, which is what I disagreed with in Zach's headline. (And so I would similarly disagree that the President was "powerless" simpliciter in your hypo.)

An institution's powers will almost always be constrained significantly by both law and politics, so showing significant constraints on an institution's ability to act unilaterally is very far from showing it overall completely lacks power.


I agree this would be appealing to intellectually consistent conservatives, but this seems like a bad meme to be spreading/strengthening for animal welfare. Maybe local activists should feel free to deploy it if they think they can flip some conservative's position, but they will be setting themselves up for charges of hypocrisy if they later want to e.g. ban eggs from caged chickens.

How are you defining "powerless"? See my previous comment: I think the common meaning of "powerless" implies not just significant constraints on power but rather the complete absence thereof.

I would say that the LTBT is powerless iff it can be trivially prevented from accomplishing its primary function—overriding the financial interests of the for-profit Anthropic investors—by those investors, such as with a simple majority (which is the normal standard of corporate control). I think this is very unlikely to be true, p<5%.

I definitely would not say that the OpenAI Board was powerless to remove Sam in general, for the exact reason you say: they had the formal power to do so, but it was politically constrained. That formal power is real and, unless it can be trivially overruled in any instance in which it is exercised for the purpose for which it exists, sufficient to not be "powerless."

It turns out that they were maybe powerless to remove him in that instance and in that way, but I think there are many nearby fact patterns on which the Sam firing could have worked. This is evident from the fact that, in the period of days after November 17, prediction markets gave much less than 90% odds—and for many periods of time much less than 50%—that Sam would shortly come back as CEO.

As an intuition pump: Would we say that the President is powerless just because the other branches of government can constrain her (e.g., through the impeachment power or ability to override her veto) in many cases? I think not.

"Powerless" under its normal meaning is a very high bar, meaning completely lacking power. Taking all of Anthropic's statements as true, I think we have evidence that the LTBT has significant powers (the ability to appoint an increasing number of board members), with unclear but significant legal (an escalating supermajority requirement) and political constraints on those powers. I think it's good to push for both more transparency on what those constraints are and for more independence. But unless a simple majority of shareholders is able to override the LTBT—which seems to be ruled out by the evidence—I would not describe it as powerless.

I think "powerless" is a huge overstatement of the claims you make in this piece (many of which I agree with). Having powers that are legally and politically constrained is not the same thing as the nonexistence of those powers.

I agree though that additional information about the Trust and its relationship to Anthropic would be very valuable.


I am not under any non-disparagement obligations to OpenAI.

It is important to me that people know this, so that they can trust any future policy analysis or opinions I offer.

I have no further comments at this time.
