I am a lawyer. I am not licensed in California, or Delaware, or any of the states that likely govern OpenAI's employment contracts. So take what I am about to say with a grain of salt, as commentary rather than legal advice. But I am begging any California-licensed attorneys reading this to look into it in more detail. California may have idiosyncratic laws that completely destroy my analysis, which is based solely on general principles of contract law and not any research or analysis of state-specific statutes or cases. I also have not seen the actual contracts and am relying on media reports. But.

I think the OpenAI anti-whistleblower agreement is completely unenforceable, with two caveats. Common law contract principles generally don't permit surprises, and don't allow new terms unless some mutual promise is made in return for those new terms. A valid contract requires a "meeting of the minds".

Per Kelsey Piper's reporting, the only warning about this nondisparagement agreement is a line in the employment contract requiring that employees sacrifice even their vested profit participation units (PPUs) if they refuse to sign a general release upon ending their employment. At termination, OpenAI then demands a "general release" that includes a lifetime nondisparagement clause. That is not a standard part of a "general release": as commonly understood, the term means you give up the right to sue them for anything that happened during your employment.

Now, employment contracts can get tricky. If you have at-will employment, as the vast majority of people do, your contract is subject to renegotiation at any time. Because you have no continued right to your job, an employer can legally impose new terms in exchange for your continued employment; if you don't like it, your remedy is to quit. But vested PPUs are different, because you already have a right to them. New terms can't be imposed unless there are new benefits in return.

So, the caveats:

  1. The employment contract might define "general release" differently than that term is normally used. Contracts are allowed to do this; you are presumed to have read and understood everything in a contract you sign, even if the contract is too long for that to be realistic, and contract law promotes the freedom of the parties to agree to whatever they want. You might still be able to contest this if it's an "adhesion contract", wherein you have no realistic opportunity to negotiate, but I suspect most OpenAI employees have enough bargaining power that they don't have this out. 

  2. OpenAI might give out some kind of deal sweetener in exchange for the nondisparagement agreement. While contract law requires mutual promises, they don't have to be of equal value. It might, for example, offer mutual nondisparagement provisions that weren't included in the employment agreement. That's the trick I would use to make it enforceable. 

So, tl;dr: consult a lawyer. The employees who have already left and signed a new agreement might be screwed, but anyone else thinking about leaving OpenAI, and anyone who left without signing anything, can probably keep their PPUs and their freedom to speak if they consult a lawyer before leaving. If you are a lawyer licensed in CA (or DE, or whatever state turns out to govern OpenAI's employment contracts), my ask is that you give some serious thought to helping anyone who has left or wants to leave OpenAI navigate these challenges, and drop a line in the comments with your contact info if you decide you want to do so. 

Comments



For those who don't know, Matt Bruenig is a large and well-known Twitter account. It would be easy to fact-check this, and he would very likely be found out if it were false, so it is probably true. 

Confirmed; he does work in this area, there's independent reporting about his work on these topics, and he has a Substack about his very relevant legal work: https://www.nlrbedge.com/

Do you have any comment on the idea that nondisparagement clauses like this could be found invalid for being contrary to public policy? (How would that be established?)

Does anyone have guesses about how much it would cost to pay a California-licensed employment lawyer to form an opinion on this?

(edit: I don't plan to do this, because I'm on the wrong continent and don't really know any of the relevant people. I want this to be a prompt for someone closer to OpenAI to think about if they could make this happen.)

Low thousands? Obviously I haven't seen the documents, but a few hours times a few hundred dollars per billable hour should be in the ballpark. Someone might want a more in-depth study before taking actions that could expose them to massive liability, though...

If someone has a good plan for how to make good/useful things happen here, but requires funding for it, please contact me.

It would not be hard to do, but I don't quite get the point. What's the case for giving pro bono legal counseling to ex-OAI employees? They have the resources to coordinate and hire their own lawyers. Plaintiff-side firms have probably already reached out to them because it's in the news. Is the idea that writing a public legal memo analyzing the contract in depth would generate EA-related goodwill among the ex-OAIs, who are likely to go on to work at other AI shops? I'm not sure if that really makes sense.

For me, the idea was "creating a stronger public / free case that these contracts are unenforceable will make former OpenAI employees more willing to speak out / seek legal assurance that they can speak out, which in general is good for us learning more about what OpenAI is doing and more appropriately reacting to it".

I guess lots of people out there have a non-disparagement agreement that they're not going to question because they want a quiet life. I'd like to liberate those people (so that we can use their information) if I can.

Thanks, that's a helpful clarification.

I agree it's not clear there's anything useful to be done, which is why I asked for a good plan.
