I ran a successful protest at OpenAI yesterday. Before the night was over, Mikhail Samin, who attended the protest, sent me a document to review that accused me of what sounds like a bait and switch and deceptive practices, because I made an error in my original press release (which got copied as a description on other materials) and apparently didn't address it to his satisfaction, in that I didn't change the theme of the event more radically or cancel it.
My error: I made the stupidest kind of mistake when writing the press release weeks before the event. The event was planned as a generic OpenAI protest about a month and a half ahead of time. Then the story about the mysteriously revised usage policy and subsequent Pentagon contract arose, and we decided to make rolling it back the "small ask" of this protest, which is usually the news peg and goes in media outreach materials like the press release. (The "big ask" is always "Pause AI", and that's all that most onlookers will ever know about the messaging.) I quoted the OpenAI charter early on when drafting it, and then, in a kind of word mistake that is unfortunately common for me, started using the word "charter" for both the actual charter document and the usage policy document. Because it was a semantic mistake rather than a surface one, proofreaders didn't catch it. I also subsequently made the same mistake verbally in several places. I even half convinced myself, from hearing my own mistaken language, that OpenAI had violated a much more serious boundary (their actual guiding document) than they had. Making this kind of mistake is a characteristic sort of error for me, and making it in this kind of situation is one of my worst fears.
How I handled it: I was horrified when I discovered the mistake because it conveyed a significantly different meaning than the true story and, had it been intentional, could have slandered OpenAI. I spent hours trying to track down every place I had said it, and every person who might have repeated it, so it could be corrected. Where I corrected "charter" to "usage policy", I left an asterisk explaining that an earlier version of the document had said "charter" in error. I told the protesters in the protester group chat about my mistake right away (some protesters just see flyers or public events, so I couldn't reach them all), which is when Mikhail heard about it, and explained that changing the usage policy is a lot less bad than changing the charter, but that the protest was still on, as it had been before the military story became the "small ask". I have a lot of volunteer help, but I'm still a one-woman show on things like media communications. I resolved to have my first employee go over all communications, not just volunteer proofreaders, so that someone deeply familiar with what we are doing can catch brainfart content errors that my brain was somehow blind to.
So, Mikhail seems to think I should have done more, or should not have kept the no-military small ask. He's going to publish something that really hurt my feelings, because it reads as an accusation of lying or manipulation and calls for EA-community-level "mechanisms" to make sure that "unilateral action" (i.e. protests where I had to correct the description) can't be taken, because I am "a high status EA".
This is silly. I made a very unfortunate mistake that I feel terrible about and tried really hard to fix. That's all. To be clear, PauseAI US is not an EA org, and it wouldn't submit to EA community mechanisms, because that would not be appropriate. Our programming does not need EA approval and is not seeking it. I failed my own personal standards of accuracy, the standards I will hold PauseAI US to, and I will not be offering any kind of EA guarantees on what PauseAI does. Just because I am an EA doesn't sign my organization up for EA norms or make me especially accountable to Mikhail. I'm particularly done with Mikhail, in fact, after spending hours assisting him with his projects, trying to show him a good time when he visits Berkeley, and answering his endless nitpicks on Messenger, all while getting what feels like zero charity in return. During what should have been a celebration of a successful event, I was sobbing in the bathroom at the accusation that I had, at least somewhat deliberately (I'm not sure exactly how he would characterize his claim), misled the press and the EA community.
I made suggestions on his document and told him he could post it, so you may see it soon. I think it's inflammatory, that it will create pointless drama, and that he should know better, but I also know he would make a big deal of it and say I was suppressing the truth if I told him not to post it. I think coming at me with accusations of bad faith and loaded words like "deceptive" and "misleading" is shitty, and I really do not want to be part of my own special struggle session on this forum. It wounds me because I feel the weight of what I'm trying to do, and my mistake scared me that I could stumble and cause harm. It's a lot of stress to carry this mission forward against intense disagreement, and carrying it forward in the face of personal failure is even tougher for me. But I also know I handled my error honorably and did my best.
because I made an error in my original press release
This was an honest mistake that you corrected. It is not at all what I'm talking about in the post. I want the community to pay attention to the final messaging being deceptive about the nature of OpenAI's relationship with the Pentagon. See my post here. The TL;DR of my post:
- Good-hearted EAs lack the mechanisms to not output information that can mislead people.
- They organised a protest with the messaging centred on OpenAI changing their documents to "work with the Pentagon", while OpenAI only collaborates with DARPA on open-source cybersecurity tools and is in talks with the Pentagon about veteran suicide prevention. Many participants of the protest weren't aware of this; the protest announcement and the press release did not mention it. People were misled into thinking OpenAI is working on military applications of AI.
- OpenAI still prohibits the use of their services to "harm people, develop weapons, for communications surveillance, or to injure others or destroy property". If OpenAI wanted a contract with the Pentagon to work on something bad, they wouldn't have needed to change the usage policies of their publicly available services and could've simply provided any services through separate agreements.
- The community should notice this failure mode and implement something that would prevent unilateral decisions with bad consequences or noticeable violations of deontology.
I am extremely saddened by the emotional impact this has on you. I did not wish that to happen and was surprised and confused by your reaction. Unfortunately, it seems that you still don't understand the issue I'm pointing at; it is not the "charter" being in the original announcement, it's the final wording misleading people.
The draft you sent me opened with how people were misled about the "charter" and alleged that I didn't change the protest enough after fixing that mistake. I think you're just being very unclear with your criticism (and what I do understand, I simply disagree with, as I did when we spoke about this before the protest) while throwing around loaded terms like "deceptive", "misleading", and "deontologically bad" that will give a very untrue impression of me.
Hey Holly, I hope you're doing ok. I think the Bay Area atmosphere might be particularly unhealthy and tough around this issue atm, and I'm sorry for that. For what it's worth, you've always seemed like someone who has integrity to me.[1]
Maybe it's because I'm not in the thick of Bay Culture or super focused on AI x-risk, but I don't quite see why Mikhail reacted so strongly to this mistake (especially the language around deception? Or the suggestions to have the EA Community police Pause AI??). I also know you're incredibly committed to Pause AI, so I hope you don't think what I'm going to say is insensitive, but even some of your own language here is a bit storm-in-a-teacup?
The mix-up itself was a mistake, sure, but not every mistake is a failure. You clearly went out of your way to make sure that initial incorrect impression was corrected. I don't really see how that could meet the legal bar for slander, and I think many people will find OpenAI reneging on a policy in order to work with the Pentagon highly concerning whether or not it's the charter.
I don't really want to have a discussion about California defamation law. Mainly, I just wanted to reach out and offer some support, and say that from my perspective, it doesn't look as bad as it might feel to you right now.
If he publishes then I'll read it, but my prior is sceptical, especially given his apparent suggestions. Turns out Mikhail published as I was writing! I'll give that a read.

[1] Even when I disagree with you! I know you want to hold yourself to a higher standard, but still.
Thanks. I don't think I feel too bad about the mistake or about myself. I know I didn't do it on purpose and wasn't negligent (I had informed proofreaders and commenters, none of whom caught it either), and I know I sincerely tried everything to correct it. But it was really scary.
Mikhail has this abstruse criticism that he is now insisting I don't truly understand, and I'm pretty sure people reading his post will not understand it either, instead taking away the message that I ~lied or otherwise committed a "deontological" violation, which is what I took away when I read it.
It would be really great if EAs didn't take out their dismay over SBF's fraud on each other and didn't try to tear down EA as an institution. I am wrecked over what happened with FTX, too, and of course it was a major violation of community trust that we all have to process. But you're not going to purify yourself by tearing down everyone else's efforts, or get out ahead of the next scandal by making other EA orgs sound shady. EA will survive this whether you calm down or not, but there's no reason to create strife and division over Sam et al.'s crimes when we could be coming together, healing, and growing stronger.
We all know EAs and rationalists are anxious about getting involved in politics because of the motivated reasoning and soldier mindset that it takes to succeed there (https://www.lesswrong.com/posts/9weLK2AJ9JEt2Tt8f/politics-is-the-mind-killer).
Would it work to have a stronger distinction in our minds between discourse, which should stay free of politics, and interventions, which can include, e.g., seeking political office or advocating for a ballot measure?
Since EA political candidacies are happening whether we all agree or not, maybe we should take measures to insulate the two. I like the "discourse vs. intervention" frame as a tool for doing that, either as a conversational signpost or possibly to silo conversations entirely. Maybe people involved in political campaigns should have to recuse themselves from meta discourse?
Relatedly, I'm a bit worried that EA involvement in politics may lead to an increased tendency for reputational concerns to swamp object-level arguments in many EA discussions; and for an increasing number of claims and arguments to become taboo. I think there's already such a tendency, and involvement in politics could make it worse.
What's so weird to me about this is that EA has the clout it does today because of these frank discussions. Why shouldn't we keep doing that?
I'm in favor of not sharing infohazards, but that's about the extent of reputation management I endorse, and I think that leads to a good reputation for EA as honest!
What's so weird to me about this is that EA has the clout it does today because of these frank discussions. Why shouldn't we keep doing that?
I think the standard thing is for many orgs and cultures to start off open and transparent and move towards closedness and insularity. There are good object-level reasons for the former, and good object-level reasons for the latter, but taken as a whole, it might just be better viewed as a lifecycle thing rather than a matter of principled arguments.
Open Phil is an unusually transparent and well-documented example in my mind (though perhaps this is changing again in 2022)
I can see good reasons for individual orgs to do that, but way fewer for EA writ large to do this. I'm with Rob Bensinger on this.

Agree there's little reason for political candidates to comment on meta-EA. There would, however, be reasons for them to comment on EA analysis of public policies. Their experiences in politics might also have a bearing on big-picture EA strategy, which would be a greyer area.