This is a special post for quick takes by Holly Elmore ⏸️ 🔸. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I ran a successful protest at OpenAI yesterday. Before the night was over, Mikhail Samin, who attended the protest, sent me a document to review that accused me of what sounds like a bait and switch and deceptive practices. The basis: I made an error in my original press release (which got copied as a description on other materials), and apparently I didn't address it to his satisfaction because I didn't change the theme of the event more radically or cancel it.

My error: I made the stupidest kind of mistake when writing the press release weeks before the event. The event was planned as a generic OpenAI protest a ~month and a half ahead of time. Then the story about the mysteriously revised usage policy and subsequent Pentagon contract arose, and we decided to make rolling it back the "small ask" of this protest, which is usually a news peg and goes in media outreach materials like the press release. (The "big ask" is always "Pause AI", and that's all that most onlookers will ever know about the messaging.) I quoted the OpenAI charter early on when drafting the release, and then, in a kind of word mistake that is unfortunately common for me, started using the word "charter" for both the actual charter document and the usage policy document. It was a semantic mistake rather than a typo, so proofreaders didn't catch it. I also subsequently made the same mistake verbally in several places. I even kind of convinced myself, from hearing my own mistaken language, that OpenAI had violated a much more serious boundary (their actual guiding document) than they had. Making this kind of mistake is unfortunately a characteristic sort of error for me, and making it in this kind of situation is one of my worst fears.

How I handled it: I was horrified when I discovered the mistake because it conveyed a significantly different meaning than the true story and, were it intentional, could have slandered OpenAI. I spent hours trying to track down every place I had said it, and the people who may have repeated it, so it could be corrected. Where I corrected "charter" to "usage policy", I left an asterisk explaining that an earlier version of the document had said "charter" in error. I told the protesters in the protester group chat about my mistake right away (some protesters just see flyers or public events, so I couldn't reach them all), which is when Mikhail heard about it. I explained that changing the usage policy is a lot less bad than changing the charter, but that the protest was still on, as it had been before the military story arose as the "small ask". I have a lot of volunteer help, but I'm still a one-woman show on stuff like media communications. I resolved to have my first employee go over all communications, not just volunteer proofreaders, so that someone super familiar with what we are doing can catch brainfart content errors that my brain was somehow blind to.

So, Mikhail seems to think I should have done more, or should not have kept the no-military small ask. He's going to publish something that really hurt my feelings, because it reads as an accusation of lying or manipulation and calls for EA community-level "mechanisms" to make sure that "unilateral action" (i.e. protests where I had to correct the description) can't be taken because I am "a high status EA".

This is silly. I made a very unfortunate mistake that I feel terrible about and tried really hard to fix. That's all. To be clear, PauseAI US is not an EA org, and it wouldn't submit to EA community mechanisms because that would not be appropriate. Our programming does not need EA approval and is not seeking it. I fell short of my own personal standards of accuracy, the standards I will hold PauseAI US to, and I will not be offering any kind of EA guarantees on what PauseAI does. Just because I am an EA doesn't sign my organization up for EA norms or make me especially accountable to Mikhail. I'm particularly done with Mikhail, in fact, after spending hours assisting him with his projects, trying to show him a good time when he visits Berkeley, and answering his endless nitpicks on Messenger, all while getting what feels like zero charity in return. During what should have been a celebration of a successful event, I was sobbing in the bathroom at the accusation that I misled the press and the EA community (at least somewhat deliberately, though I'm not sure how exactly he would characterize his claim).

I made suggestions on his document and told him he could post it, so you may see it soon. I think it's inflammatory, that it will create pointless drama, and that he should know better, but I also know he would make a big deal and say I was suppressing the truth if I told him not to post it. I think coming at me with accusations of bad faith and loaded words like "deceptive" and "misleading" is shitty, and I really do not want to be the subject of my own special struggle session on this forum. It wounds me because I feel the weight of what I'm trying to do, and my mistake scared me by showing how easily I could stumble and cause harm. It's a lot of stress to carry this mission forward against intense disagreement, and carrying it forward in the face of personal failure is even tougher. But I also know I handled my error honorably and did my best.

See my post here.

because I made an error in my original press release

This was an honest mistake that you corrected. It is not at all what I'm talking about in the post. I want the community to pay attention to the final messaging being deceptive about the nature of OpenAI's relationship with the Pentagon. The TL;DR of my post:

Good-hearted EAs lack the mechanisms to not output information that can mislead people. They organised a protest, with the messaging centred on OpenAI changing their documents to "work with the Pentagon", while OpenAI only collaborates with DARPA on open-source cybersecurity tools and is in talks with the Pentagon about veteran suicide prevention. Many participants of the protest weren’t aware of this; the protest announcement and the press release did not mention this. People were misled into thinking OpenAI is working on military applications of AI. OpenAI still prohibits the use of their services to "harm people, develop weapons, for communications surveillance, or to injure others or destroy property". If OpenAI wanted to have a contract with the Pentagon to work on something bad, they wouldn't have needed to change the usage policies of their publicly available services and could've simply provided any services through separate agreements. The community should notice a failure mode and implement something that would prevent unilateral decisions with bad consequences or noticeable violations of deontology.

I am extremely saddened by the emotional impact this has on you. I did not wish that to happen and was surprised and confused by your reaction. Unfortunately, it seems that you still don't understand the issue I'm pointing at; it is not the "charter" being in the original announcement, it's the final wording misleading people.

The draft you sent me opened with how people were misled about the "charter" and alleged that I didn't change the protest enough after fixing that mistake. I think you're just being very unclear with your criticism (and what I do understand, I simply disagree with, as I did when we spoke about this before the protest) while throwing around loaded terms like "deceptive", "misleading", and "deontologically bad" that will give a very untrue impression of me.

Hey Holly, I hope you're doing ok. I think the Bay Area atmosphere might be particularly unhealthy and tough around this issue atm, and I'm sorry for that. For what it's worth, you've always seemed like someone who has integrity to me.[1]

Maybe it's because I'm not in the thick of Bay Culture or super focused on AI x-risk, but I don't quite see why Mikhail reacted so strongly to this mistake (especially the language around deception? Or the suggestions to have the EA Community police Pause AI??[2]). I also know you're incredibly committed to Pause AI, so I hope you don't think what I'm going to say is insensitive, but I think even some of your own language here is a bit storm-in-a-teacup?

The mix-up itself was a mistake, sure, but not every mistake is a failure.[3] You clearly went out of your way to make sure that initial incorrect impression was corrected. I don't really see how that could meet a legal slander bar, and I think many people will find OpenAI reneging on a policy in order to work with the Pentagon highly concerning, whether or not it's the charter.

I don’t really want to have a discussion about California defamation law. Mainly, I just wanted to reach out and offer some support, and to say that, from my perspective, it doesn't look as bad as it might feel to you right now.

  1. ^

    Even when I disagree with you!

  2. ^

    If he publishes then I'll read it, but my prior is sceptical, especially given his apparent suggestions. (Turns out Mikhail published as I was writing! I'll give that a read.)

  3. ^

    I know you want to hold yourself to a higher standard, but still.

Thanks. I don't think I feel too bad about the mistake or about myself. I know I didn't do it on purpose and wasn't negligent (I had informed proofreaders and commenters, and none of them caught it either), and I know I sincerely tried everything to correct it. But it was really scary.

Mikhail has this abstruse criticism he is now insisting I don't truly understand, and I'm pretty sure people reading his post will not understand it either. Instead they will take away the message that I ~lied or otherwise committed a "deontological" violation, which is the message I took away when I read it.

It would be really great if EAs didn't take out their dismay over SBF's fraud on each other and didn't try to tear down EA as an institution. I am wrecked over what happened with FTX, too, and of course it was a major violation of community trust that we all have to process. But you're not going to purify yourself by tearing down everyone else's efforts, or get out ahead of the next scandal by making other EA orgs sound shady. EA will survive this whether you calm down or not, but there's no reason to create strife and division over Sam et al.'s crimes when we could be coming together, healing, and growing stronger.

We all know EAs and rationalists are anxious about getting involved in politics because of the motivated reasoning and soldier mindset that it takes to succeed there (https://www.lesswrong.com/posts/9weLK2AJ9JEt2Tt8f/politics-is-the-mind-killer).

Would it work to have a stronger distinction in our minds between discourse, which should stay pure from politics, and interventions, which can include, e.g. seeking a political office or advocating for a ballot measure?

Since EA political candidacies are happening whether we all agree or not, maybe we should take measures to insulate the two.  I like the "discourse v intervention" frame as a tool for doing that, either as a conversational signpost or possibly to silo conversations entirely. Maybe people involved in political campaigns should have to recuse themselves from meta discourse? 

Relatedly, I'm a bit worried that EA involvement in politics may lead to an increased tendency for reputational concerns to swamp object-level arguments in many EA discussions; and for an increasing number of claims and arguments to become taboo. I think there's already such a tendency, and involvement in politics could make it worse.

What's so weird to me about this is that EA has the clout it does today because of these frank discussions. Why shouldn't we keep doing that?

I'm in favor of not sharing infohazards, but that's about the extent of reputation management I endorse, and I think that leads to a good reputation for EA as honest!

What's so weird to me about this is that EA has the clout it does today because of these frank discussions. Why shouldn't we keep doing that?

I think the standard thing is for many orgs and cultures to start off open and transparent and move towards closedness and insularity. There are good object-level reasons for the former and good object-level reasons for the latter, but taken as a whole, it might be better viewed as a lifecycle thing rather than a matter of principled arguments.

Open Phil is an unusually transparent and well-documented example in my mind (though perhaps this is changing again in 2022)

I can see good reasons for individual orgs to do that, but far fewer for EA writ large. I'm with Rob Bensinger on this.

Agree there's little reason for political candidates to comment on meta-EA. There would, however, be reasons for them to comment on EA analysis of public policies. Their experiences in politics might also have a bearing on big picture EA strategy, which would be a greyer area.
