I wrote an essay that's a case study for how Open Philanthropy (and by extension, EA) can be better at communications.

It's called Affective Altruism.

I wrote the piece because I was growing increasingly frustrated seeing EA have its public reputation questioned following the SBF and OpenAI controversies. My main source of frustration wasn't just seeing EA interpreted uncharitably; it was that the seeds of this criticism were sown long before SBF and OpenAI became known entities.

EA's culture of ideological purity and (seemingly intentional) obfuscation from the public sets itself up for backlash. Not only is this unfortunate relative to the movement's good intentions, it's strategically unsound. EA is fundamentally in the business of public advocacy. It should be aiming for more than resilience against PR crises. As I say in the piece:

The point of identifying and cultivating a new cause area is not for it to remain a fringe issue that only a small group of insiders care about. The point is that it is paid attention to where it previously wasn't.

The other thing that's frustrating is that I'm not asking EA to entertain some race-to-the-bottom popularity contest. It's an appeal to respect human psychology: to use time-tested, evidence-backed techniques like visualization and storytelling. There are ways to employ these communications strategies without reintroducing the irrationalities that EA prides itself on avoiding, and without meaningfully diminishing the movement's rigor.

On a final personal note: 

I feel a tremendous love-hate relationship with EA. Amongst my friends (none of whom are EAs, despite most being inordinately altruistic) I'm slightly embarrassed to call myself an EA. There's a part of me that is allergic to ideologies and in-group dynamics. There's a part of me that's hesitant to ally myself with a movement that's so self-serious and dismissive of outside perceptions. There's also a part of me that feels spiteful about all the times EA has soft- or hard-rejected my well-meaning attempts at participation (case in point: I've already been rejected from the comms job I wrote this post to support my application for). And yet, I keep coming back to EA because, in a world so riddled with despair and confusion, there's something reaffirming about a group of people who want to use evidence to do measurable good. This unimpeachable trait of EA should be understood for the potential energy it holds amongst the many people like myself who don't even call themselves EAs. Beyond any belabored point about 'big tent' movements, all I mean to say is that EA doesn't need to be so closed-off. Just a little bit of communications work would go a long way.

Here's a teaser video I made to go along with the essay:

Comments

Very useful post, thanks. Improving our comms is one of our three priorities for EA Netherlands in 2024 and this will inform that work. 

Out of interest:

  1. What are your other two priorities?
  2. How will you know if you've been successful in "improving your comms"? Curious to hear if you have a more specific OKR here.

Hey! 

Our other priorities for 2024 are GCR field building and investing in our volunteering programme. We'll do this alongside maintaining our more established programmes e.g., our national EA crash course, our support for organisers around the country, and our co-working office.

In terms of measuring success, we still need to develop the strategy, so it is not currently possible to say in detail how we will evaluate it. Broadly speaking, we want to increase awareness of, and inclination towards, effective altruism amongst proto-EAs in the Netherlands. We also want to ensure inclination remains high amongst the general public once they become aware of us. Therefore, to evaluate the impact of this work, we will probably conduct surveys to measure awareness and inclination amongst proto-EAs and the general public before and after the interventions outlined in the strategy, whatever they may be, are implemented. 

Of course, we'll also keep an eye on basic comms metrics like newsletter subscribers, LinkedIn followers, etc., and downstream metrics like intro programme completions.

For Q1 our comms OKR is as follows:

Objective: Comms - develop our strategy (ready to be handed to volunteer team) 
 
Key Results  

  1. Get 100 survey responses to our Dutch proto-EA marketing survey by March 8th (this asks about media consumption habits, barriers faced, recommendations for media platforms/influencers, etc.).
  2. Internal publication of an analysis of the survey's results by March 15th (we're probably going to miss this deadline; in the end, we decided to rely on a volunteer for the analysis).
  3. Internal publication of a communications strategy (in the style of Rumelt) consisting of a diagnosis, a guiding policy, and a set of coherent actions by March 22nd (again, we're probably going to miss this target).
  4. Recruit a team of 3+ volunteers by March 31st to help us implement the strategy (supplementing the marketing strategist and the Google Ads marketer we've already got on the team).

Thanks James, cool to hear.

Re your final personal note - I feel a lot like you! Thanks for putting your thoughts out there.

thanks ulrik 🤝

I thought the video was excellent, and the highlights of your article were the concrete ideas and examples of good communication.

More concrete ideas please! I don't think anyone will disagree that EA hasn't been the best at branding itself, but in my experience it's easier said than done!

If people want more concrete ideas, they can hire me to do communications work.

I don't know how to be more concrete than I was in the article without working for free.

EA's culture of ideological purity and (seemingly intentional) obfuscation from the public sets itself up for backlash.

The link goes to this article itself. Curious what you were trying to link to.

fixed thanks

"An OpenAI program director, who has very little to actually do with this larger public debate, is suddenly subpoenaed to testify in a congressional hearing where they are forced to answer an ill-tempered congress member's questions. It might go something like this"

This should be OP, not OpenAI, right?

fixed thanks

I don't know much about EA yet, so it was nice to hear your perspective on where things could improve. I can see both sides of the coin here: being accessible helps with the distribution of information, but non-serious people won't bring the rigor that's required to understand some of these complex issues.

I wonder where the middle ground is? I also wonder what changes would bring the most relief to you. Shift in culture? Shift in sharable material?

One of the broader points I'm advocating for is that the middle ground is far more stable and sizable than many in the community might think it is.

I think the 'non-serious' individual you speak of is somewhat of a straw man. If they are real, the risk of them polluting the quality of EA's work is quite small IMO. It's important to make a distinction between the archetype of a follower/fan (external comms) and a worker/creator (internal comms). A lot of EAs conflate internal and external communications.

This is a really cool topic. I wonder why there is tension. I haven't been around long enough to see it in action, but I'm getting a better sense for it as I read similar posts. Do you think there's a key cultural shift that would address the underlying issue? Do you think there's any fear (or some other emotion/rationale) about avoiding this middle ground?

Yeah, if you read the essay, it spends a lot of time speaking to both of those questions.

tldr

The fear is born from the very DNA of EA, which has its roots in avoiding the emotional irrationalities that lead to ineffective forms of altruism. The culture shift I want to see is a product of a) acknowledging and relinquishing this fear when it isn't based in reality, and b) understanding the value proposition of good communications.

The video is interesting! I liked the demonstration at the beginning that you care more about someone's ideas once you've seen who they are. The radio-switching at the beginning ran a bit long, but otherwise it was a very good idea.

Small feedback on your essay itself: 

Even as someone interested in hearing what you had to say, your writing could be formatted to let me skim it more efficiently. I'd have loved it if you'd posted more visible TL;DRs at the start and named the sections by their conclusions rather than their guiding questions.

The teaser video worked on me just as you predicted, though. Props on that!

This also makes for a distinctive cover letter to the OP job, to be sure! Smart.

This is great meta-feedback - I'll be sure to include more TL;DRs at the start of my articles too. 
