I wrote an essay that's a case study in how Open Philanthropy (and, by extension, EA) can be better at communications.

It's called Affective Altruism.

I wrote the piece because I was growing increasingly frustrated seeing EA's public reputation questioned following the SBF and OpenAI controversies. My main source of frustration wasn't just seeing EA interpreted uncharitably; it was that the seeds of this criticism were sown long before SBF and OpenAI became known entities.

EA's culture of ideological purity and (seemingly intentional) obfuscation from the public sets itself up for backlash. Not only is this unfortunate relative to the movement's good intentions, it's strategically unsound. EA is fundamentally in the business of public advocacy. It should be aiming for more than resilience against PR crises. As I say in the piece:

> The point of identifying and cultivating a new cause area is not for it to remain a fringe issue that only a small group of insiders care about. The point is that it is paid attention to where it previously wasn't.

The other frustrating thing is that what I'm asking for is not for EA to entertain some race-to-the-bottom popularity contest. It's an appeal to respect human psychology, to use time-tested, evidence-backed techniques like visualization and storytelling. There are ways to employ these communications strategies without reintroducing the irrationalities that EA prides itself on avoiding, and without meaningfully diminishing the movement's rigor.

On a final personal note: 

I have a tremendous love-hate relationship with EA. Amongst my friends (none of whom are EAs, despite most being inordinately altruistic) I'm slightly embarrassed to call myself an EA. There's a part of me that is allergic to ideologies and in-group dynamics. There's a part of me that's hesitant to ally myself with a movement that's so self-serious and dismissive of outside perceptions. There's also a part of me that feels spiteful about all the times EA has soft- and hard-rejected my well-meaning attempts at participation (case in point: I've already been rejected from the comms job I wrote this post to support my application for).

And yet, I keep coming back to EA because, in a world so riddled with despair and confusion, there's something reaffirming about a group of people who want to use evidence to do measurable good. This unimpeachable trait should be understood for the potential energy it holds amongst the many people like myself who don't even call themselves EAs. Past any belabored point about 'big tent' movements, all I mean to say is that EA doesn't need to be so closed-off. Just a little bit of communications work would go a long way.

Here's a teaser video I made to go along with the essay:

Comments

Very useful post, thanks. Improving our comms is one of our three priorities for EA Netherlands in 2024, and this will inform that work.

Out of interest:

  1. What are your other two priorities?
  2. How will you know if you've been successful in "improving your comms"? Curious to hear if you have a more specific OKR here.

Hey! 

Our other priorities for 2024 are GCR field building and investing in our volunteering programme. We'll do this alongside maintaining our more established programmes, e.g., our national EA crash course, our support for organisers around the country, and our co-working office.

In terms of measuring success, we still need to develop the strategy, so we can't yet say in detail how we will evaluate it. Broadly speaking, we want to increase awareness of, and inclination towards, effective altruism amongst proto-EAs in the Netherlands. We also want to ensure inclination remains high amongst the general public once they become aware of us. To evaluate the impact of this work, we will probably conduct surveys measuring awareness and inclination amongst proto-EAs and the general public before and after the interventions outlined in the strategy (whatever they may be) are implemented.

Of course, we'll also keep an eye on basic comms metrics like newsletter subscribers, LinkedIn followers, etc., as well as downstream metrics like intro programme completions.

For Q1 our comms OKR is as follows:

Objective: Comms - develop our strategy (ready to be handed to the volunteer team)
 
Key Results  

  1. Get 100 responses to our Dutch proto-EA marketing survey by March 8th (the survey asks about media consumption habits, barriers faced, recommendations for media platforms/influencers, etc.).
  2. Internally publish an analysis of the survey's results by March 15th (we're probably going to miss this deadline; in the end, we decided to rely on a volunteer for the analysis).
  3. Internally publish a communications strategy (in the style of Rumelt) consisting of a diagnosis, a guiding policy, and a set of coherent actions by March 22nd (again, we're probably going to miss this target).
  4. Recruit a team of 3+ volunteers by March 31st to help us implement the strategy (supplementing the marketing strategist and the Google Ads marketer we already have on the team).

Thanks James, cool to hear.

Re your final personal note - I feel a lot like you! Thanks for putting your thoughts out there.

thanks ulrik 🤝

I thought the video was excellent, and the highlights of your article were the concrete ideas and examples of good communication.

More concrete ideas please! I don't think anyone will disagree that EA hasn't been the best at branding itself, but in my experience it's easier said than done!

If people want more concrete ideas, they can hire me to do communications work.

I don't know how to be more concrete than I was in the article without working for free.

> EA's culture of ideological purity and (seemingly intentional) obfuscation from the public sets itself up for backlash.

The link goes to this article itself. Curious what you were trying to link to.

fixed thanks

"An OpenAI program director, who has very little to actually do with this larger public debate, is suddenly subpoenaed to testify in a congressional hearing where they are forced to answer an ill-tempered congress member's questions. It might go something like this"

This should be OP, not OpenAI, right?

fixed thanks

I don't know much about EA yet, so it was nice to hear your perspective on where things could improve. I can see both sides of the coin here: being accessible helps with the distribution of information, but non-serious people won't bring the rigor that's required to understand some of these complex issues.

I wonder where the middle ground is. I also wonder what changes would bring the most relief to you. A shift in culture? A shift in shareable material?

One of the broader points I'm advocating for is that the middle ground is far more stable and sizable than many in the community might think it is.

I think the 'non-serious' individual you speak of is somewhat of a straw man. If they are real, the risk of them polluting the quality of EA's work is quite small IMO. It's important to make a distinction between the archetype of a follower/fan (external comms) and a worker/creator (internal comms). A lot of EAs conflate internal and external communications.

This is a really cool topic. I wonder why there is tension. I haven't been around long enough to see it in action, but I'm getting a better sense for it as I read similar posts. Do you think there's a key cultural shift that would address the underlying issue? Do you think there's any fear (or some other emotion/rationale) about avoiding this middle ground?

Yeah, if you read the essay, it spends a lot of time speaking to both of those questions.

tldr

The fear is born from the very DNA of EA, which has its roots in avoiding the emotional irrationalities that lead to ineffective forms of altruism. The culture shift I want to see is a product of a) acknowledging and relinquishing this fear when it's not based on reality, and b) understanding the value proposition of good communications.

The video is interesting! I liked the demonstration at the beginning that you care more about someone's ideas once you've seen who they are. The radio switch at the start ran a bit long, but otherwise it was a very good idea.

Small feedback on your essay itself: 

even as someone interested in hearing what you had to say, I found that your writing could be formatted to let me skim it more efficiently. I'd have loved it if you'd posted more visible TL;DRs at the start and named the sections by their conclusions rather than their guiding questions.

The teaser video worked on me as you predicted though, props on that! 

This also makes for a distinctive cover letter for the OP job, to be sure! Smart.

This is great meta-feedback - I'll be sure to include more TL;DRs at the start of my articles too. 
