I wrote an essay that's a case study in how Open Philanthropy (and, by extension, EA) can be better at communications.

It's called Affective Altruism.

I wrote the piece because I was growing increasingly frustrated seeing EA have its public reputation questioned following the SBF and OpenAI controversies. My main source of frustration wasn't just seeing EA interpreted uncharitably; it was that the seeds of this criticism were sown long before SBF and OpenAI became known entities.

EA's culture of ideological purity and (seemingly intentional) obfuscation from the public sets itself up for backlash. Not only is this unfortunate relative to the movement's good intentions, it's strategically unsound. EA is fundamentally in the business of public advocacy. It should be aiming for more than resilience against PR crises. As I say in the piece:

The point of identifying and cultivating a new cause area is not for it to remain a fringe issue that only a small group of insiders care about. The point is that it is paid attention to where it previously wasn't.

The other thing that's frustrating is that I'm not asking EA to entertain some race-to-the-bottom popularity contest. It's an appeal to respect human psychology, to use time-tested, evidence-backed techniques like visualization and storytelling. There are ways to employ these communications strategies without reintroducing the irrationalities that EA prides itself on avoiding, and without meaningfully diminishing the movement's rigor.

On a final personal note: 

I feel a tremendous love-hate relationship with EA. Amongst my friends (none of whom are EAs, despite most being inordinately altruistic) I'm slightly embarrassed to call myself an EA. There's a part of me that is allergic to ideologies and in-group dynamics. There's a part of me that's hesitant to ally myself with a movement that's so self-serious and dismissive of outside perceptions. There's also a part of me that feels spiteful about all the times EA has soft- and hard-rejected my well-meaning attempts at participation (case in point: I've already been rejected from the comms job I wrote this post to support my application for). And yet, I keep coming back to EA because, in a world so riddled with despair and confusion, there's something reaffirming about a group of people who want to use evidence to do measurable good. This unimpeachable trait of EA should be understood for the potential energy it holds amongst the many people like me who don't even call themselves EAs. Past any belabored point about 'big tent' movements, all I mean to say is that EA doesn't need to be so closed off. Just a little bit of communications work would go a long way.

Here's a teaser video I made to go along with the essay:

Comments

Very useful post, thanks. Improving our comms is one of our three priorities for EA Netherlands in 2024, and this will inform that work.

Out of interest:

  1. What are your other two priorities?
  2. How will you know if you've been successful in "improving your comms"? Curious to hear if you have a more specific OKR here.

Hey! 

Our other priorities for 2024 are GCR field building and investing in our volunteering programme. We'll do this alongside maintaining our more established programmes, e.g. our national EA crash course, our support for organisers around the country, and our co-working office.

In terms of measuring success, we still need to develop the strategy, so we can't yet say in detail how we will evaluate it. Broadly speaking, we want to increase awareness of, and inclination towards, effective altruism amongst proto-EAs in the Netherlands. We also want to ensure inclination remains high amongst the general public once they become aware of us. To evaluate the impact of this work, we will probably conduct surveys measuring awareness and inclination amongst proto-EAs and the general public before and after the interventions outlined in the strategy (whatever they may be) are implemented.

Of course we'll also keep an eye on basic comms metrics like newsletter subscribers, LinkedIn followers, etc. And downstream metrics like intro programme completions, etc. 

For Q1 our comms OKR is as follows:

Objective: Comms - develop our strategy (ready to be handed to a volunteer team)
 
Key Results  

  1. Get 100 survey responses for our Dutch proto-EA marketing survey by March 8th (this asks about media consumption habits, barriers faced, recommendations for media platforms/influencers, etc.).
  2. Internal publication of an analysis of the survey's results by March 15th (we're probably going to miss this deadline; in the end, we decided to rely on a volunteer for the analysis).
  3. Internal publication of a communications strategy (in the style of Rumelt) consisting of a diagnosis, a guiding policy, and a set of coherent actions by March 22nd (again, we're probably going to miss this target).
  4. Recruit a team of 3+ volunteers by March 31st to help us implement the strategy (supplementing the marketing strategist and the Google Ads marketer we've already got on the team).

Thanks James, cool to hear.

Re your final personal note - I feel a lot like you! Thanks for putting your thoughts out there.

thanks ulrik 🤝

I thought the video was excellent, and the highlights of your article were the concrete ideas and examples of good communication.

More concrete ideas please! I don't think anyone will disagree that EA hasn't been the best at branding itself, but in my experience it's easier said than done!

If people want more concrete ideas, they can hire me to do communications work.

I don't know how to be more concrete than I was in the article without working for free.

EA's culture of ideological purity and (seemingly intentional) obfuscation from the public sets itself up for backlash.

The link goes to this article itself. Curious what you were trying to link to.

fixed, thanks

"An OpenAI program director, who has very little to actually do with this larger public debate, is suddenly subpoenaed to testify in a congressional hearing where they are forced to answer an ill-tempered congress member's questions. It might go something like this"

This should be OP, not OpenAI, right?

fixed, thanks

I don't know much about EA yet, so it was nice to hear your perspective on where things could improve. I can see both sides of the coin here: being accessible helps with the distribution of information, but non-serious people aren't going to bring the rigor that's required to understand some of these complex issues.

I wonder where the middle ground is? I also wonder what changes would bring the most relief to you. A shift in culture? A shift in shareable material?

One of the broader points I'm advocating for is that the middle ground is far more stable and sizable than many in the community might think it is.

I think the 'non-serious' individual you speak of is somewhat of a straw man. If they are real, the risk of them polluting the quality of EA's work is quite small IMO. It's important to make a distinction between the archetype of a follower/fan (external comms) and a worker/creator (internal comms). A lot of EAs conflate internal and external communications.

This is a really cool topic. I wonder why there is tension. I haven't been around long enough to see it in action, but I'm getting a better sense for it as I read similar posts. Do you think there's a key cultural shift that would address the underlying issue? Do you think there's any fear (or some other emotion/rationale) about avoiding this middle ground?

Yeah, if you read the essay, you'll see it spends a lot of time speaking to both of those questions.

tldr

The fear is born from the very DNA of EA, which has its roots in avoiding the emotional irrationalities that lead to ineffective forms of altruism. The culture shift I want to see is a product of (a) acknowledging and relinquishing this fear when it's not based in reality, and (b) understanding the value proposition of good communications.

The video is interesting! I liked the demonstration at the beginning that you care more about someone's ideas when you have seen who they are. The radio-switching bit ran a little long, but otherwise it's a very good idea.

Small feedback on your essay itself: 

Even as someone interested in hearing what you had to say, I found your writing could be formatted to let me skim it more efficiently. I'd have loved it if you posted more visible TL;DRs at the start and named the sections by their conclusions rather than their guiding questions.

The teaser video worked on me as you predicted, though; props on that!

This also makes for a distinctive cover letter to the OP job, to be sure! Smart.

This is great meta-feedback - I'll be sure to include more TL;DRs at the start of my articles too. 
