A few months ago I was talking to a software engineer at Google. On paper, a dream job. But she was frustrated. She felt like she wasn't contributing enough to the world and was seriously considering putting her engineering career aside to go study psychology. A whole decade-long academic track, starting from scratch.
I told her there are actually many ways to create a massive impact with exactly the technical background she already has. So I sent her to read about it on the EA websites.
She landed on a page about longtermism and existential risk reduction. She couldn't understand why any of it was relevant to her. Here was someone with the exact profile EA says it wants to reach: technically skilled, motivated by impact, ready to act. And we opened with the most abstract, most philosophically demanding version of the pitch before she'd even encountered the basic idea that some career paths do far more good than others.
She wasn't wrong to bounce. The content wasn't written for her. It was written for someone who'd already bought the premise.
I think this is EA's core growth problem. Not the ideas. The ideas are exceptional. The problem is sequencing: we put the most demanding version of our thinking at the front door, then interpret low engagement as "people don't care" rather than "we made the entrance too narrow."
I should flag my angle here: I work professionally in direct response marketing and growth strategy for impact-driven organizations. This isn't a philosophical argument about EA's communication culture. It's a diagnostic from someone who builds funnels and conversion paths for a living.
Disclaimer: Some LLMs were used for research and copy-editing.
The bridge problem
Rethink Priorities data on how people find EA tells a clear story: active EAs disproportionately came in through a friend, a local group, or 80,000 Hours. The general public mostly heard about EA through media, and that route rarely leads to real involvement. The 2024 Pulse survey (n≈5,000 US adults) found near-zero recognition of GiveWell or 80,000 Hours outside the community. Internal metrics look great: 20–25% YoY growth, the biggest EAG ever, ~$1.2B moved through the effective giving ecosystem in 2024. The inside is thriving. The bottleneck is the bridge.
And it's not that mainstream outreach hasn't been tried. Doing Good Better was a bestseller. CEA itself acknowledges that EA "has historically under-invested in external communications" and that "by not actively advocating for or defending ourselves, we've let critics define us." But the problem was never awareness. It was what happened after awareness. Someone read about EA in the press, got curious, visited a website, and hit a wall of insider language. The books did good sequencing. The ecosystem behind them didn't.
Why does personal connection work where media doesn't? Because a friend translates. A friend doesn't say "here's a cost-effectiveness analysis comparing QALYs." A friend says "I found something that changed how I think about giving." Outsider language first, insider framework later. That's the blueprint.
The sequencing problem
In advertising, there's a basic principle: you don't put the full spec sheet in the headline. You start simple, earn attention, add complexity as the person moves deeper. "Save 5 hours a week on reporting" gets them in the door. "Enterprise-grade API with customizable webhooks" comes later.
Our external-facing content does the opposite. Compare:
Insider: "Cost-effectiveness analysis suggests your marginal dollar has significantly more impact when directed to GiveWell's top-recommended charities." Outsider: "Some charities are literally 100x more effective than others, and there's a way to find out which."
Same idea. But the first version requires you to already accept that giving should be optimized. The second makes you curious about it.
Most external-facing touchpoints (websites, newsletters, social content) default to insider language. Fine for the community. But it means the bridge to everyone else is built with materials that only work on this side of the river.
We already know how to do this internally. GWWC's Pledge doesn't open with "optimize your giving portfolio across cause areas." It starts with a concrete commitment: give 10%. The introductory fellowships build week by week from accessible ideas to complex frameworks. The internal norm shifts (public pledging, transparent giving, career changes for impact) were adopted gradually through identity and social proof, not through a single philosophical argument. The sequencing principle is already here. It just hasn't been applied to how we talk to the outside world.
What a better bridge looks like
The first touchpoint needs to feel like "here's something surprising," not "here's what you're getting wrong." An EA Forum discussion on framing found that "most people don't know how much good they could do" lands much better than "most people don't care enough." One invites curiosity. The other triggers defensiveness.
From there, the path builds gradually. "People like us think carefully about where our help goes" invites someone onto the bridge. "You should donate more effectively" shouts across the river. And the first step needs to be small: "Check one of your current donations on GiveWell." A 30-day experiment. Not the Pledge, not a worldview. Just a crack in the door. Cost-effectiveness, cause comparison, counterfactual reasoning, all of it comes after, as Stage 2. This is what the fellowships and the Pledge already do internally. The gap is doing it externally.
The School of Moral Ambition is a useful case study. Rutger Bregman's organization takes core EA ideas and packages them in outsider-first language: career change, personal purpose, pressing global problems. And it's reaching audiences that EA orgs haven't. But SMA doesn't prominently identify as EA-aligned, probably for PR reasons. If every successful outsider-facing initiative distances itself from the community, the ideas spread but the community doesn't grow. The bridge works, but it doesn't lead back anywhere.
I know the objection: simple entry points risk cause anchoring, scope neglect, identifiable victim bias. That's real. But cause anchoring is only a problem when there's no Stage 2. The bigger risk isn't that someone engages with a simple message and gets stuck. It's that they never engage at all.
The people on the other side of the river aren't irrational. They're human. And they'll cross when the bridge starts on their side.

I really like the ideas but I was turned off by the style. Bereft of human voice. The almost pseudointellectual phrasing like "external-facing touchpoints", the repetitive "it wasn't X, it was Y", and sloppy metaphors. AI wrote this.
I'm continuing to highlight this issue because I'm scared of the forum becoming the next generic slopfest like LinkedIn. The ideas in this post are fantastic, but I don't think it's good for the forum or the community to have AI writing posts. It's a shame for great ideas and great posts to be flattened and diluted by the hollow voice of AI.
Hi, thanks for the feedback.
You’re right that I used AI tools, and I did mention that transparently in the post itself. I mainly use them to help with research and copy-editing, but the ideas are my own.
I appreciate the point about the style as well. I’ll try to edit more carefully next time 🙂
Thanks, I appreciate the response and the comment so much. I still think it's best to write things ourselves and then let AI do some editing (even if extensive). That way we retain at least some of our own voice. But I know there's a wide range of views on this, as I saw in my poll (it was about 60/40 against AI writing most of a post).
Thanks for taking this so well, Anna!
I felt a similar way - I see so much AI text online that I actually struggle to read it now. However, I also see that a lot of other readers don't have this reaction, so take this with a pinch of salt.
If you're looking for tips at all, I'd recommend:
a) Taking a post like this as your penultimate draft, and then writing a much shorter post in your own words based on it. OR...
b) Making sure your system prompt contains a distilled version of this page as a 'what not to do'. This is the quickest way to ensure your text doesn't come across as too AI-written.
Also - thanks for disclosing the LLM use, that made me trust the content much more.
Quick tip: just run your draft through Claude and ask it to either give feedback in the style of an EA Forum user or, if you're short on time, ask it to rewrite it in the style that EA Forum readers prefer (and make sure you have plenty of reasoning transparency).
You want to send people like this to https://probablygood.org/, the generalist careers navigator.
Ah yes, that makes sense 😄 Probably Good does seem like a great fit for people in that situation.
I agree with lots of this, to the extent that I’m in the process of starting a new blog/newsletter with a view to reaching a new audience (online sports nerds like me) with a new frame (analytical principles you already know work in sport also unlock hidden value in other - more important - domains).
(I’m Chief of Staff at CEA and co-authored some of the CEA posts linked in yours. I’m currently on parental leave, so I'm not directly involved in CEA’s other ongoing marketing and comms efforts; I do think things like our stories campaign were working along the lines you recommend.)
That sounds like a really interesting angle! The sports analogy seems like a very natural bridge to analytical thinking. I’d be curious to see how it lands with that audience.
And yes, I did see the stories campaign and thought it was a really good direction.
Great piece, thanks for writing it. Totally agree with your view on this!
Thanks! I’m really glad to hear that, especially coming from someone working on similar things :)
Strongly agree.
For me, the discussion of impartiality (on the first day of the intro program) and longtermism (which isn't necessary for many of the suggested action points) were moments of doubt. So was 80k narrowing its focus to transformative AI, which alienates people who don't agree with that worldview.
Somehow I still stuck around.
But I think many of the things EA proposes don't need people to buy the whole package, and we are missing out on impact by leading with strong philosophical stuff.
I relate to this. Personally I feel more drawn to work on global poverty and animal welfare, which makes the impact more immediately tangible to me.
I can also understand why some people feel a bit confused or alienated when the framing focuses heavily on more abstract or long-term philosophical questions while there is still so much visible suffering in the world today.
I agree that many of the actions EA encourages don’t actually require people to buy into the entire philosophical package, and that we might reach more people if we sometimes led with the concrete problems and solutions.
For me it's even more than what you say. I was thinking that even for most people working on AI or bio risk, the threats usually feel quite real on a scale of decades, and they could be personally affected. The numbers may change, but I think for most people working in EA cause areas, their work is well justified without appealing to impartiality (radical empathy would be enough, and it's less demanding) or longtermism.
Thanks for the useful post!
You may also be interested in our empirical studies on framing EA and longtermism.
We do, indeed, find that "longtermism" is distinctively unpopular as a framing. But messaging focused on global catastrophic risk reduction, or on specific risks such as AI safety, performed quite well.
We're actually working on a new study examining EA vs AI framing specifically (including differences in who gets recruited by such framings). We'd welcome input on specific framings / elements to test.
Thanks, this is really interesting! I’ve also seen similar dynamics where “longtermism” as a label can be a bit off-putting, while framing things around concrete risks seems to resonate much more.
Strongly agree and can personally relate to this post. A principle in marketing that always stood out for me is "You are not your audience", which imo summarizes this post perfectly. There is so much growth potential for this community if we apply that principle more often.
Yes, exactly! That principle captures a big part of what I was trying to point to. It’s very easy for communities like EA to forget that we’re not actually the audience we’re trying to reach 🙏