A few months ago I was talking to a software engineer at Google. On paper, a dream job. But she was frustrated. She felt like she wasn't contributing enough to the world and was seriously considering putting her engineering career aside to go study psychology. A whole decade-long academic track, starting from scratch.
I told her there are actually many ways to create a massive impact with exactly the technical background she already has. So I sent her to read about it on the EA websites.
She landed on a page about longtermism and existential risk reduction. She couldn't understand why any of it was relevant to her. Here was someone with the exact profile EA says it wants to reach: technically skilled, motivated by impact, ready to act. And we opened with the most abstract, most philosophically demanding version of the pitch before she'd even encountered the basic idea that some career paths do far more good than others.
She wasn't wrong to bounce. The content wasn't written for her. It was written for someone who'd already bought the premise.
I think this is EA's core growth problem. Not the ideas. The ideas are exceptional. The problem is sequencing: we put the most demanding version of our thinking at the front door, then interpret low engagement as "people don't care" rather than "we made the entrance too narrow."
I should flag my angle here: I work professionally in direct response marketing and growth strategy for impact-driven organizations. This isn't a philosophical argument about EA's communication culture. It's a diagnostic from someone who builds funnels and conversion paths for a living.
Disclaimer: Some LLMs were used for research and copyediting.
The bridge problem
Rethink Priorities' data on how people find EA tells a clear story: active EAs disproportionately came in through a friend, a local group, or 80,000 Hours. The general public mostly heard about EA through media, and that route rarely leads to real involvement. The 2024 Pulse survey (n≈5,000 US adults) found near-zero recognition of GiveWell or 80,000 Hours outside the community. Internal metrics look great: 20–25% YoY growth, the biggest EAG ever, ~$1.2B moved through the effective giving ecosystem in 2024. The inside is thriving. The bottleneck is the bridge.
And it's not that mainstream outreach hasn't been tried. Doing Good Better was a bestseller. CEA itself acknowledges that EA "has historically under-invested in external communications" and that "by not actively advocating for or defending ourselves, we've let critics define us." But the problem was never awareness. It was what happened after awareness. Someone read about EA in the press, got curious, visited a website, and hit a wall of insider language. The books did good sequencing. The ecosystem behind them didn't.
Why does personal connection work where media doesn't? Because a friend translates. A friend doesn't say "here's a cost-effectiveness analysis comparing QALYs." A friend says "I found something that changed how I think about giving." Outsider language first, insider framework later. That's the blueprint.
The sequencing problem
In advertising, there's a basic principle: you don't put the full spec sheet in the headline. You start simple, earn attention, add complexity as the person moves deeper. "Save 5 hours a week on reporting" gets them in the door. "Enterprise-grade API with customizable webhooks" comes later.
Our external-facing content does the opposite. Compare:
Insider: "Cost-effectiveness analysis suggests your marginal dollar has significantly more impact when directed to GiveWell's top-recommended charities."

Outsider: "Some charities are literally 100x more effective than others, and there's a way to find out which."
Same idea. But the first version requires you to already accept that giving should be optimized. The second makes you curious about it.
Most external-facing touchpoints (websites, newsletters, social content) default to insider language. Fine for the community. But it means the bridge to everyone else is built with materials that only work on this side of the river.
We already know how to do this internally. GWWC's Pledge doesn't open with "optimize your giving portfolio across cause areas." It starts with a concrete commitment: give 10%. The introductory fellowships build week by week from accessible ideas to complex frameworks. The internal norm shifts (public pledging, transparent giving, career changes for impact) were adopted gradually through identity and social proof, not through a single philosophical argument. The sequencing principle is already here. It just hasn't been applied to how we talk to the outside world.
What a better bridge looks like
The first touchpoint needs to feel like "here's something surprising," not "here's what you're getting wrong." EA Forum discussion on framing found that "most people don't know how much good they could do" lands much better than "most people don't care enough." One invites curiosity. The other triggers defensiveness.
From there, the path builds gradually. "People like us think carefully about where our help goes" invites someone onto the bridge. "You should donate more effectively" shouts across the river. And the first step needs to be small: "Check one of your current donations on GiveWell." A 30-day experiment. Not the Pledge, not a worldview. Just a crack in the door. Cost-effectiveness, cause comparison, counterfactual reasoning, all of it comes after, as Stage 2. This is what the fellowships and the Pledge already do internally. The gap is doing it externally.
The School of Moral Ambition is a useful case study. Rutger Bregman's organization takes core EA ideas and packages them in outsider-first language: career change, personal purpose, pressing global problems. And it's reaching audiences that EA orgs haven't. But SMA doesn't prominently identify as EA-aligned, probably for PR reasons. If every successful outsider-facing initiative distances itself from the community, the ideas spread but the community doesn't grow. The bridge works, but it doesn't lead back anywhere.
I know the objection: simple entry points risk cause anchoring, scope neglect, identifiable victim bias. That's real. But cause anchoring is a problem when there's no Stage 2. The bigger risk isn't that someone engages with a simple message and gets stuck. It's that they never engage at all.
The people on the other side of the river aren't irrational. They're human. And they'll cross when the bridge starts on their side.

You want to send people like this to https://probablygood.org/, the generalist careers navigator.
I like the ideas, but I was almost immediately turned off by the style. Bereft of human voice. I was put off by pseudointellectual phrasing like "external-facing touchpoints", the incessant "it wasn't X, it was Y" constructions, and the sloppy metaphors.
I'm continuing to bash on about this because I'm scared of the forum becoming generic slop like Linkedin. Your ideas are fantastic and you don't need AI to package them in a bland box with a generic bow.
Strongly agree.
For me, the discussion of impartiality (first day of the intro program) and longtermism (which isn't necessary for many of the suggested action points) were moments of doubt. So was 80k's narrowing focus on transformative AI, which alienates people who don't share that worldview.
Somehow I still stuck around.
But I think many of the things EA proposes don't need people to buy the whole package, and we are missing out on impact by leading with strong philosophical stuff.
Great piece, thanks for writing it. Totally agree with your view on this!