Epistemic Status: I think this is a real issue, though I think its importance varies significantly depending on a newcomer's introductory experience and facilitator.
TLDR
- EA introductions present the movement as cause-neutral, but newcomers discover that most funding and opportunities actually focus on longtermism (especially biosecurity and AI)
- This creates a "bait and switch" problem where people join expecting balanced cause prioritization but find stark funding hierarchies
- The solution is honest upfront communication about EA's current priorities rather than pretending to be more cause-neutral than we actually are
- We should update intro fellowship curricula to include transparent data about funding distribution and encourage open discussion about whether EA's shift toward longtermism represents rational updating or cultural bias
1. The Problem (Personal Experience)
In EA Bath, we have a group member who's exactly the kind of person you'd want in EA - agentic, impact-minded, thoughtful about doing good. They had their first encounter with the broader EA world at the Student Summit in London last month, and came back genuinely surprised. They're interested in conservation and plant biology, but the conference felt like AI dominated everything - something like half the organisations there were AI-focused. They left wondering whether EAs as a community actually have much to offer someone like them.
This got me thinking about my own approach to introducing EA. When I'm tabling at university, talking to students walking by, I instinctively mention climate change and pandemics before AI. I talk about cause areas broadly, emphasising how we think carefully about cause prioritisation. But my group member's experience made me realise there's a gap - I'm presenting the concept of cause prioritisation without being transparent about just how stark the funding disparities actually are between different causes.
This isn't just about the group member, or about my particular approach to outreach. I think there's a systematic pattern where people interested in areas like conservation, biodiversity, or education policy in developed countries discover that EA has relatively little interest in funding work, or offering opportunities, in their domains. EA introductions do discuss cause prioritisation, but they don't make it clear enough that there are massive disparities in resources and attention between different cause areas. The result is that newcomers often feel confused or sidelined when they discover how dramatically unequal the actual allocation of EA resources really is.
2. The Evidence
My group member's experience reflects a broader pattern supported by data showing that EA's actual priorities don't match how we typically present ourselves to newcomers.
Conference and organisational focus
At the 2025 Student Summit, roughly half the organisations were AI-focused. At EAG London, it was a similar split - about 50% of organisations focusing on AI or AI plus biosecurity, and 50% on everything else. For someone new to EA - the audience this event was aimed at - these events provide their first concrete sense of what the community actually works on. I liked the Student Summit and rated it highly in my feedback. I also appreciate that the organisers reached out to a diverse range of organisations; the reality is simply that the organisations who said yes were focused on existential risks.
The engagement effect
Perhaps most tellingly, the 2024 EA Survey revealed a stark pattern: among highly engaged EAs, 47% most prioritised only a longtermist cause while only 13% most prioritised a neartermist cause. Among less engaged EAs, the split was much more even - 26% longtermist vs 31% neartermist. This suggests that as people become more involved in EA, they systematically shift toward longtermist priorities. But newcomers aren't told about this progression.
Community dissatisfaction
The 2024 EA survey found that the biggest source of dissatisfaction within the community was cause area prioritisation, specifically the perceived overemphasis on longtermism and AI. If existing EAs are feeling this tension, imagine how much stronger it must be for newcomers who weren't expecting it.
Ideological tendencies
As Thing of Things argues in "What Do Effective Altruists Actually Believe?", EA has distinctive ideological patterns that go beyond our stated principles - tendencies toward consequentialism, rationalism, and specific approaches to politics and economics. These aren't mentioned in most EA introductions, but they significantly shape how the community operates and what work gets prioritised. That piece reflects the views of one person, but somewhat similar patterns are discussed by Peter Hartree and Zvi.
Career advice concentration
This pattern extends to career guidance as well. 80,000 Hours, long considered EA's flagship career advice organisation, has pivoted toward AI-focused career recommendations. This shift has been significant enough that EA group organisers are now questioning whether they should still recommend 80k's advising services to newcomers, as discussed in recent forum posts. While I feel 80k is appropriately transparent about this focus on their website, someone who hears about "80,000 Hours career advice" at an EA intro talk and then explores their resources or signs up for their newsletter might be surprised to find most content centred on AI careers rather than the broader cause-neutral career guidance they might have expected.
3. Why This Matters + What Honest Introductions Could Look Like
This transparency gap creates real problems for both newcomers and the EA community itself.
The "bait and switch" problem
When someone gets interested in EA through cause-neutral principles and then discovers the stark funding hierarchies, it feels misleading. My group member's experience (probably) isn't unique - people enter thinking they're joining a movement that evaluates causes objectively, then find that some causes have been pre-judged as dramatically more important. This creates confusion and, potentially, disillusionment.
We lose valuable perspectives
People passionate about education policy, conservation, or other lower-prioritised areas often drift away from EA when they realise how little space there is for their interests. But these are exactly the people who might bring fresh insights, identify overlooked opportunities, or build bridges to other communities. When we're not upfront about cause prioritisation, we're selecting for people who happen to align with existing EA priorities rather than people who might expand or challenge them. (I'll address the obvious objection - that being more transparent might worsen this selection bias - in section 4.)
What honest introductions could look like
The solution isn't to present every possible cause area and their relative funding levels - that would be overwhelming and impossible to keep current. Instead, we can be earnest about the realities without being exhaustive.
Chana Messinger writes compellingly about the power of earnest transparency in community building. Rather than scripted interactions that hide uncomfortable realities, she suggests being upfront: "I'm engaged in a particular project with my EA time... I'm excited to find people who are enthused by that same project and want to work together on it. If people find this is not the project for them, that's a great thing to have learned."
For individual conversations, this might mean saying something like: "EA principles suggest we should be cause-neutral, but in practice, most EA funding and career opportunities currently focus on AI safety and other longtermist causes. There's ongoing debate about this prioritisation, and we welcome people interested in other areas, but you should know what the current landscape looks like."
A concrete, scalable approach - Updating the Intro Fellowship
Rather than relying on individual organisers to navigate these conversations perfectly, we could update the intro fellowship curriculum itself. This might mean creating a forum post or resource that presents the evidence from this post - showing the actual funding landscape, conference composition, and survey data on cause prioritisation. We could be transparent about EA's historical shift toward longtermism and AI risk over the past decade, presenting it as an interesting case study in how social movements evolve rather than something to hide.
This would require someone to update the resource annually as new data becomes available, but it would ensure that everyone going through the fellowship gets the same honest picture of EA's current priorities. It would also make the information feel less like a personal opinion from their local organiser and more like objective information about the movement. Such a resource could sit alongside the existing materials about EA principles - not replacing them, but providing the context that those principles currently operate within a community with particular priorities and funding patterns.
4. Objections & Responses
"This creates selection bias for people who already agree with EA priorities"
This is my main worry, and it directly relates to the point in section 3 about losing valuable perspectives. If we're explicit about AI/longtermism focus, we might inadvertently filter for people who already lean that way, losing the diversity of perspectives we claim to value. But I think we're already selecting for these people - we're just doing it inefficiently, after they've invested time in learning about EA. Better to be honest upfront and actively encourage people with different interests to engage critically rather than pretend EA is more cause-neutral than it actually is.
"There isn't time for this level of nuance"
In a two-minute elevator pitch or an Instagram post, there genuinely isn't space to explain funding hierarchies and ideological tendencies. But I think this misses where the transparency matters most. The crucial moments aren't the initial hook, but the follow-up conversations, the intro fellowship materials, and the first few meetings someone attends. These are when people are deciding whether to get seriously involved, and when honest conversations about cause prioritisation could prevent later disappointment.
"This will put people off who might otherwise contribute"
Possibly. Some people who would have stayed if they thought EA was more cause-neutral might leave earlier if they know about the longtermism dominance. But this cuts both ways - we might also attract people who are specifically interested in the causes EA actually prioritises, rather than people who join under false pretences and leave disappointed later.
"We can't possibly mention every cause area people might care about"
True. The goal isn't comprehensive coverage but basic honesty about the current distribution of resources and attention. We can't predict every cause area someone might be passionate about, but we can acknowledge that EA's current priorities are more concentrated than our principles might suggest.
The alternative to these uncomfortable conversations isn't neutrality - it's deception by omission.
5. Conclusion
EA has powerful principles and valuable insights about doing good effectively. But when we introduce these ideas without being transparent about how they're actually applied in practice, we create unnecessary confusion and missed opportunities.
My group member's surprise at the Student Summit could have been avoided with more honest upfront communication about where EA resources and attention actually go. They might still have chosen to engage with EA - plenty of people interested in conservation find value in EA principles and community even if they don't work on the highest-prioritised cause areas. But they could have made that choice with full information rather than discovering it by accident.
The solution isn't to abandon cause prioritisation or pretend all causes are equally prioritised in EA. It's to be honest about what EA actually looks like while remaining open to the people who might help us do better. This means acknowledging funding hierarchies, explaining how cause prioritisation works in practice, and creating space for productive disagreement rather than surprise and disillusionment.
Chana Messinger is right that earnestness is powerful. We can trust people to handle honest information about EA's realities. The alternative - presenting EA as more cause-neutral than it actually is - serves no one well.
Massive thanks to Ollie Base for giving the encouragement and feedback that got me to write this :)
Disclaimer: This post was written with heavy AI assistance (Claude), with me providing the ideas, evidence, personal experiences, and direction while Claude helped with structure and drafting. I'm excited about this collaborative approach to writing and curious whether it produces something valuable, but I'm very aware this may read as AI-generated. I'd be keen to hear feedback on both the content and this writing process - including whether the AI assistance detracts from the argument or makes it less engaging to read. The conversation I used to write this is here: https://claude.ai/share/e75cd51a-0513-4447-a264-ded89bc1189a.
Thanks for your post. I broadly agree with your main point, and I really value the emphasis on transparency and honesty when introducing people to EA.
That said, I think the way you frame cause neutrality doesn’t quite match how most people in the community understand it.
To me, cause neutrality doesn’t mean giving equal weight to all causes. It means being open to any cause and then prioritizing them based on principles like scale, neglectedness, and tractability. That process will naturally lead to some causes getting much more attention and funding than others, and that’s not a failure of neutrality, but a result of applying it well.
So when you say we’re “pretending to be more cause-neutral than we are,” I think that’s a bit off. I get that this may sound like semantics, and I agree it's a problem if people are told EA treats all causes equally, only to later discover the community is heavily focused on a few. But that’s exactly why I think a principle-first framing is important. We should be clear that EA takes cause neutrality seriously as a principle, and that many people in the community, after applying that principle, have concluded that reducing catastrophic risks from AI is a top priority. And that this conclusion might change with new evidence or reasoning, but the underlying approach stays the same.
Strong agree - cause neutrality should not at all imply an even spread of investment. I just in fact do think AI is the most pressing cause, according to my values and empirical beliefs.
Yes, I agree.
The OP seems to talk about cause-agnosticism (uncertainty about which cause is most pressing) or cause-divergence (focusing on many causes).
But EA is not cause agnostic OR cause neutral atm
"EA principles suggest we should be cause-neutral, but in practice, most EA funding and career opportunities currently focus on AI safety and other longtermist causes."
I'm not sure this is technically true, with GiveWell still having the biggest pot of "EA-ish" money every year. You could also argue that high-impact Global Development jobs in general offer far more (and easier to access) career opportunities than all AI/GCR jobs combined.
In reality, much GiveWell money (rightfully IMO!) goes to non-EA orgs doing highly impactful work, so it's interesting where that gets categorised. Over the last few years, though, EA orgs (mainly Charity Entrepreneurship) have increasingly featured among the GiveWell-funded organisations too.
I discovered EA well after my university years, which maybe gives me a different perspective. It sounds to me like both you and your group member share a fundamental misconception of what EA is, and what questions are the central ones EA seeks to answer. You seem to be viewing it as a set of organizations from which to get funding/jobs. And like, there is a more or less associated set of organizations which provide a small number of people with funding and jobs, but that's not central to EA, and if that is your motivation for being part of EA, then you've missed what EA is fundamentally about. Most EAs will never receive a check from an EA org, and if your interest in EA is based on the expectation that you will, then you are not the kind of person we should want in EA.
EA is, at its core, a set of ideas about how we should deploy whatever resources (our time and our money) we choose to devote to benefiting strangers. Some of those are object-level ideas (we can have the greatest impact on people far away in time and/or space), some are more meta-level (the ITN framework), but they are about how we give, not how we get. If you think that you can have more impact in the near term than the long term, we can debate that within EA, but ultimately, as long as you are genuinely trying to answer that question and base your giving decisions on it, you are doing EA. You can allocate your giving to near-term causes and that is fine.
But if you expect EAs who disagree with you to spread their giving in some even way, rather than allocating their giving to the causes they think are most effective, then you are expecting those EAs to do something other than EA. EA isn't about spreading giving in any particular way across cause areas; it is about identifying the most effective cause areas and interventions and allocating giving there. The only reason we have more than one cause area is because we don't all agree on which ones are most effective.
Talented people getting jobs or funding is a major theory of impact for EA meta orgs, and that objective can be seen in their activities. I don't think it is problematic that people seem to be viewing career-related stuff as a major part of the EA meta. Indeed, I would submit that we want people to know that EA has career opportunities. For instance, if someone is interested in AI safety, we want them to know that they could find a position or funding to work in that area. Deciding to engage involves opportunity costs, so it's reasonable for people to consider whether community involvement would help them achieve their altruistic goals.
Given that, we also need to take reasonable steps so that "[p]eople passionate about education policy, conservation, or other lower-prioritised areas" know fairly early that certain important forms of engagement are not realistically open to people working in these areas. And the post describes more than the absence of funding or jobs -- if people feel that they won't get value out of going to conferences because too much of the content is geared toward a few cause areas, they should be made aware of that in advance as well.
But that isn't true, never has been, and never will be. Most people who are interested in AI safety will never find paid work in the field, and we should not lead them to expect otherwise. There was a brief moment when FTX funding made it seem like everyone could get funding for anything, but that moment is gone, and it's never coming back. The economics of this are pretty similar to a church - yes, there are a few paid positions, but not many, and most members will never hold one. When there is a member who seems particularly well suited to the paid work, yes, it makes sense to suggest it to them. But we need to be realistic with newcomers that they will probably never get a check from EA, and the ones who leave because of that weren't really EAs to begin with. The point of a local EA org, whether university-based or not, isn't to funnel people into careers at EA orgs; it's to teach them ideas that they can apply in their lives outside of EA orgs. Let's not lose sight of that.
I really like this post and agree with the message.
I think this problem is exacerbated by the fact that AI safety does not seem to have a larger ecosystem of which EA is a part. What % of AI safety funding/work is EA-affiliated? I would guess a very large number. Contrast this with GHD, which has a much larger ecosystem of funders/orgs/thought leadership of which EA is just a part. It is easier to think of EA as a valuable contributor to GHD, while it feels like AI safety is JUST EA. A natural consequence of this is that the choice to prioritize AI safety is viewed as "radical", seeing as no one else seems to really agree with EAs about it (not that I think EA is wrong, but I think this affects perceptions).
Say you had a random person who is concerned with the world and wants to have a positive impact, but has little in the way of priors about what causes should be prioritized. If we were to rank different EA causes in order by how "believable" they are (1 being most believable), I think it would look something like:
When the community puts the least believable/most radical cause area front and center (again, not necessarily disagreeing with the analysis that leads to this), it becomes more difficult to bring people into the movement.
When I talk to people about EA as it relates to GHD and Animal Welfare, people tend to be somewhat willing to agree with the points. I haven't had any luck at all talking about longtermism/AI safety.
This all leads to a point I've considered lately - I think EA as a whole would benefit from AI safety/longtermism "spinning" off into its own thing which EA still contributes to. I know that this isn't really a thing that can be decided - the only way this gets accomplished is if AI safety can attract non-EA funding/support/attention, which is of course a very difficult task!
Agree with your read of the situation, and I wish that the solution could be for EA to actually be cause neutral… but if that’s not on offer then I agree the intro material should be more upfront about that.
Are there events or discussion boards that aren't organized by the funders with an AI focus? A list of those might be a more "cause-neutral EA space" to initially explore, rather than EAG and listening to 80k hours, which will be dominated by AI. Perhaps people conducting EA introduction events could anchor on such a set of spaces and events, and categorize 80k hours in particular as an "AI org", just one org in one of many parts of the "EA ecosystem". That way, if newcomers start listening to 80k hours, or go to an EAG, they know they are really tapping into a single-cause AI space.
To a certain extent, I think that local/city groups might fit this description. The EA communities in NYC, DC, and Chicago each have their own Slack workspace, and they also do in-person events. For people not in big cities, EA Anywhere's Slack workspace offers a discussion space. None of those are perfect substitutes for the EA Forum, but my vague impression is that each of them is less AI-focused than the EA Forum. There is also an EA Discord, but I haven't interacted with that much, so I can't speak to its style or quality.
As far as events go, there is an online book club in the EA Anywhere Slack workspace (full disclosure: I organize it, and I think it is great).
Depending on what you are looking for, there is also an Animal Advocacy Forum, although it is far less active than the EA Forum.
(I might add more object level thoughts later, but in the meantime -) thanks for writing this, I think it's an important discussion to have, and for people to have shared understanding about.