There was a recent discussion on Twitter about whether global development had been deprioritised within EA. This struck a chord with some (*edit:* despite the claim in the Twitter thread being false). So:
What is the priority of global poverty within EA, compared to where it ought to be?
I am going to post some data and some theories. I'd like it if people in the comments tried to falsify them; then we'd know the answer.
- Some people seem to think that global development is lower priority than it should be within EA. Is this view actually widespread?
- Global poverty was held in very high esteem in 2020, and without further evidence we should assume it still is. In the 2020 survey, no cause area had a higher average rating (I'm eyeballing the graph) or a higher share of "top" or "near top" priority ratings. In 2020, global development was considered the highest priority by EAs in general.
- Global poverty receives the most money of any cause area from Open Phil and GWWC, according to https://www.effectivealtruismdata.com/

- The FTX future fund lists economic growth as one of its areas of interest (https://ftxfuturefund.org/area-of-interest/)
- Theory: Elite EA conversation discusses global poverty less than AI or animal welfare. What is the share of each cause area among forum posts, 80k episodes, or EA tweets? I'm sure some of this information is trivial for one of you to find. Is this theory wrong?
- Theory: Global poverty work has ossified around GiveWell and their top charities. Jeff Mason and Yudkowsky both made variations of this point. Yudkowsky's reasoning was that risk-takers hadn't been drawn to global poverty research anyway; it attracted a more conservative kind of person. I don't know how to operationalise a test of this, but maybe one of you can.
- Personally, I think that many people find global poverty uniquely compelling. It's unarguably good. You can test it. It has quick feedback loops (compared to many other cause areas). I think it's good to be in coalition with the most effective area of an altruistic space that resonates with so many people. I like global poverty as a key concern (even though it's not my key concern) because I like good coalitional partners. And longtermist and global development EAs seem to me to be natural allies.
- I can also believe that if we care about the lives of people currently alive in the developing world and have AI timelines of less than 20 years, we shouldn't focus on global development. I'm not an expert here and this view makes me uncomfortable, but conditional on short AI timelines, I can't find fault with it. In terms of QALYs, the global poor may face more risk from AI than from malnourishment. If this is the case, EA would move away from being divided by cause areas towards a primary divide of "AI soon" vs "AI later" (though deontologists might argue it's still better to improve people's lives now rather than save them from something that kills all of us). Feel free to suggest flaws in this argument.
- I'm going to seed a few replies in the comments. I know some of you hate it when I do this, but please bear with me.
What do you think? What are the facts about this?
Endnote: I predict at 50% that this discussion won't work, resolved by me in two weeks. I suspect people don't want to work together to build this sort of open-ended discussion on the forum. We'll see.
This post wanted data, and I’m looking forward to that … but here is another anecdotal perspective.
I was introduced to EA several years ago via The Life You Can Save. I learned a lot about effective, evidence-based giving and "GiveWell approved" global health orgs. I felt that EA shared the same values as the traditional "do good" community, just even more obsessed with evidence-based, rigorous measurement. I changed my donation strategy accordingly and didn't pay much more attention to the EA community for a few years.
But in 2020, I checked back in to EA and attended an online conference. I was honestly quite surprised that very little of the conversation was about how to measurably help the world's poor. Everyone I talked to was now focusing on things like AI Safety and Wild Animal Welfare. Even folks I met for 1:1s, whose bios included global health work, often told me that they were "now switching to AI, since that is the way to have real impact." Even more surprising was that the most popular arguments weren't based on measurable evidence, as GiveWell's are, but on philosophical arguments and thought experiments. The "weirdness" of the philosophical arguments was a way to signal EA-ness; lack of empirical grounding wasn't a dealbreaker anymore.
Ultimately, I misjudged what the "core" principles of EA were. Rationalism and logic were a bigger deal than empiricism. In my opinion, the old EA was defined by citing mainstream RCT studies to defend an intervention that was currently saving X lives. The current EA is defined by citing esoteric debates between Paul Christiano and Eliezer, which themselves cite EA-produced papers... all to decide which AI Safety org is actually going to save the world in 15 years. I'm hoping for a rebalance towards the old EA, at least until malaria is actually eradicated!
The original EA materials (at least the ones I first encountered in 2015 when I was getting into EA) promoted evidence-based charity, that is, making donations to causes with very solid evidence. But the formal definition of EA is equally or more consistent with hits-based charity: making donations with limited or equivocal evidence but large upside, in the expectation that you will eventually hit the jackpot.
I think the failure to separate and explain the difference between these two approaches leads to a lot of understandable confusion and anger.