We just released a podcast episode in which I discuss what the core arguments for effective altruism actually are, and the potential objections to them.
I wanted to talk about this topic because I think many people – even many supporters – haven’t absorbed the core claims we’re making.
As a first step in tackling this, I think we could better clarify what the key claim of effective altruism is, and what the arguments for that claim are. Doing this would also help us improve our own understanding of effective altruism.
The most relevant existing work is Will MacAskill's introduction to effective altruism in the Norton Introduction to Ethics, though it argues that we have a moral obligation to pursue effective altruism, and I wanted to formulate the argument without appealing to moral obligation. What I say is also in line with MacAskill's definition of effective altruism.
I think a lot more work is needed in this area, and don’t have any settled answers, but I hoped this episode would get discussion going. There are also many other questions about how best to message effective altruism after it's been clarified, which I mostly don't get into.
In brief, here’s where I’m at. Please see the episode to get more detail.
The claim: If you want to contribute to the common good, it’s a mistake not to pursue the project of effective altruism.
The project of effective altruism is defined as the search for the actions that do the most to contribute to the common good (relative to their cost). It can be broken into: (i) an intellectual project, a research field aimed at identifying these actions; and (ii) a practical project, putting these findings into practice to have an impact.
I define the ‘common good’ in the same way Will MacAskill defines the good in “The definition of effective altruism”, as what most increases welfare from an impartial perspective. This is only intended as a tentative and approximate definition, which might be revised.
The three main premises supporting the claim of EA are:
- Spread: There are big differences in how much different actions (with similar costs) contribute to the common good.
- Identifiability: We can find some of these high-impact actions with reasonable effort.
- Novelty: The high-impact actions we can find are not the same as what people who want to contribute to the common good typically do.
The idea is that if some actions contribute far more than others, we can find those actions, and they're not the same as what we're already doing, then – if you want to contribute to the common good – it's worth searching for these actions. Otherwise, you're achieving less for the common good than you could, and searching would let you better achieve your stated goal.
Moreover, we can say that not pursuing the project of effective altruism is more of a mistake the greater the degree to which each of the premises holds. For instance, the greater the degree of spread, the more you're giving up by not searching (and the same goes for the other two premises).
We can think of the importance of effective altruism quantitatively as how much your contribution is increased by applying effective altruism compared to what you would have done otherwise.
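One way to put this a little more formally (a rough sketch of my own; the notation below is illustrative rather than anything from the episode):

\[
\text{Gain from EA} \;=\; \frac{V(a_{\text{EA}})}{V(a_{\text{default}})}
\]

where \(V(a)\) is how much action \(a\) contributes to the common good per unit cost, \(a_{\text{EA}}\) is the best action you can identify by searching, and \(a_{\text{default}}\) is what you would have done otherwise. On this framing, spread is what makes the ratio potentially large, identifiability is what makes \(a_{\text{EA}}\) findable with reasonable effort, and novelty is the claim that \(a_{\text{EA}}\) differs from \(a_{\text{default}}\); the search is worthwhile when the ratio, net of search costs, exceeds one.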
Unfortunately, there’s not much rigorously written up about how much actions differ in effectiveness ex ante, all things considered, and I’m keen to see more research in this area.
In the episode, I also discuss:
- Some broad arguments for why the premises seem plausible.
- Some potential avenues to object to these premises – I don’t think these objections work as stated, but I’d like to see more work on making them better. (I think most of the best objections to EA are about EA in practice rather than the underlying ideas.)
- Common misconceptions about what EA actually is, and some speculation on how they arose.
- A couple of rough thoughts on how, given these issues, we might improve how we message effective altruism.
I’m keen to see people running with developing the arguments, making the objections better, and thinking about how to improve messaging. There’s a lot of work to be done.
If you're interested in working on this, I may be able to share some draft documents with you that go into a little more detail.
This isn't much more than a rotation (or maybe just a rephrasing), but:
When I offer a 10-second-or-less description of Effective Altruism, it is hard to avoid making it sound platitudinous. Things like "using evidence and reason to do the most good", or "trying to find the best things to do, then doing them", are things I can imagine the typical person nodding along with, but then wondering what the fuss is about ("Sure, I'm also a fan of doing more good rather than less good - aren't we all?"). I feel I need to elaborate with a distinctive example (e.g. "I left clinical practice because I did some amateur health econ on how much good a doctor does, and thought I could make a greater contribution elsewhere") for someone to get a good sense of what I am driving at.
I think a related problem is that the 'thin' version of EA can seem slippery when engaging with those who object to it. "If indeed intervention Y was the best thing to do, we would of course support intervention Y" may (hopefully!) be true, but is seldom the heart of the issue. I take it that most common objections are not against the principle but the application (I also suspect this reply may inadvertently annoy an objector, given it can paint them as - bizarrely - 'preferring less good to more good').
My best try at what makes EA distinctive is a summary of what you spell out with spread, identifiability, etc.: that there are very large returns to reason for beneficence (maybe 'deliberation' instead of 'reason', or whatever). I think the typical person does "use reason and evidence to do the most good", and can be said to be doing some sort of search for the best actions. I think the core of EA (at least the 'E' bit) is the appeal that people should do a lot more of this than they would otherwise - as, if they do, their beneficence would tend to accomplish much more.
Per the OP, motivating this is easier said than done. The best case is global health, as there is a lot more (common-sense) evidence one can point to about some things being a lot better than others, and these object-level matters, which a hypothetical interlocutor is fairly likely to accept, also offer support for the 'returns to reason' story. For most other cause areas, the motivating reasons are typically controversial, and the (common-sense) evidence is scant-to-absent. Perhaps the best moves here would be pointing to these as salient considerations which plausibly could dramatically change one's priorities, and so exploring to uncover them is better than exploiting after more limited deliberation (but cf. cluelessness).
Hi Greg,
I agree that when introducing EA to someone for the first time, it's often better to lead with a "thick" version, and then bring in the thin version later.
(I should have maybe better clarified that my aim wasn't to provide a new popular introduction, but rather to better clarify what "thin" EA actually is. I hope this will inform future popular intros to EA, but that involves a lot of extra steps.)
I also agree that many objections are about EA in practice rather than the 'thin' core ideas, and that it can be annoying to retreat back to thin EA, and that it's often ...