Here is my entry for this month's EA criticism contest. The essay was originally linked to the EA Forum by someone else yesterday, but that post was eventually deleted (I'm not sure why), so I'm reposting it myself, since the contest guidelines encourage engaging with the EA Forum.
A note: This is very much a strident critical piece about the utilitarian core of EA, so know that going in. I definitely do not expect the average reader here to agree with my points. My hope is that the essay is helpful or interesting in showing how a critic might think about the movement, especially how one can be put off by the inevitable "repugnancy" of utilitarianism, and, ideally, that you might walk away more favorable to the idea that diluting the utilitarianism out of EA would broaden the scope of the movement and make it appeal to more people. So please "judge" the piece along those lines. I expect criticism in response, of course. (However, I won't be posting many replies, since this is your space and I don't want to bogart it. I may reply occasionally if I think it would be especially helpful or relevant.)
Here's an excerpt to get a sense of it:
How can the same moral reasoning be correct in one circumstance and horrible in another? E.g., while the utilitarian answer to the trolley problem is considered morally right by a lot of people (implying you should switch the tracks, though it's worth noting that even then plenty disagree), the thought experiment of the surgeon slitting innocent throats in alleys is morally wrong to the vast majority of people, even though the two are based on the same logic. Note that this is exactly the same as our intuitions in the shallow pond example. Almost everyone agrees that you should rescue the child, but when this same utilitarian logic is applied to many more decisions instead of just that one, our intuitions shift to finding the inevitable human factory-farms repugnant. This is because utilitarian logic is locally correct in some instances, particularly in low-complexity ceteris paribus set-ups, and such popular examples are what make the philosophy attractive and have spread it far and wide. But the moment the logic is extended to similar scenarios with slightly different premises, or the situation itself grows more complex, or the scope of the thought experiment expands to encompass many more actions instead of just one, you are suddenly right back at some repugnant conclusion. This flaw is why the idea of "utility monsters" (originally introduced by Robert Nozick) is so devastating for utilitarianism: they take us from our local circumstances to a very different world, one in which monsters derive more pleasure and joy from eating humans than humans suffer from being eaten, and most people would find the pro-monsters-eating-humans position repugnant.
To give a metaphor: Newtonian physics works really well as long as all you're doing is approximating cannonballs and calculating weight loads and things like that. But it is not so good for understanding the movement of galaxies, or what happens inside a semiconductor. Newtonian physics is "true" if the situation is constrained and simple enough for it to apply; so it is with utilitarianism. This is the etiology of the poison.
To be honest, it did come off this way to me as well. The majority of the piece reads like an essay on why you think utilitarianism sucks, and this post itself frames it as a criticism of EA's "utilitarian core". I vaguely remember the point that EA is just ordinary do-gooding once you strip this away as feeling like a side note, though I can reread it when I get a chance in case I missed something.
To address the point, though, I'm not sure it works either, and I feel like the rest of your piece undermines it. Lots of things EA focuses on, like animal welfare and AI safety, are weird, or at least weird combinations, and so are plenty of its ways of thinking about and approaching questions. These are consistent with utilitarianism, but they aren't specifically tied to it. Indeed, you seem drawn to some of these yourself, and no one is going to accuse you of being a utilitarian after reading this. I have to imagine that the fact that you do think something valuable and unique is left behind when you don't just view EA as utilitarianism is at least partly behind your suggestion that we "dilute the poison" all the way out. If we have already "diluted the poison" out, I'm not sure what's left to argue.
The point about how the founders of the movement have generally been utilitarians, or utilitarian-sympathetic, doesn't strike me as enough to establish your claim either[1]. If you mean that the movement is utilitarian at its core in the sense that utilitarianism motivated many of its founders, this is a good point. If you mean that it has a utilitarian core in the sense that it is "poisoned" by the implications of utilitarianism you are worried about, this doesn't seem enough to get you there. These senses just seem crucially different to me. I also think it proves far too much to cite the influence of Famine, Affluence, and Morality: non-utilitarian liberals regularly cite On Liberty, and non-utilitarian vegans regularly cite Animal Liberation. Good moral philosophers generally don't justify their points from first principles, but rather from the minimum premises necessary to agree with them on whatever specific point they're arguing.
I also think it's overstated. Singer is certainly a utilitarian, but MacAskill overtly does not identify as one, even though he is sympathetic to the theory and, I think, gives it plurality credence relative to other similarly specific theories; Ord, I believe, is the same. Bostrom overtly does not identify with it. Parfit moved around a lot over his career, but by the time of EA I believe he was either a prioritarian or a "triple theorist", as he called it. Yudkowsky is a key example of yours, but from his other writing he seems like a pluralist consequentialist at most to me. It's true that, as your piece points out, he defends pure aggregation, but so do tons of deontologists these days, because it turns out that once you get specific about your alternative, it becomes very hard not to be a pure aggregationist. ↩︎