Here is my entry for the EA criticism contest being held this month. Originally, this essay was linked to the EA forum by someone else yesterday, but that post was eventually deleted (not sure why). So I'm personally reposting it now, since the contest guidelines encourage engaging with the EA forum.
A note: This is very much a strident critical piece about the utilitarian core of EA, so know that going in. I definitely do not expect the average reader here to agree with my points. My hope is that the essay is helpful or interesting because it shows how a critic might think about the movement, especially how they can be put off by the inevitable "repugnancy" of utilitarianism, and, ideally, that you might walk away more favorable to the idea that diluting the utilitarianism out of EA would help broaden the scope of the movement and make it appeal to more people. So please "judge" the piece along those lines. I expect criticism in response, of course. (However, I won't be posting many replies, just because this is your space and I don't want to bogart it. I may reply occasionally if I think it'd be especially helpful/relevant.)
Here's an excerpt to get a sense of it:
How can the same moral reasoning be correct in one circumstance and horrible in another? E.g., while the utilitarian answer to the trolley problem is considered morally right by a lot of people (implying you should switch the tracks, although it's worth noting that even then plenty disagree), the surgeon of the well-known thought experiment who slits innocent throats in alleys to save several other patients strikes the vast majority of people as morally wrong, even though the two cases rest on the same logic. Note that this is exactly the same pattern as our intuitions in the shallow pond example. Almost everyone agrees that you should rescue the child, but when that same utilitarian logic is applied to many more decisions instead of just that one, our intuitions shift to finding the inevitable human factory-farms repugnant. This is because utilitarian logic is locally correct, in some instances, particularly in low-complexity ceteris paribus set-ups, and such popular examples are what make the philosophy attractive and have spread it far and wide. But the moment the logic is extended to similar scenarios with slightly different premises, or the situation itself complexifies, or the scope of the thought experiment expands to encompass many more actions instead of just one, suddenly you are right back at some repugnant conclusion. Such a flaw is why the idea of "utility monsters" (originally introduced by Robert Nozick) is so devastating for utilitarianism: they take us from our local circumstances to a very different world, one in which monsters derive more pleasure and joy from eating humans than humans suffer from being eaten, and most people would find the pro-monsters-eating-humans position repugnant.
To give a metaphor: Newtonian physics works really well as long as all you're doing is approximating cannonballs and calculating weight loads and things like that. But it is not so good for understanding the movement of galaxies, or what happens inside a semiconductor. Newtonian physics is "true" as long as the situation is constrained and simple enough for it to apply; so too with utilitarianism. This is the etiology of the poison.
Thanks! Your summaries are very helpful. Yes, I agree with the first two as a summary of my beliefs. However, for (3) I mostly agree with the first sentence, but disagree with the second.
This is because in-principle objections to utilitarianism do have the potential to affect the altruistic work that EA does. Indeed, there's a sense in which in-principle concerns impact all the in-practice ones. E.g., let's say there are indeed qualitative moral differences, as (2) might imply. If so, then donating enough money to charity to save, in expectation, a life could very well not be qualitatively equivalent to jumping in to save a child from drowning in a pond; it might be merely quantitatively equivalent. That is, jumping in to save the drowning child might be a morally heroic act that it's any adult's duty to perform, while donating, though still admirable, is something one has far less of a duty to do. And if there is a qualitative difference between saving a child from drowning and donating enough to charity to save a life in expectation, that calls into question whether the entire motto of EA, maximizing the good, is accomplished by the sort of secular tithing that makes up the core of its in-practice operations.

This is what's behind my suggestion (I probably should have made it explicit) to continue shifting EA away from purely utilitarian causes and toward much broader ones, like promoting "longtermism" or even just cool projects that no one else is doing and that have little to zero utilitarian value. I very much agree that this piece lacks specifics on how to do that (I think I glibly suggest mining an asteroid), and I could see the lack of specificity as a valid criticism of it, although I also think that the level of specificity of "move X dollars here" might be a somewhat high bar.