Here is my entry for the EA criticism contest being held this month. This essay was originally linked to the EA forum by someone else yesterday, but that post was eventually deleted (I'm not sure why), so I'm reposting it personally now, since the contest guidelines encourage engaging with the EA forum.
A note: This is very much a strident critical piece about the utilitarian core of EA, so know that going in. I definitely do not expect the average reader here to agree with my points. My hope is that the essay is helpful or interesting as a window into how a critic thinks about the movement, especially how a critic can be put off by the inevitable "repugnancy" of utilitarianism, and, in the ideal situation, that you walk away more favorable to the idea that diluting the utilitarianism out of EA helps broaden the scope of the movement and makes it appeal to more people. So please "judge" the piece along those lines. I expect criticism in response, of course. (However, I won't be posting many replies, simply because this is your space and I don't want to bogart it. I may reply occasionally if I think it would be especially helpful or relevant.)
Here's an excerpt to get a sense of it:
How can the same moral reasoning be correct in one circumstance and horrible in another? For example, while a lot of people consider the utilitarian answer to the trolley problem morally right (implying you should switch the tracks, although plenty disagree even then), the vast majority of people find the thought experiment of the surgeon slitting innocent throats in alleys morally wrong, even though the two are based on the same logic. Note that this is exactly the same as our intuitions in the shallow pond example. Almost everyone agrees that you should rescue the child, but when that same utilitarian logic is applied to many more decisions instead of just that one, our intuitions shift to finding the inevitable human factory-farms repugnant. This is because utilitarian logic is locally correct in some instances, particularly in low-complexity ceteris paribus set-ups, and such popular examples are what make the philosophy attractive and have spread it far and wide. But the moment the logic is extended to similar scenarios with slightly different premises, or the situation grows more complex, or the scope of the thought experiment expands to encompass many more actions instead of just one, you are suddenly right back at some repugnant conclusion. This flaw is why the idea of “utility monsters” (originally introduced by Robert Nozick) is so devastating for utilitarianism: utility monsters take us from our local circumstances to a very different world, one in which monsters derive more pleasure and joy from eating humans than humans suffer from being eaten, and most people would find the pro-monsters-eating-humans position repugnant.
To give a metaphor: Newtonian physics works really well as long as all you’re doing is approximating cannonballs and calculating weight loads and things like that. But it is not so good for understanding the movement of galaxies, or what happens inside a semiconductor. Newtonian physics is “true” when the situation is constrained and simple enough for it to apply; so it is with utilitarianism. This is the etiology of the poison.
Hi Erik,
I just wanted to leave a very quick comment (sorry I'm not able to engage more deeply).
I think yours is an interesting line of criticism, since it tries to get to the heart of what EA actually is.
My understanding of your criticism is that EA attempts to find an interesting middle ground between full utilitarianism and regular sensible do-gooding, whereas you claim there isn't one. In particular, we can impose limits on utilitarianism, but they're arbitrary and make EA contentless. Does this seem like a reasonable summary?
I think the best argument that an interesting middle ground exists is the fact that EAs in practice have come up with ways of doing good that aren't standard (e.g. at best only a couple of percent of US philanthropy is spent on evidence-backed global health, and << 1% on ending factory farming + AI safety + ending pandemics).
More theoretically, I see EA as being about something like "maximising global wellbeing while respecting other values". This is different from regular sensible do-gooding in being more impartial, more wellbeing-focused, and more focused on finding the very best ways to contribute (rather than the merely good). I think another way EA is different is in being more skeptical, more open to weird ideas, and trying harder to take a Bayesian, science-aligned approach to finding better ways to help. (Cf. the key values of EA.)
However, it's also different from utilitarianism, since you can practice these values without holding that maximising hedonic utility is the only thing that matters, or that it's a moral obligation.
(Another way to understand EA is as the claim that we should pay more attention to consequences, given the current state of the world, but not that only consequences matter.)
You could respond that there's arbitrariness in how to adjudicate conflicts between maximising wellbeing and other values. I basically agree.
But I think all moral theories imply crazy things ("poison") if taken to extremes (e.g. the deontologist who refuses to lie to the axe murderer; deep ecologists who think we should end humanity to preserve the environment; holders of the person-affecting view in population ethics who say there's nothing bad about creating a being whose life is only suffering).
So imposing some arbitrary cutoffs on your moral views is unavoidable. The best we can do is think hard about the tradeoffs between different useful moral positions, and try to come up with an overall course of action that's non-terrible on balance across them.
Makes sense. It just seems to me that the diluted version still implies interesting & important things.
Or, from the other direction: I think it's possible to take utilitarianism more seriously without having to accept all of its wackiest implications.