Here is my entry for the EA criticism contest being held this month. This essay was originally linked to the EA forum by someone else yesterday, but that post was eventually deleted (not sure why), so I'm reposting it personally now, as the contest guidelines encourage engaging with the EA forum.
A note: This is very much a strident critical piece about the utilitarian core of EA, so know that going in. I definitely do not expect the average reader here to agree with my points. My hope is that the essay is helpful or interesting because it shows how a critic might think about the movement, especially how they can be put off by the inevitable "repugnancy" of utilitarianism, and, ideally, that you might walk away more favorable to the idea that diluting the utilitarianism out of EA helps broaden the movement's scope and makes it appeal to more people. So please "judge" the piece along those lines. I expect criticism in response, of course. (However, I won't be posting many replies, simply because this is your space and I don't want to bogart it. I may reply occasionally if I think it'd be especially helpful or relevant.)
Here's an excerpt to get a sense of it:
How can the same moral reasoning be correct in one circumstance, and horrible in another? E.g., while the utilitarian outcome of the trolley problem is considered morally right by a lot of people (implying you should switch the tracks, although it’s worth noting even then that plenty disagree), the thought experiment of the surgeon slitting innocent throats in alleys is morally wrong to the vast majority of people, even though the two cases are based on the same logic. Note that this is exactly the same as our intuitions in the shallow pond example. Almost everyone agrees that you should rescue the child, but then, when this same utilitarian logic is applied to many more decisions instead of just that one, our intuitions shift to finding the inevitable human factory-farms repugnant. This is because utilitarian logic is locally correct in some instances, particularly in low-complexity ceteris paribus set-ups, and such popular examples are what make the philosophy attractive and have spread it far and wide. But the moment the logic is extended to similar scenarios with slightly different premises, or the situation itself complexifies, or the scope of the thought experiment expands to encompass many more actions instead of just one, suddenly you are right back at some repugnant conclusion. Such a flaw is why the idea of “utility monsters” (originally introduced by Robert Nozick) is so devastating for utilitarianism: they take us from our local circumstances to a very different world, one in which monsters derive more pleasure and joy from eating humans than humans suffer from being eaten, and most people would find the pro-monsters-eating-humans position repugnant.
To give a metaphor: Newtonian physics works really well as long as all you’re doing is approximating cannonballs and calculating weight loads and things like that. But it is not so good for understanding the movement of galaxies, or what happens inside a semiconductor. Newtonian physics is “true” if the situation is constrained and simple enough for it to apply; so it is with utilitarianism. This is the etiology of the poison.
You appear to have missed the central point of the essay, which is strange because it's repeated over and over. I point out that utilitarians must either dilute or swallow the poison of repugnant conclusions. Dilution of the repugnancy works, but the cost is making EA weaker and more vapid, turning it into a toothless philosophy like "do good in the world." Instead of grappling with this criticism, you've transformed the thesis into a series of random, unconnected claims, some of which don't even represent my views. I won't address your defensive scolding of me personally, as I don't want to engage with something so childish here. I'd rather see some actual grappling with the thesis, or at least with its more interesting parts, like whether there are qualitative, in addition to quantitative, moral differences. But here are my responses to the set of mostly uninteresting claims you've ascribed to me instead of dealing with the thesis.
Already addressed in the text: "I don’t think that there’s any argument effective altruism isn’t an outgrowth of utilitarianism—e.g., one of its most prominent members is Peter Singer, who kickstarted the movement in its early years with TED talks and books, and the leaders of the movement, like William MacAskill, readily refer back to Singer’s “Famine, Affluence, and Morality” article as their moment of coming to."
You'll have to explain to me how so many EA leaders readily reference utilitarian philosophy, or point to utility calculations as the thing that makes EA special, or justify what counts as an effective intervention via utilitarian definitions, without anyone actually being a utilitarian. People can call themselves whatever they want, and I understand wanting to divorce oneself from the repugnancies of utilitarianism, but so much in EA draws on the utilitarian toolbox, and all of its origins are (often self-admittedly!) in utilitarian thought experiments.
As for (a): if you could get away with it, utilitarianism tells you it's moral to do. You're just saying "in no possible world could you get away with it," which is both a way-too-strong claim and also irrelevant, for the repugnancy lies in the fact that it is moral to do it and keep it a secret, if you can. As for (b): since harm is caused by inaction (at least according to many in EA), diverting charity money from, say, the USA, where it will go less far and save only one life, to a third-world country, where it will save five, is exactly this. Your saying that "no one says to do that" seems to fly in the face of. . . what everyone is saying to do.
I don't say this anywhere in the text, as far as I know.
You're missing the point of this part, which is that the utilitarian arbitrage necessarily has to keep going. You just, what, stop at a billion? Why? Because it sounds good to you, Nathan Young? The moral thing to do is to keep going. That's my point about utilitarian arbitrage leading very naturally to the repugnant conclusion. This response simply doesn't grok that.
"Before you criticize effective altruism come up with something better than it" seems like a pretty high standard to me.
I'm clear that some of the things EA is known for, like AI safety, are justifiable through other philosophies, and that I agree with some of them. You're right that the argument is focused on my in-principle disagreements, particularly that many people will find the in-principle aspects repugnant; my recommendation is to instead dilute them and use utilitarian calculations as a fig leaf. Again, a more complicated thesis that you're simply. . . not addressing in this breakdown of unconnected supposed claims.
None of these points is very important to the argument, and the ones that are, like whether or not EA is an outgrowth of utilitarianism, seem pretty settled.