Here is my entry for the EA criticism contest being held this month. This essay was originally linked on the EA Forum by someone else yesterday, but that post was eventually deleted (I'm not sure why), so I'm reposting it myself, since the contest guidelines encourage engaging with the EA Forum.
A note: This is very much a strident critique of the utilitarian core of EA, so know that going in. I definitely do not expect the average reader here to agree with my points. My hope is that the essay is helpful or interesting because it shows how a critic might think about the movement, and especially how a critic can be put off by the inevitable "repugnancy" of utilitarianism. Ideally, you might walk away more favorable to the idea that diluting the utilitarianism out of EA helps broaden the movement's scope and makes it appeal to more people. So please "judge" the piece along those lines. I expect criticism in response, of course. (However, I won't be posting many replies, simply because this is your space and I don't want to bogart it. I may reply occasionally if I think it'd be especially helpful or relevant.)
Here's an excerpt to get a sense of it:
How can the same moral reasoning be correct in one circumstance and horrible in another? E.g., while a lot of people consider the utilitarian outcome of the trolley problem morally right (implying you should switch the tracks, although it's worth noting that even then plenty disagree), the vast majority of people find the thought experiment of the surgeon slitting innocent throats in alleys morally wrong, even though both are based on the same logic. Note that this is exactly what happens with our intuitions in the shallow pond example. Almost everyone agrees that you should rescue the child, but when the same utilitarian logic is applied to many more decisions instead of just that one, our intuitions shift to finding the inevitable human factory-farms repugnant. This is because utilitarian logic is locally correct in some instances, particularly in low-complexity ceteris paribus set-ups, and such popular examples are what makes the philosophy attractive and have spread it far and wide. But the moment the logic is extended to similar scenarios with slightly different premises, or the situation itself grows more complex, or the scope of the thought experiment expands to encompass many more actions instead of just one, you are suddenly right back at some repugnant conclusion. Such a flaw is why the idea of "utility monsters" (originally introduced by Robert Nozick) is so devastating for utilitarianism: they take us from our local circumstances to a very different world, one in which monsters derive more pleasure and joy from eating humans than humans suffer from being eaten, and most people find the pro-monsters-eating-humans position repugnant.
To give a metaphor: Newtonian physics works really well as long as all you're doing is approximating cannonballs and calculating weight loads and things like that. But it is not so good for understanding the movement of galaxies, or what happens inside a semiconductor. Newtonian physics is "true" when the situation is constrained and simple enough for it to apply; so too with utilitarianism. This is the etiology of the poison.
When sharing this article, you tweeted:
"In the last week alone, the effective altruist movement has been on the cover of The New Yorker, The NYT, and Time Magazine. It has billions in funding, and wants to make the world a better place. The problem is that it's poisonous."
However, you finished that same thread by tweeting:
"10/ Ultimately this is what the "longtermism" view is - merely a dilution of the utilitarianism in effective altruism to caring only about existential risk, which is something everyone can get on board with"
I dislike the implication that EA is poisonous, but fair enough. Yet you don't seem to believe this yourself, given that final tweet. That seems clickbaity.
Also, if you think that EA gets the balance right in practice, I don't think it's okay to call it a poison. If the median EA does things as you'd want them done, then EA seems like antivenom, even if parts of the dose would be venomous on their own. The poison framing seems unreasonable.
What's more, would you call the duty to tell an axe murderer where your friend is hiding a "poison" in deontology? Or most people's neglect of those dying in the developing world of cheaply preventable causes? Singling out EA for this judgement seems unfair.
I think the framing of EA as a poison (which, again, even you don't seem to believe) is unreasonable, unfair, and clickbaity.
Yeah, I think there is a clear difference. Do you write about the flaws in other moral systems in equally charged terms?
But even if there isn't a difference, you yourself don't believe that EA is a poison; you think it has some poison in it. I dislike the framing of that original, much-shared tweet.