
Effective Altruism has largely stemmed from the work of Peter Singer, who boldly stated that the world was focusing on the wrong things. In Famine, Affluence, and Morality, Singer argued that, instead of just helping the people you can see in front of you, you should help people you will probably never meet. He also argued that living in luxury while refusing to help prevent other people from starving was immoral. While these positions are commonly accepted in EA today, at the time they were very controversial and somewhat novel. While philosophers had long emphasized the importance of helping others, few argued as forcefully and persuasively against egoist thinking while simultaneously challenging commonplace views. However, I feel this spirit of always questioning intuitions has largely been lost in mainstream EA[1].

For one, take the issue of animal welfare. Although the importance of animal welfare is not even that far out of the mainstream and has been associated with EA for a while (Peter Singer also famously wrote Animal Liberation, which laid the groundwork for much of modern animal rights philosophy), one of the most prominent EA organizations, GiveWell, does not seem to take animal welfare into account at all when recommending charities. In my view, this is not only an unfortunate oversight; it makes their recommendations essentially worthless to people who care at all about animals, since it is unclear whether their recommended charities increase animal suffering or not.

Many of you may be familiar with the meat-eating problem: the argument that charities which save human lives contribute to animal suffering through the additional meat consumption of the people those charities save. If factory farming is as bad as many claim, then saving human lives could indirectly lead to enormous amounts of animal suffering. While this effect is complicated to model and may not outweigh the good that GiveWell's recommended charities do, at the very least it should be considered in cost-effectiveness analyses. The fact that it usually isn't taken into account (aside from blog posts on this forum) shows how selective EA has become about which intuitions it is willing to challenge.
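To make the shape of this concern concrete, here is a toy back-of-the-envelope adjustment. Every number below is a placeholder I made up for illustration, not an empirical estimate, and the function name is my own invention; the point is only that the adjustment is easy to represent once you are willing to write it down.

```python
# Toy sketch of a meat-eating-problem adjustment to a cost-effectiveness
# estimate. All inputs are hypothetical placeholders, not real figures.

def net_welfare_per_life_saved(
    human_life_value: float,            # welfare units assigned to one life saved
    extra_years_of_consumption: float,  # added person-years of meat consumption
    animals_per_year: float,            # farmed animals consumed per person-year
    welfare_cost_per_animal: float,     # assumed disvalue per farmed animal
) -> float:
    animal_cost = (extra_years_of_consumption
                   * animals_per_year
                   * welfare_cost_per_animal)
    return human_life_value - animal_cost

# With these (made-up) inputs the intervention still comes out net positive:
print(net_welfare_per_life_saved(100.0, 40, 10, 0.05))  # 100 - 20 = 80.0
```

Of course, with different assumed weights the sign of the answer can flip, which is exactly why the term should appear in the analysis rather than be silently dropped.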

"So just donate to animal welfare charities," you might say. "Aren't there EA-adjacent organizations like Faunalytics which focus solely on animal welfare?" However, even within EA animal welfare spaces, there seems to be an unwillingness to think deeply about how far our moral circles should extend. To illustrate this, take the issue of nematode welfare. Sure, it sounds wacky on its face, but simply due to the sheer number of nematodes, their well-being should be of some concern. One could be very, very confident that they are non-sentient, but this is not trivial to demonstrate, and even a small chance of nematode sentience, multiplied by their staggering numbers, could imply that their welfare should dominate our moral considerations. It is even possible that intensive animal agriculture, the very same kind that creates horrendous conditions for billions of farm animals, is net good simply because it reduces nematode populations. Many organizations have a reason for not engaging with this: "There just isn't enough evidence." But that is precisely the problem. Why don't we fund that research, instead of throwing our hands up and saying "Good enough"?
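The "small probability times staggering numbers" argument is just an expected-value calculation, and a quick sketch shows how easily the nematode term can dominate. All of the figures below are illustrative placeholders chosen only for their rough orders of magnitude, not estimates anyone has defended:

```python
# Hypothetical expected-value comparison. Every constant is a made-up
# placeholder; the point is the structure of the argument, not the numbers.

P_SENTIENCE = 1e-5       # assumed probability that nematodes can suffer at all
NEMATODE_COUNT = 1e21    # assumed number of nematodes alive
NEMATODE_WEIGHT = 1e-6   # assumed moral weight per nematode, if sentient

FARMED_ANIMALS = 1e11    # assumed number of farmed animals alive
FARMED_WEIGHT = 1e-2     # assumed moral weight per farmed animal

expected_nematode_stake = P_SENTIENCE * NEMATODE_COUNT * NEMATODE_WEIGHT
farmed_animal_stake = FARMED_ANIMALS * FARMED_WEIGHT

print(expected_nematode_stake)  # 1e10
print(farmed_animal_stake)      # 1e9
```

Under these assumptions the expected nematode stake is ten times the farmed-animal stake despite a one-in-100,000 chance of sentience. Whether the real inputs look anything like this is exactly the empirical question the research would need to answer.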

I must admit, I can see why cause prioritization research is not as emphasized as I would like it to be. For one, I do not personally enjoy thinking about many of these problems, as the uncertainty is very high and clear answers are hard to come by. There also isn't a clear end goal (you can always do more cause prioritization), and it just isn't as personally rewarding as simply donating to charities. It can also feel really difficult to confront the mistakes you might have made due to incorrect cause prioritization. This likely discourages pre-existing organizations from substantially shifting their priorities, and I have a personal anecdote about how it can hinder individuals as well.

When I was in high school, I decided to take up a part-time job and donate all of my income to the Against Malaria Foundation[2]. I ended up working about 190 hours and donating $2,500 (actually slightly more than I earned, because I wanted it to be an even number). However, when I later learned about the meat-eating problem, I felt a sense of dread, of betrayal. It was hard to accept that I might have contributed massively to the evils of factory farming. I briefly felt a similar sense of dread while reading about the suffering of nematodes. Could my personal veganism, and my advocacy of it, have caused the suffering of countless nematodes? Maybe my donation to the Against Malaria Foundation was net positive after all. But this whiplash is the natural product of a lack of cause prioritization. If I fully understood how to achieve my goal of promoting happiness and reducing suffering in the world, I would not find myself in this situation.

Ultimately, the meat-eater problem and nematode welfare may both seem like niche issues, and there are many more I could have brought up, but in my view these are precisely the kinds of issues Effective Altruism should be engaging with. Even if the final conclusion is that nematodes cannot suffer, the process of thinking rigorously about the possibility is essential to making good decisions. EA is meant to be about radical open-mindedness, about following the arguments wherever they lead, no matter how unintuitive or uncomfortable the results may be.

If EA wants to make a positive impact in the world, it needs to recapture that willingness to look strange in the pursuit of truth. Otherwise, it risks turning into just another mainstream philanthropy movement: one that talks about effectiveness but avoids the very questions that popularized it. It also risks making the world a worse place.

 

This essay was written with the assistance of ChatGPT. Shout out to Vasco Grilo, whose writings spurred me to write this.

  1. By mainstream EA, I mean the organizations most commonly associated with EA, not the people associated with it. Many individuals do think about cause prioritization, but I feel cause prioritization research should ideally be performed by organizations, since organizations naturally include input from many different people and have higher visibility.

  2. To be fully honest, I also thought this would look good on college applications.
