The Effective Altruism (EA) movement is built on a simple yet powerful idea: using evidence and reason to do the most good possible with our limited resources. This principle of optimization is what EA is known for. In a world of vast problems and finite budgets, the framework forces us to confront the reality of opportunity costs: every investment in one cause is, by definition, a non-investment in another. This makes our funding choices a matter of immense consequence.
A prominent cause area within EA is Animal Welfare, a focus that stems logically from a utilitarian philosophical base. The argument is compelling: the number of animals suffering in factory farms is astronomical, far exceeding the number of humans in extreme poverty. Furthermore, interventions to reduce this suffering—such as corporate cage-free campaigns—have proven to be highly effective and tractable.
However, it is precisely because of EA's commitment to opportunity cost that we must scrutinize this choice. Every dollar spent on improving conditions for animals is a dollar not spent saving a human life through a top-rated global health charity. This creates a direct trade-off, forcing us to ask whether we are implicitly prioritizing animal welfare over human lives. My critique of our current focus on animal welfare rests on two fundamental challenges:
Critique 1: The Problem of Incommensurable Suffering
My first major difficulty is with the assumption that we can meaningfully compare different types of suffering. Pain is simply not a neatly measurable unit. While EA models attempt to quantify suffering to create comparisons (e.g., "X days of chicken suffering averted per dollar"), this process masks a deep philosophical problem.
We cannot truly know what the subjective experience of a chicken is, let alone assign it a numerical value that can be weighed against the suffering of a human. Human suffering is not just a raw sensory input; it is deeply intertwined with complex psychological states like grief, dread, anxiety about the future, and the sorrow of seeing one's family suffer. How many "units" of a chicken's pain equal the lifelong grief of a mother who loses her child to malaria? The question itself feels absurd because the experiences are fundamentally different in kind, not just in quantity. We are trying to compare apples to existential dread.
Critique 2: The Asymmetrical Value of a Life
My second critique goes a step further. Even if we could perfectly measure and compare suffering, the analysis is incomplete. It focuses only on the reduction of a negative (suffering) and ignores the promotion of a positive (a flourishing life).
When an intervention from the Against Malaria Foundation saves a 5-year-old child, we haven't just averted the suffering of a fever. We have unlocked decades of potential for that human being: the potential to experience love, create art, innovate, build a community, raise a family, and contribute to the world. The positive value generated is immense and creates ripples throughout their society.
In contrast, most animal welfare interventions do not "save" a life in this sense. They improve the conditions for an animal that is still destined for slaughter in a few weeks or months. We are making a brief, painful existence slightly less painful. When we weigh the outcomes, the choice is between:
- Option A: Unlocking 50+ years of a uniquely human experience.
- Option B: Marginally reducing the pain in the final 2% of a farm animal's life.
When framed this way, it seems our current funding models may be dramatically undervaluing the sheer scope and positive potential of a human life.
Addressing the Inevitable Counterarguments
Before concluding, I want to proactively address the counterarguments that this line of reasoning will undoubtedly face. I believe engaging with them directly is crucial for a productive conversation.
- "This argument is merely speciesist." My argument is, admittedly, a form of species preference. I will not deny that. However, I argue that in a world of finite resources and triage, a life-saving speciesism is not only rational but morally necessary. The alternative is to accept the proposition that upholding the abstract principle of anti-speciesism is more important than saving a tangible human life. If forced to choose, I believe prioritizing the life and potential of a member of our own species—a being capable of complex consciousness—is a defensible ethical position. The burden of proof should be on those who would trade a human life for anything else.
- "But Animal Welfare is more neglected and tractable." The Importance, Tractability, and Neglectedness (ITN) framework is a cornerstone of EA cause prioritization, and I agree that Animal Welfare scores highly on T and N. However, the "I" for Importance (or Scale) is not just about the number of individuals. It is about the magnitude of value at stake. The value of a single human life, with its decades of potential for flourishing, consciousness, and contribution, is so immense that it can reasonably outweigh the other factors. A highly tractable solution to a problem of lesser moral consequence is not necessarily better than a still-quite-tractable solution (like distributing mosquito nets) to a problem of near-infinite moral consequence (a human death).
- "This is a false dichotomy. We can and should do both." While the EA movement as a whole can fund multiple cause areas, for every donor and every dollar, the choice is always at the margin. My next $100 can go to the Against Malaria Foundation or it can go to a cage-free campaign; it cannot do both. We have an ethical duty to ask which of those two actions does more good in the world. To say "let's do both" is to avoid the very question of prioritization that makes Effective Altruism effective.
Conclusion: A Call for Re-evaluation
I do not claim to have the final answer to one of the most difficult ethical questions we face. This post comes from a place of genuine intellectual struggle, not dogmatic certainty.
My argument is this: when we weigh the proven ability to save a human life for a few thousand dollars against the ability to reduce suffering in animals, we are not comparing like with like. We are comparing the full, complex, and invaluable potential of a human life against a temporary reduction in the pain of a non-human animal. The sheer asymmetry of this trade-off seems to be a catastrophic blind spot in our current allocation of resources.
My goal here is to open a serious discussion. I am asking the community to re-examine its premises and justify its moral weights. Therefore, I end with a few direct questions, and I am genuinely open to being convinced that my reasoning is flawed:
- What is the explicit exchange rate you are using between "human lives saved" and "animal-years of suffering averted," and what philosophical framework justifies this number? (A sketch after these questions shows what pinning down such a rate would mean in practice.)
- How do our models account for the immense positive and ripple-effect value of a saved human life, beyond simply averting the negative of a death?
- Finally, can anyone present a strong, first-principles argument for why an anonymous donor with $5,000 should choose to fund animal welfare initiatives over verifiably saving a human life?
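To show what the first question is asking for operationally, here is a minimal sketch. Every figure in it (cost per life saved, remaining life expectancy, cost per animal-year of suffering averted) is an assumption of mine, not a real estimate; the point is only that once an exchange rate is stated explicitly, the two options become directly comparable.

```python
# Minimal sketch of what an explicit exchange rate would do. Every
# number below is a placeholder assumption, not a real estimate.

COST_PER_LIFE_SAVED = 5_000   # dollars per human life saved (assumed)
LIFE_YEARS_PER_LIFE = 50      # remaining life expectancy of a saved child (assumed)
COST_PER_ANIMAL_YEAR = 1      # dollars per animal-year of suffering averted (assumed)

def compare(budget: float, exchange_rate: float) -> tuple[float, float]:
    """exchange_rate: animal-years of suffering averted judged morally
    equivalent to one human life-year. Returns (human option, animal
    option) in human-life-year equivalents for the given budget."""
    human = budget / COST_PER_LIFE_SAVED * LIFE_YEARS_PER_LIFE
    animal = budget / COST_PER_ANIMAL_YEAR / exchange_rate
    return human, animal

print(compare(5_000, exchange_rate=100))    # (50.0, 50.0)  -> break-even
print(compare(5_000, exchange_rate=1_000))  # (50.0, 5.0)   -> humans win
print(compare(5_000, exchange_rate=10))     # (50.0, 500.0) -> animals win
```

Under these placeholder costs, break-even sits at 100 animal-years per human life-year; the entire disagreement is about where that number should actually sit, and on what grounds.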
These are all really important questions, and it's great that you want to engage in discussion about them!
Have you looked at the Moral Weight Project? It's a great piece of work that tries to tackle the (very complex, as you state) issue of comparing the welfare of different species, including humans. All their assumptions are explicitly listed, so you can look through and see where you disagree. There is a great sequence about it on the forum here: https://forum.effectivealtruism.org/s/y5n47MfgrKvTLE3pw
And I also wrote a couple of more accessible explainers of the project here:
https://docs.google.com/document/d/16EdSGP1-xs0Mh4G6QfQYOaaYfduDEpp32NK5BBlznYE/edit?tab=t.0
https://docs.google.com/document/d/1SIurNLZB8hSXTCKUNmREHmBt2sP4IQ4R0XkdYAZbYp0/edit?tab=t.0
The "highest-lowest" game mechanic is a valuable lens when thinking about balancing investments. It's pretty clear the extremes are wrong (100% people, 0% animals, or the inverse). That means there is some middle ground that makes sense.
I trust the wisdom of the crowd. Some are drawn to human welfare, others to animal welfare. Some to both. The more we educate everyone on the issues and give them the agency to make their own decisions on where to invest resources, the more likely we are to come to a reasonable balance point.
Thanks for this perspective. I agree that the idea of finding a natural balance is appealing.
However, I think this touches on a fundamental tension in EA. The entire premise of the movement is that the "wisdom of the crowd" in charitable giving often leads to suboptimal outcomes, which is why we turn to rigorous analysis in the first place. We don't trust the crowd to decide between malaria nets and deworming; we use evidence.
My post is questioning the assumption that a "middle ground" is correct. From the perspective of a single donor's marginal dollar, it's always a 100/0 choice. My argument is that the asymmetrical value of a human life suggests that the most effective choice is consistently on one side of that trade-off.
So while the overall EA portfolio might be diversified, I'm still stuck on the question of what an individual donor should do to be most effective, and I'm not sure an appeal to balance can resolve that.
Telling people what they should do is antithetical to respect and agency and human flourishing. Making moral arguments is one thing, but authoritarianism crosses the line. IMHO.
If I understand your position correctly, Dave, you present two main claims:
A. Avoidance of Extremes: The assertion that allocating 100% to human beings or 100% to animals is wrong due to the "highest-lowest game mechanic."
B. Wisdom of the Crowds and Natural Balance: The belief that "the wisdom of the crowds" will naturally lead to a rational balance point between the fields, thanks to the diverse personal biases of individuals.
Well, I'd like to challenge both of these points.
- - -
Why are extremes not necessarily wrong?
First, the sweeping claim that "extremes are wrong," despite its prevalence, lacks a logical basis in itself. The rational decision on where to invest limited resources should be based on the principle of marginal utility. We should always invest the "next dollar" where it will yield the highest benefit.
For example: suppose you have two investment options, A and B, and a given dollar can improve A by 1% or B by 20%. If you've concluded that the 1% improvement in A is worth significantly more than the 20% improvement in B, why improve B at all? In that case, any investment in B at the expense of A is simply a waste of resources.
We should invest exclusively in A until the marginal utility of additional investment in A falls (due to diminishing returns) below the marginal utility of investing in B.
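As a minimal sketch of this allocation rule, using diminishing-returns curves I have made up purely for illustration: give each successive dollar to whichever cause currently offers the higher marginal utility. Depending on the curves, the greedy rule can produce a mixed portfolio or a 100/0 split; nothing about it privileges a "middle ground" in advance.

```python
import math

# Greedy marginal allocation under diminishing returns. The sqrt
# utility curves and the weights are illustrative assumptions only.

def marginal_utility(weight: float, invested: float, step: float = 1.0) -> float:
    """Utility gained by the next `step` dollars, with sqrt diminishing returns."""
    return weight * (math.sqrt(invested + step) - math.sqrt(invested))

def allocate(budget: int, weight_a: float, weight_b: float) -> tuple[float, float]:
    a = b = 0.0
    for _ in range(budget):  # spend one dollar at a time
        if marginal_utility(weight_a, a) >= marginal_utility(weight_b, b):
            a += 1.0
        else:
            b += 1.0
    return a, b

print(allocate(100, weight_a=10.0, weight_b=1.0))  # (99.0, 1.0): overwhelmingly A
print(allocate(100, weight_a=10.0, weight_b=0.0))  # (100.0, 0.0): exclusively A
```

Notice that even a 10:1 value weighting leaves B with only a token allocation at this budget; whether the crossover point is ever reached within available resources is exactly the question the next paragraph raises.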
One might argue that there's no guarantee such a crossover point will ever be reached within the available resources, especially given such profound qualitative differences in value as between saving a human life and reducing the suffering of animals. If the marginal utility of investing in humans remains consistently superior, then my answer is to invest 100% in humans. "But how can we allow animals to suffer indefinitely? What will become of them?"
Well, it's unfortunate, but the fundamental goal of Effective Altruism is rational investment, and that means, among other things, not allocating based on emotion. If I have $100 and preventing an animal's suffering costs only $1, we could end that suffering for so little! But that's $1 taken away from investing in human lives, which we have judged the higher-value use. It's unfortunate. But what else can we do?
- - -
The wisdom of the crowds is no guarantee of moral rationality
Secondly, relying on "the wisdom of the crowds" for optimal prioritization is highly problematic. "The wisdom of the crowds" may be effective for estimating simple quantities, but it fails dramatically on questions requiring deep expertise, rigorous logical analysis, and complete information – especially complex ethical matters.
The examples of Magnus Carlsen's and Garry Kasparov's chess games against "the World" illustrate this well: despite hundreds of thousands and tens of thousands of participants respectively, "the World" did not win either of these games. The reason is simple: chess requires deep strategic thinking, which cannot arise from an average of distributed intuitions.
Likewise, the question of prioritizing between saving human lives and alleviating animal suffering is not a matter of "average preferences." It is a question of what is objectively and morally right. If a rational analysis (like that presented in tootlife's original post) leads to the conclusion that one solution is significantly superior, then choosing the less effective solution, even if popular or representing some balance of public preferences, is a wrong choice.
It is well known that the public tends to be influenced by many emotional and cognitive biases, rather than purely rational considerations. In fact, the Effective Altruism movement itself arose from the understanding that "the wisdom of the crowds" and intuitive resource allocation are inefficient, and its purpose is precisely to correct this through data and reason. To claim that "the wisdom of the crowds" will lead to the optimal solution is, therefore, an internal contradiction to the basic rationale of EA.
This is wrong; it's a black-or-white logical fallacy. Emotions are an important channel of data. Not factoring them into calculations leads to false conclusions. Check out bilateral amygdala damage or frontotemporal dementia.
EA discourages emotion-only or emotion-overweighted decision-making. However, if emotion were not a part of EA, we would simply give every dollar to bednets in Africa and ignore every other cause.
Maybe I'm misreading your argument, but you seem to say there are legitimate cases to be made for 100% investment in humans, at the expense of complete obliteration of the remainder of the animal kingdom. The whole ecosystem we rely on for survival would collapse.
I might agree with you that the planet would be better off long term if we devoted 100% to animals and, conversely, obliterated all the people. There are (at least) two things wrong with this other extreme scenario, though:
It seems clear to me that both individuals and societies regularly trade off between human life extension and other human goals, including the reduction of human suffering. One has to at least implicitly make that tradeoff when deciding on a governmental budget, or deciding how often you will have a colonoscopy. If it's not possible to decide how to trade off these things, I think we have a problem that is practically much bigger than effective altruism.
The less well-trodden question to me, then, is whether we can estimate a tradeoff between animal suffering and human suffering. For most people, I think that's where more of the uncertainty lies. But I'm not sure whether that is the case for you or not.
If we can compare the moral value of a year's worth of human life extension to the value of reducing human suffering caused by a stimulus of specified severity, and then compare that to the value of reducing animal suffering caused by the same stimulus, then we should be able to compare the human life extension to the reduction of animal suffering.
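Spelling out that transitive step (notation mine; $a$ and $b$ stand for whatever conversion factors one settles on):

$$V_{\text{life-year}} = a \cdot V_{\text{human suffering}} \quad\text{and}\quad V_{\text{human suffering}} = b \cdot V_{\text{animal suffering}} \;\Rightarrow\; V_{\text{life-year}} = ab \cdot V_{\text{animal suffering}}$$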
I'm curious whether the crux is more on the first half of the equation, or the second. (Or whether you think the transitive logic just doesn't work here.)