A clear illustration of this dilemma: preventing the deaths of a number of people approaching infinity, with a probability approaching zero (p→0). Worrying about infinitesimal risks, even when the potential impact is astronomically vast, is irrational: the sheer magnitude of a harm should not offset how improbable it is.
Consider a scenario where, for a single dollar, you are offered a lottery ticket with a 10^(−99) chance of winning 10^100. On paper, the expected value looks favorable: EV = 10^(−99) × 10^100 = $10 per $1 ticket. Yet any discerning person intuitively recognizes that playing this lottery will, with near certainty, simply lose them the dollar. The probability of winning is infinitesimal, and unless an astronomically large number of tickets is purchased, a loss is virtually guaranteed.
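To make the gap between expected value and likely outcome concrete, here is a small sketch of my own (the ticket count is an arbitrary illustration, not from the original argument): the EV per ticket is positive, yet the probability of ever winning stays effectively zero unless an absurd number of tickets is bought.

```python
# Sketch: a positive-EV lottery that you are nonetheless virtually
# guaranteed to lose. Exact arithmetic via Fraction avoids float underflow.
from fractions import Fraction

prize = Fraction(10) ** 100
p_win = Fraction(1, 10 ** 99)
cost = 1

# Expected profit per $1 ticket: 10^(-99) * 10^100 - 1 = 10 - 1 = 9 dollars.
ev_per_ticket = p_win * prize - cost
print(ev_per_ticket)  # 9

# Probability of at least one win after n tickets: 1 - (1 - p)^n ≈ n * p
# for tiny p. Even a billion tickets leaves the chance at about 10^-90.
n = 10 ** 9
approx_p_any_win = n * p_win
print(float(approx_p_any_win))  # ~1e-90: still effectively zero
```

The "paper profit" of $9 per ticket never materializes for any buyer who cannot purchase on the order of 10^99 tickets.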
If I understand your position correctly, Dave, you present two main claims:
A. Avoidance of Extremes: The assertion that allocating 100% to human beings or 100% to animals is wrong due to the "highest-lowest game mechanic."
B. Wisdom of the Crowds and Natural Balance: The belief that "the wisdom of the crowds" will naturally lead to a rational balance point between the fields, thanks to the diverse personal biases of individuals.
Well, I'd like to challenge both of these points.
- - -
Why are extremes not necessarily wrong?
First, the sweeping claim that "extremes are wrong," despite its prevalence, lacks a logical basis in itself. The rational decision on where to invest limited resources should be based on the principle of marginal utility. We should always invest the "next dollar" where it will yield the highest benefit.
For example: Suppose you have two investment options – A or B. You can improve A by 1% or B by 20%. If you've concluded that improving A by 1% is significantly better than improving B by 20%, why should you improve B at all? In such a case, any investment in B at the expense of A is simply a waste of resources.
We should invest exclusively in A until the marginal utility of additional investment in A decreases (due to diminishing returns) and becomes lower than the marginal utility of investment in B.
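The "next dollar" rule above can be sketched as a greedy loop. The two diminishing-returns curves below are hypothetical numbers of my own choosing, purely to show the mechanic: every dollar goes to A until A's marginal utility falls below B's, after which spending naturally splits.

```python
# Minimal sketch of marginal-utility allocation with made-up curves.
def marginal_utility_a(dollars_spent):
    # Hypothetical: A starts far more valuable, but returns diminish quickly.
    return 100 / (1 + dollars_spent)

def marginal_utility_b(dollars_spent):
    # Hypothetical: B starts lower, but its returns diminish more slowly.
    return 20 / (1 + 0.1 * dollars_spent)

budget = 100
spent = {"A": 0, "B": 0}
for _ in range(budget):
    # Each successive dollar goes wherever it buys the most good right now.
    if marginal_utility_a(spent["A"]) >= marginal_utility_b(spent["B"]):
        spent["A"] += 1
    else:
        spent["B"] += 1

print(spent)  # A absorbs the early dollars; B only gets funded past the crossover
```

Note that with these curves a crossover does occur, so B eventually gets funded; with curves where A stays superior throughout the budget, the same loop allocates 100% to A, which is exactly the point of the argument.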
One might argue that there is no guarantee such a crossover point will be reached within the available resources, especially given profound qualitative differences in value, such as between saving a human life and easing the suffering of animals. If the marginal utility of investing in humans remains consistently superior, then my answer is to invest 100% in humans. And to the obvious objection: how can we allow animals to suffer indefinitely? What will become of them?
Well, it's unfortunate, but the fundamental goal of Effective Altruism is rational investment, and that means, among other things, not allocating based on emotion. If I have $100 and preventing an animal's suffering costs $1, we could end that suffering for so little! But that would be $1 taken away from investing in human lives, which we have prioritized as the higher-value cause. It's unfortunate. But what else can we do?
- - -
The wisdom of the crowds is no guarantee of moral rationality
Secondly, relying on "the wisdom of the crowds" for optimal prioritization is highly problematic. "The wisdom of the crowds" may be effective for estimating simple averages, but it dramatically fails in issues requiring deep expertise, rigorous logical analysis, and complete information – especially in complex ethical matters.
The examples of Magnus Carlsen's and Garry Kasparov's chess games against "the World" illustrate this well: despite hundreds of thousands and tens of thousands of participants respectively, "the World" did not win either of these games. The reason is simple: chess requires deep strategic thinking, which cannot arise from an average of distributed intuitions.
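One way to see why averaging fails here (a toy illustration of my own, with made-up numbers): crowd averaging works well for estimating a simple magnitude, but a chess plan is a structured object, and the "average" of two coherent plans is not a coherent plan at all.

```python
# Toy contrast: averaging magnitude estimates vs. averaging structured plans.
from statistics import mean

# 1) Estimating a simple quantity: the crowd's mean lands on the truth.
true_count = 500
guesses = [430, 520, 610, 470, 480, 540, 455, 495]
print(mean(guesses))  # 500

# 2) Choosing a strategy: two *valid* move orderings (permutations), whose
# element-wise average is no longer a valid ordering of anything.
plan_1 = [0, 1, 2, 3]  # one coherent ordering of four moves
plan_2 = [3, 2, 1, 0]  # another coherent ordering
averaged = [(a + b) / 2 for a, b in zip(plan_1, plan_2)]
print(averaged)                            # [1.5, 1.5, 1.5, 1.5]
print(sorted(averaged) == list(range(4)))  # False: the "average plan" is invalid
```

Strategic quality lives in the structure of a whole plan, not in any per-move quantity that can be meaningfully averaged, which is why thousands of voters averaging their intuitions do not produce a grandmaster's move.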
Likewise, the question of prioritizing between saving human lives and alleviating animal suffering is not a matter of "average preferences." It is a question of what is objectively and morally right. If a rational analysis (like that presented in tootlife's original post) leads to the conclusion that one solution is significantly superior, then choosing the less effective solution, even if popular or representing some balance of public preferences, is a wrong choice.
It is well known that the public tends to be swayed by emotional and cognitive biases rather than purely rational considerations. In fact, the Effective Altruism movement itself arose from the recognition that "the wisdom of the crowds" and intuitive resource allocation are inefficient, and its purpose is precisely to correct this through data and reason. To claim that "the wisdom of the crowds" will lead to the optimal solution therefore contradicts EA's own basic rationale.