Foreword
Sadly, it looks like the debate week will end without many of the stronger[1] arguments for Global Health being raised, at least at the post level. I don't have time to write them all up, and in many cases they would be better written by someone with more expertise, but one issue is firmly in my comfort zone: the maths!
The point I raise here is closely related to the Two Envelopes Problem, which has been discussed before. I think some of this discussion can come across as 'too technical', which is unfortunate since I think a qualitative understanding of the issue is critical to making good decisions under substantial uncertainty. In this post I want to try to demystify it.
This post was written quickly, and has a correspondingly high chance of error, for which I apologise. I am confident in the core point, and something seemed better than nothing.
Two envelopes: the EA version
A commonly-deployed argument in EA circles, hereafter referred to as the "Multiplier Argument", goes roughly as follows:
- Under 'odd' but not obviously crazy assumptions, intervention B is >100x as good as intervention A.
- You may reasonably wonder whether those assumptions are correct.
- But unless you put <1% credence in those assumptions, or think that B is negative in the other worlds, B will still come out ahead.
- Because even if it's worthless 99% of the time, it's producing enough value in the 1% to more than make up for it!
- So unless you are really very (over)confident that those assumptions are false, you should switch dollars/support/career from A to B.
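Spelled out, the implicit calculation is something like: with credence p in the 'odd' assumptions and a 100x multiplier, B delivers an expected p * 100 units for every 1 unit A delivers, which comes out ahead whenever p > 1%.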
I have seen this argument deployed with both Animal Welfare and Longtermism as B, usually with Global Health as A. As written, the argument is flawed. To see why, consider the following pair of interventions:
- A produces 1 unit of value per $, or 1000 units per $, with 50/50 probability.
- B is identical to A, and independently will be worth 1 or 1000 per $ with 50/50 probability.
We can see that B's relative value to A is as follows:
- In 25% of worlds, B is 1000x more effective than A
- In 50% of worlds, B and A are equally effective.
- In 25% of worlds, B is 1/1000th as effective as A
In no world is B negative, and clearly we have far less than 99.9% credence in A beating B, so B being 1000x better than A in its favoured scenario seems like it should carry the day per the Multiplier Argument...but these interventions are identical!
What just happened?
The Multiplier Argument relies on mathematical sleight of hand. It implicitly calculates the expected ratio of impact between B and A, and in the above example that expected ratio is indeed way above 1:
E(B/A) = 25% * 1000 + 50% * 1 + 25% * 1/1000 ≈ 250.5
But the difference in impact, or E(B-A), which is what actually counts, is zero. In 25% of worlds we gain 999 by switching from A to B, in a mirror set of worlds we lose 999, and in the other 50% there is no change.
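If it helps to see this numerically, here is a minimal Python sketch of the toy example above (the names and structure are mine, purely for illustration):

```python
# Toy example: two identical, independent interventions, each worth
# 1 or 1000 units of value per $ with 50/50 probability.
from itertools import product

# The four equally likely worlds, as (value of A, value of B) pairs.
worlds = list(product([1, 1000], repeat=2))

expected_ratio = sum(b / a for a, b in worlds) / len(worlds)       # E(B/A)
expected_difference = sum(b - a for a, b in worlds) / len(worlds)  # E(B-A)

print(expected_ratio)       # ~250.5: what the Multiplier Argument looks at
print(expected_difference)  # 0.0: what actually determines whether to switch
```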
TL;DR: Multiplier Arguments are incredibly biased in favour of switching, and they get more biased the more uncertainty you have. Used naively in cases of high uncertainty, they will overwhelmingly suggest you switch away from whatever intervention you are using as your base.
In fact, we could use a Multiplier Argument to construct a seemingly-overwhelming argument for switching from A to B, and then use the same argument to argue for switching back again! Which is essentially the classic Two Envelopes Problem.
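(By symmetry, E(A/B) = 25% * 1000 + 50% * 1 + 25% * 1/1000 ≈ 250.5 as well, so the same style of reasoning 'shows' that each intervention is roughly 250x better than the other.)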
Some implications
One implication is that you cannot, in general, ignore the inconvenient sets of assumptions where your suggested intervention B is losing to intervention A. You need to consider A's upside cases directly, and how the value being lost there compares to the value being gained in B's upside cases.
If A has a fixed value under all sets of assumptions, the Multiplier Argument works. One post argues this is true in the case at hand. I don't buy it, for reasons I will get into in the next section, but I do want to acknowledge that this is technically sufficient for Multiplier Arguments to be valid, and I do think some variant of this assumption is close-enough to true for many comparisons, especially intra-worldview comparisons.
But in general, the worlds where A is particularly valuable will correlate with the worlds where it beats B, because that high value is helping it beat B! My toy example did not make any particular claim about A and B being anti-correlated, just independent. Yet it still naturally drops out that A is far more valuable in the A-favourable worlds than in the B-favourable worlds.
Global Health vs. Animal Welfare
Everything up to this point I have high confidence in. This section I consider much more suspect. I had some hope that the week would help me on this issue. Maybe the comments will; otherwise, 'see you next time' I guess?
Many posts this week reference RP's work on moral weights, which came to the surprising-to-most "Equality Result": chicken experiences are roughly as valuable as human experiences. The world is not even close to acting as if this were the case, and so a >100x multiplier in favour of helping chickens strikes me as very credible if this is true.
But as has been discussed, RP made a number of reasonable but questionable empirical and moral assumptions. Of most interest to me personally is the assumption of hedonism.
I am not a utilitarian, let alone a hedonistic utilitarian. But when I try to imagine a hedonistic version of myself, I can see that much of the moral charge that drives my Global Health giving would evaporate. I have little conviction about the balance of pleasure and suffering experienced by the people whose lives I am attempting to save. I have much stronger conviction that they want to live. Once I stop giving any weight to that preference [2], my altruistic interest in saving those lives plummets.
To re-emphasise the above, down-prioritising Animal Welfare on these grounds does not require me to have overwhelming confidence that hedonism is false. For example a toy comparison could[3] look like:
- In 50% of worlds hedonism is true, and Global Health interventions produce 1 unit of value while Animal Welfare interventions produce 500 units.
- In 50% of worlds hedonism is false, and the amounts are 1000 and 1 respectively.
Despite a 50%-likely 'hedonism is true' scenario where Animal Welfare dominates by 500x, Global Health wins on EV here.
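Spelling out the arithmetic: E(Global Health) = 50% * 1 + 50% * 1000 = 500.5, while E(Animal Welfare) = 50% * 500 + 50% * 1 = 250.5.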
Conclusion
As far as I know, the fact that Multiplier Arguments fail in general and are particularly liable to fail where multiple moral theories are being considered - as is usually the case when considering Animal Welfare - is fairly well-understood among many longtime EAs. Brian Tomasik raised this issue years ago, Carl Shulman makes a similar point when explaining why he was unmoved by the RP work here, Holden outlines a parallel argument here, and RP themselves note that they considered Two Envelopes "at length".
It is not, in isolation, a 'defeater' of animal welfare, as a cursory glance at the prioritisation of those named above would tell you. I would, though, encourage people to think through and draw out their tables under different credible theories, rather than focusing on the upside cases and discarding the downside ones, as the Multiplier Argument pushes you to do.
You may go through that exercise and decide, as some do, that the value of a human life is largely invariant to how you choose to assign moral value. If so, then you can safely go where the Multiplier Argument takes you.
Just be aware that many of us do not feel that way.
- ^
Defined roughly as 'the points I'm most likely to hear and give most weight to when discussing this with longtime EAs in person'.
- ^
Except to the extent it's a signal about the pleasure/suffering balance, I suppose. I don't think it provides much information though; people generally seem to have a strong desire to survive in situations that seem to me to be very suffering-dominated.
- ^
For the avoidance of doubt, to the extent I have attempted to draw this out, my balance of credences and values ends up a lot messier.
I agree I haven't given an argument on this. At various times people have asked what my view is (ex: we're talking here about something prompted by my completing a survey prompt) and I've given that.
Explaining why I have this view would be a big investment in time: I have a bundle of intuitions and thoughts that put me here, but converting that into a cleanly argued blog post would be a lot more work than I would normally do for fun and I don't expect this to be fun.
This is especially the case because if I did a good job at this I might end up primarily known for being an anti-animal advocate, and since I think my views on animals are much less important than many of my other views, I wouldn't see this as at all a good thing. I also expect that, again conditional on doing a good job of this, I would need to spend a lot of time as a representative of this position: responding to the best counterarguments, evaluating new information as it comes up, people wanting me to participate in debates, animal advocates thinking that changing my mind is really very important for making progress toward their goals. These are similarly not where I want to put my time and energy, either for altruistic reasons or personal enjoyment.
The normal thing to do would be to stop here: I've said what my view is, and explained why I've never put the effort into a careful case for that position. But I'm more committed to transparency than I am to the above, so I'm going to take about 10 minutes (I have 14 minutes before my kids wake up) to very quickly sketch the main things going into my view. Please read this keeping in mind that it is something I am sharing to be helpful, and I'm not claiming it's fully argued.
The key question for me is whether, in a given system, there's anyone inside to experience anything.
I think extremely small collections of neurons (ex: nematodes) can receive pain, in the sense of updating on inputs to generate less of some output. But I draw a distinction between pain and suffering, where the latter requires experience. And I think it's very unlikely nematodes experience anything.
I don't think this basic pleasure or pain matters morally, and I don't think you can make something extremely morally good by maximizing the number of happy neurons per cubic centimeter.
I'm pretty sure that most adult humans do experience things, because I do and I can talk to other humans about this.
I think it is pretty unlikely that very young children, in their first few months, have this kind of inner experience.
I don't find most things that people give as examples for animal consciousness to be very convincing, because you can often make quite a simple system that displays these features.
While some of my views above could imply that some humans come out as morally more valuable than others, I think it would be extremely destructive to act that way. Lots and lots of bad history there. I treat all people as morally equal.
The arguments for extending this to people as a class don't seem to me to justify extending this to all creatures as a class.
I also think there are things that matter beyond experienced joy and suffering (preference satisfaction, etc), and I'm even less convinced that animals have these.
Eliezer's view is reasonably close to mine, in places where I've seen him argue it.
(I'm not going to be engaging with object level arguments on this issue -- I'm not trying to become an anti-animal advocate.)