Crosspost of my blog

Lots of people have two conflicting desires:

  1. Do the most good.
  2. Make sure you do some good.

These are in conflict. Sometimes the thing that does the most good in expectation has a low chance of doing any good. If you give a lot of money to shrimp welfare, for example, there’s about a 50% chance you’re wasting your money, because shrimp might turn out not to be conscious. Similarly, if you take Pascal’s wager and give money to groups that effectively promote the religion you think is most likely to be true, there’s a non-trivial chance that you’re not doing any good at all! And the same is true of other speculative proposals for improving the world—trying to become a billionaire, giving to Longtermist organizations that safeguard the future, and so on.

As it happens, I don’t think this attempt to make sure you do some good, rather than maximizing expected good, is rational. I think one should just try to do the most expected good without worrying about the probability of doing any good at all. A 1/10 chance of saving ten people’s lives is just as good as a 100% chance of saving one person’s life.

The problem: I suck!

More precisely, even though expected utility maximization seems right when I think about it, I can’t really get myself to act on it. It’s hard to stay motivated to perform a task if I think there’s a reasonable chance I’m wasting my life. While I’ve written a bunch about the rationality of Pascal’s wager, I find it very hard to take it myself! Similarly, I find it hard to be motivated to give to Longtermist charities, even though they have very high expected value. I currently give only about a quarter of my donations to effective charities, though I suspect I’d give more if I were fully rational. Try as I might to be a robotic expected value maximizer, I just can’t seem to do it.

So what should one do in a situation like this? Well, we can take our cue from people in finance. What do finance people do when there is a risky business that might fail but might become worth a lot? They diversify! They invest in a hundred companies like this, knowing that though 90 of them may fail, the rest will succeed enough to make it worth it.

You can do this with morality too. Suppose you’re not super sure if giving to shrimp welfare is a good idea. You’re also not sure if Longtermism is good. You’re also not sure how valuable charities helping free chickens from cages are compared to other things. You’re not certain if reducing wild animal suffering is effective. And maybe you’re not sure whether to take Pascal’s wager seriously and support organizations effectively spreading whichever religion you find most plausible.

If you want to be confident that you are doing some good: diversify. Give to all of them. Even if they’re all somewhat speculative, it’s likely that at least one of them will pay off massively. Just as you can diversify a financial portfolio by investing in lots of risky companies with potentially high payouts, you can do the same with charity. This doesn’t mean you should give to every place you think might do some amount of good, but it does mean you should risk wasting your money for a small chance of bringing about a ton of value.

If you give a hundred dollars to the shrimp, you can save 1.5 million shrimp from an excruciating death. That’s roughly three times the population of Wyoming! If you have any moral uncertainty about that, and take moral uncertainty seriously, surely that should be at least one of the things you do at some point over the course of your life.

Now, this isn’t the best way to maximize expected value. If you were an expected value maximizing robot, you would not pursue this strategy. You would say “bleep bloop, this brings about 93.5 fewer expected utils than the other strategy.” But I assume you are not an EV maximizing robot.

This also makes it easier to take seriously the conclusion of weird arguments. If a weird argument has the conclusion that I should stop giving to charities providing bednets, and instead pay to refill the Huel containers at some Longtermist org, I find it very hard to act on. But if the conclusion of an argument is simply that I should be doing a little more to promote astronomical amounts of longterm value, well, that doesn’t seem so bad! It’s easier to motivate yourself to give some money for a speculative gamble than to give all your charitable money for a speculative gamble.

This is one reason I disagree with the common argument that a person should only give to one charity—whichever one they think is best. If I had to donate to only one charity, I’d probably give less effectively. I’d end up convincing myself that the best charity is whichever effective charity I feel best about, and give all my money there. So even though I think ideal agents would probably give to only one charity, once you account for human fallibility, it makes sense to diversify. My guess is that others are the same; if people could only give to one charity, probably very few would go all in on the shrimp.

For similar reasons, you should refrain from doing things that might be extremely wrong on some ethical view. For instance, I refrain from eating happy animals. This is partly for practical reasons: it’s hard to know if the animal really was happy, and convincing other people to eat only happy animals just generally results in them eating factory farmed animals with nice labels slapped on the product.

But it’s also partly for reasons of moral uncertainty—while I am a utilitarian, it wouldn’t be completely shocking if deontology turned out to be right. If deontology is right and animals have rights, then eating meat is about as bad as being a serial killer. You shouldn’t risk doing something as bad as being a serial killer. Similarly, I would be wary about becoming an anti-Christian activist or abortion doctor, because there’s some chance that doing so is seriously and perhaps even infinitely bad—I don’t want to risk it!

(In a sane world, as Richard Y Chappell notes, people would similarly think about the serious moral risk of discouraging effective giving, given that effective giving prevents children from dying. Somehow, however, people seem to think that small disagreements with the ideology of certain EAs are sufficient cause for blanket denunciation, a decision which is likely to cause additional poor people to die.)


Suppose there are three possibilities which entail surprising moral conclusions. Suppose you give them each 30% odds. You might be tempted to dismiss them because any individual one is likely false. But the odds are ~2/3 that one of them is right. So if you diversify, if you take lots of high risk but high reward morally speculative actions, odds are decent that some of your actions will do lots of good!
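
(For concreteness, here is a minimal back-of-the-envelope sketch of that arithmetic, assuming for illustration that the three possibilities are independent; the numbers are just the ones from the paragraph above.)

```python
def chance_at_least_one_pays_off(probabilities):
    """Probability that at least one independent speculative bet pays off."""
    chance_all_fail = 1.0
    for p in probabilities:
        chance_all_fail *= 1 - p
    return 1 - chance_all_fail

# Three surprising moral possibilities, each given 30% odds:
print(chance_at_least_one_pays_off([0.3, 0.3, 0.3]))  # ~0.657, i.e. roughly 2/3

# Diversifying across more such possibilities pushes the chance higher:
print(chance_at_least_one_pays_off([0.3] * 5))        # ~0.83
```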

Comments

Thanks for the post, Matthew.

So what should one do in a situation like this? Well, we can take our cue from people in finance. What do finance people do when there is a risky business that might fail but might become worth a lot? They diversify! They invest in a hundred companies like this, knowing that though 90 of them may fail, the rest will succeed enough to make it worth it.

Investing in many companies to increase the chance of a big win makes sense because each investment has a potentially very large upside, but limited downside. In contrast, I believe charitable donations can have both a large upside and downside. For example, my best guess is that GiveWell's top charities increase the welfare of soil animals 610 k times as much as they increase the welfare of humans in expectation, but that the probability of them increasing welfare is only slightly above 50 %. They decrease soil-animal-years, but my probability for soil animals having negative lives is only slightly below 50 %.

But it’s also partly for reasons of moral uncertainty—while I am a utilitarian, it wouldn’t be completely shocking if deontology turned out to be right. If deontology is right and animals have rights, then eating meat is about as bad as being a serial killer.

It seems pretty clear to me that more animal farming decreases animal deaths, because it decreases the animal-years of soil animals way more than it increases the animal-years of farmed animals, and soil animals have shorter lives than farmed animals ("number of deaths" = "animal-years"/"life expectancy"). Moreover, I also think animal farming decreases animal deaths weighted by the absolute value of the expected welfare per animal-year of the animals involved. I estimate animal farming changes the welfare of soil animals much more than it decreases the welfare of farmed animals.

My guess would be others are the same; if people could only give to one charity, probably very few people would go all in on the shrimp.

@Bentham's Bulldog, why farmed shrimps instead of soil animals? I estimate soil ants, termites, springtails, mites, and nematodes have 8.89 M (= 1.76*10^23/(1.98*10^16)) times as many neurons in total as farmed shrimps, and I think the total number of neurons underestimates the importance of soil animals relative to shrimps. I assume you agree with this too? In the post linked above, you say the "estimate that shrimp suffer about 3.1% as intensely as humans" "is a highly conservative estimate", whereas Rethink Priorities (RP) estimates shrimps have 10^-6 as many neurons as humans (see Table 5 here).

What matters is increasing welfare as much as possible per $, and this need not imply prioritising increasing the welfare of the animals accounting for the vast majority of total welfare in absolute terms. However, I estimate the Shrimp Welfare Project’s (SWP’s) Humane Slaughter Initiative (HSI) increases the welfare of shrimps only 0.0292 % as cost-effectively as the Centre for Exploratory Altruism Research’s (CEARCH’s) High Impact Philanthropy Fund (HIPF) increases the welfare of humans and soil animals, the latter due to it decreasing 5.07 billion soil-animal-years per $.

Suppose there are three possibilities which entail surprising moral conclusions. Suppose you give them each 30% odds. You might be tempted to dismiss them because any individual one is likely false. But the odds are ~2/3 that one of them is right. So if you diversify, if you take lots of high risk but high reward morally speculative actions, odds are decent that some of your actions will do lots of good!

Agreed. At the same time, taking more actions also means a higher chance of some doing lots of harm.

Lots of people have two conflicting desires:

  1. Do the most good.
  2. Make sure you do some good.

These are in conflict. Sometimes the thing that does the most good in expectation has a low chance of doing any good.

I think the desire people in the effective altruism community have besides doing the most good is not so much making sure they do some good, but making sure they are overall doing good instead of harm. Doing some good, but lots of harm would not be appealing.

As it happens, I don’t think this attempt to make sure you do some good, rather than maximizing expected good, is rational. I think one should just try to do the most expected good without worrying about the probability that they do good.

I also believe trying to do the most expected good makes more sense, because expected good can vary a lot across portfolios, whereas their chance of being overall net positive/negative will in my mind remain close to 50 %. I suspect even electrically stunning shrimp, which I see as one of the interventions outside research with the highest chance of being beneficial, has something like a 60 % chance of increasing welfare due to effects on soil animals[1], and maybe 50 % accounting for microorganisms.

  1. ^

    I estimate eating shrimp increases the welfare of soil ants, termites, springtails, mites, and nematodes 223 times as much as it decreases the welfare of shrimp. So I believe electrically stunning shrimp would decrease welfare if it decreased the consumption of shrimp by more than 0.448 % (= 1/223) without increasing the consumption of anything else requiring agricultural land. However, the consumption of shrimp would in practice be replaced by something else requiring agricultural land, so it would have to decrease by even more than 0.448 % for effects on soil animals to dominate.

Now, this isn’t the best way to maximize expected value. If you were an expected value maximizing robot, you would not pursue this strategy. You would say “bleep bloop, this brings about 93.5 fewer expected utils than the other strategy.” But I assume you are not an EV maximizing robot.


Hmmm. This is interesting, as diversification is expected utility maximizing in the finance context. The fact that it is not EV maximizing in the utilitarian framework makes me wonder if there is something wrong with the framing.

The obvious difference is that EV is risk-neutral. I think this is usually justified by the fact that we are counting utils, whereas in finance, we are counting dollars. Arguably, it makes sense that utility is concave in dollars, but not that utility is (strictly) concave in itself.

Intuitively, this seems wrong to me. Imagine a choice set {A, B}. Choice A has a 50% chance of resulting in a world with 1 million sentient beings, all with 100 utils each, and a 50% chance of resulting in a world with zero sentient beings. Choice B results with certainty in a world with 500 thousand sentient beings, all with 100 utils each. I strictly prefer Choice B to Choice A, implying that I have a moral meta utility function that is concave in utils. This is sufficient to derive that diversification is optimal.
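
(To make that concrete, here is a small sketch with one illustrative concave meta-utility function, the square root of total utils. That particular function is my assumption, not the commenter's, but any strictly concave choice gives the same ranking.)

```python
import math

def meta_utility(total_utils):
    # One illustrative concave meta-utility function (an assumption for this
    # sketch): diminishing moral returns to total utils.
    return math.sqrt(total_utils)

# Choice A: 50% chance of 1,000,000 beings at 100 utils each, 50% chance of nothing.
expected_meta_utility_a = 0.5 * meta_utility(1_000_000 * 100) + 0.5 * meta_utility(0)

# Choice B: certainty of 500,000 beings at 100 utils each.
expected_meta_utility_b = meta_utility(500_000 * 100)

print(expected_meta_utility_a)  # 5000.0
print(expected_meta_utility_b)  # ~7071.1, so B is preferred, matching the stated intuition
```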

Feels like the most straightforwardly rational argument for portfolio diversification is the assumption that your EV and probability estimates almost certainly aren't the accurate, or at least unbiased, estimators they need to be for the optimal strategy to be sticking everything on the highest-EV option. Even more so when the probability that a given EV estimate is accurate is unlikely to be uncorrelated with whether it scores particularly highly (the good old optimiser's curse, with a dose of wishful thinking thrown in). Financiers don't trust themselves to be perfectly impartial about stuff like commodity prices in central Asia or binary bets on the value of the Yen on Thursday, and it seems unlikely that people who are extremely passionate about the causes they and their friends participate in, ahead of a vast range of other causes that nominally claim to do good, achieve a greater level of impartiality.

Pascalian odds seem particularly unlikely to be representative of the true best option (in plain English, a 0.0001% subjective probability assessment of a one-shot event is roughly "I don't really know what the outcome of this will be, and it seems like there could be many, many things more likely to achieve the same end"). You can make the assumption that if they appear to be robustly positive and neglected they might deserve funding anyway, but that is a portfolio argument...
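
(A minimal simulation of that selection effect, with made-up numbers purely for illustration: if every cause were in fact equally good but our EV estimates were noisy, the cause with the highest estimate would, on average, look much better than it really is.)

```python
import random

random.seed(0)

TRUE_VALUE = 1.0   # assume every cause is actually equally good
NOISE_SD = 0.5     # hypothetical noise in our EV estimates
N_CAUSES = 20
N_TRIALS = 10_000

winning_estimates = []
for _ in range(N_TRIALS):
    estimates = [TRUE_VALUE + random.gauss(0, NOISE_SD) for _ in range(N_CAUSES)]
    winning_estimates.append(max(estimates))  # the estimate we would act on

average_winner = sum(winning_estimates) / N_TRIALS
print(f"True value of every cause: {TRUE_VALUE}")
print(f"Average estimate of the apparent best cause: {average_winner:.2f}")
# The apparent best cause looks roughly twice as good as it really is, purely
# from selecting on noisy estimates, i.e. the optimiser's curse described above.
```

Spreading across several high-estimate causes dilutes that selection bias, which is the portfolio point.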

Interesting point, though I disagree--I think there are strong arguments for thinking that you should just maximize utility https://joecarlsmith.com/2022/03/16/on-expected-utility-part-1-skyscrapers-and-madmen/

Executive summary: The post argues that while pure expected value reasoning suggests focusing on a single best option, human motivation and moral uncertainty make it more practical and effective to diversify charitable giving and actions across speculative but potentially high-impact causes.

Key points:

  1. People often want to both maximize expected good and ensure they do some good, but these aims conflict when speculative high-EV activities carry a real risk of yielding no benefit.
  2. Though expected utility maximization is rational in theory, the author finds it psychologically difficult to act on when there’s a high chance of “wasting” effort or donations.
  3. A pragmatic solution is to diversify, analogous to financial portfolios: spread donations across multiple speculative causes (e.g., shrimp welfare, longtermism, chicken welfare, wild animal suffering, even Pascal’s wager) to increase confidence that some will have a significant impact.
  4. Diversification helps overcome motivational barriers, making it easier to act on “weird” philosophical arguments without committing all resources to them.
  5. Moral uncertainty further supports diversification, both by avoiding actions that could be extremely wrong from some ethical perspectives (e.g., eating meat, opposing religions, and abortion work) and by hedging across plausible moral frameworks.
  6. Even if ideal rational agents would concentrate on a single best charity, accounting for human fallibility and motivational limits, spreading resources may in practice lead to more effective giving.


This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

I was reminded of this post (Purchase Fuzzies and Utilons Separately), and it's something I do myself: work in some speculative EV-maximising space, but donate to "definitely doing good" things.

As somewhat of an amateur, it's good to hear I'm on the right track in taking expected value as a core concept of EA and factoring it into everyday decisions. I'm reassured when I read that the experts lend the same level of credence to it: a way of doing things that likely does an astronomical amount of good. Thank you for another contribution of great, well-written wisdom, Bentham! 🦐
