(Cross-posted from my Substack The Ethical Economist: a blog covering Economics, Ethics and Effective Altruism.)
Don’t give to just one charity; there are so many good charities to give to! Also, what if the charity turns out to be ineffective? Then you’ve wasted all your money and done no good. Don’t worry, there’s a simple solution: just give to multiple charities and spread the risk!
Thinking along these lines is natural. Whether it’s risk aversion or just an inherent desire to support multiple charities or causes, most of us diversify our philanthropic giving. If your goal is to do the most good, however, you should fight this urge with all you’ve got.
Diversification does make some sense, some of the time. If you’re going to fill a charity’s budget with your giving, then any further giving should probably go elsewhere.
But most of us are small donors. Our giving usually won’t fill a budget or hit diminishing returns. It certainly won’t hit diminishing returns at the level of an entire cause area, unless perhaps you’re a billionaire philanthropist, in which case: well done you.
When you’re deciding where to give, you likely have some idea of what the best option is. Maybe you want to help animals and are quite uncertain about how best to do so, but you lean towards thinking that giving to The Humane League (THL) to support their corporate campaigns is slightly better on the margin than giving to Faunalytics to support their research, even though you think there’s some chance either option is ineffective. In that case, you should give your full philanthropic budget to THL. Fight the urge to give to both charities to cover your back in case you’ve made the wrong choice.
Giving to both charities reduces the risk that you do no good at all. But because you subjectively think that THL is slightly better than Faunalytics, it also reduces the amount of good you will do in expectation. If you think THL is the best option, why give to anything else? Giving to both means trading away expected good done for more certainty that you, personally, will have done some good. That’s putting your own satisfaction ahead of the expected good of the world. Don’t be that person.
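To make the arithmetic concrete, here’s a minimal sketch in Python. The cost-effectiveness figures are made up purely to reflect the “slightly better” hunch, and constant marginal returns are assumed, which is plausible at small-donor scale:

```python
# A minimal sketch with made-up numbers: expected units of good done per
# dollar under your own subjective estimates. The figures below are purely
# illustrative, chosen only so that THL looks slightly better on the margin.

BUDGET = 1_000  # dollars

expected_good_per_dollar = {
    "THL": 0.011,          # your hunch: slightly better...
    "Faunalytics": 0.010,  # ...than Faunalytics, on the margin
}

def expected_good(allocation):
    """Expected good done by an allocation {charity: dollars},
    assuming constant marginal returns."""
    return sum(dollars * expected_good_per_dollar[charity]
               for charity, dollars in allocation.items())

all_in = {"THL": BUDGET}
split = {"THL": BUDGET / 2, "Faunalytics": BUDGET / 2}

print(expected_good(all_in))  # ~11.0
print(expected_good(split))   # ~10.5: the split does less good in expectation
```

Under these assumptions, any split shifts dollars from the higher-expectation option to the lower one, so going all in always wins in expectation.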
At this point you might push back and say that I haven’t convincingly shown there’s anything wrong with being risk averse in this way, that is, risk averse with respect to the amount of good a particular individual does. Fair enough, so let me try something a bit more formal.
A recent academic paper by Hilary Greaves, William MacAskill, Andreas Mogensen and Teruji Thomas explores the tension between “difference-making risk aversion” and benevolence. Consider the table below.
| Outcome goodness | Heads | Tails |
|---|---|---|
| Do nothing | 10 | 0 |
| Give to Charity A | 20 | 10 |
| Give to Charity B | 10+x | 20+x |
A fair coin will be flipped, and the table shows the goodness of the outcome in each state, depending on whether we do nothing, give to Charity A, or give to Charity B. The coin represents our current uncertainty about how things will turn out.
We have a hunch that giving to Charity B is better. Charity B differs from Charity A in that, instead of a ½ probability of getting 20, it offers a ½ probability of getting 20+x, and instead of a ½ probability of getting 10, it offers a ½ probability of getting 10+x. So as long as x > 0, it’s clearly better to give to Charity B. In technical language, we say that giving to Charity B stochastically dominates giving to Charity A.
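If you want to check this mechanically, here’s a small Python sketch. Setting x = 1 is purely illustrative, and the dominance test below relies on the fact that both options have two equally likely outcomes:

```python
# Sanity-checking the coin-flip example. x = 1 is purely illustrative;
# any x > 0 gives the same verdict.
x = 1

# Payoffs as [heads, tails], each with probability 1/2.
do_nothing = [10, 0]
charity_a  = [20, 10]
charity_b  = [10 + x, 20 + x]

def expected(payoffs):
    # Fair coin: each state is equally likely.
    return sum(payoffs) / len(payoffs)

def stochastically_dominates(p, q):
    """First-order stochastic dominance for two options with equally
    likely outcomes: compare sorted payoff lists position by position."""
    p_sorted, q_sorted = sorted(p), sorted(q)
    return (all(a >= b for a, b in zip(p_sorted, q_sorted))
            and p_sorted != q_sorted)

print(expected(do_nothing), expected(charity_a), expected(charity_b))  # 5.0 15.0 16.0
print(stochastically_dominates(charity_b, charity_a))  # True for any x > 0
```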
Now, instead of ‘outcome goodness’, let’s consider the ‘difference made’ by giving to either charity, relative to doing nothing (this is just simple subtraction using the table above).
| Difference made | Heads | Tails |
|---|---|---|
| Do nothing | 0 | 0 |
| Give to Charity A | 10 | 10 |
| Give to Charity B | x | 20+x |
A key thing to notice is that an individual with ‘difference-making risk aversion’ might prefer to give to Charity A. Giving to Charity A means doing 10 units of good for sure. But if x is small, giving to Charity B means doing very little good if the coin lands heads. A risk-averse individual will tend to want to avoid this bad outcome.
So being risk averse about the difference you make might mean wanting to give to Charity A. But we already concluded above that this is a mistake: giving to Charity B stochastically dominates giving to Charity A!
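One simple way to model a difference-making risk-averse evaluator (a sketch, not the paper’s formalism) is to average a concave utility, here a square root, of the difference made across states. With x = 1, Charity A comes out on top for the risk-averse evaluator even though Charity B is better in expectation and stochastically dominates:

```python
import math

# Difference made in each equally likely state [heads, tails], from the
# table above, with x = 1 purely for illustration.
x = 1
charity_a = [10, 10]
charity_b = [x, 20 + x]

def expected_value(outcomes):
    return sum(outcomes) / len(outcomes)

def risk_averse_score(outcomes):
    """One way (an assumption of this sketch) to model difference-making
    risk aversion: average a concave utility (sqrt) of the difference made."""
    return sum(math.sqrt(o) for o in outcomes) / len(outcomes)

print(expected_value(charity_a), expected_value(charity_b))  # 10.0 11.0
print(risk_averse_score(charity_a))  # ~3.16
print(risk_averse_score(charity_b))  # ~2.79: A scores higher despite B's dominance
```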
What we see here is that ‘difference-making risk aversion’ can lead us astray: in our effort to avoid doing little good, we make a very poor decision under uncertainty. The key takeaway is that we shouldn’t indulge our ‘difference-making risk aversion’. If we truly care about ensuring that the most good is done, we should resist the urge to diversify, whether across charities or across cause areas.
To you, reader, I say this: work out which option you think does the most good in expectation, and give it everything you’ve got.

Yes, I think the argument would probably hold under maximizing expected choiceworthiness (MEC), ignoring indirect reasons like those I gave, although I think MEC is a pretty bad approach relative to the alternatives.
I also think your instinct to look for a single option that does well across views is at odds with most approaches to normative uncertainty in the literature, including MEC, and with what I think is a pretty reasonable requirement for a good approach to normative uncertainty. Suppose you have two moral views, A and B, each with 50% weight, and three options with moral values per unit of resources along the following lines, where the first entry of each pair is the moral value under A and the second is under B (not assuming A and B use the same moral units):

1. (4, -1)
2. (-1, 4)
3. (1, 1)
Picking just option 1 or just option 2 means causing net harm according to one of A or B, but option 3 does well on both. However, picking just option 3 is strictly worse than 50% option 1 + 50% option 2, which has value (1.5, 1.5): better on both views.
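Here’s a quick check of that claim in Python, using the illustrative numbers above; a mixed allocation’s value under each view is just the share-weighted average of the pure options’ values:

```python
# Values per unit of resources under views A and B, using the
# illustrative numbers from the list above.
options = {
    1: (4, -1),
    2: (-1, 4),
    3: (1, 1),
}

def value_of(allocation):
    """Value under (A, B) of a mixed allocation {option: share of resources}."""
    value_a = sum(share * options[o][0] for o, share in allocation.items())
    value_b = sum(share * options[o][1] for o, share in allocation.items())
    return (value_a, value_b)

print(value_of({3: 1.0}))          # (1.0, 1.0): positive on both views...
print(value_of({1: 0.5, 2: 0.5}))  # (1.5, 1.5): ...but the mix beats it on both
```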
And we shouldn't be surprised to find ourselves in situations where mixed options beat single options that do well across views, because when you optimize for A, you don't typically expect the downside under B to be worse than what optimizing for B can easily make up for, and vice versa. For example, corporate campaigns seem more cost-effective at reducing farmed animal suffering than GiveWell interventions are at causing it, because the former are chosen specifically to minimize farmed animal suffering, while GiveWell interventions are not chosen to maximize it.
Furthermore, assuming constant marginal returns, MEC would never recommend mixed options (except for indirect reasons) unless the numbers really did line up so that options 1 and 2 had the exact same expected choiceworthiness, and even then it would merely be indifferent between pure and mixed options. It would be an extraordinarily unlikely coincidence for two options to have the exact same expected choiceworthiness for a rational Bayesian with precise probabilities.
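To see why, here’s a sketch that, purely for illustration, treats the two views’ units as directly comparable. With constant marginal returns, expected choiceworthiness is linear in the allocation, so the expected choiceworthiness of any mix is a weighted average of the pure options’ values and can never exceed the best pure option:

```python
# Expected choiceworthiness (EC) under MEC, treating A's and B's units as
# directly comparable purely for illustration, with 50% credence in each view.
credences = {"A": 0.5, "B": 0.5}
options = {1: (4, -1), 2: (-1, 4), 3: (1, 1)}  # same illustrative numbers

def ec(option):
    value_a, value_b = options[option]
    return credences["A"] * value_a + credences["B"] * value_b

print({o: ec(o) for o in options})  # {1: 1.5, 2: 1.5, 3: 1.0}

# EC is linear in the allocation, so any mix's EC is a weighted average of
# pure ECs. Options 1 and 2 tie here only because the illustrative numbers
# line up exactly; perturb either and MEC recommends a single pure option.
```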