Key Takeaways
- Optimizing your giving's effect on "EA's portfolio" implies you should fund the causes your value system thinks are most underfunded by EA's largest allocators (e.g. Open Phil and SFF).
- These causes aren't necessarily your value system's most preferred causes. ("Preferred" = the ones you'd allocate the plurality of EA's resources to.)
- For the typical EA, this would likely imply donating more to animal welfare, which is currently heavily underfunded by the lights of the typical EA's value system.
- Opportunities Open Phil is exiting from, including invertebrates, digital minds, and wild animals, may be especially impactful.
Alice's Investing Dilemma: A Thought Experiment
Alice is a conservative investor who prefers the risk-adjusted return of a portfolio of 70% stocks and 30% bonds. Along with 9 others, Alice has been allocated $1M to split between stocks and bonds however she sees fit. The combined $10M portfolio will be held for 10 years, and its profits or losses will be split equally among the 10 portfolio managers. The other 9 portfolio managers tell Alice they're planning to go with 100% stocks.
Alice's preferred asset is stocks (in the sense that if she could control the whole combined portfolio, she'd allocate the majority to stocks). However, the underallocated asset (by Alice's risk-adjusted return preference) is bonds. In this case, Alice best realizes her preferences by allocating her entire $1M to bonds! This holds even though Alice prefers stocks to bonds.
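Alice's best response can be checked with a quick back-of-the-envelope calculation. This is a minimal sketch using the thought experiment's figures; the $100k step size is an arbitrary choice for illustration:

```python
# Alice's preferred split for the combined $10M portfolio: 70% stocks.
target_stock_share = 0.70

others_stocks = 9_000_000  # the other 9 managers each go 100% stocks
total = 10_000_000         # combined portfolio size

best = None
# Try allocating Alice's $1M to stocks in $100k increments.
for alice_stocks in range(0, 1_000_001, 100_000):
    stock_share = (others_stocks + alice_stocks) / total
    gap = abs(stock_share - target_stock_share)
    if best is None or gap < best[1]:
        best = (alice_stocks, gap)

print(best[0])  # 0 -> Alice's best move is to put $0 in stocks, i.e. 100% bonds
```

Even with everything in bonds, the combined portfolio only gets to 90/10, but that is as close to Alice's preferred 70/30 as she can move it.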
In Charity, We Should Optimize The Portfolio of Everyone's Actions
In Alice's investing dilemma, the premise that's doing the work is that Alice wants to optimize the combined portfolio instead of her particular $1M share.
In the case of effective giving, we typically focus on our giving's direct impact, but not on how it fits into the portfolio of the net effect of everyone's actions. But optimizing the portfolio of everyone's actions seems to directly follow from EA principles:
- The recipient of charity doesn't care who's giving it, so it seems like a bias to focus on the part of "the portfolio of everyone's actions" that happens to be your actions, rather than on the whole (or any other particular part).
- Reducing funging to ensure counterfactual impact is already one way of reasoning about the effect of your giving on the portfolio of everyone's actions. This proposal simply extends that idea to also optimize for your value system's objectives.
There are many legitimate reasons to not overemphasize optimizing the portfolio of everyone's actions, such as many people's concerns about personally making a difference. However, I think we should put more thought into optimizing the portfolio of everyone's actions than we currently do.
Theoretical Implications
- You should prefer funding the causes your value system thinks are the world's most underallocated.
- These causes are not necessarily your value system's most preferred causes! ("Preferred" = the ones you'd allocate the plurality of the world's resources to.)
"The Portfolio of Everyone's Actions" vs "EA's Portfolio"
In theory, this post argues that you should be optimizing the portfolio of the net effect of anything anyone will ever do (under your value system). But that's obviously intractable!
To make this tractable, one can assume that non-EA-aligned actions have negligible net effect relative to EA-aligned actions. Under that assumption, "optimizing the portfolio of everyone's actions" reduces to optimizing the portfolio of EA's resource allocation. If you think this assumption is generally accurate, some practical recommendations follow.
Practical Recommendations
Many EAs split their personal donations between cause areas including global health, animal welfare, and longtermism. If you're optimizing EA's portfolio, you probably shouldn't do this. Instead, you should identify which cause area your value system says EA most underfunds, and only donate there.
I personally believe longtermist interventions have the highest expected value, and I would allocate the plurality of EA resources to them if I could. But due to risk aversion, I think a substantial portion of our resources should go towards reducing near-term suffering, which animal welfare interventions do most cost-effectively. Since my value system says animal welfare is more underfunded than longtermism, when optimizing EA’s portfolio, it seems best for me to donate only to animal welfare. This holds even though longtermism is my preferred cause area.
For other value systems, the implications could be completely different! If Bob doesn't care about animals, but would want to split EA’s resources between 30% global health and 70% longtermism, optimizing EA’s portfolio by Bob’s value system means he should donate only to longtermist interventions.
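The decision rule in these examples can be sketched as a small calculation. The current-allocation numbers below are made up purely for illustration; "Bob" and his 30/70 ideal split are from the example above:

```python
# Bob's ideal split of EA's resources vs. a hypothetical current split.
target = {"global_health": 0.30, "longtermism": 0.70}
current = {"global_health": 0.60, "longtermism": 0.40}   # illustrative only

# The most underfunded cause is the one where target most exceeds current.
gap = {cause: target[cause] - current[cause] for cause in target}
most_underfunded = max(gap, key=gap.get)
print(most_underfunded)  # longtermism
```

A marginal donation then goes entirely to `most_underfunded`, since that moves the overall portfolio closest to Bob's target split.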
EA's Current Resource Allocations
Knowing EA's current resource allocations would be helpful if you think this post's recommendations have merit. The most complete and up-to-date reference I know of is Tyler Maule's from November 2023:
| Global health | Animal welfare | Longtermism | Meta |
| --- | --- | --- | --- |
| 70.4% | 5.5% | 16.2% | 7.9% |
The consensus of EA leaders and the EA community is that global health is overfunded. If global health is excluded, Tyler's aggregation gives:
| Animal welfare | Longtermism | Meta |
| --- | --- | --- |
| 18.7% | 54.5% | 26.8% |
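The second table is just the first with global health removed and the remaining shares renormalized. A quick sketch of that computation (note that renormalizing the rounded percentages above gives values a few tenths of a point off from Tyler's figures, which presumably come from unrounded totals):

```python
# Tyler Maule's November 2023 shares (from the first table), in percent.
shares = {"animal_welfare": 5.5, "longtermism": 16.2, "meta": 7.9}

# Renormalize after excluding global health.
total = sum(shares.values())  # 29.6
renormalized = {c: round(100 * v / total, 1) for c, v in shares.items()}
print(renormalized)
```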
Some other potentially helpful aggregations:
- Open Phil grants by cause area by Hamish McDoodles (updated daily)
- Resource allocations by cause area by Ben Todd (as of 2019)
If anyone is interested in maintaining a more complete and up-to-date aggregation, that could be impactful. The EA community could use that as a canonical resource to better target EA's most underallocated causes.
I think the presentation of this argument here misses some important considerations:
The way that you want us to act with respect to OP is already the way that OP is trying to act with respect to the rest of the world
EAs don't fund the most important causes, based purely on scale (otherwise tonnes of things EAs ignore would score highly, e.g. vaccination programs in rich countries). A core part of EA is looking for causes which are neglected. We look for the areas that are receiving the least funding relative to what they would receive in our ideal world, because these are likely to be the areas where our donations will have the highest marginal impact.
This is the reply to people who argue "oh you want local charities to disappear and to send all the money to malaria nets". The reply is: "No! In my ideal world, malaria nets would quickly attract all the funding they need. Then there would still be plenty of money left over for other things. But I think I should look at the world I actually live in, recognize that malaria nets are outrageously underfunded, and give all my resources there."
So in a sense, the argument you are making here isn't anything new. You are just saying we should try to act towards other EAs in a similar way to how EAs as a group act towards the rest of the world. And I don't disagree with this. But I think we should go all the way: we should treat other EAs in exactly the same way that we treat the rest of the world, whereas, if I understand your argument correctly, you are drawing a distinction between the EA community and everyone else.
The same considerations that lead OP to choose not to allocate all their funds to the highest expected value cause should also be relevant for individual donors
OP do not allocate all of their funding to the 'best' cause. Even if OP were a pure EV maximizer, they might have valid reasons not to do this, because they have such a big budget. It may be that diminishing marginal returns mean that the 'best' cause stops being the best once OP have given a certain level of funds to it, at which point they should switch to funding another cause instead.
But my impression is that this is not OP's reason for donating to multiple causes (or at least not their only reason). They are not purely trying to maximize expected value, or at least not in a naive first-order way. One reason to diversify might be donor risk aversion, as you mention (e.g. you want to maximize EV while bounding the risk that you have no positive impact at all), and plenty of other considerations might come into it too: a sense of duty to a certain cause, reputation, a belief in unquantifiable uncertainty, the impossibility of making certain cause comparisons, etc.
But if these considerations are valid for OP, then they should also be relevant for individual donors. For example, if an individual donor wants to bound the risk that they have no impact, that might well mean not donating everything to the cause they think is most underfunded by OP. Donating everything to that one cause would only make sense under a strange kind of risk aversion that bounds the risk that the EA community as a whole has no positive impact, while being unconcerned about the risk to one's own donations. This seems very arbitrary! Either donors should care about the risk of their own donations, and should diversify, or they should be concerned with all of humanity's donations, in which case OP should not be diversifying either!
Pure EV maximizers don't care about percentages anyway
You could bite the bullet and say that neither OP nor individual donors should be diversifying their donations (except when faced with diminishing marginal utility). Such donors should give everything to one cause (and probably one charity, unless they have a lot to give!). But even for these donors, it's not the causes OP underfund that really matter; it's the causes all of humanity underfunds. So it is not the percentages of OP's funding allocation that matter, but the absolute amounts.
If OP are a relatively small player in a cause area (global health?), then their donation decisions are unlikely to be especially relevant to the individual donor: if global health was the top cause before OP's donations were taken into account, it probably still will be afterwards. But if OP are a relatively big player (animal welfare?), then their donations are more relevant, due to diminishing marginal utility. Either way, it is the absolute amount of funding they are moving, not the percentages, that determines this.