Rob Wiblin: One really important consideration that plays into Open Phil’s decisions about how to allocate its funding — and one that also bears importantly on how the effective altruism community ought to allocate its efforts — is worldview diversification. Can you explain what that is and how it plays into this debate?
Alexander Berger: Yeah, the central idea of worldview diversification is that the internal logic of a lot of these causes might be really compelling and a little bit totalizing, and you might want to step back and say, “Okay, I’m not ready to go all in on that internal logic.” So one example would be just comparing farm animal welfare to human causes within the remit of global health and wellbeing. One perspective on farm animal welfare would say, “Okay, we’re going to get chickens out of cages. I’m not a speciesist and I think that a chicken-day suffering in the cage is somehow very similar to a human-day suffering in a cage, and I should care similarly about these things.”
Alexander Berger: I think another perspective would say, “I would trade an infinite number of chicken-days for any human experience. I don’t care at all.” If you just try to put probabilities on those views and multiply them together, you end up with this really chaotic process where you’re likely to either be 100% focused on chickens or 0% focused on chickens. Our view is that that seems misguided. It does seem like animals could suffer. It seems like there’s a lot at stake here morally, and that there are a lot of cost-effective opportunities for us to improve the world this way. But we don’t think that the correct answer is to either go 100% all in, where we only work on farm animal welfare, or to say, “Well, I’m not ready to go all in, so I’m going to go to zero and not do anything on farm animal welfare.”
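To see why multiplying guessed probabilities through pushes you to the extremes, here is a minimal Python sketch. Every number in it is an invented assumption for illustration — the credences, moral weights, and cost-effectiveness ratio are not figures from this conversation — but it shows how straight expected-value maximization flips between 100% and 0% on chickens as the guessed probability shifts:

```python
# Illustrative sketch of the all-or-nothing problem (all numbers invented).
# Two hypotheses about the moral weight of a chicken relative to a human.

p_chickens_matter = 0.3    # guessed credence that a chicken-day ~ a human-day
weight_if_matters = 1.0    # chicken's moral weight if that view is right
weight_if_not = 0.0        # "I'd trade an infinite number of chicken-days"

# Assume (purely for illustration) a dollar on corporate cage-free campaigns
# affects 1,000x as many individuals as a dollar on human causes.
chickens_helped_per_dollar = 1000
humans_helped_per_dollar = 1

ev_chickens = (p_chickens_matter * weight_if_matters
               + (1 - p_chickens_matter) * weight_if_not) * chickens_helped_per_dollar
ev_humans = humans_helped_per_dollar

# A naive expected-value maximizer funds whichever is larger -- all or nothing.
print(ev_chickens, ev_humans)  # 300.0 vs 1 -> allocate 100% to chickens
# Rerun with p_chickens_matter = 0.0005: ev_chickens = 0.5 < 1 -> 0% to chickens.
```

The point of the sketch is the instability: a few orders of magnitude of movement in a number you are basically guessing swings the "correct" allocation from everything to nothing, which is the chaotic behavior worldview diversification is trying to avoid.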
Alexander Berger: We’re able to work on multiple things, and the effective altruism community is able to work on multiple things. A lot of the idea of worldview diversification is to say that, even though the internal logic of some of these causes might be so totalizing, so demanding, and ask so much of you, being able to preserve space to say, “I’m going to make some of that bet, but I’m not ready to make all of that bet,” can be a really important move at the portfolio level — for people to make in their individual lives, but also for Open Phil to make as a big institution.
Rob Wiblin: Yeah. It feels so intuitively clear that when you’re to some degree picking these numbers out of a hat, you should never go 100% or 0% based on stuff that’s basically just guesswork. I guess the challenge here seems to have been trying to make that philosophically rigorous, and coming up with a truly philosophically grounded justification for it has proved quite hard. But nonetheless, we’ve decided to go with something that’s a bit more cluster thinking, a bit more embracing common sense and refusing to do something that seems obviously mad.
This is also how I think about the meat eater problem — the worry that saving or enriching human lives increases meat consumption, and hence farm animal suffering. I have a lot of uncertainty about the moral weight of animals, and I see funding and working on both animal welfare and global development as a compromise position that is good across all worldviews. (Your credence in the meat eater problem can reduce how much you want to fund global development on the margin, but not eliminate it altogether — see the sketch below.)
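To make that parenthetical concrete, here is a hedged Python sketch. All credences and values are invented for illustration, and the bucket-style allocation rule is one simple way to model worldview diversification, not a description of how Open Phil actually allocates. It contrasts a naive expected-value rule, which cuts development funding to zero the moment the credence-weighted value goes negative, with a diversified allocation, where higher credence in the meat eater problem shrinks the development share but never zeroes it out:

```python
# Illustrative comparison of two decision rules (all numbers invented).

def ev_maximizer_share(p_problem: float,
                       value_if_problem: float = -0.2,
                       value_if_not: float = 1.0) -> float:
    """Fund development fully iff its credence-weighted value is positive."""
    ev = p_problem * value_if_problem + (1 - p_problem) * value_if_not
    return 1.0 if ev > 0 else 0.0

def diversified_share(p_problem: float) -> float:
    """Bucket rule: the 'development is good' worldview gets a budget share
    equal to your credence in it, so the share shrinks but never hits zero."""
    return 1.0 - p_problem

for p in (0.2, 0.5, 0.9):
    print(p, ev_maximizer_share(p), diversified_share(p))
# p=0.2: EV rule -> 1.0, diversified -> 0.8
# p=0.5: EV rule -> 1.0 (EV = 0.4 > 0), diversified -> 0.5
# p=0.9: EV rule -> 0.0 (EV = -0.08 < 0), diversified -> 0.1
```

Notice the difference in behavior: the expected-value rule jumps discontinuously from full funding to none as the credence crosses a threshold, while the diversified rule degrades gracefully, which is the "reduce on the margin, but not eliminate" behavior described above.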
I’m worried that the chunk of EA that is concerned with effective near-term human charities is at risk of being net negative.