We wanted to focus on a specific and somewhat manageable question related to AI vs. non-AI cause prioritization. You're right that it's not the only important question to ask. If you think the following claim is true - 'non-AI projects are never undercut but always outweighed' - then it doesn't seem like an important question at all. I doubt that claim holds generally, for the reasons presented in the piece. When deciding what to prioritize, there are also broader strategic questions that matter - how money and effort are being allocated by other parties, what your comparative advantage is, etc. - that we don't touch on here.
By calling out one kind of mistake, we don't want to incline people toward making the opposite mistake. We are calling for more careful evaluations of projects, both within AI and outside of AI. But we acknowledge the risk of focusing on just one kind of mistake (and focusing on an extreme version of it, to boot). We didn't pursue comprehensive analyses of which cause areas will remain important conditional on short timelines (and the analysis we did give was pretty speculative), but that would be a good future project. Very near future, of course, if short-ish timelines are correct!
You make a helpful point. We've focused on a pretty extreme claim, but there are more nuanced discussions in the area that we think are important. We do think that "AI might solve this" can take chunks out of the expected value of lots of projects (and we've started kicking around some ideas for analyzing this). We've also done some work on how the background probabilities of x-risk affect the expected value of x-risk projects.
I don't think that we can swap one general heuristic (e.g. AI futures make other work useless) for a more moderate one (e.g. AI futures reduce EV by 50%). The possibilities that "AI might make this problem worse" or "AI might raise the stakes of decisions we make now" can also amplify the EV of our current projects. Figuring out how AI futures affect cost-effectiveness estimates today is complicated, tricky, and necessary!
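To make that concrete, here's a toy scenario-weighting calculation. Every probability and value below is invented for illustration; the only point is that the same "adjust for AI futures" exercise can shrink or grow a project's expected value depending on which possibilities you take seriously.

```python
# Toy scenario-weighting of a project's cost-effectiveness across AI futures.
# All numbers are invented; the point is that the same exercise can deflate
# or amplify expected value, so a flat "discount by 50%" heuristic is too crude.

BASELINE = 100  # hypothetical value per $ if we ignore AI futures entirely

def expected_value(scenarios):
    """Probability-weighted value across (probability, value-per-$) scenarios."""
    return sum(p * v for p, v in scenarios)

# Reading A: AI probably solves the problem anyway -> EV shrinks.
reading_a = [
    (0.6, 0),         # AI solves the problem; the project adds ~nothing
    (0.4, BASELINE),  # business as usual
]

# Reading B: AI may raise the stakes of acting now -> EV grows.
reading_b = [
    (0.3, 0),             # AI solves the problem
    (0.4, BASELINE),      # business as usual
    (0.3, 4 * BASELINE),  # AI makes the problem worse / raises the stakes
]

print(expected_value(reading_a))  # 40.0  -- "AI might solve this" takes a chunk out
print(expected_value(reading_b))  # 160.0 -- "AI might raise the stakes" amplifies
```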
Thanks for the helpful addition. I'm not an expert in the x-risk funding landscape, so I'll defer to you. Sounds like your suggestion could be a sensible one on cross-cause prio grounds. It's possible that this dynamic illustrates a different pitfall of only making prio judgments at the level of big cause areas. If we lump AI in with other x-risks and hold cause-level funding steady, funding between AI and non-AI x-risks becomes zero sum.
Hi Michael!
You've identified a really weak plank in the argument against AI solving factory farming. I agree that capacity-building is not a significant bottleneck, for a lot of the reasons you present.
I think the key issue is whether there will be social and legal barriers that prevent people from switching to farmed animal alternatives. These barriers might prevent the kinds of capacity build-up that would make alternative proteins economically competitive.
I think I might be more pessimistic than you about whether people want to switch to more humane alternatives (and would do so if they were wealthier). That willingness is probably there for welfare-enhanced meat (as we see with many affluent customers today). I'm less confident about willingness to switch to lab-grown meat or other alternatives.
I'm quite curious about a scenario in which massive capacity for producing alt proteins is built without cultural buy-in, making alt proteins far cheaper than animal proteins. The economic incentives to switch could cause quite swift cultural changes. But I'm very uncertain when trying to predict cultural change.
Depending on the allocation method you use, you can still have high credence in expected total hedonistic utilitarianism and get allocations that give some funding to GHD projects. For example, in this parliament, I assigned 50% to total utilitarianism, 37% to total welfarist consequentialism, and 12% to common sense (these were picked semi-randomly for illustration). I set diminishing returns to 0 to make things even less likely to diversify. Some allocation methods (e.g. maximin) give everything to GHD, some diversify (e.g. bargaining, approval), and some (e.g. MEC) give everything to animals.
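In case it helps to see why the aggregation rule does so much work, here's a simplified sketch of two of the methods mentioned (MEC and maximin) run on made-up numbers. The credences roughly follow the ones above (normalized to sum to 1), but the per-cause values are invented, and this is not the parliament tool's actual implementation - just an illustration of how the same credences can yield very different allocations.

```python
# Toy sketch of two allocation methods, with made-up worldview values per cause.
# NOT the parliament tool's implementation; it only illustrates why different
# aggregation rules can land on very different allocations from the same credences.
import itertools

causes = ["GHD", "Animals", "X-risk"]

# (credence, value-per-dollar by cause) for each worldview; credences roughly
# follow the comment above (normalized), cause values are invented.
worldviews = {
    "total utilitarianism":             (0.50, [1.0, 5.0, 3.0]),
    "total welfarist consequentialism": (0.37, [1.5, 4.0, 2.0]),
    "common sense":                     (0.13, [2.0, 0.5, 0.5]),
}

def mec_allocation():
    """Maximize expected choiceworthiness: all funding goes to the cause with
    the highest credence-weighted value."""
    expected = [sum(cred * vals[i] for cred, vals in worldviews.values())
                for i in range(len(causes))]
    best = max(range(len(causes)), key=lambda i: expected[i])
    return {c: (1.0 if i == best else 0.0) for i, c in enumerate(causes)}

def maximin_allocation(step=0.05):
    """Pick the allocation (searched on a coarse grid) that maximizes the
    evaluation of the worst-off worldview."""
    ticks = [round(i * step, 2) for i in range(int(1 / step) + 1)]
    best_alloc, best_min = None, float("-inf")
    for g, a in itertools.product(ticks, repeat=2):
        x = 1.0 - g - a
        if x < -1e-9:
            continue
        alloc = [g, a, max(x, 0.0)]
        worst = min(sum(w * xi for w, xi in zip(vals, alloc))
                    for _, vals in worldviews.values())
        if worst > best_min:
            best_min, best_alloc = worst, alloc
    return {c: round(v, 2) for c, v in zip(causes, best_alloc)}

print("MEC:    ", mec_allocation())      # everything to Animals with these numbers
print("Maximin:", maximin_allocation())  # a GHD-heavy mix with these numbers
```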
With respect to your second question, it wouldn't follow that we should give money to causes that benefit the already well-off. Lots of worldviews that favor GHD will also favor projects to benefit the worst off (for various reasons). What's your reason for thinking that they mustn't? For what it's worth, this comes out in our parliament tool as well. It's really hard to get any parliament to favor projects that don't target suffering (like Artists Without Borders).
Our estimate uses Saulius's years/$ estimates. To convert to DALYs/$, we weighted by the amount of pain experienced by chickens per year. The details can be found in Laura Duffy's report here, and a rough sketch of the conversion arithmetic follows the quoted factors below. The key bit:
I estimated the DALY equivalent of a year spent in each type of pain assessed by the Welfare Footprint Project by looking at the descriptions of and disability weights assigned to various conditions assessed by the Global Burden of Disease Study in 2019 and comparing these to the descriptions of each type of pain tracked by the Welfare Footprint Project.
These intensity-to-DALY conversion factors are:
- 1 year of annoying pain = 0.01 to 0.02 DALYs
- 1 year of hurtful pain = 0.1 to 0.25 DALYs
- 1 year of disabling pain = 2 to 10 DALYs
- 1 year of excruciating pain = 60 to 150 DALYs
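For anyone who wants to see the shape of that conversion, here's a minimal sketch. The DALY factors are just midpoints of the ranges above, and the pain-years-per-dollar figures are placeholders rather than Saulius's actual estimates (those, and the full breakdown, are in the linked report).

```python
# Shape of the conversion described above, with placeholder inputs.
# `pain_years_per_dollar` stands in for Saulius-style estimates of how many
# chicken-years of each pain intensity a dollar averts -- the real figures
# are in the linked report, not here.

daly_per_pain_year = {           # midpoints of the ranges listed above
    "annoying":     0.015,       # 0.01 - 0.02
    "hurtful":      0.175,       # 0.1  - 0.25
    "disabling":    6.0,         # 2    - 10
    "excruciating": 105.0,       # 60   - 150
}

pain_years_per_dollar = {        # hypothetical placeholders, NOT real estimates
    "annoying":     1.0,
    "hurtful":      0.5,
    "disabling":    0.01,
    "excruciating": 0.0001,
}

dalys_per_dollar = sum(daly_per_pain_year[p] * pain_years_per_dollar[p]
                       for p in daly_per_pain_year)
print(f"{dalys_per_dollar:.3f} DALYs averted per dollar (toy numbers)")
```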
Here’s one method that we’ve found helpful when presenting our work. To get a feel for how the tools work, we set challenges for the group: find a set of assumptions that gives all resources to animal welfare; work out how risk averse you’d have to be to favor GHD over x-risk; identify which moral views best favor longtermist causes. Then, have the group discuss whether and why those assumptions would support those conclusions. Our accompanying reports are often designed to address these very questions, so that might be a way to find the posts that really matter to you.
I think that I’ve become more accepting of cause areas that I was not initially inclined toward (particularly various longtermist ones) and also more suspicious of dogmatism of all kinds. As we developed and used the tools, it became clear that there were compelling moral reasons in favor of almost any course of action, and that slight shifts in my beliefs about risk aversion, moral weights, aggregation methods, etc. could lead me to very different conclusions. This inclines me more toward very significant diversification across cause areas.
I think there are probably cases of each. For the former, there might be some large interventions in things like factory farming or climate change (i) that could have huge impacts and (ii) for which we don't think AI will be particularly efficacious.
For the latter, here are some cases off the top of my head. Suppose we think that if AI is used to make factory farming more efficient and pernicious, it will be via X (idk, some kind of precision farming technology). Efforts to make X illegal look a lot better after accounting for AI. Or, right now, making it harder for people to buy ingredients for biological weapons might be a good bet but not a great one. It reduces the chances of bioweapons somewhat, but knowledge about how to create weapons is the main bottleneck. If AI removes that bottleneck, then those projects look a lot better.