Principal — Good Structures
I previously co-founded and served as Executive Director at Wild Animal Initiative, and was the COO of Rethink Priorities from 2020 to 2024.
I think this is true as a response in certain cases, but many philanthropic interventions probably aren't tried enough times to get a meaningful sample size, and lots of communities are small. It's pretty easy to imagine a situation like:
It seems like this response would imply you should only do EV maximization if your movement is large (or that its impact is reliably predictable if the movement is large).
But I do think this is a fair point overall — though you could imagine a large system of interventions with the same features I describe that would have the same issues as a whole.
I don't think this is quite what I'm referring to, but I can't quite tell! My quick read is that we're talking about different things (I think because I used the word utility very casually). I'm not talking about my own utility function with regard to some action, but about the potential outcomes of that action on others, and I don't know that I'm embracing risk-averse views so much as relating to their appeal.
Or maybe I'm misunderstanding, and you're just rejecting the conclusion that there is a moral difference between taking, say, an action with +1 EV and a 20% chance of causing harm and an action with +1 EV and a 0% chance of causing harm, or you think I just shouldn't care about that difference?
I think I mean something slightly different from difference-making risk aversion, but I see what you're saying. I don't even know if I'm arguing against EV maximization - more just trying to point out that EV alone doesn't feel like it fully captures the value I care about (e.g. the likelihood of causing harm relative to doing nothing feels like another important thing). Specifically, the thought that there are plausible circumstances where I am more likely than not to cause additional harm, yet the action has positive EV, feels concerning. I imagine lots of AI risk work could be like this: doing some research project has a strong chance of advancing capabilities a bit (a high probability of a little negative value), but maybe a very small chance of massively reducing risk (a low probability of tons of positive value). The EV looks good, but my median outcome is a world that is worse than if I hadn't done anything.
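A toy version of that shape, with numbers I'm making up purely to show how the EV and the median can come apart:

```python
import statistics

# Toy version of the AI-risk example: nine chances in ten the project
# slightly advances capabilities (-1 utility), one chance in ten it
# massively reduces risk (+20 utility). The numbers are made up.
outcomes = [-1] * 9 + [20]

ev = sum(outcomes) / len(outcomes)      # (9 * -1 + 20) / 10 = 1.1
median = statistics.median(outcomes)    # -1.0

print(f"EV: {ev:+.1f}")                  # EV: +1.1  -> looks good
print(f"Median outcome: {median:+.1f}")  # Median outcome: -1.0 -> world is worse
```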
Expected value maximization hides a lot of important details.
I think a pretty underrated and forgotten part of Rethink Priorities' CURVE sequence is the risk aversion work. The defenses of EV against more risk-aware models often seem to boil down to EV's simplicity. But I think that EV actually just hides a lot of important detail, including, most importantly, that if you only care about EV maximization, you might be forced to conclude that worlds where you're more likely to cause harm than not are preferable.
As an example, imagine that you're considering a choice that can lead to 10 equally likely outcomes. In 6 of them, you'll create -1 utility. In 3 of them, your impact is neutral. In 1 of them, you'll create 7 utility. The EV of taking the action is (-6 + 0 + 7)/10 = 0.1. This is a positive number! Your expected value is positive, even though you have a 60% chance of causing harm. You're more likely than not to cause harm, yet in expectation you increase utility a bit. This is weird.
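To spell out the arithmetic (a quick sketch in Python, using the same numbers as above):

```python
# Ten equally likely outcomes: six of -1, three of 0, one of +7.
outcomes = [-1] * 6 + [0] * 3 + [7]

ev = sum(outcomes) / len(outcomes)                     # (-6 + 0 + 7) / 10 = 0.1
p_harm = sum(o < 0 for o in outcomes) / len(outcomes)  # 6 / 10 = 0.6

print(f"EV: {ev:+.2f}")                  # EV: +0.10
print(f"P(causing harm): {p_harm:.0%}")  # P(causing harm): 60%
```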
Scenario 1
More concretely, if I consider the following choices, which are equivalent from an EV perspective:
Option A. A 0% chance of causing a harmful outcome, but in expectation will cause +10 utility
Option B. A 20% chance of causing a harmful outcome, but in expectation will cause +10 utility
It seems really bizarre to not prefer Option A. But if I prefer Option A, I'm just accepting risk aversion to at least some extent. But what if the numbers shift a little?
Scenario 2
Option A. A 0% chance of causing a harmful outcome, but in expectation will cause +9.9999 utility
Option B. A 20% chance of causing a harmful outcome, but in expectation will cause +10 utility
Do I really want to take a 20% chance of causing harm in exchange for a 0.001% gain in expected utility?
Scenario 3
Option A. A 0% chance of causing a harmful outcome, but in expectation will cause +5 utility
Option B. A 99.99999% chance of causing a harmful outcome, but in expectation will cause +10 utility
Do I really want to be exceedingly likely to cause harm, in exchange for a 100% gain in expected utility?
I don't know the answers to the above scenarios, but I think just saying "the EV is X" without reference to the downside risk misses a massive part of the picture. It seems much better to say "the expected range of outcomes is a 20% chance of really bad stuff happening, a 70% chance of nothing happening, and a 10% chance of a really, really great outcome, which all averages out to something >0". This is meaningfully different from saying "no downside risk, and a 10% chance of a pretty good outcome, so >0 average".
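Here's a rough sketch of the kind of summary I mean, with two made-up distributions that happen to share the same EV (the `summarize` helper and the specific magnitudes are just my own shorthand for illustration):

```python
def summarize(dist):
    """dist is a list of (probability, utility) pairs whose probabilities sum to 1."""
    ev = sum(p * u for p, u in dist)
    p_harm = sum(p for p, u in dist if u < 0)
    return ev, p_harm

# Made-up magnitudes matching the shapes above:
# "20% really bad, 70% nothing, 10% really great" vs "no downside, 10% pretty good"
risky = [(0.2, -10), (0.7, 0), (0.1, 25)]
safe = [(0.9, 0), (0.1, 5)]

for name, dist in [("risky", risky), ("safe", safe)]:
    ev, p_harm = summarize(dist)
    print(f"{name}: EV = {ev:+.2f}, P(harm) = {p_harm:.0%}")

# risky: EV = +0.50, P(harm) = 20%
# safe: EV = +0.50, P(harm) = 0%
```

Both collapse to the same single EV number, which is exactly the detail I'm saying gets hidden.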
I think that risk aversion is pretty important, but even if it isn't incorporated into people's thinking at all, it really doesn't feel like EV produces a number I can take at face value, and that makes me feel like EV isn't actually that simple.
The place where I currently see this happening the most is naive expected value maximization in reasoning about animal welfare — I feel like I've seen an uptick in "I think there is a 52% chance these animals live net negative lives, so we should do major irreversible things to reduce their population". But it's pretty easy to imagine doing those things being harmful, or your efforts backfiring, etc. in ways that cause harm.
This isn't an answer to your question, but I think the underlying assumption is way too strong given available evidence.
Taking for granted that bad experiences outweigh good ones in the wild (something I'm also sympathetic to, but which definitely has not been clearly demonstrated), I think it's pretty much impossible to say whether climate change increases or decreases wild animal welfare.
I guess my overall view is that having any kind of reasonable opinion on the impact of climate change on insect or other animal populations in the long run, beyond extremely weak priors, is basically impossible right now, and most assumptions we can make will end up being wrong in various ways.
I also think it doesn't follow that if we think suffering in nature outweighs positive experience, we should try to minimize the number of animals. What if it is more cost-effective to improve the lives of those animals? Especially given that we are at best incredibly uncertain whether suffering outweighs positive experience, it seems clearly better to explore cost-effective ways to improve welfare rather than to reduce populations, as those interventions will be more robust regardless of whether negative or positive experiences dominate in the wild.
I think my view is that while I agree in principle it could be an issue, the voting has worked this way for long enough that I'd expect more evidence of entrenchment to exist. Instead, I still see controversial ideas change people's minds on the forum pretty regularly without being downvoted to oblivion, and I see low-quality or bad-faith posts/comments get negative karma, which I take as a sign of the system working well.
I think this is plausibly among the top two most promising immediate funding opportunities in the wild animal welfare space (besides general support for WAI, where I have giant conflicts of interest). CXL is really good at fundraising from non-EA donors, and if this works, which it seems to have a decent chance of doing, it effectively helps conservation dollars and for-profit investment flow into a promising WAW intervention. I'd be excited to chat in more detail with anyone considering funding it about why I think it is so promising.