Example: "[wishy-washy argument that AI isn't risky], therefore we shouldn't work on AI safety." How confident are you about that? From your perspective, there's a non-trivial possibility that you're wrong. And I don't even mean 1%, I mean like 30%. Almost everyone working on AI safety think it has less than a 50% chance of killing everyone, but it's still a good expected value to work on it.
Example: "Shrimp are not moral patients so we shouldn't try to help them." Again, how confident are you about that? There's no way you can be confident enough for this argument to change your prioritization. The margin of error on the cost-effectiveness of some intervention is way higher than the difference in subjective probability on "shrimp are sentient" between someone who does, and someone who does not, care about shrimp welfare.
EAs are better at avoiding this fallacy than pretty much any other group, but still broadly bad at it.
I would like to have more examples of this phenomenon. I'm pretty sure it happens in more than just these two cases, but I couldn't think of any others. I can recall examples of EAs making this style of argument about particular AI safety plans, although those usually involve concerns about poisoning the well, in which case it's correct to reject low-probability plans. (Ex: "Advocate for regulations to slow AI" risks poisoning the well if that position is not politically palatable.) I'm pretty sure I've seen examples that don't have this concern, but I can't remember any.
Thanks, Michael. I agree AI risk should not be dismissed without looking into how large it is. On the other hand, there is no obvious relationship between existential risk and the cost-effectiveness of decreasing it. Cost-effectiveness decreases as the risk increases, because higher risk decreases the expected value of the future, unless the risk is concentrated in a time of perils. In addition, a higher risk of human extinction does not necessarily imply higher existential risk, because some AI systems may well be sentient.
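A minimal sketch of what I mean, assuming a constant per-period extinction risk r (the names, periods, and numbers below are all illustrative; a "time of perils", where risk is concentrated early and low afterwards, is exactly the case where this assumption breaks down):

```python
# Minimal sketch assuming a constant per-period extinction risk r.
# Expected number of surviving future periods is sum_{t>=1} (1 - r)^t,
# which is roughly 1/r, so the value of the future, and hence the value of
# averting one period's worth of risk, shrinks as r grows.

def expected_value_of_future(r, value_per_period=1.0):
    """Expected total future value if extinction risk per period is a constant r."""
    return value_per_period / r   # geometric series, roughly 1/r

def value_of_one_period_risk_reduction(r, delta, value_per_period=1.0):
    """Value of reducing this period's risk from r to r - delta."""
    return delta * expected_value_of_future(r, value_per_period)

for r in (0.001, 0.01, 0.1):
    # Higher baseline risk -> lower value from the same absolute risk reduction.
    print(r, value_of_one_period_risk_reduction(r, delta=0.0001))
```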