I previously asked this question in an AMA, but I'd like to pose it to the broader EA Forum community:
Do you believe that AGI poses a greater existential risk than other proposed x-risk hazards, such as engineered pandemics? Why or why not?
For background, it seems to me that the longtermist movement spends a lot of resources (money and talent) on AI safety and biosecurity, as opposed to working to discover or mitigate other potential x-risks such as disinformation and great power war. It also seems to be a widely held belief that transformative AI poses a greater threat to humanity's future than all other existential hazards, and I am skeptical of this. At most, we have arguments that TAI poses a large x-risk, not arguments that it is larger than every other x-risk (though I appreciate Nate's argument that it does outweigh engineered pandemics).
This is an interesting point, thanks! I tend not to distinguish between "hazards" and "risk factors", because the distinction is just whether they directly or indirectly cause an existential catastrophe, and many hazards are both. For example, an engineered pandemic could directly cause an existential catastrophe, or it could act as a risk factor by destabilizing civilization and raising the probability of other catastrophes.
Mathematically, you can express the probability of an existential catastrophe given a risk factor as a sum over the "direct" hazards it elevates: for each hazard, multiply the probability that the hazard arises given the risk factor by the probability that it then causes a catastrophe:
Pr(extinction ∣ great power war)
  = Pr(engineered pandemic ∣ great power war) · Pr(extinction ∣ great power war, engineered pandemic)
  + Pr(transformative AI ∣ great power war) · Pr(extinction ∣ great power war, transformative AI)
  + …
You can do the same thing with direct risks. All that matters for prioritization is the overall probability of catastrophe given some combination of risk factors.
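To make the bookkeeping concrete, here is a minimal sketch in Python that applies the decomposition above. The probabilities are placeholder numbers chosen purely for illustration, not estimates of any actual risk.

```python
# A sketch of the decomposition above. All numbers are placeholders chosen
# for illustration only; they are not estimates of any actual risk.

# For each "direct" hazard: Pr(hazard | great power war) and
# Pr(extinction | great power war, hazard).
hazards = {
    "engineered pandemic": {"p_given_war": 0.05, "p_extinction_given_both": 0.02},
    "transformative AI":   {"p_given_war": 0.10, "p_extinction_given_both": 0.05},
}

# Sum over the hazard channels:
# Pr(extinction | war) = sum_h Pr(h | war) * Pr(extinction | war, h)
p_extinction_given_war = sum(
    h["p_given_war"] * h["p_extinction_given_both"] for h in hazards.values()
)

print(f"Pr(extinction | great power war) ≈ {p_extinction_given_war:.4f}")
```

The same loop works in the other direction: condition on a direct hazard and sum over the risk factors that elevate it. Either way, what matters for prioritization is the total probability that comes out, not which side of the hazard/risk-factor distinction each term sits on.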