I previously asked this question in an AMA, but I'd like to pose it to the broader EA Forum community:
Do you believe that AGI poses a greater existential risk than other proposed x-risk hazards, such as engineered pandemics? Why or why not?
For background, it seems to me that the longtermist movement spends a great deal of its resources (money and talent) on AI safety and biosecurity, rather than on discovering or mitigating other potential x-risks such as disinformation or great power war. It also seems to be a widely held belief that transformative AI poses a greater threat to humanity's future than all other existential hazards, and I am skeptical of this. At most, we have arguments that TAI poses a large x-risk, not arguments that it is larger than every other x-risk (although I appreciate Nate's argument that it does outweigh engineered pandemics).
It's important to distinguish existential risk (x-risk) from global catastrophic risk (GCR). Nuclear war and extreme climate change, for example, are much more likely to leave survivors, so they are mostly GCRs rather than x-risks. The same goes for engineered pandemics: they seem more likely to be survivable by some fraction of humanity, given their relatively slow speed of spread and the possibility of countermeasures (you are only up against human-level intelligence), compared to an unaligned AGI (you are up against a superintelligence that could wipe out the human race in minutes).