I previously asked this question in an AMA, but I'd like to pose it to the broader EA Forum community:
Do you believe that AGI poses a greater existential risk than other proposed x-risk hazards, such as engineered pandemics? Why or why not?
For background, it seems to me like the longtermist movement spends a lot of resources (including money and talent) on AI safety and biosecurity rather than on discovering or mitigating other potential x-risks such as disinformation and great power war. It also seems to be a widely held belief that transformative AI (TAI) poses a greater threat to humanity's future than all other existential hazards, and I am skeptical of this. At most, we have arguments that TAI poses a large x-risk, not arguments that it is larger than every other x-risk (although I appreciate Nate's argument that it does outweigh engineered pandemics).
To clarify, is the main source of your skepticism that you don't think TAI has a particularly high chance of leading to an existential catastrophe, or are you also not sure we'll get TAI soon enough to matter?
Also, I think your post is asking for arguments which directly compare risks. Is there a reason you'd find this particularly compelling? If I tell you a coin has 2 sides, and Alice tells you a die has 6, it feels like you have enough information to work out that getting tails is more likely than rolling a 1, even if I've never met Alice.
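To make the arithmetic behind that analogy explicit (assuming a fair coin and a fair die, as the analogy implies): P(tails) = 1/2 and P(rolling a 1) = 1/6, and 1/2 > 1/6, so the comparison follows from the two independent estimates even though neither of us ever compared the coin and the die directly. The analogous move for x-risks is comparing separately derived absolute risk estimates, rather than requiring a head-to-head argument.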
The Precipice is probably the best place to start if you do want direct comparisons, though.
My main source of skepticism is that I am not sure whether we'll get to TAI this century. While some organizations (OpenAI, DeepMind) are currently dedicated to building AGI, it could be that comprehensive AI services obviate the economic incentive to develop AGI rather than a collection of narrow AIs, especially given that AGI poses known risks that narrow AIs don't.
Yes, that is a...