I previously asked this question in an AMA, but I'd like to pose it to the broader EA Forum community:
Do you believe that AGI poses a greater existential risk than other proposed x-risk hazards, such as engineered pandemics? Why or why not?
For background, it seems to me like the longtermist movement spends lots of resources (including money and talent) on AI safety and biosecurity as opposed to working to either discover or mitigate other potential x-risks such as disinformation and great power war. It also seems to be a widely held belief that transformative AI poses a greater threat to humanity's future than all other existential hazards, and I am skeptical of this. At most, we have arguments that TAI poses a big x-risk, but not arguments that it is bigger than every other x-risk (although I appreciate Nate's argument that it does outweigh engineered pandemics).
Great power conflict is generally considered an existential risk factor rather than an existential risk per se – it increases the chance of existential risks such as bioengineered pandemics, nuclear war, misaligned transformative AI, or the lock-in of bad values (Modelling great power conflict as an existential risk factor, The Precipice chapter 7).
I can define a new existential risk factor that could be as great as all existential risks combined – for example, the fact that our society and the general populace do not sufficiently prioritize existential risks. So no, I don't think TAI is greater than all possible existential risk factors. But I think addressing this "risk" would involve thinking a lot about its impact as mediated through more direct existential risks like TAI, and if TAI is the main one, then it would be a primary focus.
This passage from The Precipice may be helpful:
This is an interesting point, thanks! I tend not to distinguish between "hazards" and "risk factors" because the distinction between them is whether they directly or indirectly cause an existential catastrophe, and many hazards are both. For example:
- An engineered pandemic could wipe out humanity either directly or indirectly by causing famine, war, etc.
- Misaligned AI is usually thought of as a direct x-risk, but it can also be thought of as a risk factor, because it could use its knowledge of other hazards to drive humanity extinct as efficiently as possible.