I want to know how people estimate the probability of AI takeoff causing human extinction, and what details they consider when predicting (such as: humans' attitudes toward AI safety, how an AI might gain physical access to the world, how good an AI would be at tricking humans...). I can only find estimation "results" on the EA Forum (mostly 2-10% this century), but I don't know how those estimates are produced. Do you use complex math models to calculate them? I know we should take such predictions with a pinch of salt, but I just want to know what people consider the important factors in AI risk.
I think 1 is >95% likely. We're in an arms-race dynamic for at least some of the components of AGI. This is conditional on our not having been otherwise wiped out first (by war, pandemic, asteroid, etc.).
I think 2 and 3 are the wrong way to think about the question. Was humankind "motivated to conquer" the dodo? Or did we just have a better use for its habitat, and its extinction was a whoopsie along the way?
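To answer the "complex math models" part of the question directly: usually there's no complex model at all. The common approach (e.g. Joe Carlsmith's "Is Power-Seeking AI an Existential Risk?" report) is a Fermi-style decomposition: break the path to extinction into conditional steps, assign a credence or credence range to each, and multiply through. Here's a minimal sketch of that; the step names and probability ranges below are made up for illustration, not anyone's published estimates:

```python
import random

# Hypothetical conditional steps in a Carlsmith-style decomposition.
# The (low, high) credence ranges are illustrative placeholders --
# plug in your own numbers for each step.
STEPS = {
    "AGI is built this century":           (0.60, 0.95),
    "It is agentic and power-seeking":     (0.20, 0.60),
    "It escapes human control":            (0.10, 0.50),
    "Loss of control leads to extinction": (0.20, 0.80),
}

def sample_p_doom() -> float:
    """Draw one scenario: sample each conditional step, multiply through."""
    p = 1.0
    for low, high in STEPS.values():
        p *= random.uniform(low, high)
    return p

# Monte Carlo over the uncertainty in each step.
N = 100_000
draws = sorted(sample_p_doom() for _ in range(N))
mean = sum(draws) / N
print(f"mean P(doom): {mean:.3f}")
print(f"10th-90th percentile: {draws[N // 10]:.3f} - {draws[9 * N // 10]:.3f}")
```

Multiplying several uncertain conditionals like this is also why published numbers spread so widely: modest disagreements at each step compound into order-of-magnitude differences in the final product.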