I want to know how people estimate the probability of AI takeoff leading to human extinction, and what details (such as humans' attitudes toward AI safety, how an AI would gain physical access to the world, how good an AI would be at tricking humans...) people consider when making the prediction. But I can only find estimation "results" on the EA Forum (mostly 2-10% this century); I don't know how the estimates are made. Do people use complex math models to calculate them? I know we should take such predictions with a pinch of salt, but I just want to know what people consider the important factors in AI risk.
I think most of these are subjective probability estimates. There are usually no complicated math models behind them.
Some people do build models, but the inputs to those models are still subjective probability estimates. The math is typically not that complicated, often just multiplying different probabilities together (which is imo not a good class of models for this kind of problem).
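For concreteness, here is a minimal sketch of what that class of model typically looks like (roughly the shape of e.g. Carlsmith's power-seeking-AI report). The stage names and all numbers below are made-up placeholders for illustration, not anyone's actual estimates:

```python
# Sketch of a "multiply conditional probabilities" style model.
# Every stage name and number here is an illustrative placeholder,
# not a real estimate from any published report.

stages = {
    "AGI is built this century": 0.8,
    "it is agentic and power-seeking, given AGI": 0.4,
    "alignment efforts fail, given that": 0.3,
    "it disempowers humanity, given misalignment": 0.5,
    "disempowerment leads to extinction": 0.5,
}

p_doom = 1.0
for stage, p in stages.items():
    p_doom *= p  # each stage is conditional on all previous ones
    print(f"P after '{stage}': {p_doom:.3f}")

print(f"Implied P(extinction): {p_doom:.1%}")  # 2.4% with these placeholders
```

Note that with placeholder inputs like these, the product lands in the single-digit-percent range, which is part of why chained-multiplication models tend to produce numbers like the 2-10% figures you've seen.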
My guess is that even some of the people who build models report a different probability of human extinction than the one the model spits out, because they realize their models have flaws and try to correct for that.
When I say "motivated to", I don't mean that it would be its primary motivation. I mean that it would have motivations that, at some point, lead to "perform actions that would kill all of humanity" as a sub-goal. And in order to get to the point where we were dodos to it, it would have to disempower humanity somehow.