Author: Leonard Dung
Abstract: Many researchers and intellectuals warn about extreme risks from artificial intelligence. However, these warnings typically come without systematic arguments in support. This paper provides an argument that AI will lead to the permanent disempowerment of humanity, e.g. human extinction, by 2100. It rests on four substantive premises which it motivates and defends: first, the speed of advances in AI capability, as well as the capability level current systems have already reached, suggests that it is practically possible to build AI systems capable of disempowering humanity by 2100. Second, due to incentives and coordination problems, if it is possible to build such AI, it will be built. Third, since it appears to be a hard technical problem to build AI which is aligned with the goals of its designers, and many actors might build powerful AI, misaligned powerful AI will be built. Fourth, because disempowering humanity is useful for a large range of misaligned goals, such AI will try to disempower humanity. If AI is capable of disempowering humanity and tries to disempower humanity by 2100, then humanity will be disempowered by 2100. This conclusion has immense moral and prudential significance.
My thoughts: I read through it rather quickly so take what I say with a grain of salt. That said, it seemed persuasive and well-written. Additionally, the way that they split up the argument was quite nice. I'm very happy to see an attempt to make this argument more philosophically rigorous and I hope to see more work in this vein.
For what it's worth, I'd loosely summarize my position on this issue as follows: I mainly think of AI as a general vehicle for accelerating technological and economic growth, along with things downstream of technology and growth, such as cultural change. And I'm skeptical we could ever fully "solve alignment" in the ambitious sense you seem to be imagining.
In this frame, it could be good to slow down AI if your goal is to delay large changes to the world. There are plausible scenarios in which this could make sense. Perhaps most significantly, one could be a cultural conservative who thinks cultural change is generally bad in expectation, and thus that faster change is bad even if it brings higher aggregate prosperity sooner (though I'm not claiming this is your position).
By contrast, I think cultural change can be bad, but I don't see much reason to delay it if it's inevitable. And the case against delaying AI seems even stronger here if you care about preserving (something like) the lives and values of people who currently exist, since AI offers the best chance of extending our lifespans and of "putting us in the driver's seat" more generally by allowing us to actually be there during AGI development.
If future humans were in the driver's seat instead, but with slightly more control over the process, I wouldn't necessarily see that as significantly better in expectation than my favored alternative, including over the very long run (according to my values).
(And as a side note, I also care about influencing human values, or what you might term "human safety", but I generally see this as orthogonal to this specific discussion.)