Author: Leonard Dung
Abstract: Many researchers and intellectuals warn about extreme risks from artificial intelligence. However, these warnings typically came without systematic arguments in support. This paper provides an argument that AI will lead to the permanent disempowerment of humanity, e.g. human extinction, by 2100. It rests on four substantive premises which it motivates and defends: first, the speed of advances in AI capability, as well as the capability level current systems have already reached, suggest that it is practically possible to build AI systems capable of disempowering humanity by 2100. Second, due to incentives and coordination problems, if it is possible to build such AI, it will be built. Third, since it appears to be a hard technical problem to build AI which is aligned with the goals of its designers, and many actors might build powerful AI, misaligned powerful AI will be built. Fourth, because disempowering humanity is useful for a large range of misaligned goals, such AI will try to disempower humanity. If AI is capable of disempowering humanity and tries to disempower humanity by 2100, then humanity will be disempowered by 2100. This conclusion has immense moral and prudential significance.
My thoughts: I read through it rather quickly, so take what I say with a grain of salt. That said, it seemed persuasive and well-written. Additionally, the way they split up the argument was quite nice. I'm very happy to see an attempt to make this argument more philosophically rigorous, and I hope to see more work in this vein.
I think the view that AIs will compromise with humans rather than go to war with them makes sense under the perspective, shared by a large fraction (if not a majority) of social scientists, that war is usually costlier, riskier, and more wasteful than trade between rational parties who have adequate information and the option of communicating and negotiating successfully.
This is a general fact about war, and it has little to do with the values of the parties involved (cf. Fearon's "Rationalist Explanations for War"). Economic models of war do not generally predict war between parties that have different utility functions. On the contrary, a standard (simple) economic model of human behavior views humans as entirely misaligned with other agents in the world, in the sense of having completely non-overlapping utility functions with random strangers. This model has been generalized to firms, countries, alliances, etc., and yet it is rare for these generalized models to predict war as the default state of affairs.
Usually when I explain this idea to people, I am met with skepticism that we can generalize these social science models to AI. But I don't see why not: they are our most well-tested models of war. They are grounded in empirical facts and decades of observations, rather than evidence-free speculation (which I perceive as the primary alternative on offer in the AI risk literature). Most importantly, the assumptions of these models are robust both to differences in power between agents and to misalignment between agents, which are the two features people most often point to when arguing that the models break down when applied to AI. Yet this alleged distinction appears to reflect a misunderstanding of the modeling assumptions rather than any key difference between humans and AIs.
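To make these modeling assumptions concrete, here is a minimal sketch of the standard bargaining model of war (in the spirit of Fearon's "Rationalist Explanations for War"). The function name and the specific numbers are mine, purely for illustration:

```python
def bargaining_range(p_win_a: float, cost_a: float, cost_b: float):
    """Toy version of the bargaining model of war.

    Two parties, A and B, dispute a prize normalized to value 1 and have
    completely opposed preferences over how it is divided. If they fight,
    A wins with probability p_win_a, the loser gets nothing, and both
    sides pay strictly positive costs. Returns the interval of peaceful
    splits (expressed as A's share) that both sides weakly prefer to war.
    """
    # A's expected value of fighting is p_win_a - cost_a, so A accepts any
    # share at least that large. B's expected value of fighting is
    # (1 - p_win_a) - cost_b, so B accepts any split leaving A at most
    # p_win_a + cost_b.
    lower = max(0.0, p_win_a - cost_a)
    upper = min(1.0, p_win_a + cost_b)
    return lower, upper

# The interval is non-empty whenever cost_a + cost_b > 0, no matter how
# lopsided the power balance (p_win_a) is and no matter how opposed the
# parties' goals are.
print(bargaining_range(p_win_a=0.9, cost_a=0.05, cost_b=0.05))  # (0.85, 0.95)
```

The point of the sketch is that the existence of mutually preferred settlements depends on the costs of fighting, not on any overlap between the parties' utility functions.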
What's interesting to me is that many people generally have no problem generalizing these economic models to other circumstances. For example, we could ask:
- Would genetically engineered humans go to war with ordinary biological humans, or compromise with them?
- Would human emulations go to war with ordinary biological humans, or compromise with them?
In each case, I generally encounter AI risk proponents claiming that what distinguishes these scenarios from the case of AI is that here we can assume the genetically engineered humans and human emulations will be "aligned" with human values, which adequately explains why they will attempt to compromise rather than go to war with ordinary biological humans. But as I have already explained, standard economic models of war do not predict that war is constrained by alignment to human values; they predict that it is constrained by the costs of fighting and by the relative benefits of trade compared to war.
To the extent you think these economic models of war are simply incorrect, I think it is worth explicitly engaging with the established social science literature, rather than inventing a new model that makes unique predictions about what non-human AIs, which by definition do not share human values, would do.
It is true that GPT-4 "sometimes" fails to follow human instructions, but the same could be said about humans. I think it's worth acknowledging the weight of the empirical evidence here regardless.
In my opinion the empirical evidence generally seems way stronger than the theoretical arguments, which (so far) seem to have had little success predicting when and how alignment would be difficult. For example, many people believed that AGI would be achieved around the time AIs could hold natural conversations with humans (e.g. Eliezer Yudkowsky implied as much in his essay about a fire alarm[1]). If severe misspecification problems were supposed to arise at AGI level, then according to this prediction we should already be seeing them. And yet, I claim, we are not: we are merely having modestly difficult problems that can be patched with sufficient engineering effort.
It is true that problems of misspecification should become more difficult as AIs get smarter. However, it's important to recognize that as AI capabilities grow, so too will our tools and methods for tackling these alignment challenges. One key factor is that we will have increasingly intelligent AI systems that can assist us in the alignment process itself. To illustrate this point concretely, let's walk through a hypothetical scenario:
Suppose that aligning a human-level artificial general intelligence (AGI) merely requires a dedicated team of human alignment researchers. This seems plausible given that evaluating outputs is easier than generating novel outputs (see this article, which goes into more detail about this argument and why it's relevant). Once we succeed in aligning that human-level AGI, we can then leverage it to help us align the next iteration of AGI, one slightly more capable than human-level (call it AGI+). We would have a team of aligned human-level AGIs working on this challenge with us.
Then, when it comes to aligning the following iteration, AGI++ (which is even more intelligent), we can employ the AGI+ systems we previously aligned to work on this next challenge. And so on, with each successive generation of AI systems helping us to align the next, even more advanced generation.
It seems plausible that this cycle of AI systems assisting in the alignment of future, more capable systems could continue for a long time, allowing us to align AIs of ever-increasing intelligence without at any point needing mere humans to solve the problem of superintelligent alignment alone. If at some point the cycle becomes unsustainable, we can expect the highly intelligent AI advisors we have by then to warn us that the approach is reaching its limits, allowing us to recognize when we can no longer maintain reliable alignment.
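To make the structure of this bootstrapping scenario explicit, here is a toy simulation. The capability numbers, the "evaluation advantage" factor, and the accelerating jump schedule are all invented purely for illustration; nothing here is an empirical claim about real systems:

```python
def bootstrap(generations: int = 30, evaluation_advantage: float = 1.5) -> None:
    """Toy sketch of the iterated alignment scenario described above.

    Assumes evaluating outputs is easier than generating them, so an
    aligned evaluator can oversee a target somewhat more capable than
    itself (by the factor evaluation_advantage).
    """
    evaluator = 1.0  # start with (roughly human-level) human researchers
    target = 1.0     # the first AGI is roughly human-level
    for gen in range(generations):
        if target > evaluator * evaluation_advantage:
            # the point where our most capable aligned advisors would warn
            # us that the cycle is becoming unsustainable
            print(f"gen {gen}: capability jump too large; pause at {target:.2f}")
            break
        print(f"gen {gen}: aligned a system at capability {target:.2f}")
        evaluator = target              # the newly aligned system joins the team
        target *= 1.2 * (1.05 ** gen)   # assume capability jumps accelerate
    else:
        print("cycle sustained through every simulated generation")

bootstrap()
```

On these made-up numbers the loop sustains itself for several generations and then flags its own limit; whether anything like this holds for real systems is, of course, exactly what is in dispute.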
Full quote from Eliezer: "When they are very impressed by how smart their AI is relative to a human being in respects that still feel magical to them; as opposed to the parts they do know how to engineer, which no longer seem magical to them; aka the AI seeming pretty smart in interaction and conversation; aka the AI actually being an AGI already."