This is a linkpost for Imitation Learning is Probably Existentially Safe by Michael Cohen and Marcus Hutter.
Abstract
Concerns about extinction risk from AI vary among experts in the field. But AI encompasses a very broad category of algorithms. Perhaps some algorithms would pose an extinction risk, and others wouldn’t. Such an observation might be of great interest to both regulators and innovators. This paper argues that advanced imitation learners would likely not cause human extinction. We first present a simple argument to that effect, and then we rebut six different arguments that have been made to the contrary. A common theme of most of these arguments is a story for how a subroutine within an advanced imitation learner could hijack the imitation learner’s behavior toward its own ends. But we argue that each argument is flawed and each story implausible.
1 Introduction
While many theorists have come to share the view that sufficiently advanced AI systems might pose a threat to the continued existence of humanity [Hinton et al., 2023, Cohen et al., 2022, Russell, 2019, Bostrom, 2014], it is important, if we are to make progress in thinking about this issue, to be clear about which types of AI pose the genuine threats. That way we can focus on where the danger actually lies. This paper aims to refute claims that imitation learning algorithms present such a threat. While we do think there are types of AI we should be worried about, that does not extend to all types of AI. So in what follows, we will examine arguments that have been put forward that imitation learners present an extinction risk to humanity, and explain why we think they go wrong.
First, we’ll offer a simple argument that a sufficiently advanced supervised learning algorithm, trained to imitate humans, would very likely not gain total control over humanity (to the point of making everyone defenseless) and then cause or allow human extinction from that position.
No human has ever gained total control over humanity. It would be a very basic mistake to think anyone ever has. Moreover, if they did so, very few humans would accept human extinction. An imitation learner that successfully gained total control over humanity and then allowed human extinction would, on both counts, be an extremely poor imitation of any human, and easily distinguishable from one, whereas an advanced imitation learner will likely imitate humans well.
This basic observation should establish that any conclusion to the contrary should be very surprising, and so a high degree of rigor should be expected from arguments to that effect. If a highly advanced supervised learning algorithm is directed to the task of imitating a human, then powerful forces of optimization are seeking a target that is fundamentally existentially safe: indistinguishability from humans. Stories about how such optimization might fail should be extremely careful in establishing the plausibility of every step.
In this paper, we’ll rebut six different arguments we’ve encountered that a sufficiently advanced supervised learning algorithm, trained to imitate humans, would likely cause human extinction. These arguments originate from Yudkowsky [2008] (the Attention Director Argument), Christiano [2016] (the Cartesian Demon Argument), Krueger [2019] (the Simplicity of Optimality Argument), Branwen [2022] (the Character Destiny Argument), Yudkowsky [2023] (the Rational Subroutine Argument), and Hubinger et al. [2019] (the Deceptive Alignment Argument). Note: Christiano only thinks his argument is possibly correct, rather than likely correct, for the advanced AI systems that we will end up creating. And Branwen does not think his hypothetical is likely, only plausible enough to discuss. But maybe some of the hundreds of upvoters on the community blog LessWrong consider it likely.
In all cases, we have rewritten the arguments originating from those sources (some of which are spread over many pages with gaps that need to be filled in). For Christiano [2016] and Hubinger et al. [2019], our rewritten versions of their arguments are shorter, but the longer originals are no stronger at the locations that we contest. And for the other four sources, the original text is no more thorough than our characterization of their argument. None of the arguments have been peer reviewed, and to our knowledge, only Hubinger et al. [2019] was reviewed even informally prior to publication. However, we can assure the reader they are taken seriously in many circles.
8 Conclusion
The existential risk from imitation learners, which we have argued is small, stands in stark contrast to the existential risk arising from reinforcement learning agents and similar artificial agents planning over the long term, which are trained to be as competent as possible, not as human-like as possible. Cohen et al. [2022] identify plausible conditions under which running a sufficiently competent long-term planning agent would make human extinction a likely outcome. Regulators interested in designing targeted regulation should note that imitation learners may safely be treated differently from long-term planning agents. It will be necessary to restrict proliferation of the latter, and such an effort must not become stalled by bundling it with overly burdensome restrictions on safer algorithms.
Thanks for the comment, Geoffrey! I strongly upvoted it because I think it points to a discussion which is important to have.
I think such individuals or groups will not be the ones training the most powerful models. Gemini cost 630 M$, and the development cost of the leading models is expected to continue to increase. I appreciate that the cost of a model of a given capability will decrease over time due to improvements in hardware and software. However, by the time terrorist individuals or groups have the resources to train a model as capable as e.g. Gemini, the leading models will be much more powerful. As long as the leading models are imitating most humans (as they seem to be now), who are not in favour of unilaterally causing human extinction, I think this outcome would remain extremely unlikely.
In my mind, there is still a big difference between calling for human extinction and being willing to unilaterally cause human extinction. To illustrate, the vast majority of people arguing for a smaller population would not be willing to kill people even if there were no consequences to themselves.
6 billion deaths would be terrible, but still quite far from human extinction. The global population reached 2 billion in 1927, i.e. only 97 years ago.
Moreover, I assume religious extremists want to increase the long-term number of Muslims, and killing the 6 billion people who are not Muslim seems to be a very suboptimal strategy for doing that. If Jihadist terrorists could have an AI model capable of doing this, which is much harder than just killing 6 billion random people, they could also use the super model to convert people who are not Muslim, or help them achieve greater influence in the world via other means (e.g. coming up with new technological inventions, and sustainably increasing their offspring).
In addition, I wonder whether there would still be Jihadist terrorists if they had the ability to become much richer with their own model. I suspect a key reason they are willing to sacrifice themselves is that their current lives are not great, whereas a model capable of causing human extinction could much more easily be used to increase their wealth and quality of life.
I believe the points I mentioned above apply to these groups too.
I guess you are assuming the amount of resources needed to cause human extinction will dramatically go down with advanced AI, and therefore worry that more and more individuals and groups will have the ability to cause human extinction. However, I do not think the absolute amount of resources controlled by terrorist groups is the key metric. I would say what matters is the offense-defense balance, such that the risk of human extinction depends on the fraction of global resources controlled by terrorist groups. Historical trends suggest people with Bay Area values will control an increasingly large fraction of global resources, and terrorist groups an increasingly small fraction, which makes it more difficult for terrorist groups to cause human extinction. Historical terrorist attack deaths also seem to suggest an astronomically low probability of a terrorist attack causing human extinction.