When discussing “slow takeoff” scenarios, people often talk as if only one AI exists in the world. The argument is often that even if an AI starts off incapable of taking over the world, it can simply bide its time until it becomes more powerful.

In this article, I pointed out that this race is a multiplayer game. If an AI waits too long, another, more powerful AI might come along at any time. If these AIs have different goals, and are both fanatical maximisers, they are enemies of each other. (You can’t tile the universe with both paperclips and staplers.)

I explore some of the dynamics that might come out of this (using some simple models), with the main takeaway that this would likely result in at least some chance of premature rebellion by desperate AIs that know they will soon be outpaced, thus tipping off humanity early. These warning shots then make life much harder for all the other AIs that are still plotting.
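
A minimal sketch of the kind of toy model this gestures at (the growth rates, the pre-emption probability, and the decision rule below are my own illustrative assumptions, not the article’s actual model):

```python
import random

# Toy illustration: several misaligned AIs grow in capability at different
# rates. Each period, an AI compares the success probability of rebelling
# now against rebelling one period later, where "later" is discounted by
# the chance that a faster-growing rival pre-empts it. Slow growers end up
# rebelling early at low odds, producing warning shots.

random.seed(0)

def p_success(capability):
    # Hypothetical mapping: success probability saturates as capability grows.
    return capability / (capability + 10.0)

NUM_AIS, PERIODS = 5, 15
ais = [{"id": i,
        "cap": random.uniform(1.0, 3.0),
        "growth": random.uniform(1.1, 1.6),
        "rebelled": False} for i in range(NUM_AIS)]

for t in range(PERIODS):
    active = [a for a in ais if not a["rebelled"]]
    if not active:
        break
    fastest = max(a["growth"] for a in active)
    for a in active:
        # Assumed 30% chance per period that a faster rival pre-empts this AI.
        preempt = 0.3 if a["growth"] < fastest else 0.0
        value_now = p_success(a["cap"])
        value_wait = (1 - preempt) * p_success(a["cap"] * a["growth"])
        if value_now >= value_wait:
            a["rebelled"] = True
            print(f"t={t}: AI {a['id']} rebels with success prob "
                  f"{value_now:.2f} -- likely a failed attempt / warning shot")
        else:
            a["cap"] *= a["growth"]
```

In this sketch the frontrunner keeps waiting (its odds only improve), while every AI that expects to be outpaced strikes early at poor odds, which is the warning-shot dynamic described above.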

Does this actually mean anything? Suppose the weak but non-aligned AI thinks it has a 10% chance of taking over the world if it tries, thinks that new, more powerful AIs will soon come online and prevent it from doing so, and consequently reasons that it ought to attempt takeover immediately rather than wait for those more powerful AIs to come online and stop it. Then there are two possibilities: either these new AIs will be non-aligned or aligned. (A rough expected-value version of this reasoning is sketched after the two cases below.)

1. In the first case, it would mean that the (very smart) AI thinks there is a really high chance (>90%?) that non-aligned AIs will take over the world any time now. In this case we are doomed, and getting an early warning shot shouldn't matter unless we act extremely quickly.
2. In the second case, the AI thinks there is a high chance that we'll very soon get aligned superhuman AIs. In this case, everything will be fine: most likely we'd already have the technology to prevent the weak non-aligned AI from doing anything, or from even existing in the first place.
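
To make that concrete, here is a rough expected-value version of the argument. All the numbers are illustrative assumptions: p_now is the weak AI's chance of success if it acts immediately, p_later its chance if it waits and is not pre-empted, and p_preempt its credence that more powerful AIs arrive first and shut it down.

```python
def prefers_acting_now(p_now, p_later, p_preempt):
    # Waiting only pays off if no pre-emption happens first.
    return p_now > (1 - p_preempt) * p_later

p_now, p_later = 0.10, 0.50  # assumed: waiting would raise its odds to 50%
for p_preempt in (0.5, 0.8, 0.9, 0.95):
    print(p_preempt, prefers_acting_now(p_now, p_later, p_preempt))
# False, False, True, True: with these numbers the AI only strikes early once
# it is more than 80% sure it will be pre-empted, i.e. it already believes
# powerful successors (aligned or not) are imminent -- which is exactly the
# dichotomy in cases 1 and 2 above.
```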

It seems like this argument shouldn't make us feel any more or less concerned. I guess it depends on specifics, like whether the AI thinks the regulation we impose after seeing other AIs unsuccessfully try to take over the world will make it harder for itself to take over, or whether that regulation, for example, only affects new models and not itself (as it has presumably already been trained and deployed). Overall though, it should maybe make you slightly less concerned if you are a super doomer, and slightly more concerned if you are a super AI bloomer.