
So I googled and didn't find this discussed in quite these terms. Suppose we put a few astronauts on an asteroid, with no regular contact with Earth. The moment they detect that the lights on Earth have blinked out, they send a projectile of gray goo at Earth to stop the AGI from completing whatever its assigned end goal is. If MAD has kept us humans in check, surely it would work on more rational agents?
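
To make the trigger concrete, here is a minimal sketch of the dead-man's-switch loop I have in mind. The sensor and the launch step are hypothetical placeholders, simulated here so the sketch actually runs end to end:

```python
# Minimal sketch of the asteroid station's dead-man's-switch loop.
# earth_lights_visible() and launch_gray_goo() are hypothetical placeholders,
# simulated below so the example runs.

MISSED_LIMIT = 3  # consecutive dark observations required before firing,
                  # so a single cloudy reading or sensor glitch doesn't end the world

observations = iter([True, True, False, True, False, False, False])  # simulated sensor feed

def earth_lights_visible() -> bool:
    """Hypothetical sensor: True while Earth's city lights are detectable."""
    return next(observations, False)  # once the feed ends, Earth stays dark

def launch_gray_goo() -> None:
    """Hypothetical retaliation: irreversible by design, no recall from Earth."""
    print("Lights out confirmed -- launching projectile.")

def dead_mans_switch() -> None:
    missed = 0
    while True:
        if earth_lights_visible():
            missed = 0      # signal present: reset the counter
        else:
            missed += 1     # signal absent: count toward the threshold
            if missed >= MISSED_LIMIT:
                launch_gray_goo()
                return

dead_mans_switch()
```

Requiring several consecutive misses trades false alarms against reaction time, and note that the loop only watches a proxy (visible lights) rather than the thing we actually care about, which matters for the second answer below.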

2 Answers

An entity powerful enough to threaten all humans on Earth would most likely also be powerful enough to defeat the MAD mechanism.

Suppose we have 10 different MAD mechanisms. Wouldn't defending against all of them become impossible at some point? Wouldn't the AI at some point think, "It is simpler to just complete the task without killing anybody"?

EDIT: Okay, if the task given to the AI is "calculate the 2^256th digit of pi", then perhaps it would rather take on what it judges the easier task: "before the humans inevitably interrupt my pi calculation, defeat the 50 MAD threats and kill all people, then proceed with the hopeless computation."
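
A back-of-the-envelope version of this, under the strong assumptions that the mechanisms are independent and that the AI defeats each one with the same probability:

```python
# Toy model: the AI defeats each MAD mechanism independently with
# probability p_each, so the chance of defeating all n is p_each ** n.
def p_defeat_all(p_each: float, n: int) -> float:
    return p_each ** n

for n in (1, 5, 10, 50):
    print(f"n={n:2d}  p(defeat all) = {p_defeat_all(0.9, n):.4f}")
# n= 1  p(defeat all) = 0.9000
# n= 5  p(defeat all) = 0.5905
# n=10  p(defeat all) = 0.3487
# n=50  p(defeat all) = 0.0052
```

Even a 90% per-mechanism success rate leaves about a 0.5% chance of surviving 50 of them, which is the intuition behind "just complete the task without killing anybody". The caveat is the independence assumption: a single exploit that disables every mechanism at once collapses p_each ** n back toward p_each.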

lincolnq
Perhaps, but now you're substituting one risk for another: the risk that the MAD mechanisms trigger by accident, causing unintended destruction of the Earth.
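
To put rough numbers on that tradeoff, with the same toy independence assumption and a hypothetical false-trigger rate of 1% per mechanism per year:

```python
# Flip side of stacking mechanisms: if each one false-triggers independently
# with probability q_each per year, the yearly chance that at least one fires
# by accident is 1 - (1 - q_each) ** n, which grows with n.
def p_accident(q_each: float, n: int) -> float:
    return 1 - (1 - q_each) ** n

for n in (1, 10, 50):
    print(f"n={n:2d}  p(accident per year) = {p_accident(0.01, n):.3f}")
# n= 1  p(accident per year) = 0.010
# n=10  p(accident per year) = 0.096
# n=50  p(accident per year) = 0.395
```

The same stacking that drives the AI's survival odds toward zero also drives the yearly accident probability toward certainty: the 50-mechanism setup above would have roughly a 40% chance per year of gray-gooing Earth for no reason.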
Leopard
Yeah, thanks for replying. I now realize it's more complex than that.

To give you an overly specific answer: presumably the AGI could realize the trigger exists and just keep the lights on while surrounding them with paperclips?
