The next existential catastrophe is likelier than not to wipe off all animal sentience from the planet

I had already made a question post about this during the last thematic week. I suppose my main motivation comes from being surprised that there's not just an absence of consensus on this, but that the question even seems sidelined in X-risk discussion (not that no one has ever given an answer to it, of course). It's a question I try to ask in 1:1 conversations with people involved in reducing existential risks, but the answers I get vary widely from person to person, and I still don't have any idea of where "the community" tends to stand. Since it seems much "easier" for an existential catastrophe in general to happen than for all animal sentience to be wiped out even temporarily, I expect at least a slight majority of votes to land on the "disagree" side. However, from my limited experience, I've had the impression that individuals with P(ASI Doom) > 50% over the next century tend to believe that an existential catastrophe (here, ASI) would indeed wipe out all animal life (and perhaps even all biological life).

Some notes: I mean wiping it out at the time of the catastrophe, independently of whether sentience could evolve again on Earth in the future. And digital sentience is not a consideration here, though I think it matters a lot.

 


Comments
tylermjohn
70% disagree ➔ 30% agree

Edit: OK almost done being nerdsniped by this, I think it basically comes down to:

What's the probability that the existential catastrophe comes from a powerful optimizer or something that turns into a powerful optimizer, which is arbitrarily close to paperclipping?

Maybe something survives a paperclipper. It wants to turn all energy into data centers, but it's at least conceivable that something survives this. The optimizer might, say, disassemble Mercury and Venus to turn them into a Matryoshka brain but not need further such materials from Earth. Earth might still get some heat emanating from the sun despite all of the solar panels nested around it, and be at the right temperature to turn the whole thing into data centers. But not all materials can be turned into data centers, so maybe some of the ocean is left in place. Maybe the Earth's atmosphere is intentionally cooled for faster data centers, but there's still geothermal heat for some bizarre animals.

But probably not. As @Davidmanheim (who changed my mind on this) points out, you'll probably still want to disassemble the Earth to mine out all of the key resources for computing, whether for the Matryoshka brain or the Jupiter brain, and the most efficient way to do this probably isn't cautious precision mining.

Absent a powerful optimizer, you'd expect some animals to survive. There are a lot of fish, some of them very deep in the ocean, and ocean life seems wildly adaptive, particularly down at the bottom, where creatures do crazy stuff like feeding off volcanic heat vents, incorporating iron into their bodies, and withstanding pressures that crush submarines.

So by far the biggest parameter is going to be how much you expect the world to end from a powerful optimizer. This is the biggest threat in the near term, though if we don't build ASI, or build it safely, other existential threats loom larger.

I think a key consideration here is whether AI disempowerment of humans, where humans are at least as well off as now, counts as an X-risk (and, as an aside, toward P(doom)). Since it would be a destruction of humanity's long-term potential, I think Bostrom would say that disempowerment is an X-risk, but Ord may not.

This is a good point that I hadn't thought of when I wrote my poll answer—a "gradual disempowerment" risk scenario would probably not kill all sentient animals, and it represents a non-trivial percentage of AI risk.

I realize I didn't choose a clear position on this in my description, and I'm actually not sure. I'd call a complete, seemingly irreversible collapse of civilization an X-risk even if it's not full-on extinction, even with humans remaining on Earth (which could be the outcome of a nuclear war, for example). But when it comes to lock-in and disempowerment, since humans (and presumably other animals) remain numerous and alive, it doesn't feel like it should be part of the same question. I'd say my question is about X-risks involving death and destruction (or even mass sterilization), rather than a change in who controls the outcome.

MichaelDickens
60% ➔ 40% agree


If an existential catastrophe occurs, it will probably (~90%) be AI, and an AI that kills all humans would probably also (~80%) kill all sentient animals.

The argument against killing all animals is that they are less likely than humans to interfere with the AI's goals. The argument in favor is that they are made of atoms.

Edit: Updated downward a bit based on Denkenberger's comment.

GavinRuneblade
30% disagree


Not even the Great Dying got everything, so for the known natural risks (defined as coming from nature, not technology) such as asteroid impact, climate change, etc., I don't give a greater than 50% weight to them "wiping off all animal sentience". Nuclear weapons: we don't have enough to saturate the planet to the needed level, so they are also below 50%. That only leaves AI, and while I put a higher than 50% chance on AI taking out all of humanity, I suspect rather a lot of intelligent animals will get through it.

At the risk of being the chimp that thinks it is safe from humans up in the tree because it's not smart enough to understand guns and helicopters, it seems that even for a paperclip optimizer, at nearly every point in time it will be more efficient/optimal to go out into space and get resources to make more paperclips than to dive into more remote parts of the ocean or expand into more remote environments on land. There are many remote islands, tribes, and animals in places where it just seems impractical to look for resources compared to the Moon, asteroids, orbital clutter, etc. At what point is the extremely small amount of resources on Pitcairn Island, for example, worth harvesting versus the cost? Return on investment seems like something an optimizer would care about, and I think that would buy the most remote locations significant time, especially as the optimizer's capabilities scaled up more and more, making much larger resource deposits accessible. Eventually, maybe, it will get around to hunting down every last atom, but is that still the same event or another one? I am not sure.

This moves my estimate below 50% for the extreme claim "all animal sentience". For "most" or "over 98% of animal sentience" I would definitely be above 50%. "All" is a very extreme qualifier.

akash 🔸
90% disagree


Intuitively seems very unlikely.

  1. The Chicxulub impact wiped out the dinosaurs but not smaller mammals, fish, and insects. Even if a future extinction event caused a total ecosystem collapse, I would expect some arthropods to be able to adapt and survive.
  2. I feel a goal-driven, autonomous ASI won't care much about the majority of non-humans. We don't care about the anthills we trample when constructing buildings (ideally, we should); similarly, an ASI would not intentionally target most non-humans — they aren't competing for the same resources or obstructing the ASI's goals.