My biggest takeaway from the Essays on Longtermism anthology is that irrecoverable collapse is a serious concern and we should not assume that humanity will rebound from a global catastrophe. The two essays that convinced me of this were "Depopulation and Longtermism" by Michael Geruso and Dean Spears and "Is Extinction Risk Mitigation Uniquely Cost-Effective? Not in Standard Population Models" by Gustav Alexandrie and Maya Eden. These essays argue that human population does not automatically or necessarily grow in the rapid, exponential way we became accustomed to over the last few hundred years.

In the discourse on existential risk, it's often assumed that even if only 1% of the human population survives a global disaster, eventually humanity will rebound. On this assumption, while extinction reduces future lives to zero, a disaster that kills 99% of the human population only reduces the eventual number of future lives from some astronomically large figure to some modestly lower astronomically large figure. This idea goes back to Derek Parfit, who (as far as I know) was the first analytic philosopher to discuss human extinction from a population ethics standpoint. Nick Bostrom, who is better known for popularizing the topic of existential risk, has cited Parfit as an influence. So, this assumption has been with us from the beginning.

Irrecoverable collapse, as I would define it, means that population does not ever rebound to pre-collapse levels and science, technology, and industry do not recover to pre-collapse levels, either. So, digital minds and other futuristic fixes don't get us out of the jam. While the two aforementioned papers are primarily about population, the paper on depopulation by Geruso and Spears also persuasively argues that technological progress depends on population. This spells trouble for any scenario where a global catastrophe kills a large percentage of living human beings.[1] 

While a small global population of humans might live on Earth for a very long time, the overall number of future lives would be far smaller than if science and technology continued to progress, the global economy continued to grow, and the global population continued to grow or at least stayed roughly steady. If irrecoverable collapse reduces the number of future lives by something like 99.9%, we should be concerned about irrecoverable collapse for the same reason we're concerned about extinction.[2]

For several kinds of existential threat, such as asteroids, pandemics, and nuclear war, it seems like the chance of an event that kills a devastating percentage of the world's population but not 100% is significantly higher than the chance of a full-on extinction event. If irrecoverable collapse scenarios are almost as bad as extinction events, then the putatively greater likelihood of irrecoverable collapse scenarios probably matters a lot!
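
To make this concrete, here is a toy expected-value sketch. The numbers are illustrative assumptions, not estimates: suppose a collapse-level (but sub-extinction) catastrophe is 10x more likely than an extinction-level one, and suppose an irrecoverable collapse cuts the number of future lives by 99.9% (the figure entertained above).

```latex
% Toy comparison; the 10x likelihood ratio is an illustrative assumption,
% and the 99.9% reduction is the figure entertained in the essay.
% Let V = the number of future lives if neither catastrophe occurs,
%     p = the probability of an extinction-level event.
\[
\begin{aligned}
\text{Expected loss from extinction} &= p \cdot V \\
\text{Expected loss from irrecoverable collapse} &= 10p \cdot 0.999\,V \\
\frac{\text{collapse loss}}{\text{extinction loss}} &= \frac{10p \cdot 0.999\,V}{p \cdot V} \approx 10
\end{aligned}
\]
```

Under those made-up numbers, the expected loss of future lives from irrecoverable collapse would be roughly ten times the expected loss from extinction.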

If irrecoverable collapse reduces the number of future lives by almost as much as extinction, and if irrecoverable collapse scenarios are more likely than extinction scenarios, then it may be more important to try to prevent irrecoverable collapse than extinction. In practice, trying to prevent extinction may look much the same as trying to prevent sub-extinction disasters. For example, pandemic prevention probably looks similar whether you're trying to prevent another pandemic like COVID-19, a pandemic 10x worse, or a pandemic 10x worse than that. However, I can think of two areas where this idea about irrecoverable collapse might be practically relevant:

  1. It might become more important to detect smaller asteroids using space telescopes like NASA's planned NEO Surveyor. It's plausible to think there may be asteroids that are too small to cause human extinction but large enough to cause irrecoverable collapse, especially if they hit a densely populated part of Earth. (Similar reasoning might apply to other threats like large volcanoes.)
  2. Maybe it's worthwhile thinking more about ways to reboot civilization after a collapse. There has been some discussion in the existential risk literature about long-term shelters or refuges, which could be a relevant intervention. See, for example, Nick Beckstead's excellent paper on the topic. However, Beckstead's paper seems to make the assumption that I'm now saying is dubious: if even a small number of people survive, that's good enough.

One topic not discussed in Essays on Longtermism is humanity's one-time endowment of easily accessible fossil fuels. These fossil fuels have been used up, and if industrial civilization collapsed, it could not be rebooted along the same pathway it originally took. A hopeful idea I once heard offered in this context was that charcoal, which is made from wood, could perhaps replace coal. I don't know whether that is feasible. This is a worrying problem, and if there are any good ideas for how to solve it, I would love to hear them.

There are other considerations. For example, if humanity regressed to a pre-scientific stage, are we confident that a Scientific Revolution would eventually happen again? Is a Scientific Revolution inevitable, given enough time, or were we lucky that it happened?

Let's say we want to juice the odds. Could we store scientific knowledge over the very long term, possibly carved in stone or engraved in nickel, in a way that would remain understandable to people for centuries after a collapse? How might we encourage future people to care about this knowledge? Would people be curious about it? How could we make sure they would find it? 

Not much research has been done into so-called "doomsday archives". To clarify: there has been some research on how to physically store data for a very long time, with proofs of concept that store data in dehydrated DNA or that use lasers to encode data in quartz glass or diamond. However, very little research has been done into how to make information accessible and understandable to a low-tech society that has drifted culturally and linguistically away from the creators of the archive in the centuries following a global disaster.

If irrecoverable collapse is indeed as important as I have suggested in this essay, then a few recommendations follow:

  • People who are concerned about existential risks primarily because of the reduction in the number of future lives should look more broadly at mitigating potential disasters that would not cause extinction but might cause an irrecoverable collapse.
  • Those same people should look into ways a devastated civilization could recover without the easily accessible fossil fuels that human civilization had the first time around.
  • Another potential research direction is doomsday archives that can preserve knowledge not only physically but also practically for people with limited technology and limited background knowledge.

In short, we should not assume humanity will automatically recover from a sub-extinction global catastrophe and should plan accordingly.

  1. ^

    If we were able to create digital minds, concerns about the biological human population and fertility rates would suddenly become much less pressing. However, getting to the point where we can create digital minds would require that the human population not collapse before then.

  2. ^

    This is not a new idea. As early as 2002, Nick Bostrom defined an existential risk as: "One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential." Even so, I think this idea has been under-emphasized.

Comments

I agree that extinction has been overemphasized in the discussion of existential risk. I would add that it's not just irrecoverable collapse, but also the potentially increased risk of subsequent global totalitarianism, or of worse values ending up in AI. Here are some papers I have co-authored that address some of these issues: 1, 2, 3, 4. Here is another relevant paper: 1, and a very relevant project: 2.

Thanks for sharing the papers. Some of those look really interesting. I’ll try to remember to look at these again when I think of it and have time to absorb them. 

What do you think of the Arch Mission Foundation's Nanofiche archive on the Moon?

Wouldn’t a global totalitarian government — or a global government of any kind — require advanced technology and a highly developed, highly organized society? So, this implies a high level of recovery from a collapse, but, then, why would global totalitarianism be more likely in such a scenario of recovery than it is right now? 

I have personally never bought the idea of “value lock-in” for AGI. It seems like an idea inherited from the MIRI worldview, which is a very specific view on AGI with some very specific and contestable assumptions of what AGI will be like and how it will be built. For instance, the concept of “value lock-in” wouldn’t apply to AGI created through human brain emulation. And for other technological paradigms that could underlie AGI, are they like human brain emulation in this respect or unlike it? But this is starting to get off-topic for this post. 

Wouldn’t a global totalitarian government — or a global government of any kind — require advanced technology and a highly developed, highly organized society? So, this implies a high level of recovery from a collapse, but, then, why would global totalitarianism be more likely in such a scenario of recovery than it is right now? 

Though the world may be more likely to end up under global totalitarianism after recovering from a collapse, I was referring to a scenario where there was no collapse, but the catastrophe pushed us towards totalitarianism. Some people think the world could have ended up totalitarian if World War II had gone differently.

What do you think of the Arch Mission Foundation's Nanofiche archive on the Moon?

I don't think it's the most cost-effective way of mitigating X risk, but I guess you could think of it as plan F:

Plan A: prevent catastrophes

Plan B: contain catastrophes (e.g. preventing a nuclear war from escalating, or suppressing an extreme pandemic)

Plan C: resilience despite the catastrophe getting very bad (e.g. maintaining civilization despite blocking of the sun or collapse of infrastructure because of employee pandemic fear)

Plan D: recover from collapse of civilization

Plan E: refuge in case everyone else died

Plan F: resurrect civilization

I have personally never bought the idea of “value lock-in” for AGI. It seems like an idea inherited from the MIRI worldview, which is a very specific view on AGI with some very specific and contestable assumptions of what AGI will be like and how it will be built. 

I think value lock-in does not depend on the MIRI worldview - here's a relevant article.

Thank you for sharing your perspective. I appreciate it. 

I definitely misunderstood what you were saying about global totalitarianism. Thank you for clarifying. I will say I have a hard time guessing how global totalitarianism might result from a near-miss or a sub-collapse disaster involving one of the typical global catastrophe scenarios, like nuclear war, pandemics (natural or bioengineered), asteroids, or extreme climate change. (Maybe authoritarianism or totalitarianism within some specific countries, sure, but a totalitarian world government?)

To be clear, are you saying that your own paper about storing data on the Moon is also a Plan F? I was curious what you thought of the Arch Mission Foundation because your paper proposes putting data on the Moon and someone has actually done that! They didn't execute your specific idea, of course, but I wondered how you thought their idea stacked up against yours.

I definitely agree that putting data on the Moon should be at best a Plan F, our sixth priority, if not even lower! I think the chances of data on the Moon ever being useful are slim, and I don't want the world to ever get into a scenario where it would be useful!

I think value lock-in does not depend on the MIRI worldview - here's a relevant article.

Ah, I agree, this is correct, but I meant that the idea of value lock-in is inherited from a very specific way of thinking about AGI, primarily popularized by MIRI and its employees but also by people like Nick Bostrom (e.g. in his 2014 book Superintelligence). Thinking value lock-in is a serious and likely concern with regard to AGI does not require you to subscribe to MIRI's or Bostrom's specific worldview on AGI. So, you're right in that respect.

But I think if recent history had played a little differently and ideas about AGI had been formed imagining that human brain emulation would be the underlying technological paradigm, or that it would be deep learning and deep reinforcement learning, then the idea of value lock-in would not be as popular in current discussions of AGI as it is. I think the popularity of the value lock-in idea is largely an artifact of the historical coincidence that many philosophical ideas about AGI got formed while symbolic AI or GOFAI was the paradigm people were imagining would produce AGI.

The same could be said for broader ideas about AI alignment. 
