Space settlement is likely to happen, and may not be as far in the future as we assume. It has the potential to decrease our likelihood of extinction at a manageable cost, but only if we do it right. Let us not squander its potential benefits for humanity.
In the attached essay, which I authored for the Swiss Existential Risk Initiative's (CHERI) 2022 summer research fellowship, I tackle this subject from three main directions by...
- ...attempting to quantify the direct effects of spreading humanity across celestial bodies on the risk landscape. For each of the widely accepted X-risks, I appraise how humanity's vulnerability to it will change if we no longer solely inhabit Earth (example: "how will the probability of all humans being wiped out by an asteroid change?").
- ...taking a systems-theoretical view of complex risk in a settled solar system scenario. I investigate the necessary conditions that an interplanetary human civilization must fulfil to ensure its resilience to system-level threats (example: "if an extraplanetary settlement is not self-sustaining, it may succumb even if not directly affected by a catastrophic event").
- ...investigating higher-order effects of space settlement on the X-risk landscape. Space settlement will impact human civilization in many ways that are unpredictable and may be intangible, but nevertheless highly consequential for our susceptibility to X-risk (example: "how will the existence of a human sister civilization alter our moral circle on Earth?").
For some more detail, here is the abstract:
The survival of humanity is threatened by a plethora of hazards - from asteroid strikes to engineered pandemics. Can settling space increase our odds of survival? This article examines this question in detail and draws three main conclusions. 1) Spreading to other planets immediately mitigates some hazards (example: supervolcanic eruptions) while leaving others unaffected (example: rogue artificial intelligence). While this is favorable, becoming interplanetary alone will not fully mitigate existential risk. 2) To harness the full security potential of spreading to space, it is of prime importance to prevent knock-on effects of locally occurring catastrophes from spreading to other settlements in space. This can be achieved by maximizing resilience to complex risk. This article offers some concrete policy suggestions to maximize resilience from a systems-theoretical point of view. Resilience comes at a price – the economic viability and the existential security of space settlements form a tradeoff. 3) Higher-order effects arising from the process of settlement can also act as existential security factors: alongside their more general desirable effects, technological spinoffs will likely reduce the vulnerability to a number of existential threats in a virtuous feedback loop (examples: climate change and disaster shelter design). The psychological and socio-cultural effects of settlement (examples: the overview effect, awe and existential hope) are not to be underestimated and may lead to a broad risk reduction. It is likely that humans will explore and settle space driven mainly by curiosity, adventure, pride, economic gain and national competition, not the potential existential risk benefits. Thus, everyone concerned about existential risk should attempt to influence and shape these efforts while they are still at an early stage to ensure that humanity's systemic resilience is increased.
Space settlement, if done right, can significantly increase our security at a manageable cost.
If I have piqued your interest, please feel free to download the full essay from Dropbox:
Thank you and enjoy!
Chris
Thanks for the report.
If I were to add one thing to this report, it would probably be a comparison of increasing the likelihood of space settlement vs increasing the likelihood of extremely resilient and self-sustaining disaster shelters (e.g. shelters that could be self-sustaining for decades or possibly centuries). You note the similarities in "Design of disaster shelters", but don't compare these as possible interventions (as far as I can tell).
My naive (mostly uninformed) guess would have been that very good disaster shelters are wildly cheaper and easier (prior to radical technology change like superhuman AI or nanotech) while offering most of the same benefits.
(I put a low probability on commercially viable and self-sustaining space colonies prior to some other radical change in the technical landscape, but perhaps I'm missing some story for economic viability. Like I think the probability of these sorts of space colonies in the next 60 years is low (without some other radical technical advancement like AI or nanotech happening prior in which case the value add is more complex).)
Hey Ryan, I think your scepticism is a widely held view among EAs, but IMO it overlooks some crucial factors in addition to the considerations Christopher mentions:
If you also think AI timelines might be substantially longer than EAs typically assume, or that AI could be less 'extinction event' and more 'global catastrophe on the order of taking down the internet' (which seems plausible to me - there are a lot of selection effects in this community for short-timeline-doomers and evidence of some quite strong groupthink), then it starts to look like a reasonable area for further consideration, especially given how little serious scrutiny it's had among EAs to date.
Thanks for the comment.
I'd like to stress that it seems at least plausible to me that encouraging space colonization could be a worthwhile cause area. (I'm uncertain what is sufficiently neglected here, but it seems plausible that there are some key neglected areas.) And I'd like to stress that I haven't thought about the area very much.
Accordingly, feel free to not engage with the rest of my comment.
A sufficient crux for me would be thinking that it's doable to substantially affect the probability of commercially viable long-term space colonization[1] within the next 40 years. My main concern here would be that commercially viable long-term space colonization within 40 years is quite unlikely by default and thus hard to boost the absolute probability by much (it's way down on the logistic success curve). My second biggest concern would be that this doesn't really seem very neglected. (Though perhaps resources are sufficiently inefficiently allocated that there is a neglected subfield?)
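The "way down on the logistic success curve" point can be made concrete with a toy model (the log-odds framing and the specific numbers here are mine, purely illustrative): if advocacy buys a roughly fixed shift in log-odds, the same push moves the absolute probability far less when the baseline probability is already very low.

```python
import math


def logistic(x: float) -> float:
    """Standard logistic function mapping log-odds to probability."""
    return 1.0 / (1.0 + math.exp(-x))


# Suppose an intervention buys a fixed +1 shift in log-odds.
# Far down the curve, that barely moves the absolute probability:
tail_gain = logistic(-5) - logistic(-6)  # well under 1 percentage point
mid_gain = logistic(1) - logistic(0)     # roughly 23 percentage points

print(f"tail gain: {tail_gain:.4f}, mid gain: {mid_gain:.4f}")
```

Under this (assumed) model, the same advocacy effort applied near the middle of the curve buys dozens of times more absolute probability than the same effort applied in the tail, which is the shape of the concern above.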
To emphasize, this is a sufficient crux, but not the only sufficient crux.
I have >25% on AI taking longer than 30 years, so this isn't that much of a discount on this view (but the potential for working on AI being better might be a substantial discount).
That is, space colonization happening prior to some other radical technological change. As in, encouraging space colonization after transformative AI or nanotech doesn't seem that important for various reasons.
I agree this is an important question - I would like to see more research effort into it (if nothing else, that question seems neglected).
That said, I am strongly averse to EAs using 'not neglected' as evidence that some area isn't worth supporting. There's so much context to it, and it's not even well defined (not neglected relative to the overall amount of effort required, or in absolute terms?). At best it gives a first guess at tractability and so is a heuristic for prioritising prioritisation projects - but for a movement that has been around for over a decade and sunk millions of work hours into cause prioritisation, it really should be an obsolete heuristic.
I strongly disagree that 'not neglected' isn't good evidence. It's evidence resting only on pretty weak assumptions (roughly: that returns diminish faster than the degree to which seeing more people working on a thing is evidence of the thing being good). For research-ish areas, I think you should probably have something like a log-returns prior, in which case this makes sense.
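The log-returns prior can be illustrated with a toy model (the functional form is mine, chosen only to illustrate the shape of the argument): if cumulative progress grows like the log of the number of workers, the marginal worker in a crowded field contributes far less than in a neglected one.

```python
import math


def total_value(n_workers: int) -> float:
    """Toy model: cumulative progress under a log-returns prior.

    Assumes value grows as log(1 + n); illustrative only, not a claim
    about the true returns curve of any field.
    """
    return math.log1p(n_workers)


def marginal_value(n_workers: int) -> float:
    """Value added by one more worker joining a field of n workers."""
    return total_value(n_workers + 1) - total_value(n_workers)


# Under log returns, the marginal person in a field of 1000 is worth
# far less than the marginal person in a field of 10:
ratio = marginal_value(10) / marginal_value(1000)
print(f"neglected field looks ~{ratio:.0f}x better per marginal person")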
I'm somewhat sympathetic to "maybe someone should just do the serious prioritization research", in which case we don't need rough heuristics except for prioritising prioritisation projects. ([Low confidence] But in practice, I think high-level spot checking is necessary and often better than actual reports for longtermism IMO. For this, heuristics are pretty important. It's just pretty easy to have enough information/understanding to beat carefully done prioritization research in the longtermist space in many cases. Research can be informative, but often not for the bottom line - rather for answering various questions about fundamentals.)
See also Most problems fall within a 100x tractability range.
For the definition of ITN that I use (but maybe people use others?), tractability is basically separable from neglectedness:
(Quote from the same post I linked above.)
In absolute terms ideally (as in definition above), though this might be somewhat poorly defined still.
At the end of the day, we just want to compute expected value with respect to various actions, but I still think that "are a bunch of people already trying to solve the problem and are they approaching it in a reasonable way" is a pretty good heuristic. (In research-ish fields, clearly some fields end up having pretty linear returns and we can usually predict this with some other simple heuristics.)
I don't have time to respond to this in as much depth as I'd like, but maybe it's worth a few cursory remarks, since much of what you've said touches on my frustrations with the concept.
I'm not sure how to parse this.
This seems like an extremely strong assumption to me, especially since it's basically the assertion I'm contesting:
Some real world examples of problems and ambiguities with the concept that showcase these issues:
This seems like a very hard claim to justify, given both how conceptually difficult it is to measure outputs/value gains in the longtermist space and how few organisations in it are producing anything meaningfully measurable at all. I think many assumptions the space makes, going right back to the idea that longtermist work is better from a longtermist perspective than 'short-termism', are justified on flimsy, self-supporting heuristics.
I don't believe in practice (m?)any people use that version. In order to get a neatly cancellable equation, it quietly replaces 'tractability' - a relatively intuitive concept that people can often do useful work on - with something like elasticity of tractability, which I've never heard anyone opine on, directly or indirectly in any other context.
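For context, the cancellable decomposition in question (I believe this is the standard 80,000 Hours-style formulation, stated here from memory) factors the value of marginal resources as:

```latex
\frac{\text{good done}}{\text{extra resources}}
= \underbrace{\frac{\text{good done}}{\%\,\text{of problem solved}}}_{\text{importance}}
\times \underbrace{\frac{\%\,\text{of problem solved}}{\%\,\text{increase in resources}}}_{\text{tractability}}
\times \underbrace{\frac{\%\,\text{increase in resources}}{\text{extra resources}}}_{\text{neglectedness}}
```

Note that the middle factor is a ratio of percentage changes, i.e. an elasticity - which is the "elasticity of tractability" substitution being criticised here.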
Hi Ryan, thanks a lot for taking the time and leaving your thoughts, I appreciate it!
I agree that extremely resilient and self-sustaining disaster shelters might offer similar benefits from an X-risk perspective.
I didn't go all too deep into that topic because I feel there aren't the same incentives (and excitement) around building shelters on Earth compared to starting settlements away from Earth. At least, I am unaware of designated X-risk shelters being built specifically to save humanity in the event of a catastrophe (exceptions might be the Svalbard Seed Vault, some military nuclear bunkers, or perhaps nuclear submarines).
On the other hand, there are multiple efforts currently aimed at starting a space settlement, and the concept is much more embedded into the popular awareness.
I think there is an inherent attractiveness to spreading "outwards" as opposed to going "inwards". To put it romantically: there seems to be more potential for human development in the reaches of space than below the Earth.
HOWEVER, I totally agree with you that it would certainly be smart to prepare (X-risk) shelters on Earth, for many reasons! Are you aware of any projects currently pursuing this?
People have certainly talked about this on the forum, but maybe people currently think it seems somewhat more cost effective to work on other projects given the current x-risk reduction funding?
Thanks for sharing this, Chris!
Kudos especially for taking the time to write up the summary.