
This is an abbreviated version of the new 80,000 Hours problem profile on space governance.

Introduction

Over the last four decades, the cost to launch a kilogram of payload into space has fallen from roughly $50,000 (for NASA’s Space Shuttle) to less than $1,500 (for SpaceX’s Falcon Heavy). With its new reusable designs, SpaceX aims to further cut launch costs to around $10 per kilogram. Cheap, reusable rocket technology could mark the beginning of a new ‘space race,’ with the frequency of launches potentially increasing from hundreds per year to hundreds per day.
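To put those figures in perspective, here is a back-of-envelope sketch of the cost reductions involved (the dollar figures are the approximate public estimates quoted above, and the $10/kg number is an aspiration rather than a demonstrated price):

```python
# Rough comparison of launch costs per kilogram to low-Earth orbit.
# Figures are approximate public estimates; $10/kg is SpaceX's stated
# target for a fully reusable design, not a demonstrated price.
shuttle_cost_per_kg = 50_000   # NASA Space Shuttle, ~USD/kg
falcon_heavy_per_kg = 1_500    # SpaceX Falcon Heavy, ~USD/kg
target_cost_per_kg = 10        # aspirational fully reusable design

reduction_so_far = shuttle_cost_per_kg / falcon_heavy_per_kg
reduction_if_target_met = shuttle_cost_per_kg / target_cost_per_kg

print(f"Reduction so far: ~{reduction_so_far:.0f}x")                # ~33x
print(f"Reduction if target met: ~{reduction_if_target_met:.0f}x")  # 5000x
```

In other words, the fall in costs so far is a bit over one order of magnitude, and the aspirational target would add more than two further orders of magnitude on top of that.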

It’s worth taking seriously some of the crazier ways this could play out. If things go well, we could choose for our time on Earth to become just the first stage of a journey into space. We might eventually make use of an almost limitless supply of material resources orbiting around the Sun, and begin to establish self-sustaining communities living beyond Earth. In the longer run, very large numbers of people could live beyond our home planet. A spacefaring future for humanity would make us resilient to disasters local to one planet, and it could also become far more varied and expansive than an Earthbound future, in ways that are hard to imagine now.

Ultimately, the sheer scale of the accessible universe makes the question of what we eventually do with and within it enormously important. If the human story ends before spreading beyond Earth, perhaps we would have missed out on almost all the valuable things we could have reached.

But it’s also easy to see things going wrong. The satellites in ‘low-Earth orbit’ are critical infrastructure, but could be unusually easy to disrupt or disable. Competition over outer space, or just ambiguity over issues of liability, could increase the risk of a great power conflict or lead to an anti-satellite arms race. Different actors unilaterally competing to land the first people on Mars, or build the first permanent structures on the Moon, could cement uncooperative norms around exploring space that persist long into the future. And if different groups rush to independently settle beyond Earth in a relatively ungoverned way, it could be far more difficult to get humanity-wide agreement — such as to prohibit a powerful weapons technology, or pursue a period of ‘reflection’ before we embark down a path that would be hard to reverse.

But simply trying to delay potentially risky moves in space probably isn’t the only strategy, and sometimes it might even be a bad one. As with the development of artificial intelligence, serious delays sometimes require an infeasible amount of multilateral agreement — because private actors are incentivised by a growing private space industry, and national actors by concerns about reconnaissance and security capabilities. Some of the best ways to make a positive difference will instead be to help navigate away from the risks (and toward the potential benefits), given whatever rate of progress the world is making.

An especially promising way to do this could be to decide in advance how to govern activities in outer space — such as how to handle many times more space debris, how to resolve disputes over property or allocate property in advance, and how to restrict the use of weapons in space.[1] Because these sorts of governance mechanisms are currently lagging far behind this new race for space driven by rapidly falling launch costs, now could be an unusually influential time for humanity’s future in space.

Why could this be a pressing problem?

Almost all of humanity’s long-run future could lie in space — it could go well, but that’s not guaranteed

If the cost of travelling to other planetary bodies continues the trend described above and falls by an order of magnitude or so, then we might begin to build increasingly permanent and self-sustaining settlements. Truly self-sustaining settlements are a long way off, but both NASA and China have proposed plans for a Moon base, and China recently announced plans to construct a base on Mars.

Building a semi-permanent presence on Mars will be very, very hard. Mars’s atmosphere is around 1% as dense as Earth’s, and the surface receives around 50 times the amount of radiation that we get on Earth. Plus, Mars’s soil is toxic to humans and unsuitable for growing plants without being decontaminated. Initially, the base will require a continual supply of resources, parts, fuel, and people from Earth.

But if we wanted to, it looks like we could eventually get ambitious. We might reach material self-sufficiency, including terraforming Mars to the point at which the atmosphere is breathable. The point at which such settlements become self-sustaining is a critical one, because that’s roughly the point where they might be useful for recovering from a catastrophe on Earth. But we're not at all close yet: building a self-sustaining settlement will be slow, expensive, and brutally difficult.

People (in some form) might one day also travel and even settle beyond the solar system. The technology doesn’t yet exist, so it would be naive to try describing it in detail. But we don’t yet know of any insurmountable obstacles — such as from the laws of physics, costs, or time constraints — to spreading very far through space, and even to other galaxies. As hazy as this all is, we shouldn’t rule out the possibility that people might one day spread very widely throughout space, such that almost all the people who live in the future eventually live beyond Earth.

Without intervention, the Earth will likely be rendered uninhabitable within about one billion years, while stars will continue to be capable of supporting life for at least tens of thousands of times longer.[2] So a spacefaring future for humanity might not only support far more people than an Earthbound future, but it could last far longer.

This could be very good, or very bad.

If all goes well, with abundant energy, resources, and literal space, our descendants might one day realise grand, desirable futures — some very hard to imagine from our perspective.

On the other hand, life beyond Earth might be so dominated by competition, conflict, disagreement, or adversity, that it could end up being bad overall. Maybe the openness of space would tip the balance in favour of military offence over defence, or reward the most greedy pioneers whose aim is just to lay claim to more territory than their neighbours. Unlike on Earth, it might be literally impossible to escape to a friendlier regime, and the large distances between groups could mean far less natural pressure to conform to the cooperative norms of the ‘neighbours,’ so it could be easier for values to drift in a bad direction.

Whether a future in space goes well could significantly depend on how it’s governed now

There are some reasons to expect that a lot of the variance between these good and bad outcomes could depend on how space ends up being governed.

We have examples of good and bad governance on Earth, and the quality of life under those regimes normally depends closely on the quality of governance. In particular, by providing forums for coordinating toward shared goals, and making the threat of a collective response to aggression more credible, effective international and bilateral governance seem to have reduced the risk of serious conflict between countries.

[...]

Bad governance could also lead to some of the worst imaginable futures, especially if you have some reason to think totalitarian regimes might be easier to maintain beyond Earth.

Either way, it looks like governance in space will go a long way to determining how well or badly space settlements turn out. But that doesn't show that space governance is now a pressing problem. For that, we'd need to think that how space is governed in the longer run could end up depending in some predictable way on how it’s governed in the next few decades. This is far from certain — but if it's true, then shaping space governance now could matter enormously. So how might it work?

In one scenario, we might gradually spread to self-sustaining settlements beyond Earth, over the course of a century or longer. In this case, it could be informative to look at how some countries’ early constitutions have influenced their trajectory as they grew much larger over decades or centuries.[4]

There’s another scenario in which things happen much faster, and more dramatically. This is because it may be possible to build small and extremely fast-travelling probes, which, once launched, could build settlements based on the blueprints we give them. In fact, they could replicate indefinitely, similar to how an acorn turns earth and sunlight into an oak tree, which then produces many acorns. Because of this self-replicating possibility, the probes we launch in this ‘explosive’ period might eventually settle most of the places that will ever be settled, which is perhaps a significant fraction of the entire accessible universe.[5] Yet, because we could launch many of these probes all at once, this could all happen very quickly — and it could be difficult to reverse our decisions afterwards.

Of course, biological humans wouldn’t come along for the ride in this scenario. But these probes might carry the ingredients to run or recreate at least human minds, by storing or instantiating them digitally.[6]

If something even resembling this scenario plays out, it would be a pivotal moment for humanity. We could determine the values and governance structures that get sent from Earth, and those things might then become ‘locked in’ for an extraordinarily long period of time. Holden Karnofsky[7] writes in his blog:

[…] whoever is running the process of space expansion might be able to determine what sorts of people are in charge of the settlements and what sorts of societal values they have, in a way that is stable for many billions of years.

If either of these scenarios happens this century, it seems important to begin thinking seriously about how to positively influence the process. For example:

  • How should ownership and property be allocated?
  • What could ideal constitutions (or similar) look like?
  • What rules do we want in place for sending instructions to unmanned spacecraft after they’ve left Earth?

If these scenarios don't sound wildly implausible to you — and you think that advance thinking could meaningfully improve the odds that they go well — then you might think that this could be the most important way in which space governance ends up mattering.

[...] This is of course a very speculative case. Next, we’ll consider a more immediate and concrete reason for working on space governance.

Effective arms control in space could reduce the risk of conflict back on Earth

More and more critical infrastructure is getting placed in orbit, while governance frameworks for conflict in space remain weak and ambiguous. Nearly 4,000 satellites already operate in a region called low-Earth orbit. We rely on this network of satellites for communications, GPS, remote sensing, and imaging useful for disaster relief.

A situation where satellites are especially vulnerable to attack wouldn’t just be bad because we could lose civilian infrastructure. Reconnaissance satellites are also used by militaries for early warnings of ballistic missile launches, detecting nuclear explosions, and spotting aggressive movements with photography or radar. If these information-gathering satellites get disabled, the country relying on them would suddenly be far less certain about whether and when they are under attack, making escalation from perceived provocation more likely.

At the same time, India, China, the US, and Russia all appear to be developing some form of anti-satellite weapon systems — meaning ground-launched, space-based, or cyber weapons designed to disable enemy satellites (civilian or military). Proposed international frameworks for either banning or controlling these new weapons have not yet materialised. This combination of fragility and uncertainty could make conflict in space especially easy to trigger, in turn increasing the risk of conflict back on Earth. This kind of conflict would most likely take place between great powers, given their disproportionate presence in space.

We know that disarmament agreements can work: for instance, the START and New START treaties between the United States and Russia successfully reduced and limited stockpiles of strategic nuclear weapons. So a promising way to reduce the risk of great power conflict could be to work on pushing for disarmament agreements in space — with a special focus on rules against targeting enemy reconnaissance satellites.[8]

Now could be an especially good opportunity to influence space governance

But how easy is it to shape space governance? In fact, it looks like there could be some unusually big opportunities to do so now and in the near future.

Current international governance frameworks for space are sparse and out of date, and because private companies are suddenly getting involved in space, important decisions are likely to be made soon. The most significant international agreement to date is the Outer Space Treaty, which entered into force in 1967 — more than half a century ago.[9] There was an attempt to get countries to agree on some fairly demanding rules in 1979 with the Moon Agreement, but it almost entirely failed.[10] [11]

[...]

Meanwhile, the private space industry looks set to more than double in size in the next few decades. Today, it’s worth just over $300 billion, and global government space budgets total around $70 billion.[12] The investment bank Morgan Stanley anticipates that the private space industry could be worth more than $1 trillion by 2040.[13]
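Taking those figures at face value, the implied growth rate is actually fairly modest in annual terms. A quick sketch (the start year and combined starting value are assumptions based on the numbers above):

```python
# Implied compound annual growth rate (CAGR) if a ~$370B space economy
# (~$300B private industry plus ~$70B government budgets, per the text)
# grows to Morgan Stanley's projected $1 trillion by 2040.
# Treating the starting figures as describing roughly 2022 (an assumption).
start_value = 370e9
end_value = 1e12
years = 2040 - 2022

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied annual growth: ~{cagr:.1%}")  # ~5.7% per year
```

So tripling by 2040 requires only mid-single-digit annual growth — a reminder that these projections, while large in absolute terms, are not extraordinary as growth rates go.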

Several major space governance agreements are already being discussed, and stand some chance of being adopted within a decade. For instance, the Proposed Prevention of an Arms Race in Space Treaty is currently being discussed in the Conference on Disarmament, a forum in the United Nations. In 2020, NASA and the US Department of State announced the Artemis Accords, an effort to establish an international framework for cooperation around space exploration beginning outside of the UN.

But perhaps the most important (and most urgent) news is that in late 2021 the Secretary General of the United Nations announced a major new agenda,[14] which includes a proposed “Summit of the Future” conference to take place in 2023. As part of that conference, the agenda calls for “a multi-stakeholder dialogue on outer space [...] bringing together Governments and other leading space actors” whose aim would be to “seek high-level political agreement on the peaceful, secure and sustainable use of outer space.” It also notes that existing international arrangements provide “only general guidance” on “the permanent settlement of celestial bodies and responsibilities for resource management” — implying this should be corrected. It seems likely that the organisers will announce a call for proposals sometime before this 2023 conference, in order to collect ideas from the wider research community. Perhaps we’ll see a major new international space treaty emerge from this or subsequent conferences — and perhaps you could help shape it.

[In short,] the ratio of likely importance to actual funding and activity could be unusually high for space governance right now — meaning early work could be more impactful than work later.

Acting early may be especially important for arms control in space. In general, it should be easier to get agreement when fewer actors have capabilities for a given weapons technology, because there are fewer competing interests to coordinate. Likewise for when those actors have invested less in developing weapons capabilities, since they have less to lose by agreeing to limits on their use. Effective arms control may be easier still if it is entirely preemptive: if a weapon hasn’t yet been built or tested by any actor.[15]

[...]

There are identifiable areas to make progress on

Avoiding premature lock-in

Humanity should be aiming to keep its (positive) options open — we have very little idea about what kind of future on Earth or in space would be best. Embarking on ambitious projects in space might ‘lock in’ decisions that turn out to be misguided. Plausibly, we should therefore make time for a period of reflection before embarking on potentially irreversible projects to spread through space.

Furthermore, without any forethought or governance, humanity’s long-run future in space might become a kind of uncoordinated ‘free for all’ — where the most expansionist groups eventually dominate. Like extinction, this kind of fragmented future could be a kind of lock-in — harder to escape from than to enter into.

This suggests we should try to research ways to make sure that grand projects in space can be changed or reversed if it becomes clear they’re heading in a bad direction.

[...]

Avoiding weaponised asteroid deflection

When thinking about risks from space, you’ll likely think of asteroids.

[Fortunately, compared to other existential threats this century,] the risk from asteroids this century appears to be very low[16][, and unusually well managed — Toby Ord writes in The Precipice: “no other existential risk is as well handled as that of asteroids or comets”.]

But if we do develop asteroid defence systems, we should also handle them carefully: any technology capable of deflecting an asteroid away from a collision course with Earth will make it easier to divert it toward Earth [...] As Carl Sagan and Steven J. Ostro write: “premature deployment of any asteroid orbit-modification capability [...] may introduce a new category of danger that dwarfs that posed by the objects themselves.”

To address this worry, actors might:

  • Agree on a monitoring network for asteroid deflection.
  • Regulate technology with the potential to divert objects in space.
  • Consider mandating liability insurance (similar to proposals in the context of risky biological research).

For more detail, see this longer post about risks from asteroids [also on the EA Forum].

Setting up rules for space debris to keep low-Earth orbit usable

Most of the roughly 6,000 satellites in low-Earth orbit are no longer operational — they have become fast-travelling pieces of junk. Smaller pieces of debris are created when flecks of paint come loose, when derelict spacecraft fragment into small pieces, or when particles of fuel are expelled from rocket motors.[18]

The result is a cloud of orbital debris, each piece flying through space around 10 times faster than a bullet. Space debris is already making space missions costlier and more risky,[19] and the number of satellites in orbit is set to more than triple by 2028.
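To get a feel for why even tiny fragments are dangerous, here is a back-of-envelope kinetic-energy comparison (the masses and speeds are illustrative assumptions; typical collision speeds in low-Earth orbit are on the order of 10 km/s):

```python
# Kinetic energy of a small debris fragment at a typical low-Earth-orbit
# collision speed, compared with a rifle bullet. All figures illustrative.
def kinetic_energy_joules(mass_kg, speed_m_s):
    return 0.5 * mass_kg * speed_m_s ** 2

debris_ke = kinetic_energy_joules(0.001, 10_000)  # 1 g fleck at ~10 km/s
bullet_ke = kinetic_energy_joules(0.004, 900)     # ~4 g rifle bullet at ~900 m/s

print(f"1 g debris fragment: ~{debris_ke / 1000:.0f} kJ")  # ~50 kJ
print(f"Rifle bullet:        ~{bullet_ke / 1000:.1f} kJ")  # ~1.6 kJ
print(f"Ratio: ~{debris_ke / bullet_ke:.0f}x")             # ~31x
```

Because kinetic energy scales with the square of speed, a one-gram paint fleck at orbital collision speeds carries tens of times the energy of a bullet — which is why millimetre-scale debris poses a mission-ending risk to satellites.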

The dangers posed by orbital debris are mostly negative externalities — like how dumping chemical waste in a river affects not just you, but everyone downstream. In such cases, governance could impose clearer incentives to clear up debris — and develop the tech to do so.

Relatedly, there is no international authority both monitoring and enforcing any traffic regulations in orbit. There’s no major reason one couldn’t be established soon — everyone would benefit from having some rules that reduce collisions (as we’ve seen in civil aviation). Efforts to make sure this gets implemented well could be valuable.

Figuring out how to distribute resources and property

It might be worthwhile to begin thinking about mechanisms for deciding who owns what in space.

Failing to have clear rules in advance could encourage risky and competitive behaviour, as players race to grab space and resources in the absence of any sort of governance. For instance, we could eventually mine resources from asteroids and the Moon. Mandating that nearly all resources be shared would leave little to no incentive to reach them in the first place, but some clear rules for how to distribute especially large ‘windfalls’ of wealth (as has been suggested for AI development) could be good.

What are the major arguments against this problem being pressing?

Abbreviated sections —

Early efforts could be washed out later

From a longtermist perspective, the strongest case for space governance may be the idea that through early action, we can positively influence how space ends up being governed in a long-lasting way, or make sure the wrong values aren't locked in.

But it could be very likely that early governance initiatives simply get ‘washed out’ by later decisions, such that the early work ends up having little influence on the way things eventually turn out. And the further away you think serious efforts at permanently settling space are, the more likely this seems. Ultimately, we’re not sure exactly how long to expect early efforts to last.

You don’t need to think the likelihood of significant and very long-lasting effects is zero to think you shouldn't work on space governance — just that the chance of washout makes the case significantly weaker than the case for other pressing problems.

[...]

Positively influencing the arrival of transformative AI could be much more important

If some views about advanced AI are right, it could be much more pressing to work on making sure AI aligns with the right values — in part because it looks like the highest-stakes space scenarios (e.g. those involving rapid settlement) are most likely to involve advanced AI.

After the arrival of very powerful AI, problems in space governance could look very different. Perhaps the political order will have changed, or new space technologies will emerge very quickly.

Further, the arrival of transformative AI could cause wide-reaching social and governance change. This could make it especially likely that early work on space governance gets washed out.

Finally, you might reasonably expect transformative AI to arrive sooner than successful projects to build or settle widely beyond Earth, since these projects look very difficult without the kind of sophisticated and widespread automation of engineering that transformative AI would enable.

If this story is right, then it might be more worthwhile to work on positively shaping the development of AI instead.


  1. ^

    One reason this might be important to control is that weapons launched from space can arrive with less warning, which narrows the window to respond to provocations (thereby reducing deterrence) and increases the likelihood of false alarms.

  2. ^

    Note that if humanity eventually settles widely in space, it seems very likely that almost everyone will live in artificial structures rather than on planetary surfaces.

  3. ^

    I've used "[...]" to indicate where I have cut sections from the original problem profile.

  4. ^

    However, note that constitutions do not tend to last nearly as long as the Constitution of the United States. So you should probably start off very sceptical that a ‘space constitution’ written today will survive long enough to matter.

  5. ^

    This scenario seems most likely if advanced artificial intelligence dramatically speeds up the rate of technological progress.

  6. ^
  7. ^

For transparency: Karnofsky is the chief executive officer of the Open Philanthropy Project, which is a major donor to 80,000 Hours as well as the Future of Humanity Institute, where I (Fin Moorhouse) currently work.

  8. ^

    Another promising idea is to use (internationally operated) satellites to help verify arms control agreements, such as prohibitions on the use of anti-satellite weapons in outer space. This is the central idea of the Canadian PAXSAT proposal.

  9. ^

    The primary focus of the Outer Space Treaty (OST) is arms control: it bars parties to the treaty from placing weapons of mass destruction anywhere in outer space, prohibits military testing or manoeuvres of any kind, and prohibits establishing permanent military bases. The other focus of the OST is on questions of claiming territory and expropriating resources. Article II states: “Outer space, including the moon and other celestial bodies, is not subject to national appropriation by claim of sovereignty, by means of use or occupation, or by any other means.” But the term “national appropriation” isn’t defined in the treaty. In particular, the OST is ambiguous over whether resources from celestial bodies can be appropriated by non-state actors. Article I states that the “use of outer space [...] shall be carried out for the benefit and in the interests of all countries,” but this alone adds little clarity. The OST does state that non-governmental entities “shall require authorization and continuing supervision by the appropriate State Party to the Treaty”.

  10. ^

    The Moon Agreement of 1979 set out to establish clearer and more demanding guidelines around using resources on the Moon and other celestial bodies, calling for an international regime to “govern the exploitation of the natural resources of the moon as such exploitation is about to become feasible.” But the agreement stipulated that resources appropriated from space shall be the “common heritage of mankind.” Though also left ambiguous, this clause too strongly suggested a regime where rewards must be fully shared. As such, no major spacefaring nation has ratified the Moon Agreement.

  11. ^

    The other major pieces of international space law are the Rescue Agreement, the Liability Convention, and the Registration Convention.

  12. ^
  13. ^

    This rapid projected growth is likely to be driven in the short term by demand for satellite infrastructure, especially for providing internet access. Today, the satellite industry makes up more than 75% of the space economy (and commercial human spaceflight much less than 1%).

  14. ^
  15. ^

    One example could be space laser weapons: destructive lasers beamed through space — either attached to satellites and aimed at ground targets, or vice-versa — capable of disabling reconnaissance satellites or intercontinental ballistic missiles mid-flight. Such capabilities could be destabilising, because they could increase the chance of a preemptive attack against the country that developed them — a worry that was raised as early as 1988, but could remain relevant. But lasers are just an illustrative example of the point: that falling costs to access space could open up the possibility of new kinds of weapons technology — some potentially destabilising — suggesting we should consider preemptive arms control for those technologies.

  16. ^

    See also Toby Ord, The Precipice (2020) p. 71, Table 3.1.

  17. ^

    This ‘dual-use’ concern mirrors other kinds of projects aimed at making us safer, but which pose their own risks, such as gain-of-function research on diseases.

  18. ^

    Debris roughly a millimetre in diameter represents the greatest mission-ending risk to most satellites in low-Earth orbit. This is because even tiny pieces of debris are travelling fast enough to cause serious damage, and most pieces of debris are very small.

  19. ^

    In 2021, a piece of space debris left a hole in a robotic arm attached to the International Space Station (ISS). A few months later, the ISS swerved to avoid a fragment of a US launch vehicle.

  20. ^

    Such as ‘existential risk,’ as in these remarks of the Secretary General in 2021.

Comments

Hi!  I’m an aerospace engineer at the Bay Area startup Xona Space Systems & a big fan of Effective Altruism.  Xona basically works on creating a next-generation, commercial version of GPS. Before that I helped build, launch, and operate a pair of cubesats at a small company called SpaceQuest, and before that I got a master’s degree at CU Boulder. I’ve also been a longtime fan of SpaceX, Kerbal Space Program, and hard sci-fi.

I think this is a good writeup that does a pretty good job of disentangling many of the different EA-adjacent ideas that touch on aerospace topics. In this comment I will talk about different US government agencies and why I think US policy is probably a more actionable space-governance area than broad international agreements; hopefully I’ll get around to writing future comments on other space topics (about the Long Reflection, the differences between trying to influence prosaic space exploration vs Von Neumann stuff, about GPS and Xona Space Systems, about the governance of space resources, about other areas of overlap between EA and space), but we'll see if I can find the time for that...

Anyways, I’m surprised that you put so much emphasis on international space agreements through the UN[1], and relatively little on US space policy. Considering that the USA has huge and growing dominance in many space areas, it’s pretty plausible that US laws will be comparably influential to UN agreements even in the long-term future, and certainly they are quite important today. Furthermore, US regulations will likely be much more detailed / forceful than broad international agreements, and US space policy might be more tractable for at least American EAs to influence. For example, I think the Artemis Accords (signed by 19 countries so far, which represent 1601 of the 1807 objects launched into space in 2021) will probably be more influential at least in the near-term than any limited terms that the upcoming UN meeting could get universal agreement on — the UN is not about to let countries start claiming exclusive-economic-zone-esque territory on other planets, but the Artemis Accords arguably do this![2]

With that in mind, here is an incomplete list of important space-related US agencies and what they do. Some of these probably merit inclusion in your list of “key organizations you could work for”:

  • Naturally, NASA makes many decisions about the overall direction of space exploration. There are big debates about long-term strategic goals: Should we target the Moon or Mars (or learn how to construct increasingly large space stations) for human exploration and settlement? Should space exploration be driven mostly by government itself, or should the government just be one of many customers trying to encourage the development of a private space economy? Which early R&D technologies (like in-space nuclear power, advanced ion propulsion, ISRU techniques, life support equipment) should we fund now in order to help us settle space later? How should we balance and prioritize among goals like human space settlement, robotic planetary exploration, space-telescope astronomy, etc? NASA’s decisions are very influential because they fund & provide direction for private companies like SpaceX and Blue Origin, and their international partnerships mean that smaller space agencies of other Western countries often join NASA initiatives. Of course NASA has to follow the direction of Congress on some of the big-picture decisions, but NASA has lots of leeway to make their own lower-level decisions and to influence Congress’s thinking by making recommendations and essentially lobbying for what NASA thinks is the best approach to a given issue. NASA is not a regulatory agency, but besides directing much actual space activity, they also often create influential international partnerships (like the International Space Station) and agreements (like the Artemis Accords) which might be influential on the far-future.
  • Similarly, DARPA and the US Air Force + Space Force clearly make many important decisions relevant to anti-satellite / arms-race / international-norm-setting considerations. Like NASA, they also invest in important R&D projects, like the current DARPA project to demonstrate nuclear propulsion.
  • The FCC is the USA’s main space regulatory agency. They are in charge of allocating licenses allowing satellite operators to use radio frequencies.[3]  They are also responsible for licensing the launch of satellite constellations (including the funny rules where you have to launch half of what you apply for within 3 years or risk losing your right to launch anything more). Finally, the FCC is the main regulator who is working to create a proper regulatory environment for mitigating space debris, a system that will probably involve posting large bonds or taking out liability insurance against the risk of debris. (Bonds / insurance could also provide a prize-like funding mechanism for third parties to capture and deorbit hazardous, defunct satellites.)
  • The FAA, who mostly regulate airplane safety, are also in charge of licensing the launch and reentry of rockets, capsules, etc. This seems less relevant to the long-term future than the FCC’s regulation of satellite operations, but who knows — since the FAA today regulates air traffic management and commercial space tourism, it might someday end up in charge of human flights to Mars or all around the solar system, and the norms it establishes might go on to influence human space settlement even further afield.
  • Although the FCC is in charge of regulating space debris, it is STRATCOM (the nuclear-ICBM-command people) which currently provides satellite operators with timely collision-risk alerts. This responsibility is slowly being migrated to the Office of Space Commerce under NOAA, and also increasingly handled by commercial space-situational-awareness providers like LeoLabs.
  • I’m not sure who exactly makes the big-picture norm-setting diplomacy decisions in US space policy, like Kamala Harris’s recent speech pledging that the USA will eschew testing anti-satellite weapons. I presume these decisions just come from White House staff in consultation with relevant experts.
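The bond-as-deorbit-prize idea in the FCC bullet above can be sketched as a toy escrow mechanism. Everything here is hypothetical illustration (the class, the method names, the numbers), not actual FCC policy:

```python
# Toy sketch of the debris-bond idea: operators escrow a bond at
# licensing time; responsible disposal refunds it, while a defunct
# satellite's bond becomes a prize for whichever third party
# deorbits it. Purely illustrative, not any real regulator's system.

class DebrisBondRegistry:
    def __init__(self):
        self._bonds = {}  # satellite id -> escrowed bond (dollars)

    def license_launch(self, sat_id, bond):
        """No launch license without an escrowed disposal bond."""
        self._bonds[sat_id] = bond

    def settle_disposal(self, sat_id, deorbited_by_operator):
        """Return (payee, amount): refund the operator if it deorbits
        its own satellite, else pay the bond out as a salvage prize."""
        bond = self._bonds.pop(sat_id)
        payee = "operator" if deorbited_by_operator else "salvager"
        return payee, bond
```

The point of making the forfeited bond payable to a third party is that it gives commercial debris-removal companies a revenue stream without any new government appropriations.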

In a similar spirit of “paying attention to the concrete inside-view” and recognizing that the USA is by far the leader in space exploration, I think it’s further worth paying attention to the fact that SpaceX is very well-positioned to be the dominant force in any near-term Mars or Moon settlement programs. Thus, influencing SpaceX (or a handful of related companies like Blue Origin) could be quite impactful even if this strategy doesn’t feel as EA-ish as doing something warm and multilateral like helping shape a bunch of EU rules about space resources:

  • SpaceX is pretty set on their Mars plan, so it would likely be futile to try to convince them to totally change their core objective, but influencing SpaceX’s thoughts about how a Mars settlement should be established and scaled up (from a small scientific base to an economically self-sufficient city), how it should be governed, etc, could be very important.
  • If SpaceX had some general reforms it wanted to advocate for — such as about space debris mitigation policy — their recommendations might have a lot of sway with the various US agencies with which they have a close relationship.
  • SpaceX might be more interested in listening to occasionally sci-fi-sounding rationalist/EA advice than most governing bodies would. Blue Origin is also interesting in this sense; they are sometimes reputed to have rigid management and might be less overall EA-sympathetic than an organization led by Elon Musk, but they seem very interested in think-tank-style exploration of futurist concepts like O’Neill Cylinders and using space resources for maintaining long-run economic growth, so they might be eager to advocate for wise far-future space governance.
  1. ^

    Universal UN treaties, like those on nuclear nonproliferation and bioweapons, seem best for when you are trying to eliminate an x-risk by getting universal compliance. Some aspects of space governance are like this (like stopping someone from launching a crazy von Neumann probe or ruining space with ASAT attacks), but I see many space governance issues which are more about influencing the trajectory taken by the leader in space colonization (i.e., SpaceX and the USA). Furthermore, many agreements on things like ASAT could probably be best addressed in the beginning with bilateral START-style treaties, hoping to build up to universal worldwide treaties later.

  2. ^

    The Accords have deliberately been pitched as a low-key thing, like “hey, this is just about setting some common-sense norms of cooperation and interoperability, no worries”, but the provisions about in-space resource use, and especially the establishment of “safety zone” perimeters around nations’ launch/landing sites, are in the eyes of many people basically opening the door towards claiming national territory on celestial bodies.

  3. ^

    The process of getting spectrum is currently the riskiest and most onerous part of most satellite companies’ regulatory-approval journeys. Personally, I think that this process could probably be much improved by switching out the current paperwork-and-stakeholder-consultation-based system for some fancy mechanism that might involve auctioning self-assessed licenses or something. But fixing the FCC’s spectrum-licensing process is probably not super-influential on the far-future, so whatever.

This (and your other comments) is incredibly useful, thanks so much. Not going to respond to particular points right now, other than to say many of them stick out as well worth pursuing.

I feel like the discussion of AI is heavily underemphasized in this problem profile (in fact, in this post it is the last thing mentioned).

I used to casually think "sure, space governance seems like it could be a good idea to start on soon; space exploration needs to happen eventually, I guess," but once I started to consider the likelihood and impact of AI development within the next 200 or even ~60 years, I very heavily adjusted my thinking towards skepticism/pessimism. 

That question of AI development seems like a massive gatekeeper/determinant to this overall question: I'm unclear how any present efforts towards long-term space governance and exploration matter in the case where AI 1) is extraordinarily superintelligent and agentic, and 2) operates effectively as a "singleton" -- which itself seems like a likely outcome from (1). 

Some scenarios that come to my mind regarding AI development (with varying degrees of plausibility):

  • We create a misaligned superintelligence which leads to extinction or other forms of significant curtailment of humanity's cosmic potential, which renders all of our efforts towards space exploration unimportant for the long term.
  • We create an aligned, agentic superintelligent singleton which basically renders all of our efforts towards space exploration unimportant for the long term (because it will very very quickly surpass all of our previous reasoning and work).
  • We somehow end up with multiple highly intelligent agents (e.g., national AIs) that are somewhat "aligned" with certain values, but their intelligence does not enable them to identify/commit to positive-sum cooperation strategies (e.g., they cannot converge towards a singleton) and this curtails space expansion capabilities, but having developed norms in advance helps to (slightly?) mitigate this curtailment.
  • We determine that the alignment problem is unsolvable or too inherently risky to try to develop a superintelligence--at least for a few centuries--but we are also somehow able to prevent individual/unilateral actors from trying to create superintelligent agents, and so it may be worthwhile to get a "headstart" on space exploration, even if it only equates to improving our long term future by some (0.00...1%)
  • Creating superintelligence/AGI proves far more elusive than we expect currently (and/or the alignment problem is difficult, see above) and thus takes many decades or even centuries longer, while at the same time space becomes a domain that could trigger hostilities or tensions that undermine coordination on existential risks (including AI).
  • ??

 

Ultimately, I'd really like to see:

  1. More up-front emphasis on the importance of AI alignment as a potential determinant.
  2. Examination of the scenarios in which work on space governance would have been useful, given the first point, including how likely those scenarios appear to be.

This sentiment seems like a fully general objection to every intervention not directly related to AI safety (or TAI). 

As presented currently, many TAI or AI safety related scenarios blow out all other considerations—it won't matter how far you get to Alpha Centauri with prosaic spaceships, TAI will track you down. 

It seems like you would need to get "altitude" to give this consideration proper thought (pardon the pun). My guess is that the OP has done that.

This is partially an accurate objection (i.e., I do think that x-risks and other longtermist concerns tend to significantly outweigh near-term problems such as in health and development), but there is an important distinction to make with my objections to certain aspects of space governance:

Contingent on AI timelines, there is a decent chance that none of our efforts will even have a significantly valuable near-term effect (i.e., we won't achieve our goals by the time we get AGI). Consider the following from the post/article:

If the cost of travelling to other planetary bodies continues the trend in the chart above and falls by an order of magnitude or so, then we might begin to build increasingly permanent and self-sustaining settlements. Truly self-sustaining settlements are a long way off, but both NASA and China have proposed plans for a Moon base, and China recently announced plans to construct a base on Mars

Suppose that it would take ~80 years to develop meaningful self-sustaining settlements on Mars without AGI or similar forms of superintelligence. But suppose that we get AGI/superintelligence in ~60 years: we might get misaligned AGI and all the progress (and humanity) is erased and it fails to achieve its goals; we might create aligned AGI which might obsolesce all ~60 years of progress within 5 or so years (I would imagine even less time); or we might get something unexpected or in between, in which case maybe it does matter?

In contrast, at least with health and development causes you can argue "I let this person live another ~50 years... and then the AGI came along and did X."

Furthermore, this all is based on developing self-sustaining settlements being a valuable endeavor, which I think is often justified with ideas that we'll use those settlements for longer-term plans and experimentation for space exploration, which requires an even longer timeline.

Thanks for this, I think I agree with the broad point you're making.

That is, I agree that basically all the worlds in which space ends up really mattering this century are worlds in which we get transformative AI (because scenarios in which we start to settle widely and quickly are scenarios in which we get TAI). So, for instance, I agree that there doesn't seem to be much value in accelerating progress on space technology. And I also agree that getting alignment right is basically a prerequisite to any of the longer-term 'flowthrough' considerations.

If I'm reading you right I don't think your points apply to near-term considerations, such as from arms control in space.

It seems like a crux is something like: how much precedent-setting or preliminary research now on ideal governance setups doesn't get washed out once TAI arrives, conditional on solving alignment? And my answer is something like: sure, probably not a ton. But if you have a reason to be confident that none of it ends up being useful, it feels like that must be a general reason for scepticism that any kind of efforts at improving governance, or even values change, are rendered moot by the arrival of TAI. And I'm not fully sceptical about those efforts.

Suppose before TAI arrived we came to a strong conclusion: e.g. we're confident we don't want to settle using such-and-such a method, or we're confident we shouldn't immediately embark on a mission to settle space once TAI arrives. What's the chance that work ends up making a counterfactual difference, once TAI arrives? Not quite zero, it seems to me.

So I am indeed on balance significantly less excited about working on long-term space governance things than on alignment and AI governance, for the reasons you give. But not so much that they don't seem worth mentioning.

Ultimately, I'd really like to see [...] More up-front emphasis on the importance of AI alignment as a potential determinant.

This seems like a reasonable point, and one I was/am cognisant of — maybe I'll make an addition if I get time.

(Happy to try saying more about any of above if useful)

If I'm reading you right I don't think your points apply to near-term considerations, such as from arms control in space.

That is mostly correct: I wasn't trying to respond to near-term space governance concerns, such as how to prevent space development or space-based arms races, which I think could indeed play into long-term/x-risk considerations (e.g., undermining cooperation in AI or biosecurity), and may also have near-term consequences (e.g., destruction of space satellites which undermines living standards and other issues). 

 

But if you have a reason to be confident that none of it ends up being useful, it feels like that must be a general reason for scepticism that any kind of efforts at improving governance, or even values change, are rendered moot by the arrival of TAI. And I'm not fully sceptical about those efforts.

To summarize the point I made in response to Charles (which I think is similar, but correct me if I'm misunderstanding): I think that if an action is trying to improve things now (e.g., health and development, animal welfare, improving current institutional decision-making or social values), it can be justified under neartermist values (even if it might get swamped by longtermist calculations). But it seems that if one is trying to figure out "how do we improve governance of space settlements and interstellar travel that could begin 80–200 years from now," they run a serious risk of their efforts having effectively no impact on affairs 80–200 years from now, because AGI might develop before their efforts ever matter towards the goal, and humanity either goes extinct or the research is quickly obsolesced. 

Ultimately, any model of the future needs to take into account the potential for transformative AI, and many of the pushes such as for Mars colonization just do not seem to do that, presuming that human-driven (vs. AI-driven) research and efforts will still matter 200 years from now. I'm not super familiar with these discussions, but to me this point stands out so starkly as 1) relatively easy to explain (although it may require introductions to superintelligence for some people); 2) substantially impactful on ultimate conclusions/recommendations, and 3) frequently neglected in the discussions/models I've heard so far. Personally, I would put points like this among the top 3–5 takeaway bullet points or in a summary blurb—unless there are image/optics reasons to avoid doing this (e.g., causing a few readers to perhaps-unjustifiably roll their eyes and disregard the rest of the problem profile).

 

Suppose before TAI arrived we came to a strong conclusion: e.g. we're confident we don't want to settle using such-and-such a method, or we're confident we shouldn't immediately embark on a mission to settle space once TAI arrives. What's the chance that work ends up making a counterfactual difference, once TAI arrives? Not quite zero, it seems to me.

This is an interesting point worth exploring further, but I think that it's helpful to distinguish—perhaps crudely?—between two types of problems:

  1. Technical/scientific problems and "moral problems" which are really just "the difficulty of understanding how our actions will relate to our moral goals, including what sub-goals we should have in order to achieve our ultimate moral goals (e.g., maximizing utility, maximizing virtue/flourishing)."
  2. Social moral alignment—i.e., getting society to want to make more-moral decisions instead of being self-interested at the expense of others.

It seems to me that an aligned superintelligence would very likely be able to obsolesce every effort we make towards the first problem fairly quickly: if we can design a human-aligned superintelligent AI, we should be able to have it automate or at least inform us on everything from "how do we solve this engineering problem" to "will colonizing this solar system—or even space exploration in general—be good per [utilitarianism/etc.]?" 

However, making sure that humans care about other extra-terrestrial civilizations/intelligence—and that the developers of AI care about other humans (and possibly animals)—might require some preparation such as via moral circle expansion. Additionally, I suppose it might be possible that a TAI's performance on the first problem is not as good as we expect (perhaps due to the second problem), and of course there are other scenarios I described where we can't rely as much on a (singleton) superintelligence, but my admittedly-inexperienced impression is that such scenarios seem unlikely.

Hi Fin!

This is great. Thank you for writing it up and posting it! I gave it a strong upvote.

(TLDR for what follows: I think this is very neglected, but I’m highly uncertain about tractability of formal treaty-based regulation)

As you know, I did some space policy-related work at a think tank about a year ago, and one of the things that surprised us most is how neglected the issue is — there are only a handful of organizations seriously working on it, and very few of them are the kinds of well-connected and -respected think tanks that actually influence policy (CSIS is one). This is especially surprising because — as Jackson Wagner writes below — so much of space governance runs through U.S. policy. Anyway, I think that’s another point in favor of working on this!

As I think I mentioned when we talked about space stuff a little while ago, I’m a bit skeptical about the tractability of “traditional” (i.e. formal, treaty-based) arms control. You note some of the challenges in the 80K version of the write up. Getting the major powers to agree to anything right now, let alone something as sensitive as space tech, seems unlikely. Moreover, the difficulties of verification and ease of cheating are high, as they are with all dual-use technology. Someone can come up with a nice “debris clean up” system that just happens to also be a co-orbital ASAT, for example.

But I think there are other mechanisms for creating “rules of the orbit” — that’s the phrase Simonetta di Pippo, the director of UNOOSA, used at a workshop I helped organize last year. (https://global.upenn.edu/sites/default/files/perry-world-house/Dipippo_SpaceWorkshop.pdf)

Cyber is an example where a lot of actors have apparently decided that treaty-based arms control isn’t going to cut it (in part for political reasons, in part because the tech moves so fast), but there are still serious attempts at creating norms and regulation (https://carnegieendowment.org/2020/02/26/cyberspace-and-geopolitics-assessing-global-cybersecurity-norm-processes-at-crossroads-pub-81110). That includes standard setting and industry-driven processes, which feel especially appropriate in space, where private actors play such an important role. We have a report on autonomous weapons and AI-enabled warfare coming out soon at Founders Pledge, and I think that’s another space where people put too much emphasis on treaty-based regulation and neglect norms and confidence building measures for issues where great powers can agree on risk reduction.

Again, I think this is a great write up, and love that you are drawing attention to these issues. Thank you!

Following up my earlier comment with a hodgepodge of miscellaneous speculations and (appropriately!) leaving the Long Reflection / Von-Neumann stuff for later-to-never. Here are some thoughts, arranged from serious to wacky:

  • Here is a link to a powerpoint presentation summarizing some personal research that I did into how bad it would be if GPS was taken 100% offline for an extended period. I look into what could cause a long-term GPS failure (cyberattack or solar storm maybe, deliberate ASAT attacks most likely — note that GPS is far from LEO, so Kessler syndrome is not a concern), how different industries would be affected, and how bad the overall impact would be. I find that losing GPS for a year would create an economic hit similar in scale to the COVID-19 pandemic, although of course the details of how life would be affected would be totally different — most importantly, losing GPS likely wouldn’t be an isolated crisis, but would occur as part of a larger catastrophe like a great-power war or a record-breaking solar storm.
  • I have a lot of thoughts about how EA has somewhat of a natural overlap with many people who become interested in space, and how we could do a better job of trying to recruit / build connections there. In lieu of going into lots of detail, I’ll quote from a facebook comment I made recently:

For a lot of ordinary folks, getting excited about space exploration is their way of visualizing and connecting with EA-style ideas about "humanity's long term future" and contributing to the overall advancement of civilization. They might be wrong on the object-level (the future will probably depend much more on technologies like AI than technologies like reusable rockets), but their heart is often in the right place, so I think it's bad for EA to be too dismissive/superior about the uselessness of space exploration. I believe that many people who are inspired by space exploration are often natural EAs at heart; they could just use a little education about what modern experts think the future will actually look like. It's similar to how lots of people think climate change is an imminent extinction risk, and it obviously isn't, but in a certain sense their “heart is in the right place” for caring about x-risk and taking a universal perspective about our obligations to humanity and the Earth, so we should try to educate/recruit them instead of just mocking their climate anxieties.

  • EA talks about space as a potential cause area. But I also think that NASA’s recent success story of transitioning from bloated cost-plus contracts to a system of competitive public-private partnerships has some lessons that the EA movement could maybe use. As the EA movement scales up (and becomes more and more funding-heavy / “talent-constrained”), and as we start digging into “Phase 2” work to make progress on a diverse set of technically complicated issues, it will become less possible to exercise direct oversight of projects, less possible to assume good-faith and EA-value-alignment on the part of collaborators, and so forth. Organizations like OpenPhil will increasingly want to outsource more work to non-EA contractors. This is mostly a good thing which reflects the reality of being a successful movement spending resources in order to wield influence and get things done. But the high-trust good-faith environment of early EA will eventually need to give way to an environment where we rely more on making sure that we are incentivizing external groups to give us what we want (using good contract design, competition, prizes, and other mechanisms). NASA’s recent history could provide some helpful lessons in how to do that.
  • Space resources: I am an engineer, not an economist, but it seems like Georgism could be a helpful framework for thinking about this? The whole concept of Georgism is that economic rents derived from natural resources should belong equally to all people, and thus should be taxed at 100%, leaving only the genuine value added by human labor as private profit. This seems like a useful economic system (albeit far from a total solution) if we are worried about “grabby” pioneers racing to “burn the cosmic commons”. Just like the spectrum-auction processes I mentioned, individuals could bid for licenses to resources they wish to use (like an asteroid containing valuable minerals or a solar-power orbital slot near the sun), and then pay an ongoing tax based on the value of their winning bid. Presumably we could turn the tax rate up and down until we achieved a target utilization rate (say, 0.001% of the solar system’s resources each year); thus we could allocate resources efficiently while still greatly limiting the rate of expansion.
  • One potential “EA megaproject” is the idea of creating “civilizational refuges” — giant sealed bunkers deep underground that could help maintain civilization in the event of nuclear war, pandemics, etc. I think this project has a lot of overlap with existing knowledge about how to build life-support systems for space stations, and with near-future projects to create large underground moonbases and Mars cities. It would certainly be worth trying to hire some human-spaceflight engineers to consult on a future EA bunker project. I even have a crazy vision that you might be able to turn a profit on a properly-designed bunker-digging business — attracting ambitious employees with the long-term SpaceX-style hype that you’re working on technology to eventually build underground Martian cities, and earning near-term money by selling high-quality bunkers to governments and eccentric billionaires.
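The “turn the tax rate up and down until we achieve a target utilization rate” idea in the space-resources bullet above can be sketched numerically. This is a made-up toy model — the demand curve, the actors’ outside option, and all the numbers are invented purely to show the feedback loop:

```python
# Toy model of the self-assessed-license ("Georgist") idea from the
# space-resources bullet above. The demand curve, outside option, and
# all numbers are invented for illustration only.

def demand_for_licenses(tax_rate, num_actors=1000):
    """Fraction of resources claimed at a given annual tax rate.
    Each actor claims a license only if its private value, net of the
    ongoing tax on that self-assessed value, beats an outside option."""
    claimed = 0
    for i in range(num_actors):
        private_value = (i + 1) / num_actors  # heterogeneous values
        net = private_value * (1 - tax_rate)  # value after annual tax
        if net > 0.5:                         # arbitrary outside option
            claimed += 1
    return claimed / num_actors

def tune_tax_rate(target_utilization, iters=40):
    """Bisect on the tax rate until utilization hits the target:
    the 'turn the tax rate up and down' loop described in the text."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if demand_for_licenses(mid) > target_utilization:
            lo = mid  # too much claiming: the tax needs to rise
        else:
            hi = mid  # too little claiming: the tax needs to fall
    return (lo + hi) / 2
```

With these toy numbers, a 20% utilization target solves to a tax rate of roughly 37.5% — the point is only that the rate-tuning loop is mechanically simple, not that any of the numbers mean anything.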

I forgot from where, but I've heard criticisms of Elon Musk that he is advancing our expansion into space while not solving many of Earth's current problems. It seems logical that if we still have many problems on Earth, such as inequity, that those problems will get perpetuated as we expand into space. Also, maybe it's possible that other smaller scale problems that we don't have effective solutions for would become enormously multiplied as we expand into space (though I am not sure what an example of this would be). On the other hand, maybe the development of space technology will be the means through which we stumble onto solutions to many of the problems that we currently have on Earth.

Getting along with any possible extraterrestrial civilizations would be a concern. 

Use of biological weapons might be more attractive because the user can unleash them on a planet and not worry about it spilling over to themselves and their group.

A state, group, or individual might stumble upon a civilization and wipe them out. They would prevent anyone else from even knowing they existed.

A stray thought; I'll stumble to the Google Doc with it in a moment - regarding minimal standards of operation for space colonies' constitutions: "If a government does not allow its people to leave, that is what makes it a prison."
