[Crossposted from my blog]

The main inspiration for my ideas here has been Stuart Armstrong and Anders Sandberg’s paper “Eternity in six hours: Intergalactic spreading of intelligent life and sharpening the Fermi paradox”. Its main point is to argue that intergalactic colonization by an intelligent civilization is highly feasible on a cosmic timescale, and to discuss the implications of this for the Fermi paradox. In doing so, it also describes a particular method by which intergalactic colonization can occur: it argues that a single starting solar system has enough material and energy to send a probe directly to every reachable galaxy, and that each probe can in turn self-replicate and send probes to every star in that galaxy. While thinking through this scenario, I decided that there is a more efficient and more plausible method by which intergalactic colonization can occur. This does not substantially affect Armstrong and Sandberg’s main points about the Fermi paradox. While re-examining the paper I found that it responds to Robin Hanson’s paper “Burning the Cosmic Commons: Evolutionary Strategies for Interstellar Colonization”, which in many respects is closer to my picture of intergalactic colonization and by all rights should have been an inspiration for me.

Armstrong and Sandberg were careful to justify their assumptions about the technological capabilities of an intelligent species, and to make their results robust to conservative technological assumptions. My approach is more optimistic — roughly speaking, anything that appears to be physically possible I assume to be technologically possible given enough research time. A good follow-up to my proposal would be to figure out the exact technological requirements and their feasibility.

In Armstrong and Sandberg’s strategy, a single probe is created at the starting solar system and sent directly to a target galaxy. It spends most of its journey — a vast amount of time — completely inert. This is wasteful. Instead, the probe should spend that time gathering resources from the surrounding space while remaining in motion. It should use those resources to self-replicate while in transit rather than after it reaches a target star and galaxy. That way, it will be able to settle an entire galaxy at once rather than make a time-consuming second hop to colonize the galaxy. There is then no reason for a probe’s target to be exactly one galaxy; instead, a single probe targets a cone-shaped region of space with the starting solar system at its apex.

Even if this method is more efficient, does that matter? If both strategies are more than capable of colonizing the future lightcone, isn’t it just a matter of which method the civilization chooses, rather than which one is “better”? No, it is not, because the second stage for the inert probe adds a serious delay. Imagine people implemented Armstrong and Sandberg’s proposal first and launched a probe at 99% lightspeed to every reachable galaxy. Then suppose it takes ten thousand years until someone successfully launches a self-replicating-in-transit probe at the same speed toward a particular galaxy. For comparison, Armstrong and Sandberg’s most pessimistic estimate is that it will take eleven thousand years to launch every probe, and a more representative estimate is in the paper’s title, six hours[1]. The inert probe then arrives at the galaxy twenty thousand years earlier, and has time to create and send secondary probes out to a radius of at most twenty thousand light years. Meanwhile, the self-replicating-in-transit probe arrives across the entire galaxy at once. If the galaxy is as large as the Milky Way, one hundred thousand light years across, then the active probe gets to most of the galaxy first. The fifty thousand years it takes the inert probe to colonize the rest of its galaxy is tiny from a cosmological perspective, which is why it was ignored by Armstrong and Sandberg, but huge from a human, historical perspective. Since we are comparing two approaches that can both be initiated by the same technological civilization as part of its historical development, it is the historical timescale that is relevant for deciding which approach wins out.

An active probe in transit won’t just gather resources and self-replicate; it can host an entire society. It can host intelligent entities, whether AIs or real or simulated humans, who are thinking and planning for their future. In particular, they can make scientific and technological advances that make the probe work better, either decreasing its mass or energy requirements or increasing its speed. This gives another reason to expect active-in-transit probes to be more successful than inert probes: if in-situ research lets an early-launched probe accelerate from 99% lightspeed to 99.9% lightspeed, that speed difference will really add up over millions or billions of light years, beating inert probes launched earlier. It will also beat probes launched later at 99.9% lightspeed from the starting solar system, thanks to its head start.
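To get a sense of scale, here is a rough illustration (the billion-light-year figure is just a representative intergalactic distance, not a number from the paper): the travel time to a target at distance $D$ is $D/v$, so the gap between a 99% lightspeed probe and a 99.9% lightspeed probe is

$$\Delta t = \frac{D}{0.99c} - \frac{D}{0.999c} \approx 0.0091\,\frac{D}{c},$$

which for $D = 10^9$ light years comes to roughly nine million years. A probe that upgrades itself to 99.9% lightspeed in transit arrives millions of years ahead of one stuck at 99% lightspeed, dwarfing any head start measured in mere thousands of years.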

To make reasonable guesses about the behavior of these intelligent entities and their society, we should think about their incentives. Of all these entities, all in motion, whichever one moves fastest will have first access to galaxies, stars, interstellar gas, and other natural resources. These resources hold energy in a low-entropy form, and both the energy and the low entropy are useful for performing computations and for self-replicating. However, these resources are (relatively) stationary, and usable energy from the perspective of our fast-moving society must also carry a lot of momentum. So the matter in this space must be accelerated to the speed of our society to be usable, with the remaining matter accelerated to an equal and opposite momentum. This opposing momentum will make it more difficult for any later entity to make use of these resources, especially if it is also trying to move very fast. Moreover, due to the transformation law for energy-momentum, the faster you go the more energy in stationary form is needed to obtain the same amount of usable energy from your moving perspective. So the faster the first movers are going, the more naturally-occurring matter they will need to propel in the opposite direction, making it harder to use. There is a huge first-mover advantage.
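To spell out the energy-momentum point, here is the standard special-relativity bookkeeping (an illustration, not a calculation from the paper). In the frame of a society moving at speed $v$, a parcel of matter at rest in the galaxy frame, with rest energy $E_0 = mc^2$ and zero momentum, has

$$E' = \gamma m c^2, \qquad p' = -\gamma m v, \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}.$$

Matching the momentum of that parcel to that of the society costs, in the galaxy frame, a kinetic energy of $(\gamma - 1)mc^2$, which has to be paid for by flinging other matter backwards. With $\gamma \approx 7.1$ at $0.99c$ and $\gamma \approx 22.4$ at $0.999c$, the price per unit of collected matter rises steeply with speed, which is exactly what drives the first-mover advantage.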

This is absolutely terrible.

Really. Very. Bad.

It’s bad because it means a large proportion of the resources in the future lightcone, perhaps most of them, will be burnt for nothing. Specifically, burnt as rocket fuel to make a huge number of rockets which, due to time dilation, exist only long enough for their inhabitants to figure out how to make the rocket go even faster. I’m not sure what will be left behind after such a process: whether there will be an absolute vacuum, a maximum-entropy heat bath, or still some sort of usable energy. Either way, it will be a terrible loss. This is what I believe will happen if intergalactic colonization is guided by the incentives of individual colonizers.

[1] Both estimates from Table 5 of the paper.

Comments



Am I right in thinking the conclusion is something like this:

If we get a singleton on Earth, which then has a monopoly on space colonization forever, they do the Armstrong-Sandberg method and colonize the whole universe extremely efficiently. If instead we have some sort of competitive multipolar scenario, where Moloch reigns, most of the cosmic commons get burnt up in competition between probes on the hardscrapple frontier?

If so, that seems like a reasonably big deal. It's an argument that we should try to avoid scenarios in which powerful space tech is developed prior to a singleton forming. Perhaps this means we should hope for a fast takeoff rather than a slow takeoff, for example.

 

I guess. I don't like the concept of a singleton. I prefer to think that describing a specific failure mode gives a more precise model of exactly what kind of coordination is needed to prevent it. Also, we definitely shouldn't assume a coordinated colonization will follow the Armstrong-Sandberg method. I'm also motivated by a "lamppost approach" to prediction: this model of the future has a lot of details that I think could be worked out to a great deal of mathematical precision, which makes it a good case study. Finally, if the necessary kind of coordination is rare, then even if it's not worth it from an EV view to plan for our civilization to end up like this, we should still expect alien civilizations to look like this.

Agreed on all counts except that I like the concept of a singleton. I'd be interested to hear why you don't, if you wish to discuss it.

I'm glad you agree! For the sake of controversy, I'll add that I'm not entirely sure that scenario is out of consideration from an EV point of view, firstly because the exhaust will have a lot of energy and I'm not sure what will happen to it, and secondly because I'm open to a "diminishing returns" model of population ethics in which the forgone computational capacity does not have an overwhelmingly higher value.

On singletons, I think the distinction between "single agent" and "multiple agents" is more of a difference in how we imagine a system than an actual difference. Human civilization is divided into minds with a high level of communication and coordination within each mind and a significantly lower level between minds. This pattern is an accident of evolutionary history and if technological progress continues I doubt it will remain in the distant future, but I also don't think there will be perfect communication and coordination between the parts of a future civilization either. Even within a single human mind the communication and coordination is imperfect.

Mmm, good point. Perhaps the way to salvage the concept of a singleton is to define it as the opposite of Moloch, i.e. a future is ruled by a singleton to the extent that it doesn't have Moloch-like forces causing drift towards outcomes that nobody wants, money being left on the table, etc. Or maybe we could just say a singleton is where outcomes are on or close to the Pareto frontier. Idk.

If it is to gather resources en route, it must accelerate those resources to its own speed. Or alternatively, it must slow down to a halt, pick up resources and then continue. This requires a huge expenditure of energy, which will slow down the probe.

Bussard ramjets might be viable. But I'm skeptical that they could be faster than the propulsion ideas in the Sandberg/Armstrong paper. Anyway, you seem to be talking about spacecraft that will consume planets, not Bussard ramjets.

Going from 0.99c to 0.999c requires an extraordinary amount of additional energy for very little increase in distance over time. At that point, the sideways deviations required to reach waypoints (like if you want to swing to nearby stars instead of staying in a straight line) would be more important. It would be faster to go 0.99c in a straight line than 0.999c through a series of waypoints.

If we are talking about going from 0.1c to 0.2c then it makes more sense.
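As a rough check on these numbers (a textbook calculation, using only the speeds quoted above): kinetic energy per unit rest mass is $(\gamma - 1)c^2$, so

$$\frac{\gamma(0.999c) - 1}{\gamma(0.99c) - 1} \approx \frac{21.4}{6.1} \approx 3.5, \qquad \frac{\gamma(0.2c) - 1}{\gamma(0.1c) - 1} \approx \frac{0.0206}{0.0050} \approx 4.1.$$

Going from 0.99c to 0.999c costs about 3.5x the energy per unit mass for under 1% more distance per unit of coordinate time, while going from 0.1c to 0.2c costs a similar factor (about 4x) but doubles the speed.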

  1. It's true that making use of resources while matching the probe's speed requires a huge expenditure of energy, by the transformation law of energy-momentum if for no other reason. If the remaining energy is insufficient then the probe won't be able to go any faster. Even if there's no more efficient way to extract resources than full deceleration and re-acceleration, I expect this could be done infrequently enough that the probe still maintains an average speed of >0.9c (a rough budget for this is sketched after this list). In that case the main competitive pressure among probes would be minimizing the number of stop-overs.
  2. The highest speed considered in the Armstrong/Sandberg paper is 0.99c, which is high enough for my qualitative picture to be relevant. Re-skimming the paper, I don't see an explicitly stated reason why the limit is there, although I note that any higher speed won't affect their conclusion about the Fermi paradox and potential past colonizers visible from Earth. The most significant technological reasons for this limit I see them address are the energy costs of deceleration and damage from collisions with dust particles, and neither seems to entirely exclude faster speeds.
  3. Yes, at such high speeds optimizing lateral motion becomes very important, and the locations of concentrated sources of energy can affect the geometry of the expansion frontier. For a typical target I'm not sure whether the optimal route would involve swerving to a star or galaxy, or whether the interstellar dust and dark matter along the direct path would be sufficient. For any particular route I expect a probe to compete with other probes taking a similar route, so there will still be competitive pressure to optimize speed beyond 0.99c if that is technologically feasible.
  4. A lot of what I'm saying remains the same if the maximum technologically achievable speed is subrelativistic. In other ways such a picture would be different; in particular, the coordination problems would be much easier if there is time for substantial two-way communication between all the probes and all the colonized areas.
  5. Again, I see a lot of potential follow-up work in precisely delineating how different assumptions on what is technologically possible affect my picture.
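As a rough sketch of the stop-over budget mentioned in point 1 (the hop length is an illustrative assumption, not a figure from the paper): if the probe cruises at 0.99c and pays a full stop of $t_{\text{stop}}$ years (measured in the frame of the galaxies) every $d$ light years, its average speed stays above 0.9c as long as

$$\frac{d}{d/0.99c + t_{\text{stop}}} > 0.9c \iff t_{\text{stop}} < \left(\frac{1}{0.9} - \frac{1}{0.99}\right)\frac{d}{c} \approx 0.10\,\frac{d}{c},$$

i.e. each stop can last up to roughly a tenth of the hop length in years per light year; hops of 10,000 light years allow stop-overs of up to about 1,000 years. This ignores the time spent decelerating and re-accelerating, which would only tighten the budget.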