[Crossposted from my blog]

The main inspiration for my ideas here is Stuart Armstrong and Anders Sandberg’s paper “Eternity in six hours: Intergalactic spreading of intelligent life and sharpening the Fermi paradox”. Its main point is that intergalactic colonization by an intelligent civilization is highly feasible on a cosmic timescale, and it discusses the implications of this for the Fermi paradox. Along the way, it describes a particular method by which intergalactic colonization can occur: it argues that a single starting solar system has enough material and energy to send a probe directly to every reachable galaxy, and that each probe can in turn self-replicate and send probes to every star in its galaxy. While thinking through this scenario, I decided that there is a more efficient and more plausible method by which intergalactic colonization can occur. This does not substantially affect Armstrong and Sandberg’s main points about the Fermi paradox. While re-examining the paper I found that it responds to Robin Hanson’s paper “Burning the Cosmic Commons: Evolutionary Strategies for Interstellar Colonization”, which is in many respects closer to my picture of intergalactic colonization, and by all rights should have been an inspiration for me.

Armstrong and Sandberg were careful in justifying their assumptions about the technological capabilities of an intelligent species, and tried to make their results robust to conservative technological assumptions. My approach is more optimistic: roughly speaking, anything that appears to be physically possible I assume to be technologically possible given enough research time. A good follow-up to my proposal would be to work out the exact technological requirements and their feasibility.

In Armstrong and Sandberg’s strategy, a single probe is created at the starting solar system and sent directly to a target galaxy. It spends most of its journey, a vast stretch of time, completely inert. This is wasteful. Instead, the probe should spend that time gathering resources from the surrounding space while remaining in motion, and use those resources to self-replicate while in transit rather than after it reaches a target star and galaxy. That way, it can settle an entire galaxy at once rather than make a time-consuming second hop to colonize it. There is then no reason for a probe’s target to be exactly one galaxy; instead, a single probe targets a cone-shaped region of space with the starting solar system at its apex.

Even if this method is more efficient, does that matter? If both strategies are more than capable of colonizing the future lightcone, isn’t it just a matter of which method the civilization chooses, rather than which one is “better”? No, it is not, because the second stage for the inert probe adds a serious delay. Imagine people implement Armstrong and Sandberg’s proposal first, launching a probe at 99% of lightspeed to every reachable galaxy. Then it takes ten thousand years until someone successfully launches a self-replicating-in-transit probe at the same speed toward a particular galaxy. For comparison, Armstrong and Sandberg’s most pessimistic estimate is that it would take eleven thousand years to launch every probe, and a more representative estimate is in the paper’s title: six hours[1]. The inert probe arrives at the galaxy twenty thousand years earlier, and has time to create and send secondary probes out to at best a twenty-thousand-light-year radius. Meanwhile, the self-replicating-in-transit probe arrives at the entire galaxy at once. If the galaxy is as large as the Milky Way, one hundred thousand light years across, then the active probe gets to most of the galaxy first. The fifty thousand years it takes the inert probe to colonize the rest of the galaxy is tiny on a cosmological timescale, which is why Armstrong and Sandberg ignored it, but huge on a human, historical timescale. Since we are comparing two approaches that can both be initiated by the same technological civilization as part of its historical development, it is the historical timescale that determines which approach wins out.
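The galaxy-share arithmetic above can be sketched numerically. All numbers are taken from the scenario in this post (the twenty-thousand-year arrival lead, a Milky Way-sized galaxy), and the secondary-probe speed is an assumed upper bound:

```python
# Illustrative arithmetic for the scenario above (numbers from the post,
# not derived from first principles): an inert probe arrives 20,000 years
# ahead of a self-replicating-in-transit probe and uses that lead to seed
# secondary probes out to a ~20,000 light-year radius.

head_start_years = 20_000            # inert probe's arrival lead
secondary_speed_ly_per_yr = 1.0      # secondary probes at ~lightspeed (upper bound)
galaxy_radius_ly = 50_000            # Milky Way-sized galaxy, 100,000 ly across

claimed_radius = head_start_years * secondary_speed_ly_per_yr  # 20,000 ly

# Fraction of the galactic disk the inert probe's secondary wave can claim
# before the in-transit probe arrives everywhere at once (area ratio,
# treating the galaxy as a thin disk with the wave centered in it):
fraction_claimed = (claimed_radius / galaxy_radius_ly) ** 2
print(f"{fraction_claimed:.0%} of the disk")  # 16%: most still goes to the active probe
```

Under these assumptions the inert probe’s head start claims only about a sixth of the disk, which is the sense in which the active probe “gets to most of the galaxy first.”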

An active probe in transit won’t just gather resources and self-replicate; it can host an entire society. It can host intelligent entities, whether AIs or real or simulated humans, who think and plan for their future. In particular, they can make scientific and technological advances that improve the probe itself, decreasing its mass or energy requirements or increasing its speed. This gives another reason to expect active-in-transit probes to be more successful than inert probes: if in situ research lets an early-launched probe accelerate from 99% to 99.9% of lightspeed, that speed difference really adds up over millions or billions of light years, beating inert probes launched earlier. It will also beat probes launched later at 99.9% of lightspeed from the starting solar system, thanks to its head start.
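How quickly does a small speed edge pay off? A rough sketch, with all times and distances in the rest frame of the departure point and a hypothetical ten-thousand-year launch gap:

```python
# Rough overtaking arithmetic (departure-point rest frame): a probe launched
# `head_start` years later at 0.999c catching one launched earlier at 0.99c.
# The 10,000-year gap is a hypothetical number for illustration.

v_slow, v_fast = 0.99, 0.999   # speeds in units of c
head_start = 10_000            # years between the two launches

# Overtake when v_slow * t = v_fast * (t - head_start):
t_catch = v_fast * head_start / (v_fast - v_slow)
d_catch = v_slow * t_catch     # light-years from the starting point

print(f"overtaken after {d_catch:,.0f} light-years")
```

With these numbers the faster probe overtakes after roughly 1.1 million light-years, small compared to the intergalactic distances in play, which is why the speed difference “really adds up.”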

To make reasonable guesses about the behavior of these intelligent entities and their society, we should think about their incentives. Of all these entities, all in motion, whichever moves fastest has first access to galaxies, stars, interstellar gas, and other natural resources. These resources hold energy in a low-entropy form, and both the energy and the low entropy are useful for performing computations and for self-replicating. However, these resources are (relatively) stationary, and usable energy, from the perspective of our fast-moving society, must also carry a lot of momentum. So matter in this space must be accelerated to the society’s speed to be usable, with the remaining matter accelerated to an equal and opposite momentum. That opposing momentum makes it harder for any later entity to make use of these resources, especially if it is also trying to move very fast. Moreover, by the transformation law for energy-momentum, the faster you go the more energy in stationary form is needed to obtain the same amount of usable energy in your moving frame. So the faster the first movers are going, the more naturally occurring matter they must propel in the opposite direction, rendering it difficult to use. There is a huge first-mover advantage.
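One way to see the scaling behind this claim: bringing stationary matter up to your own speed v costs (γ − 1)·mc² of energy per unit of rest mass delivered, and the Lorentz factor γ grows steeply near lightspeed. A minimal sketch:

```python
import math

# Sketch of the first-mover cost claim: to make stationary matter usable at
# your own speed v, you must accelerate it, and the kinetic-energy cost per
# unit of rest mass delivered is (gamma - 1), which grows steeply in v.

def gamma(v):
    """Lorentz factor for speed v in units of c."""
    return 1.0 / math.sqrt(1.0 - v * v)

for v in (0.5, 0.9, 0.99, 0.999):
    cost = gamma(v) - 1.0  # energy, in units of m*c^2, per unit rest mass
    print(f"v = {v}c: cost per unit rest mass = {cost:.2f} m*c^2")
```

The cost rises from about 0.15 mc² at 0.5c to over 21 mc² at 0.999c, so each increment of speed makes the first movers consume, and scatter, disproportionately more of the stationary resources they pass.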

This is absolutely terrible.

Really. Very. Bad.

It’s bad because it means a large proportion of the resources in the future lightcone, perhaps most of them, will be burnt for nothing: burnt as rocket fuel for a huge number of rockets which, due to time dilation, exist only long enough for their inhabitants to figure out how to make the rockets go even faster. I’m not sure what such a process leaves behind, whether an absolute vacuum, a maximum-entropy heat bath, or some residue of still-usable energy. Either way, it would be a terrible loss. This is what I believe will happen if intergalactic colonization is guided by the incentives of individual colonizers.

[1] Both estimates from Table 5 of the paper.

Comments



Am I right in thinking the conclusion is something like this:

If we get a singleton on Earth, which then has a monopoly on space colonization forever, they do the Armstrong-Sandberg method and colonize the whole universe extremely efficiently. If instead we have some sort of competitive multipolar scenario, where Moloch reigns, most of the cosmic commons get burnt up in competition between probes on the hardscrapple frontier?

If so, that seems like a reasonably big deal. It's an argument that we should try to avoid scenarios in which powerful space tech is developed prior to a singleton forming. Perhaps this means we should hope for a fast takeoff rather than a slow takeoff, for example.

 

I guess. I don't like the concept of a singleton. I prefer to think that by describing a specific failure mode, this gives a more precise model of exactly what kind of coordination is needed to prevent it. Also, we definitely shouldn't assume that a coordinated colonization would follow the Armstrong-Sandberg method. I'm also motivated by a "lamppost approach" to prediction: this model of the future has a lot of details that I think could be worked out with great mathematical precision, which makes it a good case study. Finally, if the necessary kind of coordination is rare, then even if it's not worth it from an EV view to plan for our civilization to end up like this, we should still expect alien civilizations to look like this.

Agreed on all counts except that I like the concept of a singleton. I'd be interested to hear why you don't, if you wish to discuss it.

I'm glad you agree! For the sake of controversy, I'll add that I'm not entirely sure that scenario is out of consideration from an EV point of view: firstly because the exhaust will carry a lot of energy and I'm not sure what will happen to it, and secondly because I'm open to a "diminishing returns" model of population ethics in which the forgone computational capacity does not have an overwhelmingly higher value.

On singletons, I think the distinction between "single agent" and "multiple agents" is more a difference in how we imagine a system than an actual difference. Human civilization is divided into minds, with a high level of communication and coordination within each mind and a significantly lower level between minds. This pattern is an accident of evolutionary history, and if technological progress continues I doubt it will persist into the distant future; but I also don't think there will be perfect communication and coordination between the parts of a future civilization. Even within a single human mind, communication and coordination are imperfect.

Mmm, good point. Perhaps the way to salvage the concept of a singleton is to define it as the opposite of Moloch, i.e. a future is ruled by a singleton to the extent that it doesn't have Moloch-like forces causing drift toward outcomes that nobody wants, money being left on the table, etc. Or maybe we could just say a singleton is where outcomes are on or close to the Pareto frontier. Idk.

If it is to gather resources en route, it must accelerate those resources to its own speed. Or alternatively, it must slow down to a halt, pick up resources and then continue. This requires a huge expenditure of energy, which will slow down the probe.

Bussard ramjets might be viable, but I'm skeptical that they could be faster than the propulsion ideas in the Sandberg/Armstrong paper. In any case, you seem to be talking about spacecraft that consume planets, not Bussard ramjets.

Going from 0.99c to 0.999c requires an extraordinary amount of additional energy for very little gain in distance over time. At that point, the sideways deviations required to reach waypoints (say, swinging by nearby stars instead of staying in a straight line) matter more: it would be faster to travel at 0.99c in a straight line than at 0.999c through a series of waypoints.
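The energy claim here can be checked directly from the relativistic kinetic-energy formula:

```python
import math

# Checking the claim: relativistic kinetic energy per unit mass at 0.999c
# versus 0.99c, for a roughly 0.9% gain in speed.

def ke_per_mass(v):
    """Kinetic energy per unit mass, in units of c^2, for speed v in units of c."""
    return 1.0 / math.sqrt(1.0 - v * v) - 1.0

ratio = ke_per_mass(0.999) / ke_per_mass(0.99)
print(f"{ratio:.1f}x the kinetic energy for ~0.9% more speed")
```

The ratio comes out to about 3.5x, which is the "extraordinary amount of additional energy for very little increase in distance over time."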

If we are talking about going from 0.1c to 0.2c then it makes more sense.

  1. It's true that making use of resources while matching the probe's speed requires a huge expenditure of energy, by the transformation law of energy-momentum if for no other reason. If the remaining energy is insufficient then the probe won't be able to go any faster. Even if there's no more efficient way to extract resources than full deceleration/re-acceleration I expect this could be done infrequently enough that the probe still maintains an average speed of >0.9c. In that case the main competitive pressure among probes would be minimizing the number of stop-overs.
  2. The highest speed considered in the Armstrong/Sandberg paper is 0.99c, which is high enough for my qualitative picture to be relevant. Re-skimming the paper, I don't see an explicitly stated reason why the limit is there, though I note that any higher speed wouldn't affect their conclusions about the Fermi paradox and potential past colonizers visible from Earth. The most significant technological reasons for the limit that I see them address are the energy costs of deceleration and damage from collisions with dust particles, and neither seems to entirely exclude faster speeds.
  3. Yes, at such high speeds optimizing lateral motion becomes very important and the locations of concentrated sources of energy can affect the geometry of the expansion frontier. For a typical target I'm not sure if the optimal route would involve swerving to a star or galaxy or whether the interstellar dust and dark matter in the direct path would be sufficient. For any particular route I expect a probe to compete with other probes taking a similar route so there will still be competitive pressure to optimize speed over 0.99c if technologically feasible.
  4. A lot of what I'm saying remains the same if the maximal technologically achievable speed is subrelativistic. In other ways such a picture would be different, and in particular the coordination problems would be substantially easier if there is time for substantial two-way communication between all the probes and all the colonized areas.
  5. Again, I see a lot of potential follow-up work in precisely delineating how different assumptions on what is technologically possible affect my picture.
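The stop-over arithmetic in point 1 can be sketched with hypothetical numbers (both the leg length and the per-stop time cost are made up for illustration):

```python
# Sketch of the stop-over arithmetic in point 1 (all numbers hypothetical):
# a probe cruising at 0.99c that halts to gather resources every `leg`
# light-years, losing `stop` years per full deceleration/re-acceleration.

cruise = 0.99    # cruise speed in units of c
leg = 100_000    # light-years between stop-overs (hypothetical)
stop = 1_000     # years lost per stop-over (hypothetical)

avg = leg / (leg / cruise + stop)  # average speed over one leg plus one stop
print(f"average speed = {avg:.3f}c")
```

With these numbers the average speed stays around 0.98c, so even full deceleration/re-acceleration, done rarely enough, is compatible with maintaining an average speed well above 0.9c.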