If humanity goes extinct due to an existential catastrophe, it is possible that aliens will eventually colonize Earth and the surrounding regions of space that Earth-originating life otherwise would have colonized. If the aliens' values are sufficiently aligned with human values, the harm of an existential catastrophe that leaves such colonization possible may be significantly lessened.
I think the probability of such alien colonization varies substantially with the type of existential catastrophe. A catastrophe due to rogue AI would make alien colonization unlikely, since it would probably be in the AI's interest to keep the resources of Earth and the surrounding regions for itself. I suspect that a catastrophe due to a biotechnology or nanotechnology disaster, by contrast, would leave alien colonization relatively probable.
I think there's a decent chance that alien values would be at least somewhat aligned with humans'. Human values such as fun and learning exist because they were evolutionarily beneficial, which weakly suggests that aliens would have them too, for similar evolutionary reasons.
My reasoning above suggests that we should devote more effort to averting existential risks that make such colonization less likely, such as risks from rogue AI, than to risks that leave it relatively probable.
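To make the comparison concrete, here is one rough way the trade-off could be formalized (this is only an illustrative sketch of my own framing; the variables and the proportionality are assumptions, not something from the existential risk literature):

$$\text{Expected loss}(C) \;\propto\; 1 - p_C \cdot a,$$

where $p_C$ is the probability that aliens eventually colonize our region given a catastrophe of type $C$, and $a$ is the expected value of an alien-run future relative to a human-descended one. Under this framing, catastrophe types with low $p_C$ (such as rogue AI) destroy more expected value, so, all else equal, averting them deserves relatively more effort.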
Is my reasoning correct? Has what I'm saying already been thought of? If not, would it be worthwhile to inform people working on existential risk strategy, e.g. Nick Bostrom, about this?
I agree with the argument. If you buy into the idea of evidential cooperation in large worlds (ECL, formerly multiverse-wide superrationality), then this argument might go through even if you don't think alien values are very aligned with human values. Roughly, ECL is the idea that you should be nice to other value systems because that (acausally, via evidential/timeless/functional decision theory) makes it more likely that agents with different values will also be nice to our values. Applied to the present argument: if we focus more on averting existential risks that would take resources away from other (potentially unaligned) value systems, then it becomes more likely that agents elsewhere in the universe will focus on averting existential risks that would take resources away from civilizations whose values happen to be aligned with ours.