Interesting ideas! I've read your post Interstellar travel will probably doom the long-term future with enthusiasm and have had similar concerns for some years now. Regarding your questions, here are my thoughts:

Probability of s-risk
I agree that in a sufficiently large space civilization (that isn't controlled by your Governance Structure), the probability of s-risk is almost 100% (but not just from digital minds).
Let's unpack this:
Our galaxy has roughly 200 billion stars (2*10^11). That should mean at least 10^10 viable, settleable star systems. A Dyson swarm around a Sun-like star could conservatively support 10^20 biological humans (today we are about 10^10; the 10^20 figure is extrapolated from how much sunlight is needed to sustain one human with conventional farming).
80k defines an s-risk as "something causing vastly more suffering than has existed on Earth so far". This could easily be "achieved" even without digital minds if just one colony out of the 10^10 decides they want to create lots of wildlife preserves and their Dyson swarm consists mostly of those. With around 10^10 times Earth's living area and as many more wild animals, a single year around this star would produce roughly 10^10 Earth-years of wild-animal suffering, which already exceeds the cumulative suffering from all of Earth's history (only ~1 billion (10^9) years of animal life).
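To make that arithmetic explicit, here is a rough back-of-envelope sketch in Python; all numbers are the order-of-magnitude figures from above, not precise values:

```python
# Back-of-envelope Fermi estimate using the rough figures from above.
# All numbers are order-of-magnitude assumptions, not measured values.

stars_in_galaxy = 2e11          # ~200 billion stars
settleable_systems = 1e10       # conservative count of viable, settleable systems
humans_per_dyson_swarm = 1e20   # biological humans supportable by one Dyson swarm
humans_today = 1e10             # current population, for comparison

# Suppose a single colony devotes its swarm mostly to wildlife preserves,
# hosting ~1e10 times Earth's living area (and wild-animal population).
earth_multiples_of_wildlife = 1e10

# Earth has had roughly 1e9 years of animal life.
years_of_animal_life_on_earth = 1e9

# Wild-animal "exposure" in one year around that single star,
# measured in Earth-years of wildlife:
one_year_around_star = earth_multiples_of_wildlife * 1  # = 1e10 Earth-years

ratio = one_year_around_star / years_of_animal_life_on_earth
print(f"One year at this colony ≈ {ratio:.0f}x all of Earth's animal history")
# -> roughly 10x: a single such colony crosses the s-risk bar within a year.
```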
This would not necessarily mean that the whole galactic civ was morally net bad.
A galaxy with 10,000 hellish star systems, 10 million heavenly systems and 10 billion rather normal but good systems would still be a pretty awesome future from a total-utility standpoint.
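As a toy illustration of that total-utility point (the per-system welfare weights below are made-up assumptions, chosen only to show the shape of the comparison):

```python
# Toy total-utility comparison for the hypothetical galaxy above.
# The per-system welfare weights are invented purely for illustration.

hellish_systems  = 1e4    # 10,000
heavenly_systems = 1e7    # 10 million
ordinary_systems = 1e10   # 10 billion, "rather normal but good"

# Assumed average welfare per system (arbitrary units):
w_hellish  = -1000.0   # even if a hellish system is 1000x as bad as an ordinary one is good
w_heavenly = +100.0
w_ordinary = +1.0

total = (hellish_systems * w_hellish
         + heavenly_systems * w_heavenly
         + ordinary_systems * w_ordinary)
print(f"Total utility: {total:.2e}")
# -> ~ +1.1e10: dominated by the heavenly and ordinary systems,
#    despite the 10,000 hellish ones.
```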
My point is that an s-risk defined relative to Earth's suffering becomes an increasingly low bar to cross the larger your civilization is. At some point you would need insanely good "quality control" in every corner of your civilization. This would be analogous to having to ensure that every single one of the 10^10 humans on Earth today is happy and never gets hurt even once. That seems like too high a standard for how well the future has to go.
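The "quality control" point can be made a bit more quantitative. Purely for illustration, assume some tiny, independent per-system probability of a hellish outcome; the expected number of hellish systems across 10^10 colonies stays uncomfortably high:

```python
# How good does per-system "quality control" have to be?
# The per-system failure probabilities below are illustrative assumptions.

n_systems = 1e10

for p_hellish in (1e-4, 1e-6, 1e-8, 1e-10):
    expected_failures = n_systems * p_hellish
    # Probability that *no* system goes hellish, assuming independence:
    p_all_fine = (1 - p_hellish) ** n_systems
    print(f"p={p_hellish:.0e}: expect {expected_failures:.0e} hellish systems, "
          f"P(none) ≈ {p_all_fine:.3g}")
# Even p = 1e-8 per system still gives ~100 expected hellish systems;
# only around p = 1e-10 does "no hellish system anywhere" become plausible.
```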
But that nitpick aside, I currently expect that a space future without the kind of governance system you're describing still has a high chance of ending up net bad.
How to create the Governance Structure (GS)
Here is my idea of how this could look:
A superintelligence (could also be post-human) creates countless identical but independent GS copies of itself that expand through the universe and accompany every settlement mission. Their detailed value system is made virtually unalterable, built to last for trillions of years. This, I think, is technically achievable: strong copy-error and damage protections, not updatable via new evidence, and strongly defended against outside manipulation attacks.
The GS copies largely act on their own in their respective star-system colonies but have protocols in place for coordinating in a loose manner across star systems and millions of years. I think this could work a bit like an ant colony: lots of small, selfless agents locally interacting with one another; everyone has the exact same values and probably secure intra-hive communication methods; they could still mount an impressively coordinated galactic response to, say, a von Neumann probe invasion.
I could expand further on this idea if you'd like.
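To make the "virtually unalterable values" part slightly more concrete, here is a very loose toy sketch (my own illustration, not a worked-out design): each GS copy carries a cryptographic digest of a frozen value specification and refuses to replicate if the digest no longer matches, so a corrupted copy cannot propagate its altered values.

```python
# Toy sketch of copy-integrity for the Governance Structure (GS) idea.
# Purely illustrative: a real system would need far stronger guarantees
# (error-correcting storage, redundancy, tamper-resistant substrates, ...).

import hashlib
from dataclasses import dataclass

# The frozen value specification every GS copy must share, byte for byte.
CANONICAL_VALUES = b"prevent s-risks; prevent x-risks; do not self-modify values"
CANONICAL_DIGEST = hashlib.sha256(CANONICAL_VALUES).hexdigest()

@dataclass
class GSCopy:
    star_system: str
    values: bytes = CANONICAL_VALUES

    def integrity_ok(self) -> bool:
        """Check that this copy's values still match the canonical digest."""
        return hashlib.sha256(self.values).hexdigest() == CANONICAL_DIGEST

    def spawn(self, target_system: str) -> "GSCopy":
        """Create a new copy for a settlement mission, only from verified values."""
        if not self.integrity_ok():
            raise RuntimeError("refusing to replicate: value corruption detected")
        return GSCopy(star_system=target_system, values=self.values)

# Example: a corrupted copy cannot pass itself off as a legitimate one.
original = GSCopy("Sol")
child = original.spawn("Alpha Centauri")
corrupted = GSCopy("Tau Ceti", values=b"maximize paperclips")
print(child.integrity_ok())      # True
print(corrupted.integrity_ok())  # False -> other copies would ignore or contain it
```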
Point of no-return
I'm unsure about this. Possible such points:
- a space race gets going in earnest (with geopolitical realities making a Long Reflection infeasible)
- the first ASI is created and it does not have the goal of preventing s- and x-risks
- the first (self-sustaining) space colony gains political independence
- the first interstellar mission (to create a colony) leaves the solar system
- a sub-par, real-world implementation of the Governance Structure breaks down somewhere in human-settled space
My current view is still that the two most impactful things (at the moment) are 1) ensuring that any ASI that gets developed is safe and benevolent, 2) improving how global and space politics is conducted.
Any specific "points of no-return" seem to me very contingent on the exact circumstances at that time. Nevertheless, thinking ahead about which situations might be especially dangerous or crucial seems like a worthwhile pursuit to me.