Thanks for this post, I really enjoyed it. You flirt with the idea that we might need serious space governance, an idea I long opposed due to a strong internal antibody reaction to lock-in and tyranny-risk-type concerns. Despite the aversion, I've since been toying with the idea myself, spurred on by coming to think that:


a) Preventing catastrophe seems much more important for advanced civilizations than I realized, and it's not enough for the universe to be defense-dominated.

b) Robustly good governance seems attainable? It may be possible to functionally 'lock out' catastrophic risk and tyranny risk on the approach to tech maturity, and it seems conceivable (albeit challenging) to softly lock in definitions of 'catastrophe' and 'tyranny' which can then be amended in the future as cultures evolve and circumstances change.

 

Some quick rambly thoughts:

Firstly, I think the prospect of civs at tech maturity still facing x-risks is very plausible, even if only via the self-replicating-machines bit. I think of Carlsmith's talk 'Can Goodness Compete?'. Without some form of constraint between agents that decouples power from competitiveness, we should expect to regress to Malthusian/Darwinian equilibria at some point. I.e., if we proliferate into the stars unconstrained, we expect selection pressures to maximize evolutionary fitness. We like efficiency, but it's not the only thing we like. We'd also like to spend our resources on beautiful space cathedrals and art, but any offshoots of civilization who prefer to become more competitive instead will gain advantage/resources more effectively and come to overmatch us in the long run (absent countermeasures). A toy sketch of that compounding logic is below.
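
Here's a minimal replicator-style sketch of that selection argument, with every parameter invented purely for illustration: each lineage grows in proportion to what it reinvests, and the lineage that spends a fifth of its output on cathedrals gets compounded out of relevance.

```python
# Toy model of the selection argument above: two civilizational offshoots
# grow exponentially, but the "cathedral builder" diverts 20% of its output
# to non-competitive value (art, space cathedrals) while the "fitness
# maximizer" reinvests everything. All numbers are made up.

GROWTH_PER_REINVESTED_UNIT = 0.05   # per-step growth from full reinvestment
reinvestment = {"fitness_maximizer": 1.00, "cathedral_builder": 0.80}
resources = {name: 1.0 for name in reinvestment}

for step in range(501):
    if step % 100 == 0:
        total = sum(resources.values())
        shares = ", ".join(f"{n}: {r / total:.1%}" for n, r in resources.items())
        print(f"step {step:3d} -> {shares}")
    for name in resources:
        resources[name] *= 1 + GROWTH_PER_REINVESTED_UNIT * reinvestment[name]
```

Under these made-up numbers the cathedral builder never shrinks in absolute terms; it just dwindles to a rounding error of the total, which is the 'overmatched in the long run' worry.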

My previous response was: eh, don't worry, the universe is probably defense-dominant, so they can't outcompete you out of existence. Maybe, for example, you fly off with some kilograms into the middle of nowhere and give your civ effective immortality by distributing it across many fusion/Dyson-sphere-powered black sites in 'interstellar space' which function as your horcruxes, and even Joe's locust swarm cannot be bothered to comb through space playing Battleship for millennia to hunt you.

On reflection this seems unacceptable though? Like, what if the maximally competitive equilibrium of the universe is really horrific, or just super low-value/meh/mid by our lights? Seems likely. No space cathedrals, like I said. Maybe it's intricate and beautiful, but it doesn't seem nearly as good as the ultra-diverse utopias we could build. And you say, 'evolution made many beautiful things on Earth though', and so true, but it also had millions of years, and I don't think nature is as interesting or as beautiful as the stuff humans have built in the last few thousand. Much of nature is also horrific. I'd prefer intelligent design that at least includes the option to select values to design for, rather than being a slave, employing intelligence solely for the goal of better maximizing evolutionary fitness.

Further, I don't think preventing x-risk is sufficient. There are also s-risks, which could be unimaginably horrible. At the scale of advanced civilizations, collapse/catastrophe for even a single star system seems unbearable. Considering the immense carrying capacity enabled by the matter/energy concentration of a stellar system, there is just so much to lose. If that doesn't seem salient, it might be worth taking a second to picture in some detail the Black Death, the Holocaust, or the Atlantic slave trade, then stretching these out for millennia, or multiplying the lives affected by hundreds of millions. These could also be long-lived transhuman/digital beings who would otherwise have flourished beyond our imagining. I think they'd agree, and would consider even exceedingly small chances of catastrophic collapse to be unacceptable (see the back-of-the-envelope below).
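
To make 'exceedingly small chances' concrete, here's a crude back-of-the-envelope sketch; every number is a placeholder I'm inventing to show the scaling, not an estimate:

```python
# Crude expected-loss arithmetic for a single star system; all values
# are placeholders chosen only to illustrate the scaling.

P_COLLAPSE_PER_CENTURY = 1e-6    # an "exceedingly small" chance of catastrophe
MINDS_IN_SYSTEM = 1e18           # assumed carrying capacity of one system
HORIZON_CENTURIES = 10_000       # a one-million-year horizon

p_any_collapse = 1 - (1 - P_COLLAPSE_PER_CENTURY) ** HORIZON_CENTURIES
print(f"P(collapse within horizon) ~ {p_any_collapse:.2%}")                    # ~1%
print(f"Expected minds affected    ~ {p_any_collapse * MINDS_IN_SYSTEM:.1e}")  # ~1e16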

From this, I'm like: hmm, if the universe is offense-dominated (OD), we might need good space governance to ensure there's a universe and beings to enjoy it at all. If the universe is defense-dominated (DD), we might need good space governance because, even though we expect to survive, we also expect that maladaptive/horrific civilizational offshoots are unassailable. We take 'we don't allow eternal torture' as our casus belli and launch ships to fight a just war, but... there's nothing we can do.

Worse, in both the OD and DD cases, competitiveness is an issue. We seem to actually require goodness-dominance (GD) to escape the need for substantial space governance.
 

  • If OD but not GD, competition between civs is existential.
    • Unconstrained splinter civilizations then pose x-risk and s-risk.
    • Some form of governance seems needed to curb this, else death by Murphy's law.
    • Aliens also pose a risk, but for that we need diplomacy/coordination or to merge governance structures.
  • If DD but not GD, competition between civs is merely for resources/prominence on the galactic/universal/multiversal stage.
    • Unconstrained splinter civilizations then pose an s-risk, or unconstrained competition means this region of space belongs to the fittest.
  • If GD, then goodness wins by default (e.g. good is a great attractor and point of convergence for maturing civilizations, such that the arcs of civilizations' evolution tend unambiguously net positive).
    • Here 'good' might stem from moral realism or from some idealized reflective consensus/moral parliament.
    • If GD, then libertarianism in the limit is fine. Buy a probe, start a space-faring civilization. No permit required.

We can debate/challenge these premises. It seems unlikely to me that the universe is GD, though it might be good-favoring enough that with decent probability we could muddle through. From where I stand, it seems to lean away from good-by-default, but as agents we can recognize this and design societal and governmental mechanisms to make goodness more competitive. I think of Darwinian traps, or the general brutality of nature as compared to society in the United States, for example.

If we buy your argument here, Jordan, or my takeaways from Joe's talk, then we're like: ah man, we may need really strong space governance. Like excellent, robust space governance. But no, no! This is a tyranny risk. What if the government becomes corrupt, or just inept/maladaptive, and can't be overthrown? That's very scary, and any time people talk about it I get itchy. Hopefully you can feel through this comment my hovering nowhere near any buttons, very unsure, not getting ready to cut my teeth biting any bullets, very cautious...

Okay. Maybe there's a way to have strong space governance that we agree to and favor. There are a lot of concerns and failure modes. Let me try to outline some terminal properties that seem ideal.

  • Is as powerful as it needs to be to prevent the aforementioned ills
  • Is maximally free,
    • providing oversight only in the dimensions it needs to
    • providing oversight to the necessary extent and no further
  • Locks out tyranny risk, governmental overstep, corrosion, and corruption
  • Is responsive to the people; can stably evolve alongside the culture/needs/desires of the governed

I'm probably missing things here; we can add/refine. Naturally, the response to these points is: sure, that's nice and all, but you have to concretize. How much oversight is necessary? How powerful is 'as powerful as needed'? And there seems to be some sense in which universally satisfactory answers appear infeasible. I agree it seems tough. Very complicated. We've only tried as a species to build stable and good governance for a few thousand years, after all (read sarcastically).

Well, but actually: we've only tried as a species to build stable and good governance for a few thousand years, after all (read earnestly). And boy, has there been progress. Today, the rapid progress of technology puts new options on the table: better mechanisms for communication, assurance, etc. AI in particular seems poised to change the game quite a lot. We have both the benefit of learning from the successes/failures of our ancestors and more tools in the toolkit. And this is preaching to the choir, but there is the distinct possibility that some or all of the seemingly inevitable tensions/tradeoffs which plagued governance in the past could be mitigated/eliminated by progress in technology/civics.

In particular, it seems possible to forcibly couple the power to govern with goodness. Today, to obtain governmental power you need to be competitive, yes, but you also need at least to act as though you stand for some vision of the 'good'. There are also checks and balances, watchdogs, etc. which aim to block the abuse of power. In this way we've partially decoupled power from competitiveness and partially intertwined the attainment of power with goodness. In the future we could arrange to be dramatically more successful at this. We may be able to ensure that the only path to wielding governmental power is to be verifiably worthy of it.

The hard, 'when the chips are down', 'at the end of the day' kind of power is then wielded by assured enforcement mechanisms of our design, or, insofar as agency is needed, by servant angels who are powerful but low-privilege, with tight constraints on their behavior: our best attempt at verifiably virtuous, maximally trustworthy minds. Provided we develop a strong science of intelligence, it seems possible to create a very robust and capable yet verifiably restrained system. It could comprise highly intelligent but transparently/provably safe entities, narrow/programmatic ensembles of enforcement mechanisms/protocols, or a mixture of the two.

Here's a picture of what I mean. On the top left, the initial Malthusian/Darwinian equilibrium, in which power and competitiveness are highly correlated; then today's US, with some checks and balances curbing the extent to which hard power is correlated with competitiveness; then, later still, a future in which hard power is tightly correlated with goodness. On the bottom, the pyramid of power today stratifies classes, with hard power and access to privilege mostly correlated. In the future, we imagine inverting it such that a large elite class enjoys lots of privilege and has power and agency but not hard power. Hard power is possessed by the 'servant angels'.

Plausibly this costs very little, and there is little to miss? There is a very nice asymmetry between base and virtual reality that enables this: the vast majority of beings are likely to be digital, and digital spaces may be very permissive, since wild experiments and happenings there can in principle occur in isolation. You might ascend into an upper layer of virtual reality to create a sandbox world with friends. Here you have the full configuration space available to you, except for those limitations, such as the suffering floor, which were enabled after the long reflection, plus some addendums added over the course of subsequent cultural evolution. Then later you doff your god-like powers to descend into a more public layer of reality in which some additional constraints bind you, though probably these are unimaginably liberal by our lights.

You probably aren't the kind of entity that would descend all the way into the terribly impermissive base reality, but if you did, you'd be less powerful there still. Here you can only download your consciousness into a range of A-class ships and synthetic bodies, which can be quite fun, but you suffer all the lame mundanities such as friction and inertia. Maybe you want this challenge, though, and it's a source of meaning and purpose to undertake a project in base reality. Of course, you are functionally no threat to the much more powerful S-class vessels manned by our servant angels, who see to the protection of virtual worlds and proceed with establishing new colonies. You couldn't outrun them, outshoot them, or escape their notice if you tried, but that's okay, because, well, let's say you got your permits to begin a new civilization already, and so they'll escort you to your new star system. They help with auto-factory construction and just ensure all the base and virtual worlds constructed are up to code.

Or what if you want to descend into base reality to join those great guardians looking after the quintillion minds in your region, because you find great meaning and purpose in it? Yes, perhaps this kind of mobility can be allowed too, but the stakes are so great. You'd have to become transparent; you'd have to become like the servant angels, as virtuous and infallible as can be.

Maybe some folks didn't want to become post-human; maybe I'm one of them who didn't want to self-modify, or maybe I just value being in base reality inherently. Seems like there's a ton of room for that as well. I'd guess that at this time in the future, humans would be essentially free to live without oversight or restriction if desired (perhaps with the exception of some suffering floors or provisions to step in in the event of impending collapse/catastrophe). At some point it probably seems insane to have thousands or millions of virtual worlds running untold numbers of minds and not make their shared source of vulnerability in base reality as impervious to threat (agential and natural) as physics allows.

This is hand-waving a lot, and a very cavalier treatment of an immensely serious subject, but yes, I wanted to raise that:

a) It seems to me like you might need very competent and powerful space governance even in a 'safe' DD universe.

b) There seem to me to be very cool, very agreeable endgames even if we need some kind of 'locked-in' governance (which is not to say it's easy to get there).

Final note: there's another flavor of 'solution' here which goes like:

  1. Build a good god
  2. Rest and watch the sunrise on a grateful universe

'Build' and 'good' do a lot of load-bearing here. There's also the sort of hand-off, or die roll, wherein you cede/lose power to something and can't get it back unless so willed by the entity in question. I prefer my sketch of marching to decouple governmental power from competitiveness. You get to iterate your way toward sophisticated governance mechanisms and servant angels. The governance system gains in power as it trades you assurances. This continues the conception of the social contract, and insofar as we cede power, we do it as part of an amendable agreement between agents rather than by establishing a (hopefully benevolent) dictator.