JordanStone

Astrobiologist @ Imperial College London
546 karma · Joined · Pursuing a doctoral degree (e.g. PhD) · London, UK
www.imperial.ac.uk/people/j.stone22

Bio

Participation
3

Searching for life on Mars @ Imperial College London

Lead of the Space Generation Advisory Council's Cosmic Futures project.

Interested in: space governance, great power conflict, existential risk, cosmic threats, academia, international policy

 

Chilling the f*** out is the path to utopia

How others can help me

If you'd like to chat about space governance or existential risk please book a meeting!

How I can help others

Wanna know about space governance? Then book a meeting!! - I'll get an email and you'll make me smile because I love talking about space governance :D

Sequences
1

Actions for Impact | Offering services examples

Comments
65

Thanks for this comment! I broadly agree with it all, and it was very interesting to read. Thank you in particular for advancing my initial takes on governance (I'm far more comfortable discussing quantum physics than governance systems).

a) Preventing catastrophe seems much more important for advanced civilizations than I realized, and it's not enough for the universe to be defense-dominated.

b) Robustly good governance seems attainable? It may be possible to functionally 'lock-out' catastrophic-risk and tyranny-risk on approach to tech maturity and it seems conceivable (albeit challenging) to softly lock-in definitions of 'catastrophe' and 'tyranny' which can then be amended in future as cultures evolve and circumstances change. 

Agreed on both. Locking stuff out seems possible, and then, as knowledge advances (in moral philosophy or fundamental physics), new possibilities come to light, and priorities change, the "governance system" could be updated from a centralised position, like a software update expanding at the speed of light. The main tradeoff is then between ensuring that no known galactic x-risk or s-risk could ever happen, and staying adaptable to changing knowledge and emerging risks.

At the scale of advanced civilizations collapse/catastrophe for even a single star system seems unbearable.

I strongly agree with this. We (as in, humanity) are at a point where we can control what the long-term future in space will look like. We should not tolerate a mostly great future with some star systems falling into collapse or suffering - we are responsible for preventing that, and allowing it to happen at all is inconceivably terrible even if the EV calculation is positive. We're better than naive utilitarianism.

If we buy your argument here Jordan or my takeaways from Joe's talk then we're like, ah man we may need really strong space governance. Like excellent, robust space governance. But no, No! This is a tyranny risk.

There are ways to address the risks I outlined without a centralised government that might be prone to tyranny (echoing your "hand waving" section later):

  • Digital world creation – a super-capable machine with blueprints (not an AGI superintelligence) goes to each star system and creates digital sentient beings. That's it. No need for governance of independent civs.
  • We only send out probes to collect resources from the galaxy and bring them back to our solar system. We can expand in the digital realm here and remain coordinated.
  • Right from the beginning we figure out a governance system with 100% existential security and 0% s-risk (whatever that is). The expansion into space is supervised to ensure that each independent star system begins with this super governance system, but other than that they have liberty.
  • Just implement excellent observation of inhabited star systems. Alert systems that warn nearby star systems of bad behaviour prevent s-risks from lasting millennia (but, of course, carry conflict risks).

Maybe if we find the ultimate moral good and are coordinated enough to spread it, then the universe will be homogeneous, so there is no need for governance to address unpredicted behaviour.

In particular it seems possible to forcibly couple the power to govern with goodness.

I think this is a crucial point, and I'm hopeful about it. If it's possible to lock in that strong correlation, does that ensure absolute existential security and no s-risks? I think it depends on what the "goodness" is. If the "goodness" is based on panbiotic ethics, then we get a universe full of suffering Darwinian biology. If the "goodness" is utilitarian, then the universe becomes full of happiness machines... maybe that's bad. Don't know. It seems that the goodness in your USA example is defined by Christian values, which may not give us the best possible long-term future. Maybe I'm reading too far into your simple model (I find it conceptually very helpful though).

There's also the sort of hand-off or die roll wherein you cede/lose power to something and can't get it back unless so willed by the entity in question. I prefer my sketch of marching to decouple governmental power from competitiveness.

Yeah, I agree. But I think it's dependent on the way that society evolves. If we're able to have a long reflection (which I think is unlikely), then maybe we can build a good God more confidently. But your model sounds more realistic.

I had a discussion with Robin Hanson about this post, available here: 

Hi Josh. I'm not a careers advisor but I'm working on some space governance projects.

I would recommend checking out the sections on space governance in this recent report from William MacAskill and Fin Moorhouse to get an idea of what some effective altruists are currently thinking about in relation to space governance: https://www.forethought.org/research/preparing-for-the-intelligence-explosion

I'd also really recommend getting involved with the Space Generation Advisory Council if you'd like to work on challenges in space tech and governance. They have lots of project groups you can get involved in, on topics like space law and policy, and space safety and sustainability.

I'm happy to have a chat about space governance and effective altruism if you want to book a chat: https://savvycal.com/AstroJordanStone/2cb3cbdb

Could you expand on why you think space would be defense dominant?

Thanks for the in-depth comment. I agree with most of it. 

if interstellar colonization would predictably doom the long-term future, then people would figure out solutions to that.

Agreed, I hope this is the case. I think there are some futures where we send lots of ships out to interstellar space for some reason, or act too hastily (maybe a scenario where transformative AI speeds up technological development, but not so much our wisdom). Just one mission (or set of missions) capable of self-propagating to other star systems almost inevitably leads to galactic civilisation in the end, and we'd have to catch up to it to ensure existential security, which would become challenging if they create von Neumann probes.

"50%" in the survey was about vacuum decay being possible in principle, not about it being possible to technologically induce (at the limit of technology). The survey reported significantly lower probability that it's possible to induce. This might still be a big deal though!

Yeah, this is my personal estimate based on that survey and its responses. I was particularly convinced by one respondent who put 100% probability on it being possible to induce (conditional on the vacuum being metastable), since anything permitted by the laws of physics should be possible to induce with arbitrarily advanced technology (so, 50%, based on the chance that the vacuum is metastable).
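To spell the arithmetic out (these are my numbers, taking that respondent's conditional at face value rather than the survey aggregate), it's just a conditional probability:

$P(\text{inducible}) = P(\text{metastable}) \times P(\text{inducible} \mid \text{metastable}) \approx 0.5 \times 1 = 0.5$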

Thanks Jacob.

I really like this idea to get around the problem of liberty. Though, I'm not sure how rapid the response would have to be from others to someone initiating vacuum decay - could a 'bad actor' initiate vacuum decay in the time it takes for the system to send an alert and for a response to arrive? I think having a non-intrusive surveillance system would work in a world where near-instant communication between star systems is possible (e.g. wormholes or quantum coupling). 

I'm borrowing the term "N-D lasers" from Charlie Stross in this post: https://www.antipope.org/charlie/blog-static/2015/04/on-the-great-filter-existentia.html 

N-dimensional just refers to an arbitrarily powerful laser, potentially beyond our current understanding of physics. These lasers might be so powerful they could travel vast distances through interstellar space and destroy a star system. They would travel at the speed of light, so it'd be impossible to see them coming. Kurzgesagt made a great video on this:

Hi Birk. Thank you for your very in-depth response, I found it very interesting. That's pretty much how I imagined the governance system when I wrote the post. I actually had it as a description like that originally, but I hated the implications for liberalism, so I took a step back and listed requirements instead (which didn't actually help).

The "points of no return" do seem quite contingent, and I'm always sceptical about the tractability of trying to prevent something from happening - usually my approach is: it's probably gonna happen, how do we prepare? But besides that, I'm going to look into more specific "points of no return" as there could be a needle hiding in the noodles somewhere. I feel like this is the kind of area where we could be missing something, e.g. the point of no return is really close, or there could be a tractable way to influence the implementation of that point of no return.

Definitely on base :D 

I think a galactic civilisation needs to have absolute existential security, or a galactic x-risk will inevitably occur (i.e., they need a coin that always lands on heads). If your galactic civilisation has survived for longer than you would have expected it to based on cumulative chances, then you can be very confident you've achieved absolute existential security (you have that coin). But a galactic civ would have to know whether they have the coin that is always heads, or the coin that is heads 99.9999999% of the time. I'm not sure how that's possible. 
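One rough way to quantify how hard that is (a sketch, assuming each period of survival is an independent trial): after surviving $n$ periods, the likelihood ratio between the always-heads coin ($p = 1$) and the almost-perfect coin ($p = 1 - 10^{-9}$) is

$\frac{1}{(1 - 10^{-9})^{n}} \approx e^{n \times 10^{-9}}$

which stays close to 1 until $n$ approaches a billion. So survival evidence alone can't distinguish the two coins over any realistic number of trials.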
