JordanStone

Astrobiologist @ Imperial College London
567 karma · Joined · Pursuing a doctoral degree (e.g. PhD) · London, UK
www.imperial.ac.uk/people/j.stone22

Participation (3): Space governance

Sequences (1): Actions for Impact | Offering services examples

Comments (67)
From my own experience, and from what I've seen, new contributors to the Forum often underestimate how much prior work the discourse here builds on. Downvotes also aren't meant to signal disagreement with a post; they're supposed to function as something like a quality assessment. So my guess, from a read of your downvoted post, is that the downvotes reflect the fact that the argument you're making has been made before on the Forum and within the wider EA community, and you haven't engaged with that prior work.

Maybe search for stuff like "AI-enabled coups", "power grabs", and "gradual disempowerment".

20% disagree: "Effective altruists should spend more time and money on global systemic change"

I'm mostly worried about low tractability and, if it is tractable, a lack of ability to predict the final outcomes of advocating for a world government. Maybe the safe option is to pursue traditional methods of promoting international collaboration: treaties, information sharing, panel discussions, etc.

Thanks for this comment! I broadly agree with it all, and it was very interesting to read. Thank you in particular for advancing my initial takes on governance (I'm far more comfortable discussing quantum physics than governance systems).

a) Preventing catastrophe seems much more important for advanced civilizations than I realized, and it's not enough for the universe to be defense-dominated.

b) Robustly good governance seems attainable? It may be possible to functionally 'lock out' catastrophe risk and tyranny risk on the approach to tech maturity, and it seems conceivable (albeit challenging) to softly lock in definitions of 'catastrophe' and 'tyranny' which can then be amended in future as cultures evolve and circumstances change.

Agreed on both. Locking stuff out seems possible, and then, as knowledge advances (in moral philosophy or fundamental physics), new possibilities come to light, and priorities change, the "governance system" could be updated from a centralised position, like a software update expanding at the speed of light. The main tradeoff is then between ensuring that no known galactic x-risk or s-risk could ever happen, and remaining adaptable to changing knowledge and emerging risks.

At the scale of advanced civilizations collapse/catastrophe for even a single star system seems unbearable.

I strongly agree with this. We (as in, humanity) are at a point where we can control what the long-term future in space will look like. We should not tolerate a mostly great future in which some star systems fall into collapse or suffering; we are responsible for preventing that, and allowing it to happen at all would be inconceivably terrible even if the EV calculation is positive. We're better than naive utilitarianism.

If we buy your argument here Jordan or my takeaways from Joe's talk then we're like, ah man we may need really strong space governance. Like excellent, robust space governance. But no, No! This is a tyranny risk.

There are ways to address the risks I outlined without a centralised government that might be prone to tyranny (echoing your 'hand-waving' section later):

  • Digital world creation – a super-capable machine carrying blueprints (not an AGI superintelligence) goes to each star system and creates digital sentient beings. That’s it. No need for governance of independent civilisations.
  • We only send out probes to collect resources from the galaxy and bring them back to our solar system. We can expand in the digital realm here and remain coordinated.
  • Right from the beginning, we figure out a governance system with 100% existential security and 0% s-risk (whatever that is). The expansion into space is supervised to ensure that each independent star system begins with this super governance system, but beyond that they have liberty.
  • Just implement excellent observation of inhabited star systems. Alert systems that notify nearby star systems of bad behaviour prevent s-risks from lasting millennia (but, of course, carry conflict risks).

Maybe if we find the ultimate moral good and are coordinated enough to spread it, then the universe will be homogeneous, and there will be no need for governance to address unpredicted behaviour.

In particular it seems possible to forcibly couple the power to govern with goodness.

I think this is a crucial point, and I'm hopeful about it. If it's possible to lock in that strong correlation, does that ensure absolute existential security and no s-risks? I think it depends on the goodness. If the "goodness" is based on panbiotic ethics, then we get a universe full of suffering Darwinian biology. If the "goodness" is utilitarian, then the universe becomes full of happiness machines... maybe that's bad. I don't know. It seems that the goodness in your USA example is defined by Christian values, which maybe don't give us the best possible long-term future. Maybe I'm reading too much into your simple model (I find it conceptually very helpful though).

There's also the sort of hand-off or die roll wherein you cede/lose power to something and can't get it back unless so willed by the entity in question. I prefer my sketch of marching to decouple governmental power from competitiveness.

Yeah, I agree. But I think it depends on the way that society evolves. If we're able to have a long reflection (which I think is unlikely), then maybe we can build a good God more confidently. But your model sounds more realistic.

I had a discussion with Robin Hanson about this post, available here: 

Hi Josh. I'm not a careers advisor, but I'm working on some space governance projects.

I would recommend checking out the sections on space governance in this recent report by William MacAskill and Fin Moorhouse to get an idea of what some effective altruists are currently thinking about space governance: https://www.forethought.org/research/preparing-for-the-intelligence-explosion

I'd also really recommend getting involved with the Space Generation Advisory Council if you'd like to work on challenges in space tech and governance. They have lots of project groups you can join, covering many topics such as space law and policy, and space safety and sustainability.

I'm happy to have a chat about space governance and effective altruism if you want to book a chat: https://savvycal.com/AstroJordanStone/2cb3cbdb

Could you expand on why you think space would be defense-dominant?

Thanks for the in-depth comment. I agree with most of it. 

if interstellar colonization would predictably doom the long-term future, then people would figure out solutions to that.

Agreed, and I hope this is the case. But I think there are some futures where we send lots of ships out to interstellar space for some reason, or act too hastily (maybe a scenario where transformative AI speeds up technological development, but not so much our wisdom). Just one mission (or set of missions) capable of self-propagating to other star systems almost inevitably leads to a galactic civilisation in the end, and we'd have to catch up to it to ensure existential security, which would become very challenging if they create von Neumann probes.

"50%" in the survey was about vacuum decay being possible in principle, not about it being possible to technologically induce (at the limit of technology). The survey reported significantly lower probability that it's possible to induce. This might still be a big deal though!

Yeah, this is my personal estimate based on that survey and its responses. I was particularly convinced by one respondent who put 100% probability on it being possible to induce (conditional on the vacuum being metastable), on the grounds that anything permitted by the laws of physics is possible to induce with arbitrarily advanced technology. That gives roughly 50% overall, based on the ~50% chance that the vacuum is metastable.
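Spelling out the arithmetic behind that estimate (taking the respondent's 100% conditional figure, and a ~50% chance of metastability from the survey, as the inputs):

P(inducible) = P(inducible | metastable) × P(metastable) ≈ 1.0 × 0.5 = 0.5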

Thanks Jacob.

I really like this idea to get around the problem of liberty. Though I'm not sure how rapid the response from others would have to be to someone initiating vacuum decay - could a 'bad actor' initiate vacuum decay in the time it takes for the system to send an alert and for a response to arrive? I think a non-intrusive surveillance system would work in a world where near-instant communication between star systems is possible (e.g. wormholes or quantum entanglement).
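For a sense of scale (assuming light-speed signalling, and using the nearest star purely as an illustrative distance): Proxima Centauri is ~4.25 light-years away, so an alert plus a response would take at least 2 × 4.25 ≈ 8.5 years round trip - ample time for a bad actor to finish what they started.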

I'm borrowing the term "N-D lasers" from Charlie Stross in this post: https://www.antipope.org/charlie/blog-static/2015/04/on-the-great-filter-existentia.html 

"N-dimensional" just refers to an arbitrarily powerful laser, potentially beyond our current understanding of physics. These lasers might be so powerful that they could travel vast distances through interstellar space and destroy a star system. They would travel at the speed of light, so it'd be impossible to see them coming. Kurzgesagt made a great video on this:
