JordanStone

Astrobiologist @ Imperial College London
387 karma · Joined · Pursuing a doctoral degree (e.g. PhD) · London, UK
www.imperial.ac.uk/people/j.stone22

Bio


Searching for life on Mars @ Imperial College London

Lead of the Space Generation Advisory Council's Cosmic Futures project.

Interested in: Space Governance, Great Power Conflict, Existential Risk, Cosmic Threats, Academia, International Policy

 

Chilling the f*** out is the path to utopia

How others can help me

If you'd like to chat about space governance or existential risk please book a meeting!

How I can help others

Wanna know about space governance? Then book a meeting!! - I'll get an email and you'll make me smile because I love talking about space governance :D

Sequences (1)

Actions for Impact | Offering services examples

Comments (48)

Yeah that's true. 

I think 1000 is where I would intuitively start to get very worried, but there would be hundreds of millions of habitable planets in the Milky Way, so theoretically a galactic civilisation could comprise that many colonies if it didn't kill itself before then.

I guess the probability of one of these civilisations initiating an s-risk or galactic x-risk would just increase with the size of the galactic civilisation. So the more that humanity expands throughout the galaxy, the greater the risk.

Yeah sure, it's like the argument that if you put infinite chimpanzees in front of typewriters, one of them will write Shakespeare. If you have a galactic civilisation, it would be very dispersed, and most likely each 'colony' occupying each solar system would govern itself independently. So they could be treated as independent actors sharing the same space, and there might be hundreds of millions of them. In that case, the probability that one of those millions of independent actors creates astronomical suffering becomes extremely high, near 100%. I used digital sentience as an example because it's the risk of astronomical suffering that I find most terrifying - IF digital sentience is possible, then the number of suffering beings it would be possible to create could conceivably outweigh the value of a galactic civilisation. That 'IF' contains a lot of uncertainty on my part.
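To put a rough shape on that intuition (illustrative numbers only, not estimates): if each independent colony has some small per-era probability p of initiating astronomical suffering, then across N colonies the chance that at least one does is

$$P(\text{at least one}) = 1 - (1-p)^{N}$$

so even something like p = 10^-6 per colony with N = 10^8 colonies gives about 1 - e^-100, i.e. essentially certainty.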

But this also applies to tyrannical governments: how many of those independent civilisations across a galaxy will become tyrannical and cause great suffering to their inhabitants? How many of them will terraform other planets and start biospheres of suffering beings?

The same logic also applies to x-risks that affect a galactic civilisation:

all it takes is one civilization of alien ass-hat griefers who send out just one Von Neumann Probe programmed to replicate, build N-D lasers, and zap any planet showing signs of technological civilization, and the result is a galaxy sterile of interplanetary civilizations until the end of the stelliferous era (at which point, stars able to power an N-D laser will presumably become rare). (Charlie Stross)

Stopping these things from happening seems really hard. It's as if a galactic civilisation needs to be designed right from the beginning to make sure that no future colony ever does this.


Do you expect to be more of a mentor or a mentee?

 

I'm very active in space governance and I'm excited to chat about how that crosses over with many other EA cause areas. 

Link to my swapcard

Elon Musk recently presented SpaceX's roadmap for establishing a self-sustaining civilisation on Mars (by 2033 lol). Aside from the timeline, I think there may be some important questions to consider regarding space colonisation and s-risks:

  1. In a galactic civilisation of thousands of independent and technologically advanced colonies, what is the probability that one of those colonies will create trillions of suffering digital sentient beings? (probably near 100% if digital sentience is possible… it only takes one)
  2. Is it possible to create a governance structure that would prevent any person in a whole galactic civilisation from creating digital sentience capable of suffering? (sounds really hard especially given the huge distances and potential time delays in messaging… no idea)
  3. What is the point of no return, where a domino is knocked over that inevitably leads to self-perpetuating human expansion and the creation of a galactic civilisation? (somewhere around a self-sustaining civilisation on Mars, I think)

If the answer to question 3 is "Mars colony", then it's possible that creating a colony on Mars is a huge s-risk if we don't first answer question 2. 

Would appreciate some thoughts. 

 

Stuart Armstrong and Anders Sandberg's article on rapidly expanding throughout the galaxy, and Charlie Stross' blog post about griefers, influenced this quick take.

Hey! I'm requesting some help with "Actions for Impact", a Notion page of activities people can get involved in that take less than 30 minutes and contribute to EA cause areas. These include signing petitions, emailing MPs, voting for effective charities in competitions, responding to 'calls for evidence', and sharing things online. EA UK has the Notion page linked on their website: https://www.effectivealtruism.uk/get-involved

It should serve as a hub to leverage the size of the EA community when it's needed. 

I'm excited about the idea and I thought I'd have enough time to keep it updated and share it with organisations and people, but I really don't. If the idea sounds exciting and you have an hour or two spare per week, please DM me - I'd really appreciate a couple of extra hands to get the ball rolling a bit more (especially if you're involved in EA community building, which I'm not at all).

I didn't write this post with the intention of criticising the importance of space governance, so I wouldn't go as far as you. I think reframing space governance in the context of how it supports other cause areas reveals how important it really is. But space governance also has its own problems to deal with, so it's not just a tool or a background consideration. Some (pressing) stuff that could be very bad in the 2030s (or earlier) without effective space governance:

  • China/Russia and the USA disagree over how to claim locations for a lunar base, and both want to build one at the south pole. There is high potential for conflict in space (which would also increase tensions on Earth), and it would set a really bad precedent for the long-term future.
  • I think space mining companies have a high chance of accidentally altering the orbits of multiple asteroids, increasing the risk of very short warning times from asteroids with suddenly changed trajectories (or of creating lots of fragments that could damage satellites). No policy exists to protect against this risk.
  • Earth's orbit is getting very full of debris and satellites. A few more anti-satellite weapons tests, or a disaster involving a meteoroid shower, could trigger Kessler syndrome. Will Elon Musk de-orbit all of his thousands of Starlink satellites?
  • The footprints of the first humans ever to set foot on another celestial body still exist on the moon. They will be destroyed by plumes caused by lunar mining in the 2030s - a huge blow to the long-term future (I think they could even be the greatest cultural heritage of all time to a spacefaring civilisation, and we're going to lose them). All it takes is one small box around some of the footprints to protect 90% of the value.
  • Earth's orbit is filled with debris. The moon's orbit is smaller, and we can't just get rid of satellites by burning them up in an atmosphere. No policy exists yet to set a good precedent here, so the moon's orbit will probably end up even worse than Earth's - operators are already dodging each other's satellites around the moon, and ESA & NASA want to build whole networks for lunar internet.

My conclusions differ throughout the post, including in the title! I'm still not sure whether space governance is more like international policy or more like EA community building - maybe it's a mix of the two, where it's actually like international policy but we should treat it more like EA community building.

So either space governance is a "meta-cause area" or an "area of expertise", but not a "cause area" in the sense that the term is most often used (i.e. a cause to address). 

I disagree with the suggestion but I upvoted as I think it is an important discussion to have on the forum. Especially with the Musk example, longtermism gets a lot of criticism for ideas that aren't associated with it (even in the space policy literature). But I agree with @Davidmanheim's comment. Thanks for making the post!

Thanks for your reply, lots of interesting points :)

Consciousness may not be binary, in that case, we don't know if humans are low, medium, or high consciousness, I only know that I am not at zero. We should then likely assume we are average. Then, the relevant comparison is no longer between P(humanity is "conscious") and P(aliens creating SFCs are "conscious") but between P(humanity's consciousness > 0) and P(aliens-creating-SFC's consciousness > 0)

I particularly appreciate that reframing of consciousness. I think it's probably both binary and continuous, though. Binary in the sense that you need "machinery" capable of producing consciousness (e.g. neurons in a brain seem to work). And then, if you have that capable machinery, you have the range from low to high consciousness, like we see on Earth. If intelligence is related to consciousness level, as it seems to be on Earth, then I would expect any alien with "capable machinery" that's intelligent enough to become spacefaring to have consciousness high enough to satisfy my worries (though not necessarily at the top of the range).

So then any alien civilisation would either be "conscious enough" or "not conscious at all", conditional on (a) the machinery of life being binary in its ability to produce a scale of consciousness and (b) consciousness being correlated with intelligence.

So I'm not betting on it. The stakes are so high (a universe devoid of sentience) that I would have to meet and test the consciousness of aliens with a 'perfect' theory of consciousness before I updated any strategy towards reducing P(ancestral-human SFC), even if there's an extremely high probability of the Civ-Similarity Hypothesis being true.
