Astrobiologist @ Imperial College London
Partnerships Coordinator @ SGAC Space Safety and Sustainability Project Group
Interested in: Space Governance, Great Power Conflict, Existential Risk, Cosmic threats, Academia, International policy
If you'd like to chat about space and existential risk, please book a meeting! I'm particularly interested in the role of international collaboration in reducing the probability of a great power conflict, and in space activities that tackle existential risks: monitoring nuclear weapons testing and climate change impacts, missions to test asteroid deflection, and research to understand cosmic threats. I'm based in London and happy to meet in person. You can email me at j.stone22 at imperial dot ac dot uk
I am a freelance scientific illustrator. I create diagrams to visualise your research for presentations, publications, grant proposals, visual summaries etc.
Check out this post on the forum for more info.
Haha good point, that's precisely why I asked.
I've just put together the trial version of "Actions for Impact", a newsletter that leverages the effective altruism (EA) community's size to complete quick, high-impact tasks supporting EA cause areas. I'm getting feedback on the first version at the moment.
DM me on the forum if you're interested and I'll send you the first version - feedback is very welcome!
Great post! I've never been convinced that the Precipice ends when we become multi-planetary. So I really enjoyed this summary and critique of Thorstad. And I might go even further and argue that not only does space settlement not mitigate existential risk, but it actually might make it worse.
I think it's entirely possible that the more planets in our galaxy we colonise, the higher the likelihood of the extinction of all life becomes. It breaks down like this:
Assumption 1: At the limits of technology, the powers of destruction will always outstrip the powers of construction or defence, i.e. there will be technologies that a galactic civilisation could not defend against once they were created, even if the colonies remain isolated and do not communicate with one another.
Examples:
Assumption 2: For any of the above examples (only one of them has to be possible), it would take just one civilisation in the galaxy creating it (by accident or otherwise) to put all life in the galaxy at risk.
Assumption 3: It would be extremely difficult to centrally govern all of these colonies and detect the development of these technologies, as the colonies will be light-years apart; sending and receiving messages between them could take thousands of years.
Assumption 4: The more colonies that exist in our galaxy, the higher the likelihood that one of these galaxy-ending inventions will be created.
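The compounding effect in the last assumption can be sketched with a toy calculation. If each colony independently has some small probability p of ever creating a galaxy-ending technology, then the chance that at least one of n colonies does so is 1 - (1 - p)^n, which climbs towards certainty as n grows. The value of p below is a made-up placeholder, not an estimate from this argument:

```python
# Toy model (illustrative only): probability that at least one of n
# independent colonies creates a galaxy-ending technology, given each
# colony has an independent probability p of doing so.
def risk_of_at_least_one(n: int, p: float) -> float:
    return 1 - (1 - p) ** n

# Even a tiny per-colony risk compounds quickly as colonies multiply.
for n in (1, 10, 100, 1000):
    print(n, round(risk_of_at_least_one(n, p=0.001), 4))
```

With p = 0.001, one colony poses a 0.1% risk, but a thousand colonies push the risk above 60% - which is the intuition behind the claim that settlement could make things worse rather than better.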
So if the above is true, then I see 3 options:
I would like to look into this further. If it's true, then longtermism is pretty much bust and we should focus on saving animals from factory farming instead... or solve the galaxy-destroying problem. It would be nice to have a long pause to do that.
This event is now open to virtual attendees! It is happening today at 6:30PM BST. The discussion will focus on how the space sector can overcome international conflicts, inspired by the 80,000 Hours problem profiles on great power conflict and space governance.
I have written this post introducing space and existential risk and this post on cosmic threats, and I've come up with some ideas for stuff I could do that might be impactful. So, inspired by this post, I am sharing a list of ideas for impactful projects I could work on in the area of space and existential risk. If anyone working on anything related to impact evaluation, policy, or existential risk feels like ranking these in order of what sounds the most promising, please do that in the comments. It would be super useful! Thank you! :)
(a) Policy report on the role of the space community in tackling existential risk: Put together a team of people working in different areas related to space and existential risk (cosmic threats, international collaborations, nuclear weapons monitoring, etc.). Conduct research and come together to write a policy report with recommendations for international space organisations to help tackle existential risk more effectively.
(b) Anthology of articles on space and existential risk: Ask researchers to write articles about topics related to space and existential risk and put them all together into an anthology. Publish it somewhere.
(c) Webinar series on space and existential risk: Build a community of people in the space sector working on areas related to existential risk by organising a series of webinars, each open to anyone to attend remotely.
(d) Series of EA forum posts on space and existential risk: This should help guide people to an impactful career in the space sector, build a community in EA, and better integrate space into the EA community.
(e) Policy adaptation exercise SMPAG > AI safety: Use a mechanism mapping policy adaptation exercise to build on the success of the space sector in tackling asteroid impact risks (through the Space Mission Planning Advisory Group, SMPAG) to figure out how organisations working on AI safety can be more effective.
(f) White paper on Russia and international space organisations: Russia's involvement in international space missions and organisations following its invasion of Ukraine could be a good case study for building robust international organisations. E.g. ESA suspended cooperation with Russia, yet Russia is still actively participating in the International Space Station and remains a member of SMPAG without participating. Figuring out why Russia stayed involved with some organisations and not others could be useful.
(g) Organise an in-person event on impactful careers in the space sector: This would be aimed at effective altruists and would help gauge interest and provide value.
I don't think there's anything we can do right now about rogue celestial bodies, so for me they're not worth thinking about.
For space weather threats like solar flares, the main work is hardening technology against high levels of radiation, especially nuclear reactors and national defence infrastructure. Researching exactly what the impacts of different threats might be, and their probabilities, would definitely help governments defend against them more effectively.
Yes good point, thank you. I have updated the post to clarify that the probability estimate is for a scenario as bad as the worst case.
I think that if I assess severity in terms of population loss, it will be a lot harder to pin down. In the severity scores I'm also thinking about how badly each threat would affect our long-term future, and how it affects the probability of other x-risks. So if I assessed it on population loss I might have to add other factors, and it might be a bit out of the scope of what I'm going for with the post. The severity estimates are fairly open to interpretation as I've done them, and I think that's fine for an introduction/overview of cosmic threats.
Thanks for the feedback :)
Interestingly, the singularity could actually have the opposite effect. Whereas human exploration of the Solar System was once decades away, extremely intelligent AI could accelerate technology to the point where it all becomes possible within a decade.
The space policy landscape is not ready for that at all. There is no international framework for governing the use of space resources, and human exploration of Mars is still technically prohibited under planetary protection rules against contaminating the surface (and the Moon! Yes, we still care a lot).
So I lean more towards superintelligent AI being a reason to care more about space, not less. Will MacAskill discusses it in more detail here.