This is a special post for quick takes by JordanStone. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Elon Musk recently presented SpaceX's roadmap for establishing a self-sustaining civilisation on Mars (by 2033 lol). Aside from the timeline, I think there may be some important questions to consider with regard to space colonisation and s-risks:

  1. In a galactic civilisation of thousands of independent and technologically advanced colonies, what is the probability that one of those colonies will create trillions of suffering digital sentient beings? (probably near 100% if digital sentience is possible… it only takes one)
  2. Is it possible to create a governance structure that would prevent any person in a whole galactic civilisation from creating digital sentience capable of suffering? (sounds really hard especially given the huge distances and potential time delays in messaging… no idea)
  3. What is the point of no return, where a domino is knocked over that inevitably leads to self-perpetuating human expansion and the creation of a galactic civilisation? (somewhere around a self-sustaining civilisation on Mars, I think). 

If the answer to question 3 is "Mars colony", then it's possible that creating a colony on Mars is a huge s-risk if we don't first answer question 2. 

Would appreciate some thoughts. 

 

Stuart Armstrong and Anders Sandberg's article on expanding rapidly throughout the galaxy, and Charlie Stross' blog post about griefers, influenced this quick take.

Interesting ideas! I read your post Interstellar travel will probably doom the long-term future with enthusiasm, and I've had similar concerns for some years now. Regarding your questions, here are my thoughts:

  1. Probability of s-risk: I agree that in a sufficiently large space civilization (one that isn't controlled by your Governance Structure), the probability of an s-risk is almost 100% (and not just from digital minds). Let's unpack this: our galaxy has roughly 200 billion stars (2*10^11), which means at least 10^10 viable, settleable star systems. A Dyson swarm around a sun-like star could conservatively support 10^20 biological humans (today we are 10^10, and this number is extrapolated from how much sunlight is needed to sustain one human with conventional farming). 80k defines an s-risk as "something causing vastly more suffering than has existed on Earth so far". This could easily be "achieved" even without digital minds if just one colony out of the 10^10 decides it wants to create lots of wildlife preserves and its Dyson swarm consists mostly of those. With around 10^10 times as much living area as Earth, and as many times more wild animals, a single year around this star would produce more cumulative suffering than all of Earth's history (which has had only ~1 billion (10^9) years of animal life). This would not necessarily mean that the whole galactic civilization was morally net bad. A galaxy with 10,000 hellish star systems, 10 million heavenly systems, and 10 billion rather normal but good systems would still be a pretty awesome future from a total-utility standpoint. My point is that an s-risk defined in terms of Earth's suffering becomes an increasingly low bar to cross the larger your civilization is. At some point you'd need insanely good "quality control" in every corner of your civilization. This would be analogous to ensuring that every single one of the 10^10 humans on Earth today is happy and never gets hurt even once. That seems like a bit too high a standard for how well the future should go.
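A minimal back-of-envelope sketch of the wildlife-preserve comparison above (the habitat ratio and the ~10^9 years of animal life are the rough, order-of-magnitude assumptions from this comment, not established figures):

```python
# Back-of-envelope check of the wildlife-preserve example above.
# All quantities are rough, order-of-magnitude assumptions from the comment.

earth_animal_history_years = 1e9      # ~1 billion years of animal life on Earth
swarm_habitat_in_earth_areas = 1e10   # assumed living area of one Dyson swarm, in Earth-areas

# Cumulative wild-animal "habitat-years" over Earth's entire animal history
# (treating Earth's living area as 1 unit)
earth_total_habitat_years = earth_animal_history_years * 1

# Habitat-years accrued by a single wildlife-preserve swarm in just one year
swarm_one_year_habitat_years = 1 * swarm_habitat_in_earth_areas

# ~10x: one year around one such star already exceeds Earth's historical total
print(swarm_one_year_habitat_years / earth_total_habitat_years)
```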

But that nitpick aside, I currently expect that a space future without the kind of governance system you're describing still has a high chance of ending up net bad.

  2. How to create the Governance Structure (GS): Here is my idea of how this could look: A superintelligence (which could also be post-human) creates countless identical but independent GS copies of itself that expand through the universe and accompany every settlement mission. Their detailed value system is made virtually unalterable and built to last for trillions of years. This, I think, is technically achievable: strong copy-error and damage protections, not updatable via new evidence, strongly defended against outside manipulation attacks. The GS copies largely act on their own in their respective star-system colonies, but have protocols in place for coordinating loosely across star systems and millions of years. I think this could work somewhat analogously to an ant colony: lots of small, selfless agents locally interacting with one another, everyone with exactly the same values and probably secure intra-hive communication methods; they could still mount an impressively coordinated galactic response to, say, a von Neumann probe invasion. I could expand further on this idea if you'd like.

  3. Point of no return: I'm unsure about this. Possible such points: a space race gets going in earnest (with geopolitical realities making a Long Reflection infeasible); the first ASI is created and does not have the goal of preventing s- and x-risks; the first (self-sustaining) space colony gains political independence; the first interstellar mission (to create a colony) leaves the solar system; a sub-par, real-world implementation of the Governance Structure breaks down somewhere in human-settled space.

My current view is still that the two most impactful things (at the moment) are 1) ensuring that any ASI that gets developed is safe and benevolent, and 2) improving how global and space politics is conducted. Any specific "points of no return" seem to me very contingent on the exact circumstances at that point. Nevertheless, thinking ahead about which situations might be especially dangerous or crucial seems like a worthwhile pursuit to me.

Hi Birk. Thank you for your very in-depth response, I found it very interesting. That's pretty much how I imagined the governance system when I wrote the post. I actually had it as a description like that originally, but I hated the implications for liberalism, so I took a step back and listed requirements instead (which didn't actually help). 

The "points of no return" do seem quite contingent, and I'm always sceptical about the tractability of trying to prevent something from happening - usually my approach is: it's probably gonna happen, how do we prepare? But besides that, I'm going to look into more specific "points of no return" as there could be a needle hiding in the noodles somewhere. I feel like this is the kind of area where we could be missing something, e.g. the point of no return is really close, or there could be a tractable way to influence the implementation of that point of no return.

probably near 100% if digital sentience is possible… it only takes one


Can you expand on this? I guess the stipulation of thousands of advanced colonies does some of the work here, but this still seems overconfident to me given how little we understand about digital sentience.

Yeah sure, it's like the argument that if you put infinite chimpanzees in front of typewriters, one of them will write Shakespeare. A galactic civilisation would be very dispersed, and most likely each 'colony' occupying a solar system would govern itself independently. So the colonies could be treated as independent actors sharing the same space, and there might be hundreds of millions of them. In that case, the probability that one of those millions of independent actors creates astronomical suffering becomes extremely high, near 100%. I used digital sentience as an example because it's the risk of astronomical suffering that I find most terrifying - IF digital sentience is possible, then the number of suffering beings that could be created could conceivably outweigh the value of a galactic civilisation. That 'IF' contains a lot of uncertainty on my part. 

But this also applies to tyrannous governments, how many of those independent civilisations across a galaxy will become tyrannous and cause great suffering to their inhabitants? How many of those civilisations will terraform other planets and start biospheres of suffering beings?

The same logic also applies to x-risks that affect a galactic civilisation:

all it takes is one civilization of alien ass-hat griefers who send out just one Von Neumann Probe programmed to replicate, build N-D lasers, and zap any planet showing signs of technological civilization, and the result is a galaxy sterile of interplanetary civilizations until the end of the stelliferous era (at which point, stars able to power an N-D laser will presumably become rare). (Charlie Stross)

Stopping these things from happening seems really hard. It's like a galactic civilisation needs to be designed right from the beginning to make sure that no future colony does this.

Thanks. In the original quick take, you wrote "thousands of independent and technologically advanced colonies", but here you write "hundreds of millions".

If you think there's a 1 in 10,000 or 1 in a million chance of any independent and technologically advanced colony creating astronomical suffering, it matters if there are thousands or millions of colonies. Maybe you think it's more like 1 in 100, and then thousands (or more) would make it extremely likely.
 

Yeah that's true. 

I think 1000 is where I would start to get very worried intuitively, but there would be hundreds of millions of habitable planets in the Milky Way, so theoretically a galactic civilisation could have that many if it didn't kill itself before then. 

I guess the probability of one of these civilisations initiating an s-risk or galactic x-risk would just increase with the size of the galactic civilisation. So the more that humanity expands throughout the galaxy, the greater the risk.
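A minimal sketch of how the "it only takes one" argument scales, assuming (purely for illustration) that each colony has some small, independent per-colony probability p of ever initiating astronomical suffering; both p and the colony counts below are hypothetical:

```python
# Probability that at least one of N independent colonies initiates an s-risk,
# given an assumed (hypothetical) per-colony probability p.

def p_at_least_one(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

for n in (1_000, 1_000_000, 100_000_000):    # thousands vs. millions of colonies
    for p in (1e-2, 1e-4, 1e-6):             # illustrative per-colony risks
        print(f"N={n:>11,}  p={p:.0e}  P(at least one) = {p_at_least_one(p, n):.4f}")

# With p = 1/100, a thousand colonies already make it near-certain (~0.99996);
# with p = 1/1,000,000 it takes on the order of millions of colonies.
```

Under this (very simplified) independence assumption, the risk climbs towards certainty as the number of colonies grows, which is why the thousands-vs-millions distinction above matters a lot for small per-colony probabilities but washes out for larger ones.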

Hey! I'm requesting some help with "Actions for Impact", a Notion page listing activities people can get involved in that take less than 30 minutes and can contribute to EA cause areas. These include signing petitions, emailing MPs, voting for effective charities in competitions, responding to 'calls for evidence', or sharing something online. EA UK has the Notion page linked on their website: https://www.effectivealtruism.uk/get-involved 

It should serve as a hub to leverage the size of the EA community when it's needed. 

I'm excited about the idea and I thought I'd have enough time to keep it updated and share it with organisations and people, but I really don't. If the idea sounds exciting and you have an hour or two per week spare, please DM me - I'd really appreciate a couple of extra hands to get the ball rolling a bit more (especially if you have involvement in EA community building, as I don't at all). 

I'm thinking about organising a seminar series on space and existential risk, mostly because it's something I would really like to see. The series would cover a wide range of topics:

  • Asteroid Impacts
  • Building International Collaborations
  • Monitoring Nuclear Weapons Testing
  • Monitoring Climate Change Impacts
  • Planetary Protection from Mars Sample Return
  • Space Colonisation
  • Cosmic Threats (supernovae, gamma-ray bursts, solar flares)
  • The Overview Effect
  • Astrobiology and Longtermism 

I think this would be an online webinar series. Would this be something people would be interested in? 

I have written this post introducing space and existential risk and this post on cosmic threats, and I've come up with some ideas for stuff I could do that might be impactful. So, inspired by this post, I am sharing a list of ideas for impactful projects I could work on in the area of space and existential risk. If anyone working on anything related to impact evaluation, policy, or existential risk feels like ranking these in order of what sounds the most promising, please do that in the comments. It would be super useful! Thank you! :)

(a) Policy report on the role of the space community in tackling existential risk: Put together a team of people working in different areas related to space and existential risk (cosmic threats, international collaborations, nuclear weapons monitoring, etc.). Conduct research and come together to write a policy report with recommendations for international space organisations to help tackle existential risk more effectively. 

(b) Anthology of articles on space and existential risk: Ask researchers to write articles about topics related to space and existential risk and put them all together into an anthology. Publish it somewhere. 

(c) Webinar series on space and existential risk: Build a community of people in the space sector working on areas related to existential risk by organising a series of webinars. Each webinar will be available virtually.

(d) Series of EA forum posts on space and existential risk: This should help guide people to an impactful career in the space sector, build a community in EA, and better integrate space into the EA community. 

(e) Policy adaptation exercise SMPAG > AI safety: Use a mechanism mapping policy adaptation exercise to build on the success of the space sector in tackling asteroid impact risks (through the SMPAG) to figure out how organisations working on AI safety can be more effective. 

(f) White paper on Russia and international space organisations: Russia’s involvement in international space missions and organisations following its invasion of Ukraine could be a good case study for building robust international organisations. E.g. Russia was ousted from ESA, is still actively participating on the International Space Station, and is still a member of SMPAG but not participating. Figuring out why Russia stayed involved or didn’t with each organisation could be useful. 

(g) Organise an in-person event on impactful careers in the space sector: This would be aimed at effective altruists and would help gauge interest and provide value. 

(d) might be interesting to read

The space industry is well-funded and already cares a lot about demonstrating impact (using a broader definition of impact than EA) to justify its funding, so (a)-(c) might be possible with industry support, and to some extent already exists. 

I think the overarching story behind (f) is relatively uncomplicated, particularly in the context of ongoing trade between Russia and Ukraine-supporters over oil etc.: Roscosmos continued to collaborate with NASA et al. on things like the ISS because agreements remained in place and were too critical to suspend. Russia was never actually part of ESA, and I suspect many people would have preferred it if Roscosmos had been kicked off projects like ExoMars earlier. It probably helps that the engineers and cosmonauts on both sides are likely a good deal more level-headed than Dmitry Rogozin, but I don't think we'll hear what went on behind closed doors for a while...

I am a researcher in the space community and I recently wrote a post introducing the links between outer space and existential risk. I'm thinking about developing this into a sequence of posts on the topic. I plan to cover:

  1. Cosmic threats - what are they, how are they currently managed, and what work is needed in this area. Cosmic threats include asteroid impacts, solar flares, supernovae, gamma-ray bursts, aliens, rogue planets, pulsar beams, and the Kessler Syndrome. I think it would be useful to provide a summary of how cosmic threats are handled, and determine their importance relative to other existential threats.
  2. Lessons learned from the space community. The space community has been very open with data sharing - the utility of this for tackling climate change, nuclear threats, ecological collapse, animal welfare, and global health and development cannot be overstated. I may include perspective shifts here, provided by views of Earth from above and the limitless potential that space shows us. 
  3. How to access the space community's expertise, technology, and resources to tackle existential threats. 
  4. The role of the space community in global politics. Space has a big role in preventing great power conflicts and building international institutions and connections. With the space community growing a lot recently, I'd like to provide a briefing on the role of space internationally to help people who are working on policy and war. 

Would a sequence of posts on space and existential risk be something that people would be interested in? (Please agree- or disagree-vote on this post.) I haven't seen much on space on the forum (apart from space governance), so it would be something new.

Hey Jordan, I work in the space sector and I'm also based in London. I am currently working on a Government project assessing the impact of space weather on UK critical national infrastructure. I've written a little on the existential risk of space weather, too, e.g. https://forum.effectivealtruism.org/posts/9gjc4ok4GfwuyRASL/cosmic-rays-could-cause-major-electronic-disruption-and-pose

I'll message you as it would be good to connect!

Hi Matt. Sorry I missed your post and thanks for getting in touch! Your research sounds very interesting, I've messaged you directly :)

Greetings! I'm a doctoral candidate and I have spent three years working as a freelance creator, specializing in crafting visual aids, particularly of a scientific nature. However, I'm enthusiastic about contributing my time to generate visuals that effectively support EA causes. 

Typically, my work involves producing diagrams for academic grant applications, academic publications, and presentations. Nevertheless, I'm open to assisting with outreach illustrations or social media visuals as well. If you find yourself in need of such assistance, please don't hesitate to get in touch! I'm happy to hop on a Zoom chat.

https://forum.effectivealtruism.org/events/cJnwCKtkNs6hc2MRp/panel-discussion-how-can-the-space-sector-overcome 

This event is now open to virtual attendees! It is happening today at 6:30PM BST. The discussion will focus on how the space sector can overcome international conflicts, inspired by the great power conflict and space governance 80K problem profiles. 

I searched google for "gain of function UK" and the first hit was a petition to ban gain of function research in the UK that only got 106 signatures out of the 10,000 required. 

https://petition.parliament.uk/petitions/576773#:~:text=Closed%20petition%20Ban%20%E2%80%9CGain%20of,the%20consequences%20could%20be%20severe.

How did this happen? Should we try again? 
