This is a linkpost for https://youtu.be/st9EJg_t6yc


Merely listening to alien messages might pose an extinction risk, perhaps even more so than sending messages into outer space. Our new video explores the threat posed by passive SETI and potential mitigation strategies.

Below, you can find the script of the video. Matthew Barnett, the author of this related post, wrote the first draft. Most of the original draft survives, but I've significantly restructured it and made edits, deletions, and additions.


One day, a few Earthly astronomers discover something truly remarkable. They’ve pointed their radio telescopes at a previously unexplored patch in the night sky, and recorded a binary message too structured to have come from any natural source. Curious about what the distant aliens have sent us, the scientists begin trying to decipher the message. After an arduous process of code-breaking, the scientists find that the message encodes instructions on how to build a device. Unfortunately, the aliens left no description of what the device actually does.

Excited to share their discovery with the world, the astronomers agree to publish the alien instructions on the internet, and send a report to the United Nations. Immediately, the news captivates the entire world. For once, there is indisputable proof that we are not alone in the universe. And what’s more: the aliens have sent us a present, and no one knows what its purpose might be.

In a breathtaking frenzy that surpasses even the Space Race of the 1960s, engineers around the world rush to follow these instructions, to uncover the secrets behind the gift the aliens have left for us.

But soon after, a horrifying truth is revealed: the instructions do not describe a cure for all diseases, or a method of solving world hunger. Rather, the aliens have sent us explicit, easy-to-follow instructions on how to build a very powerful bomb: an antimatter explosive device with a yield of over one thousand hydrogen bombs. The most horrifying part is that the instructions require only common household materials, combined in just the right way.

The horror of this development begins to sink in around the world. Many propose that we should censor the information, in an attempt to prevent a catastrophe. But the reality is that the information is already loose. Sooner or later, someone will build the bomb, either out of raw curiosity or deliberate ill intent. And then, right after that, the world will end.

This story is unrealistic. In real life, there’s probably no way to combine common household materials in just the right way to produce an antimatter bomb. Rather, this story illustrates the risk we take by listening to messages in the night sky, and being careless about how these potential messages are disseminated.

With this video, we don’t want to argue that humanity will necessarily go extinct if we listen to alien messages, nor that this is necessarily among the biggest threats we’re facing. In fact, the probability that humanity will go extinct in this exact way is small, but the risk we take by listening to alien messages is still worth considering. As with all potential existential threats, the entire future of humanity is at stake.

We’ll model alien civilizations as being “grabby”, in the sense described by Robin Hanson’s paper on Grabby Aliens, which we covered in two previous videos. Grabby civilizations expand at a non-negligible fraction of the speed of light, and occupy all available star systems in their wake. By doing so, every grabby civilization creates a sphere of expanding influence. Together, all the grabby civilizations will one day enclose the universe with technology and intelligently designed structures.

However, since grabby aliens cannot expand at the speed of light, there is a second larger sphere centered around every grabby civilization’s origin, which is defined by the earliest radio signals sent by the alien civilization as it first gained the capacity for deep-space communication. This larger sphere expands at the speed of light, faster than the grabby civilization itself.

Let’s call the space between the first and second spheres the “outer shell” of the grabby alien civilization. If grabby alien civilizations leave highly distinct marks on galaxies and star systems they’ve occupied, then their civilization should be visible to any observers within this outer shell. As we noted in the grabby aliens videos, if we were in the outer shell of a grabby alien civilization, they would likely appear to be large in the night sky. On the other hand, if grabby civilizations left more subtle traces that we can’t currently spot with our technology, that would explain why we aren’t seeing them.
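
To get a rough sense of how much of this volume the outer shell covers, here is a minimal back-of-the-envelope sketch of ours (not from the Grabby Aliens paper). It assumes, purely for illustration, that the civilization has been expanding at a constant fraction f of light speed ever since its first radio emissions; the sample speeds are arbitrary.

```python
# Illustrative sketch (our assumption, not from the Grabby Aliens paper):
# a civilization expands at a constant fraction f of light speed for a time t
# after its first radio emissions.
#   occupied-sphere radius: r_inner = f * c * t
#   signal-sphere radius:   r_outer = c * t
# The outer shell's share of the signal sphere's volume is 1 - f**3, independent of t.

for f in (0.3, 0.5, 0.8, 0.99):
    shell_fraction = 1 - f**3
    print(f"expansion at {f:.2f}c -> outer shell holds {shell_fraction:.1%} of the signal sphere")
```

Unless expansion happens extremely close to light speed, most of the volume reached by a grabby civilization’s signals is space it has not yet physically occupied, which is exactly where a civilization like ours could be sitting.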

In this video, let’s assume that grabby aliens leave more subtle traces on the cosmos, making it plausible that Earth could be in the outer shell of a grabby alien civilization right now without us realizing it. This is a model variation, but it leaves the basics of the Grabby Aliens theory intact.

Here’s where things could turn dangerous for humanity. If, for example, a grabby alien civilization felt threatened by competition that it might encounter in the future, it could try to wipe out potential competitors inside this outer shell before they ever got the chance to meet physically. It could do so by sending a manipulative deep-space message to any budding civilization in the outer shell gullible enough to listen, tricking it into self-destruction.

In our illustrative story, the example was instructions for building an antimatter bomb from common household materials. A more realistic possibility could be instructions for building an advanced artificial intelligence that turns out to be malicious.

We could make a number of plausible hypotheses about the content of the message, but it’s difficult to foresee what it would actually contain, as the alien civilization would be a lot more advanced than us and, potentially, millions of years old. They would have much more advanced technology, and a lot of time to think carefully about what messages to send to budding civilizations. They could spend centuries crafting the perfect message to hijack or destroy infant civilizations unfortunate enough to tune in.

But maybe you’re still unconvinced. First contact with aliens could even be the best thing ever to happen to humanity. Aliens might be very friendly to us, and could send us information that would help our civilization and raise our well-being to unprecedented levels.

Perhaps this whole idea is rather silly. Our parochial, tribal brains are simply blind to the reality that very advanced aliens would have abandoned warfare, domination, and cold-hearted manipulation long ago, and would instead be devoted to the mission of uplifting all sentient life.

On the other hand, life on other planets probably arose by survival of the fittest, as our species did, which generally favors organisms that are expansionist and greedy for resources. Furthermore, we are more likely to get a message from an expansionist civilization than a non-expansionist civilization, since the latter civilizations will command far fewer resources and will presumably be more isolated from one another. This provides us even more reason to expect that any alien civilization that we detect might try to initiate a first strike against us.

It’s also important to keep in mind that the risk of a malicious alien message is still significant even if we think aliens are likely to be friendly. For instance, even if we believe that 90% of alien civilizations in the universe will be friendly to us in the future, the prospect of encountering the 10% that are unfriendly could be so horrifying that we are better off plugging our ears and tuning out for now, at least until we grow up as a species and figure out how to handle such information without triggering a catastrophe.
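
To make that intuition concrete, here is a rough expected-value sketch; the probabilities and payoffs below are placeholders chosen for illustration, not estimates from the video.

```python
# Placeholder numbers for illustration only, not estimates from the video.
p_friendly = 0.9        # assumed chance the sender is friendly
benefit_friendly = 1.0  # benefit of acting on a friendly message (arbitrary units)
cost_hostile = 1000.0   # harm of acting on a hostile one (e.g. an extinction-level trap)

expected_value = p_friendly * benefit_friendly - (1 - p_friendly) * cost_hostile
print(f"expected value of acting on the message: {expected_value:+.1f}")
# With these numbers the expectation is -99.1: even a 90% friendly universe
# does not make opening the message worthwhile if the downside is large enough.
```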

But even if SETI is dangerous, banning the search for extraterrestrial intelligence is an unrealistic goal at this moment in time. Even if it were the right thing to do to mitigate the risk of premature human extinction, there is practically no chance that enough people will be convinced that this is the right course of action.

More realistically, we should instead think about what rules and norms humanity should adopt to robustly give our civilization a better chance at surviving a malicious SETI attack.

As a start, it seems wise to put in place a policy to review any confirmed alien messages for signs that they might be dangerous, before releasing any potentially devastating information to the public.

Consider two possible policies we could implement concerning how we review alien messages. 

In the first policy, we treat every alien message with an abundance of caution. After a signal from outer space is confirmed to be a genuine message from extraterrestrials, humanity forms a committee with the express purpose of debating whether this information should be released to the public, or whether it should be sealed away for at least another few decades, at which point another debate will take place.

In the second policy, after a signal is confirmed to be a genuine message from aliens, we immediately release all the data publicly, flooding the internet with whatever information aliens have sent us. In this second policy, there is no review process; everything we receive from aliens, no matter the content, is instantly declassified and handed over to the wider world without a moment’s hesitation.

If you are even mildly sympathetic to our thesis here — that SETI is risky for humanity — you probably agree that the second policy would be needlessly reckless, and might put our species in danger. Yet the second policy is precisely what the influential SETI Institute recommends humanity do in the event of successful alien contact. You can find more information in their document titled Protocols for an ETI Signal Detection, which was adopted unanimously by the SETI Permanent Study Group of the International Academy of Astronautics in 2010.

The idea that SETI might be dangerous is not new. It was perhaps first showcased in the 1961 British drama serial A for Andromeda, in which aliens from Andromeda sent humanity instructions for building an artificial intelligence whose final goal was to subjugate humanity. In the show, humans ended up victorious over the alien artificial intelligence, but we would not be so lucky in the real world.

In intellectual communities and academia, the idea that SETI is dangerous has received very little attention, either positive or negative. Instead, the spotlight has gone to the risk from METI: sending messages into outer space rather than listening for them. This might explain why, as a species, we do not currently appear to be taking the risk from SETI very seriously.

Yet it’s imperative that humanity safeguard its own survival. If we survive the next few centuries, we have great potential as a species. In the long run, we could reach the stars and become a grabby civilization ourselves, potentially expanding into thousands or millions of galaxies and creating trillions of worthwhile lives. Without endangering any lives already present in other star systems, of course! To ensure we have a promising future, let’s proceed carefully with SETI. It could end up being the most important decision we ever make.


 

Comments (5)



Since we're already in existential danger due to AI risk, it's not obvious that we shouldn't read a message that has only a 10% chance of being unfriendly; a friendly message could pretty reliably save us from other risks. Additionally, I can make an argument for friendly messages potentially being quite common:

If we could pre-commit now to never doing a SETI attack ourselves, or if we could commit to only sending friendly messages, then we'd know that many other civs, having at some point stood in the same place as us, will have also made the same commitment, and our risk would decrease.
But I'm not sure; it's a nontrivial question whether that would be a good deal for us to make: would the reduction in the risk of being subjected to a SETI attack be greater than the expected losses from no longer being allowed to do SETI attacks ourselves?

Cross-posting with multiple authors is broken as a feature.

When Matthew had to approve co-authorship, the post appeared on the home page, but if clicked on, it only showed an error message.

Then I moved the post to drafts, and when I interacted with it using the three dots on the right side, there was another error message.

Now Matthew doesn't appear as a coauthor here.

Haven't read the post, but my answer to the title is "yes". SETI seems like a great example for researchers unilaterally rushing to do things that might be astronomically impactful and are very risky; driven by the fear that someone else will end up snatching the credit and glory for their brilliant idea.

[EDIT: changed "not net-positive" to "very risky".]

Great job once again! Loved it :)

Thanks :)
