This is a linkpost for https://youtu.be/st9EJg_t6yc


Merely listening to alien messages might pose an extinction risk, perhaps even more so than sending messages into outer space. Our new video explores the threat posed by passive SETI and potential mitigation strategies.

Below, you can find the script of the video. Matthew Barnett, the author of this related post, wrote the first draft. Most of the original draft survives, but I've significantly restructured it and made edits, deletions, and additions.


One day, a few Earthly astronomers discover something truly remarkable. They’ve pointed their radio telescopes at a previously unexplored patch of the night sky and recorded a binary message too inexplicable to have come from any natural source. Curious about what the distant aliens have sent us, the scientists begin trying to decipher the message. After an arduous process of code-breaking, they find that the message encodes instructions on how to build a device. Unfortunately, the aliens left no description of what the device actually does.

Excited to share their discovery with the world, the astronomers agree to publish the alien instructions on the internet, and send a report to the United Nations. Immediately, the news captivates the entire world. For once, there is indisputable proof that we are not alone in the universe. And what’s more: the aliens have sent us a present, and no one knows what its purpose might be.

In a breathtaking frenzy that surpasses even the Space Race of the 1960s, engineers around the world rush to follow these instructions, to uncover the secrets behind the gift the aliens have left for us.

But soon after, a horrifying truth is revealed: the instructions do not describe a cure for all diseases, or a method of solving world hunger. Rather, the aliens have sent us explicit, easy-to-follow instructions on how to build a very powerful bomb: an antimatter explosive device with a yield of over one thousand hydrogen bombs. The most horrifying part is that the instructions require only common household materials, combined in just the right way.

The horror of this development begins to sink in around the world. Many propose censoring the information in an attempt to prevent a catastrophe. But the reality is that the information is already loose. Sooner or later, someone will build the bomb, whether out of raw curiosity or deliberate ill intent. And then, right after that, the world will end.

This story is unrealistic. In real life, there’s probably no way to combine common household materials in just the right way to produce an antimatter bomb. Rather, this story illustrates the risk we take by listening to messages in the night sky, and being careless about how these potential messages are disseminated.

With this video, we don’t want to argue that humanity will necessarily go extinct if we listen to alien messages, nor that this is necessarily among the biggest threats we’re facing. In fact, the probability that humanity will go extinct in this exact way is small, but the risk we take by listening to alien messages is still worth considering. As with all potential existential threats, the entire future of humanity is at stake.

We’ll model alien civilizations as being “grabby”, in the sense described by Robin Hanson’s paper on Grabby Aliens, which we covered in two previous videos. Grabby civilizations expand at a non-negligible fraction of the speed of light, and occupy all available star systems in their wake. By doing so, every grabby civilization creates a sphere of expanding influence. Together, all the grabby civilizations will one day enclose the universe with technology and intelligently designed structures.

However, since grabby aliens cannot expand at the speed of light, there is a second larger sphere centered around every grabby civilization’s origin, which is defined by the earliest radio signals sent by the alien civilization as it first gained the capacity for deep-space communication. This larger sphere expands at the speed of light, faster than the grabby civilization itself.

Let’s call the space between the first and second spheres the “outer shell” of the grabby alien civilization. If grabby alien civilizations leave highly distinct marks on galaxies and star systems they’ve occupied, then their civilization should be visible to any observers within this outer shell. As we noted in the grabby aliens videos, if we were in the outer shell of a grabby alien civilization, they would likely appear to be large in the night sky. On the other hand, if grabby civilizations left more subtle traces that we can’t currently spot with our technology, that would explain why we aren’t seeing them.
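The geometry of this outer shell can be made concrete with a back-of-the-envelope sketch. The numbers below (an expansion speed of half the speed of light, a civilization age of 100 million years) are purely illustrative assumptions, not figures from the Grabby Aliens model:

```python
# Illustrative sketch of a grabby civilization's "outer shell".
# All numbers are assumptions chosen for illustration only.

C = 1.0          # speed of light, in light-years per year
v = 0.5 * C      # assumed expansion speed: half the speed of light
t = 100e6        # assumed age of the civilization, in years

inner_radius = v * t   # edge of physically occupied space (light-years)
outer_radius = C * t   # edge reached by its earliest radio signals

shell_thickness = outer_radius - inner_radius

# Fraction of the signal sphere's volume lying in the outer shell:
# regions that can hear the civilization but haven't yet been reached.
shell_volume_fraction = 1 - (inner_radius / outer_radius) ** 3

print(f"shell thickness:       {shell_thickness:.3e} light-years")
print(f"shell volume fraction: {shell_volume_fraction:.3f}")
```

Under these assumed numbers, most of the volume that can hear the civilization (87.5% of the signal sphere) has not yet been physically reached, which is why the outer shell could contain many budding civilizations like ours.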

In this video, let’s assume that grabby aliens leave more subtle traces on the cosmos, making it plausible that Earth could be in the outer shell of a grabby alien civilization right now without us currently realizing that. This is a model variation, but it leaves the basics of the Grabby Aliens theory intact.

Here’s where things could turn dangerous for humanity. If, for example, a grabby alien civilization felt threatened by competition it might encounter in the future, it could try to wipe out potential competitors inside this outer shell before they ever got the chance to meet physically. If they wanted, the grabby alien civilization could send out a manipulative deep-space message that lures any budding civilization gullible enough to listen into self-destruction.

In our illustrative story, we used the example of instructions for building an antimatter bomb from household materials. A more realistic possibility could be instructions for building an advanced artificial intelligence that then turns out to be malicious.

We could form a number of plausible hypotheses about the content of the message, but it’s difficult to foresee what it would actually contain, as the alien civilization would be far more advanced than us and potentially millions of years old. They would have much more advanced technology and plenty of time to think carefully about what messages to send to budding civilizations. They could spend centuries crafting the perfect message to hijack or destroy infant civilizations unfortunate enough to tune in.

But maybe you’re still unconvinced. Potential first contact with aliens could even be the best thing to ever happen to humanity. Aliens might be very friendly to us, and could send us information that would help our civilization and raise our well-being to unprecedented levels. 

Perhaps this whole idea is rather silly. Our parochial, tribal brains are simply blind to the reality that very advanced aliens would have abandoned warfare, domination, and cold-hearted manipulation long ago, and would instead be devoted to the mission of uplifting all sentient life.

On the other hand, life on other planets probably arose by survival of the fittest, as our species did, which generally favors organisms that are expansionist and greedy for resources. Furthermore, we are more likely to get a message from an expansionist civilization than a non-expansionist civilization, since the latter civilizations will command far fewer resources and will presumably be more isolated from one another. This provides us even more reason to expect that any alien civilization that we detect might try to initiate a first strike against us.

It’s also important to keep in mind that the risk of a malicious alien message is still significant even if we think aliens are likely to be friendly. For instance, even if we believe that 90% of alien civilizations in the universe will be friendly to us in the future, the prospect of encountering the 10% that are unfriendly could be so horrifying that we are better off plugging our ears and tuning out for now, at least until we grow up as a species and figure out how to handle such information without triggering a catastrophe.
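A toy expected-value calculation shows why even a small probability of hostility can dominate the decision. The probabilities and payoffs below are illustrative assumptions, not estimates made anywhere in this post:

```python
# Toy expected-value sketch of the "listen vs. plug our ears" decision.
# All probabilities and payoffs are illustrative assumptions.

p_friendly = 0.9
p_hostile = 0.1

benefit_friendly = 100      # assumed value of a helpful message
cost_hostile = -10_000      # assumed cost of a civilization-ending message

ev_listen = p_friendly * benefit_friendly + p_hostile * cost_hostile
ev_ignore = 0.0             # baseline: neither benefit nor harm

print(f"EV of listening: {ev_listen}")
print(f"EV of ignoring:  {ev_ignore}")
```

Under these assumptions, listening is net-negative despite the 90% chance of friendliness, because the downside is so much larger than the upside. The conclusion is entirely sensitive to the assumed payoffs, which is precisely why the question deserves careful deliberation rather than a default policy.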

But even if SETI is dangerous, banning the search for extraterrestrial intelligence is an unrealistic goal at this moment in time. Even if it were the right thing to do to mitigate risk of premature human extinction, there is practically no chance that enough people will be convinced that this is the right course of action.

More realistically, we should instead think about what rules and norms humanity should adopt to robustly give our civilization a better chance at surviving a malicious SETI attack.

As a start, it seems wise to put in place a policy to review any confirmed alien messages for signs that they might be dangerous, before releasing any potentially devastating information to the public.

Consider two possible policies we could implement concerning how we review alien messages. 

In the first policy, we treat every alien message with an abundance of caution. After a signal from outer space is confirmed to be a genuine message from extraterrestrials, humanity forms a committee with the express purpose of debating whether this information should be released to the public, or whether it should be sealed away for at least another few decades, at which point another debate will take place.

In the second policy, after a signal is confirmed to be a genuine message from aliens, we immediately release all the data publicly, flooding the internet with whatever information aliens have sent us. In this second policy, there is no review process; everything we receive from aliens, no matter the content, is instantly declassified and handed over to the wider world without a moment’s hesitation.

If you are even mildly sympathetic to our thesis here — that SETI is risky for humanity — you probably agree that the second policy would be needlessly reckless, and might put our species in danger. Yet the second policy is precisely what the influential SETI Institute recommends humanity do in the event of successful alien contact. You can find more information in their document titled Protocols for an ETI Signal Detection, which was adopted unanimously by the SETI Permanent Study Group of the International Academy of Astronautics in 2010.

The idea that SETI might be dangerous is not new. It was perhaps first showcased in the 1961 British drama serial A for Andromeda, in which aliens from Andromeda sent humanity instructions on how to build an artificial intelligence whose final goal was to subjugate humanity. In the show, humans ended up victorious over the alien artificial intelligence, but we might not be so lucky in the real world.

In intellectual communities and academia, the idea that SETI is dangerous has received very little attention, either positive or negative. Instead, the spotlight has gone to the risk from METI: sending messages into outer space rather than listening for them. This might explain why we, as a species, do not currently appear to be taking the risk from SETI very seriously.

Yet it’s imperative that humanity safeguards its own survival. If we survive the next few centuries, we have great potential as a species. In the long run, we could reach the stars and become a grabby civilization ourselves, potentially expanding into thousands or millions of galaxies and creating trillions of worthwhile lives, without, of course, endangering lives already present in other star systems. To ensure we have a promising future, let’s proceed carefully with SETI. It could end up being the most important decision we ever make.



Comments



Since we're already in existential danger due to AI risk, it's not obvious that we shouldn't read a message that has only a 10% chance of being unfriendly; a friendly message could pretty reliably save us from other risks. Additionally, I can make an argument for friendly messages potentially being quite common:

If we could pre-commit now to never doing a SETI attack ourselves, or if we could commit to only sending friendly messages, then we'd know that many other civs, having at some point stood in the same place as us, will have also made the same commitment, and our risk would decrease.
But I'm not sure; it's a nontrivial question whether that would be a good deal for us to make. Would the reduction in risk of being subjected to a SETI attack be greater than the expected losses from no longer being allowed to carry out SETI attacks ourselves?

Cross-posting with multiple authors is broken as a feature.

When Matthew had to approve co-authorship, the post appeared on the home page, but if clicked on, it only showed an error message.

Then I moved the post to drafts, and when I interacted with it using the three dots on the right side, there was another error message.

Now Matthew doesn't appear as a coauthor here.

Haven't read the post, but my answer to the title is "yes". SETI seems like a great example for researchers unilaterally rushing to do things that might be astronomically impactful and are very risky; driven by the fear that someone else will end up snatching the credit and glory for their brilliant idea.

[EDIT: changed "not net-positive" to "very risky".]

Great job once again! Loved it :)

Thanks :)
