
Co-authored by Nick Beckstead, Peter Singer, and Matt Wage

Many scientists believe that a large asteroid impact caused the extinction of the dinosaurs. Could humans face the same fate?

It’s a possibility. NASA has tracked most of the large nearby asteroids and many of the smaller ones. If a large asteroid were found to be on a collision course with Earth well in advance, we would have time to try to deflect it. NASA has analyzed several options for deflecting an asteroid in this kind of scenario, including using a nuclear strike to knock it off course, and some of these strategies seem likely to work. The search is, however, not yet complete. The B612 Foundation has recently begun a project to track the remaining asteroids in order to “protect the future of civilization on this planet.” Finding one of these asteroids in time could be the key to preventing a global catastrophe.

Fortunately, the odds of an extinction-sized asteroid hitting the earth this century are low, on the order of one in a million. Unfortunately, asteroids aren’t the only threats to humanity’s survival. Other potential threats stem from bio-engineered diseases, nuclear war, extreme climate change, and dangerous future technologies.

Given that there is some risk of humanity going extinct over the next couple of centuries, the next question is whether we can do anything about it. We will first explain what we can do about it, and then ask the deeper ethical question: how bad would human extinction be?

The first point to make here is that if the risks of human extinction turn out to be “small,” this shouldn’t lull us into complacency. No sane person would say, “Well, the risk of a nuclear meltdown at this reactor is only 1 in 1000, so we’re not going to worry about it.” When there is some risk of a truly catastrophic outcome and we can reduce or eliminate that risk at an acceptable cost, we should do so. In general, we can measure how bad a particular risk is by multiplying the probability of the bad outcome by how bad the outcome would be. Since human extinction would, as we shall shortly argue, be extremely bad, reducing the risk of human extinction by even a very small amount would be very good.
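As a rough illustration of that expected-value reasoning, here is a minimal sketch; the probabilities and "badness" figures below are invented placeholders for the sake of the example, not estimates from this article:

```python
# Minimal sketch of the expected-value reasoning above.
# All figures are illustrative placeholders, not real risk estimates.

def expected_badness(probability, badness):
    """How bad a risk is = probability of the bad outcome x how bad it would be."""
    return probability * badness

# A moderately bad outcome that is fairly likely...
likely_modest_risk = expected_badness(probability=0.10, badness=1_000)

# ...versus an extremely bad outcome that is very unlikely.
unlikely_catastrophe = expected_badness(probability=1e-6, badness=10_000_000_000)

# Even at a one-in-a-million probability, the catastrophic risk dominates,
# which is why a very small reduction in that probability can be very valuable.
print(likely_modest_risk, unlikely_catastrophe)  # 100.0 vs 10000.0
```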

Humanity has already done some things that reduce the risk of premature extinction. We’ve made it through the Cold War and scaled back our reserves of nuclear weapons. We’ve tracked most of the large asteroids near Earth. We’ve built underground bunkers for “continuity of government” purposes, which might help humanity survive certain catastrophes. We’ve instituted disease surveillance programs that track the spread of diseases, so that the world could respond more quickly in the event of a large-scale pandemic. We’ve identified climate change as a potential risk and developed some plans for responding, even if the actual response so far has been lamentably inadequate. We’ve also built institutions that reduce the risk of extinction in subtler ways, such as decreasing the risk of war or improving governments’ ability to respond to a catastrophe.

One reason to think that it is possible to further reduce the risk of human extinction is that all these things we’ve done could probably be improved. We could track more asteroids, build better bunkers, improve our disease surveillance programs, reduce our greenhouse gas emissions, encourage non-proliferation of nuclear weapons, and strengthen world institutions in ways that would probably further decrease the risk of human extinction. There is still a substantial challenge in identifying specific worthy projects to support, but it is likely that such projects exist.

So far, surprisingly little work has been put into systematically understanding the risks of human extinction and how best to reduce them. There have been a few books and papers on the topic of low-probability, high-stakes catastrophes, but there has been very little investigation into the most effective methods of reducing these risks. We know of no in-depth, systematic analysis of the different strategies for reducing these risks. A reasonable first step toward reducing the risk of human extinction is to investigate these issues more thoroughly, or support others in doing so.

If what we’ve said is correct, then there is some risk of human extinction and we probably have the ability to reduce this risk. There are a lot of important related questions, which are hard to answer: How high a priority should we place on reducing the risk of human extinction? How much should we be prepared to spend on doing so? Where does this fit among the many other things that we can and should be doing, like helping the global poor? (On that, see www.thelifeyoucansave.com) Does the goal of reducing the risk of extinction conflict with ordinary humanitarian goals, or is the best way of reducing the risk of extinction simply to improve the lives of people alive today and empower them to solve the problem themselves?

We won’t try to address those questions here. Instead, we’ll focus on this question: How bad would human extinction be?

One very bad thing about human extinction would be that billions of people would likely die painful deaths. But in our view, this is, by far, not the worst thing about human extinction. The worst thing about human extinction is that there would be no future generations.

We believe that future generations matter just as much as our generation does. Since there could be so many generations in our future, the value of all those generations together greatly exceeds the value of the current generation.

Considering a historical example helps to illustrate this point. About 70,000 years ago, there was a supervolcanic eruption known as the Toba eruption. Many scientists believe that this eruption caused a “volcanic winter” which brought our ancestors close to extinction. Suppose that this is true. Now imagine that the Toba eruption had eradicated humans from the earth. How bad would that have been? Some 3000 generations and 100 billion lives later, it is plausible to say that the death and suffering caused by the Toba eruption would have been trivial in comparison with the loss of all the human lives that have been lived from then to now, and everything humanity has achieved since that time.

Similarly, if humanity goes extinct now, the worst aspect of this would be the opportunity cost. Civilization began only a few thousand years ago. Yet Earth could remain habitable for another billion years. And if it is possible to colonize space, our species may survive much longer than that.

Some people would reject this way of assessing the value of future generations. They may claim that bringing new people into existence cannot be a benefit, regardless of what kind of life these people have. On this view, the value of avoiding human extinction is restricted to people alive today and people who are already going to exist, and who may want to have children or grandchildren.

Why would someone believe this? One reason might be that if people never exist, then it can’t be bad for them that they don’t exist. Since they don’t exist, there’s no “them” for it to be bad for, so causing people to exist cannot benefit them.

We disagree. We think that causing people to exist can benefit them. To see why, first notice that causing people to exist can be bad for those people. For example, suppose some woman knows that if she conceives a child during the next few months, the child will suffer from multiple painful diseases and die very young. It would obviously be bad for her child if she decided to conceive during the next few months. In general, it seems that if a child’s life would be brief and miserable, existence is bad for that child.

If you agree that bringing someone into existence can be bad for that person and if you also accept the argument that bringing someone into existence can’t be good for that person, then this leads to a strange conclusion: being born could harm you but it couldn’t help you. If that is right, then it appears that it would be wrong to have children, because there is always a risk that they will be harmed, and no compensating benefit to outweigh the risk of harm.

Pessimists like the nineteenth-century German philosopher Arthur Schopenhauer or the contemporary South African philosopher David Benatar accept this conclusion. But if parents have a reasonable expectation that their children will have happy and fulfilling lives, and having children would not be harmful to others, then it is not bad to have children. More generally, if our descendants have a reasonable chance of having happy and fulfilling lives, it is good for us to ensure that our descendants exist, rather than not. Therefore we think that bringing future generations into existence can be a good thing.

The extinction of our species – and quite possibly, depending on the cause of the extinction, of all life – would be the end of the extraordinary story of evolution that has already led to (moderately) intelligent life, and which has given us the potential to make much greater progress still. We have made great progress, both moral and intellectual, over the last couple of centuries, and there is every reason to hope that, if we survive, this progress will continue and accelerate. If we fail to prevent our extinction, we will have blown the opportunity to create something truly wonderful: an astronomically large number of generations of human beings living rich and fulfilling lives, and reaching heights of knowledge and civilization that are beyond the limits of our imagination.

Comments



This article is generally sound, but I'm not sure I agree with the idea that the experiences of the current generation are trivial compared to the possibility of future generations. Future generations don't exist yet and therefore have nothing to lose, while living creatures have everything to lose.

Sure, a human could be conceived and live a reasonably happy life (if they're lucky), but they could also never be conceived and be none the worse. When we, as living humans, think of the possibility of never having been born, we are saddened because we know what we have to lose, but gametes that never join to form a person have no such feelings.

Because they're only newly conscious? The same can be said of yourself tomorrow morning, but you'll have memories and experiences that will quickly orient you to your identity, your place in the world and your desires, as will future generations.

But I'm already alive, so if I'm no longer alive tomorrow morning it'll mean that I died during the night, which involves a certain amount of suffering. Even if I died without knowing it, it would cause me no suffering, but my loved ones would still suffer, and my life, which is already established as being happy, would have been cut short for no good reason.

None of these things are true for a non-conceived human because they can't feel pain and have no established ability (or desire) to experience a happy life.

I have a minor philosophical nitpick.

No sane person would say, “Well, the risk of a nuclear meltdown at this reactor is only 1 in 1000

There are (checks Wikipedia) around 400 nuclear reactors in the world, which means that if everyone followed this reasoning, the chance of at least one meltdown somewhere would be pretty high.
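For what it's worth, the arithmetic behind that point looks roughly like this; the reactor count and the 1-in-1000 figure are just the rough numbers from the comment above, not real failure data, and independence is assumed:

```python
# If ~400 reactors each independently carried a 1-in-1000 meltdown risk,
# the chance of at least one meltdown somewhere would be far from negligible.
n_reactors = 400
p_meltdown_each = 1 / 1000

p_at_least_one = 1 - (1 - p_meltdown_each) ** n_reactors
print(f"{p_at_least_one:.0%}")  # roughly 33%
```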

Existential risks with low probabilities don't add up in the same way. I believe the magnitude of a risk equals the badness times the probability (which for x-risk comes out to very, very bad), but not everyone may agree with me, and I'm not sure the nuclear reactor example would convince them.

Has anyone done an EA evaluation of the expected value of the Sentinel Mission (formerly B612)?

Not the Sentinel Mission in particular, but some work has been done on asteroids. Basically, the value of asteroid surveillance for reducing extinction risk is small, as we have already identified basically all of the >1km asteroids, and that's the size they would need to be to cause an extinction-level catastrophe.

That's to say nothing of the prospects for learning to intercept asteroids, or the prospects of preventing events that fall short of an extinction-level threat.

The other thing to note here is that we've survived asteroids for lots of geological time (millions of years), so it would be really surprising if we got taken out by a natural risk in the next century. That's why people generally think that tech risks are more likely.
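One way to make that "track record" argument concrete is a crude Laplace-style estimate. The sketch below assumes a species age of roughly 200,000 years and treats centuries as independent trials; both are simplifications, and the commenter's "millions of years" may refer to our broader lineage rather than Homo sapiens specifically:

```python
# Humanity has survived roughly 200,000 years (about 2,000 centuries) of natural
# hazards without going extinct. A crude Laplace-style estimate then bounds the
# per-century natural extinction risk at roughly 1 / (trials + 2).
centuries_survived = 200_000 / 100
rough_risk_per_century = 1 / (centuries_survived + 2)
print(f"~{rough_risk_per_century:.2%} per century")  # about 0.05%

# New technologies have no comparable survival track record, which is one reason
# many people rate man-made risks as larger than natural ones this century.
```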

I can't find much online, but there's this, and you could also search for Carl Shulman and Seth Baum, who might've also covered the issue.
