This is a link-post for an explainer of NASA's Double Asteroid Redirection Test (DART). It may be one of the most prominent existential-risk-reduction activities in the public sphere (the explainer even describes the likelihood of asteroid collisions large enough to threaten civilisation), yet I hadn't seen much discussion of it.

DART will reach a two-asteroid system on the evening of September 26. It has been travelling for around 10 months and is now around 11 million kilometres away. The asteroids pose no threat to Earth. The spacecraft will autonomously target the smaller asteroid (Dimorphos, around 160 m in diameter) and collide with it at around 26,000 km/h.

This should inform the potential for future asteroid-redirection efforts. As noted in 'The Precipice' though, while potentially reducing the risk from asteroids, such a capability may pose a larger risk itself if used by malicious actors to target asteroids towards Earth.

As noted in 'The Precipice' though, while potentially reducing the risk from asteroids, such a capability may pose a larger risk itself if used by malicious actors to target asteroids towards Earth.

 

I am very confident that harm from the dual use of improved asteroid-deflection technology in general is much more likely than a random asteroid hitting us, and that this experiment has therefore likely made the world worse off (with a bit less confidence, because maybe it's still easier to deflect asteroids defensively rather than offensively, and this experiment improved that defensive capability?). This is possibly my favorite example of a crucial consideration, and also, more speculatively, evidence that the sum of all x-risk-reduction efforts taken together could be net-harmful (I'd give that a 5-25% chance?).

This is much more of a problem (and an overwhelming one) for risks/opportunities that are microscopic compared to others. Baseline asteroid/comet risk is more like 1 in a billion; there is much less opportunity for that with 1% or 10% risks.

To use asteroid deflection offensively, you’d have to:

  • Have motivation to destroy Earth in indiscriminate fashion
  • Have asteroid deflection be within your technological and organizational capabilities
  • Have asteroid deflection be your easiest method of mass destruction
  • Avoid having your plans to hit Earth with an asteroid detected and disrupted in advance of launch of your asteroid deflection weapon
  • Redirect the asteroid with probably a large deviation in trajectory onto a very precise collision course with Earth
  • Avoid having that trajectory subsequently observed and disrupted by currently existing asteroid observation and deflection infrastructure now operating

By contrast, to have asteroid deflection offer a benefit given current information, the requirements are:

  • There has to be an asteroid on course to hit Earth that we haven't already detected
  • The asteroid has to be of a size class we can build and launch a DART at in time to nudge it anywhere slightly off course

A second form of benefit might be:

  • Successfully operating a form of X-risk infrastructure gives a concrete example of something we already do to prevent X-risk and creates a path for government to sponsor more such projects.

As has previously been noted, the implicit flattish hierarchy of different points in pro-con lists can sometimes cause people to make bad decisions.

 

Source: 80000 Hours

Some entirely made-up numbers (for the next 50 years):

  • Have motivation to destroy Earth in indiscriminate fashion (~1)
  • Have asteroid deflection be within your technological and organizational capabilities (~1/10)
  • Have asteroid deflection be your easiest method of mass destruction (~1/7)
  • [Added] have naturally occurring asteroids on close enough trajectories that deflecting them towards Earth is a realistic proposition (~1/20?)
    • I think I have the least resilience here.
  • Avoid having your plans to hit Earth with an asteroid detected and disrupted in advance of launch of your asteroid deflection weapon (~1/50)
  • Redirect the asteroid with probably a large deviation in trajectory onto a very precise collision course with Earth (~1/8?)
  • Avoid having that trajectory subsequently observed and disrupted by currently existing asteroid observation and deflection infrastructure now operating (~1/10)
    • I think this is not independent of the previous 3 points; otherwise it'd be a lower probability

~= 1/5,600,000, or 1 in 5.6 * 10^6. However, I think these numbers understate the total risk. When I was making up numbers above, I was imagining the single actor most likely to be able to pull this off in the next 50 years; but anthropogenic risks are disjunctive, and multiple actors can attempt the same idea.

By contrast, to have asteroid deflection offer a benefit given current information, the requirements are:

  • There has to be an asteroid on course to hit Earth that we haven't already detected (~1/1,000,000,000)
    • (Note this is just numbers I pulled from Shulman's comment below)
  • The asteroid has to be of a size class we can build and launch a DART at in time to nudge it anywhere slightly off course (~1/2)
  • [Added] Just-in-time asteroid deflection without prior experiments is not sufficient (~1/2)

~=1/4,000,000,000 or 1 in 4*10^9.
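As a sanity check, the two Fermi products can be multiplied out directly. This is only a sketch: the per-step probabilities are the made-up numbers from the comment, not independent estimates.

```python
from math import prod

# Per-step probabilities are the made-up numbers from the comment,
# not independent estimates.
offense = [1, 1/10, 1/7, 1/20, 1/50, 1/8, 1/10]   # offensive-misuse pathway
defense = [1/1_000_000_000, 1/2, 1/2]             # defensive-benefit pathway

p_offense = prod(offense)   # ~ 1 / 5,600,000
p_defense = prod(defense)   # = 1 / 4,000,000,000

print(f"offense: 1 in {1 / p_offense:,.0f}")
print(f"defense: 1 in {1 / p_defense:,.0f}")
# Ratio of the two products: ~714x, i.e. roughly 3 orders of magnitude.
print(f"offense/defense ratio: ~{p_offense / p_defense:,.0f}x")
```

On these numbers the offensive pathway comes out roughly three orders of magnitude more probable than the defensive benefit, which is the gap discussed below.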

So overall I'm skeptical that the first-order effects of deflecting natural asteroid risks are larger than the first-order effects of anthropogenic asteroid risks.

A second form of benefit might be:

  • Successfully operating a form of X-risk infrastructure gives a concrete example of something we already do to prevent X-risk and creates a path for government to sponsor more such projects.

I agree with this. If the first-order effects are small, it's easy for second-order effects to dominate (assuming the second-order effects come from an entirely different channel than the first-order effects).

I appreciate the effort to put some numbers into this Fermi format! I'm not sure whether you intend the numbers, or the result, to represent your beliefs about the relative risks and benefits of this program. If they are representative, then I have a couple points to make.

I'm surprised you think there's a 10% chance that an actor who wants to destroy the Earth this century will have asteroid deflection within their technological capabilities. I'd assign this closer to a 1/1000 probability. The DART mission cost $324.5 million, was carried out by the world's economic and technological superpower, and its team page lists hundreds of names, all of whom I am sure are highly qualified experts in one thing or another.

Maybe North Korea could get there, and want to use this as a second-strike alternative if they can't successfully develop a nuclear program? But we're spying on them like mad, and I fully expect that the testing required to make such a weapon work would receive the same harsh sanctions as their other military efforts.

I'd downweight the likelihood that asteroid deflection is their easiest method of mass destruction from 1/7 to 1/1000, due to the difficulty of precision targeting. An asteroid of the size targeted by DART would take out hundreds of square miles (New York is 302 square miles; Earth's surface area is 197 million square miles). Targeting a high-population area puts even steeper demands on precision and gives defenders greater opportunity to mitigate damage by deflecting the asteroid to a lower-impact zone. It seems to me there are much easier ways for a terrorist to take out New York City than asteroid deflection.

Since your estimates for the two scenarios differ by only 3 OOMs, I think these form the crux of our disagreement. I also note that this Fermi estimate no doubt has several conceptual shortcomings, and it would probably be useful to come up with an improved way to structure it.

Thanks for the engagement! Re:

I appreciate the effort to put some numbers into this Fermi format! I'm not sure whether you intend the numbers, or the result, to represent your beliefs about the relative risks and benefits of this program.

Those are meant to be my actual (possibly unstable) beliefs. With the very important caveats that a) this is not a field I've thought about much at all and b) the numbers are entirely pulled from intuition, not even very simple models or basic online research.

Also, NASA apparently puts the odds of a collision with Bennu, which is about the same size as Dimorphos, at 1/1750 over the next three centuries. That's not quite the same timeframe, and this is just a quick Google search result; a more authoritative number would be helpful. Given AI risk and the pace of tech change, I think it makes sense to weight asteroid impacts this century much more heavily than those over the next three centuries.

What I take from this mission is not so much 

"Great, now we are a bit safer from asteroids hitting the earth." 

but more like

"Great, NASA and the American public think existential risks like asteroids are worth taking seriously. The success of this mission might make it a bit easier to convince people that, one, there are other existential risks worth taking seriously and, two, that we can similarly reduce those risks through policy and technology innovation. Maybe now other existential risk reduction efforts will become more politically palatable, now that we can point to the success of this mission".

[Edit: here's a relevant article that supports my point: "Nasa’s mission gives hope we can defend our planet but human nature and technology present risks of their own"  https://on.ft.com/3LNySAM]

For more on this risk, see this interesting recent book: Daniel Deudney, Dark Skies: Space Expansionism, Planetary Geopolitics, and the Ends of Humanity (Jun. 2020).

https://academic-oup-com.ezp.lib.cam.ac.uk/book/33656?login=true 

https://www.amazon.co.uk/Dark-Skies-Expansionism-Planetary-Geopolitics/dp/0190903341 

I really don't think dual use is in any way worrisome if humanity has several institutions capable of asteroid deflection, and only slightly worrisome if there is just one. Quoting a comment I gave to finm in his post on asteroid risks:

I don't think the dual use should worry us much. I cannot estimate how much harder it is in general to divert an asteroid toward Earth than away from it, but I can confidently say that it is several orders of magnitude more than 10x [the figure finm gives as an example in his text]; the precision needed would be staggering. In addition, to divert an asteroid toward Earth, one needs an asteroid, and the closer the better. The fact that the risk of a big-enough asteroid hitting the Earth is so low indicates that there are not too many candidates. This factor has to be taken into account as well.

But even if diverting an asteroid towards the Earth were only 10 times harder than diverting it away from the Earth, dual use need not be a big concern. To actually manage to divert an asteroid towards the Earth, one not only needs to divert it, one also needs to prevent the rest of humanity from diverting it away in time, which is much easier. So, as long as a small number of independent institutions are able and ready to divert asteroids, dual use does not seem a concern to me.

I've been keeping tabs on this since mid-August when the following Metaculus question was created:

The community and I (97%, given NASA's track record of success) seem to agree that it is unlikely DART fails to make an impact. Here are some useful Wikipedia links that aided my prediction: Asteroid impact avoidance, Asteroid impact prediction, Near-Earth object (NEO), Potentially hazardous object.

There are roughly 3 hours remaining until impact (https://dart.jhuapl.edu/); it seems unlikely that something goes awry, and I am firmly hoping for success.  

While I'm unfamiliar with the state of research on asteroid redirection or trophy systems for NEOs, DART seems like a major step in the right direction, toward a future where humanity faces lower levels of risk from collisions of asteroids, comets, and other celestial objects with Earth.

Here's a livestream - impact should be at 7:16 pm ET https://www.youtube.com/watch?v=-6Z1E0mW2ag

Impact successful - so exciting!

Written anonymously because I work in a field where there is a currently low but non-negligible and possibly high future risk of negative consequences for criticizing Trump and Trumpism. This post is an attempt to cobble together some ideas about the current situation in the United States and its impact on EA. I invite discussion on this, not only from Americans, but also those with advocacy experience in countries that are not fully liberal democracies (especially those countries where state capacity is substantial and autocratic repression occurs).  I've deleted a lot of text from this post in various drafts because I find myself getting way too in the weeds discoursing on comparative authoritarian studies, disinformation and misinformation (this is a great intro, though already somewhat outdated), and the dangers of the GOP.[1] I will note that I worry there is still a tendency to view the administration as chaotic and clumsy but retaining some degree of good faith, which strikes me as quite naive.  For the sake of brevity and focus, I will take these two things to be true, and try to hypothesize what they mean for EA. I'm not going to pretend these are ironclad truths, but I'm fairly confident in them.[2]  1. Under Donald Trump, the Republican Party (GOP) is no longer substantially committed to democracy and the rule of law. 1. The GOP will almost certainly continue to engage in measures that test the limits of constitutional rule as long as Trump is alive, and likely after he dies. 2. The Democratic Party will remain constrained by institutional and coalition factors that prevent it from behaving like the GOP. That is, absent overwhelming electoral victories in 2024 and 2026 (and beyond), the Democrats' comparatively greater commitment to rule of law and democracy will prevent systematic purging of the GOP elites responsible for democratic backsliding; while we have not crossed the Rubicon yet, it will get much worse before things get better. 2. T