This is a Draft Amnesty Week draft. It may not be polished, up to my usual standards, fully thought through, or fully fact-checked.
Commenting and feedback guidelines:
I'm posting this to get it out there. I'd love to see comments that take the ideas forward, but criticism of my argument won't be as useful at this time, in part because I won't do any further work on it.
This is a post I drafted in November 2023, then updated for an hour in March 2025. I don’t think I’ll ever finish it, so I am just leaving it in this draft form for Draft Amnesty Week (I know I'm late). I don’t think it is particularly well calibrated; it mainly makes a bunch of points that I haven’t seen assembled elsewhere. Please read it as extremely low-confidence, and assume it is unlikely to describe these dynamics perfectly.
I’ve worked at both EA charities and non-EA charities, and the EA funding landscape is unlike any other I’ve ever been in. This can be good — funders are often willing to take high-risk, high-reward bets on projects that might otherwise never get funded, and the amount of friction for getting funding is significantly lower.
But there is an orientation toward funders (and in particular toward staff at some major funders) that seems extremely unusual for charitable communities: a high degree of deference to their opinions.
For reference, most other charitable communities I’ve worked in have viewed funders in a much more mixed light. Engaging with them is necessary, yes, but usually funders (including large, thoughtful foundations like Open Philanthropy) are viewed as… an unaligned third party who is instrumentally useful to your organization, but whose opinions on your work should hold relatively little or no weight, given that they are non-experts on the direct work and often have bad ideas about how to do what you are doing.
I think there are many good reasons to take funders’ perspectives seriously, and I mostly won’t cover these here. But, to
I am very confident that harm from the dual-use risk of improved asteroid deflection technology is much more likely than a random asteroid hitting us, and that this experiment has therefore likely made the world worse off (with a bit less confidence, because maybe it's still easier to deflect asteroids defensively than offensively, and this experiment improved that defensive capability?). This is possibly my favorite example of a crucial consideration, and, more speculatively, evidence that the sum of all x-risk reduction efforts taken together could be net-harmful (I'd give that a 5-25% chance?).
This is much more of a problem (and an overwhelming one) for risks/opportunities that are microscopic compared to others: baseline asteroid/comet risk is more like 1 in a billion. There is much less room for that dynamic with 1% or 10% risks.
To use asteroid deflection offensively, you’d have to:
By contrast, to have asteroid deflection offer a benefit given current information, the requirements are:
A second form of benefit might be
As has previously been noted, the implicit flattish hierarchy of different points in pro-con lists can sometimes cause people to make bad decisions.
Source: 80000 Hours
Some entirely made-up numbers (for the next 50 years):
~=1/5,600,000, or 1 in 5.6 * 10^6. However, I think these numbers are a bit of an understatement of the total risk. When I was making up numbers earlier, I was imagining the single actor most likely to be able to pull this off in the next 50 years. But anthropogenic risks are disjunctive: multiple actors can attempt the same idea.
~=1/4,000,000,000 or 1 in 4*10^9.
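The disjunctive point can be made concrete: if each of n independent actors has the same small chance p of pulling this off, the chance that at least one succeeds is 1 - (1 - p)^n, which is roughly n * p for small p. A minimal sketch (the per-actor number is the post's made-up estimate; the actor counts are my own illustrative placeholders):

```python
# Disjunctive aggregation: anthropogenic risks can be attempted by
# multiple independent actors, so the total risk exceeds the
# single-actor estimate. All numbers are illustrative only.

def disjunctive_risk(p_single: float, n_actors: int) -> float:
    """Chance that at least one of n independent actors succeeds."""
    return 1 - (1 - p_single) ** n_actors

p = 1 / 5_600_000  # the post's single-actor, made-up estimate
for n in (1, 3, 10):
    print(f"{n:>2} actors -> total risk ~ {disjunctive_risk(p, n):.2e}")
```

For small p this is approximately n * p, so even a handful of plausible actors multiplies the estimate by a small integer factor rather than changing the order of magnitude dramatically.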
So overall I'm skeptical that the first-order effects of deflecting natural asteroid risks are larger than the first-order effects of anthropogenic asteroid risks.
I agree with this. If the first-order effects are small, it's easy for second-order effects to dominate (assuming the second-order effects come from an entirely different channel than the first-order effects).
I appreciate the effort to put some numbers into this Fermi format! I'm not sure whether you intend the numbers, or the result, to represent your beliefs about the relative risks and benefits of this program. If they are representative, then I have a couple points to make.
I'm surprised you think there's a 10% chance that an actor who wants to destroy the Earth this century will have asteroid deflection within their technological capabilities. I'd assign this closer to a 1/1000 probability. The DART mission cost $324.5 million, was carried out by the world's economic and technological superpower, and its team page lists hundreds of names, all of whom I am sure are highly qualified experts in one thing or another.
Maybe North Korea could get there, and want to use this as a second-strike alternative if they can't successfully develop a nuclear program? But we're spying on them like mad and I fully expect the required testing to make such a weapon work would receive the same harsh sanctions as their other military efforts.
Due to the difficulty of precision targeting, I'd downweight the likelihood that asteroid deflection is their easiest method from 1/7 to 1/1000. An asteroid of the size targeted by DART would take out hundreds of square miles (New York is 302 square miles; Earth's surface area is 197 million square miles). Targeting a high-population area puts even steeper demands on precision targeting, and offers greater opportunity to mitigate damage by deflecting the asteroid toward a lower-impact zone. It seems to me there are much easier ways for a terrorist to take out New York City than asteroid deflection.
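Since the factors in a Fermi product multiply, revising that single factor from 1/7 to 1/1000 rescales the whole anthropogenic estimate by the same ratio, about 2.2 orders of magnitude. A quick sketch of the arithmetic (using the post's stated total; this is just my illustration of the proposed adjustment, not a claim about the correct numbers):

```python
import math

# Revising one multiplicative factor of a Fermi estimate rescales the
# whole product by the ratio of the new factor to the old one.
original_total = 1 / 5_600_000        # the post's anthropogenic estimate
old_factor, new_factor = 1 / 7, 1 / 1000

revised_total = original_total * new_factor / old_factor
shift_ooms = math.log10(old_factor / new_factor)

print(f"revised total ~ {revised_total:.2e}")
print(f"shift ~ {shift_ooms:.2f} orders of magnitude")
```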
Since your estimates for the two scenarios are only off by 3 OOMs, I think that these form the crux of our disagreement. I also note that this Fermi estimate no doubt has several conceptual shortcomings, and it would probably be useful to come up with an improved way to structure it.
Thanks for the engagement! Re:
Those are meant to be my actual (possibly unstable) beliefs. With the very important caveats that a) this is not a field I've thought about much at all and b) the numbers are entirely pulled from intuition, not even very simple models or basic online research.
Same :D
Also, NASA apparently puts the odds of a collision with Bennu, which is about the same size as Dimorphos, at 1/1750 over the next three centuries. That's not quite the same timeframe, and this is just a quick Google search result; a more authoritative number would be helpful. Given AI risk and the pace of tech change, I think it makes sense to weight asteroid impacts this century much more heavily than those in later centuries.
What I take from this mission is not so much
"Great, now we are a bit safer from asteroids hitting the earth."
but more like
"Great, NASA and the American public think existential risks like asteroids are worth taking seriously. The success of this mission might make it a bit easier to convince people that, one, there are other existential risks worth taking seriously and, two, that we can similarly reduce those risks through policy and technology innovation. Maybe now other existential risk reduction efforts will become more politically palatable, now that we can point to the success of this mission".
[Edit: here's a relevant article that supports my point: "Nasa’s mission gives hope we can defend our planet but human nature and technology present risks of their own" https://on.ft.com/3LNySAM]
For more on this risk, see this interesting recent book: Daniel Deudney, Dark Skies: Space Expansionism, Planetary Geopolitics, and the Ends of Humanity (June 2020).
https://academic-oup-com.ezp.lib.cam.ac.uk/book/33656?login=true
https://www.amazon.co.uk/Dark-Skies-Expansionism-Planetary-Geopolitics/dp/0190903341
I really don't think dual use is worrisome in any way if humanity has several institutions capable of asteroid deflection, and only a tiny worry if there is just one. Quoting a comment I left for finm on his post on asteroid risks:
I've been keeping tabs on this since mid-August when the following Metaculus question was created:
The community and I (97%, given NASA's track record of success) seem to agree that it is unlikely DART fails to make an impact. Here are some useful Wikipedia links that aided me with the prediction: Asteroid impact avoidance, Asteroid impact prediction, Near-Earth object (NEO), and Potentially hazardous object.
There are roughly 3 hours remaining until impact (https://dart.jhuapl.edu/); it seems unlikely that something goes awry, and I am firmly hoping for success.
While I'm unfamiliar with the state of research on asteroid redirection or trophy systems for NEOs, DART seems like a major step in the right direction, toward a world where humanity faces a lower level of risk from collisions of asteroids, comets, and other celestial objects with Earth.
Here's a livestream - impact should be at 7:16 pm ET https://www.youtube.com/watch?v=-6Z1E0mW2ag
Impact successful - so exciting!