
This is the second part of "Mistakes in the moral mathematics of existential risk", a series of blog posts by David Thorstad that aims to identify ways in which estimates of the value of reducing existential risk have been inflated. I've made this linkpost part of a sequence.


In the decades to come, advanced bioweapons could threaten human existence. Although the probability of human extinction from bioweapons may be low, the expected value of reducing the risk could still be large, since such risks jeopardize the existence of all future generations. We provide an overview of biotechnological extinction risk, make some rough initial estimates for how severe the risks might be, and compare the cost-effectiveness of reducing these extinction-level risks with existing biosecurity work. We find that reducing human extinction risk can be more cost-effective than reducing smaller-scale risks, even when using conservative estimates. This suggests that the risks are not low enough to ignore and that more ought to be done to prevent the worst-case scenarios.

— Millett and Snyder-Beattie, “Existential risk and cost-effective biosecurity”

1. Introduction

This is Part 2 of a series based on my paper “Mistakes in the moral mathematics of existential risk”.

Part 1 introduced the series and discussed the first mistake: focusing on cumulative rather than per-unit risk. We saw how bringing the focus back to per-unit rather than cumulative risk was enough to change a claimed “small” risk reduction of one millionth of one percent into an astronomically large reduction that would drive risk to almost one in a million per century.

Today, I want to focus on a second mistake: ignoring background risk. The importance of modeling background risk is one way to interpret the main lesson of my paper and blog series “Existential risk pessimism and the time of perils.” Indeed, in Part 7 of that series, I suggested just this interpretation.

It turns out that blogging is sometimes a good way to write a paper. Today, I want to expand my discussion in Part 7 of the existential risk pessimism series to clarify the second mistake (ignoring background risk) and to show how it interacts with the first mistake (focusing on cumulative risk) in a leading discussion of cost-effective biosecurity.

Some elements of this discussion are lifted verbatim from my earlier post. In my defense, I remind my readers that I am lazy.

2. Snyder-Beattie and Millett on cost-effective biosecurity

Andrew Snyder-Beattie holds a DPhil in Zoology from Oxford, and works as a Senior Program Officer at Open Philanthropy. Snyder-Beattie is widely considered to be among the most influential voices on biosecurity within the effective altruist community.

Piers Millett is a Senior Research Fellow at the Future of Humanity Institute. Millett holds advanced degrees in science policy, research methodology and international security, and has extensive industry experience in biosecurity.

Millett and Snyder-Beattie’s paper, “Existential risk and cost-effective biosecurity”, is among the most-cited papers on biosecurity written by effective altruists. The paper argues that even very small reductions in existential risks in the biosecurity sector (henceforth, “biorisks”) are cost-effective by standard metrics.

Millett and Snyder-Beattie (henceforth MSB) estimate the cost-effectiveness of an intervention as C/(NLR), where:

  • C is the cost of the intervention.
  • N is “the number of biothreats we expect to occur in 1 century”.
  • L is “the number of life-years lost in such an event”.
  • R is “the reduction in risk [in this century only] achieved by spending … C”.

Millett and Snyder-Beattie estimate these quantities as follows:

  • C is fixed at $250 billion.
  • N is estimated in min/max ranges using three different approaches.
  • L is calculated assuming that humanity remains earthbound, with a stable population of 10 billion people, lasting for a million years, so that L = 10^16 life-years.
  • R is estimated at a 1% relative reduction, i.e. risk is reduced from N to 0.99N.

Because N is estimated using three different models, Millett and Snyder-Beattie estimate cost-effectiveness as C/(NLR) on each model, giving:

| Model | N (biothreats per century) | C/(NLR) (cost per life-year) |
| --- | --- | --- |
| Model 1 | 0.005 to 0.02 | $0.125 to $5.00 |
| Model 2 | 1.6×10^-6 to 8×10^-5 | $31.00 to $1,600 |
| Model 3 | 5×10^-5 to 1.4×10^-4 | $18.00 to $50.00 |

Standard government cost-effectiveness metrics in the United States value a life-year in the neighborhood of a few hundred thousand dollars, so Millett and Snyder-Beattie conclude that across models, a small intervention (such as a 1% relative risk reduction in this century) is cost-effective even at a high price (such as $250 billion).
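To make the arithmetic concrete, here is a minimal sketch of that calculation in Python (my own reconstruction from the definitions above, not code from MSB's paper; the function name is mine), using Model 2's range for N:

```python
# Sketch of the MSB cost-effectiveness arithmetic (my reconstruction,
# not MSB's code). Quantities as defined in the bullet lists above.
C = 250e9   # cost of the intervention, in dollars
L = 1e16    # life-years lost in an extinction-level biothreat
R = 0.01    # 1% relative risk reduction

def cost_per_life_year(N):
    """MSB's metric: cost per expected life-year saved, C / (N * L * R)."""
    return C / (N * L * R)

# Model 2's range for N (biothreats per century):
for N in (1.6e-6, 8e-5):
    print(f"N = {N:g}: ${cost_per_life_year(N):,.2f} per life-year")
# N = 1.6e-06: $1,562.50 per life-year  (the table's ~$1,600)
# N = 8e-05:   $31.25 per life-year     (the table's ~$31.00)
```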

3. A complaint to set aside

There are many complaints that could be made about this model. One such complaint was nicely formalized by Joshua Blake in a comment on my earlier presentation of the MSB estimate.

MSB want to estimate the expected cost of saving a life-year, E[C/(NLR)]. By separately estimating each of C, N, L, and R, they have in effect estimated E[C]/(E[N]E[L]E[R]).

This move is legal if, and only if, C, N, L, and R are probabilistically independent. [Edit: Whoops, that’s still not enough to bail them out. See comment by Mart.] However, N and L are highly negatively correlated: the riskier the future is, the fewer humans we should expect to live in it, because humanity becomes more likely to meet an early grave.

I want to set this complaint aside for now. Perhaps I am not really setting this complaint aside, since it will be one way of explaining how the MSB estimate commits the first mistake, a point to which I return below. But for now, I mention this complaint to illustrate first that there can be other mistakes beyond those mentioned in this paper, and second that my readers are quite helpful and good at math.

4. Second mistake: Ignoring background risk

When we intervene on some given risks (in this case, existential biorisk), we leave other risks largely unchanged. Call these unchanged risks background risks.

The lesson of my paper and blog series “Existential risk pessimism and the time of perils” is that background risk matters quite a lot. However, the MSB model does not say anything about background risk. Building background risk into the MSB model will reduce the MSB cost-effectiveness estimates, particularly when background risk is large.

If we assume biological and non-biological risks cannot occur in the same century, then we can split per-century risk r into its biological component b and non-biological component n as:

r = b + n.

MSB envision an intervention X which provides a 1% relative reduction in biorisk, shifting risk to:

r_X = 0.99b + n.

Prior to intervention, how many life-years did the future hold in expectation? On the MSB model, a century involves a stable population of 10^10 lives, for a total of 10^12 life-years. We make it through this century with probability (1-r), survive the first two centuries with probability (1-r)^2, and so on, so that the expected number of future life-years (over a million years, or ten thousand centuries) is:

E[L] = 10^12 × [(1-r) + (1-r)^2 + … + (1-r)^10000] ≈ 10^12 × (1-r)/r.
Our intervention X increases the expected number of future life-years by reducing per-century risk from r to r_X, giving a post-intervention expectation of:

E[L|X] = 10^12 × [(1-r_X) + (1-r_X)^2 + … + (1-r_X)^10000] ≈ 10^12 × (1-r_X)/r_X.
Intervention X adds, in expectation, E[L|X] - E[L] life-years, which works out to (see appendix):

E[L|X] - E[L] ≈ 10^12 × 0.01b / (r × r_X) ≈ 10^10 × b/r^2.
In rough outline, X provides ten billion additional life-years, scaled down by the initial biorisk b, but scaled up (approximately) by 1/r^2, the inverse square of background risk. If background risk is low, this may be a large boost indeed, but if background risk is high, things become more dire.
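Here is a quick numerical check of this derivation (a sketch under the model's stated assumptions; the helper function and the illustrative values b = 0.01 and n = 0.19 are mine):

```python
# Numerical check of the background-risk correction (a sketch; the
# illustrative values of b and n are mine, not from the paper).
LY_PER_CENTURY = 1e12   # 10^10 lives x 100 years
CENTURIES = 10_000      # one million years

def expected_life_years(r):
    """E[L]: each century's 10^12 life-years, weighted by survival."""
    return sum(LY_PER_CENTURY * (1 - r) ** c
               for c in range(1, CENTURIES + 1))

b, n = 0.01, 0.19        # biorisk and background risk per century
r = b + n                # total per-century risk, here 0.2
r_X = 0.99 * b + n       # after a 1% relative biorisk reduction everywhere

gain = expected_life_years(r_X) - expected_life_years(r)
print(f"exact gain:      {gain:.3g} life-years")    # ~2.5e+09
print(f"10^10 * b / r^2: {1e10 * b / r**2:.3g}")    # 2.5e+09
```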

Below, I’ve revised the MSB estimates across two levels of background risk: a 20%/century risk close to that favored by many effective altruists, and a more optimistic 1%/century risk.

| Model | N (biothreats per century) | MSB estimate | r = 0.2 | r = 0.01 |
| --- | --- | --- | --- | --- |
| Model 1 | 0.005–0.02 | $0.125–$5.00 | $50–$200 | $0.25–$0.50 |
| Model 2 | 1.6×10^-6–8×10^-5 | $31.00–$1,600 | $12,500–$625,000 | $30–$1,500 |
| Model 3 | 5×10^-5–1.4×10^-4 | $18.00–$50.00 | $7,100–$20,000 | $18–$50 |

(All cost entries are dollars per life-year.)

For comparison, GiveWell estimates that the best short-termist interventions save a life for about $5,000. Hence even if we assume that short-termist interventions do not have positive knock-on effects (an assumption we should not make), a good short-termist benchmark will be on the order of $50-200/life-year, assuming a saved life corresponds to roughly 25-100 additional life-years.

Already, we see that under high levels of background risk, the MSB estimate at best ties the short-termist benchmark, and at worst falls significantly below it. By contrast, on low levels of background risk, the MSB estimate may fare well.

Does this mean that we can salvage the MSB estimate by being optimistic about background risk? Not so fast.

5. First mistake: Focusing on cumulative risk

MSB, like Bostrom, are concerned with cumulative risk. They treat risk reduction as providing an increased chance that all future people throughout a million-year stretch will come to exist, not just that people in this century will come to exist.

We saw in Part 1 of this series that this is a mistake. Focusing on cumulative risk dramatically overstates the value of existential risk reduction, and also takes us away from the policy-relevant question of how we should intervene on risks in nearby centuries.

Let us replace MSB’s stylized intervention X with an intervention X’ that provides a 1% relative reduction in biorisk, not in all centuries at once, but rather in our own century. That is, X’ reduces risk in this century to:

r_X' = 0.99b + n

but leaves risk in future centuries at r = b + n.

How many additional life-years does X' provide in expectation? It turns out (see paper for details) that:

E[L|X'] - E[L] ≈ 10^12 × 0.01b / r = 10^10 × b/r.
Whereas before, this expression was divided through (roughly) by r^2, now it divides through only by r. That will tend to reduce the cost-effectiveness of X', since r is a number between 0 and 1. Importantly, the penalty is worse for low values of r. The loss of a second r in the denominator shaves a factor of approximately five off this expression for a pessimistic r = 0.2, but a whopping two orders of magnitude off in the case that r = 0.01.
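And a matching check for X' (same illustrative values as before; again a sketch, with names of my own choosing):

```python
# Numerical check of the single-century intervention X' (a sketch;
# illustrative values of b and n as in the earlier snippet).
LY_PER_CENTURY = 1e12
CENTURIES = 10_000

def expected_life_years(first_r, later_r):
    """Century c's life-years are realized only if we survive century 1
    (risk first_r) and each of centuries 2..c (risk later_r)."""
    return sum(LY_PER_CENTURY * (1 - first_r) * (1 - later_r) ** (c - 1)
               for c in range(1, CENTURIES + 1))

b, n = 0.01, 0.19
r = b + n                  # 0.2
r_Xp = 0.99 * b + n        # risk reduced in this century only

gain = expected_life_years(r_Xp, r) - expected_life_years(r, r)
print(f"gain from X':  {gain:.3g} life-years")   # 5e+08
print(f"10^10 * b / r: {1e10 * b / r:.3g}")      # 5e+08
# A factor of 5 (i.e. 1/r for r = 0.2) below the ~2.5e9 gain from X.
```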

Below, I’ve revised the MSB estimates to incorporate not only background risk, but also risk reduction within a single century rather than across all time.

| Model | N (biothreats per century) | MSB estimate | r = 0.2 | r = 0.01 |
| --- | --- | --- | --- | --- |
| Model 1 | 0.005–0.02 | $0.125–$5.00 | $250–$1,000 | $13–$50 |
| Model 2 | 1.6×10^-6–8×10^-5 | $31.00–$1,600 | $60,000–$3,100,000 | $3,000–$150,000 |
| Model 3 | 5×10^-5–1.4×10^-4 | $18.00–$50.00 | $35,000–$100,000 | $1,800–$5,000 |

(All cost entries are dollars per life-year.)

This is bad news. Only the most optimistic model (Model 1 with r=0.01) makes biosecurity competitive with the short-termist benchmark of $50-200/life-year. Across most other models, biosecurity turns out not only to be less cost-effective than the best short-termist interventions, but often many orders of magnitude less cost-effective.
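As a rough check on these figures, here is the Model 3 row recomputed under both corrections, using the approximations derived above (my own reconstruction; entries agree with the tables up to rounding):

```python
# Rough check of the corrected Model 3 estimates (my reconstruction,
# using gain_X ~ 10^10*b/r^2 and gain_X' ~ 10^10*b/r from above).
C = 250e9

def cost_all_centuries(b, r):   # second mistake corrected only
    return C / (1e10 * b / r**2)

def cost_this_century(b, r):    # first and second mistakes corrected
    return C / (1e10 * b / r)

for r in (0.2, 0.01):
    for b in (5e-5, 1.4e-4):    # Model 3's range for N, taken as b
        print(f"r = {r}, b = {b:g}: "
              f"all-centuries ${cost_all_centuries(b, r):,.0f}/life-year, "
              f"this-century ${cost_this_century(b, r):,.0f}/life-year")
# r = 0.2:  all-centuries $20,000 and $7,143; this-century $100,000 and $35,714
# r = 0.01: all-centuries $50 and $18;        this-century $5,000 and $1,786
```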

Again, we see that mistakes in moral mathematics matter. Correcting the first and second mistakes within the MSB model took biosecurity from robustly cost-effective to robustly cost-ineffective in comparison to short-termist benchmarks.

6. Wrapping up

So far, we have met two mistakes in the moral mathematics of existential risk:

  • First mistake: Focusing on cumulative risk rather than per-unit risk.
  • Second mistake: Ignoring background risk.

We looked at a leading model of cost-effective biosecurity due to Piers Millett and Andrew Snyder-Beattie. We saw that the model commits both mistakes, and that correcting the mistakes leads to a near-complete reversal of Millett and Snyder-Beattie’s conclusion: it makes biosecurity look robustly less cost-effective than leading short-termist interventions, rather than robustly more cost-effective.

In the next post, I introduce a final mistake in the moral mathematics of existential risk: neglecting population dynamics.

Comments



Great post!

It is worth noting that most of the expected value of reducing existential risk comes from worlds where the time of perils hypothesis (TOP) is true and the post-peril risk is low (the longterm future should be discounted at the ~lowest possible rate). In this case, a reduction in existential risk in the next 100 years would not differ much from a reduction in total existential risk, and therefore the mistakes you mention do not apply.

To give an example: if existential risk is 10 % per century for 3 centuries[1], and then drops to roughly 0, the risk in the next 3 centuries is 27.1000 % (= 1 - (1 - 0.1)^3). If one decreases bio risk by 1 % for 1 century, from 1 % to 0.99 % (i.e. 0.01 pp), the new risk for the next century would be 9.99 % (= 10 - 0.01). So the new risk for the next 3 centuries would be 27.0919 % (= 1 - (1 - 0.0999)*(1 - 0.1)^2). Therefore the reduction of the total risk would be 0.008 pp (= 27.1000 - 27.0919), i.e. very similar to the reduction of bio risk during the next century of 0.01 pp.

As a result, under TOP, I think reducing bio existential risk by 0.01 pp roughly decreases total existential risk by 0.01 pp. For the conservative estimate of 10^28 expected future lives given in Newberry 2021 (Table 3), that would mean saving 10^24 (= 10^(28 - 4)) lives, or 4*10^12 life/$ (= 10^24/(250*10^9)). If TOP only has 1 in a trillion chance of being true, the cost-effectiveness would be 4 life/$, over 4 OOMs better than GiveWell's top charities cost-effectiveness of 2.5*10^-4 life/$ (= 1/4000).

On the one hand, I am very uncertain about how high bio existential risk is this century. If it is something like 10^-6 (i.e. 0.01 % of what I assumed above), the cost-effectiveness of reducing bio risk would be similar to that of GiveWell's top charities. On the other hand, 1 in a trillion chance for TOP being true sounds too low, and a future value of 10^28 lives is probably an underestimate. Overall, I guess longtermist interventions will tend to be much more cost-effective.

FWIW, I liked David's series on Existential risk pessimism and the time of perils. I agree there is a tension between high existential risk this century, and TOP being reasonably likely. I guess existential risk is not as high as commonly assumed, because superintelligent AI disempowering humans does not have to lead to loss of value under moral realism, but I do not know.

  1. ^

    In The Precipice, Toby Ord guesses total existential risk to be 3 times (= (1/2)/(1/6)) that from 2021 to 2120.

Thanks Vasco! Yes, as in my previous paper, though (a) most of the points I'm making get some traction against models in which the time of perils hypothesis is true, (b) they get much more traction if the Time of Perils is false.

For example, on the first mistake, the gap between cumulative and per-unit risk is lower if risk is concentrated in a few centuries (time of perils) than if it's spread across many centuries. And on the second mistake, the importance of background risk is reduced if that background risk is going to be around for only a few centuries at a meaningful level.

I think that the third mistake (ignoring population dynamics) should retain much of its importance on time of perils models. Actually, it might be more important insofar as those models tend to give higher probability to large-population scenarios coming about. I'd be interested to see how the numbers work out here, though.

I'm not much at maths so I found this hard to follow.

Is the basic thrust that reducing the chance of extinction this year isn't so valuable if there remains a risk of extinction (or catastrophe) in future because in that case we'll probably just go extinct (or die young) later anyway?

Yep - nailed it!

Ah great, glad I got it!

I think I had always assumed that the argument for x-risk relied on the possibility that the annual risk of extinction would eventually either hit or asymptote to zero. If you think of life spreading out across the galaxy and then other galaxies, and then being separated by cosmic expansion, then that makes some sense.

To analyse it in the most simplistic way possible — if you think extinction risk has a 10% chance of permanently going to 0% if we make it through the current period, and a 90% chance of remaining very high even if we make it through the current period, then extinction reduction takes a 10x hit to its cost-effectiveness from this effect. (At least that's what I had been imagining.)

I recall there's an Appendix to The Precipice where Ord talks about this sort of thing. At least I remember that he covers the issue that it's ambiguous whether a high or low level of risk today makes the strongest case for working to reduce extinction being cost-effective. Because as I think you're pointing out above — while a low risk today makes it harder to reduce the probability of extinction by a given absolute amount, it simultaneously implies we're more likely to make it through future periods if we don't go extinct in this one, raising the value of survival now.

David addresses a lot of the arguments for a 'Time of Perils' in his 'Existential Risk Pessimism and the Time of Perils' paper, which this moral mathematics paper is a follow-up to.

Seems like David agrees that once you were spread across many star systems this could reduce existential risk a great deal.

The other line of argument would be that at some point AI advances will either cause extinction or a massive drop in extinction risk.

The literature on a 'singleton' is in part addressing this issue.

Because there's so much uncertainty about all this, it seems like an overly-confident claim that it's extremely unlikely for extinction risk to drop near zero within the next 100 or 200 years.
