[Header image: miniature dishware]

Epistemics (Part 10: The nontrivial probability gambit)

Even just a 1% chance of extremely high stakes is sufficient to establish high stakes in expectation. So we should not feel assured of low stakes even if a highly credible model—warranting 99% credence—entails low stakes. It hardly matters at all how many credible models entail low stakes. What matters is whether any credible model entails extremely high stakes. If one does—while warranting just 1% credence—then we have established high stakes in expectation, no matter what the remaining 99% of credibility-weighted models imply (unless one inverts the high stakes in a way that cancels out the other high-stakes possibility).

Richard Yetter-Chappell, “Rule high stakes in, not out”


1. Introduction

This is Part 10 in my series on epistemics: practices that shape knowledge, belief and opinion within a community. In this series, I focus on areas where community epistemics could be productively improved.

Part 1 introduced the series and briefly discussed the role of funding, publication practices, expertise and deference within the effective altruist ecosystem.

Part 2 discussed the role of examples within discourse by effective altruists, focusing on the cases of Aum Shinrikyo and the Biological Weapons Convention.

Part 3 looked at the role of peer review within the effective altruism movement.

Part 4 looked at the declining role of cost-effectiveness analysis within the effective altruism movement. Part 5 continued that discussion by explaining the value of cost-effectiveness analysis.

Part 6 looked at instances of extraordinary claims being made on the basis of less than extraordinary evidence.

Part 7 looked at the role of legitimate authority within the effective altruism movement.

Part 8 looked at two types of decoupling.

Part 9 looked at ironically authentic speech.

Today’s post looks at the nontrivial probability gambit, a strategy for responding to criticism of strong views about the shape of the future.

2. The nontrivial probability gambit

One of the themes of my work has been that the case for longtermism rests on a number of highly nontrivial claims about the long-term future. These include the time of perils hypothesis and claims that threats such as artificial intelligence and biotechnology pose a significant near-term existential risk which can be tractably reduced.

In each case, I have argued that:

  1. (Antecedent Implausibility) The claim in question is not very antecedently plausible.
  2. (Insufficient Evidence) Insufficient evidence has been offered to support the claim in question.

The upshot of Antecedent Implausibility is that we should assign low prior credence to the questioned claims. The upshot of Insufficient Evidence is that we should not be significantly moved from this prior by existing arguments.

What I would like to see is extended and rigorous argument for the questioned claims. Those arguments, if successful, would target Insufficient Evidence, and related arguments could perhaps be made against Antecedent Implausibility.

Sometimes this is done, but often longtermists try another tack. The nontrivial probability gambit does not (directly) contest Antecedent Implausibility or Insufficient Evidence. Rather, it holds that the questioned claims should be assigned nontrivial probability, and that assigning them nontrivial probability is sufficient to vindicate the case for longtermism.

For a few recent examples, here is Richard Yetter-Chappell:

[Thorstad] calls the arguments for the time of perils hypothesis “inconclusive”. But either way, the time of perils hypothesis can (and should) rationally shape our expected value judgments without needing to be conclusively established or even probable. Warranting some non-negligible credence would suffice. Because, again, even just a 1% chance of extremely high stakes establishes high stakes in expectation … To rule out high stakes, you need to establish that the most longtermist-friendly scenario or model is not just unlikely, but vanishingly so.

And here is the blogger Bentham’s Bulldog:

The expected value of existential risk reduction is—if not infinite, which I think it clearly is in expectation—extremely massive. If you think the Bostrom number of 10^52 happy people has a .01% chance of being right, then you’ll get 10^48 expected future people if we don’t go extinct, meaning reducing odds of existential risks by 1/10^20 creates 10^28 extra lives. So even if we think the Thorstad math means that getting the odds of going extinct this century down 1% matters 100 times less, it still easily swamps short-term interventions in expectation.
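As a quick check, here is a minimal sketch in Python reproducing the arithmetic in this passage (every figure is taken from the quote above; nothing here is an independent estimate):

```python
# Reproducing the arithmetic quoted above (illustrative figures only).
bostrom_population = 1e52  # Bostrom's 10^52 possible happy future people
credence = 1e-4            # the quoted ".01% chance of being right"
expected_population = bostrom_population * credence       # 10^48

risk_reduction = 1e-20     # quoted reduction in the odds of existential risk
expected_lives = expected_population * risk_reduction     # 10^28

# Even the quoted "100 times less" discount leaves 10^26 expected lives.
discounted = expected_lives / 100

print(f"{expected_population:.0e} {expected_lives:.0e} {discounted:.0e}")
# prints: 1e+48 1e+28 1e+26
```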

In each case, nontrivial probability is assigned to very high values for existential risk mitigation. Importantly, this is not done on the basis of substantial new argument for the challenged claims. For example, here is Bentham’s Bulldog on the case for assigning nontrivial probability to very large future populations:

There is some chance that the far future could contain stupidly large numbers of people. For instance, maybe we come up with some system that produces exponential growth with respect to happy minds relative to resources input. So, as you increase the amount of energy by some constant amount, you double the number of minds. I wouldn’t bet on such a scenario, but it’s not impossible. And if the odds are 1 in a trillion of such a scenario, then this clearly gets expected value much higher than the 10^52 number. Such a scenario potentially opens up numbers of happy minds like 2^1 quadrillion. There’s also some chance we’ll discover ways to bring about infinite happy minds—if the odds of this are non-zero, the expected number of future happy minds is infinity.
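The reason a one-in-a-trillion credence does no work against numbers like 2^1 quadrillion is that such numbers dwarf any finite discount. A minimal sketch, working in base-10 logarithms because 2^(10^15) overflows any floating-point type (the figures are those quoted above):

```python
import math

log10_minds = 1e15 * math.log10(2)  # log10 of 2^(1 quadrillion), ~3.01e14
log10_credence = -12                # "1 in a trillion"
log10_expected = log10_minds + log10_credence

print(log10_expected)       # ~3.01e14: the trillion-to-one discount is invisible
print(log10_expected > 52)  # True: still unimaginably far above Bostrom's 10^52
```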

And here is Yetter-Chappell on the time of perils:

We only need one credible model entailing extremely high stakes in order to establish high stakes in expectation. And “credible” here does not even require high credence … The time of perils hypothesis can (and should) rationally shape our expected value judgments without needing to be conclusively established or even probable. Warranting some non-negligible credence would suffice. Because, again, even just a 1% chance of extremely high stakes establishes high stakes in expectation.

What makes both arguments instances of the nontrivial probability gambit is that they do not provide significant new evidence for the challenged claims. Their primary argumentative move is to assign nontrivial probabilities without substantial new evidence.

I don’t think this is a good way to argue. I think that nontrivial probability assignments to strong and antecedently implausible claims should be supported by extensive argument rather than manufactured probabilities. Let me say a bit about why this is so.

3. The naming game

One of the best-known facts about high-stakes, low-probability claims is that we can almost always name more of them. Call this the naming game.

Perhaps the most familiar example of the naming game is the many gods objection to Pascal’s Wager. Pascal’s Wager says that you should assign nonzero probability to the existence of a God who will infinitely reward you for your faith. On this basis, it is argued, you should believe (or get yourself to believe) that God exists.

The many gods objection notes that we might equally well name hypotheses on which you will be infinitely punished for your faith. Perhaps there are two possible gods, but only one exists. Each will damn you for eternity if you believe in the other. Or perhaps there is only one god, but they find it amusing to send believers to hell and sinners to heaven. Or perhaps God punishes believers who aren’t named Carol (or maybe it was Darryl?). And, the objection goes, you should assign some nontrivial probability to each of these claims.

What is the right way to respond to the many gods objection? Not, I take it, by seeing who can name more or stronger low-probability claims to support their favored conclusion and then tallying up the claims named by each party. That is a never-ending game of objection-naming. The right way to respond to the many gods objection, if such a response exists, will probably have something to do with the relative likelihoods of each claim. (Matters are more complicated if the claimed values are genuinely infinite, but let us leave those complications aside for now.)

The point is that we can play the naming game for any number of hypotheses, such as the time of perils hypothesis. Consider, for example, the time of carols hypothesis, on which everyone in the future will be tied up and forced to listen to endless Christmas carols. Or consider the time of Carol hypothesis, on which a dictator named Carol will torture all living beings for a very long time.

The right response to the time of carols hypothesis, or the time of Carol hypothesis, would not be to name competing hypotheses about benevolent Darryls or barrels of Christmas cheer. The right response would be to argue that both claims are implausible (and indeed they are).

The point raised by the naming game is that there is no way to escape substantive argument about the comparative plausibility of competing claims about how the future might go. That a claim would, if true, make the future very good or very bad is not yet a reason to think that, in expectation, the future will be very good or very bad.

Once we move beyond the nontrivial probability gambit to engage in substantive argument, it is not obvious that claims such as the time of perils hypothesis will carry the day.

4. Very low probabilities are ubiquitous

Longtermists correctly note that the value of future scenarios can be very high. While there are on the order of 10^10 humans alive today, there could be 10^30, 10^40 or 10^50 future people. These are very large numbers, and their size matters.

What longtermists do not always note is that the probabilities of future scenarios can be very low. Often the nontrivial probability gambit invites us to assign quite substantial probabilities to very strong claims. For example, Yetter-Chappell writes:

Even just a 1% chance of extremely high stakes is sufficient to establish high stakes in expectation.

But just as the value of future scenarios can be extremely high, their probabilities can be very low. I wouldn’t assign a 1% chance to the time of carols hypothesis. I probably wouldn’t bat an eye at assigning a probability beneath 10^-100 to it. This is because the time of carols hypothesis is antecedently implausible, and nobody has ever offered enough evidence to substantially raise its probability.
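In expected-value terms, orders of magnitude of improbability cancel orders of magnitude of value one for one, so the probability side of the ledger matters exactly as much as the value side. A minimal sketch (the 10^52 payoff is Bostrom’s number; the 1% and 10^-100 priors are the figures discussed above):

```python
# Expected value in log space: log10(EV) = log10(p) + log10(value).
log10_value = 52         # a Bostrom-scale payoff

log10_p_generous = -2    # the 1% credence the gambit invokes
log10_p_carols = -100    # a time-of-carols-style prior

print(log10_value + log10_p_generous)  # 50: still astronomical
print(log10_value + log10_p_carols)    # -48: negligible in expectation
```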

Consider, now, a claim like the time of perils hypothesis. Some versions of this claim may be relatively more plausible. But the versions of the time of perils hypothesis underlying very high value estimates often make claims like the following.

First, levels of existential risk are right now startlingly high, for example 10-20% in this century.

Second, in a few short centuries, levels of existential risk will drop quickly and dramatically.

Third, this drop will be perhaps 4-5 orders of magnitude in levels of per-century risk.

Fourth, levels of existential risk will remain low (with no exceptions) for a very long time, such as a million or a billion years.

This, in turn, is coupled with ambitious hypotheses about the level of population and welfare growth possible within a time of perils scenario, and with comparatively low probability assignments to bad outcomes.

It is not at all obvious that we should assign a probability in the neighborhood of 1% to the conjunction of these claims. Nor is it obvious that this probability should be in the neighborhood of 10^-5 or 10^-15.

The reason for this is that low probabilities are ubiquitous. If we look at the conceptual space of strong claims about the long-term future, there are countless competing claims that could be made. Most, like the time of carols hypothesis, must be assigned very low probabilities as a matter of mathematical necessity, since competing hypotheses cannot be true together.

If we are going to assign nontrivial probabilities to strong claims, or especially to the conjunction of many strong claims, we need to make an argument for this probability assignment. The default probability assignment to such claims is not nontrivial. It is trivial.
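To see how quickly conjunctions shed probability, here is a minimal sketch with hypothetical per-claim credences (the credences are my illustrative inputs, not estimates defended in this post, and the claims are treated as independent for simplicity):

```python
# Hypothetical credences in each conjunct of the high-value scenario.
claims = {
    "risk is 10-20% this century": 0.30,
    "risk drops within a few centuries": 0.20,
    "the drop spans 4-5 orders of magnitude": 0.10,
    "risk stays low for a million-plus years": 0.10,
    "ambitious population and welfare growth": 0.20,
}

conjunction = 1.0
for claim, credence in claims.items():
    conjunction *= credence

print(f"{conjunction:.1e}")  # 1.2e-04: well below 1% even with generous inputs
```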

5. Shedding zeroes

My book manuscript Beyond longtermism pursues what I call the shedding zeroes strategy. This strategy begins with the longtermist claim that the best longtermist interventions are many orders of magnitude better than competing interventions. It then develops an overlapping series of challenges, each of which aims to shed orders of magnitude from the value of longtermist options.

The point of the shedding zeroes strategy is this. Very few positions admit of one-shot refutations. Generally, there are many strengths and weaknesses of a view. But even if one individual challenge to longtermism is not enough to scuttle the view, many such challenges strung together might well do so.

Longtermists do not just use the nontrivial probability gambit to defend a single claim. They use it many times, for example in response to decision-theoretic uncertainty (over fanaticism or risk-aversion) and in response to moral uncertainty (over competing nonconsequentialist duties).

For example, just three days after invoking the nontrivial probability gambit in response to my work on the time of perils hypothesis, high population estimates, and other quantities (note that this already involves several invocations of the gambit), the blogger Bentham’s Bulldog considers the case for fanaticism. In a post entitled “Fanaticism dominates given moral uncertainty,” he pulls the nontrivial probability gambit again:

Under uncertainty, fanatical considerations dominate. If you’re not sure if fanaticism is right, you should mostly behave as a fanatic.

The nontrivial probability gambit is not an infinitely repeatable get-out-of-jail-free card. It is a very expensive card to play. Played repeatedly, it rapidly drives down the value of longtermist interventions. Orders of magnitude are precious things, and even the most optimistic longtermist value estimates have only so many orders of magnitude to shed.

Can longtermists play the nontrivial probability gambit once? Perhaps. It depends on the numbers. Can they play it a dozen times? That is unlikely. Twice in a week? That’s how you go bankrupt.
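To make the bookkeeping vivid, here is a minimal sketch of how repeated plays shed orders of magnitude (the starting value echoes the 10^28 figure quoted in Section 2; the 1% per play is an illustrative and optimistic assumption):

```python
log10_value = 28     # e.g. the 10^28 expected lives quoted in Section 2
log10_per_play = -2  # each play of the gambit discounts by an optimistic 1%

for plays in (1, 3, 6, 12):
    print(plays, log10_value + plays * log10_per_play)
# 1 -> 26, 3 -> 22, 6 -> 16, 12 -> 4: a dozen plays nearly exhausts the estimate
```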

6. Beyond expected value: Fanaticism and stakes-sensitivity

The costs of the nontrivial probability gambit can be heightened if we move beyond expected value theory.

For example, one thing that effective altruists often note is that even if fanaticism is false, the threshold below which probabilities should be discounted might be very low. While the Marquis de Condorcet recommended discounting probabilities around 10^-5 and Borel recommended discounting at 10^-6, a recent defense of probability discounting by Bradley Monton adopts a threshold of 5 * 10^-16.

As longtermists rightly note, longtermist interventions may well have a probability substantially above 5 * 10^-16 of success. That is particularly true if those interventions are assessed collectively, rather than asking of each individual donation what chance it has of preventing existential catastrophe.

As a result, the bare invocation of anti-fanaticism may not be enough to scuttle longtermism. Here, for example, are Hilary Greaves and Will MacAskill:

The probabilities involved in the argument for longtermism might not be sufficiently extreme for any plausible degree of resistance to ‘fanaticism’ to overturn the verdicts of an expected-value approach, at least at the societal level. For example, it would not seem ‘fanatical’ to take action to reduce a 1 in 1 million risk of dying, as one incurs from cycling 35 miles or driving 500 miles (respectively, by wearing a helmet or wearing a seat belt (Department of Transport 2020)). But it seems that society can positively affect the very long-term future with probabilities well above this threshold. For instance … we suggested a lower bound of 1 in 100,000 on a plausible credence that $1 billion of carefully targeted spending would avert an existential catastrophe from artificial intelligence.

This reply may be more plausible when the only source of uncertainty is ordinary empirical uncertainty. But when ordinary empirical uncertainty is coupled with many quite radical empirical claims (such as the time of perils hypothesis and high levels of near-term existential risk) as well as uncertain philosophical claims (such as the correct decision theory or deontic theory), the probability of many of the longtermist’s best-case scenarios can easily dip below even a threshold as permissive as 5 * 10^-16. As such, pulling the nontrivial probability gambit many times makes it harder to square anti-fanaticism with longtermism.
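Here is a minimal sketch of the multiplication at issue (every credence below is a hypothetical placeholder; only the 1-in-100,000 empirical bound and the 5 * 10^-16 threshold come from the text above):

```python
# Hypothetical credences; products of many such terms fall fast.
p_empirical = 1e-5    # Greaves & MacAskill's lower bound for $1bn of spending
p_perils = 1e-4       # a radical empirical claim (time of perils)
p_low_forever = 1e-3  # risk stays low for a very long time
p_decision = 1e-2     # favorable resolution of the fanaticism debate
p_deontic = 1e-2      # favorable resolution of deontic objections

best_case = p_empirical * p_perils * p_low_forever * p_decision * p_deontic

monton_threshold = 5e-16
print(f"{best_case:.0e}", best_case < monton_threshold)  # 1e-16 True
```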

A similar point arises in response to deontic objections, which cite competing duties beyond duties of beneficence towards future people. Again, following Greaves and MacAskill, a standard strategy is to make the stakes-sensitivity argument:

(P1) When the axiological stakes are very high, there are no serious side-constraints, and the personal prerogatives are comparatively minor, one ought to choose a near-best option.

(P2) In the most important decision situations facing agents today, the axiological stakes are very high, there are no serious side-constraints, and the personal prerogatives are comparatively minor.

(C) So, in the most important decision situations facing agents today, one ought to choose a near-best option.

On the standard ex ante reading of the stakes-sensitivity argument, (P2) relies on the claim that the expected value of longtermist interventions is not merely better, but much, much better than that of competing interventions.

This way of arguing for longtermism allows fewer uses of the nontrivial probability gambit, because we need to show not just that longtermist interventions continue to be better than competitors given uncertainty, but that they continue to be much better. Again, danger lurks.

The lesson of both examples is that the nontrivial probability gambit admits of even fewer uses when it is deployed to address competing normative views.

7. Evidence

This post explored the nontrivial probability gambit. I have argued that many claims, such as the time of perils hypothesis, satisfy:

  1. (Antecedent Implausibility) The claim in question is not very antecedently plausible.
  2. (Insufficient Evidence) Insufficient evidence has been offered to support the claim in question.

In defense, some longtermists pull the nontrivial probability gambit. They do not question Antecedent Implausibility or Insufficient Evidence, but rather argue that any nontrivial probability assignment to the hypotheses in question is enough to vindicate longtermism.

We saw that the nontrivial probability gambit faces challenges.

We saw in Section 3 that some degree of evidence is necessary, or else we are merely playing the naming game: naming scenarios in which a purported action would be very good, or very bad.

We saw in Section 4 that low probabilities are ubiquitous. It is not at all surprising to assign very low probabilities to strong, implausible and insufficiently evidenced claims about the long-term future. Most claims of this form must, as a matter of mathematical necessity, be given very low probabilities.

We saw in Section 5 that even if the nontrivial probability gambit works once, it cannot be repeated many times without great cost. And we saw in Section 6 that the number of permissible repetitions drops further in many use cases.

What, then, would I have longtermists do in place of the nontrivial probability gambit? The answer is simple. I would like to see more and better direct arguments for the challenged claims, arguments on the basis of which it would be appropriate to assign those claims nontrivial probabilities.


Note: This essay is cross-posted from the blog Reflective Altruism written by David Thorstad. It was originally published there on December 26, 2025. The account making this post has no affiliation with Reflective Altruism or David Thorstad. You can leave a comment on this post on Reflective Altruism here. You can read the rest of the post series here.


Comments (3)

This sort of "many gods"-style response is precisely what I was referring to with my parenthetical: "unless one inverts the high stakes in a way that cancels out the other high-stakes possibility."

I don't think that dystopian "time of carols" scenarios are remotely as credible as the time of perils hypothesis. If someone disagrees, then certainly resolving that substantive disagreement would be important for making dialectical progress on the question of whether x-risk mitigation is worthwhile or not.

What makes both arguments instances of the nontrivial probability gambit is that they do not provide significant new evidence for the challenged claims. Their primary argumentative move is to assign nontrivial probabilities without substantial new evidence.

I don’t think this is a good way to argue. I think that nontrivial probability assignments to strong and antecedently implausible claims should be supported by extensive argument rather than manufactured probabilities.

I'd encourage Thorstad to read my post more carefully and pay attention to what I am arguing there. I was making an in-principle point about how expected value works, highlighting a logical fallacy in Thorstad's published work on this topic. (Nothing in the paper I responded to seemed to acknowledge that a 1% chance of the time of perils would suffice to support longtermism. He wrote about the hypothesis being "inconclusive" as if that sufficed to rule it out, and I think it's important to recognize that this is bad reasoning on his part.)

Saying that my "primary argumentative move is to assign nontrivial probabilities without substantial new evidence" is poor reading comprehension on Thorstad's part. Actually, my primary argumentative move was explaining how expected value works. The numbers are illustrative, and suffice for anyone who happens to share my priors (or something close enough). Obviously, I'm not in that post trying to persuade someone who instead thinks the correct probability to assign is negligible. Thorstad is just radically misreading what my post is arguing.

(What makes this especially strange is that, iirc, the published paper of Thorstad's to which I was replying did not itself argue that the correct probability to assign to the ToP hypothesis is negligible, but just that the case for the hypothesis is "inconclusive". So it sounds like he's now accusing me of poor epistemics because I failed to respond to a different paper than the one he actually wrote? Geez.)

Obviously David, as a highly trained moral philosopher with years of engagement with EA, understands how expected value works, though. I think the dispute must really be about whether to assign the time of perils very low credence. (A dispute where I would probably side with you if "very low" is below, say, 1 in 10,000.)

There's "understanding" in the weak sense of having the info tokened in a belief-box somewhere, and then there's understanding in the sense of never falling for tempting-but-fallacious inferences like those I discuss in my post.

Have you read the paper I was responding to? I really don't think it's at all "obvious" that all "highly trained moral philosophers" have internalized the point I make in my blog post (that was the whole point of my writing it!), and I offered textual support. For example, Thorstad wrote: "the time of perils hypothesis is probably false. I conclude that existential risk pessimism may tell against the overwhelming importance of existential risk mitigation." This is a strange thing to write if he recognized that merely being "probably false" doesn't suffice to threaten the longtermist argument! 

(Edited to add: the obvious reading is that he's making precisely the sort of "best model fallacy" that I critique in my post: assessing which empirical model we should regard as true, and then determining expected value on the basis of that one model. Even very senior philosophers, like Eric Schwitzgebel, have made the same mistake.)

Going back to the OP's claims about what is or isn't "a good way to argue," I think it's important to pay attention to the actual text of what someone wrote. That's what my blog post did, and it's annoying to be subject to criticism (and now downvoting) from people who aren't willing to extend the same basic courtesy to me.
