A while back (as I've just been reminded by a discussion on another thread), David Thorstad wrote a bunch of posts critiquing the idea that small reductions in extinction risk have very high value, because the expected number of people who will exist in the future is very high: https://reflectivealtruism.com/category/my-papers/mistakes-in-moral-mathematics/. The arguments are quite complicated, but the basic points are that the expected number of people in the future is much lower than longtermists estimate because:
-Longtermists tend to neglect the fact that even if your intervention blocks one extinction risk, there are others it might fail to block. Surviving for billions (or more) of years likely requires driving extinction risk very low for a very long time, and if we are unlikely to survive that long even conditional on a longtermist intervention against one particular risk succeeding, then the value of preventing extinction (conditional on more happy people being valuable) is much lower. (A toy sketch of this point follows the list.)
-Longtermists tend to assume that future populations will be roughly as large as the available resources can support. But ever since the Industrial Revolution, as countries get richer their fertility rates have fallen further and further, eventually dropping below replacement. So we can't just assume future population sizes will be near the limits of what the available resources could support.
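To illustrate the structure of the first point with a toy model (my own framing, not Thorstad's exact setup or numbers): suppose each century we survive is worth a fixed value $v$, and each century carries an independent extinction risk $r$. Then the expected value of the future is

$$\sum_{t=1}^{\infty} (1-r)^t \, v \;=\; \frac{1-r}{r}\,v \;\approx\; \frac{v}{r},$$

so at, say, $r = 10\%$ per century we get only about nine centuries' worth of value in expectation, nowhere near billions of years. Blocking one particular risk in one particular century adds little value unless overall risk is also driven very low and kept there for a very long time.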
Thorstad goes on to argue that this weakens the case for longtermism generally, not just the value of extinction risk reduction, since that case rests on the claim that expected future population is many times the current population, or at least could be given plausible levels of longtermist effort on extinction risk reduction. He also notes that if he can find multiple common mistakes in longtermist estimates of expected future population, we should expect those estimates to be off in other ways too. (At this point I would note that they could also be missing factors that bias their estimates of future population size down rather than up: Thorstad probably hasn't been looking for those with the same level of effort.)
I am interested in whether there has been any kind of "official" or quasi-official response to Thorstad on these points from leading EA orgs, or at least from leading individual longtermists. (I know there has been discussion in forum comments, but I mean something more than that.) After all, 80k has now effectively gone all in on AI risk as the cause, partly on the basis of longtermist arguments (though they've always been a primarily longtermist org, I think), and Open Phil also spends a lot of money on projects that arguably are only amongst the most effective uses of the money if longtermism is true. (It's possible, I guess, that AI safety work could be high expected value per $ just for saving current lives.) Thorstad used to work for the Global Priorities Institute, and I think it is great that they were prepared to employ someone to repeatedly and harshly critique the famous theory they are most associated with. But there's not much point in EA soliciting serious criticism if people then mostly ignore it.
I know David well, and David, if you're reading this, apologies if this comes across as a bit uncharitable. But as far as I've ever been able to see, every important argument he makes in any of his papers against longtermism or the astronomical value of x-risk reduction was refuted pretty unambiguously before it was written. An unfortunate feature of an objection that comes after its own rebuttal is that people familiar with the arguments will sometimes skim it, say "weird, nothing new here", and move on, while people encountering it for the first time will think no response has been made.
For example,[1] I think the standard response to his arguments in "The Scope of Longtermism" would just be the Greaves and MacAskill "Case for Strong Longtermism".[2] The Case, in a nutshell, is that by giving to the Planetary Society or B612 Foundation to improve our asteroid/comet monitoring, we do more than 2x as much good in the long term, even on relatively low estimates of the value of the future, than giving to the top GiveWell charity does in the short term. So if you think GiveWell tells us the most cost-effective way to improve the short term, you have to think that, whenever your decision problem is "where to give a dollar", the overall best action does more good in the long term than in the short term.
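To spell out the comparison in symbols (my notation, as a rough sketch of the structure of the Case rather than Greaves and MacAskill's own presentation): let $b$ be the short-term good done per dollar given to the top GiveWell charity, let $\Delta p$ be the reduction in the probability of premature extinction per dollar given to improved asteroid/comet monitoring, and let $V$ be the expected value of the long-term future conditional on avoiding that catastrophe. The claim is that, even on relatively low estimates of $V$,

$$\Delta p \cdot V \;>\; 2b.$$

If that's right, and if giving to the top GiveWell charity is the most cost-effective way to do short-term good, then in any "where to give a dollar" decision problem there is an available action (giving to the Planetary Society or B612 Foundation) whose long-term benefits exceed the short-term benefits of any alternative, which is what the Case needs.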
You can certainly disagree with this argument on various grounds--e.g. you can think that non-GiveWell charities do much more good in the short term, or that the value of preventing extinction by asteroid is negative, or for that matter that the Planetary Society or B612 Foundation will just steal the money--but not on the basis of the arguments David offers in "The Scope of Longtermism".
His argument [again, in a nutshell] is that there are three common "scope-limiting phenomena", i.e. phenomena that make it the case that the overall best action does more good in the long term than in the short term in relatively few decision situations. These are, roughly: (1) most actions' long-term effects fail to persist; (2) we typically have only weak evidence about actions' long-term impacts; and (3) we are rarely in a position to identify an action whose long-term benefits are large.
He grants that when Congress was deciding what to do with the money that originally went into an asteroid monitoring program called the Spaceguard Survey, longtermism seems to have held. So he's explicitly not relying on an argument that there isn't much value in trying to prevent x-risk from asteroids. Nevertheless, he never addresses the natural follow-up: whether contributing to improved asteroid/comet monitoring today is likewise a case where longtermism holds.
Re (1), he cites Kelly (2019) and Sevilla (2021) as reasons to be skeptical of claims from the "persistence" literature about various distant cultural, technological, or military developments having had long-term effects on the arc of history. Granting this doesn't affect the Case that whenever your decision problem is "where to give a dollar", the overall best action does more good in the long term than in the short term.[3]
Re (2), he says that we often have only weak evidence about a given action's impact on the long-term future. He defends this by pointing out (a) that attempts to forecast actions' impacts on a >20 year timescale have a mixed track record, (b) that professional forecasters are often skeptical of the ability to make such forecasts, and (c) that the overall impact of an action on the value of the world is typically composed of its impacts on various other variables (e.g. the number of people and how well-off they are), and since it's hard to forecast any of these components, it's typically even harder to forecast the action's impact on value itself. None of this applies to the Case. We can grant that most actions have hard-to-predict long-term consequences, and that forecasters would recognize this, without denying that in most decision situations (including all those where the question is where to give a dollar), there is one action whose long-term benefits are more than 2x as great as the short-term benefits of giving to the top GiveWell charity: namely, giving to the Planetary Society or B612 Foundation. There is no mixed track record of forecasting the >20 year impact of asteroid/comet monitoring, no evidence that professional forecasters are skeptical of making such forecasts, and, in granting the Spaceguard Survey case, he implicitly concedes that the complexity of forecasting long-term impact on value isn't an issue here.
Re (3), again, the claim the Case makes is precisely that we have identified one such action: giving to improve asteroid/comet monitoring.
[1] I also emailed him about an objection to his "Existential Risk Pessimism and the Time of Perils" in November and followed up in February, but he's responded only to say that he's been too busy to consider it.
[2] Which he cites! Note that Greaves and MacAskill defend a stronger view than the one I'm presenting here, in particular that all near-best actions do much more good in the long term than in the short term. But what David argues against is the weaker view I lay out here.
[3] Incidentally, he cites the fact that "Hiroshima and Nagasaki returned to their pre-war population levels by the mid-1950s" as an especially striking illustration of lack of persistence. But as I mentioned to him at the time, this is compatible with the possibility that those regions are on some population path and the bombing simply "jumped us back in time" on it, so that from now on the cities always have about as many people at time t as they would otherwise have had at time t+10. If so, bombing them could still have most of its effects in the future.
Thanks for saying a bit more about how you're interpreting "scope of longtermism". To be as concrete as possible, what I'm assuming is that we both read Thorstad as saying that "a philanthropist giving money away so as to maximize the good from a classical utilitarian perspective" is typically outside the scope of decision-situations that are longtermist, but let me know if you read him differently on that. (I think it's helpful to focus on this case because it's simple, and it's the one G&M most clearly argue is longtermist on the basis of those two premises.)