A while back (as I've just been reminded by a discussion on another thread), David Thorstad wrote a bunch of posts critiquing the idea that small reductions in extinction risk have very high value, because the expected number of people who will exist in the future is very high: https://reflectivealtruism.com/category/my-papers/mistakes-in-moral-mathematics/. The arguments are quite complicated, but the basic points are that the expected number of people in the future is much lower than longtermists estimate because:
-Longtermists tend to neglect the fact that even if your intervention blocks one extinction risk, there are others it might fail to block. Surviving for billions (or more) of years likely requires driving extinction risk very low for a long period of time, and if we are unlikely to survive that long even conditional on longtermist interventions against one extinction risk succeeding, then the value of preventing extinction (conditional on more happy people being valuable) is much lower.
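The compounding-risk point can be made concrete with a toy calculation (my own illustration, not from Thorstad's posts): if extinction risk is roughly independent across periods, survival probability decays geometrically, so long-run survival is acutely sensitive to the per-period risk.

```python
import math

def survival_probability(per_century_risk: float, centuries: int) -> float:
    """Probability of surviving `centuries` consecutive centuries, each
    carrying an independent extinction risk of `per_century_risk`."""
    # Compute (1 - r)^n in log space to avoid floating-point underflow.
    return math.exp(centuries * math.log1p(-per_century_risk))

# A billion years is 10 million centuries. Even a one-in-ten-million
# extinction risk per century leaves only ~37% survival odds over that span:
print(survival_probability(1e-7, 10_000_000))   # ~0.368

# A "merely" one-in-a-thousand risk per century makes billion-year
# survival astronomically unlikely:
print(survival_probability(1e-3, 10_000_000))   # effectively 0
```

The numbers are arbitrary, but they show why blocking a single risk while background risk stays nontrivial does little for expected long-run survival.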
-Longtermists tend to assume that future populations will be roughly as large as the available resources can support. But ever since the industrial revolution, as countries get richer, their fertility rates have fallen, eventually dropping below replacement. So we can't just assume future population sizes will be near the limits of what the available resources will support.
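To see why persistent sub-replacement fertility matters, here is a rough sketch (my own illustration, using the conventional ~2.1 replacement-level total fertility rate): population scales geometrically across generations, so fertility even modestly below replacement compounds into large declines.

```python
def population_multiplier(tfr: float, generations: int,
                          replacement: float = 2.1) -> float:
    """Rough factor by which population size changes if total fertility
    rate `tfr` persists for `generations` generations, relative to the
    replacement rate (~2.1 children per woman)."""
    return (tfr / replacement) ** generations

# If a TFR of 1.5 persisted for 10 generations (roughly 250-300 years),
# population would shrink to a few percent of its starting size:
print(population_multiplier(1.5, 10))  # ~0.035
```

This is of course far too crude for real demographic projection (it ignores age structure, migration, and mortality), but it shows why one can't extrapolate current trends and still expect resource-limited populations.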
Thorstad goes on to argue that this weakens the case for longtermism generally, not just the value of extinction risk reductions, since the case for longtermism is that future expected population is many times the current population, or at least could be given plausible levels of longtermist extinction risk reduction effort. He also notes that if he can find multiple common mistakes in longtermist estimates of expected future population, we should expect that those estimates might be off in other ways. (At this point I would note that they could also be missing factors that bias their estimates of future population size down rather than up: Thorstad probably hasn't been looking for those with the same level of effort.)
I am interested in whether there has been any kind of "official" or quasi-official response to Thorstad on these points from leading EA orgs, or at least from leading individual longtermists. (I know there has been discussion in forum comments, but I mean something more than that.) After all, 80k has now effectively gone all in on AI risk as the cause, partly on the basis of longtermist arguments (though they've always been a primarily longtermist org, I think), and Open Phil also spends a lot of money on projects that are arguably only amongst the most effective uses of the money if longtermism is true. (It's possible, I guess, that AI safety work could be high expected value per $ just for saving current lives.) Thorstad used to work for the Global Priorities Institute, and I think it is great that they were prepared to employ someone to repeatedly and harshly critique the famous theory they are most associated with. But there's not much point in EA soliciting serious criticism if people then mostly ignore it.
Carl Shulman's response here addresses objection 1. You can also see the tag for the time of perils hypothesis for a bit more discussion.
On 2, the structure of the objection is similar to Shulman's response on 1: we're not vanishingly unlikely to reach very large (or even near-maximal) population sizes. For instance, a variety of people (including longtermists) are interested in ultimately creating vast numbers of digital minds or other sources of value, and there aren't clearly opposing groups with direct preferences against this happening. I don't see a very strong analogy between current low fertility and long-run cosmic resource utilization. At a more basic level, current low fertility isn't stable: even if the status quo continues for a long time (without, e.g., the creation of powerful AI resulting in much faster technological progress), selection will likely lead to fertility rates increasing at some point in the future unless this is actively suppressed.
Thanks for the good points, Ryan.
I can see the annual probability of the absolute value of the welfare of Earth-originating beings dropping to 0 becoming increasingly low, and their population increasingly large. However, I do not think this means decreasing the nearterm risk of human extinction is more cost-effective than donating to GiveWell's top charities, or organisations working on invertebrate welfare.
Longtermists often estimate the expected value of the future from EV = p*V = "probability of reaching existential safety"*"expected value of the futur...
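The EV = p*V decomposition mentioned above can be sketched as follows (a minimal illustration with variable names of my own choosing; real longtermist estimates involve far more structure):

```python
def expected_value_of_future(p_safety: float, v_future: float) -> float:
    """EV = p * V: the probability of reaching existential safety times
    the expected value of the future conditional on reaching it."""
    return p_safety * v_future

# The product structure means EV scales linearly in each factor: halving
# either the safety probability or the conditional value halves EV.
print(expected_value_of_future(0.5, 1e16))  # 5e+15
```

Critiques like Thorstad's amount to arguing that both factors, as longtermists estimate them, are substantially too high.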