A while back (as I've just been reminded by a discussion on another thread), David Thorstad wrote a bunch of posts critiquing the idea that small reductions in extinction risk have very high value, because the expected number of people who will exist in the future is very high: https://reflectivealtruism.com/category/my-papers/mistakes-in-moral-mathematics/. The arguments are quite complicated, but the basic points are that the expected number of people in the future is much lower than longtermists estimate because:
- Longtermists tend to neglect the fact that even if an intervention blocks one extinction risk, there are others it might fail to block. Surviving for billions (or more) of years likely requires driving extinction risk very low for a very long time; if we are unlikely to survive that long even conditional on a longtermist intervention against one extinction risk succeeding, then the value of preventing extinction (conditional on more happy people being valuable) is much lower.
- Longtermists tend to assume that future populations will be roughly as large as the available resources can support. But ever since the Industrial Revolution, as countries get richer their fertility rates have fallen, eventually dropping below replacement. So we can't simply assume future population sizes will be near the limits of what the available resources can support.
Thorstad goes on to argue that this weakens the case for longtermism generally, not just the value of extinction risk reductions, since the case for longtermism is that future expected population is many times the current population, or at least could be given plausible levels of longtermist extinction risk reduction effort. He also notes that if he can find multiple common mistakes in longtermist estimates of expected future population, we should expect that those estimates might be off in other ways. (At this point I would note that they could also be missing factors that bias their estimates of future population size down rather than up: Thorstad probably hasn't been looking for those with the same level of effort.)
I am interested in whether there has been any kind of "official" or quasi-official response to Thorstad on these points from leading EA orgs, or at least from leading individual longtermists. (I know there has been discussion in forum comments, but I mean something more than that.) After all, 80k has now effectively gone all in on AI risk as the cause, partly on the basis of longtermist arguments (though they've always been a primarily longtermist org, I think), and Open Phil also spends a lot of money on projects that arguably are only amongst the most effective uses of the money if longtermism is true. (It's possible, I guess, that AI safety work could be high expected value per $ just for saving current lives.) Thorstad used to work for the Global Priorities Institute, and I think it is great that they were prepared to employ someone to repeatedly and harshly critique the famous theory they are most associated with. But there's not much point in EA soliciting serious criticism if people then mostly ignore it.
I don't see how Thorstad's claim that the Space Guard Survey is a "special case" of a strong longtermist priority being reasonable (and that other longtermist proposals do not have the same justification) is "rebutted" by the fact that Greaves and MacAskill use the Space Guard Survey as their example. The actual scope of longtermism is clearly not restricted to observing exogenous risks with predictable regularity and identifiable, sustainable solutions, and thus it is subject, at least to some extent, to the critiques Thorstad identified.
Even the case for the Space Guard Survey looks a lot weaker than Thorstad granted if one considers that the x-risk from AI in the near term is fairly significant, which most longtermists seem to agree with. Suddenly, instead of having favourable odds of enabling a vast future, it simply observes asteroids[1] for three decades before AI becomes so powerful that human ability to observe asteroids is irrelevant, and any positive value it supplies is plausibly swamped by alternatives like researching AI that doesn't need big telescopes to predict asteroid trajectories and can prevent unfriendly AI and other x-risks. The problem, of course, is that we don't know what that best-case solution looks like,[2] and most longtermists think many areas of spending on AI look harmful rather than near best case, but don't have high certainty (or any consensus) about which areas those are. This is Thorstad's 'washing out' argument.
As far as I can see, Thorstad's core argument is that even if it's [trivially] true that the theoretical best possible course of action has most of its consequences in the future, we don't know what that course of action is, or even what near-best solutions are. Given that most longtermists don't think the canonical asteroid example is the best possible course of action, and there's widespread disagreement over whether actions like accelerating "safe" AI research are increasing or reducing risk, I don't see his concession that the Space Guard Survey might have merit under some assumptions as undermining that.
[1] Ex post, we know that so far it has observed asteroids that haven't hit us and won't in the foreseeable future.
[2] In theory it could even involve saving a child who grows up to be an AI researcher from malaria. This is improbable, but when you're dealing with unpredictable phenomena with astronomical payoffs...