A while back (as I've just been reminded by a discussion on another thread), David Thorstad wrote a bunch of posts critiquing the idea that small reductions in extinction risk have very high value, because the expected number of people who will exist in the future is very high: https://reflectivealtruism.com/category/my-papers/mistakes-in-moral-mathematics/. The arguments are quite complicated, but the basic points are that the expected number of people in the future is much lower than longtermists estimate because:
-Longtermists tend to neglect the fact that even if your intervention blocks one extinction risk, there are others it might fail to block. Surviving for billions (or more) of years likely requires driving extinction risk very low and keeping it there for a very long time, so if we are unlikely to survive that long even conditional on longtermist interventions against one extinction risk succeeding, the value of preventing extinction (conditional on more happy people being valuable) is much lower (see the rough sketch after this list).
-Longtermists tend to assume that future populations will be roughly as large as the available resources can support. But ever since the Industrial Revolution, as countries get richer, their fertility rates have fallen further and further until they are below replacement. So we can't just assume future population sizes will be near the limits of what the available resources will support.
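To put the first point in rough quantitative terms, here is a toy illustration (my own numbers, not anything from Thorstad's posts): with a constant per-century extinction risk r, the chance of lasting N centuries is (1-r)^N and the expected future is only about 1/r centuries, so reaching "billions of years" requires per-century risk to be pushed extraordinarily low and kept there.

```python
# Toy illustration (my own numbers, not Thorstad's): with a constant
# per-century extinction risk r, the chance of surviving N centuries is
# (1 - r)**N and the expected number of future centuries is roughly 1/r.

def survival_probability(r, n_centuries):
    """Probability of surviving n_centuries at constant per-century risk r."""
    return (1 - r) ** n_centuries

def expected_centuries(r):
    """Expected number of future centuries at constant risk r (geometric mean)."""
    return (1 - r) / r

for r in [0.01, 0.001, 1e-6]:
    p = survival_probability(r, 1_000_000)
    print(f"r={r}: P(survive a million centuries) = {p:.3g}, "
          f"expected future centuries ≈ {expected_centuries(r):,.0f}")
```

Even a 0.1% per-century risk gives an expected future of only about a thousand centuries; only risks on the order of one in a million per century make a million-century future at all likely.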
Thorstad goes on to argue that this weakens the case for longtermism generally, not just the value of extinction risk reductions, since the case for longtermism is that future expected population is many times the current population, or at least could be given plausible levels of longtermist extinction risk reduction effort. He also notes that if he can find multiple common mistakes in longtermist estimates of expected future population, we should expect that those estimates might be off in other ways. (At this point I would note that they could also be missing factors that bias their estimates of future population size down rather than up: Thorstad probably hasn't been looking for those with the same level of effort.)
I am interested in whether there has been any kind of "official" or quasi-official response to Thorstad on these points from leading EA orgs, or at least from leading individual longtermists. (I know there has been discussion in forum comments, but I mean something more than that.) After all, 80k has now effectively gone all in on AI risk as the cause, partly on the basis of longtermist arguments (though they've always been a primarily longtermist org, I think), and Open Phil also spends a lot of money on projects that arguably are only amongst the most effective uses of the money if longtermism is true. (It's possible, I guess, that AI safety work could be high expected value per $ just for saving current lives.) Thorstad used to work for the Global Priorities Institute, and I think it is great that they were prepared to employ someone to repeatedly and harshly critique the famous theory they are most associated with. But there's not much point in EA soliciting serious criticism if people then mostly ignore it.
I'm actually going to disagree with your initial premise - that "the basic points are that the expected number of people in the future is much lower than longtermists estimate" - because, at least in the Reflective Altruism blog series, I don't see that as being the main objection David has to (Strong) Longtermism. Instead, I think he argues that the interventions longtermists support require additional hypotheses (the Time of Perils) which are probably false, and that the empirical evidence longtermists give for their existential pessimism is often non-robust on further inspection.[1] Of course my understanding is not complete, David himself might frame it differently, etc etc.
One interesting result from his earlier 'Existential risk pessimism and the time of perils' paper (derived on a simple model, though he extends the results to more complex ones) is that people with low x-risk estimates should be longtermists about value, while those with high x-risk estimates should be focused on the short term - which is basically the opposite of what we see happening in real life. The best way out for the longtermist, he argues, is to believe in the 'time of perils' hypothesis. I think the main appeals to this being the case are either a) interstellar colonisation giving us existential security, so that our moral value isn't tethered to one planet,[2] or b) aligned superintelligence giving us unprecedented control over the universe and the ability to defuse any sources of existential risk. But many of those working on existential AI risk are actually very pessimistic about the prospects for alignment, so if they are longtermist,[3] why aren't they retiring from technical AI Safety and donating to AMF? More disturbingly, are longtermists just using the 'time of perils' belief to backwards-justify their prior beliefs that interventions in things like AI are the utilitarian-optimal ones to support? I haven't seen a good longtermist case answering these questions, which is not to say that one doesn't exist.
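To give a flavour of that result, here's a toy sketch (my own simplification, not the exact model in the paper): with a constant per-century risk, the expected value of halving this century's risk is capped at roughly half a century's worth of value no matter how pessimistic you are about the level of risk, whereas if risk collapses to a tiny background level after this century (a 'time of perils'), the same intervention is worth orders of magnitude more.

```python
# Toy model (my own simplification, not the exact formalism in Thorstad's
# paper): each century survived contributes one unit of value, and extinction
# can occur at the end of each century with some probability.

def expected_value(first_century_risk, later_risk, horizon=10_000):
    """Expected total value over `horizon` centuries."""
    survival, total = 1.0, 0.0
    for century in range(horizon):
        risk = first_century_risk if century == 0 else later_risk
        survival *= 1 - risk   # probability of making it through this century
        total += survival      # one unit of value per century survived
    return total

def gain_from_halving_this_century(r_now, r_later):
    """Extra expected value from halving this century's extinction risk."""
    return expected_value(r_now / 2, r_later) - expected_value(r_now, r_later)

# Constant risk: the gain is ~0.5 centuries of value whether you're a
# pessimist (r = 0.2) or an optimist (r = 0.01) - never astronomical.
for r in (0.2, 0.01):
    print(f"constant r={r}: gain = {gain_from_halving_this_century(r, r):.2f}")

# Time of perils: survive this century and risk falls to ~1e-6 per century,
# so the same halving is worth ~1,000 centuries of value on this horizon.
print(f"time of perils: gain = {gain_from_halving_this_century(0.2, 1e-6):.0f}")
```

On this toy picture, the astronomical value of x-risk reduction really does seem to stand or fall with the time of perils hypothesis, which is why so much hangs on it.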
Furthermore, in terms of responses from EA itself, what's interesting is that when you look at the top uses of the Longtermism tag on the Forum, all of the top 8 were made ~3 years ago, and only 3 of the top 20 are from within the last 3 years. Longtermism isn't used a lot even amongst EAs any more - likely the result of negative responses from the broader intelligentsia during the 2022 soft launch, and then the incredibly toxic fallout from the FTX collapse shortly after the release of WWOTF. So while I find @trammell's comment below illuminating in some respects about why there might be fewer responses than expected, I think it is sociologically wrong about the overarching reasons - I think longtermism doesn't have much momentum in academic philosophical circles right now. I'm not plugged into the GPI-sphere though, so I could be wrong about this.
So my answer to your initial question is "no", if you mean 'something big published post-Thorstad that responds directly or implicitly to him from a longtermist perspective'. Furthermore, were someone to do so, or to point at a case already made (like The Case for Strong Longtermism), I'd probably just reject many of the premises that give the case legs in the first place, such as that it's reasonable to use risk-neutral expected-value reasoning about the very long-run future as a guide to moral action. Other objections to longtermism I am sympathetic to are those from Eric Schwitzgebel (here, here), among others. I don't think this is David's perspective though - I think he believes the empirical warrant for the claims isn't there, but that he would support longtermist policies if he believed they could be supported this way.
I'm also somewhat disturbed by the implication that some proportion of the EA Brain-Trust, and/or those running major EA/AI Safety/Biorisk organisations, are actually still committed longtermists or justify their work in longtermist terms. If so, they should make sure this is known publicly and not hide it. If you think your work on AI Policy is justified on strong longtermist grounds, then I'd love to see the model used for that - the parameters used for the length of the time of perils, the marginal difference to x-risk the policy would make, and the evidence backing up those estimates. If 80k have shifted to be AI Safety focused because of longtermist philosophical commitments, then let's see those commitments! The inability of many longtermist organisations to do this is a sign of what Thorstad calls the regression to the inscrutable,[4] which I think is one of his stronger critiques.
Disagreement about future population estimates would be a special case of the latter here
In The Epistemic Challenge to Longtermism, Tarsney notes that:
Note these considerations don't apply to you if you're not an impartial longtermist, but then again, if many people working in this area don't count themselves as longtermists, it certainly seems like a poor sign for longtermism
Term coined in this blog post about WWOTF
A general good rule for life
(I am not a time-invariant-risk-neutral-totally-impartial-utilitarian, for instance)
Fair enough, I think the lack of a direct response has been due to an interaction between the two things. At first, people familiar with the existing arguments didn't see much to respond to in David's arguments, and figured most people would see through them. Later, when David's arguments had gotten around more and it became clear that a response would be worthwhile (and for that matter when new arguments had been made which were genuinely novel), the small handful of people who had been exploring the case for longtermism had mostly moved on to other proje...