A while back (as I've just been reminded by a discussion on another thread), David Thorstad wrote a bunch of posts critiquing the idea that small reductions in extinction risk have very high expected value on the grounds that the number of people who will exist in the future is, in expectation, astronomically large: https://reflectivealtruism.com/category/my-papers/mistakes-in-moral-mathematics/. The arguments are quite complicated, but the basic points are that the expected number of people in the future is much lower than longtermists estimate because:
-Longtermists tend to neglect the fact that even if your intervention blocks one extinction risk, there are other risks it might fail to block. Surviving for billions (or more) of years likely requires driving total extinction risk very low for a very long time, and if we are unlikely to survive that long even conditional on a longtermist intervention against one risk succeeding, then the value of preventing that extinction risk (conditional on more happy people being valuable) is much lower. (A rough sketch of how per-period risk compounds follows this list.)
-Longtermists tend to assume that future populations will be roughly as large as the available resources can support. But ever since the Industrial Revolution, fertility rates have fallen as countries get richer, eventually dropping below replacement. So we can't just assume future population sizes will be near the limits of what the available resources will support.
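(To make the compounding point in the first bullet concrete, here's a minimal sketch in Python. The per-century risk figures are made up purely for illustration; they are not Thorstad's numbers or anyone else's estimates.)

```python
# Hypothetical illustration: how a constant per-century extinction risk compounds.
# None of these risk numbers are real estimates.

def survival_probability(per_century_risk: float, centuries: int) -> float:
    """Chance of surviving `centuries` consecutive centuries, assuming an
    independent, constant extinction risk each century."""
    return (1.0 - per_century_risk) ** centuries

# Surviving ~100 million years (10^6 centuries) requires per-century risk to be
# driven extremely low; at 0.1% per century the survival probability rounds to 0.
for risk in (1e-3, 1e-6, 1e-9):
    p = survival_probability(risk, 10**6)
    print(f"risk per century = {risk:g}: P(survive 10^6 centuries) = {p:.3g}")
```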
Thorstad goes on to argue that this weakens the case for longtermism generally, not just the case for extinction risk reduction, since the case for longtermism rests on the claim that expected future population is many times the current population, or at least could be given plausible levels of longtermist effort at reducing extinction risk. He also notes that if he can find multiple common mistakes in longtermist estimates of expected future population, we should suspect those estimates are off in other ways too. (At this point I would note that they could also be missing factors that bias their estimates of future population size down rather than up: Thorstad probably hasn't been looking for those with the same level of effort.)
I am interested in whether there has been any kind of "official" or quasi-official response to Thorstad on these points from leading EA orgs, or at least from leading individual longtermists. (I know there has been discussion in forum comments, but I mean something more substantial than that.) After all, 80k has now effectively gone all in on AI risk as its cause, partly on the basis of longtermist arguments (though I think they've always been a primarily longtermist org), and Open Phil also spends a lot of money on projects that are arguably only among the most effective uses of the money if longtermism is true. (It's possible, I guess, that AI safety work could be high expected value per $ just for saving current lives.) Thorstad used to work for the Global Priorities Institute, and I think it is great that they were prepared to employ someone to repeatedly and harshly critique the famous theory they are most associated with. But there's not much point in EA soliciting serious criticism if people then mostly ignore it.
We asked David about longtermists' responses to his work in the podcast episode we did with him. Here's the (rough, automatically generated) transcript of the relevant section, or you can listen to it here; it starts at ~33:50.
David: I think, to contextualize that, let me use the term I'm going to use in my book, namely a strategy of "shedding zeros". So longtermists say: look, the axiological case for longtermism is 10 or 15 orders of magnitude better than the case for competing short-termist interventions.

So, unless you are radically non-consequentialist, longtermism is going to win at the level of axiology. And I want to chip away at a lot of the zeros in those value estimates, and then maybe do some other deontic things too. And so if the longtermist is just, in one swoop, going to hand me five or ten or twenty zeros, I think there are two things to say. The first is that they might run out of zeros just there.

Five, 10, or 20 orders of magnitude is a lot. But the second is that this isn't the only time I'm going to ask them for some orders of magnitude back. And this thing that they do, which is correct, is that they point at every single argument I make and say, "I can afford to pay that cost, and that cost, and that cost." But the question is whether they can afford to pay them all together, and I think (at least, that's the line of argument in my book) that if we're really tossing orders of magnitude around that freely, we're probably going to run out of orders of magnitude quite quickly.
Leah: Got it. Okay. And I just want to follow up on the last thing you said. So has that been the response of the people who are writing on these issues? Like, do they read your work and say, yeah, I concede that?
David: Well, sometimes it's concessive, sometimes it's not, but almost always somebody raises their hand and says: David, couldn't I believe that and still be a longtermist? So I had to rewrite some of the demographics section in my paper. They said, look, aren't you uncertain about demographics?

Maybe there's a one-in-10-to-the-8th probability I'm right about demographics, so maybe I lose eight orders of magnitude, and the response there is: okay, maybe you do. And then they'll say about the time of perils: maybe there's a one-in-10-to-the-9th chance I'm right about the time of perils, maybe I lose nine orders of magnitude, and okay, you do.

Obviously we have a disagreement about how many orders of magnitude are lost each time, but it's a response I see every time I give a paper, and I'd like people to see that it's a response that works in isolation but can't just keep being repeated.
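For what it's worth, the "shedding zeros" bookkeeping David describes is easy to make concrete. Below is a small illustrative sketch using only the rough figures mentioned in the transcript (a claimed advantage of 10-15 orders of magnitude, and concessions of eight and nine orders of magnitude); the point is just that concessions which look affordable one at a time can exhaust the claimed advantage when combined.

```python
# Illustrative bookkeeping only: the numbers come from the rough transcript above,
# not from anyone's considered estimates.

claimed_advantage_oom = 15       # "10 orders of magnitude or 15 orders of magnitude better"

concessions_oom = {
    "demographics": 8,           # "maybe I lose eight orders of magnitude"
    "time of perils": 9,         # "maybe I lose nine orders of magnitude"
}

remaining = claimed_advantage_oom - sum(concessions_oom.values())
print(f"claimed advantage: 10^{claimed_advantage_oom}")
print(f"total conceded:    10^{sum(concessions_oom.values())}")
print(f"remaining edge:    10^{remaining}")  # 10^-2: gone after just two concessions
```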