Crosspost from my blog. The blog version also had a few jokes that I felt were a bit too spicy for the EA forum.
It is a law of nature that every article a journalist writes that’s critical of effective altruism will inevitably contain at least one sneering paragraph about Longtermism, most likely calling it eugenics. This has been proved scientifically beyond 8 sigma. It is more certain than death and taxes. Rumor has it that in an unedited draft of his Meditations, Descartes used this fact as the inerrant foundation of all of the rest of his beliefs.
But such articles generally overlook a crucial fact: Longtermism is obviously correct.
We should prioritize the far future of humanity over the next few decades just as you should prioritize the next 40 years of your life over the next seven seconds (say, if you were considering taking an exhilarating jump off a tall building).
Now, there are two different versions of Longtermism. The first is called Weak Longtermism. It just holds that we should be doing a lot more to make the future go well. In a world facing dangerous emerging AI, risky biotechnology, and the possibility of nuclear war, such a thesis seems incredibly modest and reasonable.
This is especially so because, as Thornley and Shulman have demonstrated, even ignoring Longtermist considerations, we should be doing way more to avert existential catastrophe. Even if you think the odds of an existential catastrophe in the next century are only 1 in 1,000, then with roughly 8 billion people alive, an existential catastrophe still kills around 8 million people in expectation. We should spend more than an infinitesimal fraction of the budget stopping problems that will, even on very conservative assumptions, kill in expectation around the same number of people as the Holocaust.
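To make the arithmetic explicit, here's a minimal back-of-the-envelope sketch (the ~8 billion population figure and the 1-in-1,000 risk are just the illustrative assumptions from above):

```python
# Back-of-the-envelope expected-death calculation, using the conservative
# assumptions above: ~8 billion people alive, 1-in-1,000 chance of an
# existential catastrophe over the next century.
world_population = 8_000_000_000
p_catastrophe = 1 / 1_000

expected_deaths = p_catastrophe * world_population
print(f"Expected deaths this century: {expected_deaths:,.0f}")
# -> 8,000,000
```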
The second kind of Longtermism is called Strong Longtermism. It holds that making sure the future goes well is considerably more important than averting present problems. If undergoing major sacrifices—say, reducing present welfare by half—would make the life of every future person 1% better, Strong Longtermists would say that would be an improvement.
Now, Strong Longtermism is often treated as a deeply shocking and dreadful thesis. The standard journalistic treatment is to provide an uncharitable summary of it (making no note of crucial distinctions), quote some Longtermist saying something that sounds bad out of context, and then point and sputter at it.
But really, I think the thesis is very modest! We should all accept it. It’s almost trivial.
Presumably, the time at which a person exists does not much affect the strength of our moral reasons to help them. If you could press a button that would help a random stranger in five years, that would not be much better than pressing a button that would help a random stranger in seventy years.
Even if you think we have stronger duties to help those who are currently alive, at the very least those reasons should not be orders of magnitude stronger. Helping a person five years from now isn’t, say, 100,000 times better than helping a person 500 years from now. It would be better to press a button that helped 100,000 people in 1,000 years than one that would help one person in five years.
So we should all accept:
1. Helping some particular future person is at most a few orders of magnitude less important than helping some particular present person.
But we should also accept:
2. Nearly all the people who will ever exist, in expectation, will live in the distant future.
After all, the future could go on for millions or billions of years. On one of Bostrom's estimates, a plausible upper bound for the number of future people is 10^52, and the true number could be far higher if our current physics turns out to be incomplete. Surely the expected number of people who will exist in the future dwarfs the number alive today. There's a realistic possibility that we'll spread across the galaxy and bring about extreme quantities of value.
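Just to put that estimate in perspective, here's a quick sketch of the ratio (treating ~8 billion as the present population; the exact figure barely matters):

```python
import math

# Compare Bostrom's upper-bound estimate of future people to the number
# alive today (illustrative assumption: ~8 billion present people).
future_people_upper_bound = 1e52
present_people = 8e9

ratio = future_people_upper_bound / present_people
print(f"Future people per present person: about 10^{math.log10(ratio):.0f}")
# -> about 10^42
```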
Sometimes people reject premise 1 by appealing to a discount rate. They say the value we place on the future should decrease at some rate: perhaps for each year further into the future, we should value the impacts of our actions 3% less. But this is problematic: compounded over history, it implies that people 5,000 years ago were on the order of 10^64 times more important than present people. It implies that Pharaoh Narmer eating a cake was more important than all the welfare that presently exists in the world. I think that's false! If Narmer eating a cake required consigning every modern person to 1,000 years of torture, I think it would have been wrong for him to eat the cake.
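To check the figure in the text, here's a quick sketch of what a 3% annual discount rate compounds to over 5,000 years:

```python
import math

# Under pure time discounting at 3% per year, the relative weight of a person
# 5,000 years ago versus a person today is (1 + r)^t.
annual_discount_rate = 0.03
years = 5_000

weight_ratio = (1 + annual_discount_rate) ** years
print(f"Weight ratio: about 10^{math.log10(weight_ratio):.0f}")
# -> about 10^64
```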
But from these premises it simply follows that the future is overwhelmingly more important than the present. If future people matter roughly as much as present people (or at least within a few orders of magnitude), and there are vastly more of them, then the future is overwhelmingly more important.
As an analogy: suppose that we learned that there were lots of people in caves. We couldn’t calculate their exact numbers, but there could very easily be billions of times more people in caves than people not in caves. The people in caves were just like us in relevant respects—equally able to suffer, experience joy, and so on.
Now imagine some philosophers came along and adopted a doctrine called Strong People-In-Caveism, according to which what happens to the potentially quintillions of people in caves is a lot more important than what happens to the people outside the caves. This wouldn't be an extreme claim; it would be downright modest! It might seem very inconvenient to the people outside the cave, but sometimes morality is inconvenient. Sorry, I don't make the rules.
It might be easy fodder for journalists, who would write things like, "by deprioritizing the issues of people outside the cave, Strong People-In-Caveists imply that what happens outside the cave matters less than what happens to the people in caves." But this would be a ridiculous objection! Merely noting that prioritizing A over B deprioritizes B relative to A tells us nothing about the merits of prioritizing A over B. It's no surprise that if you and everyone you ever talk to is part of group B, then it will sound counterintuitive that you should prioritize the interests of group A over group B!
I regard Strong Longtermism as about as trivial as Strong People-In-Caveism. Once you realize that future people matter and that there are far more of them than present people, it seems their collective interests obviously dwarf our own.
Now, once you think that Strong Longtermism is right, there's a further question about which actions you should take. Some people think we're entirely clueless, so that we haven't the faintest clue about which actions will benefit the far future. I disagree with this position for reasons Richard Y Chappell has explained very persuasively. It would be awfully convenient if, after learning that the far future has nearly all the expected value in the world, it turned out that this had no significant normative implications. If you think humans existing is a good thing (which I do), then just trying to reduce the risk of extinction is a pretty safe bet.
If you're totally clueless about whether humans existing is a good thing, it seems like spreading important values is a good idea. Directing the future to be more in line with the values you think are correct is likely to improve it by your own lights. Doing research into preventing S-risks (risks of astronomical suffering) is a good bet on everyone's values. So worst case scenario, probably fund that sort of stuff. It's hard to imagine a scenario where research into preventing unimaginably nightmarish dystopias of extreme future suffering turns out to be a bad thing.
You might also think that we should discount very low risks. On this view, even though the far future is vastly more important than the present, we shouldn't do much to steer it, because the odds of any individual action positively affecting the far future are quite low. Efforts to reduce nuclear risk, for instance, are unlikely to have any effect.
I think this argument is very dubious. For one, as Petra Kosonen (author of a very short-lived but absolutely excellent substack) has argued, it's implausible that you should always discount low risks, even in collective action cases. If every person on Earth could press a button that raised the risk of extinction by one in seven billion but produced some benefit, pressing the button would be wrong. Your low-risk action doesn't exist in a vacuum! So if lots of people acting collectively to reduce some risk would substantially affect its probability, you shouldn't ignore that risk. And that is absolutely true of the risks Longtermists address: if lots of people act to reduce existential risks, they can plausibly make a major difference to the probability of existential catastrophe.
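To see why rounding tiny probabilities to zero breaks down in collective action cases, here's a toy sketch of the button example (the one-in-seven-billion figure comes from the example above; treating the presses as independent is my simplifying assumption):

```python
# Toy model of the button case: each of ~7 billion people can press a button
# that adds a one-in-seven-billion chance of extinction. Each press looks
# negligible on its own; together they are anything but.
n_pressers = 7_000_000_000
risk_per_press = 1 / 7_000_000_000

# If the risks simply added up, everyone pressing would guarantee extinction.
additive_total = n_pressers * risk_per_press
print(f"Naive additive total risk: {additive_total:.0%}")  # -> 100%

# Even modeling each press as an independent extinction chance, the combined
# probability is 1 - (1 - p)^n, roughly 1 - 1/e.
independent_total = 1 - (1 - risk_per_press) ** n_pressers
print(f"Total risk with independent presses: {independent_total:.0%}")  # -> ~63%
```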
For another, as Hilary Greaves has argued, the standard kinds of risk aversion actually strengthen the case for caring about existential risk mitigation. If you're particularly worried about risks, then you should be especially worried about either the end of the world or potentially extremely horrendous scenarios.
For a third, I think all of our actions are extremely risky in the sense that we have basically no idea whether they'll turn out for better or for worse. Any time you drive to the store, you slightly change when people have sex, which changes which children are conceived; those children will take countless actions that change the identities of still more future people. Thus, as a result of decisions as innocuous as driving to the store, every single person in the world a few centuries from now will be different from who they'd have been had you never existed. None of our actions are risk-free: even the best actions have only around a 50% chance of making the world better. In a world where we are this clueless, it may, counterintuitively, be that aiming at scenarios with very high upsides maximizes one's chance of having a net positive impact on the world.
For a fourth, we should have some non-zero credence in the many-worlds interpretation of quantum mechanics, according to which the number of universes is constantly increasing, by at least 10^10^18 new universes per second. In such a world, all your risky endeavors will pay off in some of those universes. So it seems that actions we think of as risky really have a non-trivial chance of paying off a very large number of times.
But again, this is all a side-show. It does not matter to the thesis of Strong Longtermism. The thesis of Strong Longtermism is just that the future is overwhelmingly more important than the present. Exactly what practical implications this should have is a separate issue.
Lastly, people often object to Strong Longtermism by holding that it's not good for a person to be created with a good life: had they not been created, they would never have existed, and so would never have missed out on anything. How could anything of value have been lost?
Now, I think this view of population ethics is dead wrong. I even have a published paper about this. In short, I think there are totally decisive, knockdown arguments that it's good to bring a person into being, provided they will live a good life.
But even if you disagree with me, I think you should still be a Strong Longtermist. Because Strong Longtermism doesn’t say anything intrinsically about how we should steer the far future. It just says the far future is a lot more important than the present. So even if you don’t think it’s important that there are loads of happy people in the far future, steering it so that it goes well is still important. Funding research to prevent risks of extreme suffering is still valuable.
Strong Longtermism thus strikes me as pretty trivial. But once you conclude that, contrary to our naive intuitions, nearly all that matters lies in the distant future, I think you should take a very serious look at actions to make the far future go well. It would be awfully convenient if the optimal decision procedure involved ignoring nearly all of what matters.
In short, most of the objections to Strong Longtermism are really objections to claims related to Strong Longtermism that you don't have to believe in order to be a Strong Longtermist. They're mostly objections to the idea that it's good to create happy people. But that's a different question from whether Strong Longtermism is correct. When one thinks seriously about the thesis of Strong Longtermism, it turns out to be far more trivial than one might suspect. In practice, every plausible view will support Longtermist actions.
(Now that's what I call a STRONG Longtermist!)
(Standard offer applies: if you donate, in response to this article, at least 30 dollars a month to the Long-Term Future Fund or the Center on Long-Term Risk, you can have a free paid subscription. I just gave them 50 dollars! Also, seriously consider taking one of the careers that 80,000 Hours recommends for making the future go well.)
I'd be interested in your thoughts on an argument I once tried sharing on the forum here: https://forum.effectivealtruism.org/posts/RCmgGp2nmoWFcRwdn/should-strong-longtermists-really-want-to-minimize
In summary: It seems to me that strong longtermists are committed to adopting beliefs which would allow for large futures, over beliefs which are most likely to be correct, at least to the extent that these beliefs influence their actions (they should act as if they believe these unlikely things, even though privately they may not).
This seems like a strong reductio ad absurdum argument against strong longtermism to me.