Lundgren and Kudlek (2024), in their recent article, discuss several challenges to longtermism as it currently stands. Below is a summary of these challenges.
The Far Future is Irrelevant for Moral Decision-Making
- Longtermists have not convincingly shown that taking the far future into account impacts decision-making in practice. In the examples given, the moral decisions remain the same even if the far future is disregarded.
- Example: Slavery
We didn’t need to consider the far future to recognize that abolishing slavery was morally right, as its benefits were evident in the short term.
- Example: Existential Risk
The urgency of addressing existential risks does not depend on the far future; the importance of avoiding these risks is clear when focusing on the present and the next few generations.
- As a result, the far future has little relevance to most moral decisions. Policies that are good for the far future are often also good for the present and can be justified based on their benefits to the near future.
The Far Future Must Conflict with the Near Future to be Morally Relevant
- For the far future to be a significant factor in moral decisions, it must lead to different decisions compared to those made when only considering the near future. If the same decisions are made in both cases, there is no need to consider the far future.
- Given the vastness of the future compared to the present, focusing on the far future risks harming the present. Resources spent on the far future could instead be used to address immediate problems like health crises, hunger, and conflict.
We Are Not in a Position to Predict the Best Actions for the Far Future
- There are two main reasons for this:
- Unpredictability of Future Effects
It's nearly impossible to predict how our actions today will influence the far future. For instance, antibiotics once seemed like the greatest medical discovery, yet they later produced the unforeseen problem of antibiotic resistance. Estimating the long-term effects of medical research in 10,000 years—or even millions of years—is beyond our capacity.
- Unpredictability of Future Values
Technological advancements significantly change moral values and social norms over time. For example, contraceptives contributed to shifts in values regarding sexual autonomy during the sexual revolution. We cannot reliably predict what future generations will value.
Implementing Longtermism is Practically Implausible
- Human biases and limitations in moral thinking lead to distorted and unreliable judgments, making it difficult to meaningfully care about the far future.
- Our moral concern is naturally limited to those close to us, and our capacity for empathy and care is finite. Even if we care about future generations in principle, our resources are constrained.
- Focusing on the far future comes at a cost to addressing present-day needs and crises, such as health issues and poverty.
- Implementing longtermism would require radical changes to human psychology or to social institutions, which is a major practical hurdle.
I'm interested to hear your opinions on these challenges and how they relate to understanding longtermism.
Longtermism should certainly prioritise the best persistent state possible. If we could lock in a state of the world with the maximum number of beings at maximum wellbeing, of course I would do that, but we probably can't.
Ultimately, the great value of a longtermist intervention does come from comparing it to the state of the world that would have occurred otherwise. If we can lock in value 5 instead of locking in value 3, that is better than locking in value 9 instead of value 8.
I think we just have different intuitions here. The future will be different, but I think we can make reasonable guesses about what will be good. For example, I don't have a problem with a claim that a future where people care about the wellbeing of sentient creatures is likely to be better than one where they don't. If so, expanding our moral circle seems important in expectation. If you're asking "why" - it's because people who care about the wellbeing of sentient creatures are more likely to treat them well and therefore more likely to promote happiness over suffering. They are also therefore less likely to lock-in suffering. And fundamentally I think happiness is inherently good and suffering inherently bad and this is independent of what future people think. I don't have a problem with reasoning like this, but if you do then I just think our intuitions diverge too much here.
Maybe fair, but if that's the case I think we need to find those interventions that are not very ambiguous. Moral circle expansion seems like one that is very hard to argue against. (I know I'm changing my interventions - that doesn't mean I no longer think the previous ones are good; I'm just trying to see how far your scepticism goes.)
Considering this particular example - if we spread out to the stars, then x-risk from asteroids drops considerably, as no one asteroid can kill us all. That is true. But the value of the asteroid-reduction intervention comes from actually getting us to that point in the first place. If we hadn't reduced the risk from asteroids and had gone extinct, we'd have value 0 for the rest of time. If we can avert that and become existentially secure, then we have non-zero value for the rest of time. So yes, we would indeed have done an intervention whose impact endures for the rest of time. X-risk reduction interventions are trying to get us to a point of existential security. If they do that, their work is done.