Lundgren and Kudlek (2024), in their recent article, discuss several challenges to longtermism as it currently stands. Below is a summary of these challenges.
The Far Future is Irrelevant for Moral Decision-Making
- Longtermists have not convincingly shown that taking the far future into account impacts decision-making in practice. In the examples given, the moral decisions remain the same even if the far future is disregarded.
- Example: Slavery
We didn’t need to consider the far future to recognize that abolishing slavery was morally right, as its benefits were evident in the short term.
- Example: Existential Risk
The urgency of addressing existential risks does not depend on the far future; the importance of avoiding these risks is clear when focusing on the present and the next few generations.
- As a result, the far future has little relevance to most moral decisions. Policies that are good for the far future are often also good for the present and can be justified based on their benefits to the near future.
The Far Future Must Conflict with the Near Future to be Morally Relevant
- For the far future to be a significant factor in moral decisions, it must lead to different decisions compared to those made when only considering the near future. If the same decisions are made in both cases, there is no need to consider the far future.
- Given the vastness of the future compared to the present, focusing on the far future risks harming the present. Resources spent on the far future could instead be used to address immediate problems like health crises, hunger, and conflict.
We Are Not in a Position to Predict the Best Actions for the Far Future
- There are two main reasons for this:
- Unpredictability of Future Effects
It's nearly impossible to predict how our actions today will influence the far future. For instance, antibiotics once seemed like the greatest medical discovery, yet estimating the long-term effects of medical research over 10,000 years, or even millions of years, is beyond our capacity.
- Unpredictability of Future Values
Technological advancements significantly change moral values and social norms over time. For example, contraceptives contributed to shifts in values regarding sexual autonomy during the sexual revolution. We cannot reliably predict what future generations will value.
Implementing Longtermism is Practically Implausible
- Human biases and limitations in moral thinking lead to distorted and unreliable judgments, making it difficult to meaningfully care about the far future.
- Our moral concern is naturally limited to those close to us, and our capacity for empathy and care is finite. Even if we care about future generations in principle, our resources are constrained.
- Focusing on the far future comes at a cost to addressing present-day needs and crises, such as health issues and poverty.
- Implementing longtermism would require radical changes to human psychology or to social institutions, which is a major practical hurdle.
I'm interested to hear your opinions on these challenges and how they relate to understanding longtermism.
I think you have misunderstood this. An option can be the best thing you can do because it averts a terrible outcome, as opposed to achieving the best possible outcome. For example, if we are at risk of entering a hellscape that will last for eternity and you can press a button to simply stop that from happening, that seems to me like it would be the single best thing anyone can do (overall or for the far future). The end result, however, would just be a continuation of the status quo. This is the concept of counterfactual impact: we compare the world after our intervention to the world that would have happened in the absence of the intervention, and the difference in value is essentially how good the intervention was (see the schematic below). Indeed, a lot of longtermists simply want to avert s-risks (risks of astronomical suffering).
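To put that counterfactual comparison in schematic form (my own notation, purely illustrative, not taken from the paper): let $w_a$ be the world in which intervention $a$ is taken, $w_0$ the world in which it is not, and $V$ a value function over worlds. Then

$$\mathrm{Impact}(a) = V(w_a) - V(w_0).$$

On this reading, pressing the button has enormous positive impact even though $w_a$ just looks like the status quo, because $V(w_0)$, the value of the eternal hellscape, is astronomically negative.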
I don't understand some of what you're saying, including on ambiguity. I don't find it problematic to say that the US winning the race to superintelligence is better in expectation than China winning. China has authoritarian values, so if it controls the world using superintelligence it is more likely to control it according to authoritarian values, which means less freedom, and freedom is important for wellbeing, and so on. I think we can say, if we assume persistence, that future people would more likely be thankful that the US won the race to superintelligence than that China did. I am extrapolating that future people will also value freedom. Could I be wrong? Sure, but we are acting based on expectation.
I would say that your doubts about persistence are the best counter to longtermism. The claim that superintelligence may allow a state to control the world for a very long time is perhaps a more controversial one, but not one I am willing to discount. If you want to engage with object-level arguments on this point, check out this document: Artificial General Intelligence and Lock-In.