Lundgren and Kudlek (2024), in their recent article, discuss several challenges to longtermism as it currently stands. Below is a summary of these challenges.
The Far Future is Irrelevant for Moral Decision-Making
- Longtermists have not convincingly shown that taking the far future into account impacts decision-making in practice. In the examples given, the moral decisions remain the same even if the far future is disregarded.
- Example: Slavery
We didn’t need to consider the far future to recognize that abolishing slavery was morally right, as its benefits were evident in the short term.
- Example: Existential Risk
The urgency of addressing existential risks does not depend on the far future; the importance of avoiding these risks is clear when focusing on the present and the next few generations.
- As a result, the far future has little relevance to most moral decisions. Policies that are good for the far future are often also good for the present and can be justified based on their benefits to the near future.
The Far Future Must Conflict with the Near Future to be Morally Relevant
- For the far future to be a significant factor in moral decisions, it must lead to different decisions compared to those made when only considering the near future. If the same decisions are made in both cases, there is no need to consider the far future.
- Given the vastness of the future compared to the present, focusing on the far future risks harming the present. Resources spent on the far future could instead be used to address immediate problems like health crises, hunger, and conflict.
We Are Not in a Position to Predict the Best Actions for the Far Future
- There are two main reasons for this:
- Unpredictability of Future Effects
It's nearly impossible to predict how our actions today will influence the far future. For instance, antibiotics once seemed like the greatest medical discovery, yet their long-term consequences could not have been foreseen; estimating the effects of medical research 10,000 years from now, let alone millions of years, is beyond our capacity.
- Unpredictability of Future Values
Technological advancements significantly change moral values and social norms over time. For example, contraceptives contributed to shifts in values regarding sexual autonomy during the sexual revolution. We cannot reliably predict what future generations will value.
Implementing Longtermism is Practically Implausible
- Human biases and limitations in moral thinking lead to distorted and unreliable judgments, making it difficult to meaningfully care about the far future.
- Our moral concern is naturally limited to those close to us, and our capacity for empathy and care is finite. Even if we care about future generations in principle, our resources are constrained.
- Focusing on the far future comes at a cost to addressing present-day needs and crises, such as health issues and poverty.
- Implementing longtermism would require radical changes to human psychology or to social institutions, which is a major practical hurdle.
I'm interested to hear your opinions on these challenges and how they relate to understanding longtermism.
For now I'm finding the gaps in between useful for reflecting, thanks though. Perhaps in the future!
The world will be radically different, yet you feel confident in predicting that some element of this radically different world will remain constant for a very long time, and that, this being so, moving towards this state is one of the best options for the far future.
I think you may be departing from strong longtermism. The first proposition for ASL is "Every option that is near-best overall is near-best for the far future." We are talking about making decisions whose outcome is one of the best things we can do for the far future. It's not merely something that is better than something deemed terrible.
Perhaps I didn't explain the point about ambiguity well enough. Of all possible states S, there is some possible state X that is 'near-best', 'best-possible', 'close to best', what have you, for the far future. Call the 'near-best' state for the far future n-bX. There are microstates of n-bX that make it this 'near-best' state. Presumably you need to have some idea of what these microstates are in order to make predictions about what we can do today that will lead towards them.
Therefore, there must be something about the state of the US having dominance over the world, as opposed to China, that will presumably lead to the instantiation of some of these microstates of n-bX. Presumably, these beneficial microstates of n-bX don't involve a country called "the US" and a country called "China", and arguably lack the property of "dominance".
So there must be some other thing, state, or property, call it n-bP, whose long-term instantiation in the near-present world is linked to n-bX. So the questions are: what is n-bP? How is n-bP hypothesised to be linked to "US dominance.."? How is it hypothesised to be instantiated for a very long time? And how is it hypothesised to be linked to n-bX? The proposal is ambiguous on all of these questions.
We are not talking about what you would rather, we're talking about what the far future would rather. I get the sense that what you are really defending are ways to incrementally improve the world that are currently under-appreciated. I don't have an issue with that. What I am unconvinced by is how reference to the lives of beings quadrillions of years into the future can meaningfully guide our decisions.