Lundgren and Kudlek (2024), in their recent article, discuss several challenges to longtermism as it currently stands. Below is a summary of these challenges.
The Far Future is Irrelevant for Moral Decision-Making
- Longtermists have not convincingly shown that taking the far future into account impacts decision-making in practice. In the examples given, the moral decisions remain the same even if the far future is disregarded.
- Example: Slavery
We didn’t need to consider the far future to recognize that abolishing slavery was morally right, as its benefits were evident in the short term.
- Example: Existential Risk
The urgency of addressing existential risks does not depend on the far future; the importance of avoiding these risks is clear when focusing on the present and the next few generations.
- As a result, the far future has little relevance to most moral decisions. Policies that are good for the far future are often also good for the present and can be justified based on their benefits to the near future.
The Far Future Must Conflict with the Near Future to be Morally Relevant
- For the far future to be a significant factor in moral decisions, it must lead to different decisions compared to those made when only considering the near future. If the same decisions are made in both cases, there is no need to consider the far future.
- Given the vastness of the future compared to the present, focusing on the far future risks harming the present. Resources spent on the far future could instead be used to address immediate problems like health crises, hunger, and conflict.
We Are Not in a Position to Predict the Best Actions for the Far Future
- There are two main reasons for this:
- Unpredictability of Future Effects
It's nearly impossible to predict how our actions today will influence the far future. For instance, antibiotics once seemed like the greatest medical discovery, yet we could not have foreseen their long-term consequences; estimating the effects of medical research over 10,000 years, or even millions of years, is beyond our capacity.
- Unpredictability of Future Values
Technological advancements significantly change moral values and social norms over time. For example, contraceptives contributed to shifts in values regarding sexual autonomy during the sexual revolution. We cannot reliably predict what future generations will value.
Implementing Longtermism is Practically Implausible
- Human biases and limitations in moral thinking lead to distorted and unreliable judgments, making it difficult to meaningfully care about the far future.
- Our moral concern is naturally limited to those close to us, and our capacity for empathy and care is finite. Even if we care about future generations in principle, our resources are constrained.
- Focusing on the far future comes at a cost to addressing present-day needs and crises, such as health issues and poverty.
- Implementing longtermism would require radical changes to human psychology or to social institutions, which is a major practical hurdle.
I'm interested to hear your opinions on these challenges and how they relate to understanding longtermism.
I don't think I have explained this well enough. I'd be happy to have a call sometime if you want as that might be more efficient than this back and forth. But I'll reply for now.
No. This is not what I'm saying.
The key thing is that there are two attractor states that differ in value, and you can affect whether you end up in one or the other. The better one does not have to be the best possible state of the world; it just has to be better than the other attractor state.
So if you achieve the better one, you persist at that higher expected value for a very long time, compared to the counterfactual of persisting at the lower value. So even if the difference in value at any given time is fairly small, the fact that this difference persists for a very long time is what gives you the very large counterfactual impact.
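To put rough numbers on the persistence argument (this is just an illustrative sketch of the reasoning, with made-up symbols, not anything from the paper or the original comment): suppose the two attractor states deliver average values $v_A > v_B$ per unit of time, whichever state we end up in persists for a duration $T$, and our action changes which state we land in with probability $p$. Then the expected counterfactual impact of steering toward the better state is roughly

$$\Delta \approx p\,(v_A - v_B)\,T$$

so even a small per-period gap $v_A - v_B$, and a small $p$, can yield a very large $\Delta$ when $T$ is long. That is the sense in which persistence, rather than the size of the improvement at any given time, drives the claimed impact.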
Not necessarily. To use the superintelligence example, the world will look radically different from how it does now whether the US or China ends up with superintelligence.
As I said earlier, it doesn't necessarily lead to one of the best futures. But to cover the persistence point: this is a potentially fair pushback. Some people doubt the persistence of longtermist interventions/attractor states, which would then dampen the value of longtermist interventions. We can still debate the persistence of different states of the world, though, and many think that a government controlling superintelligence would become very powerful and so be able to persist for a long time (exactly how long I don't know, but a "long time" is all we really need for it to become an important question).
Yeah, I guess in this case I'm talking about the US having dominance over the world as opposed to China having dominance over the world. Remember, I'm just saying one attractor state is better than the other in expectation, not that either of them is especially great. I think it's fair to say I'd rather the US control the world than China, given the different values the two countries hold. Leopold Aschenbrenner talks more about this here. Of course I can't predict the future precisely, but we can talk about expectations.