Hello, it’s me again! I’ve been happily reading through the EA introductory concepts, and I would like to have my say on “Longtermism”, which I read about on 80,000 Hours. I have also read Against the Social Discount Rate, which argues that the discount rate for future people should be zero.
Epistemic status: I read some articles and papers on the 80,000 Hours and effective altruism websites, and thought about whether they made sense. ~12 hours of effort.
Summary
- Longtermism assumes that thoughtful actions now can have a big positive impact in the future. At worst, they could fail and have no effect.
- But this is probably false.
- In the past, all events with big positive impacts on the future occurred because people wanted to solve a problem or improve their circumstances, not because of longtermism.
- The causes longtermism points to can all be tackled without it.
- Longtermism is costly and has doubtful benefit. Therefore, it should be deprioritized.
Can we be effective?
Derek Parfit, who co-wrote Against the Social Discount Rate, said the following:
“Why should costs and benefits receive less weight, simply because they are further in the future? When the future comes, these benefits and costs will be no less real. Imagine finding out that you, having just reached your twenty-first birthday, must soon die of cancer because one evening Cleopatra wanted an extra helping of dessert. How could this be justified?”
This has been quoted several times, even though it’s an absurd argument on its face. Imagine the world where Cleopatra skipped dessert. How does this cure cancer? I can think of two possibilities.
1. Cleopatra spends the extra time saved by skipping dessert, and invents biology, chemistry, biochemistry, oncology, and an efficacious cancer treatment. I assign this close to zero probability.
2. By skipping dessert, the saved resources cause a chain reaction that makes Egypt, at the time a client state of the Roman Republic, significantly stronger. I think this is quite unlikely.
Did you see the rhetorical sleight of hand? Parfit claimed that skipping dessert leads to a cure for cancer. We are supposed to take as axiomatic that a small sacrifice now will have benefits in the future. But in fact, we cannot assume this.
Edit: I learned in the comments that I misunderstood this example – it was a hypothetical meant to show that a time discount rate is invalid. I agree a time discount rate is invalid, so I don’t have anything against the example in its context. Sorry about my misunderstanding!
***
Most of the 80,000 Hours article attempts to persuade the reader that longtermism is morally good by explaining the reasons we should consider future people. But the part about how we are able to benefit future people is very short. Here is the entire segment, excerpted:

We can “impact” the future. The implicit assumption – so obvious that it’s not even stated – is that, sure, maybe we don’t know exactly how we can be the most effective. But if we put our minds to it, surely we could come up with interventions with results that range from zero (in the worst case) to positive.
The road to hell is paved with good intentions
Would this have been true in the past? I imagined what a high-conviction longtermist would do at various points in history. Our longtermist would be an elite in the society of the time, someone with the ability to impact things. Let’s call him “Steve”. Steve adopts the values of the time he travels to, just as a longtermist in 2022 adopts the values of 2022 when deciding what benefits future people.
1960 AD
The Cold War has started, and the specter of nuclear winter is terrifying. Steve is worried about nuclear existential risk, but realizes that he has no hope of getting the United States and the Soviet Union to disarm. Instead, he focuses on what could impact people in the far future. The answer is immediately obvious: nuclear meltdowns and radioactive nuclear waste. Meltdowns can contaminate land for tens of thousands of years, and radioactive waste can similarly be dangerous for thousands of years. Therefore, Steve uses his influence to obstruct and delay the construction of nuclear power plants. Future generations will be spared the blight of thousands of nuclear power plants everywhere.
1100 AD
Steve is a longtermist in Europe. Thinking for the benefit of future humans, he realizes that he must save as many future souls as possible, so that they are able to enter heaven and enjoy an abundance of utils. What better way to do this than to reclaim the Holy Land from Islamic rule? Some blood may be shed and lives may be lost, but the expected value is strongly positive. Therefore, Steve uses his power to start the Crusades, saving many souls over the next 200 years.
50 BC
Steve is now living in Egypt. Thinking of saving future people from cancer, he convinces Cleopatra to skip dessert. Somehow, this causes Egypt to enter a golden age, and Roman rule over Europe lasts a century longer than it would have.
Unfortunately, this time Steve messed up. He forgot that the Industrial Revolution, the starting point for a massive upgrade in humanity’s living standards and a prerequisite for cancer cures, happened due to a confluence of factors in the United Kingdom (relatively high bargaining power of labor, the Magna Carta, a balance of power between nobles and the Crown, the strength of the Catholic Church). Roman domination was incompatible with all of those things, and its increased longevity actually delayed the cure for cancer by a century!
Edit: the premise of this example is invalid, but I think the theme of the second paragraph – that future outcomes are hard to predict – is still plausible if we accept the first paragraph’s hypothetical.
***
I believe these examples show that it’s really unlikely that a longtermist at any point in history would have a good sense of how to benefit future people. Well-intentioned interventions could just as likely turn out to be harmful. I don’t see any reason why the current moment in time would be different. The world is a complex system, and trying to affect the far future state of a complex system is a fool’s errand.
In the past, what sorts of events have benefitted future humans?
Great question. The articles I read generally point to economic growth as the main cause of prosperity. Economic growth is said to increase due to technological innovation such as discoveries, social innovation such as more effective forms of government, and a larger population, which has a multiplier effect.
Let’s look at a few major breakpoints and see whether longtermism was a significant factor. Some examples are the discovery of fire, sedentary agriculture, the invention of the wheel, the invention of writing, the invention of the printing press, and the Industrial Revolution.
- Fire was discovered and controlled around a million years ago. While we have no records of the discoverers’ motivations, it’s likely they were not thinking of the far future. Most likely they wanted fire because it provided warmth and could be used to cook food, both of which were a big help for survival.
- Sedentary agriculture developed around 10,000 BC because it produced a larger amount of food. The purpose of agriculture was to gain an edge against competing tribes: the higher population density it supported allowed agricultural tribes to defeat hunter-gatherers. The benefits to future humans were not a consideration.
- The wheel was invented around 4000 BC because it enabled transport of items with less effort. It is unlikely that its inventors were motivated by the utils of far-future humans, yet the wheel undoubtedly contributed to future prosperity.
- Writing was first invented around 3400 BC in Mesopotamia. Historians believe that people invented writing to keep count of assets, to record information, and to communicate with people at greater distances. Here, too, it is unlikely that they thought of future humans – they liked writing for its immediate benefits.
- The printing press was invented around 1440 AD. Wikipedia says that “The sharp rise of medieval learning and literacy amongst the middle class led to an increased demand for books which the time-consuming hand-copying method fell far short of accommodating.” The resulting Printing Revolution, which fed the literacy and education flywheels, arose because “the entrepreneurial spirit of emerging capitalism increasingly made its impact on medieval modes of production, fostering economic thinking and improving the efficiency of traditional work processes.” It was not because of longtermism.
- The Industrial Revolution, from roughly 1760 to 1840, was enormously good for humanity overall. It came about because people wanted to become richer and more powerful. Longtermism is almost never mentioned as a reason for it.
In summary, it looks as though most advances that have benefitted the future came about because people had a problem they wanted to solve, or wanted to increase the immediate benefits to themselves.
We can achieve longtermism without longtermism
There are examples of people taking actions that look like they require a longtermist mindset to make sense. For example:
- An Indian man planting 1,360 acres of trees on barren land, turning it into a forest
- A church in Spain has been under construction for 140 years (intentionally – not due to red tape) and is expected to need at least 10 more years to finish
- The United States and the Soviet Union built spaceships to explore space, with no reward except for the dream of humanity heading to the stars
- Energy too cheap to meter through thousands of nuclear power plants (banned by Steve)
But note that an explanation that does not include longtermism is available for all of these cases:
- Trees only take several years to grow, and so the man could enjoy the fruits of his labor within his lifetime
- The act of building the church itself became a focal point and a tourist attraction
- The space race happened because of geopolitical competition, not longtermism
Longtermism is also not required for many popular causes commonly associated with it. Taking existential risks as an example:
- Pandemic risk prevention can be justified on economic grounds or humanitarian grounds, as it pretty obviously affects current humans; we don’t need longtermism to justify working on this
- AI risk, within the timelines proposed by knowledgeable researchers, will impact most people alive today within their lifetimes, or their children’s
- Preventing nuclear war can similarly be justified without longtermism, which we know because it has been for many decades already
Conclusion
The main point is that intervening for long-term reasons is not productive, because we cannot assume that interventions are positive. Historically, interventions based on “let’s think long term”, instead of solving an immediate problem, have tended to be negative or negligible in effect.
Additionally, longtermism was not a motivating factor behind previous increases in prosperity. Nor is it necessary for tackling most current cause areas, such as existential risk. Longtermism is also costly, because it reduces popular support for effective altruism through “crowding out” and “weirdness” effects.
Why do we think that longtermism, now, will have a positive effect and will be a motivating factor?
If it does not serve any useful purpose, then why focus on longtermism?
***

I'm quite happy that you are thinking critically about what you are reading! I don't think you wrote a perfect criticism (see below), but taking the time to write a criticism and post it to a public venue is not an easy step. EA always needs people who are willing and eager to probe its ethical foundations. Below I address some of your specific points, mostly in a critical way. I do this not because I think your criticism is bad (though I do disagree with a lot of it), but because I think it can be quite useful to engage with newer people who take the time to write reasonably good reactions to something they've read. Hopefully, what I say below is somewhat useful for understanding the reasons for longtermism and what I see as some flaws in your argument. I would love for you to reply with any critiques of my response.
> Imagine the world where Cleopatra skipped dessert. How does this cure cancer?

It doesn't, and that's not Parfit's point. Parfit's point is that if one were to employ a discount rate, Cleopatra's dessert would matter more than nearly anything today. Since (he claims) this is clearly wrong, there is something clearly wrong with a discount rate.
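To make the arithmetic concrete (the 5% rate and the ~2,000-year gap are my own illustrative numbers, not Parfit's): under a constant annual discount rate $\rho$, a harm occurring $t$ years in the future is weighted by a factor $(1+\rho)^{-t}$. With $\rho = 0.05$ and $t \approx 2000$ years between Cleopatra and the cancer patient,

$$(1+\rho)^{-t} = (1.05)^{-2000} \approx 10^{-42},$$

so the patient's death would carry roughly $10^{-42}$ the weight of an equally sized harm in Cleopatra's own time, and her extra helping of dessert would trivially outweigh it. Parfit takes that conclusion to be absurd, which is the argument against discounting.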
> But the part about how we are able to benefit future people is very short.

Well yes, but that's because it's in the other pages linked there. Mostly, this has to do with thinking about whether existential risks are near-term and whether there is anything we can do about them. That isn't really in the scope of that article, but I agree the article doesn't show it.
> The world is a complex system, and trying to affect the far future state of a complex system is a fool's errand.

That isn't entirely true. There are some things that routinely affect the far future of complex systems. For instance, complex systems can collapse, and if you can get one to collapse, you can pretty easily affect its far future. And if a system is about to collapse due to an extremely rare event, then preventing that collapse can affect its far future state.
> Let's look at a few major breakpoints and see whether longtermism was a significant factor.

Obviously, it wasn't. But of course it wasn't! Longtermism didn't exist at all back then, so it couldn't have been a significant factor in anyone's decisions. Maybe you are trying to say "people can make long-term changes without being motivated by longtermism." But that doesn't say anything about whether longtermism might make them better at creating long-term changes than they otherwise would be.
> Longtermism is also not required for many popular causes commonly associated with it.

I generally agree with this, and so do many others (for instance, see here and here). However, I think it's possible that this may not be true at some time in the future. I personally would like to have longtermism around in case there is really something where it matters, mostly because I think it is roughly correct as a theory of value. Some people may even think this is already the case: I don't want to speak for anyone, but my sense is that people who work on suffering risk are generally relying on longtermism but don't care as much about existential risk.
> The main point is that intervening for long-term reasons is not productive, because we cannot assume that interventions are positive.

First, I agree that interventions may be negative, and I think most longtermists would strongly agree as well. As for whether historical "long term" interventions have been negative: you've asserted it, but you haven't really shown it. I would be very interested in research on this; I'm not aware of any. If it were true, I do think that would be a knock against longtermism as a theory of action (though not a decisive one, and not against longtermism as a theory of value). Though it could still be argued that we live at "the hinge of history", where longtermism is especially useful.
A note on the distinction I made above between a theory of value and a theory of action. A theory of value (or axiology) is a theory about what states of the world are most good. For instance, it might say that a world with more happiness, or more justice, is better than a world with less. A theory of action is a theory about what you should do; for instance, that we should take whichever action produces the maximum expected happiness. Greaves and MacAskill make the case for longtermism as both, but you could imagine accepting longtermism as a theory of value while rejecting it as a theory of action.
For instance, you write:

> Well-intentioned interventions could just as likely turn out to be harmful.
Various philosophers, including Parfit himself, have suggested that for this reason many utilitarians should actually "self-efface" their morality. In other words, they should perhaps come to believe that killing large numbers of people is bad even if it increases utility, because they might simply be wrong about the utility calculation, or might delude themselves into thinking that what they already wanted to do produces a lot of utility. I gave some more resources/quotes here.
Thanks for writing!
***

Thanks, ThomasWoodside! I noticed the forum has relatively low throughput, so I decided to "learn in public", as it were :)
I understand the Cleopatra paragraph now and I've edited my post. I wasn't able to understand his point before, so I got it wrong. Thanks for explaining it!