
What We Owe the Future, the book that is the culmination of a new movement called “longtermism”, was released a couple of months ago and has since been everywhere. Countless YouTube channels, newspapers, and magazines have covered and promoted its content, and even more people have shared their thoughts on it online. There is no need to attempt to be comprehensive and repeat everything that has already been said. Instead, I want to focus on one problem I have with the book: it fails to provide a full and accurate picture of what longtermism entails, and it achieves its persuasiveness at the cost of actually making readers grapple with the difficult moral questions themselves.


A Brief Overview

In MacAskill’s words, longtermism is “the idea that positively influencing the longterm future is a key moral priority of our time.” The view is supported by three fundamental claims that are outlined in What We Owe the Future:

“Future people count. There could be a lot of them. We can make their lives go better.”

There is nothing all that surprising about any of these claims, except how far longtermists take them.

Most of us would agree that future people count, but MacAskill claims that “impartially considered, future people should count for no less, morally, than the present generation.”

It would also make sense that, absent some catastrophe, there would be a lot more people to come, but MacAskill wants us to take seriously just how many: “even if humanity lasts only as long as the typical mammalian species (one million years), and even if the world population falls to a tenth of its current size, 99.5 percent of [human] life would still be ahead of you.” This claim also informs what MacAskill means when he refers to “future generations”: we are not talking about your grandchildren or their grandchildren, but about people living thousands of years from now and beyond.

It also seems intuitive that the actions of current generations influence future generations, but MacAskill emphasizes that this might be the case for us more than ever before because “we might soon be approaching a critical juncture in the human story.” He is hopeful that there are things we can do that would make the future go better, or keep humanity from going extinct.

According to longtermists, it follows from these three claims that our efforts should focus on improving the very long-term future, so that humanity can achieve its full potential. Therefore, a primary focus of longtermists is avoiding existential catastrophe—an event that would curtail human potential, perhaps even beyond recovery. This leads longtermists to be worried about nuclear weapons, biotechnology, AI and such.


When I read What We Owe the Future, I was surprised to see that the book contains relatively little that is new for someone who is already somewhat aware of the longtermist movement. Most of it is a rearticulation of ideas that are already out there, with a strange emphasis on historical anecdotes and a few new concepts sprinkled in that I had not yet encountered (which might just be because of my limited exposure to the movement). This made me realize that I did not have the right expectations going in: the goal of the book is not to put new ideas out there, but to make the ideas that are already floating around accessible and popular outside the community.

In other words, the aim of the book seems to be to convince people who do not care about the long-term future that they should. But not only that: based on the title, it is trying to convince them that they owe something to future generations and that the book will reveal what is owed.

Based on how the book has been received, it seems to do a relatively good job, but that is not my concern. I am concerned about whether it actually provides the full picture while trying to convince. I am not sure that it does.

Prioritizing something that was not previously a priority is always a question of deprioritizing other things, of shifting the focus. Prioritizing the very long-term means deprioritizing the short-term, a trade-off that is somewhat obscured in the book. And when this trade-off is dealt with, a crucial aspect of what separates the present and the short-term from the very long-term is left out: uncertainty.

But before I go into it, I want to emphasize that with longtermism, I feel it is particularly important to separate the principles that the movement promotes from what they do in practice. A lot of their priorities now are things that most of us can get on board with, but that is primarily because they also concern the short-term. A nuclear war would not only affect those in the very long-term future; climate change and AI already affect lives now. Still, there are other things that I could not care less about, like buying coal mines to keep coal in the ground in case humanity needs to reindustrialize.

In any case, it is important to evaluate a movement based on its principles, because those principles will determine what it does in practice in the future. Therefore, that is my primary focus in reviewing the book.


Uncertainty about the existence of future generations

The most crucial claim that longtermists make is the first claim: that future people count no less than the present generation. Here is how MacAskill explains it:

“The idea that future people count is common sense. Future people, after all, are people. They will exist. They will have hopes and joys and pains and regrets, just like the rest of us. They just don’t exist yet.”

This claim makes sense to a certain extent. The life of someone who lived a thousand years ago mattered no less than that of someone who is alive right now. For example, imagine that you could save the life of someone who is alive now by going back in time and killing some innocent person who lived a thousand years ago. Is this decision different than killing someone alive now to save that person? To me, it seems to be the same: the fact that one lived in 2022 and the other in 1022 makes no real difference to how wrong it is to kill them. So why should it be different for 3022?

But it is different, because we do not really know whether the people in the very long-term future will exist; MacAskill says they will, but all we know is that they might. (For all I know, the earth might be demolished two days from now because of an intergalactic highway construction!)

Longtermists like to claim that distance in time works just like distance in space. If a child is starving and you can help them, your decision to help should not depend on whether they are 100 meters away or on the other side of the world. Longtermists think the same is true for time: why should it matter whether the child you are saving exists now or 100 years from now?

But the uncertainty is an undeniable problem: if you told me that with increased distance the chances of my donation making it to the child decreased, then my decision would depend on the distance. This is precisely the problem with distance in time and it is a fact that the book seems to repeatedly ignore, even though it is so obvious.

For example, the book asks if there are reasons to care more about people alive today and points to only two reasons: (1) partiality, that we have relationships with people in the present, and (2) reciprocity, that people who are alive now can repay our good deeds but the future people cannot benefit the present generation. Both reasons communicate a clear message: if you care more about people alive, you are self-interested. No mention of the fact that we are weighing “hopes and joys and pains and regrets” that exist or are pretty certain to exist against those that only might.

 

Uncertainty about what the future holds

Even if we disregarded the uncertainty about the existence of the very long-term future, there is still another uncertainty that matters: we have no idea what will happen in the meantime. Consider this metaphor from the book:

“Suppose that, while hiking, I drop a glass bottle on the trail and it shatters. And suppose that if I don’t clean it up, later a child will cut herself badly on the shards. In deciding whether to clean it up, does it matter when the child will cut herself? Should I care whether it’s a week, or a decade, or a century from now? No. Harm is harm, whenever it occurs.”

But realistically, that is not the decision we face in choosing whether we should care about the very long-term future. It is more something like this:

Suppose that, while hiking, I drop a glass bottle on the trail, it shatters and the shards go in two directions. I see that a child is about to cut herself badly on the shards to the right. I also guess that at some point in the future, another child might walk along the path with the shards to the left. What should I do, if I can only pick up the shards on one side?

Clearly, I would save the child on the right. I don’t know what will happen by the time the other child walks over the shards. Maybe someone else will have cleaned them up, maybe they will invent technologies that would ensure the child would not get hurt (a.k.a. shoes), maybe we will all go extinct by then! What reason is there to save the child on the left rather than the right?

For longtermists, the answer seems to also be a matter of scale: the way they see it, while there is one child on the right, there are millions on the left. But I don’t think they are right about that either. Realistically, trying to solve the problems we face today could be just as impactful as focusing on the ones that will only arise in the future. For example, if we focus on improving the lives of the next generation, including by giving them a better education, then they will be in a better position than we are to positively influence the generation after them, and so on.

 

Caring about “humanity” over humans

So far, I have focused on two kinds of uncertainty that demonstrate we have good reason to prioritize the present generation (or the short-term) rather than the very long-term future. I don’t think these are novel ideas. In fact, I have focused on them precisely because they are so obvious, and yet the analogies in the book do not take them into account when making the case for longtermism. Uncertainty only becomes a part of the picture when discussing how best to positively influence the long term, in the conclusion.

Part of the reason uncertainty is excluded early on may be that it does not actually matter much to longtermists: there is something other than the lives of individuals that also matters to them, namely the trajectory of “humanity”.

Longtermists often draw analogies between individual people and humanity. Humanity, in this case, seems to be a separate entity, rather than the sum of individual human lives. This is clear from the first of the three primary metaphors the book relies on:

“The first (metaphor) is of humanity as an imprudent teenager. Most of a teenager’s life is still ahead of them, and their decisions can have lifelong impacts. In choosing how much to study, what career to pursue, or which risks are too risky, they should think not just about the short-term thrills but also about the whole course of life ahead of them.”

Most of us make some sacrifices in our earlier years for a better future. We might work a lot, even though it is unpleasant, in order to live comfortably later on, for example. But is the same true for humanity? The teenager gets to benefit from their sacrifices because the future in question is their future. The same is not true for humanity: the present generation will not get to benefit from what happens in the future even if they dedicate their lives to making sure the future goes well.

We don’t really consider an individual’s life as the sum of each separate moment but as a whole. But aren’t the individual people that make up humanity precisely what is important? Does the trajectory of humanity have any value other than how each individual lives their lives? I don’t think so, but longtermists seem to.

Longtermists, for example, talk about the “potential” of humanity as analogous to the “potential” of a scientist.1 Just as the promising scientist has some potential they could hopefully realize, humanity has a potential: human civilization could accomplish things beyond our wildest dreams, develop tremendously culturally and scientifically, and provide a wonderful life for the inhabitants of that future. What we have to do is make sure that future can be realized, by, for example, making sure we don't go extinct before then. That is why the two kinds of uncertainty I have been talking about are not reasons for longtermists to get discouraged but instead reasons to care even more about the future, to make sure humanity achieves its potential.

I agree that it would be great if humanity achieved its potential, but it would only be great because of what it would mean for the lives of individuals. And insofar as that is the only thing that matters, and those people are not essentially more worthy of a good life than the present generation, I don’t see why the present generation should dedicate their lives to the future like the teenager. Especially when the present generation has its own problems.

Focusing on “humanity” as an abstract, separate entity obscures the fact that we are talking about the lives of individuals, and that fundamentally those individuals are what we care about.

But even for longtermists who like to think about humanity as a whole, I think this particular teenager analogy fails to provide an accurate picture. Imagine seeing a teenager who is starving, constantly sick, and struggling with poverty, and all they talk about is saving up for a luxurious house they might be able to afford in their fifties. Wouldn’t we encourage them to take care of themselves now rather than think so far into the future? I think that is roughly the situation we are in right now. Shifting our priorities should come after achieving some satisfactory present state, which we are far from.


Conclusion: the language of “obligation”

Longtermism, while it is slowly becoming its own movement, started off as an idea within effective altruism.

If you go on effectivealtruism.org, this is the slogan you see for effective altruism: “Effective altruism is about doing good better.” According to the website, the desire that led to the movement is “to make sure that attempts to do good actually work”. If this is the basic premise of the movement, then it is something we can all agree with: if you want to do good, then you should presumably want your action to do as much good as it can.

Notice that if we take “to do good better” to be the only goal, effective altruism does not claim that all our lives should be dedicated to the most effective way of doing good. It is only saying that if you want to dedicate your life to the good, then there are certain ways of doing so that will be more effective. It is also not specifying a certain area of focus: doing good for living humans? for future generations? for the continuation of the species? for animals? for the planet? It does not pick sides, or, at least, it does not have to. Effective altruism taken at its core is consistent with people caring about different things: it does not have to convince someone who cares about current living conditions that the future matters more, it can still help them figure out what sorts of actions will be effective to improve whatever it is they care more about.

This is the big difference that I find odd: effective altruism does not argue for an obligation, so why does longtermism?

The answer seems to lie in the book: MacAskill himself talks about how “the thought of trying to improve the lives of unknown future people initially left [him] cold.” I think this is true for most of us and perhaps the only way to be moved by unknown future people is to feel the force of an obligation on us. This is not necessary when it comes to people who exist now: we can be moved to do something even if we don’t feel we have to.

But if the book is going to argue that we have an obligation, it has to argue for a particular moral framework, which it does not really do. Instead, the aim seems to be to make people feel they have an obligation whatever their existing beliefs are, by pushing them to think about the future in a certain way. But in doing so through examples and metaphors that obscure what exactly the trade-off is and the importance of uncertainty, it no longer feels like philosophy but rhetoric. The reader might be moved, but it is unclear if they are aware of what exactly the view they are moved by entails.


Most of us don’t care about humanity in the abstract. We care about the possibility of extinction insofar as it affects us or the people we care about. We don’t think of human lives in terms of potential; people who did not come into existence are not wasted potential. Looking at the future, our aim is not to maximize the number of people who will exist and live sufficiently happy lives.

Most of us only care about people who exist (and will exist in the short-term) and what their experience of their life is like. We care about people having good lives. Longtermism is valuable in pointing out to us that the people who exist right now are not the only ones who will ever exist, and that our concerns should extend beyond the present generation. If we had a time machine or a fortune-telling orb that could collapse the distance in time as technology collapses physical distances, the calculus would be different. In that case, we would know who would come to exist and have a better sense of how our actions would impact them, so caring for the future would not be so different from caring about the present.

But as things stand, uncertainty changes the entire calculus. We are weighing the lives of people who exist and are relatively certain to exist against those who might. When the interests of the future and present don’t align, we are weighing actions that will have an impact against those that are possibly helpful, but possibly completely useless or unnecessary. Any dollar spent on the far future is a dollar not spent on the now. If people are fine with that, that is a different story, but I think they should be at least aware that this is the reality.

Imagine that we spend the entire next century prioritizing the very long-term future, only to find out that humanity is sure to go extinct the next day. Would it not feel like we have made a big blunder?

1 https://longtermism.com/introduction

 


 


 


Comments

I think I disagree with several arguments here, and one of the main arguments could be thought of as an argument for longtermism. And I have to add, this post is really well-written and the arguments/intuitions are really clearly expressed! Also, the epistemic status of the last paragraph is quite speculative.

First of all, most longtermist causes and projects aim to increase the chance of survival for existing humans (misaligned AI, engineered pandemics, and nuclear war are catastrophes that could take place within this century, if you don't have a reason to completely disregard forecasters and experts) or to reduce the chance of global catastrophic events for the generation that is already alive. Again, biorisk and pandemics could be thought of as longtermist causes, but if more people had been working on these issues pre-2020, their actions and work would have been impactful not only for future generations but also for already existing people who suffered throughout COVID-19.

If I'm not misunderstanding, one of the main ideas/intuitions that form the basis for this review is: "It is uncertain whether future people will exist, so we should give more weight to the idea that humanity may cease to exist, and donating to or working on longtermist causes may be less impactful compared to neartermist causes." If we ought to give more weight to the idea that future people may not exist, isn't this an argument for working on x-risk reduction? Even if you have a person-affecting view of population ethics, since the world could be destroyed tomorrow, the following week, or within this year/decade/century, the s-risks that could result from a misaligned AI or stable totalitarianism are all events that could impact people who are already alive and cause them to suffer at an astronomical level, or, if we're optimistic, curtail humanity's potential in a way that would render the lives of already existing people more unbearable and prevent us from coordinating to reduce suffering.

Thirdly, I think it wouldn't be wrong to say that "excited altruism" rather than "obligatory altruism" has been emphasized more and more as EAs started focusing on scaling and community-building. Peter Singer does think we have an obligation to help those who suffer as long as it doesn't cost us astronomically. Most variants of utilitarianism and Kantian-ish moral views would use the word "obligation" in a non-trivial way when framing our responsibility to help those who suffer and who are worse off. Should I buy a yacht or save 100 children in Africa? Even though a lot of EAs wouldn't say "they are obligated not to buy the yacht and to donate to GiveWell", some EAs, including me, would probably agree that this is a moral dilemma where we could say that the billionaire kind of has an obligation to help. But you may disagree with this, and I would totally understand, and you may even be right, because maybe there are no moral truths! But I would say longtermism, too, can be and is framed within a paradigm of excited altruism: the stakes are very high, and longtermism is usually targeted at audiences who are already EAs, so people use the word "should" because this conversation usually takes place between people who already agree that we should do good. So even if you're not a moral realist and don't believe in moral obligations, you can be a longtermist.

As a final point, I do agree we don't care about humanity in the abstract; usually people care about existing people because of intuitions/sentiments. But most people, with the exception of a few cultures, didn't care about animals at all throughout humanity's history. So when it comes to the question of who we should care about and how we should think about that, our hunches and intuitions usually don't work very well. We tend not to think about the welfare of insects and shrimps (I personally don't, at a sentimental level), but is there some chance that we should include these beings in our moral circles and care about them? I definitely wouldn't say no. Also, a lot of people's hunch is that we should care about the people around us, but again, that is incompatible with the idea that people aren't more worthy of saving and caring for just because they are closer to us. It probably isn't the case that a Brit should save one British person instead of 180 people from Malawi, even though almost everyone (in the literal sense) acted that way until Peter Singer, because they had a hunch; that hunch is unfortunately probably inaccurate if we want to do good. So we may have the intuition that, when we're doing good, we should think more about people who already exist, but we may have to disregard that intuition and think about uncertainty more seriously and rationally, rather than just disregard future people's welfare because those people may not exist.

As a final-final point, coming up with a decision theory that prevents us from caring about our posterity and future people is really, really hard. Even if you take uncertainty very seriously, don't believe Toby Ord, MacAskill, or top forecasters like Eli Lifland (who published a magnificent critique of this book), and think the probability of x-risk is greatly overestimated, I think arguments based on the intuition that "it's uncertain whether future people will exist" aren't a counterargument against either weak or strong longtermism. I think this argument should instead lead us to think about which decision theory is best for navigating the uncertainty we face, rather than to prioritize people who already exist.

Btw, if you're from Turkey and would like to connect with the community in Turkey, feel free to dm!

"Imagine that we spend the entire next century prioritizing the very long-term future, only to find out that humanity is sure to go extinct the next day. Would it not feel like we have made a big blunder?"

No more than we would feel we had made a big blunder if we invested in a pension, only to find out that we have terminal cancer and won't live to draw that pension. Uncertainty still requires action.

"Imagine that we spend the entire next century prioritizing the very long-term future, only to find out that humanity is sure to go extinct the next day. Would it not feel like we have made a big blunder?"

 

I guess that if an asteroid were to hit us tomorrow, we in the EA community would be able to make peace with ourselves about the time we spent trying, despite the outcome. Making sacrifices for the future is one of the best ways I have found to justify the life we have been gifted with...