
Note: The Global Priorities Institute (GPI) has started to create summaries of some working papers by GPI researchers, with the aim of making our research more accessible to people outside of academic philosophy (e.g. interested people in the effective altruism community). We welcome any feedback on the usefulness of these summaries.

Summary: The Case for Strong Longtermism

This is a summary of the GPI Working Paper "The case for strong longtermism" by Hilary Greaves and William MacAskill. The summary was written by Elliott Thornley.

In this paper, Greaves and MacAskill make the case for strong longtermism: the view that the most important feature of our actions today is their impact on the far future. They claim that strong longtermism is of the utmost significance: that if the view were widely adopted, much of what we prioritise would change.

The paper defends two versions of strong longtermism. The first version is axiological, making a claim about the value of our actions. The second version is deontic, making a claim about what we should do. According to axiological strong longtermism (ASL), far-future effects are the most important determinant of the value of our actions. According to deontic strong longtermism (DSL), far-future effects are the most important determinant of what we should do. The paper argues that both claims are true even when we draw the line between the near and far future a surprisingly long time from now: say, a hundred years.

Axiological strong longtermism

The argument for ASL is founded on two key premises. The first is that the expected number of future lives is vast. If there is even a 0.1% probability that humanity survives until the Earth becomes uninhabitable – one billion years from now – with at least ten billion lives per century, the expected future population is at least 100 trillion (10^14). And if there is any non-negligible probability that humanity spreads into space or creates digital sentience, the expected number of future lives is larger still. These kinds of considerations lead Greaves and MacAskill to conclude that any reasonable estimate of the expected future population is at least 10^24.
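To make the arithmetic explicit, here is a minimal sketch reproducing the lower-bound calculation above; the figures are taken directly from the estimate in the text, and the variable names are ours.

```python
# Reproduces the lower-bound estimate in the text: a 0.1% chance of surviving
# until the Earth becomes uninhabitable (~one billion years), with at least
# ten billion lives per century.

survival_probability = 0.001                 # 0.1% chance humanity survives that long
centuries = 1_000_000_000 / 100              # one billion years = 10^7 centuries
lives_per_century = 10_000_000_000           # ten billion lives per century

expected_future_population = survival_probability * centuries * lives_per_century
print(f"{expected_future_population:.0e}")   # 1e+14, i.e. at least 100 trillion lives
```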

The second key premise of the argument for ASL is that we can predictably and effectively improve the far future. We can have a lasting impact on the future in at least two ways: by reducing the risk of premature human extinction and by guiding the development of artificial superintelligence.

Take extinction first. Both human survival and human extinction are persistent states. They are states which – upon coming about – tend to persist for a long time. These states also differ in their long-run value. Our survival through the next century and beyond is, plausibly, better than our extinction in the near future. Therefore, we can have a lasting impact on the future by reducing the risk of premature human extinction.

Funding asteroid detection is one way to reduce this risk. Newberry (2021) estimates that spending $1.2 billion to detect all remaining asteroids with a diameter greater than 10 kilometres would decrease the chance that we go extinct within the next hundred years by 1-in-300-billion. Given an expected future population of 10^24, the result would be approximately 300,000 additional lives in expectation for each $100 spent. Preventing future pandemics is another way to reduce the risk of premature human extinction. Drawing on Millett and Snyder-Beattie (2017), Greaves and MacAskill estimate that spending $250 billion strengthening our healthcare systems would reduce the risk of extinction within the next hundred years by about 1-in-2,200,000, leading to around 200 million extra lives in expectation for each $100 spent. By contrast, the best available near-term-focused interventions save approximately 0.025 lives per $100 spent (GiveWell 2020). Further investigation may reveal more opportunities to improve the near future, but it seems unlikely that any near-term-focused intervention will match the long-term cost-effectiveness of pandemic prevention.
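As a rough check on these figures, here is a minimal sketch of the back-of-the-envelope calculation, using the 10^24 expected-population figure from above; the helper function and variable names are ours.

```python
# Back-of-the-envelope check of the cost-effectiveness figures quoted above,
# using the paper's 10^24 expected-future-population estimate.

EXPECTED_FUTURE_POPULATION = 1e24

def lives_per_100_dollars(total_cost, extinction_risk_reduction):
    """Expected lives saved per $100, given a total spend and the resulting
    reduction in the probability of premature extinction."""
    expected_lives = extinction_risk_reduction * EXPECTED_FUTURE_POPULATION
    return expected_lives / (total_cost / 100)

# Asteroid detection: $1.2 billion for a 1-in-300-billion risk reduction.
print(lives_per_100_dollars(1.2e9, 1 / 300e9))   # ~2.8e5, i.e. roughly 300,000 lives per $100

# Pandemic prevention: $250 billion for a ~1-in-2,200,000 risk reduction.
print(lives_per_100_dollars(250e9, 1 / 2.2e6))   # ~1.8e8, i.e. roughly 200 million lives per $100
```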

Of course, the case for reducing extinction risk hangs on our moral view. If we embrace a person-affecting approach to future generations (see Greaves 2017, section 5) – where we care about making lives good but not about making good lives – then a lack of future lives would not be such a loss, and extinction would not be so bad. Alternatively, if we expect humanity’s long-term survival to be bad on balance, we might judge that extinction in the near-term is the lesser evil. 

Nevertheless, the case for strong longtermism holds up even on these views. That is because reducing the risk of premature human extinction is not the only way that we can affect the far future. We can also affect the far future by (for example) guiding the development of artificial superintelligence (ASI). Since ASI is likely to be influential and long-lasting, any effects that we have on its development are unlikely to wash out. By helping to ensure that ASI is aligned with the right values, we can decrease the chance that the far future contains a large number of bad lives. That is important on all plausible moral views.

While there is a lot of uncertainty in the above estimates of cost-effectiveness, this uncertainty does not undermine the case for ASL because we also have ‘meta’ options for improving the far future. For example, we can conduct further research into the cost-effectiveness of various longtermist initiatives and we can invest resources for use at some later time.

Greaves and MacAskill then address two objections to their argument. The first is that we are clueless about the far-future effects of our actions. They explore five ways of making this objection precise – by appeal to simple cluelessness, conscious unawareness, arbitrariness, imprecision, and ambiguity aversion – and conclude that none undermines their argument. The second objection is that the case for ASL hinges on tiny probabilities of enormous values, and that chasing these tiny probabilities is fanatical. For example, it might seem fanatical to spend $1 billion on ASI-alignment for the sake of a 1-in-100,000 chance of preventing a catastrophe, when one could instead use that money to help many people with near-certainty in the near-term. Greaves and MacAskill take this to be one of the most pressing objections to strong longtermism, but make two responses. First, denying fanaticism has implausible consequences (see Beckstead and Thomas 2021, Wilkinson 2022) so perhaps we should be fanatical on balance. Second, the probabilities in the argument for strong longtermism might not be so small that fanaticism becomes an issue. They thus tentatively conclude that the fanaticism objection does not undermine the case for strong longtermism.
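To see why the example looks fanatical despite its enormous expected value, here is a rough illustration. It assumes the catastrophe averted would be premature extinction and reuses the 10^24 figure from above; both are our assumptions for illustration, not claims made in the paper.

```python
# Illustration of the fanaticism worry (our assumptions, not the paper's):
# treat the averted catastrophe as premature extinction and reuse the 10^24
# expected-future-population figure.

expected_future_population = 1e24
chance_of_preventing_catastrophe = 1e-5    # the 1-in-100,000 chance in the example
cost = 1e9                                 # $1 billion spent on ASI alignment

expected_lives = chance_of_preventing_catastrophe * expected_future_population
print(expected_lives / (cost / 100))       # ~1e12 expected lives per $100

# Compare ~0.025 lives per $100 for the best near-term interventions (GiveWell 2020):
# the expected value is vastly higher, but it comes almost entirely from a tiny
# probability of an enormous payoff -- which is exactly what strikes many as fanatical.
```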

Deontic strong longtermism

Greaves and MacAskill then argue for deontic strong longtermism: the claim that far-future effects are the most important determinant of what we should do. Their ‘stakes-sensitivity argument’ employs the following premise:

In situations where (1) some actions have effects much better than all others, (2) the personal cost of performing these actions is comparatively small, and (3) these actions do not violate any serious moral constraints, we should perform one of these actions.

Greaves and MacAskill argue that each of (1)-(3) is true in the most important decision situations facing us today. Actions like donating to prevent pandemics and guide ASI development meet all three conditions: their effects are much better than all others, their personal costs are small, and they violate no serious moral constraints. Therefore, we should perform these actions. Since axiological strong longtermism is true, it is the far-future effects of these actions that make their overall effects best, and deontic strong longtermism follows.

The paper concludes with a summary of the argument and its practical implications. Humanity’s future could be vast, and we can influence its course. That suggests the truth of strong longtermism: impact on the far future is the most important feature of our actions today.

References

Nicholas Beckstead and Teruji Thomas (2021). A paradox for tiny probabilities and enormous values. GPI Working Paper No. 7-2021.

GiveWell (2020). GiveWell’s Cost-Effectiveness Analyses. Accessed 26 January 2021.

Hilary Greaves (2017). Population axiology. Philosophy Compass.

Piers Millett and Andrew Snyder-Beattie (2017). Existential Risk and Cost-Effective Biosecurity. Health Security 15(4):373–383.

Toby Newberry (2021). How cost-effective are efforts to detect near-Earth objects? Global Priorities Institute Technical Report T1-2021.

Hayden Wilkinson (2022). In defense of fanaticism. Ethics 132(2):445–477.

Comments (27)



First, denying fanaticism has implausible consequences (see Beckstead and Thomas 2021, Wilkinson 2022) so perhaps we should be fanatical on balance.

I haven't read the paper, but if we accept fanaticism shouldn't we be chasing the highest probability of infinite utility? That seems pretty inconsistent with how longtermists seem to reason (though it probably still leads to similar actions like reducing x-risk, since we probably have to be around in order to affect the world and increase the probability of infinite utility).

Can you give some examples of infinite utility?

  1. We figure out how to prevent the heat death of the universe indefinitely. (Technically this doesn't lead to infinite utility, since you could still destroy everything of value in the universe, but by driving the probability of that low enough you can get arbitrarily large amounts of utility, which leads to the same fanatical conclusions.)
  2. We figure out that a particular configuration of matter produces experiences so optimized for pleasure that it has infinite utility (i.e. we'd accept any finite amount of torture to create it even for one second).
  3. We discover a previously unknown law of physics that allows us to run hypercomputers which can run infinite simulations of happy people.

None of these seem particularly likely, but I'm not literally certain that they can't happen / that I can't affect their probability, and if you accept fanaticism then you should be striving to increase the probability of making something like this happen. (Which, to be clear, could be the right thing to do! But it's not how longtermists tend to reason in practice.)

you should be striving to increase the probability of making something like this happen. (Which, to be clear, could be the right thing to do! But it's not how longtermists tend to reason in practice.)

As you said in your previous comment we essentially are increasing the probability of these things happening by reducing x-risk. I'm not convinced we don't tend to reason fanatically in practice - after all Bostrom's astronomical waste argument motivates reducing x-risk by raising the possibility of achieving incredibly high levels of utility (in a footnote he says he is setting aside the possibility of infinitely many people). So reducing x-risk and trying to achieve existential security seems to me to be consistent with fanatical reasoning.

It's interesting to consider what we would do if we actually achieved existential security and entered the long reflection. If we take fanaticism seriously at that point (and I think we will) we may well go for infinite value. It's worth noting though that certain approaches to going for infinite value will probably dominate other approaches by having a higher probability of success. So we'd probably decide on the most promising possibility and run with that. If I had to guess I'd say we'd look into creating infinitely many digital people with extremely high levels of utility.

I'm not sure whether you are disagreeing with me or not. My claims are (a) accepting fanaticism implies choosing actions that most increase probability of infinite utility, (b) we are not currently choosing actions based on how much they increase probability of infinite utility, (c) therefore we do not currently accept fanaticism (though we might in the future), (d) given we don't accept fanaticism we should not use "fanaticism is fine" as an argument to persuade people of longtermism.

Is there a specific claim there you disagree with? Or were you riffing off what I said to make other points?

Yes, I disagree with (b), although it's a nuanced disagreement.

I think the EA longtermist movement is currently choosing the actions that most increase probability of infinite utility, by reducing existential risk.

What I'm less sure of is that achieving infinite utility is the motivation for reducing existential risk. It might just be that achieving "incredibly high utility" is the motivation for reducing existential risk. I'm not too sure on this.

My point about the long reflection was that when we reach this period it will be easier to tell the fanatics from the non-fanatics.

I think the EA longtermist movement is currently choosing the actions that most increase probability of infinite utility, by reducing existential risk.

This is not in conflict with my claim (b). My claim (b) is about the motivation or reasoning by which actions are chosen. That's all I rely on for the inferences in claims (c) and (d).

I think we're mostly in agreement here, except that perhaps I'm more confident that most longtermists are not (currently) motivated by "highest probability of infinite utility".

Yeah that's fair. As I said I'm not entirely sure on the motivation point. 

I think in practice EAs are quite fanatical, but only to a certain point. So they probably wouldn't give in to a Pascal's mugging, but many of them are willing to give to a long-term future fund over GiveWell charities - which is quite a bit of fanaticism! So justifying fanaticism still seems useful to me, even if EAs put their fingers in their ears with regard to the most extreme conclusion...

many of them are willing to give to a long-term future fund over GiveWell charities

It really doesn't seem fanatical to me to try to reduce the chance of everyone dying, when you have a specific mechanism by which everyone might die that doesn't seem all that unlikely! That's the right action according to all sorts of belief systems, not just longtermism! (See also these posts.)

Hmm I do think it's fairly fanatical. To quote this summary:

For example, it might seem fanatical to spend $1 billion on ASI-alignment for the sake of a 1-in-100,000 chance of preventing a catastrophe, when one could instead use that money to help many people with near-certainty in the near-term.

The probability that any one longtermist's actions will actually prevent a catastrophe is very small. So I do think longtermist EAs are acting fairly fanatically.

Another way of thinking about it is that, whilst the probability of x-risk may be fairly high, the x-risk probability decrease any one person can achieve is very small. I raised this point on Neel's post. 

By this logic it seems like all sorts of ordinary things are fanatical:

  1. Buying less chicken from the grocery store is fanatical (this only reduces the number of suffering chickens if your buying less chicken was the tipping point that caused the grocery store to order one less shipment of chicken, and that one fewer order was the tipping point that caused the factory farm to reduce the number of chickens it aimed to produce; this seems very low probability)
  2. Donating small amounts to AMF is fanatical (it's very unlikely that your $25 causes AMF to do another distribution beyond what it would have otherwise done)
  3. Voting is fanatical (the probability of any one vote swinging the outcome is very small)
  4. Attending a particular lecture of a college course is fanatical (it's highly unlikely that missing that particular lecture will make a difference to e.g. your chance of getting the job you want).

Generally I think it's a bad move to take a collection of very similar actions and require that each individual action within the collection be reasonably likely to have an impact.

To quote this summary

I don't know of anyone who (a) is actively working on reducing the probability of catastrophe and (b) thinks we only reduce the probability of catastrophe by 1-in-100,000 if we spend $1 billion on it. Maybe Eliezer Yudkowsky and Nate Soares, but probably not even them. The summary is speaking theoretically; I'm talking about what happens in practice.

Probabilities are on a continuum. It’s subjective at what point fanaticism starts. You can call those examples fanatical if you want to, but the probabilities of success in those examples are probably considerably higher than in the case of averting an existential catastrophe.

I think the probability that my personal actions avert an existential catastrophe is higher than the probability that my personal vote in the next US presidential election would change its outcome.

I think I'd plausibly say the same thing for my other examples; I'd have to think a bit more about the actual probabilities involved.

That's fair enough, although when it comes to voting I mainly do it for personal pleasure / so that I don't have to lie to people about having voted!

When it comes to something like donating to GiveWell charities on a regular basis / going vegan for life I think one can probably have greater than 50% belief they will genuinely save lives / avert suffering. Any single donation or choice to avoid meat will have far lower probability, but it seems fair to consider doing these things over a longer period of time as that is typically what people do (and what someone who chooses a longtermist career essentially does).

Why consider only a single longtermist career in isolation, but consider multiple donations in aggregate?

Given that you seem to agree voting is fanatical, I'm guessing you want to consider the probability that an individual's actions are impactful, but why should the locus of agency be the individual? Seems pretty arbitrary.

If you agree that voting is fanatical, do you also agree that activism is fanatical? The addition of a single activist is very unlikely to change the end result of the activism.

Why consider only a single longtermist career in isolation, but consider multiple donations in aggregate?

A longtermist career spans decades, as would going vegan for life or donating regularly for decades. So it was mostly a temporal thing, trying to somewhat equalise the commitment associated with different altruistic choices.

but why should the locus of agency be the individual? Seems pretty arbitrary.

Hmm well aren't we all individuals making individual choices? So ultimately what is relevant to me is if my actions are fanatical?

If you agree that voting is fanatical, do you also agree that activism is fanatical?

Pretty much yes. To clarify - I have never said I'm against acting fanatically. I think the arguments for acting fanatically, particularly the one in this paper, are very strong. That said, something like a Pascal's mugging does seem a bit ridiculous to me (but I'm open to the possibility I should hand over the money!).

Hmm well aren't we all individuals making individual choices? So ultimately what is relevant to me is if my actions are fanatical?

We're all particular brain cognitions that only exist for ephemeral moments before our brains change and become a new cognition that is similar but not the same. (See also "What counts as death?".) I coordinate both with the temporally-distant (i.e. future) brain cognitions that we typically call "me in the past/future" and with the spatially-distant brain cognitions that we typically call "other people". The temporally-distant cognitions are more similar to current-brain-cognition than the spatially-distant cognitions but it's fundamentally a quantitative difference, not a qualitative one.

That said, something like a Pascal's mugging does seem a bit ridiculous to me (but I'm open to the possibility I should hand over the money!).

By "fanatical" I want to talk about the thing that seems weird about Pascal's mugging and the thing that seems weird about spending your career searching for ways to create infinitely large baby universes, on the principle that it slightly increases the chance of infinite utility.

If you agree there's something weird there and that longtermists don't generally reason using that weird thing and typically do some other thing instead, that's sufficient for my claim (b).

Certainly agree there is something weird there! 

Anyway I don't really think there was too much disagreement between us, but it was an interesting exchange nonetheless!

I'm super excited for you to continue making these research summaries! I have previously written about how I want to see more accessible ways to understand important foundational research - you've definitely got a reader in me.

I also enjoy the video summaries. It would be great if GPI video and written summaries were made as standard. I appreciate it's a time commitment, but in theory there's quite a wide pool of people who could do the written summaries and I'm sure you could get funding to pay people to do them.

As a non-academic I don't think I can assist with writing any summaries, but if a bottleneck is administrative resource, let me know and I may be happy to volunteer some time to help with this.

I appreciated this summary

The 10^24 population expectation seems like the key assumption here. It’s easy to get that wrong by several orders of magnitude, and once you grant that figure, all the other assumptions barely matter.

Perhaps we could work with probability distributions instead of point estimates.
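For concreteness, here is a minimal sketch of what that could look like: a Monte Carlo version of the pandemic-prevention estimate in which the expected future population is sampled from a distribution rather than fixed at 10^24. The log-uniform range is purely illustrative, chosen by us rather than taken from the paper or the comment.

```python
# Monte Carlo sketch: replace the 10^24 point estimate with a distribution.
# The log-uniform range (10^18 to 10^30) is purely illustrative.

import random

def sample_lives_per_100(total_cost, risk_reduction, n=100_000):
    samples = []
    for _ in range(n):
        exponent = random.uniform(18, 30)          # sample uniformly in log space
        population = 10 ** exponent
        samples.append(risk_reduction * population / (total_cost / 100))
    return samples

# Pandemic prevention: $250 billion for a ~1-in-2,200,000 risk reduction.
samples = sorted(sample_lives_per_100(250e9, 1 / 2.2e6))
print("median:", f"{samples[len(samples) // 2]:.1e}")
print("mean:  ", f"{sum(samples) / len(samples):.1e}")
# The mean is driven almost entirely by the upper tail -- the orders-of-magnitude
# uncertainty this comment is pointing to.
```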

leading to around 200 million extra lives in expectation for each $100 spent. By contrast, the best available near-term-focused interventions save approximately 0.025 lives per $100 spent (GiveWell 2020).

What does 'longtermism' add beyond the standard EA framework of maximizing cost-effectiveness? It seems like a regular EA would support allocating funding to the intervention that saves more lives per dollar.

Valuing “saving” lives that already exist (or are likely to exist) versus creating more lives (or making it possible for others to create them)?

Perhaps that’s the main distinction in the deep assumptions/values.

Although, they argue that longtermism goes through even if you accept person-affecting views:

Nevertheless, the case for strong longtermism holds up even on these views. [...] We can also affect the far future by (for example) guiding the development of artificial superintelligence

Funding asteroid detection is one way to reduce this risk. Newberry (2021) estimates that spending $1.2 billion to detect all remaining asteroids with a diameter greater than 10 kilometres would decrease the chance that we go extinct within the next hundred years by 1-in-300-billion. [...] Preventing future pandemics is another way to reduce the risk of premature human extinction. Drawing on Millett and Snyder-Beattie (2017), Greaves and MacAskill estimate that spending $250 billion strengthening our healthcare systems would reduce the risk of extinction within the next hundred years by about 1-in-2,200,000

How much funding would it take to fully fund all extinction risk projects?
