
Epistemic status: quickly written, rehashing (reheating?) old, old takes. Also, written in a grumpier voice than I’d endorse (I got rained on this morning).

Some essays on longtermism came out recently! Perhaps you noticed. I overall think these essays were just fine[1], and that we should all talk less about longtermism.

In which I talk about longtermism

(In what follows, I’ll take “longtermism” as shorthand for: “the effects of our actions on the long-term future should be a key moral priority.”)

Critics often have two[2] broad kinds of objections to longtermism:

  1. It's too revisionary or radical in its implications
  2. It’s not action-guiding: its implications are irrelevant in practice

Here I’ll say a bit more on (2). Specifically, I’m going to argue that (A) longtermism isn’t necessary to motivate most high-priority work, and that (B) for the work longtermism might be necessary to motivate, talking about object-level features of the world[3] is more useful than debating the abstract framework. Given this, I think we should all talk less about longtermism.

Longtermism doesn’t distinctively motivate much work

Okay, so what does longtermism distinctively motivate? Some notes.

  • Longtermism and existential risk – not a crux
    • As argued here, here, here, etc.
    • My summary: Longtermism isn't necessary to think that x-risk (of at least some varieties) is a top-priority problem. You don't need a very high credence in e.g. AI x-risk for it to be the most likely reason you and your family die, and governments' implied value of life suggests they should spend much more on mitigations than they do. (“Holy shit, x-risk” is a good pitch.)
    • Longtermism does add scale and weight. In particular, you might worry that stories about x-risk reduction work can look like this: “Ok, there's a moderate probability of a bad thing happening. I could work on preventing it, and if I do, there's a small chance that I reduce the probability of this bad thing by a truly infinitesimal amount – but wait, hang on, why this? I could be working for a normal charity, or making lots of money!". Longtermism's enormous stakes help answer that worry.
    • Even e.g. deontologists who are sceptical of longtermism will happily work on reducing the risk of catastrophe from new technologies (see e.g. Unruh's essay).
  • Longtermism and non-existential GCRs — not a crux
    • There’s not much to say here: you don’t need longtermism to motivate working on reducing the chances that lots of suffering happens in the near future.
  • Longtermism and better futures
    • This is plausibly one place you need longtermism to motivate work.
    • But even here, you still need additional claims to be true, e.g. we need:
      • Some way of predicting the effects of our actions with reasonable accuracy
      • Some reason to think these effects won't wash out
      • Some way of comparing possible actions

Longtermists act like normal people, mostly[4]

  • As Askell and Neth discuss, longtermists may be myopic for a few reasons:
    • “Causal diffusion” – i.e., the claim that the effects of our actions “wash out” over time.
      • I think this should be your prior, but that if you think we live at a hinge of history, some actions (e.g. steering the development of TAI) seem like they might not wash out. So I’m not very moved by this.
    • “Epistemic diffusion” – i.e., that it’s increasingly hard to predict the effects of our actions over longer timescales
      • I’m most sympathetic to this.
    • Moral uncertainty – they argue that moral uncertainty should make longtermists act more myopically. Broadly, this is because there are many plausible arguments for a positive discount rate and fewer for a zero or negative one, so under moral uncertainty the appropriate discount rate comes out small but non-zero.
      • I agree with something like this when considered from the perspective of “how humanity overall should act”, but disagree that it’s relevant for how longtermists should act. Given that the overwhelming majority of actors have a positive discount rate, it seems basically right to me that longtermists should individually act as if they have a zero discount rate, to move the overall implied discount rate lower.
  • “Believing that longtermism is true doesn’t necessarily mean you’ll act as if you have zero time preference” isn’t a terribly novel point, so I won’t say more about this.

What about work that seemingly does need longtermism?

As I've discussed above, work on reducing existential risk does not need longtermism to motivate it. However, Better Futures-style work aimed at improving the value of the far future probably does need longtermism in order to count as a key moral priority.[5]

I think that even here, it's more useful to talk about specific features of the world than to continue debating whether longtermism is true in general. Concretely, I'm most excited about work that tries to identify the actions one should take if one is very compelled by Better Futures-style reasoning, bearing in mind the difficulty of predicting or influencing the future. (See this comment for similar thoughts.)

Some concrete recommendations

So what should we do instead of debating longtermism? Some ideas.

  • Focus essay collections/research agendas/etc. on particular interventions or causes, not on frameworks as general as longtermism. "Essays on Better Futures" or "Essays on Space Governance" would be more useful, imo, than "Essays on Longtermism."
  • Think hard about predictability and washing out for specific actions, rather than in the abstract.
  • Be concrete about empirical assumptions. I think cruxes for particular interventions are often about hinginess, tractability, and the shape of the world. It is unfortunately super fun to just think about philosophy, but I think in general I’d trade lots of marginal philosophizing for more concrete takes. (She says, abstractly philosophizing.)
  1. ^

     Here’s a footnote in a cranky voice (apologies). I was pretty underwhelmed with the Essays on Longtermism collection. I broadly agree with Oscar that the articles were either (1) reprints of classic essays which had some relationship to longtermism, or (2) new work, which mostly didn't seem to succeed at being novel and plausibly true and important.

    I guess more accurately, I thought the essay collection was just fine, looked good by academic standards, was probably a decent idea ex ante, and that there is nothing very interesting to say about the essays. So mostly I'm like "hm, I don't really get why there was an EAF contest to write about them".

    I think the collection can still have some academic value, e.g. by:

    • Making it higher status (and better for your academic career) to discuss longtermism-related ideas
    • Collecting some classic foundational essays (and again, making it easier to cite them in academic work)
    • Broadening the base of support for longtermism, or assessing how robust longtermism is to different moral views (e.g. deontological perspectives, contractualism)

    My overall gripe is: longtermism doesn't seem very important. I think it would have been better to collect essays on a particular intervention longtermists are often interested in, rather than about an axiological claim which (I argue) doesn't really matter for prioritisation.

  2. ^

     Setting aside a third kind of objection: “it’s false, but for some reason other than being revisionary”.

  3. ^

     E.g., discussing reasons to think this problem in particular must be dealt with now, rather than delegated to future, wiser people to solve; arguing why some actions will likely have persistent, predictable, and robustly good effects.

  4. ^

     They look just like me and you! Your friends, colleagues, and neighbours may even be longtermists…

  5. ^

     I'm not making the claim that BF-style work definitely will need longtermism to be motivated. My impression is that lots of the interventions recommended by this work are still quite abstract and general, and I think it's possible that as we drill down into the details and look more for actions with predictable, persistent, robustly good effects, the kinds of actions that a BF-style longtermist will recommend might look very similar to the kinds of actions that non-longtermists recommend. (E.g. strengthening institutions, reducing the risks of concentration of power, generally preserving optionality beyond just non-extinction optionality.) However, my current guess is that there will be some things that BF-style researchers are excited about, for which you basically do need to be a longtermist in order to consider them key moral priorities.


Comments (6)

A few scattered points that make me think this post is directionally wrong, whilst also feeling meh about the forum competition and essays:

  • I agree that the essay competition doesn't seem to have surfaced many takes that I thought were particularly interesting or action-guiding, but I don't think that this is good evidence for "talking about longtermism not being important".
  • There are a lot of things that I would describe as "talking about longtermism" that seem important and massively underdiscussed (e.g. acausal trade and better futures-y things). I think you also think this.
  • The claim in the title seems about as valid as "talking about AI safety is not important" or "talking about global health is not important" because most of the AI safety and global health work is relatively unimportant. That said, the mean GH, AIS, and longtermist work is still very important. I think that pushing the academic writing about longtermism button increases the amount of "good" longtermist writing at least a bit - though I'd be like 50x more excited about Forethought + Redwood running a similar competition on things they think are important that are still very philosophy-ish/high level.
  • The track record of talking about longtermism seems very strong. For example, I think it's hard to tell a story for Open Phil's work in AIS and biosecurity that doesn't significantly route through "writing about longtermism".
  • I feel like this post is more about "is convincing people to be longtermists important, or should we just care about x-risk/AI/bio/etc.?" I strongly believe that most of the influential technical AIS contributors have been significantly influenced by longtermist writing, including the more philosophical aspects - though I wouldn't be surprised if, by their lights, longtermist writing isn't useful for the people they want to hire.
     

I would like to separate out two issues:

  1. Is longtermism a crux for our decisions?
  2. Should we spend a lot of time talking about longtermist philosophy?

On 1, I think it is more crux-y than you do, probably (and especially that it will be in the future). I think currently, there are some big 'market' inefficiencies where even short-termists don't care as much as idealised versions of their utility functions would. If short-termist institutions start acting more instrumentally rationally, lots of the low-hanging fruit of x-risk reduction interventions will be taken, and longtermists will need to focus specifically on the weirder things that are more specific to our views. E.g. ensuring the future is large, and that we don't spread wild animal suffering to the stars, etc. So actually maybe I agree that for now lots of longtermists should focus on x-risks while there are still lots of relatively cheap wins, but I expect this to be a pretty short-lived thing (maybe a few decades?) and that after that longtermism will have a more distinct set of recommendations.

On 2, I also don't want to spend much more time on longtermist philosophy since I am already so convinced of longtermism that I expect another critique like all the ones we have already had won't move me much. And I agree better-futures style work (especially empirically grounded work) seems more promising.

Thanks for commenting!

> So actually maybe I agree that for now lots of longtermists should focus on x-risks while there are still lots of relatively cheap wins, but I expect this to be a pretty short-lived thing (maybe a few decades?) and that after that longtermism will have a more distinct set of recommendations.


Yeah, this seems reasonable to me. Max Nadeau also pointed out something similar to me (longtermism is clearly not a crux for supporting GCR work, but also clearly important for how e.g. OP relatively prioritises x-risk reduction work vs. mere GCR reduction work). I should have been clearer that I agree "not necessary for x-risk" doesn't mean "not relevant", and I'm more intending to answer "no" to your (2) than "no" to your (1).

(We might still relatively disagree over your (1) and what your (2) should entail — for example, I'd guess I'm a bit more worried about predicting the effects of our actions than you, and more pessimistic about "general abstract thinking from a longtermist POV" than you are.)

Whether longtermism is a crux will depend on what we mean by 'long,' but I think concern for future people is a crux for x-risk reduction. If future people don't matter, then working on global health or animal welfare is the more effective way to improve the world. The more optimistic of the calculations that Carl and I do suggest that, by funding x-risk reduction, we can save a present person's life for about $9,000 in expectation. But we could save about 2 present people if we spent that money on malaria prevention, or we could mitigate the suffering of about 12.6 million shrimp if we donated to SWP.

> Whether longtermism is a crux will depend on what we mean by 'long'

Yep, I was being imprecise. I think the most plausible (and actually believed-in) alternative to longtermism isn't "no care at all for future people", but "some >0 discount rate", and I think x-risk reduction will tend to look good under small >0 discount rates.

I do also agree that there are some combinations of social discount rate and cost-effectiveness of longtermism, such that x-risk reduction isn't competitive with other ways of saving lives. I don't yet think this is clearly the case, even given the numbers in your paper — afaik the amount of existential risk reduction you predicted was pretty vibes-based, so I don't really take the cost-effectiveness calculation it produces seriously. (And I haven't done the math myself on discount rates and cost-effectiveness.)

Even if x-risk reduction doesn't look competitive with e.g. donating to AMF, I think it would be pretty reasonable for some people to spend more time thinking about it to figure out if they could identify more cost-effective interventions. (And especially if they seemed like poor fits for E2G or direct work.)

Makes sense! Unfortunately any x-risk cost-effectiveness calculation has to be a little vibes-based because one of the factors is 'By how much would this intervention reduce x-risk?', and there's little evidence to guide these estimates.
