Abstract from the paper
Longtermists claim that what we ought to do is mainly determined by how our actions might affect the very long-run future. A natural objection to longtermism is that these effects may be nearly impossible to predict—perhaps so close to impossible that, despite the astronomical importance of the far future, the expected value of our present options is mainly determined by short-term considerations. This paper aims to precisify and evaluate (a version of) this epistemic objection to longtermism. To that end, I develop two simple models for comparing “longtermist” and “short-termist” interventions, incorporating the idea that, as we look further into the future, the effects of any present intervention become progressively harder to predict. These models yield mixed conclusions: If we simply aim to maximize expected value, and don’t mind premising our choices on minuscule probabilities of astronomical payoffs, the case for longtermism looks robust. But on some prima facie plausible empirical worldviews, the expectational superiority of longtermist interventions depends heavily on these “Pascalian” probabilities. So the case for longtermism may depend either on plausible but non-obvious empirical claims or on a tolerance for Pascalian fanaticism.
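To give a rough feel for the kind of comparison the abstract describes, here's a toy Python sketch (mine, not from the paper). The exponential "nullifying event" decay is modeled on the paper's general setup, but every number below (p_success, value_per_year, decay_rate, horizon_years) is made up purely for illustration:

```python
import math

def ev_shortterm(benefit):
    """Expected value of a short-termist intervention: a known near-term benefit."""
    return benefit

def ev_longterm(p_success, value_per_year, decay_rate, horizon_years):
    """Expected value of a longtermist intervention whose effect, if achieved,
    persists until 'nullified' by exogenous events arriving at a constant rate,
    so the effect survives to time t with probability exp(-decay_rate * t)."""
    # Closed form of: integral from 0 to horizon of value_per_year * exp(-decay_rate * t) dt
    persistence_years = (1 - math.exp(-decay_rate * horizon_years)) / decay_rate
    return p_success * value_per_year * persistence_years

print("short-termist EV:", ev_shortterm(benefit=1.0))
for r in (1e-4, 1e-1):  # assumed rates of nullifying events per year
    ev = ev_longterm(p_success=1e-9, value_per_year=1e7,
                     decay_rate=r, horizon_years=1e9)
    print(f"longtermist EV at decay rate {r:.0e}: {ev:.2e}")
```

With these made-up numbers, the longtermist option dominates at the low decay rate and loses at the high one, even though its success probability is minuscule in both cases—which is the kind of sensitivity to "Pascalian" probabilities and empirical assumptions the abstract flags.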
Why I'm making this linkpost
- I want to draw a bit more attention to this great paper
- I think this is one of the best sources for people interested in arguments for and against longtermism
- For people who are interested in learning about longtermism and are open to reading (sometimes somewhat technical) philosophy papers, I think the main two things I'd recommend they read are The Case for Strong Longtermism and this paper
- Other leading contenders are The Precipice, Existential Risk Prevention as Global Priority, and some of the posts tagged Longtermism
- I want to make it possible to tag the post so that people can find it later when it's relevant to what they're looking for (e.g., I'd want a pointer to this paper to come up prominently for people who check out the Longtermism tag)
- I want to make it easier for people to get a quick sense of whether it's worth their time to engage with this paper, given their goals (because people can check this post's karma, comments, and/or tags)
- I want to give people a space to discuss the paper in a way that other people can see and build on
- I'll share a bunch of my own comments below
- (I'll try to start each one with a tl;dr for that comment)
Regarding his estimate of the difference in probability we can achieve in promoting one state over its complement, it's worth mentioning that this does not consider the possibility of doing more harm than good, e.g. AI safety work advancing AGI more than it aligns it. With the very low (but, in his view, extremely conservative) probabilities he uses in his argument, the possibility of backfire effects outweighing them becomes more plausible.
Furthermore, the paper does not argue that we can effectively predict that any particular state is better than its complement. E.g., is extinction good or bad? How should we deal with moral uncertainty, especially around population ethics?
For these reasons, it may be difficult to justifiably identify robustly positive-expected-value longtermist interventions ahead of time, which the case for longtermism depends on. I mean this even with subjective probabilities, since the probabilities supporting longtermist interventions tend to be particularly poorly informed (largely because good evidence is absent) and so seem more prone to biases and whims, e.g. wishful thinking and the non-rational particulars of people's brains and priors. In short, this is deep uncertainty and moral cluelessness.
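To make the backfire worry concrete, here's a minimal sketch (again mine; all quantities are assumed for illustration, not taken from the paper or the comment):

```python
# When the probability of securing the good state is tiny, a comparably
# tiny backfire probability can flip the sign of the expected value.

V = 1e15           # hypothetical value at stake, in arbitrary units
p_help = 1e-9      # assumed probability the intervention helps
for p_harm in (1e-10, 1e-9, 2e-9):  # assumed backfire probabilities
    ev = (p_help - p_harm) * V
    print(f"p_harm={p_harm:.0e}  EV={ev:+.2e}")
```

The sign of the expected value here depends entirely on whether p_harm is judged to be above or below p_help, and with probabilities this small and this poorly evidenced, that judgment may not be robust.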
For what it's worth, I don't think it would make much sense for the paper to address such issues in detail, given how long it already is, although they seem worth mentioning.
(Also, I read the paper a while ago, so maybe it did discuss these issues and I missed it.)
In line with your comment:
But Tarsney does acknowledge roughly that second point in one place: