Abstract from the paper
Longtermists claim that what we ought to do is mainly determined by how our actions might affect the very long-run future. A natural objection to longtermism is that these effects may be nearly impossible to predict— perhaps so close to impossible that, despite the astronomical importance of the far future, the expected value of our present options is mainly determined by short-term considerations. This paper aims to precisify and evaluate (a version of) this epistemic objection to longtermism. To that end, I develop two simple models for comparing “longtermist” and “short-termist” interventions, incorporating the idea that, as we look further into the future, the effects of any present intervention become progressively harder to predict. These models yield mixed conclusions: If we simply aim to maximize expected value, and don’t mind premising our choices on minuscule probabilities of astronomical payoffs, the case for longtermism looks robust. But on some prima facie plausible empirical worldviews, the expectational superiority of longtermist interventions depends heavily on these “Pascalian” probabilities. So the case for longtermism may depend either on plausible but non-obvious empirical claims or on a tolerance for Pascalian fanaticism.
Why I'm making this linkpost
- I want to draw a bit more attention to this great paper
- I think this is one of the best sources for people interested in arguments for and against longtermism
- For people who are interested in learning about longtermism and are open to reading (sometimes somewhat technical) philosophy papers, I think the main two things I'd recommend they read are The Case for Strong Longtermism and this paper
- Other leading contenders are The Precipice, Existential Risk Prevention as Global Priority, and some of the posts tagged Longtermism
- I want to make it possible to tag the post so that people come across it later when it's relevant to what they're looking for (e.g., I'd want a pointer to this paper to come up prominently for people who check out the Longtermism tag)
- I want to make it easier for people to get a quick sense of whether it's worth their time to engage with this paper, given their goals (because people can check this post's karma, comments, and/or tags)
- I want to give people a space to discuss the paper in a way that other people can see and build on
- I'll share a bunch of my own comments below
- (I'll try to start each one with a tl;dr for that comment)
I actually think that those two sentences are consistent with each other. And I think that, as Tarsney says, his models and estimates do not show that fanaticism is necessarily required for the case for longtermism to hold.
Basically (from memory and re-skimming), Tarsney gives two model structures, some point estimates for most of the parameters, and then later some probability distributions for the parameters. He intends both models to represent plausible empirical views. He intends his point estimates and probability distributions to represent beliefs that are reasonable but at the pessimistic end for longtermism (so it's not crazy to think those things, but his all-things-considered beliefs about those parameters would probably be more favourable to longtermism). And he finds that the case for longtermism holds given the following assumptions:
(There are various complications, caveats, and additional points, but this stuff is key.)
So his reasoning is consistent with the most reasonable empirical position supporting longtermism without requiring any minuscule probabilities of extremely huge payoffs, and also consistent with that not being the case.
E.g., that could be the case if we should have a non-minuscule credence in the cubic growth model and in that "prima facie plausible" value for the long-run rate of exogenous nullifying events (ENEs).
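To make that general structure a bit more concrete, here's a minimal, purely illustrative sketch in Python. It is not Tarsney's actual model: the functional form, parameter names, distributions, and numbers are all hypothetical. It just shows the basic move of (a) plugging point estimates into an expected-value comparison against a short-termist benchmark, (b) instead integrating over probability distributions for the parameters via Monte Carlo, and (c) asking how much of the resulting expected value comes from minuscule-probability states:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical benchmark: expected value of a "short-termist" intervention,
# in the same (made-up) value units.
SHORT_TERMIST_EV = 1e4

def longtermist_ev(p_success, value_if_success):
    # Toy expected value: probability that the intervention steers the
    # long-run future, times the value realized if it does.
    return p_success * value_if_success

# (1) Point estimates for the parameters (deliberately on the pessimistic side).
point_ev = longtermist_ev(p_success=1e-7, value_if_success=1e12)

# (2) Probability distributions over the same parameters, integrated out
#     by Monte Carlo sampling.
p_samples = 10 ** rng.uniform(-10, -4, size=100_000)   # log-uniform success probability
v_samples = 10 ** rng.uniform(9, 15, size=100_000)     # log-uniform value if successful
ev_contributions = longtermist_ev(p_samples, v_samples)
mc_ev = ev_contributions.mean()

# (3) The key diagnostic: how much of that expected value comes from
#     "Pascalian" states, where the success probability is minuscule?
share_from_tiny_probs = ev_contributions[p_samples < 1e-8].sum() / ev_contributions.sum()

print(f"point-estimate EV:   {point_ev:.3g} (short-termist benchmark {SHORT_TERMIST_EV:.3g})")
print(f"Monte Carlo EV:      {mc_ev:.3g}")
print(f"share from p < 1e-8: {share_from_tiny_probs:.1%}")
```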
Incorporating uncertainty, and finding that one option's potential upside makes it the option we should go for, doesn't necessarily mean fanaticism is involved. E.g., I made many job applications that I expected would turn out not to have been worth the time they took, because of the potential upside, and without having a clear point estimate for my odds of getting the job or for how valuable getting it would be (so I sort of implicitly had a probability distribution over possible credences). This would only be fanatical if the probabilities involved were minuscule and the payoffs huge enough to "make up for" that, and Tarsney's analysis suggests that that may or may not be the case when it comes to longtermism.
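As a toy illustration of that distinction (all numbers made up), compare an ordinary upside-driven choice, like the job application, with a "Pascalian" one whose positive expected value comes almost entirely from a minuscule probability of an astronomical payoff:

```python
# Toy numbers, purely illustrative.

# Ordinary upside-driven choice (e.g. a speculative job application):
# a small-but-not-minuscule probability of a large-but-bounded payoff.
p_job, value_job, cost_job = 0.05, 2_000.0, 10.0
ev_job = p_job * value_job - cost_job            # 90.0: positive EV, no fanaticism needed

# "Pascalian" choice: positive expected value, but only because a
# minuscule probability is multiplied by an astronomical payoff.
p_tiny, value_huge, cost_big = 1e-15, 1e20, 100.0
ev_pascalian = p_tiny * value_huge - cost_big    # 99_900.0: EV driven entirely by the tiny-probability state

print(ev_job, ev_pascalian)
```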