The newsletter will go out later today, but you can start voting now. If you aren't subscribed to the EA Newsletter, you can do so here.
In honour of the leading essay from the 'Essays on Longtermism' Collection, 'The Case For Strong Longtermism', this month's newsletter poll will be on deontic strong longtermism...
Clarifications
The poll's wording is taken from the paper itself, though I have used the shorter version of the deontic strong longtermism thesis. The longer one is:
Deontic strong longtermism (DSL): In the most important decision situations facing agents today,
(i) One ought to choose an option that is near-best for the far future.
(ii) One ought to choose an option that delivers much larger benefits in the far future than in the near future.
As far as I can tell, the shorter statement is equivalent to the longer one. Should you disagree, please vote on the shorter statement (as that's how your vote will appear to anyone browsing the vote distribution).
A couple more clarifications:
- Not all longtermists are deontic strong longtermists. The weaker version of the longtermist thesis, as Will MacAskill phrases it in What We Owe The Future, is 'the view that positively influencing the long-term future is a key moral priority of our time'. I.e., don't look at this poll, think 'longtermism, yup' and vote strong agree. My best guess from reading MacAskill is that he would only be around 10% agree, and he literally wrote the book on longtermism (though he can prove me wrong by voting).
- 'Most important' needn't mean 'only important': you can agree with this strongly and still believe that there are some side constraints which would stop you from prioritising the far future in particular decision situations. For some discussion of side constraints, see section 9 of the article.
I agree it's insanely hard to know what will affect the far future, and how. But I think we should still try, often by using heuristics (one I'm currently fond of is "what kinds of actions seem to put us on a good trajectory, e.g. to be doing well in 100 years?")
I think that in cases where we do have reason to think an action will affect the long-run future broadly and positively in expectation (i.e. even if we're uncertain), that's an extremely strong reason, and usually an overriding one, to favour it over one that looks worse for the long-run future. I think that's sufficient for agreement with the statement.
I agree with a bunch of the skepticism in the comments, but (to me) it seems like there are not enough people on the strong longtermist side.
A couple of points responding to some of the comments:
On the other hand, one point against that I don't think was brought up:
Point 1 in favour reads very much like "focus on near-future benefits because this will (most likely) bring far-future benefits", which is in practice indistinguishable from just "focus on near-future benefits". Plus the assumption (that improving the near future will most likely improve the far future, which I also tend to think) is far from certain, as you acknowledge. With this reasoning, the underlying reason to do X is indeed improving the far future. But the actual effect of X is improving the near future with much higher certainty than improving the far future; everything is aligned. This is not, as far as I understand, the point of the question. Who wouldn't do X? Answering under this scenario doesn't give any information.
Consider the following different scenario: doing X will make the near future worse, with much higher certainty than it makes the far future better (assume the same value-generation magnitudes as before, just make the value for the near future negative). Would you then advocate doing X? I think this gives the information the question is asking for.
I would overwhelmingly most likely not do X in that scenario, because I know how dumb we are (I am) and how complex reality is: the longer the time frame, the less I trust any reasoning (cluelessness). [It is difficult enough to find out whether an action taken has actually been net positive for the immediate future!] Would you? If your answer tends to No, then, for you, far-future effects are not the most important determinant of what we ought to do.
It depends on the case, but there are definitely cases where I would.
Also, while you make a good point that these can sometimes converge, I think the priority of concerns is extremely different under short-termism vs longtermism, which I see as the important part of "most important determinant of what we ought to do." (Setting aside mugging and risk aversion/robustness) some very small or even directional shift could make something hold the vast majority of your moral weight, as opposed to before, where the impact might not have been that big or would have been outweighed by lack of neglectedness or tractability.
P.S. If one (including myself) failed to do X, given that it would shift priorities but wouldn't affect what one would do in light of the short-term damage, I think that would say less about one's actual beliefs and more about their intuitions of disgust towards means-end reasoning, but this is just a hunch and somewhat based on my own introspection (to be fair, sometimes this comes from moral uncertainty/reputational concerns that should be used in this reasoning, which is to your point).
Then it definitely fits with your vote. I just meant that the fact that you (and I) tend to think that making the near future better will also make the far future better shouldn't influence the answer to this question.
We just disagree on how confident we are in our assessments of how our actions will affect the far future. And probably this is because of our age ;-)
Reducing the nearterm risk of human extinction is not astronomically cost-effective, and I think empirical evidence suggests effects after 100 years are negligible.
Curious what you think of the arguments given by Kollin et al. (2025), Greaves (2016), and Mogensen (2021) that the indirect effects of donations to AMF/MAWF swamp the intended direct effects. Is it that you agree, but you think the unintended indirect effects that swamp the calculus all play out within 100 years (and the effects beyond that are small enough to be safely neglected)?
Thanks for the question, Jim. Yes.
(Nice, thanks for explaining.) And how do you think saving human lives now impacts the soil nematodes that will be born between 100 years from now and the end of time? And how does this not dwarf the impact on soil nematodes that will be born in the next 100 years? What happens in 100 years that reduces to pretty much zero the impact of saving human lives now on soil nematodes?
The decrease in soil-animal-years is proportional to the increase in agricultural-land-years, which is the increase in human population times the agricultural land per capita. I think both of these factors decrease over time, and therefore so does the decrease in soil-animal-years.
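In symbols (my own paraphrase of the model sketched above, not notation from the thread):
$$\Delta(\text{soil-animal-years}) \;\propto\; \Delta(\text{agricultural-land-years}) \;=\; \Delta(\text{human population}) \times (\text{agricultural land per capita}),$$
with both factors on the right expected to decline over time.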
The agricultural land per capita in low income countries (LICs) has been decreasing.
Figuring out the increase in human population across time is tricky. The people whose lives were extended will tend to have more children as a result, but decreasing mortality also decreases fertility. From the paper The Impact of Life-Saving Interventions on Fertility by David Roodman:
Nothing special happens in 100 years. This is just a rough guess for when the future impact becomes smaller than 10 % of the past impact.
Oh ok so you're saying that:
1. In ~100 years, some sort of (almost) unavoidable population equilibrium will be reached no matter how many human lives we (don't) save today. (Ofc, nothing very special in exactly 2125, as you say, and it's not that binary, but you get the point.)
Saving human lives today changes the human population curve between 2025 and ~2125 (multiple possible paths represented by dotted curves). But in ~2125, our impact (no matter in which direction it was) is canceled out.
2. Even if 1 is a bit false (such that what the above black curve looks like after 2125 actually depends on how many human lives we save today), this won't translate into a difference in terms of agricultural land use (and hence in terms of soil nematode populations).
Almost no matter how many humans there are after ~2125, total agricultural land remains roughly the same.
Is that a fair summary of your view? If yes, what do you make of, say, the climate change implications of changing the total number of humans in the next 100 years? Climate change seems substantially affected by total human population (and therefore by how many human lives we save today). And the total number of soil nematodes seems substantially affected by climate change (e.g., could make a significant difference in whether there will ever be soil nematodes in current dead zones close to the poles), including long after ~2125 (nothing similar to your above points #1 and #2 applies here; climate-change effects last). Given the above + the simple fact that the next 100 years constitute a tiny chunk of time in the scheme of things, the impact we have on soil nematodes counterfactually affected by climate change between ~2125 and the end of time seems to, at least plausibly, dwarf our impact on soil nematodes affected by agricultural land use between now and ~2125.[1] What part of this reasoning goes wrong, exactly, in your view, if any?
We might have no clue about the sign of our impact on the former, such that some would suggest we should ignore it in practice (see, e.g., Clifton 2025; Kollin et al. 2025), but that's a very different thing from assuming this impact is almost certainly negligible relative to short-term impact.
Thanks, Jim.
Yes, that is a fair summary, with the caveat that I do not think the global population or agricultural land will stabilise after a certain date. I just believe they will be roughly the same longterm with or without intervention.
There is nothing in particular which is wrong about what you said. However, evidence from randomized controlled trials (RCTs), which are the best way to empirically assess causal effects, still shows that effects decrease over time.
For the possibilities considered in David Bernard's post, 90 % of the effects materialise in 68 to 13.3 k years. I think the timelines above are too long because David assumed "the variance of the prior was the same for each time horizon whereas the variance of the signal increases with time horizon for simplicity". Without any information, I would guess my actions can have a much greater effect in 10 years than in 10 M years. So I would assume the variance of the prior decreases over time, in which case the signal would be more heavily discounted than in David's analysis, and therefore the time until 90 % of the effects have materialised would be shorter.
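To make the shrinkage argument concrete, here is a minimal toy sketch (the numbers are my own illustrations, not from David's post or this thread). It assumes Gaussian priors centred on zero effect and a noisy "signal" (the naive estimate); the posterior mean is the naive estimate shrunk towards zero:

```python
# Minimal toy sketch with made-up numbers: a Gaussian prior centred on zero effect and a
# noisy "signal" (our naive estimate) whose noise grows with the time horizon. The posterior
# mean shrinks the naive estimate towards zero, and shrinks it further if the prior variance
# also falls with horizon.

def shrunk_estimate(signal, prior_var, signal_var):
    """Posterior mean under a N(0, prior_var) prior and a signal with variance signal_var."""
    weight = prior_var / (prior_var + signal_var)  # how much weight the signal gets
    return weight * signal

naive_estimate = 1.0  # same naive effect estimate at every horizon
horizons = [
    (10, 1.0, 0.1),               # near term: tight signal, little shrinkage
    (1_000, 1.0, 10.0),           # noisier signal at longer horizons
    (10_000_000, 0.01, 1_000.0),  # prior variance also shrinks, as suggested above
]
for years, prior_var, signal_var in horizons:
    print(f"{years:>10} years: {shrunk_estimate(naive_estimate, prior_var, signal_var):.4f}")
```

Under these made-up numbers, nearly all of the estimated effect at the 10 M year horizon is discounted away, which is the qualitative behaviour being argued for here.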
Nice, thanks. To the extent that, indeed, noise generally washes out our impact over time, my impression is that the effects of increasing human population in the next 100 years on long-term climate change may be a good counterexample to this general tendency.
Not all long-term effects are equal in terms of how significant they are (relative to near-term effects). A ripple on a pond barely lasts, but current science gives us good indications that (i) carbon released into the atmosphere lingers for tens of thousands of years, and (ii) increased carbon in the atmosphere plausibly hugely affects the total soil nematode population (see, e.g., Tomasik's writings on climate change and wild animals)[1]. It is not effects like (i) and (ii) that Bernard's post studies, afaict. I don't see why we should extrapolate from his post that there has to be something that makes us mistaken about (i) and/or (ii), even if we can't say exactly what.
Again, we might have no clue in which direction, but it still does.
I forgot to comment on your example about climate change. The question is not whether carbon dioxide (CO2) will remain in the atmosphere, but whether emitting 1 kg more today means there will be 1 kg more in e.g. 1 k years.
As an analogy, I think banning caged hens in a country may well imply there will be no caged hens there forever, but I still think the difference between the expected number of caged hens there without and with the ban still decreases over time. Animal welfare corporate campaigns have resulted in fewer hens in cages, and, more longterm, I believe technological development will lead to alternative proteins which decrease the consumption of eggs, or new systems which displace cages. In addition, economic growth means greater willingness to pay for animal welfare.
Likewise, blocking the construction of a farm which produces 10 t of chicken meat per year does not mean the global production of chicken meat will be 10 t/year lower forever. Existing farms can increase their production, and additional new farms can be built to offset the initial drop in production.
The chance of the end goal being achieved via other means can be modelled with an annual discount rate. Any cost-effectiveness analysis estimating finite benefits necessarily assumes the benefits decrease over time. Otherwise, they would be infinite.
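As a toy illustration of that last point (my own arithmetic, assuming a constant annual benefit $b$ and an annual discount rate $r$ capturing the chance the same end goal is achieved via other means):
$$\text{total benefits} \;=\; \sum_{t=0}^{\infty} b\,(1-r)^t \;=\; \frac{b}{r},$$
which is finite for any $r > 0$ and diverges as $r \to 0$, i.e. if the benefits never decrease.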
Interesting. Thanks for taking the time to explain all that :)
Thanks for the interesting questions too, Jim!
Cluelessness. The world is far too complex and we are far too dumb to pretend that we can predict whether what we do now is going to be net-positive in the very long run.
The argument for strong longtermism as I understand it seems structurally identical to Pascal's mugging. (There is some small chance that the value of the future is enormous, therefore reducing extinction risk today has enormous value).
It frustrates me that I can't explain exactly what is wrong with this argument, but I am sceptical of it for the same reason that I wouldn't hand over my wallet to a Pascal mugger.
10% ➔ 30% agree. I believe far-future effects are important, but only very slightly so. When combined with short- and mid-term effects they play an important role, but should not be artificially singled out for emphasis. In other words, the question puts in opposition timeframes which should never be separated.
The future is extremely big, and if we work hard, we are in an unusually high leverage position to influence it.
(10% disagree) I do not think there are any robust interventions for current humans who wish to improve "impartial welfare" in the future, but I'd probably find such interventions dominant if I believed there were any.
I don't want to say I'm "not a longtermist" since I'm never sure whether action-guidance has to be contained within one's theory of morality, but given the framing of the question is about what to do, I have to put myself in disagree, as I'm quite gung-ho on extreme neartermism (seeing a short path to impact as a sort of multiplier effect, though I may be wrong).
Time, like distance, has no relevance for moral judgement.
Our knowledge about the future is so weak.
-0.5 (75% present and near future, 25% far future).
The future is infinite and poorly predictable, and if one cares only about it, then what's even the point of all of this? I prefer to appreciate subjective experience.
There's lots of small things we can do today that will greatly impact the future but won't come to fruition for years. We should absolutely do them.
However, it's important to balance future optimization with current optimization: are there opportunities to move the needle today, right now? We shouldn't completely discount those suffering today in favor of those who will benefit in the future.
The uncertainty of the future discounts the importance of possible outcomes.
Far-future effects are so hypothetical that they cannot be used as a determinant for our actions. The counterfactual branching factor is so high that your present action can only apply to a minuscule number of futures. Our time and effort should have an exponential decay: focus most of the effort on the present, weighting each year y by (a/b)^y, where a/b < 1.
Difficult to predict far-future effects, so while these have some influence, it is natural to focus mostly on more near-term impacts and actions.
I think we need to take a mixed approach, as it is extremely hard to predict what will happen in the far future. Improving the world today will also generally make the future better.
In general we have no idea what the far-future effects of our actions will be and this fact weighs heavily against this proposition.
"Far future" is an extremely fuzzy concept. We won't get very far anyway if we don't solve the near-term problems, like preventing ASI. For me, longtermism really means keeping the space of possible decisions for the next generation as large as possible, not determining the future in a way we today think is "good". So it's not much different from sustainability.
From a totalist population ethics viewpoint, ensuring that there are humans in the future is critical because there can be so many more of them than there are humans currently alive, which can't happen if extinction occurs or if we lock in bad states like AI authoritarianism.
With certain caveats, the reason longtermism is such an appealing concept to me is how neglected it is relative to other philosophies and positions.
I think it's probably reasonable to discount the future to some extent (especially when we're uncertain there's even going to be a far future), but I also see potential for unimaginable future good, and think there's really value in getting to that point sooner rather than later
The uncertainty of what will be best for the longterm future means we should focus on what we know is best in the nearer future.
Assuming this captures x-risk considerations, the scale of the future is significantly bigger than present day.
While this is ostensibly called "strong longtermism", the precision of saying "near-best" instead of "best" makes (i) hard to deny (the opposite statement would be "one ought to choose an option that is significantly far from the best for the far future"). The best cruxes against (ii) would be epistemic ones, i.e. whether benefits rapidly diminish or wash out over time.
I don't think that's the opposite of (i), though.
Imagine a strong fruit loopist, who believes there's an imperative to maximise total fruit loops.
If you are not a strong fruit loopist, there's no need to minimise total fruit loops; you can just have preferences that don't have much of an opinion on how many fruit loops should exist (i.e. everyone's position).