
The newsletter will go out later today, but you can start voting now. If you aren't subscribed to the EA Newsletter, you can do so here.

In honour of the leading essay from the 'Essays on Longtermism' collection, 'The Case for Strong Longtermism', this month's newsletter poll will be on deontic strong longtermism...

Far-future effects are the most important determinant of what we ought to do

Clarifications

The poll's wording is taken from the paper itself, though I have used the shorter version of the deontic strong longtermism thesis. The longer one is: 

Deontic strong longtermism (DSL): In the most important decision situations facing agents today,

(i) One ought to choose an option that is near-best for the far future.

(ii) One ought to choose an option that delivers much larger benefits in the far future than in the near future.

As far as I can tell, the shorter statement is equivalent to the longer one. Should you disagree, please vote on the shorter statement (as that's how your vote will appear to anyone browsing the vote distribution). 

A couple more clarifications:

  • Not all longtermists are deontic strong longtermists. The weaker version of the longtermist thesis, as Will MacAskill phrases it in What We Owe The Future, is 'the view that positively influencing the long-term future is a key moral priority of our time'. I.e., don't look at this poll, think 'longtermism, yup' and vote strong agree. My best guess from reading MacAskill is that he would only be around 10% agree, and he literally wrote the book on longtermism (though he can prove me wrong by voting).
  • 'Most important' needn't mean 'only important', and you can agree with this strongly while still believing that there are some side constraints which would stop you from prioritising the far future in particular decision situations. For some discussion of side constraints, see section 9 of the article.

Comments

Far-future effects are the most important determinant of what we ought to do


I agree it's insanely hard to know what will affect the far future, and how. But I think we should still try, often by using heuristics (one I'm currently fond of is "what kinds of actions seem to put us on a good trajectory, e.g. to be doing well in 100 years?")

I think that in cases where we do have reason to think an action will affect the long run future broadly and positively in expectation (i.e. even if we're uncertain) that's an extremely strong reason -- and usually an overriding one -- to favour it over one that looks worse for the long-run future. I think that's sufficient for agreement with the statement. 

Far-future effects are the most important determinant of what we ought to do

I agree with a bunch of the skepticism in the comments, but (to me) it seems like there are not enough people on the strong longtermist side.

A couple of points responding to some of the comments:

  1. You should put some non-trivial probability on the time of perils/lock-in hypothesis (perhaps in large part because AI might be a big deal): the idea that we're living in a time where the chances of existential risk are particularly high, but that if we get past it the rate of x-risk will go down indefinitely (or at least for a very long while). This is plausible because, as Thorstad points out, increasing uncertainty as time goes on makes the x-risk rate regress to the mean, and the mean is quite plausibly low. If this is true, you don't need to make so many claims about the far future in order to have a massive amount of impact on it.
  2. A lot of people refer to Pascal's mugging or fanaticism here, which I don't usually think is correct. (Unless we reject Pascal's mugging for ambiguity-aversion reasons, which I am uncertain about but probably don't.) The probabilities people usually put on longtermism are not near the kind of bets we should refuse if we're against being fanatical, because we take similarly low-probability bets all the time: having fire extinguishers, wearing seatbelts, maybe most clinical trials. Unless you have a significantly lower probability than that, invoking Pascal's mugging feels a bit overly pessimistic about our ability to affect things like this. Also (and this is a cheeky move), if you just have some non-mugging-level probability in that claim being correct, you probably still get the far future being most important without a mugging.

On the other hand, one point against that I don't think was brought up:

  1. In the XPT, the superforecaster median prediction was that there will only ever exist 500 billion humans (nowhere near as many as, say, the Bostrom or Newberry numbers), which may make the cost and tractability concerns such that the far future is not as important in expectation as, say, affecting very large numbers of shrimp or insects now (to be fair, the 95th-percentile superforecaster was at 100 trillion, so maybe the uncertainty becomes fairly asymmetrical quickly, though).

Point 1 in favour reads very much like "focus on near-future benefits because this will (most likely) bring far-future benefits", which is in practice indistinguishable from just "focus on near-future benefits". Plus the assumption (improving the near future will most likely improve the far future, which I also tend to think) is far from certain, as you acknowledge. With this reasoning, technically, the underlying reason to do X is indeed improving the far future. But the actual effect of X is improving the near future with much higher certainty than improving the far future; everything is aligned. This is not, as far as I understand, the point of the question. Who wouldn't do X? Answering with this scenario in mind doesn't give any information.

Consider the following different scenario: doing X will make the near future worse with much higher certainty than it makes the far future better (assume the same value-generation magnitudes as before; just make the value for the near future negative). Would you then advocate doing X? I think this gives the information the question is asking for.

I would overwhelmingly most likely not do X in that scenario, because I know how dumb we are (I am) and how complex reality is, so the longer the time frame, the less I trust any reasoning (cluelessness). [It is difficult enough to find out whether an action taken has actually been net positive for the immediate future!] Would you? If your answer tends to no, then far-future effects are not the most important determinant of what we ought to do for you.

It depends on the case, but there are definitely cases where I would. 

Also, while you make a good point that these can sometimes converge, I think the priority of concerns is extremely different under short-termism vs longtermism, which I see as the important part of "most important determinant of what we ought to do". (Setting aside mugging and risk aversion/robustness,) some very small or even directional shift could make something hold the vast majority of your moral weight, as opposed to before, where the impact might not have been that big, or would have been outweighed by lack of neglectedness or tractability.

P.S. If one (including myself) failed to do X, given that it would shift priorities but wouldn't affect what one would do in light of short-term damage, I think that would say less about one's actual beliefs and more about one's intuitions of disgust towards means-end reasoning. But this is just a hunch, somewhat based on my own introspection (to be fair, sometimes this comes from moral uncertainty/reputational concerns that should be used in this reasoning, which is to your point).

It depends on the case, but there are definitely cases where I would.

Then it definitely fits with your vote. I just meant that the fact that you (and me) tend to think that making the near-future better also will make the far-future better shouldn't influence the answer to this question.

We just disagree on how confident we are on our assessment of how our actions will affect the far-future. And probably this is because of our age ;-)

Vasco Grilo🔸
80% disagree

Far-future effects are the most important determinant of what we ought to do

Reducing the nearterm risk of human extinction is not astronomically cost-effective, and I think empirical evidence suggests effects after 100 years are negligible.

I think empirical evidence suggests effects after 100 years are negligible.

Curious what you think of the arguments given by Kollin et al. (2025), Greaves (2016), and Mogensen (2021) that the indirect effects of donations to AMF/MAWF swamp the intended direct effects. Is it that you agree, but you think the unintended indirect effects that swamp the calculus all play out within 100 years (and the effects beyond that are small enough to be safely neglected)?

Thanks for the question, Jim. Yes.

I believe effects on soil animals are much larger than those on target beneficiaries. I am confident the exponent of the number of neurons [described here] is the parameter which affects the ratio between the effects on soil animals and target beneficiaries the most by far, and effects on soil animals dominate at least for values of the exponent up to 1, which are the ones I consider plausible. I get the following increase in the welfare of soil ants, termites, springtails, mites, and nematodes as a fraction of the increase in the welfare of the target beneficiaries. For exponents of the number of neurons of 0.19, 0.5, and 1:

  • For cage-free corporate campaigns, 77.8 k, 1.15 k, and 1.48.
  • For buying beef, 4.92 billion, 32.4 M, and 12.1 k.
  • For broiler welfare corporate campaigns, 1.22 M, 18.0 k, and 23.3.
  • For GiveWell’s top charities, 263 M, 610 k, and 41.5.
  • For HIPF, 206 M, 477 k, and 32.5.

(Nice, thanks for explaining.) And how do you think saving human lives now impacts the soil nematodes that will be born between 100 years from now and until the end of time? And how does this not dwarf the impact on soil nematodes that will be born in the next 100 years? What happens in 100 years that reduces to pretty much zero the impact of saving human lives now on soil nematodes?

The decrease in soil-animal-years is proportional to the increase in agricultural-land-years, which is the increase in human population times the agricultural land per capita. I think both of these factors decrease over time, and therefore so does the decrease in soil-animal-years.

The agricultural land per capita in low income countries (LICs) has been decreasing.

Figuring out the increase in human population across time is tricky. The people whose lives were extended will tend to have more children as a result, but decreasing mortality also decreases fertility. From the paper The Impact of Life-Saving Interventions on Fertility by David Roodman:

[...] In places where lifetime births/woman has been converging to 2 or lower, saving one child’s life should lead parents to avert a birth they would otherwise have. The impact of mortality drops on fertility will be nearly 1:1, so population growth will hardly change. In the increasingly exceptional locales where couples appear not to limit fertility much, such as Niger and Mali, the impact of saving a life on total births will be smaller, and may come about mainly through the biological channel of lactational amenorrhea. Here, mortality-drop-fertility-drop ratios of 1:0.5 and 1:0.33 appear more plausible. But in the long-term, it would be surprising if these few countries do not join the rest of the world in the transition to lower and more intentionally controlled fertility.

Nothing special happens in 100 years. This is just a rough guess for when the future impact becomes smaller than 10% of the past impact.

Oh ok so you're saying that:

1. In ~100 years, some sort of (almost) unavoidable population equilibrium will be reached no matter how many human lives we (don't) save today. (Of course, nothing very special about exactly 2125, as you say, and it's not that binary, but you get the point.)

Saving human lives today changes the human population curve between 2025 and ~2125 (multiple possible paths represented by dotted curves). But in ~2125, our impact (no matter in which direction it was) is cancelled out.

2. Even if 1 is a bit false (such that what the above black curve looks like after 2125 actually depends on how many human lives we save today), this won't translate into a difference in terms of agricultural land use (and hence in terms of soil nematode populations).

Almost no matter how many humans there are after ~2125, total agricultural land remains roughly the same.

Is that a fair summary of your view? If yes, what do you make of, say, the climate change implications of changing the total number of humans in the next 100 years? Climate change seems substantially affected by total human population (and therefore by how many human lives we save today). And the total number of soil nematodes seems substantially affected by climate change (e.g., could make a significant difference in whether there will ever be soil nematodes in current dead zones close to the poles), including long after ~2125 (nothing similar to your above points #1 and #2 applies here; climate-change effects last). Given the above + the simple fact that the next 100 years constitute a tiny chunk of time in the scheme of things, the impact we have on soil nematodes counterfactually affected by climate change between ~2125 and the end of time seems to, at least plausibly, dwarf our impact on soil nematodes affected by agricultural land use between now and ~2125.[1] What part of this reasoning goes wrong, exactly, in your view, if any?

  1. ^

    We might have no clue about the sign of our impact on the former, such that some would suggest we should ignore it in practice (see, e.g., Clifton 2025; Kollin et al. 2025), but that is a very different thing from assuming this impact is almost certainly negligible relative to short-term impact.

Thanks, Jim.

Yes, that is a fair summary, with the caveat that I do not think the global population or agricultural land will stabilise after a certain date. I just believe they will be roughly the same in the long term with or without the intervention.

There is nothing in particular which is wrong about what you said. However, evidence from randomized controlled trials (RCTs), which are the best way to empirically assess causal effects, still shows these decrease over time.

Figure 4: Posterior expected value from forecast signal with expected value 1 and linearly increasing noise over time, 1,000,000 years, log-log scale

[...]

Figure 6: Posterior expected value from forecast signal with expected value 1 and non-linearly increasing noise over time, 1,000,000 years, log-log scale

[...]

Table 4: How long it takes until posterior expected value is some fraction of signal expected value (years until posterior expected value is x% of signal)

| x% | Central, fixed effects | Upper bound, fixed effects | Central, no fixed effects | Upper bound, no fixed effects | Increasing rate of increase | Decreasing rate of increase |
|---|---|---|---|---|---|---|
| 10% | 561 | 158 | 187 | 84 | 68 | 13,276 |
| 1% | 6,168 | 1,735 | 2,047 | 923 | 337 | 484,318 |
| 0.1% | 62,233 | 17,507 | 20,651 | 9,307 | 1,571 | 15.5 million |
| 0.01% | 622,886 | 175,218 | 206,690 | 93,144 | 7,294 | 491 million |
We can see that under the preferred central fixed effects estimate, signals of the value produced with a horizon of 561 years produce a posterior expected value that is 10% of the expected value of the signal. Every order of magnitude increase in forecast horizon after that results in a posterior expected value roughly an order of magnitude smaller. 

For the possibilities considered in David Bernard's post, 90% of the effects materialise in 68 to 13.3 k years. I think the timelines above are too long because David assumed "the variance of the prior was the same for each time horizon whereas the variance of the signal increases with time horizon for simplicity". Without any information, I would guess my actions can have a much greater effect in 10 years than in 10 M years. So I would assume the variance of the prior decreases over time, in which case the signal would be more heavily discounted than in David's analysis, and therefore the time until 90% of the effects materialise would be shorter.
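The discounting mechanism being discussed can be sketched with a toy normal-normal Bayesian update (my own illustration, not David Bernard's actual model; the prior variance and the noise growth rate below are made-up numbers):

```python
# Toy sketch of why a noisy long-horizon forecast gets discounted under
# Bayesian updating. Assumptions (mine, for illustration): a normal prior
# centred at 0 on the true effect, and a signal of expected value 1 whose
# noise variance grows linearly with the forecast horizon.

def posterior_mean(signal, prior_var, noise_var):
    # Standard normal-normal update: the signal is shrunk towards the
    # prior mean (0) by the factor prior_var / (prior_var + noise_var).
    return signal * prior_var / (prior_var + noise_var)

prior_var = 1.0
for years in [10, 100, 1_000, 10_000]:
    noise_var = 0.1 * years  # hypothetical: noise grows linearly with horizon
    print(years, round(posterior_mean(1.0, prior_var, noise_var), 4))
```

As the horizon grows, the posterior expected value falls roughly in proportion to the noise variance, which is the qualitative pattern in the table above; making the prior variance itself shrink with horizon (as Vasco suggests) would only speed up the decay.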

Nice, thanks. To the extent that, indeed, noise generally washes out our impact over time, my impression is that the effects of increasing the human population in the next 100 years on long-term climate change may be a good counterexample to this general tendency.

Not all long-term effects are equal in terms of how significant they are (relative to near-term effects). A ripple on a pond barely lasts, but current science gives us good indications that (i) carbon released into the atmosphere lingers for tens of thousands of years, and (ii) increased carbon in the atmosphere plausibly hugely affects the total soil nematode population (see, e.g., Tomasik's writings on climate change and wild animals)[1]. As far as I can tell, it is not effects like (i) and (ii) that Bernard's post studies. I don't see why we should extrapolate from his post that there has to be something that makes us mistaken about (i) and/or (ii), even if we can't say exactly what.

  1. ^

    Again, we might have no clue in which direction, but it still does.

I forgot to comment on your example about climate change. The question is not whether carbon dioxide (CO2) will remain in the atmosphere, but whether emitting 1 kg more today means there will be 1 kg more in e.g. 1 k years.

As an analogy, I think banning caged hens in a country may well imply there will be no caged hens there forever, but I still think the difference between the expected number of caged hens there without and with the ban still decreases over time. Animal welfare corporate campaigns have resulted in fewer hens in cages, and, more longterm, I believe technological development will lead to alternative proteins which decrease the consumption of eggs, or new systems which displace cages. In addition, economic growth means greater willingness to pay for animal welfare.

Likewise, blocking the construction of a farm which produces 10 t of chicken meat per year does not mean the global production of chicken meat will be 10 t/year lower forever. Existing farms can increase their production, and additional new farms can be built to offset the initial drop in production.

The chance of the end goal being achieved via other means can be modelled with an annual discount rate. Any cost-effectiveness analysis estimating finite benefits necessarily assumes the benefits decrease over time. Otherwise, they would be infinite.

Interesting. Thanks for taking the time to explain all that :)

Thanks for the interesting questions too, Jim!

Far-future effects are the most important determinant of what we ought to do

Cluelessness. The world is far too complex and we are far too dumb to pretend that we can predict whether what we do now is going to be net positive in the very long run.

Far-future effects are the most important determinant of what we ought to do

Weakly agree (at least with the caveat that I believe in some sort of deontic constraints on utility maximising). I think it is unclear that we can influence the far future in a predictable way, but slightly more likely than not that we can, and I think the expected number of people and other sentient beings in the far future is likely very, very large, as Greaves and MacAskill argue.

Far-future effects are the most important determinant of what we ought to do

 

The argument for strong longtermism as I understand it seems structurally identical to Pascal's mugging. (There is some small chance that the value of the future is enormous, therefore reducing extinction risk today has enormous value).

It frustrates me that I can't explain exactly what is wrong with this argument, but I am sceptical of it for the same reason that I wouldn't hand over my wallet to a Pascal mugger.

Francis Wade
10% ➔ 30% agree

I believe far-future effects are important, but only very slightly so. When combined with short- and mid-term effects they play an important role, but they should not be artificially singled out for emphasis. In other words, the question puts in opposition a timeline which should never be separated.

Jordan Arel
100% agree

Far-future effects are the most important determinant of what we ought to do

The future is extremely big, and if we work hard, we are in an unusually high leverage position to influence it. 

JoA🔸
20% agree

(10% disagree) I do not think there are any robust interventions for current humans who wish to improve "impartial welfare" in the future, but I'd find these interventions probably dominant if I believed there were any. 

I don't want to say I'm "not a longtermist" since I'm never sure whether action-guidance has to be contained within one's theory of morality, but given the framing of the question is about what to do, I have to put myself in disagree, as I'm quite gung-ho on extreme neartermism (seeing a short path to impact as a sort of multiplier effect, though I may be wrong).

Ray Raven
50% disagree

We can't determine the far future. The farther ahead we try to predict, the more error we will face. It's a lesson we learnt from the past: no one from a decade or two ago could have imagined that information technology would change as much as it did. There are thousands of Wall Street quants whose job is to predict short-term stock prices using thousands of computers, and yet their returns are little better than the average investor's.

 

Far-future effects are the most important determinant of what we ought to do

Time, like distance, has no relevance for moral judgement. 

RobertDaoust
80% disagree

Our knowledge about the future is so weak.

-0.5 (75% present and near future, 25% far future).

The future is infinite and poorly predictable, and if one cares only about it, then what's even the point of all of this? I prefer to appreciate subjective experience.

jcderose
40% agree

Far-future effects are the most important determinant of what we ought to do

There are lots of small things we can do today that will greatly impact the future but won't come to fruition for years. We should absolutely do them.

However, it's important to balance future optimization with current optimization: are there opportunities to move the needle today, right now? We shouldn't completely discount those suffering today in favor of those who will benefit in the future.

QuantumCoop
100% disagree

Far-future effects are the most important determinant of what we ought to do

The uncertainty of the future discounts the importance of possible outcomes.

vandermude
90% disagree

Far-future effects are the most important determinant of what we ought to do

Far-future effects are so hypothetical that they cannot be used as a determinant for our actions. The counterfactual branching factor is so high that your present action can only apply to a minuscule number of futures. Our time and effort should follow an exponential decay: focus most of the effort on the present, weighting each year y by (a/b)^y, where a/b < 1.
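As a toy illustration of the weighting scheme this commenter proposes (the ratio 0.9 standing in for a/b is a made-up value, not theirs):

```python
# Illustrative sketch of geometrically decaying effort weights (a/b)**y
# with a/b < 1. The ratio 0.9 is a hypothetical example value.
ratio = 0.9

# Relative effort assigned to years 0..4: the present year gets the most.
weights = [ratio ** y for y in range(5)]

# The total weight over all future years is finite: the geometric series
# sums to 1 / (1 - ratio), i.e. 10 here, so the present holds a fixed,
# dominant share rather than the far future dominating the calculus.
total = sum(ratio ** y for y in range(1_000))
```

The finite total is the substantive point: under any such decay, far-future years can matter somewhat without ever outweighing the present.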

cfranco
50% agree

Far-future effects are the most important determinant of what we ought to do

Difficult to predict far-future effects, so while these have some influence, it is natural to focus mostly on more near-term impacts and actions.

Far-future effects are the most important determinant of what we ought to do

I think we need to take a mixed approach, as it is extremely hard to predict what will happen in the far future. Improving the world today will also generally make the future better.

Rick Baker
80% disagree

Far-future effects are the most important determinant of what we ought to do

In general we have no idea what the far-future effects of our actions will be and this fact weighs heavily against this proposition.

Karl von Wendt
90% disagree

Far-future effects are the most important determinant of what we ought to do

"Far future" is an extremely fuzzy concept. We won't get very far anyway if we don't solve the near-term problems, like preventing ASI. For me, longtermism really means keeping the space of possible decisions for the next generation as large as possible, not determining the future in a way we today think is "good". So it's not much different from sustainability.

Kieren
60% agree

Far-future effects are the most important determinant of what we ought to do

From a totalist population ethics viewpoint, ensuring that there are humans in the future is critical because there can be so many more of them than there are humans currently alive. That can't happen if extinction occurs, or if we lock in bad states like AI authoritarianism.

James Watson
40% agree

With certain caveats, the reason longtermism is such an appealing concept to me is how neglected it is relative to other philosophies and positions.

mickofemsworth
100% disagree

The far future is unpredictable, certainly less predictable than we might assume.

Far-future effects are the most important determinant of what we ought to do

I think it's probably reasonable to discount the future to some extent (especially when we're uncertain there's even going to be a far future), but I also see potential for unimaginable future good, and think there's real value in getting to that point sooner rather than later.

Elina Christian
80% disagree

Far-future effects are the most important determinant of what we ought to do

The uncertainty of what will be best for the long-term future means we should focus on what we know is best in the nearer future.

MichaelDello
100% agree

Far-future effects are the most important determinant of what we ought to do

Assuming this captures x-risk considerations, the scale of the future is significantly bigger than that of the present day.

While this is ostensibly called "strong longtermism", the precision of saying "near-best" instead of "best" makes (i) hard to deny (the opposite statement would be "one ought to choose an option that is significantly far from the best for the far future"). The best cruxes against (ii) would be epistemic ones, i.e., whether benefits rapidly diminish or wash out over time.

I don’t think the opposite of (i) is true.

Imagine a strong fruit loopist, who believes there's an imperative to maximise total fruit loops.

If you are not a strong fruit loopist, there's no need to minimise total fruit loops; you can just have preferences that don't have much of an opinion on how many fruit loops should exist (i.e., everyone's position).
