Also on LessWrong (with different essays).
Interesting.
Well, let me literally take Anthony's first objection and replace the words to make it apply to the Emily case:
There are many different ways of carving up the set of “effects” according to the reasoning above, which favor different strategies. For example: I might say that I’m confident that
~~an AMF donation saves lives~~ giving Emily the order to stand down makes her better off, and I'm clueless about its long-term effects overall (of this order, due to cluelessness about which of the terrorist and the child will be shot). Yet I could just as well say I'm confident that there's some nontrivially likely possible world containing an astronomical number of happy lives (thanks to the terrorist being shot and not the kid), which ~~the donation~~ my order makes less likely via potentially ~~increasing x-risk~~ preventing the terrorist (and luckily not the kid) from being shot, and I'm clueless about all the other effects overall. So, at least without an argument that some decomposition of the effects is normatively privileged over others, Option 3 won't give us much action guidance.
When I wrote the comment you responded to, it just felt to me like only the former decomposition was warranted in this case. But, since then, I'm not sure anymore. It surely feels more "natural", but that's not an argument...
Is your intuition strongly that Emily should stand down for option 3 reasons, or merely that Emily should stand down?
The former, although I might ofc be lying to myself.
Nice, thanks. To the extent that, indeed, noise generally washes out our impact over time, my impression is that the effects of increasing the human population over the next 100 years on long-term climate change may be a good counterexample to this general tendency.
Not all long-term effects are equal in terms of how significant they are (relative to near-term effects). A ripple on a pond barely lasts, but current science gives us good indications that i) carbon released into the atmosphere lingers for tens of thousands of years, and ii) increased carbon in the atmosphere plausibly hugely affects the total soil nematode population (see, e.g., Tomasik's writings on climate change and wild animals)[1]. Afaict, it is not effects like (i) and (ii) that Bernard's post studies. I don't see why we should extrapolate from his post that there must be something that makes us mistaken about (i) and/or (ii), even if we can't say exactly what.
Again, we might have no clue in which direction, but the effect is still there.
Oh ok so you're saying that:
1. In ~100 years, some sort of (almost) unavoidable population equilibrium will be reached no matter how many human lives we (don't) save today. (Ofc, nothing very special about exactly 2125, as you say, and it's not that binary, but you get the point.)
Saving human lives today changes the human population curve between 2025 and ~2125 (multiple possible paths represented by dotted curves). But by ~2125, our impact (no matter in which direction it was) is canceled out.
2. Even if 1 is a bit false (such that what the above black curve looks like after 2125 actually depends on how many human lives we save today), this won't translate into a difference in terms of agricultural land use (and hence in terms of soil nematode populations).
Almost no matter how many humans there are after ~2125, total agricultural land remains roughly the same.
Is that a fair summary of your view? If yes, what do you make of, say, the climate-change implications of changing the total number of humans in the next 100 years? Climate change seems substantially affected by total human population (and therefore by how many human lives we save today). And the total number of soil nematodes seems substantially affected by climate change (e.g., it could make a significant difference in whether there will ever be soil nematodes in current dead zones close to the poles), including long after ~2125 (nothing similar to your above points #1 and #2 applies here; climate-change effects last). Given the above + the simple fact that the next 100 years constitute a tiny chunk of time in the scheme of things, the impact we have on soil nematodes counterfactually affected by climate change between ~2125 and the end of time seems to, at least plausibly, dwarf our impact on soil nematodes affected by agricultural land use between now and ~2125.[1] What part of this reasoning goes wrong, exactly, in your view, if any?
We might have no clue about the sign of our impact on the former, such that some would suggest we should ignore it in practice (see, e.g., Clifton 2025; Kollin et al. 2025), but that's a very different thing from assuming this impact is almost certainly negligible relative to short-term impact.
(Nice, thanks for explaining.) And how do you think saving human lives now impacts the soil nematodes that will be born between 100 years from now and the end of time? And how does this not dwarf the impact on soil nematodes that will be born in the next 100 years? What happens in 100 years that reduces the impact of saving human lives now on soil nematodes to pretty much zero?
I think empirical evidence suggests effects after 100 years are negligible.
Curious what you think of the arguments given by Kollin et al. (2025), Greaves (2016), and Mogensen (2021) that the indirect effects of donations to AMF/MAWF swamp the intended direct effects. Is it that you agree, but you think the unintended indirect effects that swamp the calculus all play out within 100 years (and the effects beyond that are small enough to be safely neglected)?
But in combination with the principle that we can't understand the downstream effects of our actions in the long term, I don't understand how somebody can be skeptical of cluelessness.
(You might find this discussion helpful.)
How do indirect effects get incorporated into cost-effectiveness calculations, for example, if anyone is doing cost-effectiveness calculations?
Afaict, the dominant approach is "build (explicitly or implicitly) a model factoring in all the cruxy parameters and give your precise best guess for each of their values" (see, e.g., Lewis 2021; Greaves & MacAskill 2025, section 7.3; Violet Hour 2022)[1]. For a nice overview (and critique) of this approach, see DiGiovanni (2025a; 2025b). He also explains how some popular approaches that might seem to differ are actually doing the same thing, just implicitly, iirc.
An alternative approach, relevant when we think the above one forces us to give precise best guesses that are too arbitrary, is to specify indeterminate beliefs, e.g., imprecise credences. See DiGiovanni (2025a; 2025c). But then this makes expected value maximization (and therefore orthodox cost-effectiveness calculations) impossible, and we need an alternative decision rule, and it's hard to find one that is both seemingly sensible and action-guiding (although see 3 and 4 in my response to your next question).
In any case, one also has to somehow account for the crucial effects one knows one is unaware of.[2] One salient proposal is to incorporate into our models a "catch-all" meant to factor in all the crucial effects we haven't thought of. This seems to push towards preferring the above alternative approach with indeterminacy, since we'll arguably never find a principled way to assign a precise utility to the catch-all. This problem is discussed by DiGiovanni (2025c) and Roussos (2021, slides), iirc.
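To make the contrast between these two approaches concrete, here's a minimal toy sketch (entirely my own illustration; the parameter names and numbers are made up, and real models are of course much richer). The precise-best-guess approach collapses everything, catch-all included, into a single number; propagating interval-valued (imprecise) credences through the same model instead yields a range of expected values, which may straddle zero:

```python
# Toy illustration only: made-up parameter names and numbers, not anyone's actual model.

def total_value(direct, indirect, catch_all):
    """Sum of the modeled effect terms (say, units of good done per $1,000 donated)."""
    return direct + indirect + catch_all

# Dominant approach: a precise best guess for every parameter, including the catch-all.
precise_estimate = total_value(direct=3.0, indirect=-1.0, catch_all=0.5)
print(precise_estimate)  # -> 2.5, a single number you can rank against other options

# Alternative approach: imprecise credences, crudely represented as (low, high) intervals.
intervals = {
    "direct": (2.0, 4.0),
    "indirect": (-5.0, 2.0),
    "catch_all": (-10.0, 10.0),  # effects we're unaware of are especially hard to pin down
}
low = sum(lo for lo, _ in intervals.values())
high = sum(hi for _, hi in intervals.values())
print((low, high))  # -> (-13.0, 16.0): a range of expected values straddling zero

# Once the range straddles zero, expected value maximization no longer singles out an
# action, which is why an alternative decision rule is needed (cf. 3 and 4 below).
```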
When do indirect effects get to be treated as irrelevant for cluelessness reasons, and when do they not?
Greaves (2016) differentiates between simple and complex cluelessness. She essentially argues that:
- you can assume random symmetric chaotic long-run effects (e.g., from moving my hand to the left, right now) give rise only to simple cluelessness and hence "cancel out" (see the toy sketch right after this list).
- you can't do the same with effects where you have asymmetric reasons to believe your action will end up doing good overall, and reasons to believe the opposite (e.g., long-term effects of giving to AMF or Make A Wish Foundation).[3]
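One toy way to put this distinction in symbols (my own illustration, not Greaves's formalism): write the comparison between two actions $A$ and $B$ as

$$\mathbb{E}[U(A)] - \mathbb{E}[U(B)] = \Delta_{\text{near}} + \sum_i \mathbb{E}[\delta_i],$$

where $\Delta_{\text{near}}$ is the identifiable near-term difference and the $\delta_i$ are the various long-run effect terms. Under simple cluelessness, our evidence about each $\delta_i$ is symmetric around zero, so setting $\mathbb{E}[\delta_i] = 0$ and letting $\Delta_{\text{near}}$ settle the comparison seems fine. Under complex cluelessness, we have asymmetric (but conflicting and hard-to-weigh) reasons bearing on the sign and size of some $\delta_i$, so there's no similarly principled way to zero them out, and they may well swamp $\Delta_{\text{near}}$.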
Now, when we don't know whether a given action is desirable because of complex cluelessness, what do we do? Here are the approaches worth mentioning that I'm aware of:
1. Fool yourself into believing you're not clueless and give an arbitrary best guess. This is very roughly "Option 1" in Clifton (2025).
2. Accept that you don't know and embrace cluelessness nihilism. This is very roughly "Option 2" in Clifton (2025).
3. Find a justifiable way to "bracket out" the effects you are clueless about. This is roughly "Option 3" in Clifton (2025) and is thoroughly discussed in Kollin et al. (2025).
4. Do something similar to the above but with normative views. DiGiovanni might post something very thorough on this soon. But for now, see this quick take of his and the Vinding post he responds to.
In practice, most core EAs do one of the following (in decreasing order of how common this is in my experience):
A) do some underdefined (and poorly-justified?) version of 3 where they ignore crucial parameters they think they can't estimate,[4] i.e., somehow treat complex cluelessness as if it were simple cluelessness. Examples of people endorsing such an approach are given in this discussion of mine and DiGiovanni (2025d). See Clifton (2025) on the difference between this approach and his bracketing proposal, though it can sometimes lead to the same behavior in practice.
B) have precise beliefs (whether explicitly or implicitly) about absolutely every crucial parameter (even, e.g., the values aliens hold, our acausal impact in other branches of the multiverse, and the cruxy considerations they are unaware of). Hence, there is just no complex-cluelessness paralysis in their view, so no problem and no need for any of the four above solutions. I don't think the radical version of this approach that I present here has been explicitly publicly/formally defended (although it is defo endorsed by many, sometimes unconsciously), but see Lewis (2021), as well as the refs and this poll from DiGiovanni, for related defenses.
C) openly endorse doing 1 and don't mind arbitrariness.
Which organizations' theories of change (if any) have explicitly tried to account for indirect effects, or selected approaches they think minimize unintended consequences?
Greaves (2020) talks about how GiveWell has tried to factor in some medium-term indirect effects in their calculations, fwiw.
I don't know about organizations' ToCs beyond that. Hopefully, my response to your first question helps, however. (EDIT to add: If we remain outside the realm of longtermism, there's Michael St. Jules' work on humans' impact on animals that I find particularly relevant. I think AIM and Rethink Priorities have done work intended to account for indirect effects too, but Michael would know much better where to point you!)
(EDIT 2 to add: Within the longtermism realm, there are also people who thought about exotic crucial considerations, in an attempt to do something along the lines of B above, like, e.g., considerations related to aliens---see, e.g., this post of mine, Vinding (2024) and Riché (2025), and references therein---and acausal reasoning---see, e.g., this and that.)
In addition, it's worth noting that some have argued we should focus on what they consider to be robustly good interventions to influence the far future (e.g., capacity-building and avoiding lock-in scenarios), hence aiming to minimize unintended consequences that would swamp the overall assessment of whether the intervention does more good than harm. See DiGiovanni (2025d) (last mention of him, I promise!) for a nice summary (and critique), and references therein for cases in favor of these robustness approaches.
Anyway, cool project! :) And glad this prompted me to write all this. Sorry if it's a lot at once.
Applied to forecasting the long-term (dis)value of human extinction/expansion, specifically, see this piece of mine and references therein.
While this distinction is broadly agreed upon afaict, there is a lot of disagreement about whether a given action gives rise to complex cluelessness (when we consider all its effects) or whether its overall desirability can be non-arbitrarily estimated. Hence all the debates around the epistemic challenge to longtermism. DiGiovanni (2025a; 2025c) offers the best overview of the topic to date imo, although he's not neutral on who's right. :)
For longtermists, this can be considerations that have to do with aliens and acausal reasoning, and unawareness. For neartermists, this can be the entire long-term future.
I aim to contribute to efforts to
1. find alternative action-guidance when standard consequentialism is silent on what we ought to do. In particular, I'm interested in finding something different from (or more specific than) Clifton's "Option 3", DiGiovanni's non-consequentialist altruism proposal, or Vinding's two proposals.
2. reduce short-term animal suffering or find out how to do so in robust ways, since I suspect most plausible solutions to 1 say it's quite a good thing to do, although maybe not the best (and we might need to do 1 to do 2 better; sometimes, cluelessness bites even if we ignore long-term consequences --- e.g., the impact of fishing).
How does the paper relate to your Reasons-based choice and cluelessness post? Is the latter just a less precise and informal version of the former, or is there some deeper difference I'm missing?