Oh ok so you're saying that:
1. In ~100 years, some sort of (almost) unavoidable population equilibrium will be reached no matter how many human lives we (don't) save today. (Ofc, nothing very special about exactly 2125, as you say, and it's not that binary, but you get the point.)
Saving human lives today changes the human population curve between 2025 and ~2125 (multiple possible paths represented by dotted curves). But by ~2125, our impact (whichever direction it went in) is canceled out.
2. Even if 1 is a bit false (such that what the above black curve looks like after 2125 actually depends on how many human lives we save today), this won't translate into a difference in terms of agricultural land use (and hence in terms of soil nematode populations).
Almost no matter how many humans there are after ~2125, total agricultural land remains roughly the same.
Is that a fair summary of your view? If so, what do you make of, say, the climate change implications of changing the total number of humans in the next 100 years? Climate change seems substantially affected by total human population (and therefore by how many human lives we save today). And the total number of soil nematodes seems substantially affected by climate change (e.g., it could make a significant difference to whether there will ever be soil nematodes in current dead zones close to the poles), including long after ~2125 (nothing like your points #1 and #2 above applies here; climate-change effects last). Given the above, plus the simple fact that the next 100 years constitute a tiny chunk of time in the scheme of things, the impact we have on soil nematodes counterfactually affected by climate change between ~2125 and the end of time seems to, at least plausibly, dwarf our impact on soil nematodes affected by agricultural land use between now and ~2125.[1] What part of this reasoning goes wrong, exactly, in your view, if any?
We might have no clue about the sign of our impact on the former, such that some would suggest we should ignore it in practice (see, e.g., Clifton 2025; Kollin et al. 2025), but that's a very different thing from assuming this impact is almost certainly negligible relative to short-term impact.
(Nice, thanks for explaining.) And how do you think saving human lives now impacts the soil nematodes that will be born between 100 years from now and the end of time? And how does this not dwarf the impact on soil nematodes that will be born in the next 100 years? What happens in 100 years that reduces to pretty much zero the impact of saving human lives now on soil nematodes?
I think empirical evidence suggests effects after 100 years are negligible.
Curious what you think of the arguments given by Kollin et al. (2025), Greaves (2016), and Mogensen (2021) that the indirect effects of donations to AMF/MAWF swamp the intended direct effects. Is it that you agree, but think the unintended indirect effects that swamp the calculus all play out within 100 years (and the effects beyond that are small enough to be safely neglected)?
But in combination with the principle that we can't understand the downstream effects of our actions in the long term, I don't understand how somebody can be skeptical of cluelessness.
(You might find this discussion helpful.)
How do indirect effects get incorporated into cost-effectiveness calculations, for example, if anyone is doing cost-effectiveness calculations?
Afaict, the dominant approach is "build (explicitly or implicitly) a model factoring in all the cruxy parameters and give your best precise guesses for their values" (see, e.g., Lewis 2021; Greaves & MacAskill 2025, section 7.3; Violet Hour 2022)[1]. For a nice overview (and critique) of this approach, see DiGiovanni (2025a; 2025b). He also explains how some popular approaches that might seem to differ are actually doing the same thing, just implicitly, iirc.
An alternative approach, relevant when we think the above one forces us to give precise best guesses that are too arbitrary, is to specify indeterminate beliefs, e.g., imprecise credences. See DiGiovanni (2025a; 2025c). But this makes expected value maximization (and therefore orthodox cost-effectiveness calculations) impossible, so we need an alternative decision rule, and it's hard to find one that is both seemingly sensible and action-guiding (although see 3 and 4 in my response to your next question).
In any case, one also has to somehow account for the crucial effects one knows one is unaware of.[2] One salient proposal is to incorporate into our models a "catch-all" meant to factor in all the crucial effects we haven't thought of. This seems to push towards preferring the above alternative approach with indeterminacy, since we'll arguably never find a principled way to assign a precise utility to the catch-all. This problem is discussed by DiGiovanni (2025c) and Roussos (2021, slides), iirc.
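To make the contrast concrete, here's a minimal, made-up sketch of the two approaches; none of the numbers or parameter names come from any actual model, they're purely illustrative:

```python
# Toy contrast between the two approaches above. All names and numbers are
# made up for illustration; this is not any organization's actual model.

# Dominant approach: precise best guesses for every cruxy parameter, then a
# single expected-value number.
direct_effect = 100.0           # precise guess for the intended direct effect
indirect_effects_guess = -30.0  # precise best guess for all indirect effects
ev_precise = direct_effect + indirect_effects_guess
print(f"Precise-best-guess EV: {ev_precise}")

# Alternative approach: represent beliefs about indirect effects with an
# interval (imprecise credences) rather than a single arbitrary number.
indirect_effects_range = (-500.0, 400.0)
ev_range = (direct_effect + indirect_effects_range[0],
            direct_effect + indirect_effects_range[1])
# The interval spans both signs, so orthodox EV maximization is silent here.
print(f"EV under imprecise beliefs: {ev_range}")

# A "catch-all" for crucial effects we haven't even thought of is worse off
# still: there's arguably no principled way to give it a precise utility, which
# is part of what pushes towards the indeterminacy-friendly framing.
```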
When do indirect effects get to be treated as irrelevant for cluelessness reasons, and when do they not?
Greaves (2016) differentiates between simple and complex cluelessness. She essentially argues that:
- you can assume random symmetric chaotic long-run effects (e.g., from moving my hand to the left, right now) give rise only to simple cluelessness and hence "cancel out".
- you can't do the same with effects where you have asymmetric reasons: some to believe your action will end up doing good overall, and some to believe the opposite (e.g., the long-term effects of giving to AMF or the Make-A-Wish Foundation).[3]
Now, when we don't know whether a given action is desirable because of complex cluelessness, what do we do? Here are the approaches worth mentioning that I'm aware of:
1. Fool yourself into believing you're not clueless and give an arbitrary best guess. This is very roughly "Option 1" in Clifton (2025).
2. Accept that you don't know and embrace cluelessness nihilism. This is very roughly "Option 2" in Clifton (2025).
3. Find a justifiable way to "bracket out" the effects you are clueless about. This is roughly "Option 3" in Clifton (2025) and is thoroughly discussed in Kollin et al. (2025).
4. Do something similar to the above but with normative views. DiGiovanni might post something very thorough on this soon. But for now, see this quick take of his and the Vinding post he responds to.
In practice, most core EAs either (in decreasing order of how common this is in my experience):
A) do some underdefined (and poorly-justified?) version of 3 where they ignore crucial parameters they think they can't estimate,[4] i.e., somehow treat complex cluelessness as if it were simple cluelessness. Examples of people endorsing such an approach are given in this discussion of mine and DiGiovanni (2025d). See Clifton (2025) on the difference between this approach and his bracketing proposal, though it can sometimes lead to the same behavior in practice.
B) have precise beliefs (whether explicitly or implicitly) about absolutely every crucial parameter (even, e.g., the values aliens hold, our acausal impact in other branches of the multiverse, and the cruxy considerations they are unaware of). Hence, there is just no complex-cluelessness paralysis in their view, so no problem and no need for any of the four above solutions. I don't think the radical version of this approach I present here has been explicitly publicly/formally defended (although it is defo endorsed by many, sometimes unconsciously), but see Lewis (2021), as well as the references and this poll from DiGiovanni, for related defenses.
C) openly endorse doing 1 and don't mind arbitrariness.
Which organizations' theories of change (if any) have explicitly tried to account for indirect effects, or selected approaches they think minimize unintended consequences?
Greaves (2020) talks about how GiveWell has tried to factor in some medium-term indirect effects in their calculations, fwiw.
I don't know about organizations' ToCs beyond that. Hopefully, my response to your first question helps, however. (EDIT to add: If we remain outside the realm of longtermism, there's Michael St. Jules' work on humans' impact on animals that I find particularly relevant. I think AIM and Rethink Priorities have done work intended to account for indirect effects too, but Michael would know much better where to point you!)
(EDIT 2 to add: Within the longtermism realm, there are also people who thought about exotic crucial considerations, in an attempt to do something along the lines of B above, like, e.g., considerations related to aliens---see, e.g., this post of mine, Vinding (2024) and Riché (2025), and references therein---and acausal reasoning---see, e.g., this and that.)
In addition, it's worth noting that some have argued we should focus on what they consider to be robustly good interventions to influence the far future (e.g., capacity-building and avoiding lock-in scenarios), hence aiming to minimize unintended consequences that would swamp the overall assessment of whether the intervention does more good than harm. See DiGiovanni (2025d) (last mention of him, I promise!) for a nice summary (and critique), and references therein for cases in favor of these robustness approaches.
Anyway, cool project! :) And glad this prompted me to write all this. Sorry if it's a lot at once.
Applied to forecasting the long-term (dis)value of human extinction/expansion, specifically, see this piece of mine and references therein.
While this distinction is widely accepted afaict, there is a lot of disagreement about whether a given action gives rise to complex cluelessness (when we consider all its effects) or whether its overall desirability can be non-arbitrarily estimated. Hence all the debates around the epistemic challenge to longtermism. DiGiovanni (2025a; 2025c) offers the best overview of the topic to date imo, although he's not neutral on who's right. :)
For longtermists, these can be considerations that have to do with aliens, acausal reasoning, and unawareness. For neartermists, this can be the entire long-term future.
I aim to contribute to efforts to
1. find alternative action-guidance when standard consequentialism is silent on what we ought to do. In particular, I'm interested in finding something different from (or more specific than) Clifton's "Option 3", DiGiovanni's non-consequentialist altruism proposal, or Vinding's two proposals.
2. reduce short-term animal suffering, or find out how to do so in robust ways, since I suspect most plausible solutions to 1 say it's quite a good thing to do, although maybe not the best (and we might need to do 1 to do 2 better; sometimes, cluelessness bites even if we ignore long-term consequences, e.g., the impact of fishing).
Alternatives to your five attributes in your analogy? I don't think there's any that longtermists have identified that is immune to the motivations for cluelessness. The best contender in people's minds might be "making sure the ship doesn't get destroyed," but I haven't encountered any convincing case for why we shouldn't be clueless about whether that's good (or bad) in the long run.[1]
Then, it's tempting to say "let's try to do research and be less clueless" (predictive power) but even predictive power might turn out bad for all we know (in a complex-cluelessness way, not mere uncertainty).
I've just realized that I find your objections to Clifton's Option 3 much less compelling when applied to something like the following scenario I'm making up:
Four miles away from you, there's a terrorist you want dead. A sniper is locked on his position (accounting for gravity). No one has ever hit a target from this distance. The sniper will overwhelmingly likely miss the shot because of factors (other than gravity) affecting the bullet that you cannot estimate from that far (the different wind layers, the Earth's rotation, etc.). You're tempted to think "well, there's no harm in trying," except that the terrorist is holding a kid you do not want dead, and the way they stand, each covers exactly as much surface area as the other. Say your utility for hitting the target is +1 trillion and -1 trillion for hitting the kid, you are an EV maximizer, and you're the one who has to tell the sniper whether to take the shot or stand down. If you thought the sniper's shot was even a tiny bit more likely to hit the target than the kid, you would tell her to take it. But you don't think that! (Something like the principle of indifference seemingly doesn't work here.) You're (complexly) clueless and therefore indifferent between shooting and not shooting. But that was before you remembered that Emily, the sniper, told you the other day that her shoulder hurts every time she takes a shot. You care about Emily's shoulder pain much less than you care about where the bullet ends up (say utility -1 if she shoots). But well, doesn't that give you a very good Option-3-like reason to tell her to stand down?
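(To spell out the structure, here's a minimal sketch of the calculation; the credence interval is something I'm making up for illustration, and I'm leaving out the most likely outcome where the bullet misses both, since it contributes nothing beyond the shoulder pain either way.)

```python
# Toy sketch of the sniper scenario's decision structure. The credence interval
# below is made up; treat this as an illustration, not a precise model.

U_TARGET = 1e12     # utility of hitting the terrorist
U_KID = -1e12       # utility of hitting the kid
U_SHOULDER = -1.0   # Emily's shoulder pain if she takes the shot

# Imprecise credence that the bullet hits the terrorist rather than the kid,
# given that it hits one of them: "somewhere around a half", nothing more precise.
P_HIT_TARGET = (0.45, 0.55)

def ev_shoot(p_target: float) -> float:
    """EV of telling Emily to shoot, for one precise value of p_target."""
    return p_target * U_TARGET + (1 - p_target) * U_KID + U_SHOULDER

# Across the credence interval, EV(shoot) swings far above and far below
# EV(stand down) = 0, so EV maximization alone can't settle the choice.
print(f"EV(shoot) ranges over ({ev_shoot(P_HIT_TARGET[0])}, {ev_shoot(P_HIT_TARGET[1])})")

# Option-3-style bracketing: set aside the term you're clueless about (where the
# bullet lands) and compare only what you're not clueless about (the shoulder).
print(f"Bracketed comparison: shoot = {U_SHOULDER} vs. stand down = 0 -> stand down")
```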
If I take your objections to Option 3 and replace some of the words to make them apply to my above scenario, I intuitively find them almost crazy. Do you have the same feeling? Is there a relevant difference with my scenario that I'm missing? (Or maybe my strong intuition for why we should tell Emily to stand down is actually not because Option 3 is compelling but because of something else?)
I'm curious if you're hoping to shift people's thinking about strategy in any specific direction here, due to bringing this up?
Not really, at least not with this specific post. I just wanted to learn things by getting people's thoughts on SARP and the temporary setback view. Maybe I also very marginally made people update a bit towards "SARP might be a bigger deal than I thought" and "animal macrostrategy is complex and important", and that seems cool, but this wasn't the goal.
I like your questions. They got me thinking a lot. :)
Nice, thanks. To the extent that, indeed, noise generally washes out our impact over time, my impression is that the effects of increasing human population in the next 100 years on long-term climate change may be a good counterexample to this general tendency.
Not all long-term effects are equal in terms of how significant they are (relative to near-term effects). A ripple on a pond barely lasts, but current science gives us good indications that i) carbon released into the atmosphere lingers for tens of thousands of years, and ii) increased carbon in the atmosphere plausibly hugely affects the total soil nematode population (see, e.g., Tomasik's writings on climate change and wild animals)[1]. It is not effects like (i) and (ii) that Bernard's post studies, afaict. I don't see why we should extrapolate from his post that there has to be something that makes us mistaken about (i) and/or (ii), even if we can't say exactly what.
Again, we might have no clue in which direction, but it still does.