How do indirect effects get incorporated into cost-effectiveness calculations (if anyone is doing cost-effectiveness calculations at all)?
Afaict, the dominant approach is "build (explicitly or implicitly) a model factoring in all the cruxy parameters and give your best precise guesses for their values" (see, e.g., Lewis 2021; Greaves & MacAskill 2025, section 7.3; Violet Hour 2022). For a nice overview (and critique) of this approach, see DiGiovanni (2025a; 2025b). He also explains how some popular approaches that might seem to differ are, iirc, actually doing the same thing implicitly.
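To make that concrete, here's a minimal sketch (my own toy example with made-up numbers, not anyone's actual model) of what such a calculation might look like: pick a precise best guess for every cruxy parameter, indirect effects included, and compute a single expected value per option.

```python
# Toy cost-effectiveness sketch: precise best guesses for every "cruxy"
# parameter, direct and indirect effects alike. All numbers are made up.

def expected_value(direct_benefit, p_indirect_good, indirect_good,
                   p_indirect_bad, indirect_bad):
    """Expected value of an intervention under precise credences."""
    return (direct_benefit
            + p_indirect_good * indirect_good
            + p_indirect_bad * indirect_bad)

# Hypothetical intervention: clear direct benefit, uncertain long-run effects.
ev_intervention = expected_value(
    direct_benefit=100,      # e.g., value of direct effects per $X (made up)
    p_indirect_good=0.30,    # best guess that long-run effects are positive
    indirect_good=+500,
    p_indirect_bad=0.10,     # best guess that long-run effects are negative
    indirect_bad=-400,
)

ev_do_nothing = 0.0
print(ev_intervention)  # 100 + 150 - 40 = 210 > 0, so the model says "do it"
```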
An alternative approach, relevant when we think the above one forces us to give precise best guesses that are too arbitrary, is to specify indeterminate beliefs, e.g., imprecise credences. See DiGiovanni (2025a; 2025c). But this makes expected value maximization (and therefore orthodox cost-effectiveness calculations) impossible, so we need an alternative decision rule, and it's hard to find one that is both seemingly sensible and action-guiding (although see 3 and 4 in my response to your next question).
In any case, one also has to somehow account for the crucial effects one knows one is unaware of. One salient proposal is to incorporate into our models a "catch-all" term meant to capture all the crucial effects we haven't thought of. This seems to push towards preferring the above alternative approach with indeterminacy, since we'll arguably never find a principled way to assign a precise utility to the catch-all. This problem is discussed by DiGiovanni (2025c) and Roussos (2021, slides), iirc.
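Here's a rough illustration of what goes wrong, again under my own toy assumptions: represent the imprecision as ranges over the indirect-effect parameters plus a catch-all whose utility we can't pin down. Each admissible combination gives a different expected value, so instead of one number per option you get an interval, and when an option's interval straddles the alternative's, plain EV maximization no longer delivers a verdict (a maximality-style rule would call the options incomparable).

```python
# Toy illustration of imprecise credences plus a "catch-all" term.
# Instead of one precise guess per parameter, we keep ranges and compute
# the range of expected values they generate. All numbers are made up.

from itertools import product

def ev(direct, p_good, p_bad, catch_all):
    return direct + p_good * 500 + p_bad * (-400) + catch_all

# Ranges standing in for imprecise credences / indeterminate utilities:
p_good_range    = [0.1, 0.5]     # long-run effects turn out positive
p_bad_range     = [0.05, 0.4]    # long-run effects turn out negative
catch_all_range = [-300, 300]    # crucial effects we haven't thought of

# Checking only the range endpoints suffices here because this toy EV
# is linear in each parameter.
evs = [ev(100, pg, pb, c)
       for pg, pb, c in product(p_good_range, p_bad_range, catch_all_range)]

print(min(evs), max(evs))   # roughly -310 to +630: the intervention's EV
                            # interval straddles 0 (the EV of doing nothing),
                            # so EV maximization alone doesn't tell us what to do.
```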
When do indirect effects get to be treated as irrelevant for cluelessness reasons, and when do they not?
Greaves (2016) differentiates between simple and complex cluelessness. She essentially argues that:
- you can assume random symmetric chaotic long-run effects (e.g., from moving my hand to the left, right now) give rise only to simple cluelessness and hence "cancel out".
- you can't do the same with effects where you have asymmetric evidence: some reasons to believe your action will end up doing good overall, and other reasons to believe the opposite (e.g., the long-term effects of giving to AMF or the Make-A-Wish Foundation).
Now, when we don't know whether a given action is desirable because of complex cluelessness, what do we do? Here are the approaches worth mentioning that I'm aware of:
1. Fool yourself into believing you're not clueless and give an arbitrary best guess. This is very roughly "Option 1" in Clifton (2025).
2. Accept that you don't know and embrace cluelessness nihilism. This is very roughly "Option 2" in Clifton (2025).
3. Find a justifiable way to "bracket out" the effects you are clueless about (see the very rough sketch after this list). This is roughly "Option 3" in Clifton (2025) and is thoroughly discussed in Kollin et al. (2025).
4. Do something similar to the above but with normative views. DiGiovanni might post something very thorough on this soon, but for now, see this quick take of his and the Vinding post he responds to.
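To give a very loose sense of what 3 could look like in practice (this is my own simplification, not the formal proposal in Clifton (2025) or Kollin et al. (2025)): compare options only on the effects you have determinate beliefs about, and set the clueless-about effects aside rather than forcing a precise guess for them.

```python
# Very loose sketch of "bracketing" (my simplification, not the formal
# proposal in Clifton 2025 / Kollin et al. 2025). Each option has effects
# we can estimate and effects we're clueless about; bracketing compares
# options only on the former.

options = {
    # "estimable": effects we can put numbers on (made up);
    # "clueless": long-run/indirect effects we set aside rather than guess.
    "intervention_A": {"estimable": 120, "clueless": None},
    "intervention_B": {"estimable": 80,  "clueless": None},
}

def bracketed_best(options):
    """Rank options by the effects we can actually estimate,
    bracketing out the ones we're clueless about."""
    return max(options, key=lambda name: options[name]["estimable"])

print(bracketed_best(options))  # "intervention_A"
```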
In practice, most core EAs do one of the following (in decreasing order of how common this is in my experience):
A) do some underdefined (and poorly justified?) version of 3 where they ignore crucial parameters they think they can't estimate, i.e., somehow treat complex cluelessness as if it were simple cluelessness. Examples of people endorsing such an approach are given in this discussion of mine and in DiGiovanni (2025d). See Clifton (2025) on the difference between this approach and his bracketing proposal, though the two can sometimes lead to the same behavior in practice.
B) have precise beliefs (whether explicitly or implicitly) about absolutely every crucial parameter (even, e.g., the values aliens hold, our acausal impact in other branches of the multiverse, and the cruxy considerations they are unaware of). Hence, there is just no complex-cluelessness paralysis on their view, so no problem and no need for any of the four solutions above. I don't think the radical version of this approach as I present it here has been explicitly defended publicly/formally (although it is defo endorsed by many, sometimes unconsciously), but see Lewis (2021) and its refs, as well as this poll from DiGiovanni, for related defenses.
C) openly endorse doing 1 and don't mind arbitrariness.
Which organizations' theories of change (if any) have explicitly tried to account for indirect effects, or selected approaches they think minimize unintended consequences?
Greaves (2020) talks about how GiveWell has tried to factor in some medium-term indirect effects in their calculations, fwiw.
I don't know about organizations' ToCs beyond that. Hopefully, my response to your first question helps, however. (EDIT to add: If we remain outside the realm of longtermism, there's Michael St. Jules' work on humans' impact on animals, which I find particularly relevant. I think AIM and Rethink Priorities have done work intended to account for indirect effects too, but Michael would know much better where to point you!)
(EDIT 2 to add: Within the longtermism realm, there are also people who have thought about exotic crucial considerations, in an attempt to do something along the lines of B above: e.g., considerations related to aliens (see, e.g., this post of mine, Vinding (2024), and Riché (2025), and references therein) and acausal reasoning (see, e.g., this and that).)
In addition, it's worth noting that some have argued we should focus on what they consider to be robustly good interventions for influencing the far future (e.g., capacity-building and avoiding lock-in scenarios), with the aim of minimizing unintended consequences that would swamp the overall assessment of whether the intervention does more good than harm. See DiGiovanni (2025d) (last mention of him, I promise!) for a nice summary (and critique), and the references therein for cases in favor of these robustness approaches.
Anyway, cool project! :) And glad this prompted me to write all this. Sorry if it's a lot at once.
(This was a really nice "lightning lit review", do consider saving it on something like a personal site instead of leaving it here :) I've seen a fair number of the writings you cited, but (being a lay outsider) hadn't made sense of the "landscape" they constituted so to speak, why and how they're important, etc until I read your answer here.)
Yep, I think this is a crucial point that I worry has still gotten buried a bit in my writings. This post is important background. Basically: You might say "I don't just rely on an inside view world model and EV max'ing under that model, I use outside views / heuristics / 'priors'." But it seems the justification for those other methods bottoms out in "I believe that following these methods will lead to good consequences under uncertainty in some sense" — and then I don't see how these beliefs escape cluelessness.
Wow, was not expecting such a thorough answer, I really appreciate it! I will try to do justice to the existing literature in the talk :)
As a random aside, I hired Anthony to do an internship at WAI in ~2019, so it's very funny to me that he has gone off and done longtermism things that nevertheless have ended up relevant to WAI.