
I'm presenting a talk at EAG NYC next month on the topic of indirect effects. Not to spoil the talk too much, but the broad theme is that wild animal welfare and longtermism are in similar epistemic positions with regard to (different types of) indirect effects, and that it could be instructive to compare how the two different communities approach uncertainty about these effects. 

By "indirect effects" I mean: the morally-relevant effects that your act has on the world beyond its intended consequences. You might have also seen these called cascade effects, network effects, or Nth order effects. For example, in wild animal welfare we might contrast the intended effect of food provisioning (improved welfare for animals fed who are now less hungry) with the indirect effects (anything from crowding at food sites leading to aggressive interactions and increased disease exchange to complex and hard-to-predict effects of a potentially increasing population size). 

To try to construct a longtermist example, noting that this is not my area of expertise, one might compare the direct effects of passing AI safety regulation (e.g., slowing the development of novel technologies and decreasing the likelihood that a dictatorship uses AI to lock in its regime for centuries) with some potential indirect effects (e.g., lengthening the timeline to AI solving some major human problem, like finding a new treatment for a disease). 

Since my experience is almost entirely in the wild animal welfare context, I would like to crowdsource some examples illustrating different ways folks working on AI or GCRs think about indirect effects, or how theorists of longtermism have suggested uncertainty about these effects be treated. Examples of resources I'd be interested in are posts/websites/papers addressing anything like: 

  • How do indirect effects get incorporated into cost-effectiveness calculations (if anyone is doing cost-effectiveness calculations)?
  • When do indirect effects get to be treated as irrelevant for cluelessness reasons, and when do they not?
  • Which organizations' theories of change (if any) have explicitly tried to account for indirect effects, or selected approaches they think minimize unintended consequences? 

I'm not totally unaware of the space; I have discussed this topic with friends who work on AI and GCRs -- I just want to ensure I'm not missing any really interesting work on the topic from outside my network. 

Note: This question is focused on sourcing examples of how these ideas are handled in the longtermist community; I won't engage with comments on the similarity or otherwise of the two categories of indirect effects for now :)

How do indirect effects get incorporated into cost-effectiveness calculations (if anyone is doing cost-effectiveness calculations)?

Afaict, the dominant approach is "build (explicitly or implicitly) a model factoring in all the cruxy parameters and give your best precise guess for each of their values" (see, e.g., Lewis 2021; Greaves & MacAskill 2025, section 7.3; Violet Hour 2022)[1]. For a nice overview (and critique) of this approach, see DiGiovanni (2025a; 2025b). He also explains how some popular approaches that might seem to differ are actually doing the same thing, just implicitly, iirc.
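To make that concrete, here's a toy sketch of mine (not from any of the papers cited; the parameter names and numbers are entirely made up) of what such a model ends up looking like once you give a single precise best guess for every cruxy parameter, indirect effects included:

```python
# Toy "precise best guess" cost-effectiveness model. Every cruxy parameter
# gets a single point estimate; the verdict is whatever expected value
# those guesses imply. All parameter names and numbers are made up.

best_guesses = {
    "p_intervention_works": 0.4,    # point estimate, not a range
    "direct_benefit": 100.0,        # welfare units if it works
    "p_bad_indirect_effect": 0.1,
    "indirect_harm": -300.0,        # welfare units if the indirect effect bites
}

expected_value = (
    best_guesses["p_intervention_works"] * best_guesses["direct_benefit"]
    + best_guesses["p_bad_indirect_effect"] * best_guesses["indirect_harm"]
)
print(expected_value)  # 40 - 30 = 10, so the intervention "looks good"
```

The point is just that the verdict falls straight out of the point estimates, however arbitrary they were.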

An alternative approach, relevant when we think the one above forces us to give precise best guesses that are too arbitrary, is to specify indeterminate beliefs, e.g., imprecise credences. See DiGiovanni (2025a; 2025c). But this makes expected value maximization (and therefore orthodox cost-effectiveness calculations) impossible, so we need an alternative decision rule, and it's hard to find one that is both seemingly sensible and action-guiding (although see 3 and 4 in my response to your next question).
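Continuing the same toy sketch (again mine, with illustrative numbers only): with imprecise credences the point estimates become intervals, the expected value becomes a range, and if that range straddles zero, EV maximization alone no longer tells you what to do.

```python
# Same toy model, but with imprecise credences: intervals instead of point
# estimates. Propagating the endpoints gives a range of expected values
# rather than a single number. Numbers are illustrative only.

def expected_value(p_works, benefit, p_harm, harm):
    return p_works * benefit + p_harm * harm

p_works_interval = (0.2, 0.6)
p_harm_interval = (0.05, 0.3)
benefit, harm = 100.0, -300.0

ev_low = expected_value(p_works_interval[0], benefit, p_harm_interval[1], harm)
ev_high = expected_value(p_works_interval[1], benefit, p_harm_interval[0], harm)
print(ev_low, ev_high)  # -70.0 45.0: the sign is indeterminate
```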

In any case, one also has to somehow account for the crucial effects one knows one is unaware of.[2] One salient proposal is to incorporate into our models a "catch-all" meant to cover all the crucial effects we haven't thought of. This seems to push towards preferring the alternative approach with indeterminacy above, since we'll arguably never find a principled way to assign a precise utility to the catch-all. This problem is discussed by DiGiovanni (2025c) and Roussos (2021, slides), iirc.
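To illustrate the catch-all point with the same kind of toy numbers (mine, made up): since nothing in the model constrains the catch-all's value, the overall verdict just tracks whatever we arbitrarily plug in for it.

```python
# Toy illustration of a "catch-all" term for crucial effects we haven't
# thought of. Nothing constrains its value, so the overall verdict simply
# follows the arbitrary choice we make for it. Numbers are made up.

modelled_ev = 10.0     # EV from the parameters we did think of
p_catch_all = 0.5      # guessed weight on "something crucial we missed"

for catch_all_value in (-100.0, 0.0, 100.0):   # no principled way to pick
    total_ev = modelled_ev + p_catch_all * catch_all_value
    print(catch_all_value, total_ev)           # -40.0, 10.0, 60.0
```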

When do indirect effects get to be treated as irrelevant for cluelessness reasons, and when do they not?

Greaves (2016) differentiates between simple and complex cluelessness. She essentially argues that:
- you can assume that random, symmetric, chaotic long-run effects (e.g., from moving my hand to the left, right now) give rise only to simple cluelessness and hence "cancel out"; 
- you can't do the same with effects where your reasons are asymmetric, i.e., where you have reasons to believe your action will end up doing good overall and reasons to believe the opposite, and these don't symmetrically cancel (e.g., the long-term effects of giving to AMF or the Make-A-Wish Foundation).[3] (A toy numeric sketch of this contrast follows below.)
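Here's the toy numeric sketch promised above (mine, not Greaves'; the numbers are purely illustrative): symmetric "noise-like" effects cancel out in expectation, whereas effects backed by asymmetric reasons only get an expectation once you pick weights for those reasons, and the verdict can flip with that arbitrary choice.

```python
import random

random.seed(0)

# Simple cluelessness: long-run effects whose sign we have no more reason
# to expect positive than negative. Modelled here (hypothetically) as
# symmetric noise, they cancel out in expectation.
symmetric_effects = [random.gauss(0, 100) for _ in range(100_000)]
print(sum(symmetric_effects) / len(symmetric_effects))  # close to 0

# Complex cluelessness: we have specific reasons pointing in each direction
# and no principled weights to put on them, so the expectation we compute
# depends on an arbitrary weighting choice (and its sign can flip).
for weight_on_good_story in (0.3, 0.5, 0.7):   # arbitrary choices
    ev = weight_on_good_story * 50 + (1 - weight_on_good_story) * (-40)
    print(weight_on_good_story, round(ev, 1))  # -13.0, 5.0, 23.0
```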

Now, when we don't know whether a given action is desirable because of complex cluelessness, what do we do? Here are the approaches worth mentioning that I'm aware of:
1. Fool yourself into believing you're not clueless and give an arbitrary best guess. This is very roughly "Option 1" in Clifton (2025). 
2. Accept that you don't know and embrace cluelessness nihilism. This is very roughly "Option 2" in Clifton (2025).
3. Find a justifiable way to "bracket out" the effects you are clueless about. This is roughly "Option 3" in Clifton (2025) and is thoroughly discussed in Kollin et al. (2025).
4. Do something similar to the above but with normative views. DiGiovanni might post something very thorough on this soon. But for now, see this quick take of his and Vinding's post that he responds to.

In practice, most core EAs do one of the following (in decreasing order of how common this is in my experience):
A) do some underdefined (and poorly-justified?) version of 3 where they ignore crucial parameters they think they can't estimate,[4] i.e., somehow treat complex cluelessness as if it were simple cluelessness. Examples of people endorsing such an approach are given in this discussion of mine and DiGiovanni (2025d). See Clifton (2025) on the difference between this approach and his bracketing proposal, though it can sometimes lead to the same behavior in practice.
B) have precise beliefs (whether explicitly or implicitly) about absolutely every crucial parameter (even, e.g., the values aliens hold, our acausal impact in other branches of the multiverse, and the cruxy considerations they are unaware of). Hence, on their view, there is just no complex-cluelessness paralysis, so no problem and no need for any of the four solutions above. I don't think the radical version of this approach as I present it has been explicitly publicly/formally defended (although it is definitely endorsed by many, sometimes unconsciously), but see Lewis (2021), as well as the refs and this poll from DiGiovanni, for related defenses.
C) openly endorse doing 1 and don't mind arbitrariness.

Which organizations' theories of change (if any) have explicitly tried to account for indirect effects, or selected approaches they think minimize unintended consequences? 

Greaves (2020) talks about how GiveWell has tried to factor in some medium-term indirect effects in their calculations, fwiw. 

I don't know about organizations' ToCs beyond that. Hopefully, my response to your first question helps, however. (EDIT to add: If we remain outside the realm of longtermism, there's Michael St. Jules' work on humans' impact on animals that I find particularly relevant. I think AIM and Rethink Priorities have done work intended to account for indirect effects too, but Michael would know much better where to point you!)

(EDIT 2 to add: Within the longtermism realm, there are also people who have thought about exotic crucial considerations, in an attempt to do something along the lines of B above: e.g., considerations related to aliens (see, e.g., this post of mine, Vinding (2024), and Riché (2025), and references therein) and acausal reasoning (see, e.g., this and that).)

In addition, it's worth noting that some have argued we should focus on what they consider to be robustly good interventions to influence the far future (e.g., capacity-building and avoiding lock-in scenarios), thereby aiming to minimize unintended consequences that could swamp the overall assessment of whether the intervention does more good than harm. See DiGiovanni (2025d) (last mention of him, I promise!) for a nice summary (and critique), and the references therein for cases in favor of these robustness approaches.


Anyway, cool project! :) And glad this prompted me to write all this. Sorry if it's a lot at once.

  1. ^

    For this approach applied to forecasting the long-term (dis)value of human extinction/expansion specifically, see this piece of mine and references therein.

  2. ^

    See DiGiovanni (2025c); Kollin et al. (2025, section 2); Greaves & MacAskill (2025, section 7.2); Tarsney (2024, section 3); Roussos (2021, slides); Tomasik (2015); Bostrom (2014).

  3. ^

    While this distinction is widely accepted afaict, there is a lot of disagreement about whether a given action gives rise to complex cluelessness (once we consider all its effects) or whether its overall desirability can be non-arbitrarily estimated. Hence all the debates around the epistemic challenge to longtermism. DiGiovanni (2025a; 2025c) offers the best overview of the topic to date imo, although he's not neutral on who's right. :)

  4. ^

    For longtermists, these can be considerations having to do with aliens, acausal reasoning, and unawareness. For neartermists, this can be the entire long-term future.


(This was a really nice "lightning lit review"; do consider saving it somewhere like a personal site instead of leaving it here :) I've seen a fair number of the writings you cited, but (being a lay outsider) I hadn't made sense of the "landscape" they constitute, so to speak, or why and how they're important, etc., until I read your answer here.)

Toby Tremlett🔹
Strongly agree!

explains how some popular approaches that might seem to differ are actually doing the same, but implicitly

Yep, I think this is a crucial point that I worry has still gotten buried a bit in my writings. This post is important background. Basically: You might say "I don't just rely on an inside view world model and EV max'ing under that model, I use outside views / heuristics / 'priors'." But it seems the justification for those other methods bottoms out in "I believe that following these methods will lead to good consequences under uncertainty in some sense" — and then I don't see how these beliefs escape cluelessness.

Wow, I was not expecting such a thorough answer -- I really appreciate it! I will try to do justice to the existing literature in the talk :)

As a random aside, I hired Anthony to do an internship at WAI in ~2019 so it's very funny to me for him to have gone off and done longtermism things, that nevertheless have ended up relevant to WAI.

Comments

I added a bunch of relevant tags to your post that might help you search the forum better.
