
Anthony DiGiovanni

1706 karma

Bio

Researcher at the Center on Long-Term Risk. All opinions my own.

Sequences (1)

The challenge of unawareness for impartial altruist action guidance

Comments (227)

Maybe you need some account of transworld identity (or counterparts) to match these lives across possible worlds

That's the concern, yeah. When I said “some nontrivially likely possible world containing an astronomical number of happy lives”, I should have said these were happy experience-moments, which (1) by definition only exist in the given possible world, and (2) seem to be the things I ultimately morally care about, not transworld persons.[1] Likewise, each of the experience-moments of the lives directly saved by the AMF donation exists only in a given possible world.

  1. ^

    (Or spacetime regions.)

Thanks for this post, Magnus! While I’m still uncompelled by your arguments in “Why give weight to a scope-adjusted view” for the reasons discussed here and here, I’ll set that aside and respond to the “Asymmetry in practical recommendations”.

Suppose that (i) the normative perspective from which we’re clueless (e.g., impartial consequentialism plus my framework here) says both A and B are permissible, and (ii) all other normative perspectives we give weight to say only A is permissible. In that case, I’d agree we should do A, no matter how minuscule the weight we give to the perspectives in (ii).

But realistically, our situation doesn’t seem that clean. Take {A = “donate to the Humane Slaughter Association”, B = “spend that money on yourself”}. It seems that different scope-adjusted views might give opposing verdicts here. Let T be the time horizon beyond which the HSA donation might lead to, say, increased wild animal suffering via increasing the price of meat for larger farmed animals.

  • If we discount fast enough, the effects before T (preventing painful slaughter) dominate ⇒ A is better than B.
  • If we discount more slowly (but not so slowly that we’re clueless!), the backfire effect on wild animals after T might dominate ⇒ B is better than A.

(In practice things might be much more complicated than this model lets on. It’s just an illustration.)
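If it’s useful, here’s a minimal sketch of that toy model in code. The +1/year near-term benefit, the -0.2/year post-T backfire, T = 20 years, and the two discount rates are all made-up numbers, chosen purely to show how the verdict can flip with the discount rate:

```python
import math

def discounted_net_effect(r, T=20, horizon=2000, near_term=1.0, backfire=-0.2):
    # Toy model: +near_term per year before T (preventing painful slaughter),
    # backfire (negative) per year from T onward (the hypothesized wild-animal effect),
    # both discounted exponentially at rate r.
    near = sum(near_term * math.exp(-r * t) for t in range(T))
    far = sum(backfire * math.exp(-r * t) for t in range(T, horizon))
    return near + far

for r in (0.20, 0.001):
    print(f"r = {r}: discounted net effect of A over B = {discounted_net_effect(r):+.1f}")

# r = 0.20  -> roughly +5: the pre-T benefits dominate, so A looks better than B.
# r = 0.001 -> roughly -150: the post-T backfire dominates, so B looks better than A.
```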

We might try to put meta-normative weights on these different discount rates. But I expect the relative weights to be arbitrary, which would make us clueless all over again from a consequentialist perspective. (I’m not saying our normative views need to be precisely specified — they can and will be vague — but we need some perhaps-vague overall reason to think the A-favoring perspectives outweigh the B-favoring perspectives. And I’m still keen for people to think about what that kind of reason might be!)

Unfortunately not that "succinct" :) but I argue here that cluelessness-ish arguments defeat the impartial altruistic case for any intervention, longtermist or not. Tl;dr: our estimates of the sign of our net long-term impact are arbitrary. (Building on Mogensen (2021).)

(It seems maybe defensible to argue something like: "We can at least non-arbitrarily estimate net near-term effects. Whereas we're clueless about the sign of any particular (non-'gerrymandered') long-term effect (or, there's something qualitatively worse about the reasons for our beliefs about such effects). So we have more reason to do interventions with the best near-term effects." This post gives the strongest case for that I'm aware of. I'm not personally convinced, but think it's worth investigating further.)

The "lower meat production" ⇒ "higher net primary productivity" ⇒ "higher wild animal suffering" connection seems robust to me. Or not that much less robust than the intended benefit, at least.

Permissive epistemology doesn't imply precise credences / completeness / non-cluelessness

(Many thanks to Jesse Clifton and Sylvester Kollin for discussion.)

My arguments against precise Bayesianism and for cluelessness appeal heavily to the premise “we shouldn’t arbitrarily narrow down our beliefs”. This premise is very compelling to me (and I’d be surprised if it’s not compelling to most others upon reflection, at least if we leave “arbitrary” open to interpretation). I hope to get around to writing more about it eventually.

But suppose you don’t care much about avoiding arbitrariness, for any interesting interpretation of “arbitrariness”. Suppose you have a highly permissive epistemology. E.g., you might think that as long as your beliefs satisfy some “coherence” conditions, that’s as far as rationality can take you, and you’re free to have whichever coherent beliefs you’re disposed to. Or you might think “beliefs” are nothing more or less than constraints on preferences, and it’s fine if preferences are arbitrary.

If you have such a view, does that imply your credences are precise, or your preferences are complete, or the like?

No.[1] You might simply introspect on your response to your evidence, intuitions, etc., and find that the most honest representation of that response is imprecise/incomplete. You might try weighing up various considerations about the consequences of some pair of actions, and find that your disposition is, “I have no clue. Even after noticing that some precise number popped into my head, all things considered I have no preference either way (but I also don’t feel indifferent, because my preferences are insensitive to mild sweetening).”

Of course, if you have a permissive epistemology and you don’t have such introspective reactions, there’s nothing more I can say to you. But it’s important to acknowledge that precision, completeness, and non-cluelessness are not some privileged default for the permissivist.
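If it helps, here’s a minimal sketch of the “insensitive to mild sweetening” point. The two-credence representor and the payoffs are toy numbers made up for illustration, and the comparison rule is a maximality-style one, which I take to be in the spirit of the imprecise view described above:

```python
# With imprecise credences, A and B can be incomparable rather than indifferent.
# If they were indifferent, adding a small sure bonus to A would make A strictly
# preferred; under incomparability it need not.

def ev(payoffs, p):
    return sum(p[s] * v for s, v in payoffs.items())

def preferred(a, b, representor):
    # A is preferred to B only if it does at least as well under every credence
    # in the representor, and strictly better under at least one.
    pairs = [(ev(a, p), ev(b, p)) for p in representor]
    return all(x >= y for x, y in pairs) and any(x > y for x, y in pairs)

representor = [{"good": 0.2, "bad": 0.8}, {"good": 0.7, "bad": 0.3}]  # toy credences
A = {"good": 100.0, "bad": -60.0}
B = {"good": 0.0, "bad": 0.0}
A_sweetened = {s: v + 1.0 for s, v in A.items()}  # A plus a small sure bonus

print(preferred(A, B, representor), preferred(B, A, representor))
# False False: no preference either way
print(preferred(A_sweetened, B, representor), preferred(B, A_sweetened, representor))
# still False False: so the lack of preference isn't mere indifference
```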

  1. ^

    As argued here and here, imprecision and incompleteness are consistent with all the usual coherence conditions that don’t straightforwardly beg the question against incompleteness.

My understanding is that your proposed policy would be something like 'represent an interval of credences and only take "actions" if the action seems net good across your interval of credences'. … you'd take no actions and do the default. (Starving to death? It's unclear what the default should be which makes this heuristic more confusing to apply.)

Definitely not saying this! I don’t think that (w.r.t. consequentialism at least) there’s any privileged distinction between “actions” and “inaction”, nor do I think I’ve ever implied this. My claim is: For any A and B, if neither EV_p(A) > EV_p(B) for all p in the representor P,[1] nor EV_p(B) > EV_p(A) for all p, then both A and B are permissible. This means that you have no reason to choose A over B or vice versa (again, w.r.t. consequentialism). Inaction isn’t privileged, but neither is any particular action.

Now of course one needs to pick some act (“action” or otherwise) all things considered, but I explain my position on that here.
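For concreteness, here’s a minimal sketch of that permissibility criterion (using the weakening from footnote [1]). The representor, payoffs, and act names are toy values made up for illustration; the point is just that “do nothing” enters the choice set like any other act:

```python
# Maximality over a representor P of credence functions: A is strictly preferred to B
# only if EV_p(A) >= EV_p(B) for every p in P, with strict inequality for at least one p.
# Acts not strictly dispreferred to any alternative are permissible.

def ev(payoffs, p):
    return sum(p[s] * v for s, v in payoffs.items())

def strictly_preferred(a, b, representor):
    pairs = [(ev(a, p), ev(b, p)) for p in representor]
    return all(x >= y for x, y in pairs) and any(x > y for x, y in pairs)

def permissible(acts, representor):
    return [name for name, a in acts.items()
            if not any(strictly_preferred(b, a, representor)
                       for other, b in acts.items() if other != name)]

# Toy example: two states, a representor with two credences that disagree.
representor = [{"s1": 0.3, "s2": 0.7}, {"s1": 0.6, "s2": 0.4}]
acts = {
    "donate": {"s1": 10.0, "s2": -5.0},   # EV is -0.5 under the first credence, +4 under the second
    "do_nothing": {"s1": 0.0, "s2": 0.0},  # "inaction" is just another act, with no privileged status
}
print(permissible(acts, representor))  # ['donate', 'do_nothing'] -- both permissible
```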

properly incorporating model uncertainty into your estimates

What do you mean by “properly incorporating”? I think any answer here that doesn’t admit indeterminacy/imprecision will be arbitrary, as argued in my unawareness sequence.

basically any interval which is supposed to include the plausible ranges of belief goes ~all the way from 0 to 1 

Why do you think this? I argue here and here (see Q4 and links therein) that this need not be the case, especially when we’re forming beliefs relevant to local-scale goals.

My understanding is that you work around this by saying "we ignore considerations which are further down the crazy train (e.g. simulations, long run future, etc)  or otherwise seem more "speculative" until we're able to take literally any actions at all and then proceed at that stop on the train".

Also definitely not saying this. (I explicitly push back on such ad hoc ignoring of crazy-train considerations here.) My position is: (1) W.r.t. impartial consequentialism we can’t ignore any considerations. (2) But insofar as we’re making decisions based on ~immediate self-interest, parochial concern for others near to us, and non-consequentialist reasons, crazy-train considerations aren’t normatively relevant — so it’s not ad hoc to ignore them in that case. See also this great comment by Max Daniel. (Regardless, none of this is a positive argument for “make up precise credences about crazy-train considerations and act on them”.)

  1. ^

    Technically this should be weakened to “weak inequality for all p + strict inequality for at least one p”.

(ETA: The parent comment contains several important misunderstandings of my views, so I figured I should clarify here. Hence my long comments — sorry about that.)

Thanks for this, Ryan! I’ll reply to your main points here, and clear up some less central yet important points in another comment.

Here's what I think you're saying (sorry the numbering clashes with the numbering in your comment, couldn't figure out how to change this):

  1. The best representations of our actual degrees of belief given our evidence, intuitions, etc. — what you call the “terminally correct” credences — should be precise.[1]
  2. In practice, the strategy that maximizes EV w.r.t. our terminally correct credences won’t be “make decisions by actually writing down a precise distribution and trying to maximize EV w.r.t. that distribution”. This is because there are empirical features of our situation that hinder us from executing that strategy ideally.
  3. I (Anthony) am mistakenly inferring from (2) that (1) is false.
    1. (In particular, any argument against (1) that relies on premises about the “empirical aspects of the current situation” must be making that mistake.)

Is that right? If so:

  • I do disagree with (1), but for reasons that have nothing to do with (2). My case for imprecise credences is: “In our empirical situation, any particular precise credence [or expected value] we might pick would be highly arbitrary” (argued for in detail here). (So I’m also not just saying “you can have imprecise credences without getting money pumped”.)
    • I’m not saying that “heuristics” based on imprecise credences “outperform” explicit EV max. I don’t think that principles for belief formation can bottom out in “performance” but should instead bottom out in non-pragmatic principles — one of which is (roughly) “if our available information is so ambiguous that picking one precise credence over another seems arbitrary, our credences should be imprecise”.
    • However, when we use non-pragmatic principles to derive our beliefs, the appropriate beliefs (not the principles themselves) can and should depend on empirical features of our situation that directly bear on our epistemic state: E.g., we face lots of considerations about the plausibility of a given hypothesis, and we seem to have too little evidence (+ too weak constraints from e.g. indifference principles or Occam’s razor) to justify any particular precise weighing of these considerations.[2] Contra (3.a), I don’t see how/why the structure of our credences could/should be independent of very relevant empirical information like this.
      • Intuition pump: Even an "ideal" precise Bayesian doesn't actually terminally care about EV; they terminally care about the ex post value. But their empirical situation makes them uncertain what the ex post value of their action will be, so they represent their epistemic state with precise credences, and derive their preferences over actions from EV. This doesn’t imply they’re conflating terminal goals with empirical facts about how best to achieve them.
  • Separately, I haven’t yet seen convincing positive cases for (1). What are the “reasonably compelling arguments” for precise credences + EV maximization? And (if applicable to you) what are your replies to my counterarguments to the usual arguments here[3] (also here and here, though in fairness to you, those were buried in a comment thread)?
  1. ^

    So in particular, I think you're not saying the terminally correct credences for us are the credences that our computationally unbounded counterparts would have. If you are saying that, please let me know and I can reply to that — FWIW, as argued here, it’s not clear a computationally unbounded agent would be justified in precise credences either.

  2. ^

    This is true of pretty much any hypothesis we consider, not just hypotheses about especially distant stuff. This ~adds up to normality / doesn’t collapse into radical skepticism, because we have reasons to have varying degrees of imprecision in our credences, and our credences about mundane stuff will only have a small degree of imprecision (more here and here).

  3. ^

    Quote: “[L]et’s revisit why we care about EV in the first place. A common answer: “Coherence theorems! If you can’t be modeled as maximizing EU, you’re shooting yourself in the foot.” For our purposes, the biggest problem with this answer is: Suppose we act as if we maximize the expectation of some utility function. This doesn’t imply we make our decisions by following the procedure “use our impartial altruistic value function to (somehow) assign a number to each hypothesis, and maximize the expectation”.” (In that context, I was talking about assigning precise values to coarse-grained hypotheses, but the same applies to assigning precise credences to any hypothesis.)

^ I'm also curious to hear from those who disagree-voted my comment why they disagree. This would be very helpful for my understanding of what people's cruxes for (im)precision are.

I’m strongly in favor of allowing intuitive adjustments on top of quantitative modeling when estimating parameters.

We had a brief thread on this over on LW, but I'm still keen to hear why you endorse using precise probability distributions to represent these intuitive adjustments/estimates. I take many of titotal's critiques in this post to be symptoms of precise Bayesianism gone wrong (not to say titotal would agree with me on that).

ETA: Which, to be clear, is a question I have for EAs in general, not just you. :)

In theory, we could influence them, and in some sense merely wagging a finger right now has a theoretical influence on them. Yet it nevertheless seems to me quite defensible to practically disregard (or near-totally disregard, à la asymptotic discount) these effects given how remote they are

Sorry, I'm having a hard time understanding why you think this is defensible. One view you might be gesturing at is:

  1. If a given effect is not too remote, then we can model actions A and B's causal connections to that effect with relatively high precision — enough to justify the claim that A is more/less likely to result in the effect than B.
  2. If the effect is highly remote, we can't do this. (Or, alternatively, we should treat A and B as precisely equally likely to result in the effect.)
  3. Therefore, we can only systematically make a difference to effects of type (1). So only those effects are practically relevant.

But this reasoning doesn't seem to hold up for the same reasons I've given in my critiques of Option 3 and Symmetry. So I'm not sure what your actual view is yet. Can you please clarify? (Or, if the above is your view, I can try to unpack why my critiques of Option 3 and Symmetry apply just as well here.)
