
Anthony DiGiovanni

1940 karma · Joined

Bio

Researcher at the Center on Long-Term Risk (part-time). All opinions my own.

How others can help me

I'm currently open to roles in animal welfare or reduction of extreme human suffering (research or grantmaking).

Sequences (1)

The challenge of unawareness for impartial altruist action guidance

Comments (245)

Right, but the same point applies to other scope-restricted views, no? We need some non-arbitrary answer as to why we limit the scope to some set of consequences rather than a larger or smaller set. (I do think bracketing is a relatively promising direction for such a non-arbitrary answer, to be clear.)

Sorry this wasn't clear! I wasn't thinking about the choice between fully eliminating factory farming vs. the status quo. I had in mind marginal decreased demand for animal products leading to marginal decreased land use (in expectation), which I do think we have a fairly simple and well-evidenced mechanism for.

I also didn't mean to say the wild animal effects dominate, just that they're large enough to be competitive with the farmed animal effects. I agree the tradeoffs between e.g. cow or chicken suffering vs. wild insect suffering seem ambiguous. (And yep, from a non-suffering-focused perspective, it would also plausibly be ambiguous whether increased wild insect populations are bad.)

(I think when I wrote the above comment, I was thinking of pretty coarse-grained buckets of "robustness" vs "speculativeness".)

I have mixed feelings about this. So, there are basically two reasons why bracketing isn't orthodox impartial consequentialism:

  1. My choice between A and B isn't exactly determined by whether I think A is "better" than B. See Jesse's discussion in this part of the appendix.
  2. Even if we could interpret bracketing as a betterness ranking, the notion of "betterness" here requires assigning a weight of zero to consequences that I don't think are precisely equally good under A vs. B.

I do think both of these are reasons to give less weight to bracketing in my decision-making than I give to standard non-consequentialist views.[1]

However:

  • It's still clearly consequentialist in the sense that, well, we're making our choice based only on the consequences, and in a scope-sensitive manner. I don't think standard non-consequentialist views get you the conclusion that you should donate to AMF rather than MAWF, unless they're defined such that they suffer from cluelessness too.
  • There's an impartial reason why we "ignore" the consequences at some locations of value in our decision-making, namely, that those consequences don't favor one action over the other. (I think the same is true if we don't use the "locations of value" framework, but instead something more like what Jesse sketches here, though that's harder to make precise.)
  1. ^

    E.g. compare (i) "A reduces x more units of disutility than B within the maximal bracket-set I', but I'm clueless about A vs. B when looking outside the maximal bracket-set", with (ii) "A reduces x more units of disutility than B within I', and A and B are equally good in expectation when looking outside the maximal bracket-set." I find (i) to be a somewhat compelling reason to do A, but it doesn't feel like as overwhelming a moral duty as the kind of reason given by (ii).
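
    In rough symbols (my own shorthand, not Jesse's: D_{I'}(X) is the expected disutility X reduces within the maximal bracket-set I', and ~I' is everything outside it):

      • (i): D_{I'}(A) = D_{I'}(B) + x, but A [incomparable]_{~I'} B.
      • (ii): D_{I'}(A) = D_{I'}(B) + x, and A =_{~I'} B in expectation.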

Rejecting premise 1, completeness, is essentially a nonstarter in the context of morality, where the whole project is premised on figuring out which worlds, actions, beliefs, rules, etc., are better than or equivalent to others. You can deny this in your heart of hearts - I won’t say that you literally cannot believe that two things are fundamentally incomparable - but I will say that the world never accommodates your sincerely held belief or conscientious objector petition when it confronts you with the choice to take option A, option B, or perhaps flip a coin between them.

This isn't central to your post, but I'm commenting on it because it's a very common defense of completeness in EA and I think rejecting completeness has very important implications:

I don't buy this argument. If you're forced to choose between A and B, and you pick A, this isn't enough to show you think A is "better" with respect to some particular normative view v — e.g., some lexical threshold consequentialism. You might have simply picked arbitrarily, or you might have chosen based on some other normative criteria you put some weight on.[1]

And incomparability differs from indifference in that, if you consider A and B incomparable, you might also consider "A + $1" and B incomparable. To me this property seems pretty intuitive in many cases, like this one from Schoenfield (in the context of relative probabilities, not betterness):

You are a confused detective trying to figure out whether Smith or Jones committed the crime. You have an enormous body of evidence that you need to evaluate. Here is some of it: You know that 68 out of the 103 eyewitnesses claim that Smith did it but Jones’ footprints were found at the crime scene. Smith has an alibi, and Jones doesn’t. But Jones has a clear record while Smith has committed crimes in the past. The gun that killed the victim belonged to Smith. But the lie detector, which is accurate 71% of the time, suggests that Jones did it. After you have gotten all of this evidence, you have no idea who committed the crime. You are no more confident that Jones committed the crime than that Smith committed the crime, nor are you more confident that Smith committed the crime than that Jones committed the crime.

[paraphrased:] But now you learn that, actually, 69 eyewitnesses claim Smith did it, not 68. The proposition that Smith did it has been mildly sweetened. So, should you now think that Smith is more likely to have done it than Jones?
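
To make the structural difference explicit (a rough sketch in my own notation, not Schoenfield's: ">" for strict betterness or greater confidence, "~" for indifference, "#" for incomparability, and A+ for the mildly sweetened version of A):

  • If A ~ B and A+ > A, then A+ > B: indifference is sensitive to mild sweetening.
  • If A # B and A+ > A, it can still be that A+ # B: incomparability needn't be resolved by mild sweetening.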

More on this here and here.

  1. ^

    There are some philosophical wrinkles involved in making this idea rigorous, which I hope to do in a forthcoming post. But see here for a bit of a preview.

I'd recommend specifically checking out here and here, for why we should expect unintended effects (of ambiguous sign) to dominate any intervention's impact on total cosmos-wide welfare by default. The whole cosmos is very, very weird. (Heck, ASI takeoff on Earth alone seems liable to be very weird.) I think given the arguments I've linked, anyone proposing that a particular intervention is an exception to this default should spell out much more clearly why they think that's the case.

This will be obvious to Jesse, but for others:

Another important sense in which bracketing isn't the same thing as ignoring cluelessness is, we still need to account for unawareness. Before thinking about unawareness, we might have credences about some locations of value I' that tell us A >_{I'} B. But if the mechanisms governing our impact on I' are complex/unfamiliar enough, arguably our unawareness about I' is sufficiently severe that we should consider A and B incomparable on I'.

Thanks Ben — a few clarifications:

  • Bracketing doesn’t in general recommend focusing on the “first order consequences”, in the sense people usually use that term (e.g. the first step in some coarse-grained causal pathway). There can be locations of value I’ where we’d think A >_{I’} B if we only considered first order consequences, yet A [incomparable]_{I’} B all things considered. Conversely, there can be locations of value I’ that are only affected by higher-order consequences, yet A >_{I’} B.
  • Not sure exactly what you mean by “generally do better”, but just to be clear: Bracketing is its own theory of what it means to “do better” as an impartial altruist, not a formalization of a heuristic for getting higher EV. (Jesse says as much in the summary.)


On your second point: Even if AI would plausibly kill the bed net recipients soon, we also need to say whether (1) any concrete intervention we’re aware of would decrease AI risk in expectation, and (2) that intervention would be more cost-effective for bracketed-in welfare than the alternatives, if so.

  • I’m skeptical of (1), briefly, because whether an intervention prevents vs. causes an AI x-risk seems sensitive to various dynamics that we have little evidence about + are too unfamiliar for us to trust our intuitions about. More on this here.
  • Re: (2), if we’re bracketing out the far-future consequences, I expect it’s hard to argue that AI risk work is more cost-effective than the best animal welfare opportunities. (Less confident in this point than the previous one, conditional on (1).)

Hi Toby — sorry if this is an annoyingly specific (or not!) question, but do you have a sense of whether the following would meet the bar for "deep engagement"?:

  • One of the chapters contains a pretty short subsection that's quite load-bearing for the thesis of the chapter. (So in particular, the subsection doesn't give an extensive argument for that load-bearing claim.)
  • The essay replies at length to that subsection. Since the subsection doesn't contain an extensive argument, though, most of the essay's content is:
    • (1) a reply to anticipated defenses of the subsection's claim, and/or
    • (2) discussion of some surrounding philosophical context (outside the chapter) the subsection's claim relies on.

ETA: Or this?:

  • (Edited) One of the chapters gives some counterarguments to a class of critiques of longtermism, but similar counterarguments are given in various other writings.
  • (Edited) The essay's purpose is largely to reply to those counterarguments. But the framing of those counterarguments in the chapter per se isn't essential, and the essay also focuses on some other key points that are more tangential to the chapter.

explains how some popular approaches that might seem to differ are actually doing the same thing, but implicitly

Yep, I think this is a crucial point that I worry has still gotten buried a bit in my writings. This post is important background. Basically: You might say "I don't just rely on an inside view world model and EV max'ing under that model, I use outside views / heuristics / 'priors'." But it seems the justification for those other methods bottoms out in "I believe that following these methods will lead to good consequences under uncertainty in some sense" — and then I don't see how these beliefs escape cluelessness.

Poll: Is this one of your cruxes for cluelessness?

There's a cluster of responses to arguments for cluelessness I've encountered, which I'm not yet sure I understand but which may be important. Here's my attempted summary:[1]

Sure, maybe assigning each action a precise EV feels arbitrary. But that feeling merely reflects the psychological difficulty of generating principled numbers, for non-ideal agents like us. It's not a problem for the view that even non-ideal agents should, ultimately, evaluate actions as more or less rational based on precise EV.

If you're skeptical of cluelessness, I'd find it super helpful if you'd agree-vote if you agree with the above response or something very similar to it, and disagree-vote otherwise. ETA: I've added a poll widget below; please use that instead (thanks to Toby Tremlett for suggesting this). (Please don't vote if you aren't skeptical of cluelessness.) And feel free to comment with some different version of the above you'd agree with, if that difference is important for you. Thanks!

The "arbitrariness" of precise EVs is just a matter of our discomfort with picking a precise number (see above).
  1. ^

    Some examples of sentiments that, IIUC, this summary encapsulates (emphasis mine):
    • Greaves: "I think most of us feel like we’re really just making up arbitrary numbers, but that’s really uncomfortable because precisely which arbitrary numbers we make up seems to make a difference to what we ended up doing." See also Greaves' discussion of the "decision discomfort" involved in complex cluelessness.
    • Soares: "Now, I agree that this scenario is ridiculous. And that it sucks. And I agree that picking a precise minute feels uncomfortable. And I agree that this is demanding way more precision than you are able to generate. But if you find yourself in the game, you'd best pick the minute as well as you can. When the gun is pressed against your temple, you cash out your credences."
