Researcher at the Center on Long-Term Risk (part-time). All opinions my own.
I'm currently open to roles in animal welfare or reduction of extreme human suffering (research or grantmaking).
Sorry this wasn't clear! I wasn't thinking about the choice between fully eliminating factory farming vs. the status quo. I had in mind a marginal decrease in demand for animal products leading to a marginal decrease in land use (in expectation), which I do think we have a fairly simple and well-evidenced mechanism for.
I also didn't mean to say the wild animal effects dominate, just that they're large enough to be competitive with the farmed animal effects. I agree the tradeoffs between e.g. cow or chicken suffering vs. wild insect suffering seem ambiguous. (And yep, from a non-suffering-focused perspective, it would also plausibly be ambiguous whether increased wild insect populations are bad.)
(I think when I wrote the above comment, I was thinking of pretty coarse-grained buckets of "robustness" vs "speculativeness".)
I have mixed feelings about this. So, there are basically two reasons why bracketing isn't orthodox impartial consequentialism:
I do think both of these are reasons to give less weight to bracketing in my decision-making than I give to standard non-consequentialist views.[1]
However:
E.g. compare (i) "A reduces x more units of disutility than B within the maximal bracket-set I', but I'm clueless about A vs. B when looking outside the maximal bracket-set", with (ii) "A reduces x more units of disutility than B within I', and A and B are equally good in expectation when looking outside the maximal bracket-set." I find (i) to be a somewhat compelling reason to do A, but it doesn't feel like as overwhelming a moral duty as the kind of reason given by (ii).
Rejecting premise 1, completeness, is essentially a nonstarter in the context of morality, where the whole project is premised on figuring out which worlds, actions, beliefs, rules, etc., are better than or equivalent to others. You can deny this in your heart of hearts - I won’t say that you literally cannot believe that two things are fundamentally incomparable - but I will say that the world never accommodates your sincerely held belief or conscientious objector petition when it confronts you with the choice to take option A, option B, or perhaps to flip a coin between them.
This isn't central to your post, but I'm commenting on it because it's a very common defense of completeness in EA and I think rejecting completeness has very important implications:
I don't buy this argument. If you're forced to choose between A and B, and you pick A, this isn't enough to show you think A is "better" with respect to some particular normative view v — e.g., some lexical threshold consequentialism. You might have simply picked arbitrarily, or you might have chosen based on some other normative criterion you put some weight on.[1]
And incomparability differs from indifference in that, if you consider A and B incomparable, you might also consider "A + $1" and B incomparable (whereas if you were indifferent between A and B, you'd have to strictly prefer "A + $1" to B). To me this property seems pretty intuitive in many cases, like this one from Schoenfield (in the context of relative probabilities, not betterness):
You are a confused detective trying to figure out whether Smith or Jones committed the crime. You have an enormous body of evidence that you need to evaluate. Here is some of it: You know that 68 out of the 103 eyewitnesses claim that Smith did it but Jones’ footprints were found at the crime scene. Smith has an alibi, and Jones doesn’t. But Jones has a clear record while Smith has committed crimes in the past. The gun that killed the victim belonged to Smith. But the lie detector, which is accurate 71% of the time, suggests that Jones did it. After you have gotten all of this evidence, you have no idea who committed the crime. You are no more confident that Jones committed the crime than that Smith committed the crime, nor are you more confident that Smith committed the crime than that Jones committed the crime.
[paraphrased:] But now you learn that actually 69 eyewitnesses claim Smith did it, not 68. The proposition that Smith did it has been mildly sweetened. So, should you now think that Smith is more likely to have done it than Jones?
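To make the contrast with indifference explicit, here's a minimal formal sketch (my own notation, not Schoenfield's: read $\succ$ as strict preference / greater confidence, $\sim$ as indifference, $\parallel$ as incomparability, and $A^{+}$ as A mildly sweetened):

$$A \sim B \ \text{and}\ A^{+} \succ A \;\Rightarrow\; A^{+} \succ B \qquad \text{(indifference is broken by mild sweetening)}$$

$$A \parallel B \ \text{and}\ A^{+} \succ A \;\not\Rightarrow\; A^{+} \succ B \qquad \text{(incomparability can survive mild sweetening)}$$

(The first line assumes the standard coherence principle that strict preference carries across indifference; the second just says that no analogous principle forces the sweetened option to beat B.)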
I'd recommend specifically checking out here and here, for why we should expect unintended effects (of ambiguous sign) to dominate any intervention's impact on total cosmos-wide welfare by default. The whole cosmos is very, very weird. (Heck, ASI takeoff on Earth alone seems liable to be very weird.) I think given the arguments I've linked, anyone proposing that a particular intervention is an exception to this default should spell out much more clearly why they think that's the case.
This will be obvious to Jesse, but for others:
Another important sense in which bracketing isn't the same thing as ignoring cluelessness is that we still need to account for unawareness. Before thinking about unawareness, we might have credences about some locations of value I' that tell us A >_{I'} B. But if the mechanisms governing our impact on I' are complex/unfamiliar enough, arguably our unawareness about I' is sufficiently severe that we should consider A and B incomparable on I'.
Thanks Ben — a few clarifications:
On your second point: Even if AI would plausibly kill the bed net recipients soon, we also need to say whether (1) any concrete intervention we’re aware of would decrease AI risk in expectation, and (2) if so, whether that intervention would be more cost-effective for bracketed-in welfare than the alternatives.
Hi Toby — sorry if this is an annoyingly specific (or not!) question, but do you have a sense of whether the following would meet the bar for "deep engagement"?:
ETA: Or this?:
explains how some popular approaches that might seem to differ are actually doing the same thing, just implicitly
Yep, I think this is a crucial point that I worry has still gotten buried a bit in my writings. This post is important background. Basically: You might say "I don't just rely on an inside view world model and EV max'ing under that model, I use outside views / heuristics / 'priors'." But it seems the justification for those other methods bottoms out in "I believe that following these methods will lead to good consequences under uncertainty in some sense" — and then I don't see how these beliefs escape cluelessness.
Poll: Is this one of your cruxes for cluelessness?
There's a cluster of responses to arguments for cluelessness that I've encountered, which I'm not yet sure I understand but which may be important. Here's my attempted summary:[1]
Sure, maybe assigning each action a precise EV feels arbitrary. But that feeling merely reflects the psychological difficulty of generating principled numbers, for non-ideal agents like us. It's not a problem for the view that even non-ideal agents should, ultimately, evaluate actions as more or less rational based on precise EV.
If you're skeptical of cluelessness, I'd find it super helpful if you'd agree-vote if you agree with the above response or something very similar to it, and disagree-vote otherwise. ETA: I've added a poll widget below; please use that instead (thanks to Toby Tremlett for suggesting this). (Please don't vote if you aren't skeptical of cluelessness.) And feel free to comment with some different version of the above you'd agree with, if that difference is important for you. Thanks!
Some examples of sentiments that, IIUC, this summary encapsulates (emphasis mine):
* Greaves: "I think most of us feel like we’re really just making up arbitrary numbers, but that’s really uncomfortable because precisely which arbitrary numbers we make up seems to make a difference to what we ended up doing." See also Greaves' discussion of the "decision discomfort" involved in complex cluelessness.
* Soares: "Now, I agree that this scenario is ridiculous. And that it sucks. And I agree that picking a precise minute feels uncomfortable. And I agree that this is demanding way more precision than you are able to generate. But if you find yourself in the game, you'd best pick the minute as well as you can. When the gun is pressed against your temple, you cash out your credences."
Right, but the same point applies to other scope-restricted views, no? We need some non-arbitrary answer as to why we limit the scope to some set of consequences rather than a larger or smaller set. (I do think bracketing is a relatively promising direction for such a non-arbitrary answer, to be clear.)