I would still say there are actions which are robustly beneficial in expectation, such as donating to SWP. It is possible SWP is harmful, but I still think donating to it is robustly better than killing my family, friends, and myself, even in terms of increasing impartial welfare.
It's kinda funny to reread this 6 months later. Since then, the sign of your precise best guess has flipped twice, right? You argued somewhere (I can't find the post) that shrimp welfare was actually slightly net bad, after estimating that it increases soil animal populations. Later, you started weakly believing that animal farming actually decreases the number of soil nematodes (which morally dominate in your view), which makes shrimp welfare (weakly) good again.
(Just saying this to check whether that's accurate, because I find it interesting. I'm not trying to lead you into a trap where you'd be forced to buy imprecise credences or retract the main opinion you defend in this comment thread. As I suggest in this comment, let's maybe discuss stuff like this on a better occasion.)
I suspect Vasco is reasoning about the implications of epistemic principles (applied to our evidence) in a way I'd find uncompelling even if I endorsed precise Bayesianism.
Oh, so for the sake of argument, assume the implications he sees are compelling. You are unsure about whether your good epistemic principles E imply (a) or (b).[1]
So then, the difference between (a) and (b) is purely empirical, and MNB does not allow me to compare (a) and (b), right? This is what I'd find a bit arbitrary, at first glance. The isolated fact that the difference between (a) and (b) is technically empirical and not normative doesn't feel like a good reason to say that your "bracket in consequentialist bracketing" move is ok but not the "bracket in ex post neartermism" move (with my generous assumptions in favor of ex post neartermism).
I don't mean to argue that this is a reasonable assumption. It's just a useful one for me to understand what moves MNB does and does not allow. If you find this assumption hard to make, imagine that you learn that we are likely in a simulation that is gonna shut down in 100 years and that the simulators aren't watching us (so we don't impact them).
I find impartial consequentialism and indeterminate beliefs very well-motivated, and these combined with consequentialist bracketing seem to imply neartermism (as Kollin et al. (2025) argue). So I think it’s plausible that metanormative bracketing implies neartermism.
Say I find ex post neartermism (Vasco's view that our impact washes out, ex post, after, say, 100 years) more plausible than consequentialist bracketing being both correct and action-guiding.
My favorite normative view (impartial consequentialism + plausible epistemic principles + maximality) gives me two options. Either:
Would you say that what dictates my view on (a) vs. (b) is my uncertainty between different epistemic principles, such that I can dichotomize my favorite normative view based on the epistemic drivers of (a) vs. (b)? (Such that, then, MNB allows me to bracket out the new normative view that implies (a) and bracket in the new normative view that implies (b), assuming no sensitivity to individuation.)
If not, I find it a bit arbitrary that MNB allows your "bracket in consequentialist bracketing" move and not this "bracket in ex post neartermism" move.
Spent some more time thinking about this, and I think I mostly lost my intuition in favor of bracketing in Emily's shoulder pain. I thought I'd share here.
In my contrived sniper setup, I've gotta do something, and my preferred normative view (impartial consequentialism + good epistemic principles + maximality) is silent. Options I feel like I have:
All these options feel arbitrary, but I have to pick something.
Picking D demands accepting the arbitrariness of letting perfect randomness guide our actions. We can't do worse than this.[2] It is the total-arbitrariness baseline we're trying to beat.
Picking A or B demands accepting the arbitrariness of favoring one over the other, while my setup does not give me any good reason to do so (and A and B give opposite recommendations). I could pick A by sorta wagering on, e.g., an unlikely world where the kid dies of Reye's syndrome (a disease that affects almost only children) before the potential bullet hits anything. But I could then also pick B by sorta wagering on the unlikely world where a comrade of the terrorist standing near him turns on him and kills him. And I don't see either of these two wager moves as more warranted than the other.[3]
Picking C, similarly, demands accepting the arbitrariness of favoring it over A (which gives the opposite recommendation), while my setup does not give me any good reason to do so. I could pick C by wagering on, e.g., an unlikely world where time ends between the potential shot hurting Emily's shoulder and the moment the potential bullet hits something. But I could then also pick A by wagering on the unlikely world where the kid dies of Reye's syndrome anyway. And the same problem as above arises.[4] This is what Anthony's first objection to bracketing gestures at, I guess.
While I have a strong anti-D intuition with this sniper setup, it doesn't favor C over A or B for me, at the very moment of writing.[5]
Should we think that our reasons for C are "more grounded" than our reasons for A, or something like that? I don't see why. Is there a variant of this sniper story where it seems easier to argue that it is the case (while conserving the complex cluelessness assumption)? And is such a variant a relevant analogy to our real-world predicament?
Not necessarily assuming persons-based bracketing (for A, B, or C), but rather whatever form of bracketing results in ignoring the payoffs associated with one or two of the three relevant actors.
Our judgment calls can very well be worse than random due to systematic biases (and I remember reading somewhere in the forecasting literature that this happens). But if we believe that's the case for us, we can just do the exact opposite of what our judgment calls say, and this beats a coin flip.
It feels like I’m just adding non-decisive mildly sweet considerations on top of the complex cluelessness pile I already had (after thinking about the different wind layers, the Earth's rotation, etc.). This will not allow me to single out one of these considerations as a tie-breaker.
This is despite an apparent kind of symmetry that exists only between A and B (not between C and A), which @Nicolas Mace recently pointed to in a doc comment, and which may feel normatively relevant although it feels superficial to me at the very moment of writing.
In fact, given the apparent difference in stakes between Emily’s shoulder pain and where the bullet ends up, I may be more tempted to act in accordance with A or B, deciding between the two based on what seems to be the least arbitrary tie-breaker. However, I'm not sure whether this temptation is, more precisely, one in favor of endorsing A or B, or in favor of rejecting cluelessness and the need for bracketing to begin with, or something else.
If most of the value we can influence is in the far future
To be clear, you don't necessarily assume this in the paper, and you don't need to, right? You need bracketing to escape cluelessness paralysis, even if you merely think it's indeterminate whether most of the value we can influence is in the far future, afaiu.
One could try to argue that the second-order effects of near-term interventions are negligible in expectation (see "the washing out hypothesis"). But I don’t think this is plausible.
So even if this were plausible (as Vasco thinks, for instance), this wouldn't be enough to think we don't need bracketing. One would need to have determinate-ish beliefs that rule out the possibility of far future effects dominating.
Yup, something a variety of views can get behind. E.g., not "buying beef".
For "consensual EAA interventions" above, I think I was thinking more "not something EAs see as ineffective like welfare reforms for circus animals". If this turned out to be the safest animal intervention, I suspect this wouldn't convince many EAs to consider it. But if, say, developing alternatives to rodents as snake food turned out to be very safe, this could weigh a lot in its favor for them.
Hey, sorry for reopening, but I'm very curious to get your take on this:
Say you have been asked to evaluate the overall[1] utilitarian impact of the very first Christianity-spreaders during the first century AD (like Paul the Apostle) on the world until now (independently of their intentions, ofc). You have perfect information on what's causally counterfactually related to their actions. How much of their impact (whether good or bad) is on beings living between 0 and 200 AD vs. on beings living between 200 AD and now? (Making the assumptions you specifically make about nematodes and stuff; don't take anyone else's perspective.)
If mostly the former, how do you explain that?
If mostly the latter, what's the difference between their ex post impact and yours? Why is most of their ex post impact longtermist-ish while yours would be neartermist? Why would, e.g., most of the people helping nematodes thanks to you (including very indirectly, through your influence on others before them) be concentrated within the next hundred years?
I.e., factoring in nematodes and stuff.
It seems highly plausible that you could counterfactually affect many more acres of this land (and thus many more soil animals) through building houses or other structures than trying to maintain factory farms.
This would not necessarily undermine your overall argument, but, interestingly, Tomasik's (2016-2022) estimates seem somewhat in tension with this claim. According to him, it's really hard to beat "buying beef" in terms of cost-effectiveness at reducing wild invertebrate populations.[1] (Not saying I agree, or that I think we should reduce wild invertebrate populations.)
Although he omits the possibility that agriculture might actually increase soil nematode populations, as Vasco also points out in another comment thread here.
Interesting! This resembles Michael St. Jules' hedging proposal.
Is it that obvious? I find it hard to come up with interventions that reduce farmed animal suffering (and farmed animal suffering only, ignoring all the rest) more robustly than "minimize the number of entities that incidentally perpetuate this farmed animal suffering". I feel like this should actually be pretty high up the list for someone who strictly spotlights farmed animals (like the farmed animal representatives are supposed to in your spotlighting parliament, if I understand correctly).
Why don't the managers of the Animal Welfare Fund support preventing the existence of more potential meat-eaters (via, e.g., increasing x-risks), then? I presume because
Curious if you agree.
For the moral trade between the representatives of human victims of malaria and the farmed animal representatives to be fair, in your setup, the preferences of the latter would have to actually spotlight farmed animals the same way the former spotlight human victims of malaria. I.e., the preferences of farmed animal representatives in your spotlighting parliament should not be those of real farmed animal advocates who are not spotlighting farmed animals (otherwise, they would obviously be pro-x-risks and stuff despite the downsides for other beings, the same way the representatives of human malaria victims are anti-poverty despite the meat-eater problem).