Extinction forecloses all option value — including the option for future agents to course-correct if we've made mistakes. Survival preserves the ability to solve new problems. This isn't a claim about net welfare across cosmic history; it's a claim about preserving agency and problem-solving capacity.
I think it still implicitly is a claim about net welfare across the cosmos. You have to believe that preserving option value will actually eventually lead to higher net welfare across the cosmos[1], a belief which I argue relies on judgment calls. (And the option-value argument for x-risk reduction was already somewhat infamous as a bad one in the GPR literature, including among x-risk reducers.)
You might say individuals can act on non-longtermist grounds while remaining longtermist-clueless. But this concedes that something must break the paralysis, and I'd argue that "preserve option value / problem-solving capacity" is a principled way to do so that doesn't require the full judgment-call apparatus you describe.
Nice, that's the crux! Yeah, so I tentatively find something like bracketing out long-term effects more principled (as a paralysis breaker) than option-value preservation. I have no clue whether reducing the agony of the many animals we can robustly help in the near term is overall good once indirect long-term effects are considered, but I find doing it anyway far more justifiable than "reducing x-risks and letting future people decide what they should do". I would prefer the latter if I bought the premises of the option-value argument for x-risk reduction, but then I wouldn't be clueless and wouldn't have a paralysis problem to begin with.
I don't see any good reason to believe enabling our descendants is impartially better than doing the exact opposite (both positions rely on judgment calls that seem arbitrary to me). However, I see good (non-longtermist) reasons to reduce near-term animal suffering rather than increase it.
Unless you intrinsically value the existence of Earth-originated agents or something, and in a way where you're happy to ignore the welfarist considerations that may leave you clueless on their own. In this case, you obviously think reducing P(extinction) is net positive. But then,
Nice, thanks! (I gave examples of charities/work where you're kinda agnostic because of a crux other than AI timelines, but this was just to illustrate.)
Assuming that saving human lives increases welfare, I agree doing it earlier increases welfare more if TAI happens earlier.
I had no doubts you thought this! :) I'm just curious as to whether you see reasons for someone to optimize assuming long AI timelines, despite low resilience in their high credence in long AI timelines.
(Hey Vasco!) How resilient is your relatively high credence that AI timelines are long?
And would you agree that the less resilient it is, the more you should favor interventions that are also good under short AI timelines? (E.g., the work of GiveWell's top charities over making people consume fewer unhealthy products, since the latter pays off far later, as you and Michael discuss in this thread.)
it seems pretty likely to me that aquatic noise reduces both fish and invertebrate populations (and unlikely that it increases them), by increasing mortality and reducing fertility.
What about trophic cascades? Maybe the populations most directly affected and reduced by aquatic noise were essential for keeping overall wild animal populations down?
Do you think aquatic noise is like some specific forms of fishing that determinately reduce overall populations? Is it because you think it directly affects/reduces all populations (unlike some other specific forms of fishing) such that trophic cascades can hardly compensate?
if we're clueless about whether Emily will feel pain or not, then the difference disappears. In this case, I don't have the pro-not-shooting bracketing intuition.
Should this difference matter if we're not difference-making risk-averse or something? In both cases, C is better for Emily in expectation (the same way reducing potential termite suffering is better for termites, in expectation, even if it might make no difference because they might not be sentient).
Now, a new thought experiment. Consider whatever intervention you find robustly overall good in the near term (without bracketing out any near-term effect) and replace A, B, and C with the following:
Do you have the pro-C' intuition, then? If yes, what's different from the sniper case?
Interesting! This resembles Michael St. Jules' hedging proposal.
But the animal representatives, even if they aren't thrilled by retaining another omnivore, have more pressing priorities than trying to help animals by eliminating meat-eaters one by one.
Is it that obvious? I find it hard to come up with interventions that reduce farmed animal suffering (and farmed animal suffering only, ignoring all the rest) more robustly than "minimize the number of entities that incidentally perpetuate this farmed animal suffering". I feel like this should actually be pretty high up the list for someone who strictly spotlights farmed animals (like the farmed animal representatives are supposed to in your spotlighting parliament, if I understand correctly).
Why don't the managers of the Animal Welfare Fund support preventing the existence of more potential meat-eaters, then (via, e.g., increasing x-risks)? I presume because
Curious if you agree.
For the moral trade between the representatives of human victims of malaria and the farmed animal representatives to be fair, in your setup, the preferences of the latter would have to actually spotlight farmed animals the same way the former spotlight human victims of malaria. I.e., the preferences of farmed animal representatives in your spotlighting parliament should not be those of real farmed animal advocates who are not spotlighting farmed animals (otherwise, they would obviously be pro-x-risk and the like despite the downsides for other beings, the same way the representatives of human malaria victims are anti-poverty despite the meat-eater problem).
I would still say there are actions which are robustly beneficial in expectation, such as donating to SWP. It is possible SWP is harmful, but I still think donating to it is robustly better than killing my family, friends, and myself, even in terms of increasing impartial welfare.
It's kinda funny to reread this 6 months later. Since then, the sign of your precise best guess has flipped twice, right? You argued somewhere (I can't find the post) that shrimp welfare work was actually slightly net bad, after estimating that it increases soil animal populations. Later, you started weakly believing that animal farming actually decreases the number of soil nematodes (which morally dominate in your view), which makes shrimp welfare work (weakly) good again.
(Just saying this to check if that's accurate because that's interesting. I'm not trying to lead you into a trap where you'd be forced to buy imprecise credences or retract the main opinion you defend in this comment thread. As I suggest in this comment, let's maybe discuss stuff like this on a better occasion.)
I suspect Vasco is reasoning about the implications of epistemic principles (applied to our evidence) in a way I'd find uncompelling even if I endorsed precise Bayesianism.
Oh, so for the sake of argument, assume the implications he sees are compelling. You are unsure about whether your good epistemic principles E imply (a) or (b).[1]
So then, the difference between (a) and (b) is purely empirical, and MNB does not allow me to compare (a) and (b), right? This is what I'd find a bit arbitrary, at first glance. The isolated fact that the difference between (a) and (b) is technically empirical and not normative doesn't feel like a good reason to say that your "bracket in consequentialist bracketing" move is ok but not the "bracket in ex post neartermism" move (with my generous assumptions in favor of ex post neartermism).
I don't mean to argue that this is a reasonable assumption. It's just a useful one for me to understand what moves MNB does and does not allow. If you find this assumption hard to make, imagine that you learn that we're likely in a simulation that is gonna shut down in 100 years and that the simulators aren't watching us (so we don't impact them).
I find impartial consequentialism and indeterminate beliefs very well-motivated, and these combined with consequentialist bracketing seem to imply neartermism (as Kollin et al. (2025) argue), so I think it's plausible that metanormative bracketing implies neartermism.
Say I find ex post neartermism (Vasco's view that our impact washes out, ex post, after, say, 100 years) more plausible than consequentialist bracketing being both correct and action-guiding.
My favorite normative view (impartial consequentialism + plausible epistemic principles + maximality) gives me two options. Either:
Would you say that what dictates my view on (a) vs. (b) is my uncertainty between different epistemic principles, such that I can dichotomize my favorite normative view based on the epistemic drivers of (a) vs. (b)? (So that MNB then allows me to bracket out the new normative view that implies (a) and bracket in the new normative view that implies (b), assuming no sensitivity to individuation.)
If not, I find it a bit arbitrary that MNB allows your "bracket in consequentialist bracketing" move and not this "bracket in ex post neartermism" move.
If you weren't doing [B] with moral weights, though, you would presumably have to worry about things other than effects on soil animals. So, ultimately, [B] remains an important crux for you.
(You could still say you'd prioritize decreasing uncertainty on moral weights if you thought there was too much uncertainty to justify doing [B], but the results from such research might never be precise enough to be action-guiding. You might have to endorse [B] despite the ambiguity, or one of the three others.)