I think if we only do spatiotemporal bracketing, it tells us to ignore the far future and causally inaccessible spacetime locations, because each such location is made neither determinately better off in expectation nor determinately worse off in expectation.
Oh helpful, thanks! This reasoning also works in my sniper case, actually. I am clueful about the "where Emily is right after she potentially shoots" ST location, so I can't bracket out the payoff attached to her shoulder pain. This payoff is contained within this small ST region. However, the payoffs associated with where the bullet ends up aren't neatly contained in small ST regions the same way! I want the terrorist dead because he's going to keep terrorizing parts of the world otherwise. I want the kid alive to prevent the negative consequences (in various ST regions) associated with an innocent kid's death. Because of this, I arguably can't pin down any specific ST location other than "where Emily is right after she potentially shoots" that is made determinately better or worse off by Emily taking the shot. Hence, ST bracketing would allow C but not A or B.
To the extent that I'm still skeptical of C being warranted, it is because:
And I guess all this also applies to A' vs B' vs C' and whether to bracket out near-term effects. Thanks for helping me identify these cruxes!
I'll take some more time to think about your point about bracketing out possibilities and AGI by date X.
And that's one way to interpret Anthony's first objection to bracketing? I can't actually pin down a specific ST location (or whatever value-bearer) where donating to AMF is determinately bad, but I still know for sure such locations exist! As I think you alluded to elsewhere while discussing ST bracketing and changes to agriculture/land use, what stops us from acting as if we could pin down such locations?
If you weren't doing [B] with moral weights, though, you would presumably have to worry about things other than effects on soil animals. So, ultimately, [B] remains an important crux for you.
(You could still say you'd prioritize reducing uncertainty about moral weights if you thought there was too much uncertainty to justify doing [B], but the results from such research might never be precise enough to be action-guiding. You might have to endorse [B] despite the ambiguity, or one of the three others.)
Extinction forecloses all option value — including the option for future agents to course-correct if we've made mistakes. Survival preserves the ability to solve new problems. This isn't a claim about net welfare across cosmic history; it's a claim about preserving agency and problem-solving capacity.
I think it still implicitly is a claim about net welfare across the cosmos. You have to believe that preserving option value will actually, eventually, lead to higher net welfare across the cosmos[1], a belief which I argue relies on judgment calls. (And the option-value argument for x-risk reduction was already somewhat infamous as a weak one in the GPR literature, including among x-risk reducers.)
You might say individuals can act on non-longtermist grounds while remaining longtermist-clueless. But this concedes that something must break the paralysis, and I'd argue that "preserve option value / problem-solving capacity" is a principled way to do so that doesn't require the full judgment-call apparatus you describe.
Nice, that's the crux! Yeah, so I tentatively find something like bracketing out long-term effects more principled (as a paralysis breaker) than option-value preservation. I have no clue whether reducing the agony of the many animals we can robustly help in the near term is overall good once indirect long-term effects are considered, but I find doing it anyway far more justifiable than "reducing x-risks and letting future people decide what they should do". I would prefer the latter if I bought the premises of the option-value argument for x-risk reduction, but then I wouldn't be clueless and wouldn't have a paralysis problem to begin with.
I don't see any good reason to believe enabling our descendants is impartially better than doing the exact opposite (both positions rely on judgment calls that seem arbitrary to me). However, I see good (non-longtermist) reasons to reduce near-term animal suffering rather than increase it.
Unless you intrinsically value the existence of Earth-originated agents or something, and in a way where you're happy to ignore the welfarist considerations that may leave you clueless on their own. In this case, you obviously think reducing P(extinction) is net positive. But then,
Nice, thanks! (I gave examples of charities/work where you're kinda agnostic because of a crux other than AI timelines, but this was just to illustrate.)
Assuming that saving human lives increases welfare, I agree doing it earlier increases welfare more if TAI happens earlier.
I had no doubt you thought this! :) I'm just curious whether you see reasons for someone to optimize assuming long AI timelines, despite low resilience in their high credence in long AI timelines.
(Hey Vasco!) How resilient is your relatively high credence that AI timelines are long?
And would you agree that the less resilient it is, the more you should favor interventions that are also good under short AI timelines? (E.g., the work of GiveWell's top charities over making people consume fewer unhealthy products, since the latter pays off far later, as you and Michael discuss in this thread.)
it seems pretty likely to me that aquatic noise reduces populations (and unlikely that it increases them), both fish and invertebrates, by increasing mortality and reducing fertility.
What about trophic cascades? Maybe the populations most directly affected and reduced by aquatic noise are essential for keeping overall wild animal populations down?
Do you think aquatic noise is like some specific forms of fishing that determinately reduce overall populations? Is it because you think it directly affects/reduces all populations (unlike some other specific forms of fishing) such that trophic cascades can hardly compensate?
if we're clueless whether Emily will feel pain or not then the difference disappears. In this case I don't have the pro-not-shooting bracketing intuition.
Should this difference matter if we're not difference-making risk-averse or something? In both cases, C is better for Emily in expectation (the same way reducing potential termite suffering is better for termites, in expectation, even if it might make no difference because they might not be sentient).
Now, a new thought experiment. Consider whatever intervention you find robustly overall good in the near term (without bracketing out any near-term effect), and replace A, B, and C with the following:
Do you have the pro-C' intuition, then? If yes, what's different from the sniper case?
Is the positive effect on wild animal welfare really your crux for finding GHD net positive? If yes, that means you think WAW is more pressing than improving human health. And it feels weird to advocate for improving human health, despite the meat-eating problem, because of wild animal suffering. If you really think that, it feels like you should just advocate for reducing wild animal suffering instead (unless you think GHD happens to be the best way to do that).