Also on LessWrong (with different essays).
I recommend decreasing the uncertainty about effects on soil animals and microorganisms by making donations to Rethink Priorities (RP) that are restricted to projects on those organisms.
Does RP have such projects?
In a lot of crazy-train frameworks, the existence of people is net negative, so a large future for humanity would be the worst thing that could happen.
Curious to know why you think these frameworks are crazier than the frameworks that say it's net positive.
Or are you saying it's too crazy in both cases and that we should reduce extinction risks (or at least not increase them) for non-longtermist reasons?
I don't see how longtermism solves this. It doesn't cancel the argument that, e.g., what matters most is the conscious sub-people you might have in your brain. It just adds "in the long term" to it.
What makes you believe reducing x-risks (or whatever longtermist project) does more good than harm, considering all sub-people in the long term? (Or atoms, or beneficiaries of acausal trade, or whatever.)
My preferred solution to the crazy-town problem fwiw: modeling our uncertain beliefs with imprecise probabilities. I find this well-motivated anyway, but this happens to break at least the craziest Pascalian wagers, assuming plausible imprecise credences (see DiGiovanni 2024).
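As a toy illustration (my numbers, purely hypothetical, not from the paper): suppose a Pascalian wager costs a modest $c > 0$ and promises a payoff of $V = 10^{20}$, and your credence that it pays off is not a sharp number but the interval $p \in [0,\, 10^{-15}]$. Then

$$\mathbb{E}[U] = pV - c \in [\,-c,\; 10^{5} - c\,],$$

an interval that straddles zero. So under a decision rule like maximality, taking the wager is not determinately better than refusing it, and the wager loses its grip.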
Thanks for the reply :)
> It could eventually make possible the kind of insight you describe (e.g. discovering a selection pressure that would flip X-risk reduction from undefined to positive).
Absolutely, although it could also lead people to mistakenly think they have found such an insight and do something that turns out to be bad for the long term. The crux then becomes whether we have any good reason to believe the former is determinately more likely than the latter, despite severe unawareness. To make a convincing case that it is, one really needs to seriously engage with the arguments for unawareness-driven cluelessness. Otherwise, one is always going to give arguments that have already been debunked or that are way too vague for the crux to be identified, and the discussion never advances.
This (and references therein) suggests that we should still be clueless about whether (even) ontological longtermism would do more good than harm overall. This is for the same reasons why we should arguably be clueless about other longtermist projects (a position which you seem sympathetic to?).
Curious what you think of this.
How does the paper relate to your Reasons-based choice and cluelessness post? Is the latter just a less precise and informal version of the former, or is there some deeper difference I'm missing?
Interesting.
Well, let me literally take Anthony's first objection and replace the words to make it apply to the Emily case:
> There are many different ways of carving up the set of “effects” according to the reasoning above, which favor different strategies. For example: I might say that I’m confident that ~~an AMF donation saves lives~~ giving Emily the order to stand down makes her better off, and I’m clueless about its long-term effects overall (of this order, due to cluelessness about which of the terrorist and the child will be shot). Yet I could just as well say I’m confident that there’s some nontrivially likely possible world containing an astronomical number of happy lives (thanks to the terrorist being shot and not the kid), which ~~the donation~~ my order makes less likely via potentially ~~increasing x-risk~~ preventing the terrorist (and luckily not the kid) from being shot, and I’m clueless about all the other effects overall. So, at least without an argument that some decomposition of the effects is normatively privileged over others, Option 3 won’t give us much action guidance.
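To make the two carvings explicit (my notation, not Anthony's): writing $\Delta U$ for the overall effect of my order, the objection is that we can decompose it either as

$$\Delta U = \underbrace{\Delta U_{\text{Emily's welfare}}}_{\text{confidently }>0} + \underbrace{\Delta U_{\text{everything else}}}_{\text{clueless}} \quad\text{or as}\quad \Delta U = \underbrace{\Delta U_{\text{happy-lives world}}}_{\text{confidently }<0} + \underbrace{\Delta U_{\text{everything else}'}}_{\text{clueless}},$$

and Option 3 by itself doesn't tell us which "everything else" residual we are allowed to bracket.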
When I wrote the comment you responded to, it just felt to me like only the former decomposition was warranted in this case. But, since then, I'm not sure anymore. It surely feels more "natural", but that's not an argument...
Is your intuition strongly that Emily should stand down for option 3 reasons, or merely that Emily should stand down?
The former, although I might ofc be lying to myself.
Nice, thanks. To the extent that noise does generally wash out our impact over time, my impression is that the effect of increasing the human population over the next 100 years on long-term climate change may be a good counterexample to this general tendency.
Not all long-term effects are equally significant (relative to near-term effects). A ripple on a pond barely lasts, but current science gives us good indications that (i) carbon released into the atmosphere lingers for tens of thousands of years, and (ii) increased carbon in the atmosphere plausibly hugely affects the total soil-nematode population (see, e.g., Tomasik's writings on climate change and wild animals)[1]. Afaict, it is not effects like (i) and (ii) that Bernard's post studies. I don't see why we should extrapolate from his post that there has to be something that makes us mistaken about (i) and/or (ii), even if we can't say exactly what.
Again, we might have no clue about the direction, but the long-term effect is still there.
What work that may reduce animal suffering looks effective (or positive at all) when we don't ignore or downplay the risks of backfiring via unintended indirect effects? See What to do about near-term cluelessness in animal welfare.