Musings on non-consequentialist altruism under deep unawareness
(This is a reply to a comment by Magnus Vinding, which ended up seeming like it was worth a standalone Quick Take.)
From Magnus:
The intuition here seems to be, "trying to actively do good in some restricted domain is morally right (e.g., virtuous), even when we're not justified in thinking this will have net-positive consequences[1] according to impartial altruism". Let's call this intuition Local Altruism is Right (LAR). I'm definitely sympathetic to this. I just think we should be cautious about extending LAR beyond fairly mundane "common sense" cases, especially to longtermist work.
For one, the reason most of us bothered with EA interventions was to do good "on net" in some sense. We weren't explicitly weighing up all the consequences, of course, but we didn't think we were literally ignoring some consequences; rather, we took ourselves to be accounting for them with some combination of coarse-grained EV reasoning, heuristics, "symmetry" principles, discounting speculative stuff, etc. So it's suspiciously convenient if, once we realize that this reason was confused, we still come to the same practical conclusions.
Second, the LAR intuition goes away for me upon reflection unless at least the following hold (caveat in footnote):[2]
1. The "restricted domain" isn't too contrived in some sense, rather it's some natural-seeming category of moral patients or welfare-relevant outcome.
1. (How we delineate "contrived" vs. "not contrived" is of course rather subjective, which is exactly why I'm suspicious of LAR as an impartial altruistic principle. I'm just taking the intuition on its own terms.)
2. I'm at least justified in (i) expecting my intervention to do good overall in that domain, and (ii) expecting not to have large off-target effects of indeterminate net sign in domains of similar "speculativeness" (see "implementation robustness").
1. ("Speculativeness", too, is subjective. And whil