I'm not sure about this, though. As I wrote in a previous comment:
The reasons to do various parochial things, or respect deontological constraints, aren't like this. They aren't grounded in something like "this thing out there in the world is horrible, and should be prevented wherever/whenever it is [or whoever causes it]".
The concern I've tried to convey in our discussion so far is: Insofar as our moral reasons for action are grounded in "this thing out there in the world is horrible, and should be prevented wherever/whenever it is [or whoever causes it]", shining the spotlight of our active altruism on beings who happen to be salient/near to us is arbitrary. To me, active "altruism" per se[1] is pretty inextricable from anti-arbitrariness.
And I'm saying: suppose for a moment we're no longer trying to be actively altruistic, and instead consider normative reasons that aren't grounded in the above. Then prioritizing those whom you actually have special relationships with isn't arbitrary in the relevant sense, because those relationships give you a reason to prioritize them. (Of course, if we started from an impartial altruistic perspective, this reason would be dwarfed by our duty to reduce large-scale suffering overall, insofar as that's tractable! But the worry is that it's not.)
Is your position something like, "We also have special relationships with strangers who are near to us"? I might be sympathetic to that, but it seems like it needs more unpacking.
Like I said, I do share the LAR intuition in some limited contexts, and it would be pretty sad if there's no non-arbitrary way to make sense of active altruism at all. I find this situation unsettling. But I currently feel confused as to how much I honestly endorse LAR.
As opposed to deontological(-ish) prohibitions against harming strangers.
(I unfortunately don't have time to engage with the rest of this comment, just want to clarify the following:)
Indeed, bracketing off "infinite ethics shenanigans" could be seen as an implicit acknowledgment of such a de-facto breakdown or boundary in the practical scope of impartiality.
Sorry this wasn't clear — I in fact don't think we're justified in ignoring infinite ethics. In the footnote you're quoting, I was simply erring on the side of being generous to the non-clueless view, to make things easier to follow. So my core objection doesn't reduce to "problems with infinities"; rather, I object to ignoring considerations that dominate our impact for no particular reason other than practical expedience. :) (ETA: Which isn't to say we need to solve infinite ethics to be justified in anything.)
I've replied to this in a separate Quick Take. :) (Not sure if you'd disagree with any of what I write, but I found it helpful to clarify my position. Thanks for prompting this!)
Musings on non-consequentialist altruism under deep unawareness
(This is a reply to a comment by Magnus Vinding, which ended up seeming like it was worth a standalone Quick Take.)
From Magnus:
For example, if we walk past a complete stranger who is enduring torment and is in need of urgent help, we would rightly take action to help this person, even if we cannot say whether this action reduces total suffering or otherwise improves the world overall. I think that's a reasonable practical stance, and I think the spirit of this stance applies to many ways in which we can and do benefit strangers, not just to rare emergencies.
The intuition here seems to be, "trying to actively do good in some restricted domain is morally right (e.g., virtuous), even when we're not justified in thinking this will have net-positive consequences[1] according to impartial altruism". Let's call this intuition Local Altruism is Right (LAR). I'm definitely sympathetic to this. I just think we should be cautious about extending LAR beyond fairly mundane "common sense" cases, especially to longtermist work.
For one, the reason most of us bothered with EA interventions was to do good "on net" in some sense. We weren't explicitly weighing up all the consequences, of course, but we didn't think we were literally ignoring some consequences — we took ourselves to be accounting for them with some combination of coarse-grained EV reasoning, heuristics, "symmetry" principles, discounting speculative stuff, etc. So it's suspiciously convenient if, once we realize that that reason was confused, we still come to the same practical conclusions.
Second, for me the LAR intuition goes away upon reflection unless at least the following hold (caveat in footnote):[2]
Some examples:
None of which is to say I have a fleshed-out theory! I'm keen to think more about what non-consequentialist altruism under unawareness might look like.
I mean to include Clifton's Option 3 as a possible operationalization of "net-positive consequences according to impartial altruism".
In the definition of LAR, "trying to actively do good" is the key phrase. I find it pretty intuitive that we don't need conditions nearly as strong as (1)+(2) below when we're asking, "Should you refrain from doing [intuitively evil thing]?"
Maybe the most promising angle is to show that it's normatively relevant that our beliefs about the more distant moral patients are (qualitatively?) less grounded in good reasons (see Clifton).
For example, the following passages seem to use these terms as though they must imply consequentialism
I don't understand why you think this, sorry. "Accounting for all our most significant impacts on all moral patients" doesn't imply consequentialism. Indeed, I've deliberately avoided saying unawareness is a problem for "consequentialists", precisely because non-consequentialists can still take net consequences across the cosmos to be the reason for their preferred intervention. My target audience practically never appeals to distributive impartiality, or impartial application of deontological principles, when justifying EA interventions (and I would be surprised if many people would use the word "altruism" for either of those things). I suppose I could have said "impartial beneficence", but that's not as standard.
Those claims seem to assume that all the alternatives are wholly implausible (including consequentialist views that involve weaker or time-adjusted forms of impartiality). But that would be a very strong claim.
Can you say more why you think it's very strong? It's standard within EA to dismiss (e.g.) pure time discounting as deeply morally implausible/arbitrary, and I concur with that near-consensus.[1] (Even if we do allow for views like this, we face the problem that different discount rates will often give opposite verdicts, and it's arbitrary how much meta-normative weight we put on each discount rate.) And I don't expect a sizable fraction of my target audience to appeal to the views you mention as the reasons why they work on EA causes. If you think otherwise, I'm curious for pointers to evidence of this.
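To illustrate the parenthetical point with purely made-up numbers (a toy sketch, not anything anyone has proposed): suppose intervention A prevents 100 units of suffering today, while intervention B prevents $10^6$ units of suffering 200 years from now. Then the verdict flips with the discount rate $r$:

At $r = 1\%$: $10^6 \times 1.01^{-200} \approx 1.4 \times 10^5 > 100$, so B wins.

At $r = 6\%$: $10^6 \times 1.06^{-200} \approx 8.7 < 100$, so A wins.

Nothing about the moral facts seems to privilege one rate over the other, which is why the choice of meta-normative weights ends up doing all the work.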
Some EAs are sympathetic to discounting in ways that are meant to avoid infinite ethics problems. But I explained in footnote 4 that such views are also vulnerable to cluelessness.
Some people think we’re entirely clueless, so that we haven’t the faintest clue about which actions will benefit the far future. I disagree with this position for reasons Richard Y Chappell has explained very persuasively. It would be awfully convenient if after learning that the far future has nearly all the expected value in the world, it turned out that this had no significant normative implications.
What do you think about the argument for cluelessness from rejecting precise expected values in the first place (which I partly argue for here)?
Thanks for this, Magnus; I have complicated thoughts on this point, hence my late reply! To some extent I'll punt this to a forthcoming Substack post, but FWIW:
As you know, relieving suffering is profoundly important to me. I'd very much like a way to make sense of this moral impulse in our situation (and I intend to reflect on how to do so).
But, importantly, the problem isn't that we don't know "the single best thing" to do. It's that, if I don't ignore my effects on far-future (etc.) suffering, I have no particular reason to think I'm "relieving suffering" overall. Rather, I'm plausibly increasing or decreasing suffering elsewhere, quite drastically, and I can't say these effects cancel out in expectation. (Maybe you're thinking of "Option 3" in this section? If so, I'm curious where you disagree with my response.)
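A toy illustration of what "can't say these effects cancel out" means here, with invented numbers: suppose an action relieves 1 unit of suffering nearby with near-certainty, but also shifts the probability of a far-future outcome involving $10^{10}$ units of suffering, and my evidence only constrains that shift $\Delta p$ to something like the interval $[-10^{-6}, +10^{-6}]$. Then the total suffering prevented is

$$\underbrace{1}_{\text{local relief}} \;-\; \underbrace{10^{10}\,\Delta p}_{\Delta p \,\in\, [-10^{-6},\,10^{-6}]} \;\in\; [\,1 - 10^{4},\; 1 + 10^{4}\,].$$

The local benefit is swamped, and the sign of the total is indeterminate; there's no precise expectation under which the far-future term simply washes out.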
The reason suffering matters so deeply to me is the nature of suffering itself, regardless of where or when it happens — presumably you'd agree. From that perspective, and given the above, I'm not sure I understand the motivation for your view in your second paragraph. (The reasons to do various parochial things, or respect deontological constraints, aren't like this. They aren't grounded in something like "this thing out there in the world is horrible, and should be prevented wherever/whenever it is [or whoever causes it]".)
I think such a view might also be immune to the problem, depending on the details. But I don't see any non-ad hoc motivation for it. Why would sentient beings' interests matter less intrinsically when those beings are more distant or harder to precisely foresee?
(I'm open to the possibility of wagering on the verdicts of this kind of view due to normative uncertainty. But different discount rates might give opposite verdicts. And it seems like a subtle question when this wager becomes too Pascalian. Cf. my thoughts here.)
Which of the empirical beliefs you hold would have to change for this to be the case?
For starters, we'd either need:
(Sorry if this is more high-level than you're asking for. The concrete empirical factors are elaborated in the linked section.)
Re: your claim that "expected effects of actions decrease over time and space": To me the various mechanisms for potential lock-in within our lifetimes seem not too implausible. So it seems overconfident to have a vanishingly small credence that your action makes the difference between two futures of astronomically different value. See also Mogensen's examples of mechanisms by which an AMF donation could affect extinction risk. But please let me know if there's some nuance in the arguments of the posts you linked that I'm not addressing.
Sorry, I'm having a hard time understanding why you think this is defensible. One view you might be gesturing at is:
But this reasoning doesn't seem to hold up for the same reasons I've given in my critiques of Option 3 and Symmetry. So I'm not sure what your actual view is yet. Can you please clarify? (Or, if the above is your view, I can try to unpack why my critiques of Option 3 and Symmetry apply just as well here.)