Academic philosopher, co-editor of utilitarianism.net, writes goodthoughts.blog
10% Pledge #54 with GivingWhatWeCan.org
Regarding the "world-destruction" reductio:
this isn't strong evidence against the underlying truth of suffering-focused views. Consider scenarios where the only options are (1) a thousand people tortured forever with no positive wellbeing whatsoever or (2) painless annihilation of all sentience. Annihilation seems obviously preferable.
I agree that it's obviously true that annihilation is preferable to some outcomes. I understand the objection as being more specific, targeting claims like:
(Ideal): annihilation is ideally desirable in the sense that it's better (in expectation) than any other remotely realistic alternative, including <detail broadly utopian vision here>. (After all, continued existence always has some chance of resulting in some uncompensable suffering at some point.)
or
(Uncompensable Monster): one being undergoing uncompensable suffering, at any point in history, suffices to render the entire universe net-negative or undesirable on net, no matter what else happens to anyone else. We must all (when judging from an impartial point of view) regret the totality of existence.
These strike me as extremely incredible claims, and I don't think that most of the proposed "moderating factors" do much to soften the blow.
I grant your "virtual impossibility" point that annihilation is not really an available option (to us, at least; future SAI might be another matter). But the objection is to the plausibility of the in principle verdicts entailed here, much as I would object to an account of the harm of death that implies that it would do no harm to kill me in my sleep (the force of which objection would not be undermined by my actually being invincible).
Moral uncertainty might help if it delivered the verdict that you should, all things considered, prefer positive-utilitarian futures (no matter their uncompensable suffering) over annihilation. But I'm not quite sure how moral uncertainty could deliver that verdict if you really regard the suffering as uncompensable. How could a lower degree of credence in ordinary positive goods rationally outweigh a higher degree of credence in uncompensable bads? It seems like you'd instead need to give enough credence to something even worse: e.g. violating an extreme deontic constraint against annihilation. But that's very hard to credit, given the above-quoted case where annihilation is "obviously preferable".
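To make the structural worry concrete, here is a toy calculation under a simple "maximize expected choiceworthiness" approach (purely illustrative on my part; the symbols $p$, $L$, and $v$ are placeholders, not anything from the exchange above). Let $p$ be your credence in the suffering-focused view, on which a future containing uncompensable suffering is lexically worse than annihilation (disvalue $-L$, with $L$ dominating any finite quantity), and $1-p$ your credence in an ordinary positive view, on which that same future beats annihilation by some finite margin $v$. Setting annihilation at $0$:

$$\mathrm{EC}(\text{future}) = p\,(-L) + (1-p)\,v \;<\; 0 = \mathrm{EC}(\text{annihilation}) \quad \text{for any } p > 0 \text{ and finite } v.$$

So even a minority credence in the uncompensable-suffering view swamps a larger credence in ordinary positive goods under straightforward expectation-maximizing; a fortiori, a majority credence does.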
The "irreversibility" consideration does seem stronger here, but I think ultimately rests on a much more radical form of moral uncertainty: it's not just that you should give some (minority) weight to other views, but that you should give significant weight to the possibility that a more ideally rational agent would give almost no weight to such a pro-annihilationist view as this. Some kind of anti-hubris norm along these lines should probably take priority over all of our first-order views. I'm not sure what the best full development of the idea would look like, though. (It seems pretty different from ordinary treatments of moral uncertainty!) Pointers to related discussion would be welcome!
I think a more promising form of suffering-focused ethics would explore some form of "variable value" approach, which avoids annihilationism in principle by allowing harms to be compensated (by sufficient benefits) when the alternative is no population at all, but introduces variable thresholds for various harms being specifically uncompensable by extra benefits beyond those basic thresholds. I'm not sure whether a view of this structure could be made to work, but it seems more worth exploring than pro-annihilationist principles.
Isn't the point of the Long Reflection to avoid "locking in" irreversible mistakes? Extinction, for example, is irreversible. But large population isn't. So I don't actually see any sense in which present "min-natalism" maintains more future "optionality" (or better minimizes moral risks) than pro-natalism. Both leave entirely open what future generations choose to do. They just differ in our present population target. And presently aiming for a "minimal population" strikes me as much the worse and riskier of the two options, for both intrinsic moral reasons and instrumental ones like misjudging / undershooting the minimally sustainable level.
Your executive summary (quoted below) appears to outright assert that quantification is "harmful" and "results in poor decision making". I don't think those claims are well-supported.
If you paint a picture that focuses only on negatives and ignores positives, it's apt to be a very misleading picture. There may be possible ways to frame such a project so that it comes off as just "one piece of the puzzle" rather than as trying to bias its readership towards a negative judgment. But it's an inherently risky/difficult undertaking (prone to moral misdirection), and I don't feel like the rhetorical framing of this article succeeds in conveying such neutrality.
A Utilitarian Ideology
The EA ideology, a set of moral ideas, values, and practices, includes problematic and harmful ideas. Specifically, the ideology ties morality to quantified impact which results in poor decision making, encourages ends justify the means reasoning, and disregards individuality, resulting in crippling responsibility on individuals and burnout.
Looking at EA's history can show us strong and in many cases negative influence from utilitarian ideas.
It also shows strong and, in vastly more cases, positive influence from (what you call) "utilitarian" ideas -- though these really ought to be more universal: ideas like that it's better to do more good than less, and that quantification can help us to make such trade-offs on the basis of something other than mere vibes.
Unless there's some reason to think that the negative outweighs the positive, you haven't actually given us any reason to think that "utilitarian influence" is a bad thing.
Quick sanity check: when I look at any other major social movement, it strikes me as vastly worse than EA (per person or $ spent), in ways that are very plausibly attributable to their being insufficiently "utilitarian" (that is, insufficiently concerned with effectiveness, insufficiently wide moral circles, and insufficiently appreciative of how strong our moral reasons are to do more good).
If you're arguing "EA should be more like every other social movement", you should probably first check whether those alternatives are actually doing a better job!
Mostly just changing old habits, plus the anticipated missing of some distinctive tastes that I enjoy. It's not an unreasonable ask or anything, but I'd much rather just donate more. (In general, I suspect there's insufficient social pressure on us to increase our donations to good causes, which also shouldn't be "so effortful", and we likely overestimate the personal value we get from marginal spending on ourselves.)
I don't understand the relevance of the correlation claim. People who care nothing for animals won't do either. But that doesn't show that there aren't tradeoffs in how to use one's moral efforts on the margins. (Perhaps you're thinking of each choice as a binary: "donate some" Y/N + "go vegan" Y/N? But donating isn't binary. What matters is how much you donate, and my suggestion is that any significant effort spent towards adopting a vegan diet might be better spent on further increasing one's donations. It depends on the details, of course. If you find adopting veganism super easy, like near-zero effort required, then great! Not much opportunity cost, then. But others may find that it requires more effort, which could be better used elsewhere.)
My main confusion with your argument is that I don't understand why donations don't also count as "personal ethics" or as "visible ethical action" that could likewise "ripple outward" and be replicated by others to good effect. (I also think the section on "equity" fundamentally confuses what ethics should be about. I care about helping beneficiaries, not setting up an "equitable moral landscape" among agents, if the latter involves preventing the rich from pursuing easy moral wins because this would be "unfair" to those who can't afford to donate.)
One more specific point I want to highlight:
...where harm is permissible as long as it's "offset" by a greater good
fwiw, my argument does not have this feature. I instead argue that:
(1) Purchasing meat isn't justified: the moral interests of farmed animals straightforwardly outweigh our interest in eating them. So buying a cheeseburger constitutes a moral and practical mistake. And yet:
(2) It would be an even greater moral and practical mistake to invest your efforts into correcting this minor mistake if you could instead get far greater moral payoffs by directing your efforts elsewhere (e.g. donations).
Thanks for your reply! Working backwards...
On your last point, I'm fully on board with strictly decoupling intrinsic vs instrumental questions (see, e.g., my post distinguishing telic vs decision-theoretic questions). Rather, it seems we just have very different views about what telic ends or priorities are plausible. I give ~zero credence to pro-annihilationist views on which it's preferable for the world to end rather than for any (even broadly utopian) future to obtain that includes severe suffering as a component. Such pro-annihilationist lexicality strikes me as a non-starter, at the most intrinsic/fundamental/principled levels. By contrast, I could imagine some more complex variable-value/threshold approach to lexicality turning out to have at least some credibility (even if I'm overall more inclined to think that the sorts of intuitions you're drawing upon are better captured at the "instrumental heuristic" level).
On moral uncertainty: I agree that bargaining-style approaches seem better than "maximizing expected choiceworthiness" approaches. But then if you have over 50% credence in a pro-annihilationist view, it seems like the majority view is going to straightforwardly win out when it comes to determining your all-things-considered preference regarding the prospect of annihilation.
Re: uncompensable monster: It isn't true that "orthodox utilitarianism also endorses this in principle", because a key part of the case description was "no matter what else happens to anyone else". Orthodox consequentialism allows that any good or bad can be outweighed by what happens to others (assuming strictly finite values). No one person or interest can ever claim to settle what should be done no matter what happens to others. It's strictly anti-absolutist in this sense, and I think that's a theoretically plausible and desirable property that your view is missing.
I don't think it's helpful to focus on external agents imposing their will on others, because that's going to trigger all kinds of instrumental heuristic norms against that sort of thing. Similarly, one might have some concerns about there being some moral cost to the future not going how humanity collectively wants it to. Better to just consider natural causes, and/or comparisons of alternative possible societal preferences. Here are some possible futures:
(A) Society unanimously endorses your view and agrees that, even though their future looks positive in traditional utilitarian terms, annihilation would be preferable.
(B) Society unanimously endorses my view and agrees that, even though existence entails some severe suffering, it is compensable and the future overall looks extremely bright.
For each, let "1" name the variant in which annihilation then actually occurs (via natural causes), and "2" the variant in which the future continues.
Intuitively: B2 > A2 > A1 > B1.
I think it would be extremely strange to think that B1 > B2, or that A1 > B2. In fact, I think those verdicts are instantly disqualifying: any view yielding those verdicts deserves near-zero credence.
(I think A1 is broadly similar to, though admittedly not quite as bad as, a scenario C1 in which everyone decides that they deserve to suffer and should be tortured to death, and then some very painful natural disaster occurs which basically tortures everyone to death. It would be even worse if people didn't want it, but wanting it doesn't make it good.)