Richard Y Chappell🔸

Associate Professor of Philosophy @ University of Miami
www.goodthoughts.blog/
Interests:
Bioethics

Bio


Academic philosopher, co-editor of utilitarianism.net, writes goodthoughts.blog

🔸10% Pledge #54 with GivingWhatWeCan.org

Comments

Thanks! I'd previously found it a bit stressful deciding which posts were relevant enough to share here, so I ended up outsourcing the decision to the good folks on the Forum team (who also take care of the cross-posting). Accordingly, a good share of the appreciation is owed to them! :-)

Your executive summary (quoted below) appears to outright assert that quantification is "harmful" and "results in poor decision making". I don't think those claims are well-supported.

If you paint a picture that focuses only on negatives and ignores positives, it's apt to be a very misleading picture. There may be possible ways to frame such a project so that it comes off as just "one piece of the puzzle" rather than as trying to bias its readership towards a negative judgment. But it's an inherently risky/difficult undertaking (prone to moral misdirection), and I don't feel like the rhetorical framing of this article succeeds in conveying such neutrality.

A Utilitarian Ideology

The EA ideology, a set of moral ideas, values, and practices, includes problematic and harmful ideas. Specifically, the ideology ties morality to quantified impact which results in poor decision making, encourages ends justify the means reasoning, and disregards individuality, resulting in crippling responsibility on individuals and burnout.

Looking at EA’s history can show us strong and in many cases negative influence from utilitarian ideas.

It also shows strong and, in vastly more cases, positive influence from (what you call) "utilitarian" ideas (though these really ought to be more universal: ideas like that it is better to do more good than less, and that quantification can help us make such trade-offs on the basis of something other than mere vibes).

Unless there's some reason to think that the negative outweighs the positive, you haven't actually given us any reason to think that "utilitarian influence" is a bad thing.

Quick sanity check: when I look at any other major social movement, it strikes me as vastly worse than EA (per person or $ spent), in ways that are very plausibly attributable to their being insufficiently "utilitarian" (that is, insufficiently concerned with effectiveness, insufficiently wide in their moral circles, and insufficiently appreciative of how strong our moral reasons are to do more good).

If you're arguing "EA should be more like every other social movement", you should probably first check whether those alternatives are actually doing a better job!

It's mostly not anything specific to going vegan. Just the general truism that effort used for one purpose could be used for something else instead. (Plus I sometimes donate extra precisely for the purpose of "offsetting", which I wouldn't otherwise be motivated to do.)

Mostly just changing old habits, plus anticipating that I'd miss certain distinctive tastes. It's not an unreasonable ask or anything, but I'd much rather just donate more. (In general, I suspect there's insufficient social pressure on us to increase our donations to good causes, which also shouldn't be "so effortful", and we likely overestimate the personal value we get from marginal spending on ourselves.)

I don't understand the relevance of the correlation claim. People who care nothing for animals won't do either. But that doesn't show that there aren't tradeoffs in how to use one's moral efforts on the margins. (Perhaps you're thinking of each choice as a binary: "donate some" Y/N + "go vegan" Y/N? But donating isn't binary. What matters is how much you donate, and my suggestion is that any significant effort spent towards adopting a vegan diet might be better spent on further increasing one's donations. It depends on the details, of course. If you find adopting veganism super easy, like near-zero effort required, then great! Not much opportunity cost, then. But others may find that it requires more effort, which could be better used elsewhere.)

My main confusion with your argument is that I don't understand why donations don't also count as "personal ethics" or as "visible ethical action" that could likewise "ripple outward" and be replicated by others to good effect. (I also think the section on "equity" fundamentally confuses what ethics should be about. I care about helping beneficiaries, not setting up an "equitable moral landscape" among agents, if the latter involves preventing the rich from pursuing easy moral wins because this would be "unfair" to those who can't afford to donate.)

One more specific point I want to highlight:

...where harm is permissible as long as it’s “offset” by a greater good

fwiw, my argument does not have this feature. I instead argue that:

(1) Purchasing meat isn’t justified: the moral interests of farmed animals straightforwardly outweigh our interest in eating them. So buying a cheeseburger constitutes a moral and practical mistake. And yet:

(2) It would be an even greater moral and practical mistake to invest your efforts into correcting this minor mistake if you could instead get far greater moral payoffs by directing your efforts elsewhere (e.g. donations).

Just to clarify: Spears & Geruso's argument is that average (and not just total) quality of life will be significantly worse under depopulation relative to stabilization. (See especially the "progress comes from people" section of my review.)

The authors discuss this a bit. They note that even "higher fertility" subcultures are trending down over time, so it's not sufficiently clear that anyone is going to remain "above replacement" in the long run. That said, this does seem the weakest point for thinking it an outright extinction risk. (Though especially if the only sufficiently high-fertility subcultures are relatively illiberal and anti-scientific ones - Amish, etc. - the loss of all other cultures could still count as a significant loss of humanity's long-term potential! I hope it's OK to note this; I know the mods are wary that discussion in this vicinity can often get messy.)

I wrote "perhaps the simplest and most probable extinction risk". There's room for others to judge another more probable. But it's perfectly reasonable to take as most probable the only one that is currently on track to cause extinction. (It's hard to make confident predictions about any extinction risks.) I think it would be silly to dismiss this simply due to uncertainty about future trends.
