Richard Y Chappell

Associate Professor of Philosophy @ University of Miami
4565 karma · Joined Dec 2018




Yes, I agree it seems important to have marketers and PR people to craft persuasive messaging for mass audiences. That's not what I'm trying to do here, nor do I think it would make any sense for me to shift into PR -- it wouldn't be a good personal fit. My target audience is academics and "academic-adjacent" audiences, and as a philosopher my goal is to make clear what's philosophically justified, not to manipulate anyone through non-rational means. I think this is an important role, for reasons explained in some of the footnotes to my posts there. But I also agree it's not the only important role, and it would plausibly be good for EA to additionally have more mass-market appeal.  It takes all sorts.

fyi, I weakly downvoted this because (i) you seem like you're trying to pick a fight and I don't think it's productive; there are familiar social ratcheting effects that incentivize exaggerated rhetoric on race and gender online, and I don't think we should encourage that. (There was nothing in my comment that invited this response.) (ii) I think you're misrepresenting Trace. (iii) The "expand your moral circle" comment implies, falsely, that the only reason one could have for tolerating someone with bad views is that you don't care about those harmed by their bad views.

I did not mean the reference to Trace to function as a conversation opener. (Quite the opposite!) I've now edited my original comment to clarify the relevant portion of the tweet. But if anyone wants to disagree with Trace, maybe start a new thread for that rather than replying to me. Thanks!

I'd just like to clarify that my blogroll should not be taken as a list of "worthy figure[s] who [are] friend[s] of EA"!  They're just blogs I find often interesting and worth reading. No broader moral endorsement implied!

fwiw, I found TracingWoodgrains' thoughts here fairly compelling.

ETA, specifically:

I have little patience with polite society, its inconsistencies in which views are and are not acceptable, and its games of tug-of-war with the Overton Window. My own standards are strict and idiosyncratic. If I held everyone to them, I'd live in a lonely world, one that would exclude many my own circles approve of. And if you wonder whether I approve of something, I'm always happy to chat.

Thanks, that's very helpful!  I do want my points to be forceful, but I take your point that overdoing it can be counterproductive.  I've now slightly moderated that sentence to instead read, "Wenar is here promoting a general approach to practical reasoning that is systematically biased (and predictably harmful as a result): a plain force for ill in the world."

Right, that's why I also take care to emphasize that responsible criticism is (pretty much) always possible, and describe in some detail how one can safely criticize "Good Things" without being susceptible to charges of moral misdirection.

Thanks, that's helpful feedback. I guess I was too focused on making it concise, rather than easily understood.

This is an important point. People often confuse harm/benefit asymmetries with doing/allowing asymmetries. Wenar's criticism seems to rest on the latter, not the former. Note that if all indirect harms are counted within the constraint against causing harm, almost all actions would be prohibited. (And on any plausible restriction, e.g. to "direct harms", it would no longer be true that charities do harm. Wenar's concerns involve very indirect effects. I think it's very unlikely that there's any consistent and plausible way to count these as having disproportionate moral weight. To avoid paralysis, such unintended indirect effects just need to be weighed in aggregate, balancing harms done against harms prevented.)

I found it a bit hard to discern what constructive points he was trying to make amidst all the snark. But the following seemed like a key passage in the overall argument:

Making responsible choices, I came to realize, means accepting well-known risks of harm. Which absolutely does not mean that “aid doesn’t work.” There are many good people in aid working hard on the ground, often making tough calls as they weigh benefits and costs. Giving money to aid can be admirable too—doctors, after all, still prescribe drugs with known side effects. Yet what no one in aid should say, I came to think, is that all they’re doing is improving poor people’s lives.

... This expert tried to persuade Ord that aid was much more complex than “pills improve lives.” Over dinner I pressed Ord on these points—in fact I harangued him, out of frustration and from the shame I felt at my younger self. Early on in the conversation, he developed what I’ve come to think of as “the EA glaze.”... Ord, it seemed, wanted to be the hero—the hero by being smart—just as I had. Behind his glazed eyes, the hero is thinking, “They’re trying to stop me.”

Putting aside the implicit status games and weird psychological projection, I don't understand what practical point Wenar is trying to make here. If the aid is indeed net good, as he seems to grant, then "pills improve lives" seems like the most important insight not to lose sight of. And if someone starts "haranguing" you for affirming this important insight, it does seem like it could come across as trying to prevent that net good from happening. (I don't see any reason to personalize the concern, as about "stopping me" -- that just seems blatantly uncharitable.)

It sounds like Wenar just wants more public affirmations of causal complexity to precede any claim about our potential to do good? But it surely depends on context whether that's a good idea. Too much detail, especially extraneous detail that doesn't affect the bottom line recommendation, could easily prove distracting and cause people (like, seemingly, Wenar himself) to lose sight of the bottom line of what matters most here.

So that section just seemed kind of silly. There was a more reasonable point mixed in with the unreasonable in the next section:

GiveWell still doesn’t factor in many well-known negative effects of aid... Today GiveWell’s front page advertises only the number of lives it thinks it has saved. A more honest front page would also display the number of deaths it believes it has caused.

The initial complaint here seems fine: presumably GiveWell could (marginally) improve their cost-effectiveness models by trying to incorporate various risks or costs that it sounds like they currently don't consider. Mind you, if nobody else has any better estimates, then complaining that the best-grounded estimates in the world aren't yet perfect seems a bit precious. Then the closing suggestion that they prominently highlight expected deaths (from indirect causes like bandits killing people while trying to steal charity money) is just dopey. Ordinary readers would surely misread that as suggesting that the interventions were somehow directly killing people. Obviously the better-justified display is the net effect in lives saved. But we're not given any reason to expect that GiveWell's current estimates here are far off.

Q: Does Wenar endorse inaction?

Wenar's "most important [point] to make to EAs" (skipping over his weird projection about egotism) is that "If we decide to intervene in poor people's lives, we should do so responsibly—ideally by shifting our power to them and being accountable for our actions."

The overwhelming thrust of Wenar's article -- from the opening jab about asking EAs "how many people they’ve killed", to the conditional I bolded above -- seems to be to frame charitable giving as a morally risky endeavor, in contrast to the implicit safety of just doing nothing and letting people die.

I think that's a terrible frame. It's philosophically mistaken: letting people die from preventable causes is not a morally safe or innocent alternative (as is precisely the central lesson of Singer's famous article). And it seems practically dangerous to publicly promote this bad moral frame, as he is doing here. The most predictable consequence is to discourage people from doing "riskily good" things like giving to charity. Since he seems to grant that aid is overall good and admirable, it seems like by his own lights he should regard his own article as harmful. It's weird.

(If he just wants to advocate for more GiveDirectly-style anti-paternalistic interventions that "shift our power to them", that seems fine but obviously doesn't justify the other 95% of the article.)

There was meant to be an "all else equal" clause in there (as usually goes without saying in these sorts of thought experiments) -- otherwise, as you say, the verdict wouldn't necessarily indicate underlying non-utilitarian concerns at all.

Perhaps easiest to imagine if you modify the thought experiment so that your psychology (memories, "moral muscles", etc.) will be "reset" after making the decision. I'm talking about those who would insist that you still ought to save the one over the two even then -- no matter how the purely utilitarian considerations play out.

It's fine to offer recommendations within suboptimal cause areas for ineffective donors. But I'm talking about worldview diversification for the purpose of allocating one's own (or OpenPhil's own) resources genuinely wisely, given one's (or: OP's) warranted uncertainty.
