
Forthcoming in Public Affairs Quarterly:

Effective altruism sounds so innocuous—who could possibly be opposed to doing good, more effectively? Yet it has inspired significant backlash in recent years. This paper addresses some common misconceptions, and argues that the core "beneficentric" ideas of effective altruism are both excellent and widely neglected. Reasonable people may disagree on details of implementation, but all should share the basic goals or values underlying effective altruism.


Woah, a really nice article that identified the most common criticisms of EA that I've come across, namely, cause prioritization, earning to give, billionaire philanthropy, and longtermism. Funnily enough, I've come across these criticisms on the EA forum more than anywhere else! 

But it's nice to see a well-researched, external, and in-depth review of EA's philosophy, and as a non-philosopher, I found it really accessible too. I would like to see an article of a similar style arguing against EA principles though. Does anyone know where I can find something like that? A search for EA criticism on the web brings up angry journalists and media articles that often miss the point. 

Does anyone know where I can find something like that?

You can take a look at the ‘Further reading’ section of criticism of effective altruism, the articles so tagged, and the other tags starting with “criticism of” in the ‘Related entries’ section.

Thanks for sharing!  fyi, I've written up a summary of the main themes of the paper here.

(And seconding Jordan's request for "an article of a similar style arguing against EA principles". My suspicion is that none can exist because there's no reasonable way to make such an argument; insinuation and "political" critique is all that the critics have got. But I'd love to be proven wrong!)

I would strongly push back against the idea that “insinuation and ‘political’ critique’” are all that critics have. Currently posting from my phone before bed, but happy to follow up at a later date once I have some free time with a more in depth and substantive discussion on the matter if you’d be interested :)

For this quick message, though, I hope it is at least fair to suggest that dismissing critiques offhand is potentially risky: we are naturally inclined to steelman our own favored conclusions and strawman the arguments against them, which doesn't do us any favors epistemologically speaking.

Definitely interested to hear your substantive views when you have time! (All views are risky. I'm just honestly reporting my current opinion, based on what I've read to date. Happy to update after hearing more, though.)

*Edit: I accidentally hit Save before I was finished, went back to finish

*I started writing this the week after your reply but went down too deep of a rabbit hole and didn't get around to finishing it. Apologies for the delay! Note, the first portion was written 3 months ago (Novemberish 2023) and the latter portion was written today (12 Feb 2024)


Ok - I've had a bit more time to read through some of your writing and some of the comments to give myself a little context and hopefully I can contribute a bit more meaningfully now.

Before getting into details though, probably best to frame things:

  1. My initial comment was solely aimed at responding to your original comment in this thread in a relative vacuum without having read through the paper or summary. Now that I've read the summary you shared[1], I imagine that we could have a much longer discussion on quite a few different points where we may productively disagree -> however, to keep things concise I'll be focusing the discussion here on this specific line from your comment above: "My suspicion is that [no article of a similar style arguing against EA principles] can exist because there's no reasonable way to make such an argument; insinuation and "political" critique is all that the critics have got"
  2. As I had not, at the time of my original comment, read further, I was not aware of your definitions of "insinuation" and "political critique" -> now, having read more, it would probably be helpful to clearly share those definitions here, as I understand them from your writing. (If I've misunderstood, please let me know!)
    1. Insinuation: any critical, disparaging, or otherwise negative commentary that is made without significant explanation, evidence, reasoning, good-faith argumentation, or further context.
    2. Political Critique: criticism that focuses not upon principles, but rather on practical, real-world matters. [2]
  3. While I have personally engaged with people who have presented many critiques of Effective Altruism, I've never tried to assess criticism systematically, and most of my familiarity with critiques of EA comes from undocumented, anecdotal encounters. I also don't regularly read or subscribe to many of the various media wherein formalized criticism of EA might be most common, so I'm not very familiar with whatever existing body of external criticism there is[3]. It is probably worthwhile to distinguish which kinds of criticism we want to address:
    1. Formal critiques: Pieces of criticism that are documented and were made with at least a reasonable degree of intentionality, thought, and a clear purpose of arguing against some aspect of/associated with EA. Examples may include academic and non-academic articles, in-depth blog posts, podcasts, pieces of journalism, formal debates, books, etc. But probably better to not consider idle social media commentary, one-sided ranting in informal settings, or casual anecdotal conversation
    2. External critiques: Pieces of criticism that come from sources that don't identify as part of the EA movement. While there is plenty of criticism shared by and among people under the EA umbrella, I posit that external criticism provides some unique value as it seems more likely to represent 'public opinion', to consider factors that may be neglected within EA, to propose different ways of thinking than those commonly used within EA, and to be less biased by various 'in-group' effects

^I hope this sounds reasonable - if you'd like to modify any points please let me know :)

On another note, at some point (time permitting) I would love to flesh out a more comprehensive post synthesizing and summarizing criticism of EA in a more rigorous, systematic and thoughtful way. However, a project like that seems like it would take quite a bit of work and collaboration, so I'm not too optimistic I'll be able to take it on personally (at least not in the near future) :( 

Examples of (semi-)Formal Criticism

Here I've collected an incomplete list of several critiques of EA, sorted by my best guess of where they fall along several relevant criteria.

Concerns about Narrow Goal-Posts and dismissing 'Political' Criticism

"As an academic, I think we should assess claims primarily on their epistemic merits, not their practical consequences." (from page 33 of your paper) -> From a purely academic philosophical perspective I could understand this claim if the word 'epistemic' were replaced with a term like 'ethical', 'logical', or 'philosophical', as the basic tenets of EA are pretty defensible on paper. However, the word 'epistemic' relates to knowledge, and generally considers evidence alongside logic. To ignore 'practical consequences' would be to ignore a large body of evidence that may help inform our perspective on EA's merits. Of course, there are many confounding variables that obscure the relationship between the core philosophical tenets of EA and the 'practical consequences' of EA, and these should lead us to think carefully before updating our perspective on EA's merits based on any one piece of real-world evidence. However, to deprioritize practical consequences entirely seems likely to make us miss out on some key considerations.

Let's imagine that EA's core ideas are applied in many different scenarios and that, separately, a randomized sample of mainstream ethical frameworks is applied in those same scenarios. If, after a statistically robust number of trials, we observed that the EA-applied scenarios led to worse outcomes on average than the other group, it would certainly lead me to question the epistemic merits of EA's core claims. While this level of experimental rigor would be impractical, I believe a naturalistic observation comparing the successes and failings of EA vs. equivalent non-EA frameworks would be a reasonable proxy for modestly bolstering or weakening (updating) my perception of the merits of EA's core tenets.

Additionally, given the focus within Effective Altruism on applied ethics, highlighted by the word "Effective" in the name itself, it seems to me that one of the core claims is that it is important to examine practical consequences when evaluating how good or bad an idea is. To assess the merit of EA's core ideas purely on non-'political' critique seems to run counter to those very ideas. In fact, I would imagine that a good-faith interpretation of EA's core principles would lead one to rigorously assess all kinds of critiques, philosophical as well as political, to constantly update our beliefs and actions.

Circling back to your paper, on pages 33 & 34 you continue:

But insofar as the political critique disavows this academic norm, it must also expose itself to practical evaluation. And in this case, the harm it risks is clear and grave. Political opponents of effective altruism have very likely caused the deaths of a great many children.*

*In the counterfactual sense that, had they not acted thus, those deaths would not have occurred. Which is not, of course, to claim that they are the direct cause of death.

Personally, I don't find this argument particularly compelling because 1) it lumps all political opponents of EA into one group, 2) it makes a very large claim with no supporting evidence, and 3) the hypothetical 'political' wrongness of the critics doesn't affect the hypothetical 'political' wrongness of EA (this seems like a form of whataboutism[4]). Of course, I'm sure you have many more perfectly legitimate arguments for why we shouldn't place undue credence in political critiques, but I would like to see that debate fleshed out further than it has been in this discussion before I am convinced.

Side note, JerL's comment on your Substack Post raises some points I find compelling :)

Concerns about how we approach engaging with Criticism of EA

I posit that people in the EA space should be more receptive to criticism from outside of EA, even if it is flawed by EA standards, for several reasons:

  • People in EA, even those who have trained in 'good epistemics', are still susceptible to any number of biases that could lead us to under-value external critique and over-value things that confirm our views
  • Engaging in good faith with diverse critiques of EA aligns with several of the core values of EA
  • The way people in the EA space behave in response to criticism can have an impact -> responding to criticism with openness and empathy is likely to lead to better outcomes for EA

Regardless of how 'correct' or not EA's principles are, the way that people in the EA orbit absorb, assess, and respond to criticism is important and can have real consequences. I have noticed a trend, both on the EA Forum and in discussions with people from EA-aligned organizations at EAGs and other EA events, that the most popular responses to external criticism of EA tend to be highly dismissive and focus more on tearing down the arguments of the critic rather than making a good-faith effort to engage with the critic's underlying sentiment and intention.

EA, as you have cited, places a very high value on self-critique and has invested in a significant number of diverse initiatives to promote such critique, such as the red-teaming contest. However, such criticism suffers from a huge blind spot, as people who are already associated with EA enough to participate in that type of critique are a severely biased sample.

It can often seem like critiques of EA from people outside the EA space are only taken seriously by EAs if those critiques mold themselves to meet the specific criteria, argumentative formulations, and style preferred by people within the EA space. If that is the case (it could just be my personal perception!), then we risk missing out on the diverse perspectives of the vast majority of people who are not inclined to communicate their perspectives in an 'EA way'.

A portion of EA thought emphasizes the value of worldview diversification[5], in large part because there's been a significant amount of research on the practical value-add of diversity (though the evidence is much more nuanced than is often portrayed in common discussion)[6]. Part of worldview diversification includes engaging with styles of argument that do not align with our own, as well as engaging with arguments from people with beliefs and backgrounds very different from our own. A very well-intentioned person who isn't comfortable speaking in academic jargon or assembling logical arguments to a forensic standard may still have great points, and we would benefit from engaging with those points.

Beyond the potential epistemic benefits of engaging with external critique, the way in which we engage with critique has an impact in and of itself. If the EAs most popular reactions to external criticism of EA are negative, dismissive, patronizing, or just generally don't attempt to meet the critic where they are, then we may only serve to perpetuate negative impressions of EA and create a chilling effect on dissent within the EA space. 

I'm not sure whether pro-EA responses to critiques of EA get more upvotes, agrees, and karma than critical-of-EA responses on the forum, but it seems plausible that might be the case. I'm also not present enough on X or other social media platforms to see what the average EA response to criticism looks like; it could be very respectful and well received! But it isn't hard to imagine that some responses by some EAs to criticism might be dismissive, come across as 'elitist', or be at least somewhat alienating to the non-EAs who see them. Regardless, such responses are bound to have at least a modest effect on the EA 'brand', and I would hope that we err on the side of good-faith, empathetic, personable responses when reasonable. (If the majority of EA responses to external criticism are already like that, great, let's keep it up! If they aren't, that's unfortunate.)

To try to get some sense of how this dynamic plays out (at least on the EA Forum) I spent some time looking through the EA Forum for external and internal critiques of EA and luckily @JWS shared this list collecting some criticism of EA criticism. As a little exercise, reading through the pieces JWS linked and the comments below a couple things popped out to me:

  • There are drastically more entries under the topic tag "Criticism of Effective Altruism" that are written by, and for, EAs than there are entries that engage with external criticism of EA
  • Of the entries that do engage with external criticism of EA, several simply share the original critiques to open discussion and several counter-criticize the criticism, but I haven't found any posts that agree with, or claim to have updated their thoughts based upon, external critiques -> my assumption would be that people on the EA Forum are, on average, more motivated to refute external criticism than to engage or empathize with it.
  • There are quite a few critiques of EA that aren't mentioned anywhere on the EA Forum - I can't be sure why this is, but it is plausible that it confirms the point above
  • It's hard to find external critiques of EA on the forum...

One last note

I really appreciate you engaging on this so openly! Really respect your ideas and everything you bring to the table :) 

Apologies if any of my counter-arguments misunderstood your original points or don't seem fair; I'm sure I'm off base in a few places and am happy to update.

  1. ^

    Unfortunately I don't have the time to make it through the full paper right now :( I'm sure you share a lot of very valuable arguments therein 

  2. ^

    In my limited understanding, the distinction between "Political" vs "Principle" critique is similar to the distinction between a "Consequentialist" vs "Deontological" approach whereby "Political" criticism refers to how things have actually played out in the real world and "Principle"-based criticism refers to how good the actual underlying ideas are

  3. ^

    I'm much more familiar with internal criticism shared on the EA Forum, during EA events, etc.

  4. ^

  5. ^

    Example from Open Philanthropy:

  6. ^

    A couple relevant studies:

This is a rather uncharitable take on the ~weakest forms of the arguments presented. It's also the first published instance of a tendency (fortunately not a widespread one) I've seen in online EA spaces when responding to criticism: watering down the philosophy of EA to something close to its broadest, most comprehensive form, to the point where it becomes virtually indistinguishable from any other philanthropic enterprise.

I think this is where a kind of social/intellectual history of EA ideas would be extremely valuable: it seems to me that there is a gap between what someone who is entrenched in EA and EA spaces considers EA to be versus what someone who is observing it from the outside and relying on published materials would understand it to be. [ETA because I forgot a sentence: and this probably stems from the relatively fast evolution of EA philosophy over the past 7-8 years in particular, and the difficulty of understanding what is still considered fundamental and what is outdated.] This creates a disconnect between critics and EAs and, I think to some extent, to put it in very imprecise terms, between newer versus older EAs, and longtermist versus neartermist EAs, re: what the guiding principles are and how each of these principles and components is weighted relative to the others.

I'd love to see a robust article, or even better an extended dialogue between EAs, discussing EA from a ship-of-Theseus-like perspective to see how far you can push these boundaries and at what point EA stops being EA.
