
Alice Crary and Peter Singer recently had a debate about the desirability of effective altruism. It was a strange debate, as Richard Chappell notes in his review of it; Crary made lots of extremely bizarre and implausible claims. Among other things, Crary argued:

  • Peter Singer, the prototypical effective altruist, wasn’t really an effective altruist (because she agreed with the reasonable things Singer was saying, but those things didn’t conform to the strawman of EAs that she’d erected in her mind).
  • EAs focus exclusively on the sorts of interventions that can be measured by randomized control trials. She also panned effective altruists for focusing on longtermist projects, which certainly can’t be measured by randomized control trials.
  • Asking how her criticism of EA might be most wrong and dangerous is in some way illicit because it presupposes that she is wrong.
  • It’s bad to get a good-paying job and then give your money away because, quoting Richard’s summary, “it positions rich people as ‘saviors’ of the poor.” I’ve always found arguments like this bizarre: they take the form “if you do X, which is good, someone could describe it using a sentence that sounds bad.”
  • Effective altruism should be completely abandoned, even though she’s mostly on board with the things Singer was saying. This position seems totally crazy when one considers the hundreds of thousands of lives saved through effective altruist projects.

In her opening statement, Crary provides four main objections to effective altruism. These objections are very badly mistaken, and so I thought it would be worth explaining why.

Crary’s first major criticism is that human lives are more complicated than the objective, numerical metrics that EAs use. On her telling, trying to use such metrics to capture the value of giving is a doomed project, because various important bits of human life can’t be numerically captured. But this is an error: the fact that randomized control trials can’t capture every feature of human experience tells us absolutely nothing about whether they’re better than the alternatives.

What is the alternative supposed to be? If we jettison data and cost-benefit analyses because they don’t fully capture the costs and benefits, what charitable actions should we perform? Surely the standard approach to charity, giving wherever seems good to you, neglects costs and benefits far, far more than carefully gathering data and running high-quality randomized control trials.
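
To make concrete what this kind of cost-benefit comparison even looks like, here is a minimal sketch in Python. Every name and number in it is an invented placeholder, not real data from GiveWell or anyone else; the point is only that even a crude cost-per-life-saved estimate gives you something to compare, which “give wherever seems good” does not:

```python
# Minimal, purely illustrative cost-effectiveness comparison.
# All charity names and figures below are hypothetical placeholders, not real data.

charities = {
    "Charity A (bednet distribution)": {"cost_usd": 1_000_000, "lives_saved": 200},
    "Charity B (unvetted appeal)": {"cost_usd": 1_000_000, "lives_saved": 15},
}

for name, figures in charities.items():
    # Cost per life saved: the crude metric being defended here as better than nothing.
    cost_per_life = figures["cost_usd"] / figures["lives_saved"]
    print(f"{name}: ${cost_per_life:,.0f} per life saved")
```

A real analysis layers on uncertainty, indirect effects, and much else, but the basic structure, comparing interventions on a common outcome per dollar, is the same.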

The impossibility of fully capturing every cost and benefit of a project is a completely universal feature of all charitable action. It will apply no matter how we try to do good, and is, in fact, mitigated by designing careful and rigorous studies that try to measure as many things as possible. This is, therefore, not uniquely a reason to oppose EA—Crary’s favored “justice movements” will have precisely the same problems.

Crary cites Leif Wenar’s piece in support of the claim that some effective altruist interventions have turned out badly. I’ve already written, in some detail, about Wenar’s piece; it’s one of the worst things I’ve ever read, basically just trawling through Google Scholar to find random downsides of effective altruist programs and acting as though this poses a serious empirical critique.

Crary’s second criticism is that effective altruists ignore justice movements. Effective altruists don’t, for instance, sponsor movements fighting for abortion rights or other sorts of political reform. Crary thinks this is very bad. I’ll group this with her third criticism, which is that effective altruists ignore systemic change, focusing instead on granular programs.

In these sections, Crary makes some very bizarre claims. She claims that EAs fail to engage with the political system, but also that EAs are undemocratic because they have wealthy people meddle in the way things are run. Which one is it? Surely it can’t just be that, for instance, when people give to the Against Malaria Foundation, that’s undemocratic on account of it not having been subject to a vote. By the same standard, when Jeff Bezos buys an Amazon warehouse, that’s undemocratic, because no one voted for it. And when Crary makes influential statements criticizing effective altruism, that’s undemocratic too, on account of not having been voted on (certainly those whose lives were saved by the malaria nets distributed because of the effective altruist movement would vote against her advocacy).

Crary also cynically claims that effective altruism’s opposition to systemic change is why it gets major funding: funding from people who support the status quo. Um, what? Have you seen the organizations advocating political change? Far, far more is spent on elections in a single year than EAs have ever spent. People are far more interested in spending money on political change than on effective altruism.

As for Crary’s criticism that EAs ignore justice movements and major change, this simply isn’t true. Many EA projects involve working with local communities to, for instance, provide vaccinations and malaria nets. For a while, EAs pushed for criminal justice reform, until finding it was less effective than the alternatives. EAs have been heavily involved in reform efforts around remittances, foreign aid, and lots of other systemic issues.

The difference is that EAs don’t support such projects at random; they demand evidence. Advocating for a socialist revolution is generally less effective than distributing malaria nets, which have saved well over 200,000 people (unlike Crary’s preferred movements). EAs aren’t opposed to systemic change, only to systemic change for which there isn’t clear evidence that it will work out well.

It’s notable that even if Crary is right about this, it would at most justify the position that EAs should shift their focus. It wouldn’t be a reason to abandon effective altruism, or grounds for thinking it hasn’t done more good than harm. If a group has saved upwards of 200,000 lives, abandoning it would be a bad thing, even if it is not completely optimal.

Crary’s fourth criticism is that effective altruists have begun leaning into longtermism. She claims longtermism endorses “frightening reasoning,” whereby one would do terrible things in the present to slightly reduce the risk of future catastrophe. But longtermism of the weak variety isn’t committed to that, only to the much more modest claim, for which the philosophical case is overwhelming, that we should be doing a lot more to make sure the future goes well.

If one actually looks at the things that longtermists are doing, they’re overwhelmingly good. Scott Alexander has a list of them:

  • Developed RLHF, a technique for controlling AI output widely considered the key breakthrough behind ChatGPT.
  • …and other major AI safety advances, including RLAIF and the foundations of AI interpretability.
  • Founded the field of AI safety, and incubated it from nothing up to the point where Geoffrey Hinton, Yoshua Bengio, Demis Hassabis, Sam Altman, Bill Gates, and hundreds of others have endorsed it and urged policymakers to take it seriously.
  • Helped convince OpenAI to dedicate 20% of company resources to a team working on aligning future superintelligences.
  • Gotten major AI companies including OpenAI to work with ARC Evals and evaluate their models for dangerous behavior before releasing them.
[Image: tweet from Sam Altman praising Eliezer Yudkowsky, with Scott’s caption: “I don’t exactly endorse this Tweet, but it is . . . a thing . . . someone has said.”]

  • Got two seats on the board of OpenAI, held majority control of OpenAI for one wild weekend, and still apparently might have some seats on the board of OpenAI, somehow?
  • Helped found, and continue to have majority control of, competing AI startup Anthropic, a $30 billion company widely considered the only group with technology comparable to OpenAI’s.
[Image: tweet saying that “keep Less Wrong ideas away from AI advances” is like “get your government hands off my Medicare,” with Scott’s caption: “I don’t exactly endorse and so on.”]

Other:

  • Helped organize the SecureDNA consortium, which helps DNA synthesis companies figure out what their customers are requesting and avoid accidentally selling bioweapons to terrorists.
  • Provided a significant fraction of all funding for DC groups trying to lower the risk of nuclear war.

Now, maybe the worry is that while longtermists are doing good stuff now, they might eventually do terrible things for the sake of the long-term future. But longtermists are very clear that this is a bad idea! Effective altruist leaders have, for years, been warning about the dangers of violating commonsense morality and engaging in malevolent diabolical plans for the greater good.

The best way to figure out what longtermists are likely to do is by looking at what they’ve actually done. And what they’ve actually done has been overwhelmingly positive. (Note that Crary, like almost all critics of effective altruism, doesn’t give any critique of the quite powerful case for the overwhelming importance of the far future—she just points and sputters).

But even if effective altruists hold objectionable strong longtermist philosophical views, as long as they’re doing good stuff, that seems to be what matters. And Crary has no criticisms of the projects that longtermists actually engage in, just of the philosophy.

The founders of the Red Cross were Christians. Like most Christians, they probably believed that a sizeable share of humanity will be tortured forever at the hands of a loving God, including many quite decent people who happen to have the wrong beliefs. This isn’t an objection to the Red Cross, because their philosophy is irrelevant; the things they’re doing are good.

As I’ve argued before, there are essentially two distinct parts of effective altruism. One of them is the philosophical side: that we should strive to do good effectively, to make effective giving a major part of our lives. Crary seems to agree with this, saying that she has enormous respect for Singer and mostly endorses the sorts of things he’s in favor of.

The second is the practical side: the things the movement actually does. But to analyze this, it’s not enough to focus on what members of the movement happen to believe: you have to actually look at the things effective altruists do. Those things turn out to be overwhelmingly positive, having done about as much good as preventing 17 9/11s per year via global health programs, plus doing a lot of other good stuff!
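
For what it’s worth, the arithmetic behind the “17 9/11s per year” comparison is easy to reproduce. The roughly 50,000 lives-per-year figure below is my assumption about the estimate being invoked (it’s the ballpark commonly attributed to GiveWell-directed giving), not a number taken from this post:

```python
# Back-of-the-envelope check of the "17 9/11s per year" comparison.
# The 50,000 figure is an assumed estimate of annual lives saved by EA-funded
# global health programs; 2,977 is the death toll of the September 11 attacks.
lives_saved_per_year = 50_000
deaths_on_9_11 = 2_977
print(lives_saved_per_year / deaths_on_9_11)  # ~16.8, i.e. roughly 17
```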

A movement that prevents more than a 9/11’s worth of mostly third-world death every year shouldn’t be totally abandoned. Tweaks at the margins can certainly be argued for. But the kind of wholesale condemnation that Crary provides is foolish and dangerous, and it is supported by astonishingly weak arguments. It’s perhaps no surprise that Crary was reduced to claiming Singer isn’t a real effective altruist; she had nothing approaching a sensible critique of the things effective altruists actually support.

 


 

Comments
huw:

I downvoted this post, so I want to explain why. I don't think this post actually adds much to the forum, or to EA more generally. You have mostly just found a strawperson to beat up on, and I don't think many of your rebuttals are high quality, nor do they take her in good faith (to use a rat term I loathe, you are in 'soldier mindset').

I can't really see a benefit to doing so; demarcating our 'opponents' only serves to cut us off from them, and to become 'intellectually incurious' about why they might feel that way or how we might change their minds. This does, over time, make things harder for us—funders start turning their noses up at EAs, policymakers don't want to listen, influential people in industry can write us off as unserious.

There are numerous other potential versions of this post. It could have been a thought-provoking critique of Peter Singer for engaging in debate theatre. It could have tried to steelperson her arguments. It could have even tried to trace the intellectual lineage of those arguments to understand why she has ended up with this particular inconsistent set of them! All of those would have been useful for understanding why people hate us, and how we can make them hate us less. I am not a fan of this trend of cheerleading against our haters, and I worry about the consequences of the broader environment it has fostered and is fostering :(

I don't agree that longtermism has been "overwhelmingly good". Your evidence for this is a blog post which specifically and deliberately cherry-picked only "good" things. It's fairly easy to make the opposite case that EA helped jumpstart the current AI arms race, which has resulted in a lot of present-day and potential future real-world harms.

"Now, maybe the worry is that while longtermists are doing good stuff now, they might eventually do terrible things for the sake of the long-term future"

Longtermists have (past-tense) done bad things. The worry is now evidenced by a track record. It's not some theoretical worry about the future. Are we still not able to admit this as a movement?

Things done under the guise of longtermism:

I can't even begin to estimate the harm done by SBF's political interference: successfully electing his crypto-friendly candidates to the world's most powerful institution of democracy, the US House; permanently altering the course of world history; and ending the careers of some very promising, principled candidates who refused to take big money.
