“Extreme altruism: right on!”, The Economist, September 20th, 2014. An excerpt:

Flyers at petrol stations do not normally ask for someone to donate a kidney to an unrelated stranger. That such a poster, in a garage in Indiana, actually did persuade a donor to come forward might seem extraordinary. But extraordinary people such as the respondent to this appeal (those who volunteer to deliver aid by truck in Syria at the moment might also qualify) are sufficiently common to be worth investigating. And in a paper published this week in the Proceedings of the National Academy of Sciences, Abigail Marsh of Georgetown University and her colleagues do just that. Their conclusion is that extreme altruists are at one end of a “caring continuum” which exists in human populations—a continuum that has psychopaths at the other end. [...]

She and her team used two brain-scanning techniques, structural and functional magnetic-resonance imaging (MRI), to study the amygdalas of 39 volunteers, 19 of whom were altruistic kidney donors. (The amygdalas, of which brains have two, one in each hemisphere, are areas of tissue central to the processing of emotion and empathy.) Structural MRI showed that the right amygdalas of altruists were 8.1% larger, on average, than those of people in the control group, though everyone’s left amygdalas were about the same size. That is, indeed, the obverse of what pertains in psychopaths, whose right amygdalas, previous studies have shown, are smaller than those of controls.

Whether this applies to EAs, however, is unclear. Compare Peter Singer's recent remarks in a panel discussion about empathy:

My admittedly impressionistic observation is that effective altruists are not especially empathetic—at least, not in the sense of emotional empathy. They do have what is sometimes called “cognitive empathy” or “perspective taking” capacity—that is, the ability to see what life is like for someone else.

Comments
I worry a bit that the way EAs communicate/market their ideas might be putting off a much larger segment of the population that relies largely on what Singer calls "emotional empathy" when making altruistic decisions.

I think it would be worthwhile to:

(1) look very carefully at the anti-EA hit pieces that occasionally pop up and try to understand the motivations/concerns behind the (usually not very well-argued) criticisms of EA;

(2) experiment with pitches similar to those employed by very popular and well-funded mainstream charities.

Speaking very broadly, EAs seem to have two main goals: getting more people to redirect their donations to more effective charities, and getting more people to donate more of their resources to charity. I think pushing both goals simultaneously is likely making EA unpalatable to most typical people, who might be receptive to moving their $20-50/month elsewhere but don't want to be measured against someone who's donating 10% of their earnings.

Meanwhile, we should be able to appeal to the high-empathy people who are probably feeling fairly lonely in their conviction. When I've mentioned my intention to go forward with a non-directed kidney donation, more people have questioned my sanity than have reacted positively.

I've heard from several of my friends that EA was first introduced to them in a way that seemed elitist and moralizing. I was wondering whether there is any data on how many people learned about EA through which sources. One possibility that came up was running TV/radio/internet ads (in a gentler, non-elitist style), in the hope that the resulting outreach and recruited donors would more than pay back the original cost. Thoughts?

I agree with what you say, except for this:

Speaking very broadly, EAs seem to have two main goals: getting more people to redirect their donations to more effective charities, and getting more people to donate more of their resources to charity.

There are multiple effective paths to impact, and only some of these involve making or giving money. I think it's important to be clear about this: there are already critiques of the EA movement that foster this misconception (see e.g. the RationalWiki entry on EA), and this may be turning away people who would otherwise be receptive to our ideas.

That's a good point. I don't just think in terms of money when I talk about "donations" and "resources," but there's not really a very concise or clear way to talk about the very broad array of actions people can take that are consistent with EA goals.

The very ability to consider what one's position would be in a scenario very different from the one in which one finds oneself is predicated on controlling the impulse to react to the immediate environment.

The common feature among, say, Nick Bostrom's PhD, Nick Beckstead's PhD and Paul Christiano's blog Rational Altruist is a capacity to hold even fewer particularities of one's environment as true come what may.

Empathy is just the opposite of that: it is frequently seen as the immediate, System 1, uncontrollable emotion that one experiences when someone else in one's immediate vicinity undergoes distress.

I've argued in the past, and would continue to argue, that the moral obligation is higher, not lower, for people with less empathy. I'm much more forgiving of people who give locally and thereby fail to save globally if they do it to avoid feeling the sadness of empathy.

Yes, my personal impression, from the many EAs I've met in person and from talking to people online, is that EAs are more likely to suffer from Memetic Immune Disorder than to be unusually empathetic in the conventional sense. I think people who are very empathetic often have trouble with trolley-style scenarios.

Interesting piece. However, the article conflates psychopathy in the sense of "people with smaller amygdalas" with psychopathy in the sense of "people with smaller amygdalas who display antisocial behavior". The former group does not necessarily fall within the latter. For example, you may have a smaller than average amygdala and genuinely respond less to the fear and distress of others, yet not become a social predator who manipulates people.

And as you point out, it's not clear how this study relates to EAs. It could be that EAs have relatively normal amygdala size but are disproportionately interested in rationality and ethics and hence recognize the good they can and should be doing in the world.

I agree. It would be interesting to know how EAs score on standard measures of empathy, relative to the general population or to other relevant subpopulations (such as psychopaths or hyper-empathetic folk).
