
Author's note: Hi everyone! This is a cross-post from my blog. It's a short, accessible piece intended for those who find the idea of measuring suffering "icky," uncomfortable, or cold. I've noticed this as a fairly common reaction to effective altruism, and I wanted to write from a place of sincerely empathising with it. My goal here was to explore how quantification can complement our natural empathy rather than replace it. 


Is it more difficult to live with AIDS for a year or to be blind for a year? There is a kind of person who wrinkles their nose at this question. I know because I wrinkled my nose at it back in 2019, as I paged through my girlfriend's copy of Will MacAskill's Doing Good Better.

"I mean, ok, I get that you can try to measure which is worse, but, um... it feels really icky? What would even possess someone to ask that question? It's kind of insufferable: you're not blind and you don't have AIDS, but now you're creating a 'badness' competition between the two?" I rolled my eyes. But I kept reading, because my girlfriend really liked the book and I liked her.

And as I read more, the arguments started making sense to me. The core idea is simple: if we want to help others effectively, we need ways to compare different forms of suffering so we can direct our limited resources where they'll do the most good. I hope you’ll allow me to explain a bit more.

Why we resist quantifying suffering

Why do many of us instinctively hate the idea of measuring and comparing suffering? I think it comes from the typical way that most of us orient towards pain: from a place of empathy and care. When my sister starts crying, my stomach clenches and I reach for her. I don't wonder, "Is this the most important suffering I could address right now? Does she feel worse than someone with malaria? Should I pull out my laptop and donate to a charity instead?" I ask my sister, "What's going on?" and I pull her into my arms.

I think, on some level, people are protective of their protective instincts. We like the very human part of us that jumps to its feet when a friend falls, that sees a hand extended to us when we tear up. We are protective of our pain, because it is ours.

This resistance makes sense. Quantification can feel cold, mechanical, and even dehumanising. It seems to reduce our human experience to numbers. And who are we to say, “hey, you there. This pain that feels so real and deep and consuming within you? It’s a 4. And that dude over there? His real and deep experience is a 6.”

A necessary tool in an overwhelming world

I have no argument against our empathetic impulses. I feel them too. But Doing Good Better was the very first book to ask me a question that truly resonated and changed my thinking for the better: there are only so many hours in the day, there are a lot of people in a complicated world, and resources are limited. We all have pain, but we have different kinds of it, and many of us have far more resources than others. What should we do?

Putting a number on pain is not novel. When you go to the doctor and say your head hurts, you're asked to rate it on a scale from 1–10. This scale, despite its limitations and subjectivity, helps medical professionals determine appropriate treatment. When you join a transplant list, multiple factors including medical urgency, expected benefit, and time waiting are assessed to determine priority. These systems aren't perfect—they can't capture every nuance of human suffering—but they're necessary attempts to allocate scarce resources.

And it is awful, because shouldn't the doctor just take your headache seriously? And shouldn't everyone have the organs that they so desperately need? The act of quantifying suffering is not a commentary on the theoretical worth of someone's life or pain — those things are fundamentally invaluable, in my opinion. The act of quantifying suffering is a forced response to the reality that we can't help everyone.

Quantification as a complement to empathy

When we quantify suffering, we gain the ability to optimise our efforts. Research has shown that preventive malaria treatments can save a life for roughly $3,000, while other interventions might cost tens or hundreds of thousands of dollars per life saved. This isn't just academic—these comparisons translate to real people whose lives are improved or saved because resources were directed more effectively. The humans themselves are, of course, never numbers — we are all very real and deserving.
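The arithmetic behind such comparisons is simple enough to sketch. Here is a minimal illustration, using the roughly $3,000-per-life malaria figure from above; the $1M budget and the $100,000-per-life comparison intervention are hypothetical numbers chosen for contrast, not figures from any specific charity:

```python
# Illustrative only: how a fixed budget translates into lives saved
# at different cost-per-life figures. The budget and the second
# cost figure are assumed for the sake of the example.
budget = 1_000_000  # dollars

cost_per_life = {
    "preventive malaria treatment": 3_000,
    "less cost-effective intervention": 100_000,
}

for name, cost in cost_per_life.items():
    lives = budget // cost  # whole lives saved within the budget
    print(f"{name}: ~{lives} lives saved per $1M")
```

The same pot of money saves roughly 333 lives in one case and 10 in the other — which is the whole argument for comparing in the first place.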

And if every life matters equally, then we can and even should try to help as many people as possible.

But you don't need to do so at the cost of your sister, your friend, your care, your feeling. You can always connect with the parts of you that need to hold your loved ones — you can give those parts as much time as they need. But perhaps, you have space for other things too. The parts of you that understand how vastly your birthplace affects access to basic medical care, food, water, protection. The parts of you that understand how unfair everything is. The parts of you that have room to consider different approaches to doing good: to engage in analysis, triage, and exploration when necessary.

Quantification isn't meant to replace our empathy—it's meant to extend it, direct it. It's a tool that helps our compassion reach further and do more good in a world where needs far outstrip our capacity to address them all. By embracing both our intuitive empathy and analytical thinking, we can respond to the suffering right in front of us while also making thoughtful choices about how to help those we cannot see. I think that’s really, really beautiful. And to this day, I still say with certainty that one of the most important things I ever did was finish that book.



Comments (5)



This is a great post—thanks, Frances! This comes up a lot in my conversations about EA, and I really appreciate the clarity you've brought to it.

One line that really stood out to me was:

“And shouldn’t everyone have the organs that they so desperately need?”

I think it can be useful to acknowledge that the answer to this question is a clear yes. When I talk to people about triage, I always try to acknowledge that, ideally, we wouldn’t have to make these trade-offs at all. Our ultimate goal isn’t to help only those above a certain threshold—it’s to help everyone.

We prioritise not because we think some lives matter more, but because we wish we could help everyone.

"Quantification isn't meant to replace our empathy—it's meant to extend it, direct it" is beautifully put. In the same vein, Brian Tomasik wrote of triage as being "warm and calculating", a reframing (and phrasing) which stuck with me.

Thanks for writing this! Coincidentally, my talk "The Heavy Tail of Valence: New Strategies to Quantify and Reduce Extreme Suffering" just went online a couple of hours ago. I thought you might like it ☺️ 


Executive summary: While quantifying suffering can initially feel cold or dehumanising, it is a crucial tool that complements—rather than replaces—our empathy, enabling us to help more people more effectively in a world with limited resources.

Key points:

  1. Many people instinctively resist quantifying suffering because it seems to undermine the personal, empathetic ways we relate to pain.
  2. The author empathises with this discomfort but argues that quantification is necessary for making fair, effective decisions in a world of limited resources.
  3. Everyday examples like pain scales in medicine or organ transplant lists already use imperfect but essential measures of suffering to allocate care.
  4. Quantifying suffering enables comparison across causes (e.g., malaria vs. other diseases), guiding resources where they can do the most good.
  5. Empathy and quantification need not be at odds; quantification is a tool to help our compassion reach further, not to diminish our emotional responses.
  6. The piece encourages integrating both human care and analytical thinking to address suffering more thoughtfully and impactfully.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
