by Rand

The price of children is always rising. Inflation is still at 2% but the price of a child’s life doubles every few years. And one day they will call this a law, like Moore’s Law, that every two years the price of a child’s life will double. In the future, they will look at us in bafflement and ask whether it was true that you could save a child’s life for $5000. “How many children did you save?” they will ask with eyes wide open. And, like the regretful older man who never held onto his Mickey Mantle rookie card, found in a pack of gum sixty years ago, we will look at them and say:

“I threw them away.”

Comments

(Or maybe, "I invested the money and made above-inflation returns while also waiting to see further research on the most cost-effective interventions. I did occasionally make large donations with the investments when particularly opportune moments arose.")

If you have a way of doubling your money every few years, go for it. But that's rather unlikely.

Average annual stock market returns of roughly 9%, after subtracting roughly 2% inflation, would see you double your money in about 70/(9 − 2) = 10 years by the rule of 70. It’s not exactly fast, but it’s doubling after accounting for inflation, so it’s nothing to sneeze at. (Also worth noting: that’s not accounting for the additional deposits you make from your income, which would likely double a given amount much faster.)
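To spell out the arithmetic: the rule of 70 approximates the exact compound-growth doubling time, which at a 7% real return is

$$ t_{\text{double}} = \frac{\ln 2}{\ln(1 + 0.07)} \approx \frac{0.693}{0.0677} \approx 10.2 \text{ years}, $$

so the rule-of-70 figure of 10 years is accurate here to within a few months.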

Of course, anyone should be careful about the risks of investing (e.g., market downturns, or a personal inability to make sound investment decisions in line with really basic advice), the potential for value drift, and the possibility that some current causes are urgent and warrant immediate funding even if a later cause might be more important than the current one. However, for some people and situations those concerns may be partially if not wholly offset by considerations like increased insight and research into charity effectiveness.

Ultimately, I don’t have a strict opinion on which method is generally better, but I don’t think it’s justified to so heavily dismiss/criticize delayed giving.

This piece isn't intended as an argument against delayed giving (though I think most such arguments would need to deny the premise of the piece). It's a story about not giving. It's about an older man, living in a time when saving a life in Kenya is like saving a life in Canada (that is, out of reach for most people), looking backward. Every year during that short window, he could have been a hero, saving one or more lives. He missed that chance and it doesn't exist anymore.

Ah, I figured it was more of an argument against delayed giving rather than about plainly not giving. To clarify further: is your claim that the price of [saving?] a child’s life is actually doubling every few years (out of proportion to inflation), or does the piece just suppose a hypothetical world where that is the case?

I'm assuming it for the sake of the piece. I do think that the price of a child's life is rising faster than my investments appreciate, and I probably thought it was doubling every 4 to 5 years when I wrote this. (I wrote $2000 back when I posted this to Facebook; I wonder what GiveWell's current estimates are.)
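By the same rule-of-70 arithmetic as above, doubling every 4 to 5 years would correspond to an annual growth rate of roughly

$$ r \approx \frac{70}{5} = 14\% \text{ to } \frac{70}{4} = 17.5\%, $$

well above the ~7% real investment return discussed earlier.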

(To clarify further, this was a post to my Facebook creative writing group in 2015 as was the "Responsibility" poetry I posted.)

I think it's awesome, but Harrison should get more credit for pointing out the "patient philanthropy" critique. I'd like to see what you could get if you wrote a short story about it.
