This might be one of the best pieces of introductory content on effective giving that GWWC has produced in recent years!

I hit the streets of London to engage with everyday people about their views on charity, giving back, and where they thought they stood on the global income scale.

This video was made to engage people with some of the core concepts of income inequality and charity effectiveness in the hope of getting more people interested in giving effectively.

If you enjoy it, I'd really appreciate a like, comment, or share on YouTube to help us reach more people!

There's a blog post and transcript of the video available too.


Big thanks to Suzy Sheperd for directing and editing this project and to Julian Jamison and Habiba Banu for being interviewed!


Hi Grace, as you know, I think this is a really great video, and I think you led it super well; I would gladly watch more content like this with you. I was wondering if you are at liberty to share the cost of making this video: it looks very good and professional, with great image and audio quality. And if it's not too much trouble (I'm assuming this is part of a bigger campaign, including ads etc.), I'd be curious whether you'll be able to share learnings after you run it. I think in EA we shy away a bit from creating public-engaging content and big social media campaigns, so I'd be very curious to see a good case for this working, as I can imagine other giving multipliers could follow the same route. Overall, I'm a big fan of the approach you've been taking with your communications lately. I'm hoping it will really hype effective giving and get more pledgers to emerge in the long run.

Hi Ula! Sorry for the very slow response! I got caught up with other work! 

A video like this would generally cost between £10k and £20k, depending on the team used and how much staff time goes into it.

We're currently not running ads with this video, but will probably do so in the future. We have been trying to get as much organic traction on it as possible so far! Happy to share some further results with you when we have them!

Great video, will share! 

One question: in the interviews you incorporated, people stated that they thought it would be very expensive to save a life (£100k+), and even more (!) internationally.

Was this the norm? In the academic research I've seen, people tend to state very low amounts, vastly underestimating the true (~£5k) cost. (This also seems to be happening, iirc, in my own ongoing work with Janek Kretschmer and Paul Smeets. It's also why I was interested in seeing someone develop a "how much does it cost to save a life" quiz and calculator.)

If this £100k+ estimate was fairly normal in your street interviews, I'm wondering if you framed the question in a particular way, or whether you said something before asking it that made people think of these larger numbers. It would be potentially useful to know how to frame this question in other research work and other contexts, so that people get at 'what we really mean by cost to save a life'.
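(A quiz/calculator like the one mentioned above could be sketched very minimally along these lines. This is a hypothetical illustration: the ~£5k benchmark is just the rough estimate quoted in this thread, and `compare_guess` is an invented helper, not anyone's actual tool.)

```python
# Minimal sketch of a "cost to save a life" guess-vs-estimate quiz.
# The ~£5,000 benchmark is the rough figure cited in this thread for
# highly effective global health charities; it is an assumption here,
# not a quote from any specific charity evaluator.

EFFECTIVE_COST_GBP = 5_000  # assumed benchmark cost to save a life


def compare_guess(guess_gbp: float) -> str:
    """Compare a user's guess to the assumed benchmark and describe the gap."""
    ratio = guess_gbp / EFFECTIVE_COST_GBP
    if ratio >= 2:
        return f"You overestimated by about {ratio:.0f}x (estimate: ~£{EFFECTIVE_COST_GBP:,})."
    if ratio <= 0.5:
        return f"You underestimated by about {1 / ratio:.0f}x (estimate: ~£{EFFECTIVE_COST_GBP:,})."
    return f"Close! The estimate is ~£{EFFECTIVE_COST_GBP:,}."


# A £100k+ guess, like those in the street interviews, is ~20x too high:
print(compare_guess(100_000))
```

A real version would presumably also capture each respondent's raw guess before showing the benchmark, since (per the research mentioned above) the direction of the error is the interesting datum.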

Hi David!

I'm not sure why they thought it was so high! It also surprised me! 

I think maybe because we had talked about their own income beforehand? But I also hadn't really introduced much about charity before that point - so perhaps in other settings they are also being anchored? 

I had a small sample size so not sure I can draw any major conclusions here! The filming was about 10 months ago so I'm afraid I can't remember!

Great work! Nice editing choice to use the "iPad" frame to show different footage.

I really like this video, it made my day!

Great video, it was really fun and inspiring to watch! 

Great video!

This really seems directed more at middle-class people who already have all of their needs met, rather than "everyone". They enjoy delicious food and sick sexual needs.

I'm in the top ~20% globally with my ~$10k yearly income, but if I start giving to charity, I won't be able to afford my own cost of living. So who does that really help?

Just seems strange to put the onus of charity on to school teachers and old people in wheelchairs when billionaires still exist.

I very much agree that those who are wealthiest have much more of a responsibility to give! - but GWWC is not explicitly aimed at trying to get billionaires to donate more, because we're not really set up to do that! And there are other organisations who are trying to do this much better than we could.

This video is definitely not asking all people to give large amounts to charity, but I also wanted to talk to all kinds of people about charity. People across the income spectrum give to charity, even people on very low incomes, so I think it's worthwhile giving people the knowledge that some charities can do a lot more than others. I think Imma is right below that even a small amount can go a long way when donated effectively.

We always encourage people to do what feels right to them, and would agree that people who are just trying to get their own needs met may be better off not donating!

This video is really for people who have their needs met (whatever that means to them) and who might be interested in giving. But it's very hard, when you put a video up online, to provide all these caveats or messages about who the video is for, when you're trying to make a relatively interesting 15-minute video that's already covering a bunch of topics. How hard it is to get the balance right is something I've been reflecting on lately. I hope we've struck an okay balance here.

(I haven't watched the video fully.) I agree with you.

Multiple things can be true at the same time:

  1. People who live in global poverty are very poor.
  2. Many people in developed countries are among the top 1–10 percent richest globally and don't realize that they are comparatively rich.
  3. If these people donate a bit, they can help extremely poor people a lot.
  4. Living in relative poverty in rich countries is hard, even if people are globally "rich". (I don't have experience with that myself, but I have consumed a bit of media on relative poverty in my own and nearby countries in Western Europe, out of curiosity. I might still be completely wrong when I imagine what it's like.) Some features of a rich society make living in relative poverty even harder. For instance, sharing a small house with a large number of people can be illegal.

It's good when people know these things!

I don't know where you live, but donating while earning ~$10k a year, even if that puts you in the top ~20% globally, sounds like a lot. It does not help to donate so much that you can't afford your living costs; it is simply not sustainable. Maybe donating a small bit is feasible. If donated well, even a tiny amount already helps people a lot, and I find donating very fulfilling. It also helps create a culture where giving is normal, not something weird.

Wealth is distributed insanely unequally. Billionaires exist. They can donate much more with much less sacrifice to themselves. They should donate (and pay taxes), do so thoughtfully, and keep their egos and individual preferences in the background.
