
Earlier this month, we sent an announcement to our Giving What We Can Pledge members about our new pledge pin. This pin will be given to all Giving What We Can Pledge members who have been pledgers for over a year and are on track with their donations.

It’s an honour to be a part of a community that cares deeply about giving and improving the world, and this is a small way for us to thank our members for displaying an ongoing commitment to their pledge.

Here’s the full pledge pin announcement along with some FAQs.
 

2021 Giving Review

We’ve launched our yearly Giving Review and have been collecting data about the donations our community made in 2021.

If you can spare five minutes to report your donations before 31 July, please do so. It helps us measure our impact and, hopefully, provide better experiences for our members in the future.

Yes, your donations matter!

In the wake of Bill Gates’ announcement about upping the spending of the Gates Foundation, some of us may feel that our relatively small, individual donations don’t matter.

We’d like to remind you that although your contribution may feel like a drop in the bucket, it means quite a lot to the people, animals and planet it benefits.

By giving effectively, you are part of a community of donors who — together — are drastically increasing the size of that drop, making a meaningful, concrete difference to the lives of others and helping to normalise giving to those who need it. That’s incredibly important, because even with generous donations from large philanthropists, there's much more to be done.

Newsletter audio summary

Until next time, keep doing good!

-Luke Freeman & the rest of the Giving What We Can team


Member Daryl D'Souza started a new job and shared a donation publicly with his network!
Congratulate Daryl on his new job and his advocacy here.

Attend An Event

Meetups

Americas/Oceania

The meetups team is hosting an EA Forum “Show and Tell.” Come and tell us about a forum post that you found interesting, didn’t agree with, didn’t understand, want to learn more about, etc., and we’ll discuss it together! Feel free to choose any post (old or new) on any topic or cause. Just come along and we can spend some time learning something new or debating something fun!

Europe/Asia

Tony Senanayake from IDInsight will join this session for a Q&A about his work with multiple organisations in the global health and development sector. Tony is a goldmine of knowledge, and this is sure to be a fascinating discussion.

Open Forum

Our open forum is an event where you can come along with questions about effective giving and/or to meet others interested in effective giving. This event alternates between different timezones each month.
 

Next Open Forum (Europe/Americas)

New content from Giving What We Can

Blog

YouTube

Podcast

News & Updates

Effective altruism community

  • Ready Research performed a meta-review of what works and doesn’t work to promote charitable donations. Read Peter Slattery’s summary of the findings.
  • Magnify Mentoring has opened up applications for their next round of mentoring. If you are a woman, non-binary person, or trans person of any gender who is enthusiastic about pursuing a high-impact career path, consider applying before the deadline on 5 August.
  • Peter McIntyre (formerly of 80,000 Hours) has launched a free online learning platform called non-trivial, which introduces some foundational EA concepts aimed at helping young adults (particularly teenagers) increase their impact on the world. Peter encourages the EA community to share the first course, How to (actually) change the world, with others.
  • A community member is developing a course on forecasting. If you’re interested, you can join the waitlist to participate in this new online class.

Evaluators, grantmakers and incubators

  • GiveWell published an update on its funding projections for 2022, stating that it doesn’t expect to have enough funding this year to fill all the cost-effective grant opportunities it has been able to identify. As a result, it is raising its cost-effectiveness bar for funding and increasing its fundraising efforts.
  • GiveWell has published several new research materials, including a report on the efficacy and cost-effectiveness of programs that train health workers to deliver maternal and neonatal health interventions, a page about two recent grants (totaling $562,000) supporting IRD Global’s tuberculosis team in Karachi, Pakistan, and notes from a conversation with Drs. Edward Miguel and Michael Walker about a possible follow-up to a randomised controlled trial of GiveDirectly's unconditional cash transfer program in Kenya.
  • GiveWell is hiring for several positions, including Operations Assistant (new!), Senior Researcher, Senior Research Associate, and Content Editor. View all open positions on its jobs page.
  • Animal Charity Evaluators announced a special giving opportunity in which donations to its Movement Grants program will be matched dollar-for-dollar up to $300,000.
  • The EA Infrastructure Fund published an organisational update with information about the grants it made between September and December 2021.
  • James Snowden, a GWWC pledger and former team member, has joined Open Philanthropy as Program Officer for Effective Altruism Community Building.

Cause areas

Animal welfare

Global health and development

  • Bill Gates recently announced that the Gates Foundation aims to spend $9 billion a year by 2026.
  • “Is it really useful to ‘teach a person to fish’ or should you just give them the damn fish already?” asks Sigal Samuel in her Vox article discussing the evidence behind ‘ultra-poor graduation programs,’ which aim to lift the ultra-poor out of poverty through a combination of training and cash/assets. Samuel explores how these combo programs compare to simple cash transfer initiatives, and how the gap between “teach a man to fish” and “give a man a fish” is narrowing.
  • Evidence Action is hiring for several communications roles, including a Senior Associate, Communications (a DC-based role working across a variety of communications workstreams with a heavy focus on digital) and an Associate Director, Communications.
  • Kelsey Piper, Vox journalist and GWWC member, writes about The return of the “worm wars” and how the controversy over the value of deworming interventions shows the need for effective altruists to reason under uncertainty.

Long-term future

You can follow us on Twitter, Facebook, LinkedIn, Instagram, YouTube, or TikTok and subscribe to the EA Newsletter for more news and articles.
Do you have questions about the pledge, Giving What We Can, or effective altruism in general? Check out our FAQ page, or contact us directly.
