
Update October 13, 2022: Giving What We Can and One for the World are supporting a pared-down version of EA Giving Tuesday in 2022. Read more and sign up for updates in their post.

Summary 

  • We’re actively seeking an EA-aligned organization to lead EA Giving Tuesday in 2022 and future years. If you have ideas or leads, reach out to me (Megan Jamer) at megan@eagivingtuesday.org.
  • Since 2017, the EA Giving Tuesday donation matching initiative has helped hundreds of EA donors direct over $1.9M USD in extra funding to effective nonprofits.
  • Rethink Charity currently houses EA Giving Tuesday, and needs to hand it over to another EA-aligned organization before giving season.
  • If not handed off, EA Giving Tuesday will “hibernate.” It’s likely the website and most of the resources will remain online indefinitely, but may be out of date.

What is EA Giving Tuesday? 

EA Giving Tuesday organizes people around a shared goal: directing matching funds to highly effective nonprofits that they wouldn’t otherwise receive. The project supports donors and highly effective nonprofits, making it as easy as possible to participate in matching opportunities with counterfactual value. EA Giving Tuesday has historically focused on Facebook’s Giving Tuesday match.

If you’re unfamiliar with the project, there’s in-depth information on our website and in the EA Giving Tuesday 2020 retrospective. Since 2019, EA Giving Tuesday has been housed within Rethink Charity.

How valuable is EA Giving Tuesday?

The project’s main impact metric is matching funds directed to effective nonprofits during Facebook’s match. Here is what donors have accomplished through EA Giving Tuesday’s preparation and coordination efforts: 

Year | Donated | Matched | Overall match % | Learn more
2017 | $379k | $48k | 13% | EA Forum post
2018 | $719k | $469k | 65% | EA Forum post
2019 | $1.1M | $563k | 52% | EA Forum post
2020 | $1.6M | $412k | 25% | EA Forum post
2021 | $1.4M | $411k | 29% | Website

The value of the matching funds should be weighed against the resources required to run the project. You can find more information about project inputs in the EA Forum posts linked in the table above.

The project may have other sources of value, including: 

  • Offering a way for EAs to collaborate and take tangible action together
  • Potentially inspiring EAs to donate more, because of the match
  • Being a source of frustration and joy that fuels the creation of potentially dank memes

When does the project need to be handed off by?

A handoff plan will likely need to be in place by September 30, 2022, if EA Giving Tuesday is to be actively run this year. Otherwise, the project will “hibernate.” The next section describes some likely changes under this hibernation scenario.

I’m a donor or effective nonprofit that’s participated in EA Giving Tuesday. What does this update mean for me?

EA Giving Tuesday will very likely “hibernate” if not handed off. Donors and nonprofits would see a few key changes under hibernation:
 

  • The website and most of the resources will likely remain online indefinitely.
  • Donations will need to be made without restrictions (e.g., “Nonprofit’s X Project”) and without regranting arrangements (e.g., “Nonprofit A via Nonprofit B”).
  • We’ll add disclaimers to the website and resources (i.e., instructions and FAQ) to reflect that they won’t have been updated since 2021 and may be out of date.


If the project is handed off successfully with enough lead time before Giving Tuesday, its incoming leadership would plan and execute a 2022 strategy.

What type of organization could be a good fit to lead this project? 

Here are some characteristics of an organization that could be a great fit:
 

  • The org’s mission involves effective giving and/or EA community building.
  • The org is an active member of the EA community, and would take trust and relationships with donors and nonprofits in this community seriously.
  • The org either has a U.S.-based person to lead the project, or would be able to source U.S.-based talent for this role.
  • The org has experience running complex operational projects.
  • The org can take over this project permanently, not just in 2022.

If you have ideas or leads for others that could be a good fit, please email Megan at megan@eagivingtuesday.org.

What resources are required to run EA Giving Tuesday? 

The receiving organization would be responsible for funding all project expenses, recruiting a team, and taking ownership of the systems used to run the project (HubSpot, Mailchimp, Google Workspace, etc.).

Funding from EA sources may be available for this project. For an idea of the time and resources required to run EA Giving Tuesday in the past, please see the Estimating our impact section of the 2020 retrospective.

I’ve received feedback that EA Giving Tuesday could be run in a leaner style, by focusing on the few aspects of the project that offer the most value. I basically agree! I think running a leaner project would be a good goal for the new leader to have.

Lead EA Giving Tuesday in 2022 and beyond!

I believe EA Giving Tuesday is a valuable project for the EA community. Unfortunately, neither I nor Avi Norowitz (the project’s leader for several years) is able to lead it. At this time, Rethink Charity is unable to recruit new leads for the project.

I’m deeply grateful to have had the opportunity to work on this project since 2020 alongside talented and dedicated colleagues. In particular, I have really enjoyed supporting a wide network of donors and nonprofits in the EA community. 

EA Giving Tuesday is an exciting and operationally complex project that empowers donors in the EA community to take action together to maximize the impact of their donations.

If you’d like to help find an EA-aligned organization to lead EA Giving Tuesday, please don’t hesitate to reach out: email megan@eagivingtuesday.org!



 

Comments (8)



Q: Where can we see the 'bottom line' on the impact for the most recent years?

I'm looking for:

  1. Additional amounts raised/diverted relative to counterfactual (with no EA GT org)

minus

  2. Cost (money and value of time) of this

As I didn't see anything in the linked posts etc., I sketched one below (which took about 20 minutes):

Proto-BOTEC sort of for 2020, sort of going forward

A quick skim and proto-BOTEC from the recently linked report for 2020

  1. Benefit: 243k increase in counterfactual funds raised for effective charities

Report: 411k in counterfactual matched donations ...

  • 275k from the 100% match. The previous report suggests no EA donors would get the 100% match without the EA GT org. This seems wrong to me. Maybe EAGT made this happen in the past, but going forward much of the infrastructure and knowledge is in place ... so let's say 100k would be raised from this without EAGT.
    --> 275k - 100k = 175k counterfactual impact

  • $136k of which was from the 10% match ... ~40% of which would have been obtained without EA GT (see discussion in that report)
    --> 0.6 * 136k ≈ 81k counterfactual impact

81 + 175 = 256k

  • Less lost tax benefits: ~17% of donations lose deductibility, which I'll value at a 25% rate ... so subtract about 5% of the total (256k × 0.95 ≈ 243k)
  2. Cost: $25,000 (hours organizing, other expenses; tallied in the sketch below)

466 + 297 = 763 paid and unpaid hours organizing this. I suspect that as we learn more, get better, have the sheets etc. in place, it will take fewer hours.

... So let's say 450 total hours going forward.

I'm not sure what type of labor goes into this. Let's say 150 hours of 'managerial and tech time' valued at $100 per hour, and 300 hours of 'volunteer/student time' valued at $30/hour.[1]

150 * 100 + 300 * 30 = $24,000

Nonlabor costs ... about $1000
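
Putting those point estimates into a quick script (a minimal sketch; every input is one of my guesses above, and the variable names are mine):

```python
# Point-estimate version of the BOTEC above. All inputs are my guesses
# from this comment, not figures from the linked reports.

# --- Benefit: counterfactual funds raised, going forward ---
from_100pct_match = 275_000   # raised via the 100% match in 2020
raised_anyway = 100_000       # my guess at 100%-match funds raised without EA GT
from_10pct_match = 136_000    # raised via the 10% match in 2020
share_without_eagt = 0.40     # ~40% of the 10% match obtained without EA GT

benefit = (from_100pct_match - raised_anyway) \
    + from_10pct_match * (1 - share_without_eagt)  # 175k + ~81.6k ≈ 256.6k
benefit *= 1 - 0.05           # ~5% haircut for lost tax benefits (17% x 25%)

# --- Cost: organizer time plus non-labor expenses ---
cost = 150 * 100 + 300 * 30 + 1_000  # managerial/tech + volunteer hours + misc

print(f"benefit ~= ${benefit:,.0f}, cost ~= ${cost:,.0f}")
# benefit ~= $243,770, cost ~= $25,000
```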

Not considered:

  • chances Facebook continues GT matches

  • extent to which this leads counterfactual donations to be made

  • less tangible benefits

  • cost of the time spent by donors (est: 411 donors spending ~30 min each = 200 hours = maybe 10k in value of time?)

Overall first-pass assessment

This seems like a potentially good use of resources: ~243k per year in increased amounts received by EA charities. Let's say these have 10x the value of the counterfactually matched charities, so this is worth about 243k × (1 − 1/10) ≈ $219k per year.

Relative to perhaps $25-35k in time costs? Or, if I'm wrong about the 'learn by doing' time savings, maybe $50-60k in time costs.

Probably worth doing, or worth further investigation (including perhaps a Monte Carlo Fermi estimate using Squiggle or something; a rough sketch follows below).
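
For what it's worth, here's a minimal sketch of what that Monte Carlo Fermi could look like, in plain Python rather than Squiggle. All of the ranges (and the lognormal_90ci helper) are illustrative assumptions layered on top of my point estimates, not figures from the linked reports:

```python
# Rough Monte Carlo version of the BOTEC above. Ranges are illustrative
# guesses around my point estimates, not data from the reports.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

def lognormal_90ci(low, high, size):
    """Lognormal samples with an approximate 90% CI of (low, high)."""
    mu = (np.log(low) + np.log(high)) / 2
    sigma = (np.log(high) - np.log(low)) / (2 * 1.645)
    return rng.lognormal(mu, sigma, size)

benefit = lognormal_90ci(150_000, 400_000, N)  # counterfactual funds raised ($/yr)
charity_multiple = lognormal_90ci(3, 30, N)    # EA charities vs. counterfactual recipients (~10x)
cost = lognormal_90ci(25_000, 60_000, N)       # organizer time + non-labor costs ($/yr)

# Value gained: redirected funds, discounted by the value the counterfactual
# recipients would have produced (1/multiple), minus costs.
net = benefit * (1 - 1 / charity_multiple) - cost

print(f"median net value ~= ${np.median(net):,.0f} per year")
print(f"P(net value > 0) ~= {(net > 0).mean():.0%}")
```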



  1. (Of course some might say EA hours are super valuable; on the other hand, people get something out of this, it's social, and it may not substitute for time spent solving X-risk issues, etc.) ↩︎

Hi David! I apologize for the very slow response. A few points: 
- Your analysis makes me upgrade how important I think diligent time tracking is on this project in future years, segmented e.g. by 'managerial and tech time' vs. 'volunteer/student time'.
- I don't have a go-to answer for you on the time costs for EA GT 2021. We had 2 Ops Specialists (Aisha and Mac) each work ~200 paid hours; I worked about 350 paid hours (including hiring and training); Avi worked probably a few hundred volunteer hours (including hiring and training); Gina and a few others contributed a small number of volunteer hours.
- Can the project's time costs decrease via "learn by doing?" I am somewhat optimistic about this. But it's tricky because historically, new people have had to be trained on the systems and context every year. So processes can be improved, but a big thing is getting the same people to contribute to the project year after year. And this is tough, because it's uncertain the project will run any given year, and it's only seasonal. Ideally, the "institutional knowledge" would sit at an EA org (ideally, with the same people) over the long term. 
- Thanks again for your BOTEC; I enjoyed reading it, and I imagine it has helped folks in the community evaluate the project's value.

Thanks for taking the time to complete and share a first-pass assessment, David! I'll follow up with a bit more info when I'm able.

And thanks for all the work you have done on this project!

I appreciate that - thanks! I have worked a lot on it. A lot of the credit goes to my great EA GT teammates, in present and past years. 

Hi guys - OFTW is interested in hosting this. I'll reach out by email.

Is there funding available for the org/individuals if they take it on?

Hi David! At present, there's no funding secured. That said, the project has received funding in past years. I'm relatively confident (70-80%) that for the right org/individual, there are a few different funding sources that would consider funding it. 
