
Announcing changes to EA Giving Tuesday and Meta's Giving Season match

EDIT: Meta is actively updating the terms of the match, so this information may be out of date; please see https://www.facebook.com/help/332488213787105 for current details. The EA Giving Tuesday team will do our best to keep EAGivingTuesday.org up to date. We are actively exploring how we can coordinate around the match in a way that makes sense.

On Nov 1st 2022, Meta announced a significant change to their annual Giving Tuesday donation matching scheme, which affects EA Giving Tuesday.

Here are the high-level details of this year’s match:

“To help nonprofits jumpstart their Giving Season fundraising, Meta will match your donors’ recurring donation 100% up to $100 in the next month (up to $100,000 per organization and up to $7 million in total across all organizations). All new recurring donors who start a recurring donation within November 15 - December 31, 2022 are eligible. Read the terms and conditions.”

The match now requires participants to set up a recurring donation in order to receive up to $100 in matched funds. The matched funds are provided once the second transaction goes through, i.e. you need to donate for two months to receive the match. Edit (4 Nov): We are unsure, but it seems possible that a donor could set up recurring donations to multiple organizations (up to 200) in order to receive multiple matches (for a total of up to $20,000 in matches).
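To make the mechanics concrete, here is a minimal sketch of the per-donor arithmetic (our own illustration, not Meta’s wording; the 200-organization figure is the unconfirmed possibility mentioned above, and the terms may change):

```python
# Rough sketch of the per-donor match arithmetic, based on our current reading
# of the terms (the multi-organization behaviour is unconfirmed and may change).

MATCH_PER_ORG = 100        # Meta matches up to $100 of recurring donations per organization
MAX_ORGS_PER_DONOR = 200   # possible (unconfirmed) cap on recurring donations per donor

def matched_dollars(monthly_donation, num_orgs=1):
    """Matched funds one donor could generate, assuming the match is paid once
    the second monthly transaction goes through for each organization."""
    return min(monthly_donation, MATCH_PER_ORG) * num_orgs

print(matched_dollars(100))                               # one charity   -> $100 matched
print(matched_dollars(100, num_orgs=MAX_ORGS_PER_DONOR))  # 200 charities -> $20,000 matched (if allowed)
```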

We believe this opportunity is only available in the US, since the relevant functionality appears to be US-only.
 

What does this mean for EA Giving Tuesday?

In the past, the value proposition of EA Giving Tuesday was to organise around the 100% match on the morning of Giving Tuesday. With the lower match amount per donation and the requirement that donations be recurring, we think the match is much less likely to be competitive, and therefore continuing the previous level of coordination does not make sense.

Given the new recurring-donation requirement, the EA Giving Tuesday team has also decided that it makes the most sense for people to donate directly to the charities via Facebook. Because of the ongoing administrative requirements of recurring donations, EA Giving Tuesday will not support any organisations that require re-granting or have donation restrictions. 501(c)(3)s registered with Facebook's fundraising tools will be able to participate in Meta's Giving Season match.

We encourage you to look for effective charities on Facebook for the match, and we will list effective charities that are interested in participating on our website.

EA Giving Tuesday will share the details of any matching opportunities we think are worthwhile and conduct an impact analysis at the end of the season. 

 

How can I get my donations matched this year?

This year there are two match opportunities we are sharing with you: Meta's Giving Season match described above, and a separate match from Every.org.

Once you’ve received confirmation of a match, please let us know the details via this impact evaluation form so we can quantify the value of these opportunities for future years.
 

We’re disappointed that the match has changed significantly from the previous year, but we hope you find value in the matching opportunities from both Meta and Every.org.

Throughout the season, we will continue to search for new matching opportunities that can shift donations towards highly effective charities.

You can read more about EA Giving Tuesday at EAGivingTuesday.org

Grace and the EA Giving Tuesday Team 2022


 

Comments (11)



TL;DR Update on my thoughts: I've updated significantly downwards on the probability that trying to make the Meta Giving Season match go very well for EAs is worthwhile and I am not investigating it further, pending the EA GT Team's reply to me (I emailed Philip some more information).

More details:

After getting some data on recurring Facebook donation timestamps, it appears to me that getting matched will likely be a random lottery for anyone who sets up their recurring donation within the first few hours of the match, only slightly weighted towards those who set up their donations early.

Specifically, the data suggests that the second donation that goes through on December 15th will go through not at the exact same time as the first donation on November 15th, but at a random time in a ~7 hour window (based on 11 data points). That's quite a bit of variation, which means a donor who donates in the first second on Nov 15th can get beaten by someone who donates a few hours later.

(This assumes that 70,000 recurring $100/mo donations will be set up within the first ~7 hours of the match. Given that ~$150M was donated in a single day on Giving Tuesday in years past, and that $7M in donations was made in the first couple seconds last year, this seems quite plausible to me, though not guaranteed. If the matching funds actually last for much longer (e.g. a full day or longer), then a donor probably can get matched with high probability by donating right at the beginning of the match.)
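To make the lottery intuition concrete, here is a minimal Monte Carlo sketch (mine, not from the original analysis). Every parameter is an assumption for illustration: setups spread uniformly over a 7-hour window, a pool that is roughly 3x oversubscribed within that window, and second donations clearing at the setup time plus a uniform 0-7 hour offset. The answer is very sensitive to how oversubscribed the pool actually turns out to be.

```python
import random

# Illustrative Monte Carlo sketch of the matching "lottery" (assumptions, not data):
# - competing recurring donations are set up uniformly over the first 7 hours
# - the matching pool is ~3x oversubscribed within that window
# - each second donation clears a month later at (setup time + a uniform 0-7h offset);
#   the shared one-month delay is identical for everyone, so it is dropped.
# Figures are scaled down 10x from the $7M / $100-per-donation numbers purely to
# keep the simulation fast; the estimated probability is essentially unchanged.

WINDOW = 7 * 3600        # 7-hour window, in seconds
N_COMPETITORS = 20_000   # competing recurring donations set up in the window (assumed)
POOL_SIZE = 7_000        # second donations the matching pool can cover (assumed)

def matched(our_setup_time=0.0):
    """True if a donor who sets up at `our_setup_time` lands within the pool."""
    our_clear_time = our_setup_time + random.uniform(0, WINDOW)
    cleared_earlier = sum(
        random.uniform(0, WINDOW) + random.uniform(0, WINDOW) < our_clear_time
        for _ in range(N_COMPETITORS)
    )
    return cleared_earlier < POOL_SIZE

trials = 100
p = sum(matched() for _ in range(trials)) / trials
print(f"Estimated P(matched | set up in the first second): {p:.2f}")
# Under these particular assumptions the first-second donor is favoured but still
# misses the match a noticeable fraction of the time; with more front-loaded setup
# times the outcome looks even more like a pure lottery.
```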

So I don't think I can be confident that a bunch of EA donors setting up their donations right when the match begins will almost all get their second donations matched, and because of that I think it's probably not worthwhile to put in the effort to try to get ~$100k-$1M matched by EAs. The strategy to do so would involve a lot of donation trades and would take a lot of organizer time, plus be asking a lot from donors, so I wouldn't want to do it unless there was a high chance that a high fraction of EAs' donations would actually get matched.

I'm not involved in organizing EA Giving Tuesday this year, but want to share my personal opinion here that I initially think there's a significant chance EAs could still direct >$100k of matching funds to effective charities despite the more severe match limits.

My very tentative guesses: ~75% that the strategy would work to direct >$100k of matching funds given the level of EA interest in past years, but only ~30% that it would be worthwhile given the time cost to EAs.

Rather than describe the potential strategy here publicly at the moment, I emailed the proposal to the EA Giving Tuesday team to consider. (If they don't release some further update in the next week about it, I'll reply to this comment here with an update on whether the strategy seems feasible and worthwhile after further consideration.)

EDIT: Here's my update 3 days later.

Thanks Will! Just want to publicly acknowledge that we'd like to chat with you more about it.

I agree that there's still a chance for EAs to shift a good amount to effective charities through the match, but I think in general there's less of a need for the previous level of coordination.

Looking forward to seeing what we can work out together! - Grace

About Giving Season at Meta
https://www.facebook.com/help/332488213787105

(Apparently this was published within the last day, according to what I was told by a member of the EA GT team, which is why it's not linked to in the above post)

Thanks for the update. If you have to donate twice, it's really a 50% match on up to $200 given per donor per organization, right?

Edit Nov 5th: It does indeed seem to be a 50% match (you donate twice and only your second donation, in December, gets matched), though I wouldn't be shocked if it ends up being different (e.g. due to Meta changing the terms in the next 10 days).
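For what it's worth, a quick back-of-the-envelope sketch of that 50% reading (assuming, as the terms currently suggest, that only the second monthly donation is matched):

```python
# Back-of-the-envelope for the "effectively a 50% match" reading, assuming only
# the second (December) recurring donation is matched, up to $100.
november_gift = 100
december_gift = 100
meta_match = min(december_gift, 100)         # only the second transaction is matched

donor_gives = november_gift + december_gift  # $200 out of pocket
charity_receives = donor_gives + meta_match  # $300 total to the charity
match_rate = meta_match / donor_gives        # 0.5, i.e. a 50% match on the $200 given

print(donor_gives, charity_receives, match_rate)
```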

It's unclear to me. Meta just changed the terms here:

I'm currently seeing:

  • Limit of one recurring donation up to $200.
  • Limit of $20k per donor, across all eligible nonprofits.

This read "$100" and "$25k per donor" a couple hours ago (when I first saw it).

Just to further update: the limit per recurring donation was recently brought back down to $100, as I know Will is already aware.

Yes, we think that's right. We've just quoted directly from Meta above about the match. 

We're also seeking clarification from Meta on a few things, but based on past experience they are unlikely to respond.

Since Meta hasn't specified a start time for the match, should we assume you're eligible if you donate after midnight on Nov. 15 in your time zone?

It's unclear to me exactly when the match will begin. In past years the match has started on Eastern Time, so I would assume ET is when the match will begin, but this could be inaccurate.
