
Share where you donated or plan to donate in 2023 and why

See also: How would your project use extra funding? and Winners in the Forum’s Donation Election (2023)

I encourage you to share regardless of how small or large a donation you’re making, and you shouldn’t feel obliged to share the amount that you’re donating.

You can share as much or as little detail as you want (anything from 1 sentence simply describing where you’re giving, to multiple pages explaining your decision process and key considerations). You can also clarify whether you're interested in feedback or follow-up questions or not. 

And if you have thoughts or feedback on someone else’s donation plans, I’d encourage you to share that in a reply to their “answer”, unless the person indicated they don’t want that. (But remember to be respectful and kind while doing this! See also supportive scepticism.)

Why commenting on this post might be useful:

  • You might get useful feedback on your donation plan
  • Readers might form better donation plans by learning about donation options you're considering, seeing your reasoning, etc.
  • Commenting or reading might help you/other people become or stay inspired to give (and to give effectively)


Credit: DALL-E 3

Thanks for helping organise the donation events, Lizka!

In agreement with my comment last year, I made 97 % of my donations for the year a few months ago to the Long-Term Future Fund (LTFF). However, I am now significantly less confident about existential risk mitigation being the best way to improve the world:

  • David Thorstad's posts, namely the ones on mistakes in the moral mathematics of existential risk, epistemics and exaggerating the risks, increased my general level of scepticism towards deferring to thought leaders in effective altruism before having engaged deeply with the arguments. It is not so much that I got to know knock-down arguments against existential risk mitigation, but more that I became more willing to investigate the claims being made.
  • I noticed my tail risk estimates tend to go down as I investigate a topic. In the context of:
    • Climate risk, I was deferring to a mix between 80,000 Hours' upper bound of 0.01 % existential risk in the next 100 years, Toby Ord's best guess of 0.1 %, and John Halstead's best guess of 0.001 %. However, I looked a little more into John's report, and think it makes sense to put more weight on his estimate.
    • Nuclear risk, I was previously mostly deferring to Luisa's (great!) investigation for the effects on mortality, and to Toby Ord's 0.1 % existential risk in the next 100 years. However, I did an analysis suggesting both are quite pessimistic:
      • "My estimate of 12.9 M expected famine deaths due to the climatic effects of nuclear war before 2050 is 2.05 % the 630 M implied by Luisa Rodriguez’s results for nuclear exchanges between the United States and Russia, so I would say they are significantly pessimistic[3]".
      • "Mitigating starvation after a population loss of 50 % does not seem that different from saving a life now, and I estimate a probability of 3.29*10^-6 of such a loss due to the climatic effects of nuclear war before 2050[58]".
    • AI risk, I noted I am not confident superintelligent AI disempowering humanity would necessarily be bad, and wonder whether the vast majority of technological progress will happen in the longterm future.
    • AI and bio risk, I suspect the risk of a terrorist attack causing human extinction is exaggerated.

I said 97 % above rather than 100 % because I have just made a small donation to the EA Forum Donation Fund[1], distributing my votes fairly similarly across the LTFF, Animal Welfare Fund, and Rethink Priorities. LTFF may still be my top option, so I might have put all votes on LTFF (related dialogue). On the other hand:

  • I was more inclined to support Rethink's (great!) work on the CURVE sequence (whose 1st post went out about 1 month after I made my big annual donation). I think it is stimulating some great discussion on cause prioritisation, and might (I hope!) eventually influence Open Phil's allocation.
  • I agree animal welfare should be receiving more resources, and wanted to signal my support. Also, even though I am all in for fanaticism in principle (not in practice), I also just feel like it is nice to donate to something reducing suffering in a more sure way now and then!
  1. ^

    Side note. No donation icon showed up after my donation. I am not sure whether one is supposed to appear. Update: you have to DM @EA Forum Team.

I am now significantly less confident about existential risk mitigation being the best way to improve the world

Meanwhile, I have updated further away from existential risk mitigation. I only plan to donate late in the year, but, if I were to do it now, I would go for the best animal welfare interventions (e.g. the ones recommended by Giving What We Can) instead of LTFF. On top of what I said above:

  • I think extinction risk from wars, nuclear wars, asteroids and comets, and supervolcanoes is astronomically low, and has often been greatly overestimated in the effective altruism community (see comparison with Toby Ord's estimates).
  • Even conditional on a nuclear/volcanic/impact winter causing human extinction, I believe the probability of not fully recovering would only be 0.0513 % (relatedly). I guess this would be even lower for a pandemic not involving advanced AI, as it would arguably not lead to so many extinctions in humans' past evolutionary path.
  • I became more sceptical about bio extinction risk after:
    • Reading more posts of David Thorstad's series on bio risk, and skimming some of the linked sources.
    • Getting a sense that the cost-effectiveness of solutions to mitigate bio risk is often overestimated.
    • Listening to Sonia Ben Ouagrham-Gormley on Barriers to Bioweapons.
    • Having a negative impression of the methodology used in Appendix 1 and 2 of this report to estimate the probability of wildfire and stealth pandemics[1]. This is not ideal, because I am not aware of many attempts to estimate bio risk, and I tend to put more weight on quantitative estimates. Millett 2017 is another such quantitative attempt, and I agree with David Thorstad that it has serious flaws (for example, it does not account for tail risk usually decaying faster as severity increases).
  • I feel like the power of governments to mitigate global catastrophic risk, if they perceive there is such a risk, is often underestimated.
  • It is unclear to me whether tail risk is neglected in the relevant sense.
    • To illustrate, I commented that:
      • If the goal is saving lives, spending should a priori be proportional to the product between deaths and their probability density function (PDF). If this follows a Pareto distribution, such a product will be proportional to “deaths”^-alpha, where alpha is the tail index.
      • “deaths”^-alpha decreases as deaths increase, so there should be less spending on more severe catastrophes. Consequently, I do not think one can argue for greater spending on more severe catastrophes just based on it currently being much smaller than that on milder ones.
      • For example, for conflict deaths, alpha is "1.35 to 1.74, with a mean of 1.60", which means spending should a priori be proportional to "deaths"^-1.6. This suggests spending to decrease deaths in wars 1 k times as deadly should be 0.00158 % (= (10^3)^(-1.6)) as large.
    • In reality, saving lives in more severe catastrophes should be weighted more heavily. However, it looks like saving lives in normal times is better to improve the longterm future than doing so in catastrophes.
  • I found the submissions of the winners of the 2023 Open Philanthropy AI Worldviews Contest quite compelling.
  • I very much agree with Matthew Barnett's points about human disempowerment due to advanced sentient AI not being obviously bad (relatedly). To illustrate:
    • Humans currently have control over the future, as advanced misaligned AI about to cause human extinction would have.
    • Humans have caused the extinction of many less powerful species without arguably posing any meaningful existential risk in the process of doing so.
  • I have been going through posts tagged under AI risk skepticism, and finding some of the arguments for lower risk quite good.
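As a rough sketch, the a priori Pareto spending heuristic in the bullets above can be checked numerically. The tail index of 1.6 and the 1,000x severity ratio are the figures quoted in the comment; the function name is mine, purely for illustration:

```python
# Sketch of the Pareto-tail spending heuristic quoted above.
# Assumption (from the comment): spending proportional to
# deaths * PDF(deaths), with the PDF following a Pareto tail,
# giving spending proportional to deaths^-alpha.

def relative_spending(severity_ratio: float, alpha: float) -> float:
    """A priori spending on a catastrophe `severity_ratio` times as
    deadly, relative to spending on the baseline catastrophe."""
    return severity_ratio ** (-alpha)

# Mean tail index for conflict deaths quoted in the comment: 1.60.
ratio = relative_spending(1e3, 1.6)
print(f"{100 * ratio:.5f} %")  # prints "0.00158 %", matching the comment
```

This only reproduces the a priori baseline; as the comment notes, the weighting should then be adjusted upward for the greater long-term importance of more severe catastrophes.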
  1. ^

    Kevin Esvelt discussed wildfire and stealth pandemics on The 80,000 Hours Podcast.

In 2023, I donated 46,645 $, which represents ~75% of my post-tax income

[My donations are a bit messy, since I often asked employers or clients to donate my "salary" to a charity instead of having it paid out to me and only then re-directing it. Sometimes this limited my choices of where to donate, and not all employers offered this. I work on AI Safety, so my donations go towards GHD, which reflects my hedging; there are also some other philosophical reasons, which I am happy to share if you reach out to me.]

My 2023 donations will be split as follows:

  • GiveWell (Top Charities Fund): 26,655 $
  • GiveDirectly: 6,000 $
  • Malaria Consortium: 3,000 $
  • CEA [1]: 3,150 $
  • Misc [2]: ~2,340 $
  • Effektiv Spenden (as gift vouchers): ~5,500 $

Going forward, I will re-evaluate whether to include or even prioritize animal welfare in my giving (I had previously decided against that, but I'm now questioning my reasoning behind that decision).  

[Edited to include new donations that I just made]

  1. ^

    This is compensation for contracting work for which I negotiated an hourly rate that I then never claimed.

  2. ^

    This mostly covers smaller amounts of work test compensation that wasn't claimed or re-directed immediately, or work that I was offered money for but ultimately turned down the payment. I don't want to name any organization here but am happy to share details if you are curious. 

Congratulations! 75% of your post-tax income is impressive. I am at 63%, but I don't think I can cut much more of my living expenses (currently living at 120% of the US Federal poverty level). I guess I must be saving too much for retirement. Like you, I am also debating how much to contribute to animal welfare, so your thoughts are useful.

Hi Diego!

Thanks for your reply. I don't know your financial situation so I don't want to make assumptions but I think saving for retirement or building some general runway is important, and I would never want you to think that you aren't doing enough, especially if you are donating 63% (!) of your income. That's fantastic! And there will always be someone who donates more than you 😉 it's not a race! 63% might be what works for you, and that's great.

I should also note that while I have always been frugal, I was only able to donate this much because for a large part of the year, I didn't have to pay anything for housing (and, sometimes, meals), and I didn't count "immediate" donations of the kind mentioned above as income, so this maybe explains the high percentage. In 2024, I will likely move, and other changes in my personal life will likely mean I will get a lot closer to ~50% or less.

Thanks for sharing, MvK!

Going forward, I will re-evaluate whether to include or even prioritize animal welfare in my giving (I had previously decided against that, but I'm now questioning my reasoning behind that decision).  

I would be curious to know what led you to such reevaluation.

Aim to finish up 2023 having donated:

  • 16k to AI safety / long term future 
  • 9k to animal suffering 
  • 4k to global health / well-being

This year I wanted to shift more toward existential risk, vs last year, which was mostly global health and well-being, given the increased concerns and seemingly accelerating timelines.

Also put money into EA community side-projects:

  • 10k for accountability coaching app (pro-bono for EAs)
  • 3k for EA kids storybook project

Happy with the year - also took the Giving What We Can pledge - better late than never!

ps - thanks for all the folks working hard in non-profits, fighting the good fight...

I generally choose on a month by month basis. I don't claim this is effective.

Things that I guess might be the most effective in the coming months? EA funds, Rethink and Animal Welfare stuff. 

(I find it uncomfy to talk about justifications for this but whatever. Feel free to correct errors here)

EA funds - I remember reading @Linch's case that money there was really marginal when he wrote it. I think they've had more money since but it still might be very effective.

Rethink - I know and trust Peter and often see reflections of work I think is interesting in Rethink (eg the surveys and animal weights). If their animal weights are even 1% likely then that implies a big shift in how we should spread impact. 

Animal Welfare stuff - seems underrated compared to humans.

Why not AI?

Well my P(doom) is 2-8% which is comparably lower than many people's. Also I am unsure that marginal money is spent well. I hear about and see non-trivial levels of grift. I have given to AI pause stuff before and might do that, but currently I feel like AI stuff is where I want it in terms of many of the axes I think I could donate to effect.

What else would excite me?

Sustainable immigration stuff. I think if that could happen there would be a lot of good outcomes pretty cheaply. Note I don't mean more immigration, I mean more sustainable immigration: how can we increase levels of immigration without increasing backlash? (In the UK there has been a huge increase in concern about immigration, mainly caused by the government.)

I hear about and see non-trivial levels of grift.

Can you share more?


My personal donation budget is down this year, owing to several family issues I've had to pour money into. I've focused more on maintaining support for smaller organizations, for which curtailing support at the three-to-four figure level feels more likely to have a perceptible impact.

My largest donation will be to @NickLaing's OneDay Health. I feel relatively well-equipped to evaluate OneDay. I consulted on the cost-effectiveness analysis posted here early this year, and thought it rather convincing for a smaller organization (even after applying a downward adjustment for early-stage CEAs). I am impressed with Nick personally from his participation on the Forum and conversations I've had with him directly, and I think evaluating the founder is an important element to evaluating smaller organizations. There are other object-level considerations that influenced my decision,[1] and it is a more legible organization to people whose donations I might be able to influence.[2] 

I'm also donating to Legal Impact for Chickens (marginal funding explanation here), partly as a sort of penance and disgorgement for eating meat. I eat too much processed food for convenience (sigh, I know...) and it's hard to find such foods with products from animals who have lived at least net-neutral lives.[3] As for why to contribute to LIC specifically, I'm a US-based lawyer and thus feel better qualified than most EAs to evaluate LIC's work. Also, as someone who is newer to the animal-welfare giving space, it's reassuring that LIC's method of action (i.e., suing corporations who mistreat animals) is obviously resistant to humanewashing.

(This isn't a complete list of donations to which I intend to give.)

  1. ^

    That most of the health centers' operating costs are covered by user fees not only increases leverage but demonstrates that the beneficiaries actually value what OneDay is doing. The existence of significant beneficiary net savings on transportation costs and avoidance of lost work time seems highly likely, and creates a floor value for OneDay's effectiveness (viewing it as akin to a program that makes tiny cash transfers to ill rural Ugandans). 

  2. ^

    E.g., I am planning to pitch OneDay to my church for its Good Friday offering, as it is more legible to that audience than standard EA charities for several reasons. I think demonstrating significant skin in the game would help with that pitch.

  3. ^

    I've seen some calculations for offsetting one's animal-product consumption -- but I have a diverse moral parliament with significant virtue-ethics representation. I don't think it is virtuous to benefit from animals being treated in net-harmful ways, by paying lower prices for processed food and out-of-home meals than I would have paid for net-neutral welfare animal products had they been more readily available to me. Another way of looking at it is that a de minimis offset would contribute to poor epistemics on my part, either because I know I'm still financially benefitting from mistreatment of animals or because paying a few dollars as an offset feels like trivializing the issue. 

Wow thanks so much for these encouraging words Jason, myself and the whole team at OneDay Health appreciate it. 

The great thing about people writing about their donations is that it gives us practitioners a confidence and energy boost. Having smart and thoughtful people like @Jason decide to send their hard-earned cash in our direction, and then write a bit about why, gives us the best end/start to the year possible - and also adds a little bit of healthy pressure and accountability to me personally at the turn of a new year of launching new health centers and reaching increasing numbers of people in remote rural areas with quality healthcare.

Anyway, perhaps a bit cheesy for an EA forum comment but I can assure you it is sincere ;).

Top-line, gave ~25% of my income - primarily to Global Health and Climate causes. This year I focused on a smaller # of organizations at higher levels than in 2022, based on feedback on last year's thread.

  • 26k to GiveWell Top Charities Fund; add'l 11k to Against Malaria Foundation
  • 35k to climate organizations - (EA-ish): Silverlining, Clean Air Task Force; (non-EA - focused on a US state-level organization, data organization, and industry-focused organizations): Fresh Energy, Carbonplan, IREC, InnerSpace
  • Balance to Nuclear Threat Initiative, University of Washington's Virology & Epidemiology Funds

Happy holidays!

I reckon my donations this year will amount to about:

  • $3.7K to animal welfare, via Effektiv Spenden.
  • $1.7K to global health and development, via Effektiv Spenden.
  • $1.1K to the Donation Election Fund.
  • And my labour to mitigating risks from AI. In a way, this amounts to way more than the above, given that I would be earning 2x+ what I am earning now if I were doing what I did before, i.e., software engineering.

I recently reconfigured my giving to be about 85% animal welfare and 15% global health, however, for reasons similar to those spelled out in this post (I think, though I only skimmed that post, and came to my decision independently).

I'm giving to GiveDirectly again, as I have every year since learning about them. I think they're undervalued in the EA community in general, because we don't yet have a way to give enough weight to subjective wellbeing, the value of self-determination, or justice. I think it is good that extremely poor people would have the opportunity to choose how to improve their own lives - rather than those types of decisions being made for them, however rigorously.

because we don't yet have a way to give enough weight to subjective wellbeing, the value of self-determination, or justice

You say "don't yet"... are you aware of anyone working on a project to incorporate deontology or other non-utilitarian factors in cause prioritization?

This year I gave 13% of my income (+ some carryover from last year, which I had postponed) to EA charities. Of this, I gave about half to global health and development (mostly to GiveWell Top Charities, some to Give Directly) and the other half to animal welfare (mostly to the EA Funds Animal Welfare Fund, some to The Humane League). I also gave $1,250 to various political candidates I felt were EA-aligned. In prior years I've given overwhelmingly to global health and development and I still think that's very important: it's what initially drew me to EA and what I'm most confident is good. But last year I was convinced I had underinvested in animal welfare historically and I'm starting to make up for that.

I strongly prefer near-term causes with my personal donations, partly because my career focuses on speculative long-term impact. I'm bothered by the strong possibility that my career efforts will benefit nobody, and want to ensure I do at least some good along the way. I also think that in recent years, the wealthiest and most prominent EAs have invested more money into longterm causes than we can be confident is helpful, in ways that have sometimes backfired, damaged or dominated EA's reputation, promoted groupthink in pursuit of jobs/community, and ultimately saddened or embarrassed me. Relatedly, I think managing public perceptions of EA is inescapably important work if we want to effectively improve government policies in democratic countries. So even on longtermist grounds, I think it's important for self-described EAs at the grassroots level to keep proven, RCT-backed, highly effective charities with intuitive mass appeal on the funding menu (perhaps especially if we personally work on longtermism and want people to trust our motives).

Within neartermism, I like to split my donations across a single-digit number of the most impactful funds or charities. This is because I do not have a strong, confident belief that any one of them is most effective, want to maximize my chance of doing a large amount of good overall, and see hedging my bets as a mark of intellectual humility. I don't mind if this makes my altruism less effective than that of the very best EAs, because I'm confident it's better than that of 99% of people. Likewise, I think the path to effective giving at a societal scale depends much more on outreach to the bottom 90% or so of givers, who give barely any quantitative thought to relative impact, than it does on redirecting donations from those already in the movement.

To be honest, I'm facing a difficult trade-off between whether I should donate more money now to the organizations I traditionally support (e.g. Vegan Outreach), versus investing in the crypto market before the next expected bull run in 2024-2025, after the bitcoin ETF approvals, the bitcoin halving, the next hype cycle, etc -- in hopes that an investment now could yield 10x more money to give later.

I'm curious if any other EAs are thinking about this tradeoff. I know a lot of us got stung, both financially and emotionally, by the FTX disaster. But IMHO, crypto is here to stay -- at least as a hyper-volatile risk asset that can be used to leverage wealth, by those with the knowledge and risk-tolerance to buy low and sell high.

Copying from the Facebook Group

Fellow crypto investor here (since 2015/16), I now run a crypto fund. A few points to make.

1. In the short term, way more money is made on getting in on trends before they are big than on fundamentals, at least historically. You wanted to get in on NFTs early. You wanted to get in on "AI tokens" early. You wanted to get in on yield farming early. You wanted to get in on ICOs early. You wanted to get in on dog tokens early. You wanted to get in on anything trending early. Etc. It didn't matter what project fundamentals were. Trends cause things to go up. There is a large degree of pyramid scheme/ponzinomics involved. You make money by getting in before others. They pump your bags and you dump on them.

2. In the long term, fundamentals are all that matter. The vast majority of junk dies, with founders/insiders/smart traders skimming off a good amount and the bags held by retail traders.

3. I want to push back on the idea that we know a bullrun is coming. We don't. There are some bullish narratives around but also, bitcoin/eth are already up like 3x. What if the bullrun already happened and this is the "cycle's top"? What if the Bitcoin ETFs don't get approved? What if they do nothing? There is no law of nature that causes crypto cycles due to halvings, in fact, you should expect those to be way less prominent now that bitcoin is much larger and most of the coins that will ever exist are already out there. Pompliano's supply squeeze or whatever based on miners is stupid. I think the last one was most likely coincidental due to ZIRP and I don't suppose you have great macro forecasts to suggest we are going back to these territories that haven't been priced into the market.

4. I want to caution people from thinking they have alpha. I think there are good reasons to think EAs can have alpha. Quasi-insider information from being so close to the tech scene, greater understanding of AI progress and effects, some first principles thinking, just being smarter, etc. But usually, retail traders don't have alpha. They are the ones who think they do and pay the people that actually do. You have to be crystal clear as to where your edge comes from and not expand outside of that. Why do you expect this group to be better able to predict a large scale crypto cycle better than people who do this full time with teams of super well paid analysts and quant traders?

I'm not a big EMH proponent, but this group has had a lot of success and now I think has gotten too cocky and now thinks that even weak-EMH doesn't apply to us.

Everybody feels that they are a super savvy investor who will get above market returns and thus "others should donate now while I seek great financial returns to donate more later"

I've thought about this tradeoff a lot. I feel the same way. I have a realized APR over the last 4 years of around 300%. I've also donated ~20% over this time. I can imagine some think that it would have been better for me to continue to invest this money but:

1) I by no means expect this rate of return to continue as I am investing larger sums

2) What if stuff didn't go in my favour

I haven't figured out a better answer than "yes, you should always donate for several reasons, every year". There are also non-financial factors at play here, like value drift, to consider, as well as stable ecosystems/whatever the opposite of the unilateralist's curse is.

I also want to push back (as a crypto investor myself who runs a crypto fund) on the idea that we know a bull run is coming. We don't. Crypto is now a much much larger market than 2012/2013 when EA had a lot of edge investing in crypto and I really doubt we are going to have another macro free money period again.

Bitcoin is an $800B marketcap asset and crypto as a whole is 1.7T. You should expect far greater efficiency now and that retail investors/people with other jobs won't do nearly as well as the pros.

I haven't, but is it still true that the EA donor base's assets are fairly heavily in crypto? So one potential downside would be reinforcing a relative lack of diversification, which could lead to both periods of really bountiful funding for orgs and droughts. Though perhaps at the small/midsize donor level, that isn't as much of a concern and one should go for the best expected return on a risk-neutral basis.

Jason - this is a reasonable concern. The 4-year crypto asset cycle could indeed lead to cycles of windfalls and dry spells for donations. But I guess the burden would be on EA organizations to smooth this out by saving up some of the windfall money to cover the dry spells -- rather than the donors trying to avoid high-volatility assets that show such cycles?

That makes sense in many contexts. I can think of some in which it might not work as well:

  • It is plausible that orgs may have planned for the crypto cycle, but not planned for the FTX collapse and probable clawbacks, and that the assets that would otherwise be used for smoothing had to be diverted. That goes for double if the org was affected by a non-crypto financial issue (e.g., grant reduction/non-renewal from another source). As a practical matter, an org can only prepare for so many contingencies at once . . . even the US military with all its massive spending is designed to sustain a two-front war, IIRC.
  • I think it probably relies on an assumption that the org was old enough / established enough to have received a windfall during the high-water point of the previous crypto boom cycle. Without a windfall, there would be no windfall income to devote to smoothing.

Jason - yes, fair points. 

Hopefully any donations from individual donors who have benefitted from actually taking profits (into fiat currency) from volatile assets (such as crypto) would be less subject to corporate collapses, scandals, and clawbacks than donations from crypto companies such as FTX. But it's well worth thinking about these kinds of financial and legal risks. 

You might be interested to ask in this Facebook group (I would love to help and thinking similar things but know approximately nothing)

Yes, thanks! Will have a look.

I haven't yet decided, but it's likely that a majority of my donations will go to this year's donor lottery. I'm fairly convinced by the arguments in favour of donor lotteries [1, 2], and would encourage others to consider them if they're unsure where to give. 

Having said that, lotteries generate fewer fuzzies than donating directly, so I may separately give to some effective charities which I'm personally excited about.

I’m donating 10% this year, probably all towards nonhuman animal welfare via the ACE Recommended Charity Fund.

  • Animal issues seem much more neglected than global health & poverty.
  • X-risk seems much less funding-constrained than animal stuff.

If there were an obvious way to support longtermist animal stuff, I'd probably allocate something towards that. In particular, I think someone should be lobbying AI companies to take animal welfare more seriously and to get their models to not tacitly support factory farming. I also think digital sentience seems important and neglected, but I basically trust OpenPhil to do a good job funding that type of research.

I'm giving to the EA Animal Welfare Fund.  


I thought this was likely among the best giving opportunities around.  And then was further persuaded by the investigation from GWWC.


Is it okay if I post here? I’m not an EA but am curious about the movement.

To answer OP’s question: my giving this year has focused on animal rights and welfare—local shelters, pro-vegan organizations, pet and wildlife rehabilitation. I’ve also given direct aid to people experiencing financial crisis in my social circle, which isn’t “charity” but is part of my personal mission of care.

If my financial situation ever does improve, I’d love to give more and also fund anti-AI / low-tech / degrowth initiatives.

Hi Hayven- yes, you're very welcome to post here. 
Thanks for caring about animals and the people around you! 
If you're interested in helping the most animals you can with some of your donations, you may be interested in this recent post from the EA animal welfare fund. Giving What We Can recently evaluated them as a top rated fund for animal welfare, so they are likely to be one of the absolute best places you could donate to help animals. 

Thank you so much, Toby! I'll read the post today and see what I think. 

It's honestly kind of refreshing to see that concern for (non-human) animals is so widespread in the EA movement just because it most certainly isn't in wider society.  I have a lot of hesitation about aligning myself with any ideologies, but there's something really refreshing about EA's care for animals as more than just means to human ends. 

I was originally not sure if I would donate this year, as my living expenses skyrocketed. I wound up donating to The Humane League--the donation was much smaller than ones I've sent to charities in previous years, but THL was a new charity for me. I realized I have been underweighting animal welfare relative to my values, due to my discomfort thinking about it. I decided to donate to THL both because I was convinced by the cost-effectiveness argument, and as an expression of my ongoing effort to bring my actions more in line with my values. Here's hoping that each new year finds me more ethical and compassionate than the last. :)

I divide my donation strategy into two components:

  1. The first one is a monthly donation to Ayuda Efectiva, the effective giving charity in Spain, which allows fiscal deduction too. For the time being, they mostly support Global health and poverty causes, which is boringly awesome.

  2. Then I make one-off donations to specific opportunities as they appear. Those include, for example, a donation to Global Catastrophic Risks to support their work on recommendations for the EU AI Act sandbox (to be first deployed in Spain), some volunteering for the FLI existential AI risk community, and my donation to this Donation Election, to make donations within the EA community more democratic :)

For this Donation Election I voted for Rethink Priorities, the EA Long-Term Future Fund, and ALLFED. ALLFED's work seems pretty necessary and they are often overlooked, so I am happy to support them. The other two had relatively convincing posts arguing for what they could do with additional funding. In particular, I am inclined to believe Rethink Priorities' work benefits the EA community quite widely and am happy to support them, and would love them to keep carrying out the annual survey.

Last year's New Year's resolution was to trial giving 10%. I've hit it! 

For a long time I procrastinated, and ended up donating pretty much exclusively during December. This is mostly because, thanks to EA and a philosophy degree, it felt as though I would have to solve many intractable questions in order to donate at all. Forcing myself to stick to a commitment was the way out of this dead end. I think this has overall been valuable, even though it probably leads to sub-optimal donating (i.e., I didn't give myself the option to wait until a great opportunity came up). My donations this year looked like this:

I currently like the idea of giving to Funds rather than directly, because it takes some of the evaluating effort off of me. However, I expect I will feel more involved in any wins that Legal Impact for Chickens (the only charity I gave to directly, because of their theory of change and this comment) has over the next year, so there may be a trade-off in sustainable motivation. 

The main aim of my donating this year was to start myself on a path to taking a giving pledge or, more generally, donating in a way that works for me. This meant that this year, I focused on donating to causes I am pretty confident in, and feel good about. I hope that in the future I'll be open to options like taking riskier bets, saving my donations for the right moment, or donating an entire year's worth to one project. 

But, for now, it is good to know that my money is going to avert suffering for people and animals, and I can go into next year a little more integrated with my professed values. 

There was an earlier post from lots of people at CEA, including me: Here’s where CEA staff are donating in 2023

Quick summary of my section: I donated to the Donation Election Fund for the reasons described here, to someone's political campaign[1], and in some cases I didn't take compensation I was supposed to get from organizations I'd happily donate to. 

  1. ^

    I feel weird donating to political campaigns (I grew up ~avoiding politics and still have a lot of the same beliefs and intuitions). But I talked to some people I know about the value of this campaign and tried to estimate the cost-effectiveness of the donation (my conclusion was that it was very close to donating to the LTFF, even when I was ignoring impact that might come from animal welfare improvements, which is important to me), and was compelled by the consideration that I had an unusual ability to donate to the campaign as a US citizen. (I'm interested in hearing people's thoughts about this, but will probably not actively participate in public discussions about the decision.)

Just to comment on your footnote: my intuition is that political spending can be very effective and it is an important component of my family's donations. For anyone interested in this I really recommend Ezra Klein's interview with Amanda Litman from Run for Something. 

She speaks compellingly about how most political donations, especially on the left, are reactionary and not necessarily effective, but about how in certain races and particularly state and local races, tiny sums of money can really make a huge difference. I don't think she explicitly uses an ITN framework but it definitely fits, and their work is in what has in recent history been a very neglected space IMO.

You can see where some GWWC team members are donating in Where are the GWWC team donating in 2023? 

TL;DR for me (excluding donations to GWWC)

I'm still on the fence about going all in with longtermist cause areas. Therefore, I usually divide my donations between Animal Welfare (Animal Charity Evaluators Fund), the GWWC Top Charities Fund, and the Long-Term Future Fund (pandemic risk and AI safety mainly).

Although not relevant to the question, a really useful tip is to donate via friends whose employers (such as Google and Microsoft) may match their employees' contributions.

This year, I was able to donate 44% of my pre-tax income to effective charities. Here's how I split the overall $77,500 I directed to help others:

  • GiveWell’s All Grants Fund: 30.4%
  • Malaria Consortium (via The Life You Can Save): 17.4%
  • New Incentives (via The Life You Can Save): 17.4%
  • GiveWell’s Top Charities Fund: 12.9%
  • The Life You Can Save (all recommended charities): 12.9%
  • Helen Keller Intl (via The Life You Can Save): 4.5%
  • GiveDirectly (via The Life You Can Save): 4.5%

Notably absent from this portfolio are contributions to animal welfare. While I have abstained from eating meat, dairy, or eggs during the last 16 years, I still have not allocated any funds to organizations like the Humane League, the Good Food Institute, etc. I am highly confident that this will change in 2024, and I welcome supporting or dissenting thoughts about it. 

This year I decided to focus my donations more, as in the past I used to have a "charity portfolio" of  about 20 charities and 3 political parties that I would donate to monthly. This year I've had some cash flow issues due to changes with my work situation, and so I stopped the monthly donations and switched back to an annual set of donations once I worked out what I can afford. I normally try to donate 12.5% of my income annually averaged over time.

This year's charitable donations went to: The Against Malaria Foundation, GiveDirectly, Rethink Priorities, and AI Governance & Safety Canada. I also donated again to some political parties, but I don't count those as charity so much as political activism, so I won't mention them further.

AMF has been my go-to, the charity I donate the most to, because of GiveWell's long-running recommendation. When in doubt, I donate to them.

GiveDirectly is my more philosophical choice, as I'm somewhat partial to the argument that people should be able to choose how best to be helped, and cash does this better than anything else. I also like their basic income projects as I worry about AI automation a lot, and I think it has the most room for growth of any option.

Rethink Priorities is, well, I'll be honest: a big part of donating to that outfit is that I have an online acquaintanceship with Peter Wildeford (co-CEO of RP) that goes back to the days when he was a young Peter Hurford posting on the Felicifia utilitarianism forum, and I think a team co-led by him will go places and deserves support (he also gave a pretty good argument for donating to RP on the forum and Twitter). I know Peter enough to know that he's an incredibly decent human being, a true gentleman and a scholar, and any org he's chosen to co-run is going to be a force for good in the world. Also, I'm a big fan of the EA Survey as a way to gauge and understand the community.

AIGS Canada is an organization that's closer to home and I think they do good work engaging with the politicians and media up here in Canada, doing a much needed service that is otherwise neglected. They're kinda small, so I figure even a small donation from me will have an outsized impact compared to other options. Full disclosure: I'm in the AIGS Canada Slack and sometimes partake in the interesting discussions there.

The first two would be my primary recommendations to people generally. The latter two I would suggest to people in the EA community specifically.

I go into somewhat more detail about my general charity recommendations and also mention some of the ones I used to donate to but don't anymore here: http://www.josephius.com/recommended-charities/

This year I am "donating" to my career transition. As I had an unfortunate gap between grants, I had to scramble to find temporary employment until the next batch of grant money comes in. This means I am now at a lower salary. With 2 kids and being the main breadwinner, there isn't much left to donate, and I do not want to put further strain on my young family by further reducing our expenditures. So in a sense, I have donated the difference between my pre-grant salary and my current salary to my own career transition and EA entrepreneurial activities, which I hope have a fair shot at reducing existential risk from bio.

That said, my family is super well off globally speaking, has all our basic needs covered, and is enjoying a nice holiday together.

If you think this qualifies for putting a heart next to my profile, please react with a heart and if I get above let's say 10 hearts I will click the banner (or let me know if I shouldn't!).

I wrote about my 2023 donations at length here.


  • 90% to the Against Malaria Foundation (the only GiveWell-recommended charity with a Canadian entity)
  • 5% to a local harm reduction charity (the local charity that probably saves the most lives per dollar donated)
  • 5% to a local financial aid clinic (expected ROI of something like 20-30x cash)

I'm waiting until 2024 or later to decide where to give, as GWWC plans on investigating several new charities, as well as reviewing current ones, so that my giving will be more accurate, and charity evaluators are generally becoming more accurate. I also plan on donating much later, and investing the money now, to increase the amount of good done.

In a recent post, I shared where I am donating through the end of the year.


I will likely write a post in January to share my 2024 giving plan. 

I probably will end up on the receiving end of donations (in some form or other :/) rather than give myself (uh oh, had like $4,000 of disposable income this year). 

But if I had the funds, I would donate to Sea Shepherd; I like their proactive approach to animal suffering (in this case, marine wildlife). Who knows, I might join them full-time one day, even if my career is the complete opposite of being out there on the open sea.

Another one that comes to mind is the International Network of Street Papers and its member papers, which vary by location. The business model is fairly simple, and the investment per person is fairly low given that the return is so much greater. The local paper here has been running for the past 10 years, and their political involvement has helped a lot of people, directly and indirectly.

Another aspect is the direct help that marginalized communities get from these street papers. At least here, the paper directly helps with the rehabilitation and resocialization of its sellers (usually homeless, very-low-income, or disabled people), and you can see this on their faces; you can feel it in society on a wider scale.
