
I was recently talking to someone who had just started thinking about effective altruism and was trying to figure out how to work it into their life. They were established in their career, it paid well, and their background wasn't obviously a good fit for direct work, so they were naturally considering earning to give. This prompts the question of how much to give.

"How much?" is a question people have struggled with for a very long time. Donating 10% of income has a long history, and it's common for EAs to pledge to do this; donating 2.5% of wealth is also traditional. If you're earning to give, however, you might want to give more: Julia and I have been giving 50%; Allan Saldanha has been giving 75% since 2019. How should one decide?

I was hoping there were good EA blog posts on this topic, but after spending a while with EA Forum search and Google I didn't find any. Claude kept telling me I should check out Jeff Kaufman's blog, but all I found was a rough post from 2011. So here's an attempt that I think is better than my old post, but still not great.

While EAs talk a lot about principles, I think this is fundamentally a pragmatic question. I find the scale of the world's problems overwhelming; no one has enough money to eliminate poverty, disease, or the risk we make ourselves extinct. This is not to say donations don't matter—there are a lot of excellent options for making the world better—but there's not going to be a point where I'm going to be satisfied and say "Good! That's done now." This gives a strong intellectual pull to donate to the point where donating another dollar would start to decrease my altruistic impact, by interfering in my work; burning out does not maximize your impact!

In the other direction, I'm not fully altruistic. I like some amount of comfort, there are fun things I want to do, and I want my family to have good lives. I'm willing to go pretty far in the altruism direction (I donate 50% and took a 75% pay cut to do more valuable work) but it's a matter of balance.

Which means the main advice I have is to give yourself the information you need to make a balanced choice. I'd recommend making a few different budgets: how would your life look if you gave 5%? 10%? 20%? In figuring out where you'd cut, it might be helpful to ignore the donation aspect: how would your budget change if your industry started doing poorly?

In some ways Julia and I had it easy: we got into these ideas when we were just starting out and living cheaply, while we could still be careful about which luxuries to adopt and could maintain inexpensive tastes. It's much harder to cut back later! So another thing I'd recommend, especially if you haven't yet reached your peak earning years, is to plan to donate a disproportionately large fraction of pay increases. For example, 10% of your (inflation adjusted!) 2024 salary plus 50% of any amount over that.
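If it helps to see the arithmetic, here's a minimal sketch of that kind of rule (the baseline salary, inflation figure, and rates below are placeholders for illustration, not a recommendation):

```python
def donation_for_year(current_salary, baseline_salary, cumulative_inflation,
                      base_rate=0.10, marginal_rate=0.50):
    """Give base_rate of the inflation-adjusted baseline salary, plus
    marginal_rate of anything earned above it (assumes base_rate of the
    full current salary if it ever falls below the baseline)."""
    adjusted_baseline = baseline_salary * cumulative_inflation
    above_baseline = max(0.0, current_salary - adjusted_baseline)
    return base_rate * min(current_salary, adjusted_baseline) + marginal_rate * above_baseline

# Example: $80k baseline salary in 2024, $100k now, ~6% cumulative inflation since then.
print(donation_for_year(100_000, 80_000, 1.06))  # 10% of $84.8k + 50% of $15.2k = $16,080
```

The effect is that your standard of living can still rise with raises, just more slowly than your income does.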

Overall, the goal is to find a level where you feel good about your donations but are also still keeping enough to thrive. This is a very personal question, and people land in a bunch of different places. But wherever you do end up, I'm glad to be working with you.


Comments (13)



From my point of view, the biggest issue that makes this question an everlasting companion for most is uncertainty. Even if I could give 50% away right now and keep the same standard of living, what will that look like in a few years? What if I lose my job in my 50s and struggle to find anything? What if my abilities become meaningless because of technological advancements even earlier?

I would assume that for most people it's not a question of consumption vs. donations, as many essays and books make it sound. It's about the balance between how much to put into your own financial security vs. donating. This is probably much easier to answer for promising, 80,000 Hours-supported geniuses, but a very different picture for the Average Joe who struggled in school and struggled to find employment in the first place. It's probably impossible to give clear answers when taking that into consideration, though.

You could try putting cash into a separate savings account earmarked for donation. When you are happy that you don't need it, donate it. (But maybe spread it over a few years for tax efficiency.)

You've put into clear words the struggle that I have always had. If I had a guaranteed income, or some high level of confidence that I would always be able to find employment and earn income of a certain level, then I'd find it quite easy to give away money. It wouldn't be as scarce a resource.

There are certain parallels to the idea of putting on your own oxygen mask first, as we do need to make sure we are okay before helping others. But I also suppose that the really tricky part is deciding what is okay 'enough' for us.

I strongly agree that you need to put your own needs first, and think that your level of comfort with your savings and your ability to withstand foreseeable challenges is a key input. My general go-to is that the standard advice of keeping 3-6 months of expenses is a reasonable goal, so you can and should give, but until you have saved that much, you should at least be splitting your excess funds between savings and charity. (And the reason most people don't manage this has a lot to do with lifestyle choices and failure to manage their spending, not just not having enough income. Normal people never have enough money to do everything they'd like to; set your expectations clearly and work to avoid the hedonic treadmill!)

That's why my own approach is "FIRE [Financial Independence, Retire Early] first", in which one first plans for a frugal retirement (which, for the USA, requires way less than $1M, possibly less than half of that, so it's highly achievable, and mainly depends on the strength of your frugal muscles, not on above-average earning power). That takes about 7 to 10 years, which can be shortened to 5 if you work hard or are lucky. That amount is then set apart in case your life takes a wrong left turn.

Then you keep working, and either donate everything (since you're already set for life), or at least as high a percentage as you're comfortable with.

  • You have to consider, e.g., the cost of raising kids, since the amount planned for a 50+ year retirement won't have those expenses built in (in the long run, they are "temporary").
  • Plus the general category of "thriving", since if you are optimizing for effectiveness you're likely not optimizing for minimum absolute cost. That's why I'm not just linking Jacob Lund Fisker and telling you $7,000/year is enough (and mind, he kept that up at least through his most recent update in 2019).
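A rough sketch of the arithmetic behind this "FIRE first" plan; the 4% withdrawal rule, the real return, and the spending and savings figures are illustrative assumptions, not part of the comment above:

```python
# Years to a frugal financial-independence target, under stated assumptions:
# target nest egg = annual spending / withdrawal rate (the common "4% rule"),
# with savings compounding at an assumed real return each year.
def years_to_fi(annual_spending, annual_savings, withdrawal_rate=0.04, real_return=0.05):
    target = annual_spending / withdrawal_rate
    balance, years = 0.0, 0
    while balance < target:
        balance = balance * (1 + real_return) + annual_savings
        years += 1
    return years, target

years, target = years_to_fi(annual_spending=25_000, annual_savings=50_000)
print(target, years)  # $625k target, reached in about 10 years under these assumptions
```

With numbers in this range, the 7 to 10 year figure above is plausible, and the savings rate does much more of the work than the return assumption.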

As for the Average Joe... the most limiting resource isn't money at all, but willpower and other cognitive resources. Fortunately, it's not like the Average Joe is an EA, or vice versa.

In any case, consider that my answer to "how much to put into your own financial security vs. donating": not in terms of splitting a wage, but of bypassing the question entirely.

To follow on to your point, as it relates to my personal views (in case anyone is interested), it's worth quoting the code of Jewish law. It introduces its discussion of Tzedakah by asking how much one is required to give: "The amount, if one has sufficient ability, is giving enough to fulfill the needs of the poor. But if you do not have enough, the most praiseworthy version is to give one fifth, the normal amount is to give a tenth, and less than that is a poor sign." And I note that this was written in the 1500s, when local charity was most of what was practical; today's situation is one where the needs are clearly beyond any one person's ability, so the latter clauses are the relevant ones.

So I think that, in a religion that prides itself on exacting standards and exhaustive rules for the performance of mitzvot, this is endorsing exactly your point: while giving might be a standard, and norms and community behavior are helpful in guiding behavior, the amount to give is always a personal and pragmatic decision, not a general rule.

This is extremely relevant for me as I have been thinking a lot about when to start making more serious donations. I discussed some previous blockers here which haven't been resolved. I am therefore considering commissioning some research (ideally with others).

Broadly, I'm interested in better understanding the 'donor's dilemma': if you give money now, you forego the later opportunity to 'give better' due to having improved information, and to 'give more' due to passive income. You also forego the increased financial security that might enable you to have more direct impact (e.g., by taking a lower paid role that has higher impact, or starting a new initiative).

I want somebody to systematically review the literature to capture the different arguments and trade-offs for giving now versus later, and then to create some sort of accessible decision-making tool or process that people like me can use to decide on an appropriate threshold or strategy with respect to giving now versus later.

If anyone is also interested in funding this or knows some existing tools, then please let me know.

For the question of whether to "save to give," MacAskill's paper on the topic was very useful for me. One crucial consideration is whether my donations would grow more in someone else's hands. 

E.g., giving $100k to AMF means fewer people die from malaria, which means more economic growth. Does this generate more value than the ~7%/year my stocks might? I find that people often neglect this counterfactual.
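One way to make that comparison concrete is a minimal sketch like the one below, where the "impact growth rate" and the stock return are placeholder assumptions, not estimates of AMF's actual social return:

```python
# Compare giving $100k now vs. investing it and giving later, under assumed
# compounding rates for "impact if given now" vs. investment returns.
def future_value(principal, annual_rate, years):
    return principal * (1 + annual_rate) ** years

donate_now_equivalent = future_value(100_000, 0.10, 10)  # if the impact compounds at 10%/yr
invest_then_give      = future_value(100_000, 0.07, 10)  # if stocks return ~7%/yr

print(round(donate_now_equivalent), round(invest_then_give))
# ~259,374 vs. ~196,715: giving now wins whenever the assumed impact "growth rate"
# beats your investment return, and loses otherwise.
```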

I just found that Sebastian Schwiecker had written a blog post on the same topic. Because of

"I was hoping there were good EA blog posts on this topic, but after spending a while with EA Forum search and Google I didn't find any."

... I'm leaving this link here :) https://effektiv-spenden.org/blog/wie-viel-soll-ich-spenden/

Thank you for writing this. I have been struggling with this question myself, and your recommendation will hopefully give me the motivation to finally get around to creating a budget.

Having defined budgets has been very helpful for me! Otherwise, I fall prey to the perils of maximization.

I like your posts. They are short and informative.

I really wonder how you manage to have the time to work, take care of the kids, and do other stuff like writing... good posts. It's not just that the topic is usually interesting, but writing short, informative posts is usually much more time-consuming than writing the same thing as a long, unspecific post without links. How do you do it?

Thanks!

I think it's some combination of temperament (I just really like writing!) and practice (I've been writing posts multiple times a week for over a decade)?

I think you're probably also only seeing my better posts, since I don't cross-post most things to the Forum?
