A while back I wrote on my blog about giving later, and I was encouraged to share the post here. The original audience was mostly economists.

Before getting to the post, I would like to say that I think this is a topic that warrants further attention. Thus, I am interested in finding people who have seriously thought about giving later (coming to a conclusion on either side) and interviewing them for both a future blog series and further discussion on this forum. Please get in touch by e-mail if interested.

Original post:

-

One of the more important things I’ve changed my mind about recently is the best cause to donate to. I now put the most credence on the possibility that the best option is donating to a fund that invests the money and disburses it strategically in the future. I will refer to this as “giving later”, though what I actually support is giving now to a donor-advised fund set up to disburse in the future, both because donating now can encourage others to donate and because of the risk that someone who intends to donate later will at some point change their mind.

There are several reasons why I prefer a fund that disburses in the future. First, I believe people currently discount the future too much (see hyperbolic discounting, climate change). If people discount the future, that causes the rate of return on investments to always be higher than the growth rate (else people would not be willing to invest). In economics, the Ramsey equation is often used to determine how much a social planner should discount future consumption. It is specified by r = δ + ηg, where r is the real rate of return on investment, η is the extent to which marginal utility decreases with consumption, g is the growth rate, and δ represents pure time preferences. Unless one personally puts a particularly high value on δ, it makes sense to invest today and spend later to take advantage of the gap between the real rate of return on investment (~7%) and the growth rate (~3-4%).
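To make that gap concrete, here is a minimal sketch (my own illustration in Python, using the rough figures above: r ≈ 7%, g ≈ 3.5%) of how the wedge between the return on investment and the growth rate compounds, assuming the cost of doing a unit of good rises with the growth rate:

```python
# Illustrative sketch: how the gap between the return on investment (r)
# and the growth rate (g) compounds over time. These are the rough
# figures from the post, not a definitive calibration.
r = 0.07   # real rate of return on investment (~7%)
g = 0.035  # growth rate (~3-4%)

for years in (10, 25, 50):
    invested = (1 + r) ** years       # value of $1 invested today
    cost_of_good = (1 + g) ** years   # cost of "one unit of good" if it tracks growth
    print(f"After {years} years: $1 grows to ${invested:.2f}, "
          f"buying {invested / cost_of_good:.1f}x as much good")
```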

How should one set δ? This is a huge open question. Like most effective altruists, I do not believe one should treat people today any differently from people tomorrow. But one might still wish to place a non-zero value on δ due to the risk that people will simply not exist in the future – that nuclear war or other disasters will wipe them out. Economists tend to respect people’s pure time preferences and so end up with rather higher values than effective altruists. The Stern report famously set δ=0.1, while Nordhaus prefers δ=3. The current Trump administration set δ as high as 7, which justifies not doing anything about climate change (see also this nice figure). With a modest δ, it makes sense to invest now and give later according to the Ramsey equation.
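As a rough illustration of how much δ drives the conclusion, the sketch below plugs these figures into the Ramsey comparison: giving later is favored when the market return r exceeds the social discount rate δ + ηg. Values are read as percentage points per year, and η = 1.5 is my own assumption for illustration, not a definitive calibration.

```python
# Sketch of the Ramsey comparison: invest-and-give-later is favored
# when r > delta + eta * g. eta = 1.5 is an illustrative assumption;
# the delta values are the ones cited in the post.
r, g, eta = 7.0, 3.5, 1.5  # percent, percent, unitless

for name, delta in [("Stern", 0.1), ("Nordhaus", 3.0), ("Trump admin (upper)", 7.0)]:
    sdr = delta + eta * g  # social discount rate via the Ramsey equation
    verdict = "give later" if r > sdr else "give now"
    print(f"{name}: delta = {delta}%  ->  discount rate {sdr:.2f}%  ->  {verdict}")
```

Under these assumptions only the low, Stern-style δ favors giving later, which is the sense in which a modest δ is doing the work in the argument.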

A second reason that I prefer a fund that disburses in the future is that our knowledge today is very limited, and it is increasing. I am concerned that research results do not generalize all that well, but with respect to economic development I am optimistic that the situation can improve. With respect to technological change, which could bring huge benefits or risks, we know even less about the problems future generations will face, and we may be able to understand them better in the future. It seems unlikely to me that this exact moment, out of all periods from here on out, is the one offering the best opportunity to do good. We may not recognize the best moment when it comes, but that only pushes the argument back a step: it also seems unlikely that this is the moment, out of the whole foreseeable future, with the best combination of knowledge and opportunity to do good.

These are not novel arguments; versions of them appear in several other blog posts. Two criticisms are commonly raised: that donations today can help improve the long-run growth rate, and that it is not feasible to design and maintain a fund that disburses later without value drift. There are sadly few long-run follow-ups of development interventions, but it seems prima facie unlikely that interventions will have a long-run effect on the growth rate, given that the growth rate is a function of many, many things. I expect most effects to taper off over time, but acknowledge that further research in this area is needed. As for the difficulty of building a persistent and safe institution, I agree that this is challenging, but not altogether impossible, and I know several people working on it right now.

There are several reasons to be optimistic. First, this institution could take into account the risk of e.g. nuclear war or value drift in setting its disbursement scheme, disbursing more aggressively as the risks go up (in the extreme case, disbursing everything right away).

Second, it is easy to think of a “lower-bound” version of this that would not be at much risk of value drift. For example, suppose a fund existed that disbursed the minimum amount allowed every year (U.S. private foundations, for example, are required to pay out 5% per year), and then disbursed the rest in year 10 (a simple simulation of this scheme follows below). In the simplest possible version of this, think of a cash transfer charity like GiveDirectly, which gives out cash to people in developing countries via mobile money transfers. One could set up the institution to make these payments automatically over time, with no deviations allowed (say, through a smart contract). Unless mobile money is no longer in use 10 years from now, this option would seem to strictly dominate giving cash transfers today.

What about other types of transfers, like those to some of GiveWell’s top-rated charities, the Against Malaria Foundation or Deworm the World? It is possible that interventions are particularly cheap now, while they may be more expensive (for the same benefit) in the future. For example, most of the gains in life expectancy have come from improvements in sanitation and basic healthcare reducing under-5 mortality; it is a lot harder to increase life expectancy from 79 to 80. There are arguments against this, which I won’t get into too much, though I will note that under some conditions this situation could be addressed by letting the investments compound for longer before using them. In any case, my assumption is that if the calculus really works out this way, we are back in the world in which the organization disburses everything right away. Further, if one considers the farther future and cares about potential future lives, one may wish to place more emphasis on avoiding existential or extinction risks, and it is not clear that we are at a particularly good time in history to do that.
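Here is a minimal simulation of that lower-bound scheme (my own sketch, again using the rough r and g from above): the fund pays out the 5% minimum each year, disburses the remainder in year 10, and payouts are deflated by the growth rate so they are comparable to giving one unit away in year 0.

```python
# Sketch of the "lower-bound" fund: pay out the 5% minimum each year,
# then disburse everything in year 10. Payouts are deflated by the
# growth rate g so they compare to giving $1 away in year 0.
# r and g are the rough figures from the post, not a calibration.
r, g = 0.07, 0.035
fund, total = 1.0, 0.0

for year in range(1, 11):
    fund *= 1 + r                       # investments compound
    payout = fund if year == 10 else 0.05 * fund
    fund -= payout
    total += payout / (1 + g) ** year   # deflate to year-0 "units of good"

print(f"Growth-adjusted total disbursed: {total:.2f} (vs 1.00 for giving now)")
```

Because r exceeds g here, the growth-adjusted total comes out above 1 – the sense in which, under these assumptions, the scheme would dominate giving everything away today.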

I think it appeals psychologically to many people - myself included - to think that we are living at a particularly important time. However, I recognize that people have thought this throughout history. As more time has passed, I have become increasingly confident that my gut antipathy to the idea that it’s better to “give later” is just a cognitive bias.
