I used to think that earning to save is mainly of interest to the most ‘patient’ longtermists, but I’ve realised that there’s a broader argument for keeping it open as an option, which would mean placing a somewhat higher value on career capital relevant to high earning roles.

Earning to save is like earning to give, but involves investing the money and then donating later. A moderate version would involve donating in 30 years near the end of your career, and is a pretty mainstream practice. A more extreme version would involve trying to invest the money for as long as possible.
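
To make the compounding concrete, here is a minimal sketch; the 5% real return and the dollar amount are purely illustrative assumptions, not claims about actual market returns:

```python
# Illustrative only: value of investing a donation before giving it,
# assuming a hypothetical constant 5% real (inflation-adjusted) return.
donation = 10_000   # amount earned today (illustrative)
real_return = 0.05  # assumed real annual return (illustrative)
years = 30          # the 'moderate' earning-to-save horizon from above

future_value = donation * (1 + real_return) ** years
print(f"${donation:,} invested for {years} years -> ${future_value:,.0f}")
# -> about $43,219, i.e. roughly 4.3x as much to donate at the end
```

Whether that multiple beats giving now depends on how fast the value of the best giving opportunities is itself growing, which is the crux of the discussion in the comments below.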

What follows is a simple argument for keeping open the option of earning to save.

I should clarify: my overall view on earning to save is unsettled. Here I just want to present an argument I hadn’t considered before, but which is only one consideration among many. This post is also not an official position from 80,000 Hours, but rather aims to spark discussion.

The argument:

  1. There is an optimal percentage of resources for the community to spend vs. invest in a given year.
  2. The community may end up spending above this level in the future.
  3. If that happens, having people switch to earning to save may be one of the best ways to deal with it.

I take (1) to be obvious, though there’s a lot of uncertainty about what the percentage should be. At the EA Leaders Forum 2020, the interquartile range of estimates was that the movement should currently be spending 3-8% of its assets per year, and it’s even harder to know what this means for labour compared to money. For the purposes of this post, we don’t need to know what the ideal percentage is – just that there is an optimal level we might plausibly exceed.
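
For intuition about what a 3-8% spending rate implies, here is a minimal sketch of how pooled assets would evolve under different fixed spending rates; the 5% real return is an illustrative assumption:

```python
# Illustrative only: endowment trajectory under a fixed annual spending
# rate, with an assumed (hypothetical) 5% real return on what remains.
def assets_after(years: int, spend_rate: float, real_return: float = 0.05) -> float:
    """Assets after `years`, as a multiple of today's assets, spending
    `spend_rate` of assets at the start of each year."""
    assets = 1.0
    for _ in range(years):
        assets *= (1 - spend_rate)   # spend a fixed fraction
        assets *= (1 + real_return)  # the remainder compounds
    return assets

for rate in (0.03, 0.05, 0.08):
    print(f"spend {rate:.0%}/yr -> {assets_after(30, rate):.2f}x assets after 30 years")
# spend 3%/yr -> ~1.73x, spend 5%/yr -> ~0.93x, spend 8%/yr -> ~0.35x
```

The point is not these particular numbers, but that any given rate implies a trajectory, so it is meaningful to ask whether the community's actual rate is above or below the one it would endorse.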

The more into patient longtermism you are, the lower you will think the optimal level is, and the easier it will be to exceed. However, the EA Leaders Forum respondents were mainly not patient longtermists, and the level they gave of 3-8% still seems possible to exceed.

Why might we end up spending above the optimal level in the future?

Here are some reasons:

  • Large donors often donate on a schedule determined by other factors (e.g. wanting to spend everything before they die).
  • Open Philanthropy intends to increase spending significantly.
  • The EA survey shows that the community is ageing by about one year per year. This will stop at some point, but I think it will carry on for a while, which will mean the community is significantly older ten years from today. As people get older, they have fewer opportunities to build career capital, so more of the community will switch to ‘giving now’. (Though if the community is older, it would also be optimal to give a larger percentage.)
  • If you’re more convinced by patient longtermism than average, you’ll probably think the community should invest more than the typical community member currently does.

I don’t think any of these factors are convincing by themselves, and I’m not predicting we will spend above the optimal rate in the future. Rather, my intention is just to show it’s a possibility.

(You might also respond to these arguments by thinking we should spend more now.)

Why might earning to save be a good option for saving more?

If we want to increase the proportion of resources the community invests for the future, there are four options:

  • Donors reduce how much they donate now and invest instead.
  • People switch towards positions that build career capital.
  • People switch toward working on ‘meta’ problems like global priorities research and building the EA community, which can be thought of as a type of investment that pays off with future work.
  • More people earn to save.

The first two might not be possible for the same reasons mentioned above, which would leave us with meta (which can absorb a limited number of people) and earning to save.

Some other challenges of earning to save

  • It doesn’t look good. Rather than making concrete progress on real problems, and demonstrating moral seriousness by making real commitments, it’ll look like the community just wants to enrich itself.
  • People might not follow through – they might save and never give. This could be offset by using donor-advised funds (DAFs), but that restricts how the funds can be used.
  • It’s probably less motivating.
  • If we remain more constrained by specific skill bottlenecks than funding, earning to save seems less useful.

Overall, I’m pretty unsure earning to save is a good idea, and even more unsure about what fraction of people would ideally do it in the future, but it seems worth considering as a potential option. All else equal, this makes gaining career capital that opens up high-earning options more attractive.

Thank you to Howie Lempel for comments.

Comments



Hmm, this argument feels confused to me.

You say you take (1) to be obvious, but I think that you're treating the optimal percentage as kind of exogenous rather than dependent on the giving opportunities in the system. In fact with the right opportunities even a maximally patient longtermist will want to give 100% of capital (or more, via borrowing) in a given year. (If those opportunities will quickly enough return more capital that's smart+aligned with their values.)

So the argument really feels like:

Maybe in the future the community will give to some places that are worse than this other place [=saving]. If you're smarter than the aggregate community then it will be good if you control a larger slice of the resources so you can help to hedge against this mistake. This pushes towards earning.

I think if you don't have reason to believe you'll do better than the aggregate community then this shouldn't get much weight; if you do have such reason then it's legitimate to give it some weight. But this was already a legitimate argument before you thought about saving! It applies whenever there are multiple possible uses of capital and you worry that future people might make a mistake. I suppose whenever you think of a new possible use of capital it becomes a tiny bit stronger?

It's quite possible I'm mischaracterising your argument somehow! But at present I'm worried that this isn't really a new argument and the post risks giving inappropriate prominence to the idea of earning to save (which I think could be quite toxic for the community for reasons you mention), even given your caveats.

[My own views here, not necessarily Ben’s or “80k’s”. I reviewed the OP before it went out but don’t share all the views expressed in it (and don’t think I’ve fully thought through all the relevant considerations).]

Thanks for the comment!

“You say you take (1) to be obvious, but I think that you’re treating the optimal percentage as kind of exogenous rather than dependent on the giving opportunities in the system.”

I mostly agree with this. The argument’s force/applicability is much weaker because of this. Indeed, if EAs are spending a higher/lower proportion of their assets at some point in the future, that’s prima facie evidence that the optimal allocation is higher/lower at that time.

(I do think a literal reading of the post is consistent with the optimal percentage varying endogenously but agree that it had an exogenous 'vibe' and that's important.)

“So the argument really feels like:
Maybe in the future the community will give to some places that are worse than this other place [=saving]. If you’re smarter than the aggregate community then it will be good if you control a larger slice of the resources so you can help to hedge against this mistake. This pushes towards earning.
I think if you don’t have reason to believe you’ll do better than the aggregate community then this shouldn’t get much weight; if you do have such reason then it’s legitimate to give it some weight. But this was already a legitimate argument before you thought about saving! It applies whenever there are multiple possible uses of capital and you worry that future people might make a mistake. I suppose whenever you think of a new possible use of capital it becomes a tiny bit stronger?”

I think this is a good point but a bit too strong, as I do think there’s more to the argument than just the above. I feel pretty uncertain whether the below holds together and would love to be corrected but I understood the post to be arguing something like:

i) For people whose assets are mostly financial, it’s pretty easy to push the portfolio toward the now/later distribution they think is best. If this was also true for labour and actors had no other constraints/incentives, then I’d expect the community’s allocation to reflect its aggregate beliefs about the optimum so pushing away from that would constitute a claim that you know better.

ii) But, actors making up a large proportion of total financial assets may have constraints other than maximising impact, which could lead the community to spend faster than the aggregate of the community thinks is correct:

  • Large donors usually want to donate before they die (and Open Phil’s donors have pledged to do so). (Of course, it’s arguable whether this should be modeled as such a constraint or as a claim about optimal timing).

Other holders of financial capital may not have enough resources to realistically make up for that.

iii) In an idealised ‘perfect marketplace’ holders of human capital would “invest” their labour to make up for this. But they also face constraints:

  • Global priorities research, movement/community building, and ‘meta’ can only usefully absorb a limited amount of labour.
  • Human capital can’t be saved after you die and loses value each year as you age.
  • [I’m less sure about this one and think it’s less important.] As career capital opportunities dry up when people age, it will become more and more personally costly for them to stay in career capital mode to ‘invest’ their labour. This might lead reasonable behaviour from a self-interested standpoint to diverge from what would create a theoretically optimal portfolio for the community.

This means that for the community to maintain the allocation it thinks is optimal, people may have to convert their labour into capital so that it can be ‘saved/invested.’ But most people don’t even know that this is an option (ETA: or at least it's not a salient one) and haven’t heard of earning to save. So pointing this out may empower the community to achieve its aggregate preferences, as opposed to being a way to undermine them.

“But at present I’m worried that this isn’t really a new argument and the post risks giving inappropriate prominence to the idea of earning to save (which I think could be quite toxic for the community for reasons you mention), even given your caveats.”

I agree this is a reasonable concern and I was a bit worried about it, too, since I think this is overall a small consideration in favor of earning to save, which I agree could be quite toxic. But I do think the post tries to caveat a lot and it overall seems good for there to be a forum where even minor considerations can be considered in a quick post, so I thought it was worth posting. (Fwiw, I think getting this reaction from you was valuable.)

I’m open to the possibility that this isn’t realistic, though. And something like “some considerations on earning to save” might have been a better title.

Thanks for the thoughtful reply!

On reflection I realise that in some sense the heart of my objection to the post was in vibe, and I think I was subconsciously trying to correct for this by leaning into the vibe (for my response) of "this seems wrong-footed".

“But I do think the post tries to caveat a lot and it overall seems good for there to be a forum where even minor considerations can be considered in a quick post, so I thought it was worth posting.”

I quite agree that it's good if even minor considerations can be considered in a quick post. I think the issue is that the tone of the post is kind of didactic, let-me-explain-all-these-things (and the title is "an argument for X", and the post begins "I used to think not-X"): combined these are projecting quite a sense of "X is solid", and while it's great that it had lots of explicit disclaimers about this just being one consideration etc., I don't think they really do the work of cancelling the tone for feeding into casual readers' gut impressions.

For an exaggerated contrast, imagine if the post read like:

A quick thought on earning-to-save

I've been wondering recently about whether earning-to-save could make sense. I'm still not sure what I think, but I did come across a perspective which could justify it.

[argument goes here]

What do people think? I haven't worked out how big a deal this seems compared to the considerations against earning to save (and some of them are pretty substantial), so it might still be a pretty bad idea overall.

I think that would have triggered approximately zero of my vibe concerns.

Alternatively I think it could have worked to have a didactic post on "Considerations around earning-to-save" that felt like it was trying to collect the important considerations (which I'm not sure have been well laid out anywhere, so there might not be a canonical sense of which arguments are "new") rather than particularly emphasise one consideration.

That's fair - I was aiming to write it in a crisp way to make it easier to engage with, but I agree I could have given the argument crisply with a better introduction.

“ii) But, actors making up a large proportion of total financial assets may have constraints other than maximising impact, which could lead the community to spend faster than the aggregate of the community thinks is correct:

  • Large donors usually want to donate before they die (and Open Phil’s donors have pledged to do so). (Of course, it’s arguable whether this should be modeled as such a constraint or as a claim about optimal timing).

Other holders of financial capital may not have enough resources to realistically make up for that.”

Thanks for pulling this out, I think this is the heart of the argument. (I think it's quite valuable to show how the case relies on this, as it helps to cancel a possible reading where everyone should assume that they personally will have better judgement than the aggregate community.)

I think it's an interesting case, and worth considering carefully. We might want to consider:

  1. Whether this will actually lead to incorrect spending
    • My central best guess is that there will be enough flow of other money into longtermist-aligned purposes that this won't be an issue in coming decades, but I'm quite uncertain about that.
  2. What the best options are for mitigating it
    • Earning to save is certainly one possibility, but we could also consider e.g. whether there are direct work opportunities which would have a significant effect of passing capital into the hands of future longtermists.

“e.g. whether there are direct work opportunities which would have a significant effect of passing capital into the hands of future longtermists”

Could you say more about what you might have in mind here?

Thanks for writing this up. However, I am confused about the mechanism.

In my head I think of there as being three options, all of which have diminishing returns:

  • Direct Work
    • Turning money into EA outcomes.
    • Diminishing returns due to low-hanging problems being solved, non-parallel workflows, and running out of money.
  • Earn to Give/Spend
    • Turning market work into Direct Work.
    • Diminishing returns due to running out of good people to employ.
  • Earn to Save
    • Turning market work now into Direct work later.
    • Diminishing returns due to running out of good people to employ in the future.

As each possibility has diminishing returns, there is an optimal ratio of Spending to Saving. But an exogenous increase in Spending volume doesn't increase the marginal returns of Saving, so it doesn't increase the attractiveness of Saving vs Direct. It does make Saving more attractive vs Spending, but both of those require basically the same skills (e.g. tech or finance skills), so the value of those skills is diminished.
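
To see why diminishing returns imply an optimal ratio, here is a minimal sketch that maximises total value across the three options; the square-root value curves and the weights are purely illustrative assumptions, not a model anyone in this thread proposed:

```python
# Illustrative only: with concave (diminishing) returns to Direct, Spend,
# and Save, total value peaks at a particular allocation, so an optimal
# ratio exists. Value curves and weights below are arbitrary assumptions.
import math

def total_value(spend: float, save: float) -> float:
    direct = 1.0 - spend - save           # fixed total budget of 1.0
    if direct < 0:
        return float("-inf")              # infeasible allocation
    # sqrt gives diminishing returns; the weights are illustrative only
    return 1.0 * math.sqrt(direct) + 0.8 * math.sqrt(spend) + 0.6 * math.sqrt(save)

# Brute-force grid search over feasible allocations
best = max(
    ((s / 100, v / 100) for s in range(101) for v in range(101 - s)),
    key=lambda alloc: total_value(*alloc),
)
print(f"optimal spend={best[0]:.2f}, save={best[1]:.2f}, direct={1 - sum(best):.2f}")
# -> spend=0.32, save=0.18, direct=0.50 (allocations scale with squared weights)
```

In a model like this, an exogenous bump to Spending moves the community along Spending's flattening curve without changing Saving's curve, which is the mechanism behind the relative-attractiveness point above.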

Separately, you might think of upcoming increases in Spending (OpenPhil, bequests, career advancement) as an artificially high level of Saving now. This would decrease the attractiveness of current Saving.

Hi Larks, I think that's a nice way of framing the issue, and you might be right. I think Howie's reply to Owen is also relevant.

Just wanted to throw up my previous exploration of a similar topic. (I think I had a fairly different motivation than you – namely I want young EAs to mostly focus on financial runway so they can do risky career moves once they're better oriented).

tl;dr – I think the actual Default Action for young EAs should not be giving 10%, but giving 1% (for self-signalling), and saving 10%. 

It's a good point that there could also be good cultural effects from encouraging people to save more, as well as the negatives I mention.

Though there's a bit of a tradeoff where putting the money into a DAF/trust might alleviate some of the negative effects Ben mentioned but also loses out on a lot of the benefits Raemon is going for.

Thanks for sharing your thoughts, I never thought about this question before and it challenges my intuitive outlook.

Larks mentioned a factor that seems central to me and that I don't know how to fit into your argument:

“Diminishing returns due to running out of good people to employ.”

My gut's perspective is this: by investing resources to employ/engage/convince smart people today, we are investing in "capacity building", and that's key for long-lasting impact. OPP's excellent "direct work" contributions to both long- and short-term cause areas will pay off immensely by drawing in more excellent people, further growing the number of smart minds and resources available to longtermist causes. So as long as there are a lot of EA- and longtermism-sympathetic smart minds out there, we should try to reach them through excellent public work that they would naturally want to be part of.

Yes, I agree that's an important consideration. Doing direct work also causes movement building, creating a bunch of extra value. (Some even argue that most movement building comes from direct work rather than explicit movement building efforts.) It doesn't seem like earning to save will be as good on this front, though I think that building up a big pot of money can also get people interested (though maybe for dubious reasons!).
