
This is the second in a series of posts about effective giving in 2022, from One for the World and Giving What We Can. See the first post here. 

Epistemic status — I’m quite confident about my central claim that small donors motivated by longtermism can, in expectation, have a significant impact through their donations. I aimed to highlight any peripheral claims I am less confident about.  

Caveat — I wrote this in my capacity as a researcher at Giving What We Can; though I primarily aimed to articulate my own view, this post also conveys the organisation’s current best guesses regarding the value of longtermist donations. 

Introduction

Ben Todd recently estimated that there is approximately $46 billion USD committed to effective altruism,[1] but argued that, despite this, small donors can still have a significant impact.

This was understandably questioned: if small donors are only contributing a fraction of a fraction of the yet-to-be-disbursed funds given by large donors, doesn’t the relative value of earning to give significantly drop compared to other career paths? 

I don’t think these views are mutually exclusive. 

The value of charitable giving relative to certain career options may have changed, but this isn’t because we have all the funding we need. So while there may be extremely good opportunities for ambitious projects or careers that leverage the resources already available, I think there is still room for more funding to make a significant difference — even from (if not especially from) a longtermist perspective. More precisely:

  • From a neartermist perspective, donating to top GiveWell charities can save a life for $5,000 USD. It’s quite sad that saving a life is so cheap, but this level of cost effectiveness warrants being called a significant opportunity for impact.
  • If longtermists can outperform this, that also warrants their impact being called significant.
  • There are many available donation opportunities that, from a longtermist perspective, seem like they should outperform the $5,000 per life benchmark (and perhaps by a significant margin).

The above is consistent with the idea that most people who could do highly impactful direct work should do that instead of earning to give, even if they could have extremely lucrative careers. There’s no cap on how good something can be: despite how much good you can do through effective giving, it’s possible direct work remains even better. But in any case, I think that in general, effective giving is not in tension with pursuing direct work. And for many people, effective giving is the best opportunity to have an impact.

The aim of this post is to make the case that the amount of good longtermists can do through effective giving is significant (even if it is relatively less impactful).

I then discuss some implications of this, including:

  • How it affects Giving What We Can’s approach to promoting effective giving
  • How it helps frame just how high the stakes are on a longtermist worldview
  • Whether you should earn to give

The impact of effective giving from a neartermist point of view

An assumption I’m making here (which I anticipate many people involved with effective altruism to already be generally on board with) is that giving to effective neartermist charities can be cost effective on the order of ‘saving a life’[2] for something like $5,000 USD. That is, GiveWell’s cost effectiveness estimates are roughly right.[3] I respond to some concerns people may have about fungibility in the appendix. The analysis there is quite preliminary (I’m aiming to write something more fleshed out soon), but my current guess is that you can expect this level of cost effectiveness even when you take fungibility into account. I’ll use the Against Malaria Foundation (AMF) as the archetypal example in this post.

Going back to the number: $5,000. I think this is a figure that many people have become too used to. Compared to the value of a life, it’s a trivial cost. I firmly stand by all the introductory pitches to effective altruism I have given on this basis: it is an extremely sorry state of the world that renovating a family’s kitchen in one country often costs more than it would to save a family from losing a child in another. 

I think $5,000 to save a life qualifies as a very high level of cost effectiveness, giving donors an opportunity for ‘significant’ impact. Of course, for something to be very high, it has to be high relative to something. In this case, I think the fair comparison is the value of (marginal) consumption from individuals in wealthy countries: things like renovating kitchens, getting takeout more regularly, paying rent in a nice neighbourhood. In most cases, this is where the money would otherwise go. 

What I’m aiming to avoid here is comparing the value of donations to the value of a directly impactful career. This comparison is indeed extremely important when the two are trading off against each other — for example, if someone is deciding whether to earn to give or do direct work, or if someone doing direct work is considering spending on themselves to improve their marginal productivity. But most of the time this is not the comparison — in these cases, I think it’s important to look at the value of donations compared to the actual alternative. 

Going back to $5,000: It’s not a good thing that neartermist charities are so cost effective — it reflects a world with major problems that could be fixed at low cost, but aren’t. I don’t think the situation is better for longtermists.

Why longtermist donations should be at least as cost effective (from a longtermist point of view)

I think longtermists should expect to find donation opportunities with a cost effectiveness at least as high as $5,000 to save a life, and likely much higher. Roughly, this is because:

  1. Longtermists can donate to the best neartermist charities and have the same near-term effects. If they also believe the long-term effects are positive (in expectation), they have already matched the neartermist benchmark.
  2. There are other giving opportunities that seem robustly good from both neartermist and longtermist perspectives, such as climate change.
  3. There are likely even more cost effective donation opportunities from a longtermist perspective, though just how cost effective they are is hard to say.

In the appendix, I note that what I am saying is not particularly controversial among longtermists; it’s certainly not novel. If you are already on board, please feel free to skip to the implications.

Longtermists can donate to neartermist charities

To state the obvious, longtermism does not imply the near term doesn’t happen, so longtermists can have the same cost effectiveness in the near term as neartermists can. The difference is that, from a longtermist perspective, the most important moral consideration is what the long-term effects will be. 

Greaves discusses this in her paper “Cluelessness,” using AMF as the example. Greaves argues that whether donating to AMF is positive in expectation will ultimately be decided by the donor’s views on its long-term effects. This means that a donor who thinks AMF’s work has positive long-term effects can donate $4,500[4] and expect to save a life in the near term, in addition to (in expectation) even more significant long-term benefits. The cost effectiveness would then be significantly greater than $4,500 to save a life. 

But Greaves is concerned that this new cost effectiveness estimate is importantly different from the estimates from GiveWell:

  • The neartermist assumes most of the value of their donation comes from the life saved — something they can be very confident about, being supported by a wide and robust evidence base.
  • The longtermist sees most of the value lying in subjective, highly uncertain guesses about the value of extremely complex phenomena (such as about the long-run consequences of increasing the population).

In short, even though the longtermist expects AMF to be even better in expectation than the neartermist does, much of this value comes from guesses that might not be resilient to further reflection and research. 

Still, I think it’s a point worth making that if a longtermist thinks that any of GiveWell’s top charities have beneficial indirect long-term effects (and they buy GiveWell’s own estimates), then they already think that you can have a substantial impact through donations. On a personal level, I’m very motivated by longtermism and have a general prior that “good things are good” and so feel comfortable in my guess that the indirect long-term effects of interventions like AMF are positive, though some reasonable people could disagree. In any case, my argument here doesn’t depend on complex issues around moral cluelessness, given the availability of longtermist donation opportunities that I think escape these challenges. 

There are robustly good longtermist interventions that need funding

There are donation opportunities that seem good from both a near-term and long-term point of view: the main example I’ll use here is addressing climate change. Reducing carbon in the atmosphere can improve the lives of people over the following years and decades, while also having beneficial long-term effects. By accounting for both these near-term and long-term benefits, I think the best charities mitigating climate change reach the $5,000 per life threshold.

Founders Pledge’s research report estimated that the Clean Air Task Force (CATF) has historically had a cost effectiveness of approximately $5,600 to save a life. This 2018 estimate is retrospective, and therefore out of date, though there is some more recent research suggesting that climate change mitigation efforts might be justifiable on health grounds alone. Still, I suspect $5,600 per life overstates the near-term cost effectiveness of a marginal donation to CATF today. 

However, the $5,600 estimate does not account for the very long-term impacts of CATF’s work — and from a longtermist perspective, this is likely where the vast majority of the impact comes from. Once we account for these long-term effects (for example, decreasing the chance of extreme climate change, thereby lowering the likelihood of global conflict and potentially existential risk more broadly) I think it’s very likely CATF currently meets the cost effectiveness threshold of $5,000 to save a life. And even if there are concerns about CATF in particular, they are not the only charity working to address climate change.[5]

I imagine some readers are still not convinced by my claim that there are longtermist donation opportunities with a cost effectiveness at least as high as $5,000. They might:

  • Doubt my analysis of CATF.
  • And think that no charity addressing climate change would be sufficiently cost effective.
  • And believe that none of GiveWell’s charities have beneficial long-term effects.

These seem to me like quite uncorrelated views — with varying plausibility — about the specifics of particular charities and causes. And there are other charities that are potentially cost-competitive from a near-term point of view, and seem very likely to have positive long-term effects. For example, Ben Todd highlights CEPI — which works to produce vaccines to fight the next pandemic — as a potentially promising and scalable giving opportunity (among others). 

So while there is often disagreement about whether any specific charity meets the $5,000 to save a life threshold (when taking into account long-term effects), I think the claim that there is at least one such opportunity is more robust. And in general, I think we should expect to find promising donation opportunities with room for more funding. Even though $48 billion is a lot of money, it’s far from being enough to solve the world’s problems.[6]

There are likely even more promising donation opportunities for longtermists

So far, I’ve been arguing from a very defensive stance, trying to highlight the most straightforward ways longtermists can expect their donations to be as cost effective as $5,000 to save a life. I think there are other opportunities that are significantly more compelling from a longtermist point of view, although they are somewhat less legible:

  • You could donate to the Long-Term Future Fund. It’s difficult to get an estimate for the cost effectiveness of this, but it seems likely to be the best publicly available donation opportunity to reduce existential risk (especially from misaligned AI, which is very plausibly the world’s most important problem).[7]
  • You could donate to Founders Pledge’s Patient Philanthropy Fund. This is especially promising if you are unsure about whether there are sufficiently compelling opportunities now, but believe there might be more effective opportunities in the future. For example, it seems likely to me that in the future there will be megaprojects in the longtermist space that will be able to absorb a lot of funding.
  • You could donate to organisations working to reduce existential risk that have been evaluated and supported by organisations like Open Philanthropy, such as the Nuclear Threat Initiative’s biosecurity work. Though there is some reasonable concern about fungibility, my current guess is that you are still likely to (perhaps significantly) outperform $5,000 to save a life even after fungibility is accounted for.[8]
  • You could donate to opportunities unique to small donors. Some of these are listed in Ben Todd’s post and include supporting organisations that can’t accept large donations, or helping people you know improve their capacity to contribute to top causes.

The conclusion I’m arguing for here does not seem very controversial among highly engaged longtermists. Predicting the long-term future is extremely difficult, so the claim that any specific charity will be good for the long-term future is fraught with uncertainty; even so, there are many donation opportunities that — though uncertain — are extremely impactful in expectation. In some sense, the fact that these opportunities exist is what constitutes (at least the strong version of) longtermism. I discuss this more in the appendix.

What are the implications of this?

A reminder of the claim: longtermist donors can find donation opportunities that are at least as cost effective as $5,000 to save a life, and likely much more cost effective than that. 

I see several implications of this.

We should continue to promote effective giving

We plan on discussing why Giving What We Can (GWWC) is excited to promote effective giving in an upcoming post. I’ll leave the positive case for promoting effective giving for that post. For now, I’ll discuss one potential concern I’ve heard from several people. 

The concern (which assumes longtermism) is that if people are introduced to effective altruism through effective giving, they might end up retrospectively feeling tricked. The idea is that people might initially buy into effective altruism by being convinced that they can have an incredible impact through donations. If they later come to the view (quite widely held among longtermist EAs) that their donations were trivial (implicitly, ‘trivial relative to direct work’), they might feel deflated and disillusioned. They might think that they can’t have the impact they’d hoped for, or were promised.

I think this concern has some merit, but can be addressed.

First, at GWWC, we honestly believe that small longtermist donations provide an opportunity for significant impact from a longtermist perspective, at least compared to the personal consumption they would otherwise fund. That’s the claim of this post, and part of my motivation for writing it is to seek feedback on whether the view is right — we don’t want to be deceptive. If it is right, the individual can have the impact they were promised — it’s just that (at least from a longtermist point of view) they might be able to have an even more extraordinary impact by working directly on the world’s most important causes. 

Second, at GWWC we want to highlight when direct work might be especially impactful, because we think a significant part of our impact can come from increasing the number of people working directly on the world’s most pressing problems. This means we often promote 80,000 Hours’ work, and more broadly encourage people to get involved in effective altruism. This is an important part of why we’re excited to promote effective giving, and something we’ll discuss in our upcoming update. 

Third, for many people, donating is their best opportunity for impact — so we want to put forward the best possible case for it. Though some people might be able to do even more good through direct work, we think it’s important for the health of the effective altruism community to celebrate people for doing the best they can in their mission to improve the world. We agree with this comment from Michael Plant: it seems like a sad outcome if someone donating 10% of their income comes to an EA event and feels like they’re being looked down upon because they don’t have an “EA job.” And I don’t think this is just a hypothetical example; I think the meme that direct work is overwhelmingly impactful has caused some harm, and I’m a personal example of someone who propagated it.

Effective giving can be a neglected way to get involved in effective altruism

Before I worked at GWWC, I was heavily involved in community building. During that time, even though effective giving was something I actively did (and was proud of), I pretty deeply internalised the idea that it was ultimately not very significant compared to having an effective career (something I desperately wanted). One consequence of this is that I think I often did a very poor job of introducing people to effective altruism, and probably put some people off it.

A regular part of community building is giving introductory pitches about effective altruism followed by a conversation about how people can get involved. A cartoon example of my conversations looked like this:

Me: “You can do an extraordinary amount of good if you try to be effective about it!”

Them: “Cool! How?”

Me: “Well, the biggest way you can have an impact is through a directly impactful career.”

Them: “Great! I’m studying architecture, and my friend here is a pharmacist. What should we do?”

I would then proceed to awkwardly introduce 80,000 Hours, and even more awkwardly try to radically shift their career plans. I don’t think I ever succeeded.[9] I'd wager my experience isn't that uncommon — I suspect that for a fair number of people, their first (and perhaps last) exposure to effective altruism is someone trying to convince them to switch to an EA career.

Obviously, career plan changes do happen — and when they do, they’re really impactful. But my guess is that I would have had greater success in engaging new community members if I had said something more like:

Me: “There are a bunch of ways you can help! The most straightforward way is through charity — which can be extremely impactful. You can also look at 80,000 Hours’ career advice if you’re interested in having an impact with your career, which in some cases is the best way to help.”

But it’s important that this isn’t deceptive: you can have an extraordinary impact through donating to effective charities. 

Putting into perspective how much is at stake

Another consequence of effective giving being impactful is that it provides a way of framing just how much value a longtermist worldview implies is at stake, and just how much good the effective altruism movement aims to do. 

Consider the following claims:

  • Around $50 billion USD is committed to effective altruism.
  • Right now you can expect a donation to save a life for $5,000 (or potentially less) on the margin.

Perhaps it’d be naive to divide these numbers, but I think it’s likely that doing so would underestimate the amount at stake. 
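To make that naive division explicit (this is only an illustration using the rough figures above, not an estimate of what will actually be achieved):

$$\frac{\$50 \text{ billion}}{\$5{,}000 \text{ per life}} = 10 \text{ million lives (in expectation)}$$

And if longtermist opportunities are even more cost effective than the $5,000 benchmark, as argued above, the amount at stake is higher still.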

Further, it puts into perspective the strength of the claim that some direct work is orders of magnitude more impactful than donating. For example, Ben Todd suggests that he values people in important longtermist leadership roles at $400,000–$4 million per year. This is not because he has a particularly bleak outlook on the available cost effective donation opportunities (he estimates donating to longtermist charities can do at least 10 times more good than donating to the best neartermist charities) — it’s because he thinks some of these roles have truly outstanding potential impact. 

So, should I earn to give?

The argument I’m trying to make in this post is that longtermists still have the opportunity for significant impact through small donations. I am not making the point that people have undervalued earning to give as a career path. 

I think these points are sometimes conflated, and I suspect that this had something to do with the pushback against Ben’s post.[10]

It’s possible for donations to be impactful, but for direct work to be much more impactful. I discuss this more in the appendix, highlighting some reasons why I think more funding within effective altruism might be underrated, but it is in the appendix because it’s not my central argument. And in fact, my personal all-things-considered view is pretty similar to Ben’s: when someone has a good personal fit for high-impact direct work, they’re likely to have more impact pursuing that than earning to give. This view is also shared by Giving What We Can leadership. 

What next?

If I’m right, longtermist giving can be significantly impactful, and we should not carelessly dismiss it. 

You may think it can be impactful, but that there are much better opportunities we should prioritise instead. If so, first question whether the two are actually in tension. If they are, say that, rather than something like, “longtermism isn’t funding constrained.”

In response to my friend Nuño Sempere’s comment that Ben’s post “is kind of too ‘feel good’ for my tastes,” I have aimed to make two claims:

  1. Being “feel good” doesn’t make it wrong — small donors can have a significant impact.
  2. I hope people do feel good when they donate in highly impactful ways, but there’s nothing “feel good” about the fact you can make such a huge difference with relatively small donations — it reflects a bad state of the world.

Lastly, a significant motivation behind writing this was to get feedback from the community. Giving What We Can has been fairly silent for the last few years; we want that to change. Comments on the overall claims (and the reasoning supporting them) would be much appreciated. 

And if you’re a researcher interested in helping ensure Giving What We Can’s research content is clearly presented and has solid epistemics, please reach out to me at michael (dot) townsend (at) givingwhatwecan (dot) org. We’re interested in more volunteer reviewers, but we also have some paid ($40 USD an hour) reviewing roles we’re trialling for and are seeking expressions of interest for a senior researcher.

 

Appendix 

The appendix adds discussion of: 

  • Neartermist fungibility (tentatively, as I will be working on a more fleshed-out piece on fungibility soon).
  • Why more funding from diverse sources could be especially valuable.
  • Why I don’t think my central claim is controversial in the longtermist community.

Initial thoughts on GiveWell and fungibility

There are some reasonable concerns that donations to GiveWell’s charities are, in effect, funged by Open Philanthropy.

First, what happened: GiveWell recently rolled over $110 million USD in funds due to a significant increase in Open Philanthropy’s grants to GiveWell charities. If we assume that Open Philanthropy’s increased grants were somewhat sensitive to the amount of money the GiveWell charities had received up to the point of their grant, this means that donors who gave to GiveWell’s charities (excluding GiveDirectly) in 2021 will only realise their impact when the rollover funds have been spent down. 

In fact, this is exactly what GiveWell says in its Rollover Funding FAQ.[11] GiveWell is confident that they can find even more promising opportunities, with similar cost effectiveness to their current top charities. If this is right, then those funds will be spent down in a cost effective way; even though the donations were “funged,” they will still have the promised cost effectiveness. 

There is a concern that when these rollover funds are spent down, Open Philanthropy will simply make another grant — and the problem repeats. But I don’t think this is what is happening: Open Philanthropy has clarified that its policy is to give approximately $500 million per year in 2022 and 2023, and that this isn’t sensitive to “modest fluctuations” in the funding GiveWell expects to receive from small donors, nor the cost effectiveness estimates from GiveWell. 

And even if it were sensitive to these fluctuations — at least on a theoretical level — one should expect that this wouldn’t lower cost effectiveness. I’ll note I’m less sure about this point, and wish to explore it more in a future post. The idea is that Open Philanthropy’s Global Health and Wellbeing work aims to “equalize marginal cost effectiveness across portfolios and across time.” So to the extent that your donation has less impact due to funging effects caused by Open Philanthropy, it’d be due to a mistake on their end — that is, they could have done more good if they had not lowered their grant in response to your donations (which is where the funging effect came from).

A final point is that individuals likely can avoid having their donations funged by Open Philanthropy. GiveWell guesses that if you donate directly to one of GiveWell’s top charities (as opposed to the Maximum Impact Fund) in 2022, your money would (effectively[12]) be spent before Open Philanthropy could respond by changing the size of their grant. 

Effective giving from diverse sources might be underrated by the community

There are some reasons that adding more funds from diverse sources could be underrated. 

  • EA currently has a lot of money, but it may not always.
    • FTX is highly uncertain — it’s a huge bet on crypto, and my understanding is the vast majority of the money is not liquid.
    • Not even Open Philanthropy is certain (consider the recent crash in Facebook stock).
  • For many organisations, being supported by a broad donor base is quite valuable.
    • Large donors can funge money, but they can’t funge the number of individuals supporting an organisation.
    • For example, GWWC recently received funding from Open Philanthropy, but both organisations agreed to limit the maximum percentage of our funding coming from a single donor.
    • This topic was systematically explored in “The Funding Landscape of EA Meta Organizations”, a report which comes to similar conclusions.
  • The funding needs of particular causes may change.
    • If we suddenly find ourselves in a global pandemic, or some other disaster, every dollar might matter.
    • There may be similar cases with AI, wars between great powers, etc.
  • Megaprojects may end up being prominent, successful, and expensive. That’s sort of the point of them.
  • We may significantly improve our longtermist grantmaking, discovering neglected opportunities that aren’t currently available.

Of course, you can come up with a list of reasons why you should hit yourself in the head with a baseball bat, but it doesn’t mean you should. I don’t share this to suggest that effective giving is underrated relative to direct work; I share it because I think these are relevant and sometimes neglected points I’d like to add to the discussion. For a fantastic discussion on this, I recommend reading “We need more nuance regarding funding gaps”.

Is the idea that longtermist donors can outperform the best neartermist charities controversial?

At least within the longtermist community, I don’t think my claim is very controversial. In some sense, the claim I’m arguing for is what constitutes (at least the strong version of) longtermism. 

Greaves and MacAskill’s “The Case for Strong Longtermism” is the most referenced academic paper introducing the idea of longtermism. In it, they define strong longtermism as: 

Axiological strong longtermism (ASL): In the most important decision situations facing agents today, 

(i) every option that is near-best overall is near-best for the far future, and

(ii) every option that is near-best overall delivers much larger benefits in the far future than in the near future.

In this sense, longtermism is a claim about the world. Where an individual chooses to donate is considered as (at least a component of) the “most important decision situations facing agents today.” Similar to this post, Greaves and MacAskill use antimalarial bednet distribution as an “approximate upper bound on attainable near-future benefits per unit of spending.” They proceed to argue that a longtermist donor can outperform the distribution of bednets in terms of cost effectiveness — and, given how they structure their argument, this is not merely an example offered in favour of axiological strong longtermism (ASL), but a key claim that constitutes ASL. In some (unconvincing) sense, what I’m saying is true by definition.

But there are obviously limits to this argument. There are many ways you could reformulate longtermism such that it retains a significant amount of force, but doesn’t consider individual philanthropy as among “the most important decisions facing agents today.” And perhaps more importantly, saying that something is “true by definition” in this way isn’t particularly convincing. But I think we neither need to nor should give up Greaves and MacAskill’s definition of longtermism, because when you do take a look at actually available donation opportunities, there are plenty of sufficiently compelling options. 

The idea that there are impactful ways to use funding also seems shared by the EA community in general. Linch recently asked how much we should be willing to spend to reduce existential risk by 0.01%, and the answers (including from several longtermist grantmakers) implied that there are likely opportunities in the range of $10 million USD to several billion dollars. This should be interpreted very lightly, as merely a gauge of the “gut feelings” of a handful of people in the community. But, just to show what this would imply: if we took $5 billion USD as the conservative guess on what it would cost to reduce existential risk by 0.01%, and took Greaves and MacAskill’s conservative guess of the number of people’s lives at stake (10^14), that would turn out to be equivalent to 50 cents to save a life. As they say: big if true. 
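Spelling out that back-of-the-envelope calculation, using only the figures above (so treat it as an illustration of those gut-feeling numbers, nothing more):

$$0.01\% \times 10^{14} \text{ lives} = 10^{10} \text{ lives in expectation}, \qquad \frac{\$5 \text{ billion}}{10^{10} \text{ lives}} = \$0.50 \text{ per life}$$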

 

My thanks to Luke Freeman, Julian Hazell, Katy Moore, Nuño Sempere, Jonas Vollmer, Jack Lewars, Grace Adams and Max Daniel for their helpful comments on this post. 



 

  1. ^

     This number may have increased somewhat since Ben Todd wrote his post. FTX was more recently valued at $32 billion USD, up from the $18 billion at the time Ben wrote his post. On the other hand, Forbes estimates Dustin Moskovitz’s net worth is now $14.7 billion USD, down from the $25 billion estimate Ben used. 

  2. ^

     I am not using the term literally: ‘save a life for $5,000’ is shorthand for ‘do the equivalent amount of good in expectation as saving a life for $5,000.’

  3. ^

     I’m aware of some recent challenges to their methodology, but I was convinced by the responses to it, and have generally been very impressed by the quality of GiveWell’s work.

  4. ^

     At the time of this post, GiveWell estimates that a donation to AMF can save a life for $4,500.

  5. ^

     One reviewer commented that supporting CATF could, in theory, have indirect effects that affect other causes that end up mattering much more for the total value of the long-term future. I agree that the specifics of the intervention on climate change matter a lot, and some of these interventions plausibly could end up doing harm because of their indirect effects. But I don’t think these considerations are always going to be a dominant factor. To start with, the indirect effects of the interventions could well support other causes (I don't see any systematic reason to expect all climate interventions to have negative indirect effects overall). Another point is that, if the indirect negative effects are remote enough, then they could end up being non-trivial but still not dominant (e.g., increasing the chance of an engineered pathogen by such a small probability that it doesn't outweigh the benefit of the direct effects).

  6. ^

     Ben Todd makes much the same point in this comment: 

    “In short, world GDP is $80 trillion. The interest on EA funds is perhaps $2.5bn per year, so that’s the sustainable amount of EA spending per year. This is about 0.003% of GDP. It would be surprising if that were enough to do all the effective things to help others.”

  7. ^

     This guess is largely based on my subjective impressions, heavily informed by Larks’s AI Alignment Literature Review. Larks strongly recommends people come to their own conclusion, rather than deferring to him. 

  8. ^

     I plan to discuss this more in a future post, and I’m not as confident about this point. Right now, I don’t have more to add than Ben Todd’s discussion of fungibility in this comment.

  9. ^

     The career changes I (would at least like to) consider myself as being part of, though, usually occurred after someone was already pretty engaged with effective altruist ideas. It was not a starting point. This might seem obvious (I think it is obvious, in retrospect), and I’m not trying to avoid responsibility for my failings as a community builder, but I do think my case is illustrative.

  10. ^

     Though AppliedDivinityStudies explicitly acknowledged that Ben likely agreed with the central claim of his Red-Team post: that people should not prioritise earning to give. 

  11. ^

     See this quote in particular: 

    “The true impact of those donations will be realized once we are able to spend all funding available on extremely cost effective opportunities.”

  12. ^

     I say “effectively” because the money wouldn’t literally be spent — your donation might take years until it affects the beneficiary — but it would effectively be spent, because by the time Open Philanthropy made its next grants to GiveWell charities, the GiveWell charity would have already made plans on how to spend your money, and those plans would not be affected by further funding. 

Comments (15)

Some of the comments here are suggesting that there is in fact tension between promoting donations and direct work. The implication seems to be that while donations are highly effective in absolute terms, we should intentionally downplay this fact for fear that too many people might 'settle' for earning to give.

Personally, I would much rather employ honest messaging and allow people to assess the tradeoffs for their individual situation. I also think it's important to bear in mind that downplaying cuts both ways—as Michael points out, the meme that direct work is overwhelmingly effective has done harm.

There may be some who 'settle' for earning to give when direct work could have been more impactful, and there may be some who take away that donations are trivial and do neither. Obviously I would expect the former to be hugely overrepresented on the EA Forum.

Thanks for the post! I broadly agree with the arguments you give, though I think you understate the tensions between promoting earning to give vs direct work.

Personal example: I'm currently doing AI Safety work, and I expect it to be fairly impactful. But I came fairly close to going into finance as it was a safe, stable path I was confident I'd enjoy. And part of this motivation was a fuzzy feeling that donating was still somewhat good. And this made it harder to internalise just how much higher the value from direct work was. Anecdotally, a lot of smart mathematicians I know are tempted by finance and have a similar problem. And in cases like this, I think that promoting longtermist donations is actively in tension with high impact career advice.

Thanks Neel for sharing your personal experience! I can see how this would be a concern with promoting earning to give too heavily.

However, Michael's post isn't advocating for promoting earning to give; it's about promoting effective giving. This is a really important distinction. GWWC is focused on promoting effective giving more broadly to the wider public, not on promoting earning to give as a career path.

Promoting effective giving outside the EA community helps fund important work, provides many people with a strong opportunity to have a big impact, and also brings people into the EA community.

A good post, Michael!

Something I feel confused about is, does the Long-Term Future Fund have room for more funding for AI safety, or do it and Open Philanthropy already have enough money to fund all the AI safety things they think are good? What's an example of something that it might fund in AI safety that it isn't currently because it doesn't have enough money, or something that an AI safety org would want to do if only it had more money? Are there people that they're not hiring that they would if they had more funding?

E.g., it seems to me that salaries at CHAI could be somewhat higher to be more competitive with, say, a software engineering internship in industry – indeed, salaries at the Alignment Research Center are quite high, probably so that they can attract the best candidates. But it's also possible that the organizations have reason to keep salaries lower than they can afford – e.g., the meta org Lightcone Infrastructure chooses to pay 30% below market rate for various reasons – or that high salaries have detrimental effects on the community. So I'm confused about whether there is a funding gap here.

Nitpick: universities (including Berkeley) have strict payscales and usually refuse to allow direct raises even when funding is completely secured. But this is a nitpick, because there are ways around this.
