Context: I have given around $1000 to GiveWell this year, and I am considering giving around $1000 more. I'm a college student, so those amounts work out to around 10% to 20% of last year's income.

It seems like EA-aligned causes and professional grantmakers like the FTX Future Fund have more money than they know what to do with; the consensus on the Forum appears to be that EA is (for the moment, at least) no longer funding-constrained. If this is true, the case for direct work seems very strong, but the case for personal giving seems weak.

I don't expect to do better than, e.g., William MacAskill at selecting funding opportunities, and if he thinks that last-dollar funding is not cost-effective, why should I think otherwise? (I know that professional grantmakers think that last-dollar funding is not cost-effective because they aren't funding more projects, but aren't out of dollars.)

In other words, professionals who I expect to make better decisions than I can think that marginal EA-aligned funds are better saved than spent right now. Why not do the same with my personal giving? If the EA landscape doesn't return to being funding-constrained, why ever give more?


Even if EA spent all of its money, we still wouldn't have enough to bring everyone out of extreme poverty. The fact that I can still save a child's life with my donations really motivates me to help where I can.

That said, as a student, it might be worth saving until you've finished your studies, especially if you don't have a safety net to fall back on (like parents or a spouse).

The other commenters gave reasons why you might want to donate money on the margin. But my personal guess is that in most cases, college students in EA probably should not be donating amounts of money that are large to them, and should instead invest that money either in improving their own career capital or (if they don't have good social or governmental safety nets) in savings.

Almost all of your wealth as a college student is in human capital, so investing dollars in ways that let you either do direct work later or donate much more later is likely the best route to impact for your money.
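As a rough illustration of the trade-off, here's a minimal sketch; the return rate and time horizon are assumptions I picked for the example, not recommendations:

```python
# Toy comparison: donate $1000 now vs. invest it (in savings or career
# capital) and donate the compounded amount after graduation.
# All numbers are illustrative assumptions.
donation_now = 1000.0
annual_return = 0.07   # assumed blended return on savings/career capital
years = 5              # assumed time until a post-graduation donation

future_donation = donation_now * (1 + annual_return) ** years
print(f"Donate now:        ${donation_now:,.0f}")
print(f"Donate in {years} years: ${future_donation:,.0f}")
# Waiting only wins if good giving opportunities don't "deflate" (get
# funded by others) faster than your money compounds.
```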

To my eyes, honestly the strongest reason to donate money for most people in your position is something like "identity formation": you might consciously want to form an honest self-image as someone who cares about others and is willing to make large personal sacrifices to do so. I did this when I was younger, and I'm glad to have done so. But on the other hand, I value money now much less than I used to, and if I could send money back to my past self, I would gladly do so.

There have been a few posts discussing the value of small donations over the past year, notably:

  1. Benjamin Todd on "Despite billions of extra funding, small donors can still have a significant impact"
  2. a counterpoint, AppliedDivinityStudies on "A Red-Team Against the Impact of Small Donations"
  3. a counter-counterpoint, Michael Townsend on "The value of small donations from a longtermist perspective"

There's a lot of discussion here (especially if you go through the comments of each piece), and so plenty of room to come to different conclusions.

Here's roughly where I come out of this:

  • What's the relevant counterfactual? Many of these comment threads turn into discussions about earning-to-give vs direct work, but if you have $1000 in your hand, ready to donate, that's not the relevant question. Rather, you should ask, "if I don't donate this, what would I do with it instead, and how much impact would that have?"
  • You say "I know that professional grantmakers think that last-dollar funding is not cost-effective because they aren't funding more projects, but aren't out of dollars." I think this frames the issue incorrectly. It's not that big funders know that other projects aren't cost-effective; it's that they don't currently have enough projects that clear a certain cost-effectiveness bar. But crucially, that bar is still far above zero!

This means:

  • there are probably many opportunities that are just as cost-effective that they haven't found (potentially you have information they don't that you could exploit; see this section of the above ADS post)
  • marginal donations should have a cost-effectiveness at worst just below that bar, which means you're only doing a little worse than the big funders. (This point taken from Benjamin Todd here.)
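To make the "bar" framing concrete, here's a toy model; every cost-effectiveness number in it is made up for illustration:

```python
# Toy model of a big funder's cost-effectiveness bar.
# Values are hypothetical "impact per dollar" figures.
opportunities = {
    "A": 12.0,
    "B": 8.5,
    "C": 6.1,
    "D": 5.9,
    "E": 3.0,
}
bar = 6.0  # the big funder funds everything at or above this

funded = {p: v for p, v in opportunities.items() if v >= bar}
unfunded = {p: v for p, v in opportunities.items() if v < bar}
print("Funded by big donors:", funded)
print("Best remaining option for a small donor:", max(unfunded.values()))
# The best unfunded option (5.9) sits just below the bar: far above
# zero, and only slightly worse than what the big funders are buying.
```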

If you aren’t opposed to donating to political campaigns: some campaign finance laws restrict the amount of money that can go directly to campaigns on a per-person basis, so at least that seems like an area where “small” donors can still matter.

Agree with this point. Jeffrey Ladish wrote "US Citizens: Targeted political contributions are probably the best passive donation opportunities for mitigating existential risk".

He says: 

Recently, I’ve surprised myself by coming to believe that donating to candidates who support policies which reduce existential risks is probably the best passive donation opportunity for US citizens. The main reason I’ve changed my mind is that I think highly aligned political candidates have a lot of leverage to affect policies that could impact the long-term future and are uniquely benefited from individual donations.

If you're not a US citizen, you can volunteer for a campaign (that's legal!). 

My answer is that you should primarily be focused on saving, so that you have the financial freedom to pivot, change jobs, learn more, or found an organization. Previously, I recommended that new EAs (especially college students) give 1% and save at least 10%, so that they build at least some concrete altruistic habits while mostly focusing on building up slack.

I think this remains good practice in the current environment. (Giving 1% is a somewhat symbolic gift in the first place, and I think it's still a useful forcing function for thinking about which organizations are valuable to you.) But as long as you're concretely setting aside money and thinking about your future, I think that's a pretty good starting point.
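For concreteness, here's what that rule of thumb looks like with assumed numbers (the income and expense figures are hypothetical):

```python
# A minimal budgeting sketch of "give 1%, save at least 10%" for a
# hypothetical student earning $10,000/year over a 4-year degree.
income = 10_000.0                    # assumed annual income
give_rate, save_rate = 0.01, 0.10
years, monthly_expenses = 4, 800.0   # assumed

given_per_year = income * give_rate           # the habit-forming gift
saved_total = income * save_rate * years      # the slack
runway_months = saved_total / monthly_expenses
print(f"Annual giving: ${given_per_year:,.0f}")
print(f"Savings after {years} years: ${saved_total:,.0f} "
      f"(~{runway_months:.0f} months of runway)")
```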

With respect to the last dollar of funding: I think Open Philanthropy expects to spend its last dollar on something more cost-effective than GiveDirectly. So the last dollar of spending will still look good, and in the worst case, your spending now will move some future funding to something somewhat less effective but still pretty good.

Another potential advantage for an individual donor is identifying something that isn't currently receiving large amounts of funding but that you think is worth taking a bet on. That would give the initiative more time to demonstrate its value and to gather information on how well it's achieving its goals (or it could mean funding an individual to grow their skills, or something similar).

We've also seen Will write that the FTX Future Fund rejected 95% of their applicants, so it's not the case that there's a money firehose that everyone has access to. Plenty of people are, presumably, open to working on new projects given funding.

I know that professional grantmakers think that last-dollar funding is not cost-effective because they aren't funding more projects, but aren't out of dollars.

None of our big donors were intending to spend all of their funding before now. It's taken Open Phil years to grow their capacity and increase their giving in line with their standards of diligence. They intend to spend down their funds, I believe, within the lifetime of their funders.

Reframe the idea of "we are no longer funding-constrained" as "we are bottlenecked by people who can find new good things to spend money on". That means you should plausibly stop donating to funds that can't give out money fast enough, and instead spend money on orgs/people/causes that you personally estimate need more money now.

Are there any good public summaries of the collective wisdom fund managers have acquired over the years? If we're bottlenecked by people who can find new giving opportunities, it would be great to promote the related skills. And I want to read them.

Others will have better answers, but I have decided to keep donating some of my income to global health and development orgs (despite leaning pretty strongly longtermist) on the basis that:

a) non-longtermist orgs are, AFAIK, not totally funded; and

b) I won't miss it that much, and so it won't really impact any direct work I do

Even if some of the big near termist stuff is funded enough in the near future (e.g. LLIN distribution), seems like there could still be lots of cool unfunded opportunities (e.g. paying for mental health support for people in low to middle income countries).

More money in EA just means that it makes sense for us to have a lower bar for cost-effectiveness in our donations and spending.

It doesn’t actually change any moral obligations surrounding donations.

However, the lower cost-effectiveness bar makes it more likely than before that the most cost-effective donations could be to incubators like Charity Entrepreneurship (CE) and to new orgs like CE-incubated charities, since it's more likely than before that these orgs could meet our new, lower bar.

The lower cost-effectiveness bar also means that expected-value-maximising, hits-based giving based more on theory and less on evidence makes more sense than before, because it's more likely to clear that lower bar.
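As an illustrative expected-value comparison (every probability and impact figure below is an assumption, not an estimate of any real charity):

```python
# Hits-based bet: small chance of a big win.
p_success = 0.05            # assumed probability of success
impact_if_success = 1000.0  # assumed impact units per $1000 if it works
ev_hits = p_success * impact_if_success   # expected value = 50

ev_evidence_backed = 40.0   # assumed sure-thing impact per $1000

old_bar, new_bar = 45.0, 30.0  # the bar falls as more money flows in
for name, ev in [("hits-based", ev_hits),
                 ("evidence-backed", ev_evidence_backed)]:
    print(f"{name}: EV={ev:.0f}, clears old bar: {ev >= old_bar}, "
          f"clears new bar: {ev >= new_bar}")
# With the lower bar, both options clear it, and the higher-EV but
# riskier hits-based bet now looks comparatively more attractive.
```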

Even though some EA-aligned organizations have plenty of funding, not all EA organizations are that well funded. You should consider donating to the causes within EA that are most neglected, such as cause prioritization research. The Center for Reducing Suffering, for example, had received only £82,864.99 in total funding as of late 2021. The Qualia Research Institute is another EA-aligned organization that is funding-constrained and believes it could put significantly more funding to good use.


The Qualia Research Institute might be funding-constrained, but it's questionable whether it's doing good work; for example, see this comment about its Symmetry Theory of Valence.

Even if the Symmetry Theory of Valence turns out to be completely wrong, that doesn't mean QRI will fail to gain any useful insight into the inner mechanics of consciousness. Andrew Zuckerman previously sent me this comment on QRI's pathway to impact, in response to Nuño Sempere's criticisms of QRI. The value of QRI's research is therefore high-variance: it's possible that their research will amount to almost nothing, but it's also possible that it will turn out to have a large impact. As far as I know, no other EA-aligned organization is doing the sort of consciousness research that QRI is doing.
