I have a friend who is casually engaged with EA. They went to https://funds.effectivealtruism.org/grants, browsed some of the grants, and asked me about them. I get the impression this was well-intentioned curiosity, but I was at a loss to explain the dollar amounts of some of the grants my friend pointed out to me, in light of the short "project description" provided.

The last thing I want to do in this post is call anyone out-- I'm sure the rationale behind these grants was sound, and there is relevant information missing from the provided descriptions-- but I was surprised to find that there are grants for university group organizing in the five and six figures, some of them not even for an entire year. I do think this is something that will (perhaps justifiably) raise eyebrows for the average person who is just learning about EA as a movement very focused on cost-effectiveness and hasn't yet internalized the expected-value calculations that probably went into these grants. But also, in a couple of cases, I personally am having a hard time imagining how these numbers make sense.

If you are reading this post and willing to comment-- could you (1) help me make sense of these grants for myself, and (2) provide any pointers on how to explain them to someone who isn't yet totally on board with EA? I don't want to point to specific grants, but concretely: what is the argument for a five- or six-figure grant for one semester of university organizing at a single school? I don't understand how so much money could be needed. As far as I'm aware, most organizers are volunteers (though maybe that is changing?). Happy to take this to a private conversation if that would be more appropriate.

Evie

I’ve been considering writing a post about my experience of receiving a grant, and the downsides I didn’t anticipate beforehand. It would probably look similar to this comment but with more info.

I imagine that such a post could be quite helpful for other young people who are considering applying for funding, and it could also be helpful for other people to understand more of this "ecosystem." I, for one, would be interested to read your story.

Thanks for sharing your experience. I'm sure I would have also felt shame and guilt if I were in your situation, though obviously this is not what we want to happen!

My general feeling about situations like this is that some grants are better off not being shared publicly, if the context allows for it (this depends on many complex social factors). Wealthy people spend money on all kinds of outlandish things all over the world, yet receive comparatively little opprobrium, simply because this spending is rarely public. It's unfair for you to be exposed to vitriol from regular people expressing their frustration with inequality.

I'm reluctant to say too much about your particular circumstance (given I don't have context, and this is quite a personal thing), but if it were me, I might look for ways to tactfully omit discussion of the grant when first getting to know non-EAs socially. Not because it *is* shameful, but because it may unconsciously make some people uncomfortable. If it does come up, I think there is a way to "check your privilege" while also expressing confidence that you did nothing wrong. Ironically, I've found that if I express contrition about something, people are more likely to think I actually did something shameful, whereas if I sound confident, they tend to come away with a positive impression. These aren't necessarily bad people; that's just how people are.

While socializing with EAs is wonderful, I agree that it is better to have a diverse social circle that includes non-EAs too!

This could be titled "The curse of non-consequentialist ethics plus social media means that there is no reasonable way to prioritize what matters, and the news contributes to that by essentially equalizing all crises under similar names, especially in headlines."

A bit of a sidestep, but there is also the new Longtermism Fund, for more legible longtermist donations that are probably easier to justify.

I think it's easy to miss the forest for the trees. Unless I've missed something:

  1. Before 2022, all EA outreach/infrastructure funding in total has cost <<$200M.
    1. It's likely <$100M, but it's hard to tell because some funding programs blur the lines between outreach/infra and direct work, e.g. paying for someone's PhD program.
    2. Notably, this is lower than Open Phil's spending on criminal justice reform.
  2. EA outreach funding has likely generated substantially >>$1B in value, and
  3. EA outreach is an area where we expect significant lags between spending and impact.
    1. For example, Sam Bankman-Fried graduated MIT 8 years ago[1].

Raising the salience of our moral obligations, and of our empirical worldviews about how to do good, among future billionaires and top researchers (or current ones) is by its very nature an extremely hits-based proposition.

If you're only looking at a budget very loosely, it seems silly to complain about hundreds of thousands of dollars of spending when billions of dollars of foregone opportunities are on the line.

Now, if you're looking at budgets in detail and investigating programs closely, I think it's reasonable to be skeptical of some types of spending (e.g. if people are overpaid, or eating overly fancy food, or not trying to save money on flights, or whatever). It's probably even more important to be skeptical of weak theories of change, or of poor operational execution.

  1. ^

    I think SBF donating to utilitarian/LT stuff is probably overdetermined, so not something we can credit EA outreach for. However, I do not think this is true for everyone who's extremely high impact. I think one of the strongest cruxes for the value of EA outreach is whether or not "our top people" would be drawn to EA without any active outreach. Current evidence suggests the outreach is pretty relevant.

"EA outreach funding has likely generated substantially >>$1B in value"

Would be curious how you came up with that number. 

It was a very quick lower bound. From the LT survey a few years ago, roughly ~50% of the influences on quality-adjusted work in longtermism were from EA sources (as opposed to individual interests, idiosyncratic non-EA influences, etc.), and of that slice, maybe half is due to things that look like EA outreach or infrastructure (as opposed to, e.g., people hammering away at object-level priorities getting noticed).

And then I think about whether I'd a) rather have all EAs except one disappear and have $4B more, or b) have $4B less but double the quality-adjusted number of people doing EA work. And I think the answer isn't very close.
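
To make the arithmetic explicit, here is a rough reconstruction of that lower bound as a Python sketch. The $4B willingness-to-pay figure and the two ~50% fractions are taken from the comments above; the simple multiplicative attribution model is an assumption:

```python
# Rough reconstruction of the lower-bound estimate above. All inputs are
# assumptions taken from the surrounding comments, not official figures.

value_of_ea_labor = 4e9    # implied by preferring the EA workforce over $4B
ea_influence_share = 0.5   # ~50% of influences on quality-adjusted LT work were EA sources (LT survey)
outreach_share = 0.5       # maybe half of that slice looks like outreach/infrastructure

outreach_value = value_of_ea_labor * ea_influence_share * outreach_share
print(f"Implied value of EA outreach: >=${outreach_value / 1e9:.1f}B")
# Implied value of EA outreach: >=$1.0B
```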

I'm not involved with EA Funds, but some university group organizers have taken a semester of leave in the past to do group organizing full time. If you assume their term is 14 weeks, that's 14*40 = 560 hours of work. At $20/hr, that's more than $10,000 (see the sketch below). And I think it is pretty reasonable to request more than $20/hr (various funding sources have previously offered something like $30/hr).
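
A minimal sketch of that estimate, assuming the 14-week term and the hourly rates mentioned above (illustrative figures, not actual grant policy):

```python
# Minimal sketch of the full-time-semester estimate above. The 14-week term
# and the hourly rates come from the comment; they are illustrative only.

weeks = 14
hours_per_week = 40
total_hours = weeks * hours_per_week  # 560 hours

for rate in (20, 30):  # $/hr figures mentioned above
    print(f"${rate}/hr x {total_hours} hours = ${rate * total_hours:,}")
# $20/hr x 560 hours = $11,200
# $30/hr x 560 hours = $16,800
```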

In general, nowadays, many group organizers are not volunteers and are paid for their part-time work (though if they are not full time, this shouldn't amount to five figures for one semester). I think this is a good thing, since many university students simply cannot afford to take a volunteer role with a commitment of 10+ hours per week, and I wouldn't want EA groups to be run only by people rich enough that that's feasible.

The numbers that I am confused about are in the high five figures and low six figures-- about an order of magnitude bigger than $10,000. I don't think assuming a rate of $30/hour helps me understand or explain these numbers. I brought up volunteering vs. paid work in the OP, and I think this was probably misleading-- sorry about that.

However, on that point:

I agree that we don't want EA groups to be run only by the financially privileged. But this concern needs to be balanced against the fact that EA in general, and EA university group organizing in particular, (probably) already selects for high-SES people, and there may be better ways of making participation in EA accessible to everyone. There is already some SES barrier to college students maneuvering themselves into a position to start receiving funding for this work, so you are getting a filtered sample by the time the money starts flowing. This is a difficult problem to solve, but I hope people are conscious of it.

Yeah I wasn't sure which grants you were referring to (haven't looked through them all), but indeed that doesn't seem to be explained by what I said.

I agree that EA already selects for high SES people and that offering funding for them to organize a group doesn't negate this problem. Other steps are also needed. However, I know quite a few anecdotal cases of group organizers being able to organize more than they otherwise would have because they were being paid, and so this policy does concretely make some difference.

If you PM me I'm happy to send you the list. Like I said in the post, I don't believe it would be productive to post it publicly.

I think this is a wise decision, and I disagree with those claiming that publicly criticising grant receipts is a good idea.

I think I want to take the side of "public criticism is good." I think past examples in this genre (e.g. Larks' AI Alignment Literature Review and Charity Comparison and Nuño's 2018-2019 LTFF Grantees: How did they do?) were substantially net positive.

Good question, and I think this is a healthy discussion. In general, money is a sensitive issue, and I would encourage all parties to show nuance; this includes (but is not limited to) when "judging" someone's salary, when asking for a salary, and when granting one.

Two steelmen for decent-sized grants: 1) Bounded loss, unbounded wins-- while theoretically salaries could be cut in half, impact could easily be 10-100x; i.e. the focus should be opportunity cost, not expenditure. 2) Many smart people in EA, including the people granting, may previously have been earning significant salaries as programmers/executives/consultants. You and I may see $80k USD as a lot of money, but it's pretty normal for developers in California to earn hundreds of thousands of USD. Therefore, expecting people to earn $50k a year may effectively be asking them to donate 75% of their income.

And two steelmen for keeping salaries low: 1) This is a movement about charity, helping others, and donating. We put a lot of effort and time into building healthy communities around these principles, built on a heavy basis of trust. It's important to feel like people are in it for the right reasons, and high salaries can jeopardise that. 2) It's pretty easy to justify a high salary with some of the above reasoning-- perhaps too easy. As a community builder myself, it seems totally plausible that we could attract people who are a poor fit for EA by being too relaxed around the money pedal.

For my own personal opinion, I think it's far too easy to ignore opportunity cost and concentrate on short-term expenditure and salary. However, I can very much imagine myself leaving the community if salaries became too inflated, and I am likely to feel less aligned with others who require large salaries (just being honest here). Looking at recently posted receipts, I don't see anything that catches my eye in a bad way, although it could be said to be unfair that some community builders will be working 3x harder on a volunteer basis than other community builders on a competitive salary. I think this partially reflects the incentives of the world we currently live in (i.e. a largely unaltruistic one).

Whilst I find the arguments for working hard and concentrating on impact, rather than on earning little, pretty compelling, it's worth pointing out that there's some fantastic work coming out of Charity Entrepreneurship charities (whose employees generally earn little), so it's not clear the tradeoff is always present.

Lastly, I would say it's likely that I've made tradeoffs with my own salary which have significantly negatively affected my social impact. I suspect this is easy to do, and would encourage people to avoid falling into this trap.

If you search the forum for the EAIF tag you can get some more details on past grants. I'm not sure whether this gives you quite what you're looking for.

https://forum.effectivealtruism.org/topics/effective-altruism-infrastructure-fund?sortedBy=magic

I haven't browsed the grants in much detail myself, but I would default to trying to explain EA's culture of thoroughness by reference to e.g. GiveWell's detailed evaluations of various charities, and say "this is more depth than most grants go into but it sets the tone of the sorts of things people tend to look for".

You could also point out common biases that the person might be falling for. One thing I would be inclined to explain in particular is bikeshedding (https://thedecisionlab.com/biases/bikeshedding)-- it's much easier to critique things we understand. The simplest-looking grants (like university group support) are the ones I'd imagine are particularly subject to bikeshedding.

Another thing I would be inclined to explain is the idea of continuing to invest (possibly exponentially) in things that work: e.g., if a grantee has shown they made good use of $10k in the past, maybe try giving them $100k and see if they can do 10x as much good, or close to it (a toy sketch follows below). A related bias is the absurdity heuristic (e.g. ruling out good ideas because "they seem kind of crazy").
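
A toy sketch of that scaling logic, under a purely hypothetical diminishing-returns curve (the curve, thresholds, and numbers are illustrative assumptions, not real grant data):

```python
# Toy sketch of "scale up what works": grow a grant geometrically as long as
# the grantee keeps converting money into impact reasonably efficiently.
# The curve and thresholds are hypothetical, not drawn from real grant data.

def impact_per_dollar(grant_size: float) -> float:
    # Hypothetical diminishing-returns curve; a real funder would measure this.
    return 1.0 / (1 + grant_size / 1e6)

grant = 10_000  # the $10k starting grant from the example above
baseline = impact_per_dollar(grant)

# Keep scaling 10x per round until efficiency falls below half the original.
while impact_per_dollar(grant) >= 0.5 * baseline:
    ratio = impact_per_dollar(grant) / baseline
    print(f"Grant ${grant:,}: {ratio:.0%} of baseline efficiency")
    grant *= 10
```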
