The Centre for Effective Altruism has distributed its first round of grants through its new Effective Altruism Grants program. The aim of the project is to relieve funding constraints on high-impact projects. You can read more about our motivation and aims.
This post details (1) the grants we’ve made, (2) our assumptions, (3) the grant methodology, (4) cost-benefit considerations, (5) mistakes, (6) difficulties, (7) project changes, and (8) our plans for EA Grants going forward.
Grants
We are sharing information about our grants this round to give people a better sense of what kinds of projects we look for, should we run EA Grants rounds in the future. You can see the grants we made.
We have allocated £369,924 for distribution, withholding the remainder of the allotted £500,000 to further fund some of the current recipients, contingent on performance.
We also facilitated the funding of grants by the Open Philanthropy Project and a couple of private donors.
Assumptions
We made many implicit assumptions in deciding whether and how to run EA Grants. A few of the major ones include:
Many good projects are hamstrung by small funding gaps.
We believe some high-value projects have unmet funding needs. The individuals and small organizations we decided to fund are generally too small to get on the radar of foundations like the Open Philanthropy Project, and small donors rarely have the time or expertise to evaluate many small projects. But we believe there are high returns to funding them.
Value alignment is useful for maintaining project relevance.
Grantees inevitably operate with some autonomy, so to be comfortable with this arrangement we placed particular emphasis on evaluating value alignment, altruistic motivation, and judgment. Value alignment was particularly important, even more than an ostensibly well-defined project. All else equal, we preferred projects by applicants with a track record of doing this or other projects well on a voluntary or selflessly motivated basis. (One exception to this autonomy is that we must stipulate that funding is not used for certain activities that don’t fit within our charitable objects.)
At this funding level, a hefty application process would be more costly than useful.
Many grantmaking processes require multi-page proposals. Since our grants were both smaller and more speculative than many of the grants foundations distribute, applications of that length felt unnecessarily costly, both for the applicants and for us as evaluators. This had costs: projects that are hard to describe briefly suffered from insufficient space to make their cases. We tried to get the best of both worlds by requesting additional information where we found applications hard to assess from the short submission alone. We leave open the possibility of longer proposals should we run subsequent rounds.
Grant methodology
The grant application process had three rounds, and is best described as a process-based approach.
First round
In the first round, the three grants associates eliminated applications that clearly would not meet our selection criteria. We received 722 applications and desk-rejected 413 of them, about 57% of applicants.
Second round
The second round involved assessing the remaining applicants on their track record, values, and plans. This assessment adhered to a rubric, weighting each category in accordance with its predictive power for project success. The scores were combined into one weighted score per applicant, which we used to rank the remaining applicants.
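As a rough illustration of this scoring step, here is a minimal sketch in Python. The category names, weights, and scores are hypothetical, since this post does not publish the actual rubric.

```python
# A minimal sketch of the weighted rubric scoring described above.
# The weights here are made up; the real rubric weighted each category
# by its predictive power for project success.
WEIGHTS = {"track_record": 0.45, "values": 0.35, "plans": 0.20}

def weighted_score(scores):
    """Combine per-category scores (0-10) into one weighted score."""
    return sum(WEIGHTS[category] * score for category, score in scores.items())

def rank_applicants(applicants):
    """Return applicant names sorted by weighted score, highest first."""
    return sorted(applicants,
                  key=lambda name: weighted_score(applicants[name]),
                  reverse=True)

# Example with made-up applicants and scores.
applicants = {
    "Applicant A": {"track_record": 8, "values": 9, "plans": 6},
    "Applicant B": {"track_record": 6, "values": 7, "plans": 9},
}
print(rank_applicants(applicants))  # ['Applicant A', 'Applicant B']
```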
We then went through the list by rank and chose applicants to interview, discussing applicants about whom there was large divergence in scores or general opinion. Given our £500,000 budget and most of three staff members’ time for two weeks, we decided to interview 63 candidates.
Third round
Most candidates had three 10-minute interviews, which we used to further assess their achievements, values, and plans. Candidates we knew well received only one interview. For candidates with skillsets we couldn’t evaluate internally, we arranged a fourth interview with a relevant technical expert. We then used the data from these interviews, as well as any additional information requested from references and/or the applicants themselves, to adjust their written application scores. While each interviewer could modify scores in all three categories, each interviewer had a category of focus, and their assessments in that category received the most weight.
Finally, we went through the new rank-ordered list and decided whom to fund and how much. We initially assigned grant values to candidates in rank order until we had exhausted the funding pool, then adjusted amounts to fit the particular circumstances of the grantees. These adjustments reflected considerations such as our credence in the score given, the counterfactuals of funding each candidate, the potential risks associated with the candidate and/or their proposal, and what candidates could do with money on the margin. We passed promising candidates who did not fit our charitable objects, or who requested money beyond the scope of our funding capacity, on to some private donors associated with CEA and/or the relevant program officer at the Open Philanthropy Project. A sketch of the first pass of this allocation appears below.
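Here is a minimal sketch of that first pass, under stated assumptions: candidates are taken in rank order and assigned their requested amount while the pool lasts. The post does not say whether candidates whose requests exceeded the remaining pool were skipped or partially funded; this sketch skips them, and all names and amounts are hypothetical. The case-by-case adjustments described above would then follow.

```python
# First pass of the funding decision: walk the rank-ordered list and
# grant each candidate's requested amount until the pool is exhausted.
# Skipping over-budget requests is an assumption, not a documented rule.
def allocate(ranked_requests, pool):
    """ranked_requests: list of (candidate, requested_amount) in rank order."""
    grants = {}
    for candidate, requested in ranked_requests:
        if requested <= pool:
            grants[candidate] = requested
            pool -= requested
    return grants

# Example with hypothetical candidates and a small pool.
ranked = [("A", 40_000), ("B", 30_000), ("C", 50_000), ("D", 20_000)]
print(allocate(ranked, pool=100_000))  # {'A': 40000, 'B': 30000, 'D': 20000}
```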
Through this process we selected 22 candidates to fund, partially or in full, and passed another 11 on to the Open Philanthropy Project.
Cost-benefit considerations
An important consideration in our thinking is whether the costs of running EA Grants exceed its benefits. Since the counterfactual is likely a future grant made by the Open Philanthropy Project, one angle for evaluating EA Grants is to compare its costs and benefits against the distribution(s) Open Phil might have made otherwise. CEA distributed £600 per hour worked by the grants team, whereas we estimate Open Phil distributes ~£20,000 per hour. However, we think a comparison made in this way has limitations.
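As a rough check, both per-hour figures can be reproduced from numbers given in the Costs section below (the full £500,000 pool, ~740 staff hours so far plus ~100 expected, and the ~25-hour estimate for Open Phil to make one grant of this size); a quick sketch in Python:

```python
# Reproducing the per-hour figures from numbers elsewhere in this post.
pool = 500_000          # full allotted pool, in GBP
cea_hours = 740 + 840 - 840 + 100  # 740 hours so far plus ~100 expected

cea_hours = 740 + 100
open_phil_hours = 25    # rough estimate for one grant of this size

print(round(pool / cea_hours))        # 595, i.e. roughly £600 per hour
print(round(pool / open_phil_hours))  # 20000, i.e. ~£20,000 per hour
```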
Costs
The costs are the £500,000 disseminated, plus ~740 CEA staff hours so far. We expect to spend another 100 hours on activities related to this round of grantees, mostly arranging mentors and ensuring financial regulatory compliance. There have also been costs to other EA organizations, mostly the Open Philanthropy Project, which has decided to evaluate and fund some of the grantees who went through the application process.
An Open Phil staff member made a rough guess that it takes them 13-75 hours per grant distributed. Their average grant size is quite a bit larger than ours, so it seems reasonable to assume it would take them about 25 hours to distribute a pot the size of EA Grants as a single grant. This, of course, ignores the time costs of institution-building. Much of the time we spent in this funding round went into building the internal grants infrastructure and relationships with other funders. Should we run this project again, we expect to be able to run a similar grants process in a fraction of the time.
This ratio therefore has limited meaning, most notably because it ignores that Open Phil found this project compelling enough to fund. While Open Phil can distribute more funding per hour, having achieved scale, we find it plausible that the additional costs of funding these smaller projects were still worth it on the margin. Our cost per dollar distributed is less than that of other impact-focused foundations, and likely on par once staff overhead time is factored in.
Benefits
The benefits are considerably harder to calculate. Any individual project will itself be challenging to evaluate, since most of the value is likely to come from hard-to-track, long-term changes to sentiments and behavior. Rather than try to compute the value of each grant against a common base metric, we instead opted for projects that seem robustly positive should they work. This, again, is not unlike Open Phil’s strategy; the real question is how effective our distributions were compared to theirs.
It seems likely that we picked up value Open Phil would not have, given the scale of interventions they generally consider, and that our funding was more valuable per dollar than where they would otherwise have given. Reasons to believe this include:
- Scaling potential. By funding early-stage projects, many with plans to grow, returns at this stage are higher variance but have higher potential upside.
- Inexpensive salaries. Most people requested living wages at or below typical nonprofit salaries.
- Funding individuals. Not only were salaries cheaper, but individuals are cheaper to fund: organizations often spend 1.7x an employee’s salary once overhead is included.
CEA’s counterfactuals are unclear. We are unsure whether CEA would have received the additional money had EA Grants not been in our plans. If not, Open Phil might have later granted the money to some other community-building activity. Had CEA staff not worked on this program, we would have accelerated progress on writing collated EA content, built out the EA events infrastructure, and worked on plans for EA academic engagement. As for the projects we funded, we estimate that about one quarter wouldn’t have happened at all, and the rest would have received less of the grantees’ time, since they would have pursued other funding (from the Open Philanthropy Project, or elsewhere) or self-funded by working or going into personal debt.
Mistakes
Our communication was confusing. We initially announced the process with little advertisement. We then advertised it in the EA Newsletter, but only shortly before the application deadline, and extended the deadline by two days.
We underestimated the number of applications we would receive, which gave us less time per candidate in the initial evaluation than we would have liked. It also caused delays, which we did not adequately communicate to applicants. We should have been less ambitious in setting our initial deadlines for replying, and should have communicated all changes in our timetable immediately and in writing to all applicants.
Our advertisement did not make sufficiently clear that we might not be able to fund educational expenses through CEA. Fortunately, the Open Philanthropy Project was receptive to considering some of the academic applicants.
Difficulties
Project evaluation
We found it hard to make decisions on first-round applications that looked potentially promising but fell outside our in-house expertise. Many applicants proposed studies and charities we felt under-qualified to assess. Most of those applicants we turned down; some we deferred to the relevant Open Phil program officer. We are in the process of establishing relationships with domain experts who can help with this in the future.
Conflicts of interest
One difficulty in running this program is its susceptibility to conflicts of interest (COIs).
Many of the most promising applications came from people who are already deeply involved with the community. Involvement with the community gives us evidence of value alignment, and the community also provides a context within which it is easier to come up with proposals that we think are important.
Unfortunately, since many applicants, and particularly many of the best, were deeply involved with the community, our assessing staff tended to have many COIs. This includes one of the team members, who was both a grant evaluator and an applicant.
Rather than avoid giving where COIs existed, we adopted a view much like that of the Open Philanthropy Project; the details are articulated in Holden Karnofsky’s post on hits-based giving. We recognized and tried to mitigate the effects of COIs by asking for expert input, expecting domain expertise to help correct for personal, domain-irrelevant sentiments. Another means of reducing the impact of COIs would have been to develop in-house expertise in areas we formerly knew little about, but given our process-based approach and comparatively limited internal capacity, that was both less necessary and less feasible.
The measures we took include:
- Blinding applications during the first and second rounds of the application process, such that all written applications were scored while anonymized.
- Asking staff members to declare conflicts of interest with finalists, where they existed. The team then found replacement interviewers and asked the associated staff member to step out of decision-making for those candidates.
- Deferring applications to staff of the Open Philanthropy Project when the project proposals were outside our domains of expertise.
- Tying rubric scores to observable measures, so that applicants’ scores reflected specific features of their abilities and plans rather than our general impressions.
- For the grant applicant who was also an assessor, removing him from all discussions about his application, obscuring his score and ranking, and subjecting him to the same evaluation process as all other grantees.
Project changes
We can’t fund educational expenses.
We adhered closely to our grant areas, funding nothing out of scope of what we described on the website. However, we have since determined that we cannot fund all of the project types for which we encouraged people to apply. Most notably, CEA’s charitable objects do not allow us to pay for educational expenses, making it impossible for us to give grants for Master’s or PhD programs. However, the Open Philanthropy Project is able to do so, and has started to consider funding candidates pursuing research in their priority areas.
We are unlikely to make grants for longer than a year.
While we offered opportunities for grant renewal, we didn’t make any grants lasting more than a year. This was more a result of happenstance than an intentional decision. For the few finalists who requested more than a year of funding, we were sufficiently unsure of either their proposal or their future funding situation that we did not want to commit more than a year upfront. That said, we’re still open to doing so in the future.
Plans going forward
It seems likely that we will run a similar program in the future. Kerry Vaughan has just taken over ownership of this project, and will be in charge of deciding on and implementing changes. That being said, the initial EA Grants team has many ideas of how to improve the scheme, and in particular how to solve the mistakes discussed above. We will coordinate with Kerry and post again when we have more information.
As I will no longer work on this project after October 6th, please direct questions and comments to eagrants@centreforeffectivealtruism.org.
Thanks to Ryan Carey, Rohin Shah, and Vipul Naik for corrections to this post.
Any thoughts on why the grants were so concentrated by cause area? EA Community and Long-Term Future got 65% and 33% respectively, while Global Health and Development and Animal Welfare each got just 1%. Was this a function of the applications (number or quality) or of the evaluation process (values, metrics)? Would you have predicted this going in?
With regards to animal welfare, we passed on several applications which we found promising, but couldn't fully assess, to the Open Philanthropy Project, so we may eventually facilitate more grants in this area.
I would not have predicted such an extreme resource split going in: we received fewer high quality, EA-aligned applications in the global development space than we expected. However, CEA is currently prioritising work on improving the long-term future, so I would have expected the EA community and long-term future categories to receive more funding than global development or animal welfare.
What could have made applications/applicants in the global health & development space stronger?
This may be a bit late, but: I'd like to see a bit more explanation/justification of why the particular grants were chosen, and how you decided how much to fund - especially when some of the amounts are pretty big, and there's a lot of variation among the grants. e.g. £60,000 to revamp LessWrong sounds like a really large amount to me, and I'm struggling to imagine what that's being spent on.
60k GBP doesn't sound like too much to me to revamp LessWrong at all.
So it could easily take 1-2 person-years.
I agree with Jess, I'd love to hear more about the decision making. I think that the EA Grants programme has been the highest impact thing CEA has done in the past 2-3 years, and think it could be orders of magnitude more impactful if they can reliably expect to get funding for good projects. That would require that (a) it is done regularly and (b) people can know the reasons CEA uses to decide on what projects to fund.
Responding to why building online tools for intellectual progress takes multiple people's full-time jobs: the original Reddit codebase that LW 1.0 forked from represented on the order of 4 years of 4 people's full-time work, so say at least 10 person-years of coding (we have so far had maybe 1 person-year of full-time coding work, and LW 2.0 has an entirely original codebase). While we're able to steal some of their insights (so we built a lot of the final product directly without having to fail and rebuild multiple times), LW 2.0 is building a lot of original features, like an eigenkarma system, a sequences feature, and a bunch of other things that don't currently exist. We have still not built 50% of the features the site will have once we stop working on it.
Then also there's content curation and new epistemic and content norms to set up which takes time, and user interviews with writers in the community, and a ton of other things. The strategic overview points in the sorts of directions we'll likely build things.
I agree with this. Although note that a lot of things would have to happen for EA grants to get more than 1 order of magnitude better. (They might have to make several improvements e.g. larger grants, more frequent grants, better recruitment of grantees, etc etc.)
Interesting! Is there a plan to evaluate the grant projects after they reach some kind of "completion" point?
Yes, although what exactly that will entail is still being worked out.
There's the weak form of evaluation (whether or not grantees completed the objectives they set out when applying), which we're doing both for "is this an obviously bad project?" reasons and for legal compliance. We're also hoping to do Fermi estimates of the value produced as a result of the projects, covering both changes in value in the world and changes in the value of the recipient.
Since I'm not going to be in charge of this, though, this is more my recommendation for what to do than a plan.
On your grants page (https://www.effectivealtruism.org/grants/) it still says you consider funding education, has this changed again?
"We welcome applications in the following areas: [...]
When is the next round of EA grants opening?
Are you considering accepting applications on a rolling basis?
Currently planning to open EA Grants applications by the end of the month. I plan for the application to remain open so that I can accept applications on a rolling basis.
All this was hard to follow.
EA money is money in the hands of EAs. It is argued that this is more valuable than non-EA money, because EAs are better at turning money into EAs. As such, a policy that cost $100 of non-EA money might be more expensive than one which cost $75 of EA money.
Something we can do to clarify?
It depends on the value you place on CEA staff time. Internally we value the average CEA staff hour at ~$75 (roughly £56; the range is $50-$150, depending on the nature of the work), so 840 hours × £56 = £47,040 in opportunity cost, plus real staff costs. This suggests that staff time wasn't the main cost, unless you think the counterfactual uses of that time would have been far more impactful than our average.
Really tricky for me to say, especially because I have incentive to think this was the right choice. That being said, it does seem right to me, primarily because of the haste consideration: https://80000hours.org/2012/04/the-haste-consideration/.
As I noted elsewhere in the piece, "about one quarter of the projects wouldn’t have happened at all, and the rest would have received less time." This makes the immediate multipliers pretty high: we spent about 0.42 years of CEA staff time and gained (really rough guess) 10 years of counterfactual EA time, roughly a 24x return in time alone. Since a lot of people we sponsored are doing movement-building work in some form, I expect their activities to have multipliers, too.
The counterfactual activities are higher risk, but aim at long-run value similar to what we expect the recipients to produce. (e.g. Theron Pummer is writing introductory EA content and trying to engage academics.)
Really glad to see you taking conflicts of interest so seriously!
For what it's worth, Owen thinks I should use at least double the $75/hour figure, given the experience of the staff working on the project and the nature of the work.
Is CEA considering awarding prizes to papers that advance core areas, after the fact?
Hm, we haven't considered this in particular, although we are considering alternative funding models. If you think we should prioritize setting something like this up, can you make the case for this over our current scheme or more general certificates of impact?
I can't make a case for prioritization, as I haven't been able to find enough data points to form a reasonable base rate for the incentive's expected effects. FQXi might have non-public data on how their program has gone that they might be willing to share with CEA. I'd probably also try reaching out to the John Templeton Foundation, though they are less likely to engage. It is likely worth a short brainstorm of people who might know more about how prizes typically work out.
Normally people discuss the value of time by figuring out how many dollars they'd spend to save an hour. It's kind of unusual to ask how many dollars you'd have someone else spend so that you save an hour.
Those numbers are switched around, right?
Oops, yes they were. Fixed. Thanks!
It now went from 20,000 to 200,000. Is that what you intended? My crude calculation yields a number closer to 20,000 than 200,000.
Sloppy editing; thanks for the catch. It should actually be fixed now.
Thanks for writing this up! While it's hard to evaluate externally without seeing the eventually outcomes of the projects, and the counterfactuals of who you rejected, it seems like you did a good job!
My experience making grants at Open Phil suggests it would take us substantially more than 25 hours to evaluate the number of grant applications you received, decide which ones to fund, and disburse the money (counting grant investigator, logistics, and communications staff time). I haven't found that time spent scales completely linearly with grant size, though it generally scales up somewhat. So while it seems about right that most grants take 13-75 hours, I don't think it's true that grants that are only a small fraction of the size of most OP grants would take an equally small fraction of that amount of time.
Also related: https://www.facebook.com/vipulnaik.r/posts/10211030780941382
Right, neither do I. My 25-hour estimate was how long it would take you to make one grant of ~£500,000, not a bunch of grants adding up to that amount. I assumed that if Open Phil had been distributing these funds it would have done so by giving greater amounts to far fewer recipients.
Ah, k, thanks for explaining, I misinterpreted what you wrote. I agree 25 hours is in the right ballpark for that sum (though it varies a lot).
Thanks for the detailed post, Roxanne! I am a little confused by the status of the recipients and the way these grants are treated by recipients from an accounting/tax perspective.
First off, are all the grants made to individuals only, or are some of them made to corporations (such as nonprofits)? Your spreadsheet lists all the recipients as individuals, but the descriptions of the grants suggest that in at least some cases, the money is actually going to an organization that is (probably) incorporated. Three examples: Oliver Habryka for LessWrong 2.0 (which he has reported at http://lesswrong.com/r/discussion/lw/pes/lw_20_strategic_overview/ is a project under CFAR), Katja Grace for AI Impacts (which is a separate organization, that used to be classified as a project of MIRI), and Kelly Witwicki (whose work is under the Sentience Institute). If the grant money for some grants is going to corporations rather than individuals, is there a way to see in which cases the grant is going to a corporation, and what the corporation is?
Secondly, I was wondering about the tax and reporting implications of the grants that are made to individuals. Do the receiving individuals have to treat the grants as personal income? What if somebody is coordinating a project involving multiple people and splitting the money across different people? Do you directly pay each of the individuals involved, or does the person doing the coordination receive the totality of the money as personal income and then distribute parts to the other people and expense those?
Some of them are going to nonprofits and other institutions, yes.
This wasn't something we'd considered publishing, and I'm not sure what if any privacy concerns this could raise. If there's a good case for doing so I'm happy to consider adding that information.
Unfortunately, in cases where we paid individuals directly they do have to treat them as personal income. We might have been able to avoid this in some cases by giving the money as scholarships, although as far as I'm aware this would have been a big hassle to set up. It's on the table for future rounds if it seems worth the setup cost.
In four of the five such cases, the money went to an institution with which the recipient will coordinate multi-person distribution. In the fifth case, the money went directly to an individual who had yet to designate the other recipient, so we gave them the totality to distribute themselves.
impressive and useful - thanks!!
It was a really informative and good read; thanks for being so specific. I think I've found a high-potential project that could play a huge role in ending malaria, and it needs funding quite soon (2-3 weeks), so I'm supporting them in a funding round. They only need €12,000 more. Would we be able to apply for an EA Grant even before the next round has started?