

The Long-Term Future Fund made the following grants as part of its 2021 Q4 grant cycle (grants paid out sometime between August and December 2021):

  • Total funding distributed: $2,081,577
  • Number of grantees: 34
  • Acceptance rate (excluding desk rejections): 54%
  • Payout date: July - December 2021
  • Report authors: Asya Bergal (Chair), Oliver Habryka, Adam Gleave, Evan Hubinger

2 of our grantees requested that we not include public reports for their grants. (You can read our policy on public reporting here). We also referred 2 grants, totalling $110,000, to private funders, and approved 3 grants, totalling $102,000, that were later withdrawn by grantees.

If you’re interested in getting funding from the Long-Term Future Fund, apply here.

(Note: The initial sections of this post were written by me, Asya Bergal.)

Other updates

Our grant volume and overall giving increased significantly in 2021 (and in 2022 – to be featured in a later payout report). In the second half of 2021, we applied for funding from larger institutional funders to make sure we could make all the grants that we thought were above the bar for longtermist spending. We received two large grants at the end of 2021:

Going forward, my guess is that donations from smaller funders will be insufficient to support our grantmaking, and we’ll mainly be relying on larger funders.

More grants and limited fund manager time mean that the write-ups in this report are shorter than they have traditionally been. I think communicating publicly about our decision-making process continues to be valuable for the overall ecosystem, so in future reports we're likely to continue writing short one-sentence summaries for most of our grants, with longer write-ups for larger grants or grants that we think are particularly interesting.


Here are some of the public grants from this round that I thought looked most exciting ex ante:

  • $50,000 to support John Wentworth’s AI alignment research. We’ve written about John Wentworth’s work in the past here. (Note: We recommended this grant to a private funder, rather than funding it through LTFF donations.)
  • $18,000 to support Nicholas Whitaker doing blogging and movement building at the intersection of EA / longtermism and Progress Studies. The Progress Studies community is adjacent to the longtermism community, and is one of a small number of communities thinking carefully about the long-term future. I think having more connections between the two is likely to be good both from an epistemic and a talent pipeline perspective. Nick had strong references and seemed well-positioned to do this work, as the co-founder and editor of the Works in Progress magazine.
  • $60,000 to support Peter Hartree pursuing independent study, plus a few "special projects". Peter has done good work for 80K for several years, received very strong references, and has an impressive history of independent projects, including Inbox When Ready.

Grant Recipients

In addition to the grants described below, 2 grants have been excluded from this report at the request of the applicants.

Note: Some of the grants below include detailed descriptions of our grantees. Public reports are optional for our grantees, and we run all of our payout reports by grantees before publishing them. We think carefully about what information to include to maximize transparency while respecting grantees’ preferences. 

We encourage anyone who thinks they could use funding to positively influence the long-term trajectory of humanity to apply for funding.

Grants evaluated by Evan Hubinger

  • EA Switzerland/PIBBSS Fellowship ($305,000): A 10-12 week summer research fellowship program to facilitate interdisciplinary AI alignment research
    • This is funding for the PIBBSS Fellowship, a new AI safety fellowship program aimed at promoting alignment-relevant interdisciplinary work. The central goal of PIBBSS is to connect candidates with strong interdisciplinary (e.g. not traditional AI) backgrounds to mentors in AI safety to work on interdisciplinary projects selected by those mentors (e.g. exploring connections between evolution and AI safety).

      We decided to fund this program primarily based on the strong selection of mentors excited about participating. We did have one main reservation: because the program targets candidates with strong interdisciplinary backgrounds but not necessarily much background in EA or AI safety, such candidates might not stick around and continue doing good AI safety work after the program. However, we decided this avenue was worth pursuing regardless, given that interdisciplinary talent is very much needed, and at minimum the program will give us information on how effectively we can retain such talent.
  • Berkeley Existential Risk Initiative ($250,000): 12-month salary for a software developer to create a library for Seldonian (safe and fair) machine learning algorithms
    • This is funding for Prof. Philip Thomas to hire a research engineer to create a library for easily using Seldonian machine learning algorithms. I think that the Seldonian framework, compared to many other ways of thinking about machine learning algorithms, centers real safety concerns in a useful way. Though I am less excited about the particular Seldonian algorithms that currently exist, I am excited about Prof. Thomas continuing to push the general Seldonian framework and this seems like a reasonably good way to do so.

      The biggest caveat with this grant was primarily that Prof. Thomas had very little experience hiring and managing research engineers, suggesting that it might be quite difficult for him to actually turn this grant into productive engineering work. However, both BERI and I have provided Prof. Thomas with some assistance in this domain, and I am hopeful that this grant will end up producing good work.
  • John Wentworth ($50,000): 6-month salary for general research
    • I have been consistently impressed with John Wentworth’s AI safety work, as I was when we funded him in the past. Though this grant is more open-ended than previous grants we’ve made to John, I think John is an experienced enough AI safety researcher that I am excited for him to do general, open-ended research.
    • Note: We recommended this grant to a private funder, rather than funding it through LTFF donations. At the time, we believed that the general nature of the grant might include work outside of the scope of what we are able to fund as a charitable organization, but we intend to make similar grants through EA Funds going forward.
  • Anonymous ($44,552): Supplement to 3-month Open Phil grant, working on skilling up in AI alignment infrastructure.
    • This grant is to support a couple of promising candidates working on AI safety infrastructure/operations/community projects, supplementing funding that one of them previously received from Open Phil. This grant was referred to us by the EA Infrastructure Fund and funded by us largely on their recommendation.
  • Anonymous ($30,000): Additional funding to free up time for technical AI safety research.
    • This funding is general support for helping a technical AI safety researcher whose work I’ve been excited about improve their productivity. I think that many people doing good work in this space are currently underinvesting in improving their own productivity. If we can alleviate that in this case by providing extra funding, I think that’s a pretty good thing for us to be doing.
  • David Reber ($20,000): 9.5 months of strategic outsourcing to read up on AI Safety and find mentors
    • This funding is to help David improve his productivity and free up time to read up on AI safety while pursuing his AI PhD at Columbia. I think these are valuable things for David to be doing, and I think they will increase his odds of being able to contribute meaningfully to AI safety in the future. That said, we decided to fund only David’s productivity improvements and not a teaching buyout, since we thought teaching was likely to be somewhat valuable to David at this stage of his career, and we were unsure enough about his own research that a full teaching buyout didn’t make sense.
  • Adam Shimi ($17,355): Slack money for increased productivity in AI Alignment research
    • Adam Shimi has been doing independent AI safety research under a previous grant from us, but has found that he is tight on funding and could improve his productivity by receiving an additional top-up grant. Given that we continue to be excited by Adam’s research, and he thinks that the extra funding would be helpful for his productivity, I think this is a very robustly good grant.

Grants evaluated by Asya Bergal

Any views expressed below are my personal views and not the views of my employer, Open Philanthropy. (In particular, getting funding from the Long-Term Future Fund should not be read as an indication that the applicant has a greater chance of receiving funding from Open Philanthropy, and not receiving funding from the Long-Term Future Fund [or any risks and reservations noted in the public payout report] should not be read as an indication that the applicant has a smaller chance of receiving funding from Open Philanthropy.)

  • Kristaps Zilgalvis ($250,000): Funding for a degree in the Biological Sciences at UCSD (University of California San Diego).
    • Kristaps, who was based in Belgium, applied for funding to cover 4 years of tuition, housing, and dining fees at UCSD in the U.S., with the ultimate goal of reducing biological existential risk. Kristaps had worked previously with a long-term-future-focused biosecurity researcher, who gave him a positive reference, and demonstrated reasonable understanding of long-term biorisk considerations in my conversation with him.
    • It’s generally very difficult for international students to find support for degrees in the U.S., and Kristaps had indicated in his application that his counterfactual would be to either work for a few years to make money to pay for his degree, or to take out a substantial student loan.
    • In general, my guess is that going to a good US or UK university increases future impact in expectation, both by boosting someone’s career, and by putting them in closer proximity for longer with other people working on the long-term future. I think this case is stronger the better the university, the closer the university is to a key geographic hub, and the more students at the university itself are thinking seriously about the long-term future. I made the call to fund here partially by referencing this ranking site, which put UCSD 10th in Biological Sciences degrees worldwide.
    • I’m somewhat worried that funding undergraduate degrees is unusually likely to attract applicants who feign interest in a priority cause.
  • Noemi Dreksler ($99,550.89): Two-year funding to run public and expert surveys on AI governance and forecasting.
    • Noemi applied for funding to design, conduct, analyze, and write up survey research for the Centre for the Governance of AI, which hadn’t yet been set up at the time of the application. From her application:
      • > “Ongoing projects include a large-scale cross-cultural survey of the public’s AI views (follow-up to Zhang & Dafoe, 2019), the analysis and dissemination of an AI researcher survey (Zwetsloot et al., 2021; Zhang et al., 2021), and a survey of economists’ views of AI/HLMI and related economic forecasts. Future work might include e.g., eliciting expert views and forecasts on AI from a variety of epistemic communities (e.g., policy-makers, AI researchers, AI ethics experts) through surveys and a study of the role of anthropomorphism and mind perception in attitudes towards AI governance.”
    • I wanted to make this grant because I was interested in some of the concrete surveys being conducted (particularly the economist survey), and also overall like the model of having someone “specialize” in conducting AI-relevant surveys – ideally, a dedicated survey-runner could become unusually efficient at running surveys, and make it cheap for others to request survey data when it was decision-relevant for them.
  • Anonymous ($90,000): 6-month salary to do AI alignment research.
    • This was funding for someone with a strong track record in AI alignment to work independently for 6 months.
  • William D'Alessandro ($22,570): Funds to cover speaker fees and event costs for EA community building tied in with my MA course on longtermism in 2022.
  • William Bradshaw ($16,456): Funding to cover a visit to Boston (via a stopover in another country as required by US coronavirus restrictions at the time) for biosecurity work on the Nucleic Acid Observatory and other biosecurity projects in the Esvelt group.
  • George Green ($11,400): Living costs stipend for extra US semester + funding for open-source intelligence (OSINT) equipment & software.
  • James Smith ($8,324): Time costs over 6 months to publish a paper on the interaction of open science practices and biorisk.
  • Anonymous ($5,585): 3-month salary to set up a new x-risk relevant project over the upcoming year.
  • Chelsea Liang ($5,000): 3+ months’ compensation to drive time-sensitive policy paper: 'Managing the Transition to Widespread Metagenomic Monitoring: Policy Considerations for Future Biosurveillance'.

Grants evaluated by Adam Gleave

  • Chad DeChant ($90,000): Funding to finish AI-safety-related CS PhD on enabling AI agents to accurately report their actions
    • Chad DeChant is a final-year CS PhD candidate at Columbia, working on enabling AI agents to report on and summarize their actions in natural language. He recently switched advisor to Daniel Bauer to pursue this topic, but unfortunately Daniel was unable to support him on a grant. This funding allows Chad to complete his CS PhD. 

      Chad has previously taught a course on AI Safety, Ethics and Policy. He is interested in pursuing a career combining technical AI safety with policy and governance, which I think Chad is a good fit for. Completing a PhD is a prerequisite for many of these relevant positions, so it seems worth enabling Chad to complete the program. Additionally, I think it is plausible that his current research direction will help with long-term AI safety.
  • The Center for Election Science ($50,000): General support for campaigns to adopt approval voting at local levels in the US (Note: we had originally included an outdated version of this write-up in this post; we've now updated this.)
    • Plurality voting, where electors vote for a single candidate from a list and the one with the most votes wins, is by far the most common voting system worldwide. Yet it is widely agreed by social choice theorists to be one of the worst voting systems, leading to random outcomes and often favoring extreme candidates. The Center for Election Science (CES) campaigns to adopt approval voting in the US, where voters can pick every candidate they "approve" of and the one with most approval wins.
    • I'm not sure whether approval voting is better than alternatives like ranked choice voting: my sense is approval voting has nicer theoretical properties and is backed by lab experiments. However, ranked choice voting has been battle-tested in more political situations. Both of them are, however, much better than plurality voting.
    • If implemented, approval voting could result in politicians being elected that more reliably reflect popular opinion and, in particular, favor candidates that appeal to a broad base. This seems likely to improve political stability and institutional decision-making, which seems robustly positive for the long-term. However, it's not without its pitfalls. For example, perhaps an extreme candidate winning occasionally helps "reset" government and keep it dynamic. Approval voting is likely to avoid those extreme candidates who don’t have sufficient support.
    • CES has won ballot initiatives in Fargo, ND (population 125k) and St. Louis, MO (population 300k), at an average cost of $10 per voter. They have also organised a nationwide chapter system, outreach campaigns, and a small research department. I'm confident they can replicate this success in other cities in the US, and think it's plausible they can scale to get approval voting used in some state gubernatorial races.

      However, from a longtermist perspective, most local governments are of limited importance – what matters is mostly the decision of the US and other influential nation states, and some key international bodies.

      CES may be able to have influence at the federal level by changing state-level voting rules on how senators and representatives are elected. This is not something they have accomplished yet, but would be a fairly natural extension of the work they have done so far. Additionally, they may be able to influence presidential primaries. Parties have significant leeway here, with substantial variation between states.

      Influencing presidential elections would be significantly harder. Plurality and approval voting give effectively the same outcome in two-candidate races, which US presidential elections currently are de facto. If all states adopted approval voting, then presidential races could include a broader range of candidates. The best option is likely an interstate compact to adopt a national popular approval vote, which would require only a majority of states to adopt it.

      I find the most plausible path to (long-term) impact being that CES continues to switch local jurisdictions to approval voting, and that this provides enough real-world demonstration of approval voting's value that new international institutions or nation states adopt it. Improving the composition of the Senate and House is also likely to provide some benefit, but I judge it to be smaller.
  • Prof Nick Wilson ($27,000): Funding for a research fellow to identify island societies likely to survive sun-blocking catastrophes and to optimise their chance of survival
    • Nick Wilson is a Professor in Public Health at the University of Otago. We funded him to hire a research assistant for a paper investigating possible island refuges for sun-blocking agricultural catastrophes. Such catastrophes are both plausible (e.g. from nuclear war or volcanic eruption) and reasonably neglected. The study has now been completed and the findings are covered in a long post on the EA Forum. More detailed articles have been submitted to journals, but the preprints are now available (for the main study, and another study of food self-sufficiency in New Zealand). The key findings were that some locations could likely produce enough food in a nuclear winter to keep feeding their populations, but food supply alone does not guarantee flourishing of technological society if trade is seriously disrupted.
  • Benedikt Hoeltgen ($19,020): 10-month salary for research on AI safety/alignment, focusing on scaling laws or interpretability.
    • We are funding Benedikt to work on technical AI safety research with Sören Mindermann and Jan Brauner in Yarin Gal's group at Oxford. Benedikt published several papers in philosophy during his undergraduate degree, and switched to ML research during his Master's in Computer Science after speaking to 80,000 Hours. I think Benedikt has a promising career ahead of him, and that this research experience will help him get into top PhD programs or other research-focused positions.
  • Anonymous (pseudonym Gurkenglas) ($14,125): 3-month salary to produce an interpretability tool that illustrates the function of a network's modules.
    • Understanding how neural networks work will help with AI safety by letting us audit networks prior to deployment, better understand the kinds of representations they tend to learn, and potentially use them as part of a human-in-the-loop training process. The applicant has proposed a novel approach to interpretability based around computing the invariances of a neuron – what other inputs produce the same activations – and detecting modules in a neural network. While I consider this direction to be somewhat speculative, it seems interesting enough to be worth funding and to renew if the results show promise.
  • Anonymous ($8,000): 5-month salary top-up to plug hole in finances while finishing PhD in AI governance.
    • The grantee is pursuing a PhD on a topic related to AI governance. They have a temporary hole in finances due to a low PhD stipend coupled with high living expenses in their current location. I have heard positive things about their work from experts in the field, so I think it is worth providing them with this relatively small supplement to ensure financial limitations do not hamper their productivity.
  • Anson Ho ($4,800): 3-month funding for a project analysing AI takeoff speed
    • Anson is a recent Physics graduate from St Andrews. We are funding them to work with Vael Gates (Stanford post-doc) to study AI takeoff speed and continuity. While this topic has been studied shallowly in the past, I think there is still plenty of room for future work. Anson is new to AI strategy research but has a strong STEM background (first-class degree from a top university) and has done some self-studying on AI (e.g. attended EA Cambridge's AGI Safety course), and so seems to have a good chance of making progress on this important problem.

Grants evaluated by Oliver Habryka

  • David Manheim ($70,000): 6-month salary to continue work on biorisk and policy, and to set up a longtermist organization in Israel.
    • We’ve given multiple grants to David in the past (example). In this case, David was planning to work with FHI, but FHI was unable to pay him for his time. To enable him to continue doing work on longtermist policy, we offered to cover his salary at ALTER, the new organization he has set up. I did not evaluate this grant in great depth, given that FHI would have been happy to pay for his time otherwise.
  • Peter Hartree ($60,000): 6-month salary to pursue independent study, plus a few "special projects".
    • Peter Hartree worked at 80,000 Hours for multiple years, and was interested in exploring a broader career shift – taking more time to study and think about core longtermist problem areas.
    • He received great references from his colleagues at 80k, and I am generally in favor of people at EA organizations reconsidering their career trajectory once in a while and being financially supported while doing so (especially given that current salaries at most EA organizations make building runway for this kind of reflection hard).
    • Note: We recommended this grant to a private funder, rather than funding it through LTFF donations, since it is sometimes hard to demonstrate the public benefit of “independent study”.
  • Aysajan Eziz (officially, Aishajiang Aizezikali) ($45,000): 9-month salary for an apprenticeship in solving problems-we-don’t-understand.
    • Aysajan is apprenticing to John Wentworth, whose work we’ve funded in the past, and which currently seems like some of the most promising AI Alignment research being produced. I had little information on Aysajan, but was excited about more people working with Wentworth on his research, which seemed like a good bet.
  • Nicholas (Nick) Whitaker ($18,000): 3 months of blogging and movement building at the intersection of EA/longtermism and Progress Studies
  • David Rhys Bernard ($11,700): 4-month salary for research assistant to help with surrogate outcomes project on estimating long-term effects
  • Effective Altruism Sweden ($4,562): Funding a Nordic conference for senior X-risk researchers and junior talents interested in entering the field
  • Benjamin Stewart ($2,230): 6-week salary for self-study in data science and forecasting, to upskill within a GCBR research career
  • Caroline Jeanmaire ($121,672): Two-year funding for a top-tier PhD in Public Policy in Europe with a focus on promoting AI safety
  • Logan McNichols ($3,200): Funding to pay participants to test a forecasting training program
    • The core principle of the program is to realistically simulate normal forecasting, but on questions which have already been resolved (backcasting). This creates the possibility of rapid feedback. The answer can be revealed immediately after a backcast is made, whereas forecasts are often made on questions which take months or years to resolve. The fundamental challenge of backcasting is gathering information without gaining an unfair advantage or accidentally stumbling on the answer. This project addresses the challenge in a simple way: by forming teams of two, an information gatherer and a forecaster.
    • Since this was a small grant, we didn’t evaluate this grant in a lot of depth. The basic idea seemed reasonable to me, and seemed like it might indeed improve training for people who want to get better at forecasting.
Comments

I deeply appreciate the rare privilege of this support, and I am working hard to put it to good use.

My burn rate turned out lower than expected, so the 6-month support is actually going to cover me for 11 months—from October 2021 up to the end of August 2022.

Sometime in 2022 Q4 I will share a public write-up of how things went.

As of now I'm on track with the plan I sketched last year, namely:

(Phase 1) Independent study until spring / summer 2022

(Phase 2) Get into my next "big project" by end of 2022.

(Phase 1) is nearly complete—main thing left is to finish and post some EA Forum stuff.

I started on (Phase 2) in ~July 2022 and progress has been good so far. I decided to start some 1-8 week projects to test some ideas and potential long-term working relationships, improve my project management skills, and also do some hopefully useful things.

These include:

  • Radio Bostrom: audio narrations of Bostrom papers (soft-launched).
  • Helping with Will's book launch (mainly: feedback on website and related marketing stuff; web development).
  • A project to digitise Derek Parfit's archive of papers and correspondence (started Wednesday, funding secured today).
  • Comment Helper for Google Docs (beta phase).
  • Dialling up my attempts to help people get funding for things (2 people confirmed; 1 pending).
  • Dialling up my advisory role at 80,000 Hours to involve more proactive mentorship of several team members (going well).
  • Dialling up my attempts to help people think about their next career steps and/or offer useful advice on their projects (>5 calls and email threads).
  • Spending more time writing Tweets and EA Forum comments (just starting).
  • An EA infrastructure project (not public yet, may not pursue).

My salary for some of the above is covered by other sources, but initial exploration on all of them was covered by the LTFF grant. I think there's a strong counterfactual case that the LTFF grant has been important for making these happen.

The biggest problem I faced so far was an unusually long "low" period during winter 2021 - spring 2022. I've had these lows roughly once a year since forever, but this one was unusually bad. It may have been exacerbated by a COVID-19 infection. This badly derailed my independent study (rate of progress dropped to ~20% of spring/summer 2021). The experience led me to make a big medication change, which I hope will improve things for the coming years. The medication change was sped up by the LTFF grant, because it made it easier to pay for private psychiatrist appointments.

On a personal note: I also met a girl and asked her to marry me. She said "yes".

If you'd like to know more, many of my working notes are semi-public at https://notes.pjh.is. I'm @peterhartree on Twitter.

Edit 2022-08-19: Updated the Parfit archive bullet to note that funding is secured (and to make clear that it's his physical papers and correspondence, given Pablo's comment below).

I'm curious about your plans to digitize Parfit's archive. I've made ~all his writings available here, but maybe you have other things in mind?

Edit 2022-08-19: The Parfit archive digitisation project is now funded (pending formalities).

Yes I'm talking about all the boxes of unpublished papers and correspondence he left behind after his death. (Plus a bunch of harddrives from the 1980s to 2017 that we've not looked at yet.)

I just started on this a couple days ago. That's to say:

  • Wednesday: emailed Parfit's former long-term partner to ask what's up with the archive.
  • Thursday: spoke with the person who has possession of the physical archive. Realised we should digitise it ASAP; drafted funding application and requested quotes from digitisation services.
  • Friday: finalised the funding application, shared it with potential funders, and secured funding.

There's already a top philosophy professor involved (the person who has the archive). They plus an assistant have nearly finished the initial sift. There is some incredible material in there.

I've not yet thought much about building a team around this, but my quick thought is that ideally I would hire a project manager and just be in an advisory role myself.

Pablo: if you or someone you know might be interested in project-managing this, or serving in an advisory role, send me an email. Likewise other readers. Thanks!

Thanks for sharing this!

P.S. I'd like to acknowledge the help of Peter McIntyre here. Peter encouraged me to apply for the LTFF grant in summer 2021. I wasn't thinking about seeking financial support at the time. Minimally he sped me up on seeking funding by several months; maximally he is counterfactually responsible for quite a bit of what I've done December 2021-present.

We've also been doing ~daily virtual co-working sessions for over a year.

What are the reasons why this report was published over 8 months after these grant recommendations were made? Is there any way support from the rest of the EA community could ameliorate those bottlenecks for the LTFF, other EA Funds, fund advisors and/or the Centre for Effective Altruism?

I'm aware these may be sensitive questions, so I'm providing in this comment my rationale for asking these questions to demonstrate I mean to ask them in a respectful and constructive way.

First, I don't mean to imply one or more individuals have been procrastinating and not fulfilling some personal obligation to ensure reports are published in a timely manner. I recognize it as a structural problem of the Centre for Effective Altruism (CEA) and grant advisors being extremely busy with many more important professional responsibilities. I would agree with any advisors that their other professional responsibilities are a greater priority than publishing the grant recommendations once they've already been made.

I've interacted with dozens of people who are wary of submitting grant applications because they feel they don't have enough information to be confident it's worth the effort. That's a solvable problem, but it's exacerbated by a delay in access to information that would better inform their decision.

There are various things that could be done to solve the problem, but for specifically hastening the rate at which grant recommendation reports are published, my opinion is that it would be worthwhile to hire and pay an assistant to whom grant advisors can delegate basic tasks, saving their own time.

I could afford to donate a sum of my own money toward that end but I doubt it'd be enough. I'm confident enough in the value of this prospect, though, that I'm willing to advocate for others to donate to such an end too, or write-up an EA Forum post making the case for it. I'm confident enough in this that I might still consider it to be a good idea even if grant advisors themselves would think it unnecessary. 

Thanks for asking this, this didn't feel rude and I think it's a very reasonable question. I think that this report was released much later than we would have liked.

Firstly, I want to clarify that EA Funds is not part of CEA; it was spun out a few years ago and is now run by me, whereas CEA is run by Max Dalton. Asya Bergal chairs the LTFF.

Asya may want to add more information below, but my take is that EA Funds is bottlenecked on grantmaking capacity as well as good applications. Our goal is to make excellent grants, and writing these reports trades off against grantmaking capacity. If we had more time, I expect we'd put out these reports more quickly, but I'm keen to protect the time of our part-time fund managers as much as possible. I would happily hire more part-time fund managers, but we have found it hard to find people who meet our current bar, and we have a reasonable amount of fund manager turnover (as our fund managers pursue other valuable projects).

We do have one assistant fund manager on EAIF and are hiring more, but I don't expect them to speed up this process very much (as the fund managers themselves need to write up why they decided to fund each project). We will soon have a public grants database listing each project we fund, but I'm less excited about simply reporting our grantmaking as opposed to explaining our reasoning (most of my theory of change for why these reports are useful centers on improving the EA community's project taste and being transparent in a high-fidelity way).

I'm a bit confused about why people aren't sure whether it's worth their time to apply when the form takes less than an hour and people can apply for arbitrarily large amounts of money, the EV/hour seems very high (based on previous report acceptance rates).

Another factor, which I expect to get pushback on, is that being transparent ends up being very operationally costly, and it's not obvious that this is the best use of our time relative to supporting grantees, approving more grants, or trying to solicit better applications. Also, a large proportion of our funding comes from Open Phil, which in my view decreases the requirement to be transparent beyond trying to encourage good community norms and steer future EA projects.

I'm a bit confused about why people aren't sure whether it's worth their time to apply when the form takes less than an hour and people can apply for arbitrarily large amounts of money, the EV/hour seems very high (based on previous report acceptance rates).

I don't know how representative it is, but I know one person who worked pretty hard on the "less than an hour" application form. They asked for feedback and had calls with several people to optimize it, rewrote it from scratch at least once, and probably spent >30 hours on it in total.

It was probably worth it: they got a big raise and a much more exciting job. It was understandably life-changing, and I think they'll do a lot of good!

Since the outcome can be so life-changing, I understand why applying could feel like a large investment; there can be large amounts of money on the line.

Acceptance rate (excluding desk rejections): 54% 

Huh, this feels like it is pretty high / a signal that people should be applying way, way more. Or am I missing something?

tl;dr We're absolutely more constrained by the supply of great grant applications than by money. So yes, if you have projects that you think are great for improving the long-term future, please apply!

(I joined the LTFF in January 2022 and was not involved in any of the grants in the above payout report.)

Yeah, ~50% acceptance rate for things that aren't desk-rejects seems pretty normal for the distribution of grant applications that LTFF draws from. See this from Eli at Open Phil on their Longtermist Community Building grants:

We’ve received 94 applications overall, of which about 24 didn’t seem to be related to effective altruism or longtermism at all.

Of the remaining 70 applications, we:

  • Funded 25.
  • Rejected 28.
  • Referred 15 to the EA Infrastructure Fund or the Long-Term Future Fund, with the applicants’ permission.
  • Are still evaluating one.
  • Started evaluating one but the applicant withdrew, I believe (this application was handled by a colleague).

Which is just under 50% acceptance rate (25/53) for non-desk-rejects. 

Broadly speaking, I think the different core longtermist EA funding agencies don't have wildly different bars, especially for relatively small-scale grants that LTFF is likely to see. Acceptance rates have more to do with the distribution of applicants than the details of specific funding mechanisms.

So if you or other people have projects that you think are great (or at least decent) for the long-term future, please apply!

Is there a description of the desk-reject policy and/or statistics on how many applications were desk rejected? 

(speaking for my own understanding of the situation. I don't handle desk rejects)

Usually, things that don't seem that related to or motivated by improving the long-term future at all, e.g. animal shelters, global poverty stuff, criminal justice reform, and things that are even less related. AFAIK there's no formal policy but for people reading this on the forum, I think you should think of desk rejects as mostly irrelevant to your own application chances.

Thanks for the breakdown, Linch.

From what I have heard from people involved in EA grant funding decisions, this is pretty typical, and yes they very much want more people to apply. 

Feels pretty wild

"I am skeptical whether CES will be able to have much influence at the federal level . . ."

It's worth mentioning that CES highlighted that state-wide ballot initiatives enabled approval voting to be used for US House, US Senate, Presidential general, and Presidential primary elections. This information is missing from the write-up, which instead states that CES doesn't influence federal elections.

The write-up also seems to portray local-level reform as CES's only goal. Again, we provided feedback on this issue. We also corrected the review's cost-efficiency figure, which is incorrect.

We hope that our feedback is more fully considered in future reviews and that this doesn't dissuade others from supporting our critical work.

Hi Aaron, thanks for highlighting this. We inadvertently published an older version of the write-up before your feedback -- this has been corrected now. However, there are still a number of areas in the revised version which I expect you'll still take issue with, so I wanted to share a bit of perspective on this. I think it's excellent you brought up this disagreement in a comment, and would encourage people to form their own opinion.

First, for a bit of context, my grant write-ups are meant to accurately reflect my thought process, including any reservations I have about a grant. They're not meant to present all possible perspectives -- I certainly hope that donors use other data points when making their decisions, including of course CES's own fundraising materials.

My understanding is you have two main disagreements with the write-up: that I understate CES's ability to have an impact on the federal level, and that the cost effectiveness is lower than you believe to be true.

On the federal level, my updated write-up acknowledges that "CES may be able to have influence at the federal level by changing state-level voting rules on how senators and representatives are elected. This is not something they have accomplished yet, but would be a fairly natural extension of the work they have done so far." However, I remain skeptical regarding the Presidential general for the reasons stated: it'll remain effectively a two-candidate race until a majority of electoral college votes can be won by approval voting. I do not believe you ever addressed that concern.

Regarding the cost effectiveness, I believe your core concern was that we included your total budget as a cost, whereas much of your spending is allocated towards longer-term initiatives that do not directly win a present-day approval voting campaign. This was intended as a rough metric -- a more careful analysis would be needed to pinpoint the cost effectiveness. However, I'm not sure that such an analysis would necessarily give a more favorable figure. You presumably went after jurisdictions where winning approval voting reform is unusually easy; so we might well expect your cost per vote to increase in future. If you do have any internal analysis to share on that then I'm sure I and others would be interested to see it.

Hi Adam,

I think your response fairly addresses the concerns I initially raised, and I appreciate your effort there. Thank you for the delicate response.
