- The Long-Term Future Fund is on track to approve $1.5M - $2M of grants this round. This is 3 - 4x what we’ve spent in any of our last five grant rounds, and it represents most of our current fund balance.
- We received 129 applications this round, desk-rejected 33 of them, and are evaluating the remaining 96. Based on our preliminary evaluations, I’d guess we’ll fund 20 - 30 of these.
- In our last comparable grant round, April 2019, we received 91 applications and funded 13, for a total of $875,150. Compared to that round:
- We’ve received more applications. (42% more than in April.)
- We’re likely to distribute more money per applicant, because several applications are for larger grants, and requested salaries have gone up. (The average grant request is ~$80K this round vs. ~$50K in April, and the median is ~$50K vs. ~$25K in April.)
- We’re likely to fund a slightly greater percentage of applications. (16% - 23% vs. 14% in April; these percentages can be checked with the short sketch after this list.)
- We’ve recently changed parts of the fund’s infrastructure and composition, and it’s possible that these changes have caused us to unintentionally lower our standards for funding. My personal sense is that this isn’t the case; I think the increased spending reflects an increase in the number of quality applications submitted to us, as well as changing applicant salaries.
- If you were considering donating to the fund in the past but were unsure about its room for more funding, now could be a particularly impactful time to give. I don’t know if my perceived increase in quality applications will persist, but I no longer think it’s implausible for the fund to spend $4M - $8M this year while maintaining our previous bar for funding. This is up from my previous guess of $2M.
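For anyone who wants to verify the comparison figures above, here is a minimal arithmetic sketch in Python. The application and grant counts are the ones quoted in the bullets; the variable names are mine.

```python
# Sanity check of the round-over-round figures quoted above.
applications_now, applications_april = 129, 91
funded_low, funded_high = 20, 30
funded_april = 13

growth = (applications_now / applications_april - 1) * 100
print(f"Increase in applications: {growth:.0f}%")  # ~42%

rate_low = funded_low / applications_now * 100
rate_high = funded_high / applications_now * 100
rate_april = funded_april / applications_april * 100
print(f"Projected funding rate: {rate_low:.0f}% - {rate_high:.0f}%")  # ~16% - 23%
print(f"April 2019 funding rate: {rate_april:.0f}%")                  # ~14%
```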
Let's compare the Long-Term Future Fund evaluating the quality of a grant proposal with the academic community evaluating the quality of a published paper. The academic community has big advantages:
- The work is being evaluated retrospectively rather than prospectively (i.e. it actually exists; it isn't just a hypothetical project).
- The academic community has more time and more eyeballs.
- The academic community has people who are very senior in their field, whereas your team is relatively junior. Plus, "longtermism" is a huge area that's really hard to be an expert in all of.
Even so, the academic community doesn't seem very good at this task. "Sleeping beauty" papers, whose quality is only recognized long after publication, seem common. Breakthroughs are initially denounced by scientists, or simply underappreciated (often 'correctly', in the sense that they are less fleshed out than existing theories). This paper lists 34 examples of Nobel Prize-winning work being rejected by peer review. "Science advances one funeral at a time," as they say.
Problems compound when the question of first-order quality is replaced by the question of what others will consider high quality. You're funding researchers to do work that you believe others will consider good, and, it sounds like, you're doing so on the basis of relatively superficial assessments due to time limitations.
That seems like a recipe for herd behavior, but breakthroughs come from mavericks. This funding strategy could have a negative effect by stifling innovation, filtering contrarian thinking and contrarian researchers out of the field.
Keep longtermism weird?
(I'm also a little skeptical of your "low-quality work dilutes the quality of those fields and attracts other low-quality work" fear. Since high citation count is often treated as an ipso facto measure of quality in academia, it would seem that if work attracts additional related work, it is probably not low quality. I think the most likely fate of low-quality work is to be forgotten. And if people are too credulous of work that is actually low quality, it's unclear to me why the fund managers would be immune to this; having more contrarians seems like the best solution to me. The general approach of "fund many perspectives and let them determine what constitutes quality through discussion" also has the advantage of offloading work from the LTFF team.)