LTFF is running an Ask Us Anything! Most of the grantmakers at LTFF have agreed to set aside some time to answer questions on the Forum.
I (Linch) will make a soft commitment to answer one round of questions this coming Monday (September 4th) and another round the Friday after (September 8th).
We think that right now could be an unusually good time to donate. If you agree, you can donate to us here.
About the Fund
The Long-Term Future Fund aims to positively influence the long-term trajectory of civilization by making grants that address global catastrophic risks, especially potential risks from advanced artificial intelligence and pandemics. In addition, we seek to promote, implement, and advocate for longtermist ideas and to otherwise increase the likelihood that future generations will flourish.
In 2022, we disbursed ~250 grants worth ~$10 million. You can see our public grants database here.
Related posts
- LTFF and EAIF are unusually funding-constrained right now
- EA Funds organizational update: Open Philanthropy matching and distancing
- Long-Term Future Fund: April 2023 grant recommendations
- What Does a Marginal Grant at LTFF Look Like?
- Asya Bergal’s Reflections on my time on the Long-Term Future Fund
- Linch Zhang’s Select examples of adverse selection in longtermist grantmaking
About the Team
- Asya Bergal: Asya is the current chair of the Long-Term Future Fund. She also works as a Program Associate at Open Philanthropy. Previously, she worked as a researcher at AI Impacts and as a trader and software engineer for a crypto hedge fund. She has also written for the AI Alignment Newsletter and been a research fellow at the Centre for the Governance of AI at the Future of Humanity Institute (FHI). She has a BA in Computer Science and Engineering from MIT.
- Caleb Parikh: Caleb is the project lead of EA Funds. Caleb has previously worked on global priorities research as a research assistant at GPI, EA community building (as a contractor to the community health team at CEA), and global health policy.
- Linchuan Zhang: Linchuan (Linch) Zhang is a Senior Researcher at Rethink Priorities working on existential security research. Before joining RP, he worked on time-sensitive forecasting projects around COVID-19. Previously, he programmed for Impossible Foods and Google and has led several EA local groups.
- Oliver Habryka: Oliver runs Lightcone Infrastructure, whose main product is LessWrong. LessWrong has significantly influenced conversations around rationality and AGI risk, and its community is often credited with having recognized the importance of topics such as AGI (and AGI risk), COVID-19, existential risk, and crypto much earlier than other comparable communities.
You can find a list of our fund managers in our request for funding here.
Ask Us Anything
We’re happy to answer any questions: marginal uses of money, how we approach grants, questions/critiques/concerns you have in general, what reservations you have as a potential donor or applicant, etc.
There’s no real deadline for questions, but let’s say we have a soft commitment to focus on questions asked on or before September 8th.
Because we’re unusually funding-constrained right now, I’m going to shill again for donating to us.
If you have projects relevant to mitigating global catastrophic risks, you can also apply for funding here.
Just FWIW, this feels kind of unfair, given that if our grant volume hadn't increased by something like 5x over the past 1-2 years (and especially the last 8 months), we would probably be totally rocking it in terms of "the basics".
Like, yeah, the funding ecosystem is still recovering from a major shock, and it feels kind of unfair to judge LTFF's performance on the basis of such an unprecedented context. My guess is that things will settle into a healthy rhythm again once there is a new fund chair and the funding ecosystem settles into more of an equilibrium, and the basics will be better covered then.