The Long-Term Future Fund (LTFF) is one of the EA Funds. Between Friday Dec 4th and Monday Dec 7th, we'll be available to answer any questions you have about the fund – we look forward to hearing from all of you!
The LTFF aims to positively influence the long-term trajectory of civilization by making grants that address global catastrophic risks, especially potential risks from advanced artificial intelligence and pandemics. In addition, we seek to promote, implement, and advocate for longtermist ideas, and to otherwise increase the likelihood that future generations will flourish.
Grant recommendations are made by a team of volunteer Fund Managers: Matt Wage, Helen Toner, Oliver Habryka, Adam Gleave and Asya Bergal. We are also fortunate to be advised by Nick Beckstead and Nicole Ross. You can read our bios here. Jonas Vollmer, who is heading EA Funds, also provides occasional advice to the Fund.
You can read about how we choose grants here. Our previous grant decisions and the rationales behind them are described in our payout reports. We'd welcome discussion and questions regarding our grant decisions, but to keep discussion in one place, please post comments related to our most recent grant round in this post.
Please ask any questions you like about the fund, including but not limited to:
- Our grant evaluation process.
- Areas we are excited about funding.
- Coordination between donors.
- Our future plans.
- Any uncertainties or complaints you have about the fund. (You can also e-mail us at ealongtermfuture[at]gmail[dot]com for anything that should remain confidential.)
We'd also welcome more free-form discussion, such as:
- What should the goals of the fund be?
- What is the comparative advantage of the fund compared to other donors?
- Why would you/would you not donate to the fund?
- What, if any, goals should the fund have other than making high-impact grants? Examples could include: legibility to donors; holding grantees accountable; setting incentives; identifying and training grant-making talent.
- How would you like the fund to communicate with donors?
We look forward to hearing your questions and ideas!
(I'm not sure if this is the best place to ask this. I know the Q&A is over, but on balance I think it's better for EA discourse to ask this question publicly rather than privately, to see if others concur with this analysis, or if I'm trivially wrong for boring reasons and thus don't need a response.)
Open Phil's Grantmaking Approaches and Process describes the 50/40/10 rule, where (in my mediocre summarization) 50% of a grantmaker's grants must have the core stakeholders (Holden Karnofsky from Open Phil and Cari Tuna from Good Ventures) on board, 40% can be grants where Holden and Cari are not clearly on board but could imagine being on board if they knew more, and up to 10% can be more "discretionary."
Reading between the lines, this suggests that up to 10% of funding from Open Phil will go to places that Holden Karnofsky and Cari Tuna are not inside-view excited about, because they trust the grantmakers' judgements enough.
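Concretely, a rule like this can be read as a simple allocation constraint by dollar amount. Here's a minimal toy sketch of checking a portfolio against the 50/40/10 split (my own illustration, not Open Phil's actual process; the grant names, amounts, and bucket labels are all hypothetical):

```python
# Toy sketch of a 50/40/10-style check (my reading of the rule, not
# Open Phil's actual implementation). Each grant is bucketed by how
# the core stakeholders relate to it.
from collections import defaultdict

# Hypothetical buckets: "on_board" (stakeholders on board), "plausible"
# (not clearly on board, but could imagine being on board with more
# context), and "discretionary".
grants = [
    ("Grant A", 500_000, "on_board"),
    ("Grant B", 300_000, "plausible"),
    ("Grant C", 100_000, "discretionary"),
    ("Grant D", 100_000, "on_board"),
]

totals = defaultdict(float)
for _, amount, bucket in grants:
    totals[bucket] += amount
total = sum(totals.values())

# The rule as I read it: at least 50% on board, at most 40% plausible,
# at most 10% discretionary (all as shares of total funding).
assert totals["on_board"] / total >= 0.50
assert totals["plausible"] / total <= 0.40
assert totals["discretionary"] / total <= 0.10
print({bucket: round(v / total, 2) for bucket, v in totals.items()})
# -> {'on_board': 0.6, 'plausible': 0.3, 'discretionary': 0.1}
```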
Is there a similar (explicit or implicit) process at LTFF?
I ask because
To be clear, I think this is not my all-things-considered position. Rather, I think this is a fairly significant possibility, and I'd favor an analogue of Open Phil's 50/40/10 rule (or something a little more aggressive) over, e.g., whatever the socially mediated equivalent of full discretionary control by the specific funders would be.
...