The Long-Term Future Fund (LTFF) is one of the EA Funds. Between Friday Dec 4th and Monday Dec 7th, we'll be available to answer any questions you have about the fund – we look forward to hearing from all of you!
The LTFF aims to positively influence the long-term trajectory of civilization by making grants that address global catastrophic risks, especially potential risks from advanced artificial intelligence and pandemics. In addition, we seek to promote, implement, and advocate for longtermist ideas, and to otherwise increase the likelihood that future generations will flourish.
Grant recommendations are made by a team of volunteer Fund Managers: Matt Wage, Helen Toner, Oliver Habryka, Adam Gleave and Asya Bergal. We are also fortunate to be advised by Nick Beckstead and Nicole Ross. You can read our bios here. Jonas Vollmer, who is heading EA Funds, also provides occasional advice to the Fund.
You can read about how we choose grants here. Our previous grant decisions and the rationales behind them are described in our payout reports. We'd welcome discussion and questions regarding our grant decisions, but to keep discussion in one place, please post comments related to our most recent grant round in this post.
Please ask any questions you like about the fund, including but not limited to:
- Our grant evaluation process.
- Areas we are excited about funding.
- Coordination between donors.
- Our future plans.
- Any uncertainties or complaints you have about the fund. (You can also e-mail us at ealongtermfuture[at]gmail[dot]com for anything that should remain confidential.)
We'd also welcome more free-form discussion, such as:
- What should the goals of the fund be?
- What is the comparative advantage of the fund compared to other donors?
- Why would you/would you not donate to the fund?
- What, if any, goals should the fund have other than making high-impact grants? Examples could include: legibility to donors; holding grantees accountable; setting incentives; identifying and training grant-making talent.
- How would you like the fund to communicate with donors?
We look forward to hearing your questions and ideas!
Of course there are lots of things we would not want to (or cannot) fund, so I'll focus on things which I would not want to fund, but which someone reading this might have been interested in supporting or applying for.
Organisations or individuals seeking influence, unless they have a clear plan for how to use that influence to improve the long-term future, or I have an exceptionally high level of trust in them.
This comes up surprisingly often. A lot of think-tanks and academic centers fall into this trap by default. A major way in which non-profits sustain themselves is by dealing in prestige: universities selling naming rights being a canonical example. It's also pretty easy to justify to oneself: of course you have to make this one sacrifice of your principles, so you can do more good later, etc.
I'm torn on this because gaining leverage can be a good strategy, and indeed it's hard to see how we'll solve some major problems without individuals or organisations pursuing it. So I wouldn't necessarily discourage people from pursuing this path, though you might want to think hard about whether you'll be able to avoid value drift. But there's a big information asymmetry as a donor: if someone is seeking support for something that isn't directly useful now, with the promise of doing something useful later, it's hard to know if they'll follow through on that.
Movement building that increases quantity but reduces quality or diversity. The initial composition of a community has a big effect on its long-term composition: people tend to recruit people like themselves. The long-termist community is still relatively small, so we can have a substantial effect on the current (and therefore long-term) composition now.
So when I'm deciding whether to fund a movement building intervention, I don't just ask whether it'll attract enough good people to be worth the cost, but also whether the intervention is sufficiently targeted. This is a bit counterintuitive, and certainly in the past (e.g. when I was running student groups) I tended to assume that bigger was always better.
That said, the details really matter here. For example, AI risk is already in the public consciousness, but most people have only been exposed to terrible low-quality articles about it. So I like Robert Miles' YouTube channel, since it offers a vastly better explanation of AI risk than most people will have come across. I still think most of the value will come from the small percentage of people who seriously engage with it, but I expect it to be positive, or at least neutral, for the vast majority of viewers.