Hi all - we’re the management team for the Long-Term Future Fund. As Marek announced yesterday, this post is hosting our AMA, where you can ask us about our grantmaking.
We recently made this set of grants (our first since starting to manage the fund), and are planning another round in February 2019. We’re keen to hear from donors and potential donors about what kinds of grantmaking you’d be excited to see from us, what concerns you may have, and anything in between.
Please feel free to start posting your questions now. We will be available here, actively answering questions, between roughly 2pm and 6pm PT (with some breaks) on December 20th.
Please ask different questions in separate comments, for discussion threading.
edit: Exciting news! The EA Foundation has just told us that donations to the Long-Term Future Fund are eligible for the matching drive they're currently running. See the link for details on how to get your donation matched.
edit 2: The "official" portion of the AMA has now concluded, but feel free to post more questions; we may be able to respond to them over the coming week or two. Thanks for participating!
We’re absolutely open to (and all of us are interested in) funding work on catastrophic risks other than artificial intelligence. The fund is the Long-Term Future Fund, and we believe catastrophic risks in general are highly relevant to the long-term future.
Trying to infer the motivation for the question, I can add that in my own modelling, getting AGI right seems highly important and is the thing I’m most worried about. But I’m far from certain that another of the catastrophic risks we face won’t prove catastrophic enough to threaten our existence, or to delay progress toward AGI until civilisation recovers. I expect the fund will make grants to non-AGI risk reduction projects.
If the motivation for the question is more about how we will judge non-AI projects, see Habryka’s response for a general discussion of project evaluation.