Hi all - we’re the management team for the Long-Term Future Fund. This post hosts the AMA where you can ask us about our grantmaking, as Marek announced yesterday.
We recently made this set of grants (our first since starting to manage the fund), and are planning another set in February 2019. We are keen to hear from donors and potential donors about what kind of grant making you are excited about us doing, what concerns you may have, and anything in between.
Please feel free to start posting your questions from now. We will be available here and actively answering questions between roughly 2pm and 6pm PT (with some breaks) on December 20th.
Please ask different questions in separate comments, for discussion threading.
edit: Exciting news! The EA Foundation has just told us that donations to the Long-Term Future Fund are eligible for the matching drive they're currently running. See the link for details on how to get your donation matched.
edit 2: The "official" portion of the AMA has now concluded, but feel free to post more questions; we may be able to respond to them over the coming week or two. Thanks for participating!
I expect different people on the fund will have quite different answers to this, so here is my perspective:
I don’t expect to score projects or applications on any straightforward rubric, any more than a startup VC should do so for the companies they invest in. Obviously, things like general competence, past track record, a clear value proposition, and neglectedness matter, but by and large I mostly expect to recommend grants based on my models of what is globally important, and on my expectation of whether the plan the grantee proposed will actually work: something I guess you could call “model-driven grantmaking.”
What this means in practice is that I expect the things I look for in a potential grantee to differ quite a bit depending on what precisely they are planning to do with the resources. I expect there will be many applicants who display strong competence and rationality, but who are running on assumptions I don’t share, or who are trying to solve problems I don’t think are important, and I don’t plan to make a recommendation unless my personal models predict that the plan the grantee is pursuing will actually work. This obviously means I will have to invest significant time and resources into understanding what grantees are trying to achieve, which I am currently planning to make room for.
I can imagine some exceptions to this, though. I think we will run across potential grantees who are asking for money mostly to increase their own slack, and who have a past track record of doing valuable work. I am quite open to grants like this, think they can be quite valuable, and expect to give out multiple grants in this space (barring logistical problems with doing so). In that case, I expect to mostly ask myself whether additional slack and freedom would make a large difference to that person’s output, which I expect will again differ quite a bit from person to person.
One other type of grant I am open to is rewards for past impact. I think rewarding people for past good deeds is quite important for setting up long-term incentives, and evaluating whether an intervention had a positive impact is obviously much easier after a project is completed than before. In this case I again expect to rely heavily on my personal models of whether the completed project had a significant positive impact, and will base my recommendation on that estimate.
I think this approach will sadly make it harder for potential grantees to predict whether I am likely to recommend them for a grant, but I think it is less likely to give rise to various Goodharting and prestige-optimization problems, and will allow me to make much more targeted grants than a more rubric-driven alternative would. It’s also really the only approach that I expect will teach me which interventions work and which don’t in the long run, by exposing my models to the real world and seeing whether my concrete predictions of how various projects will go come true.