Hi all - we’re the management team for the Long-Term Future Fund. As Marek announced yesterday, this post hosts the AMA where you can ask us about our grant making.
We recently made this set of grants (our first since starting to manage the fund), and are planning another set in February 2019. We are keen to hear from donors and potential donors about what kinds of grant making you would be excited to see us do, what concerns you may have, and anything in between.
Please feel free to start posting your questions now. We will be available here and actively answering questions between roughly 2pm and 6pm PT (with some breaks) on December 20th.
Please ask different questions in separate comments, for discussion threading.
edit: Exciting news! The EA Foundation has just told us that donations to the Long-Term Future Fund are eligible for the matching drive they're currently running. See the link for details on how to get your donation matched.
edit 2: The "official" portion of the AMA has now concluded, but feel free to post more questions; we may be able to respond to them over the coming week or two. Thanks for participating!
I have a bunch of thoughts, but find it hard to express them without a specific prompt. In general, I find a lot of AI Alignment research valuable, since it helps me evaluate other AI Alignment research, though I realize that's somewhat circular. I haven’t found most broad cause-prioritization research particularly useful to me, but would probably find research into better decision making, as well as the history of science, useful for helping me make better decisions (i.e. rationality research).
I’ve found Larks' recent AI Alignment literature and organization review quite useful, so more of that seems great. I’ve also found some of Shahar Avin’s thoughts on scientific funding interesting, but don’t really know whether they're useful. I generally think a lot of Bostrom’s writing has been very useful to me, so more of that type seems good, though I am not sure how well others can do the same.
Not sure how useful this is or how much this answers your question. Happy to give concrete comments on any specific research direction you might be interested in getting my thoughts on.