Hello Effective Altruism Forum, I am Nate Soares, and I will be here to answer your questions tomorrow, Thursday the 11th of June, 15:00-18:00 US Pacific time. You can post questions here in the interim.
Last week Monday, I took the reins as executive director of the Machine Intelligence Research Institute. MIRI focuses on studying technical problems of long-term AI safety. I'm happy to chat about what that means, why it's important, why we think we can make a difference now, what the open technical problems are, how we approach them, and some of my plans for the future.
I'm also happy to answer questions about my personal history and how I got here, or about personal growth and mindhacking (a subject I touch upon frequently in my blog, Minding Our Way), or about whatever else piques your curiosity. This is an AMA, after all!
EDIT (15:00): All right, I'm here. Dang there are a lot of questions! Let's get this started :-)
EDIT (18:00): Ok, that's a wrap. Thanks, everyone! Those were great questions.
Hey Nate, congratulations! I think we briefly met in the office in February when I asked Luke about his plans; now it turns out I should have been quizzing you instead!
I have a huge list of questions; basically the same list I asked Seth Baum, actually. Feel free to answer as many or as few as you want. Apologies if you've already written on the subject elsewhere; feel free to just link if so.
What are your current marginal projects? How much will they cost, and what's the expected output (if they get funded)?
What is the biggest mistake you've made?
What is the biggest mistake you think others make?
What is the biggest thing you've changed your mind about recently? (say past year)
How do you balance the likelihood/risks of UFAI? E.g., for what p would you prefer a p chance of FAI and a 1-p chance of UFAI over a guarantee of mankind continuing in an AGI-less fashion? (Does this make sense in your current ontology?)
What's your probability distribution for AGI timescale?
Do you have any major disagreements with Eliezer or Luke about 1) expectations for the future 2) strategy?
What do you think about the costs and benefits of publishing in journals as a strategy?
Do you think the world has become better or worse over time? How? Why?
Do you think the world has become more or less at risk over time? How? Why?
What do you think about value drift?
What do you think will be the impact of the Elon Musk money?
How do you think about weighing future value vs current value?
Personal question, feel free to disregard, but this is an AMA:
How has concern about AI affected your personal life, beyond the obvious? Has it affected your retirement savings? Do you plan to have children, or do you already have them?
Hey Larks, that's a huge set of questions. It might be helpful to take some themed bundles of questions from here and split them off into their own comments, so that others can upvote and read the questions according to their interest.