[EDIT: Thanks for the questions everyone! Just noting that I'm mostly done answering questions, and there were a few that came in Tuesday night or later that I probably won't get to.]
Hi everyone! I’m Ajeya, and I’ll be doing an Ask Me Anything here. I plan to start answering questions Monday Feb 1 at 10 AM Pacific. I will be blocking off much of Monday and Tuesday for question-answering, and may continue answering through the week if questions are left over, though I might not get to everything.
About me: I’m a Senior Research Analyst at Open Philanthropy, where I focus on cause prioritization and AI. 80,000 Hours released a podcast episode with me last week discussing some of my work, and last September I put out a draft report on AI timelines which is discussed in the podcast. Currently, I’m trying to think about AI threat models and how much x-risk reduction we could expect the “last long-termist dollar” to buy. I joined Open Phil in the summer of 2016, and before that I was a student at UC Berkeley, where I studied computer science, co-ran the Effective Altruists of Berkeley student group, and taught a student-run course on EA.
I’m most excited about answering questions related to AI timelines, AI risk more broadly, and cause prioritization, but feel free to ask me anything!
I'm curious about your take on prioritizing between science funding and other causes. In the 80k interview you said:
My question: Is funding basic science less of a priority because there are compelling reasons to deprioritize funding more projects there generally, because there is less organizational comparative advantage (or not enough expertise yet), or something else?
Decisions about the size of the basic science budget are made within the "near-termist" worldview bucket, since we see the primary case for this funding as the potential for scientific breakthroughs to improve health and welfare over the next several decades; I'm not involved with that since my research focus is on cause prioritization within the "long-termist" worldview.
In terms of high-level principles, the decision would be made by comparing an estimate of the value of marginal science funding against an estimate of the value of the near-termist "last dollar."