[EDIT: Thanks for the questions, everyone! Just noting that I'm mostly done answering questions, and there were a few that came in Tuesday night or later that I probably won't get to.]
Hi everyone! I’m Ajeya, and I’ll be doing an Ask Me Anything here. I’ll plan to start answering questions Monday Feb 1 at 10 AM Pacific. I will be blocking off much of Monday and Tuesday for question-answering, and may continue to answer a few more questions through the week if there are ones left, though I might not get to everything.
About me: I’m a Senior Research Analyst at Open Philanthropy, where I focus on cause prioritization and AI. 80,000 Hours released a podcast episode with me last week discussing some of my work, and last September I put out a draft report on AI timelines which is discussed in the podcast. Currently, I’m trying to think about AI threat models and how much x-risk reduction we could expect the “last long-termist dollar” to buy. I joined Open Phil in the summer of 2016, and before that I was a student at UC Berkeley, where I studied computer science, co-ran the Effective Altruists of Berkeley student group, and taught a student-run course on EA.
I’m most excited about answering questions related to AI timelines, AI risk more broadly, and cause prioritization, but feel free to ask me anything!
Hi Ajeya, thank you for publishing such a massive and detailed report on timelines!! Like other commenters here, I use it as my go-to reference. Allowing users to adjust the parameters of your model is very helpful for picking out built-in assumptions and for updating predictions as new developments are made.
In your report you mention that you discount the aggressive timelines in part due to lack of major economic applications of AI so far. I have a few questions along those lines.
Do you think TAI will necessarily be foreshadowed by incremental economic gains? If so, why? I personally don't see the lack of such applications as a significant signal, because the cost and inertia of deploying AI for massive economic benefit are debilitating compared to the current rate of research progress on AI capabilities. For example, I would expect that if a model like GPT-3 had existed for 50 years and was already integrated with the economy, it would be ubiquitous in writing-based jobs and would provide massive productivity gains. From where we are now, however, it seems likely that several generations of more powerful successors will be developed before the hypothetical benefits of GPT-3 are realized.
If a company like OpenAI heavily invested in productizing their new API (or DeepMind their Alphafold models) and signaled that they saw it as key to the company's success, would you update your opinion more towards aggressive timelines? Or would you see this as delaying research progress because of the time spent on deployment work?
More generally, how do you see (corporate) groups reorienting (if at all) as capabilities progress and we get close to TAI? Do you expect research to slow broadly as current theoretical, capabilities-driven work is replaced by implementation and deployment of existing methods? Do you see investment in alignment research increasing, including possibly an intentional reduction of pure capabilities work towards safer methods? On the other end of the spectrum, do you see an arms race as likely?
Finally, have you talked much to people outside the alignment/effective altruism communities about your report? How have reactions varied by background? Are you reluctant to publish work like this broadly? If so, why? Do you see risks of increasing awareness of these issues pushing unsafe capabilities work?
Apologies for the number of questions! Feel free to answer whichever are most interesting to you.
To clarify, we are planning to seek more feedback from people outside the EA community on our views about TAI timelines, but we're seeing that as a separate project from this report (and may gather feedback from outside the EA community without necessarily publicizing the report more widely).