[[THIRD EDIT: Thanks so much for all of the questions and comments! There are still a few more I'd like to respond to, so I may circle back to them a bit later, but, due to time constraints, I'm otherwise finished up for now. Any further comments or replies to anything I've written are also still appreciated!]]
Hi!
I'm Ben Garfinkel, a researcher at the Future of Humanity Institute. I've worked on a mixture of topics in AI governance and in the somewhat nebulous area FHI calls "macrostrategy", including: the long-termist case for prioritizing work on AI, plausible near-term security issues associated with AI, surveillance and privacy issues, the balance between offense and defense, and the obvious impossibility of building machines that are larger than humans.
80,000 Hours recently released a long interview, recorded with Howie Lempel about a year ago, in which we walked through various long-termist arguments for prioritizing work on AI safety and AI governance relative to other cause areas. The longest and probably most interesting stretch explains why I no longer find the central argument in Superintelligence, and in related writing, very compelling. At the same time, I do continue to regard AI safety and AI governance as high-priority research areas.
(These two slide decks, which were linked in the show notes, give more condensed versions of my views: "Potential Existential Risks from Artificial Intelligence" and "Unpacking Classic Arguments for AI Risk." This piece of draft writing instead gives a less condensed version of my views on classic "fast takeoff" arguments.)
Although I'm most interested in questions related to AI risk and cause prioritization, feel free to ask me anything. I'm likely to eventually answer most questions that people post this week, on an as-yet-unspecified schedule. You should also feel free just to use this post as a place to talk about the podcast episode: there was a thread a few days ago suggesting this might be useful.
What are your thoughts on AI policy careers in government? I'm somewhat skeptical, for two main reasons:
1) It's not clear that governments will become leading actors in AI development; by default I expect this not to happen. Unlike with nuclear weapons, governments don't need to become experts in the technology to field AI-based weapons; they can just purchase them from contractors. Beyond military power, competition between nations is mostly economic. Insofar as AI is an input to this, governments have an incentive to invest in domestic AI firms over in-house government AI capabilities, because this is the more effective way to translate AI into GDP.
2) Government careers in AI policy also look compelling if the intersection of AI and war is crucial. But as you say in the interview, it's not clear that AI is the best lever for reducing existentially damaging war. And in the EA community, it seems like this argument was generated as an additional reason to work on AI, and wasn't the output of research trying to work out the best ways to reduce war.
Do you think the answer to this question should be a higher priority, especially given the growing number of EAs studying things like Security Studies in D.C.?
In brief, I do actually feel pretty positive about them.
Even if governments aren't doing a lot of important AI research "in house," and private actors continue to be the primary funders of AI R&D, we should expect governments to become much more active if really serious threats to security start to emerge. National governments are unlikely to be passive, for example, if safety/alignment failures become increasingly damaging -- or, especially, if existentially bad safety/alignment failures ever become clearly plausible. If any important institutions, design decisions, …