Next week for The 80,000 Hours Podcast I'm interviewing Ajeya Cotra, senior researcher at Open Philanthropy, AI timelines expert, and author of "Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover".
What should I ask her?
My first interview with her is here:
Some of Ajeya's work includes:
Maybe even that isn't thinking big enough. At some point, with enough funding, it could be possible to buy up and retire most existing AGI capabilities projects, at least in the West. The rest of the world might then largely follow suit (as has happened with, e.g., global conformity on bioethics). On a smaller scale, there is the precedent of companies buying up and curtailing electric vehicle technology, which perhaps set that field back a decade. And EAs have discussed related ideas, like buying up coal mines to limit climate change.
What level of spending would be needed? $1T? Would it be possible for the EA community to accumulate this much wealth in the next 5-10 years, without relying on profits from AI capabilities?
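For a rough sense of what that accumulation question implies, here is a minimal back-of-the-envelope sketch in Python. The ~$30B starting figure for EA-aligned capital is a placeholder assumption, not a sourced estimate; the point is just the compound growth rate a $1T target would require.

```python
# Back-of-the-envelope: compound annual growth rate needed to grow a
# starting pool of capital to a $1T target over a given horizon.
# The $30B starting figure below is a rough placeholder assumption.

def required_annual_growth(start: float, target: float, years: int) -> float:
    """Compound annual growth rate needed to grow `start` to `target` in `years`."""
    return (target / start) ** (1 / years) - 1

START = 30e9    # assumed current EA-aligned capital, ~$30B (placeholder)
TARGET = 1e12   # $1T

for years in (5, 10):
    rate = required_annual_growth(START, TARGET, years)
    print(f"{years} years: {rate:.0%} per year")

# Output:
# 5 years: 102% per year
# 10 years: 42% per year
```

Under these assumptions, hitting $1T in 5 years would require roughly doubling the community's capital every year, and even the 10-year horizon implies sustained ~42% annual growth, which suggests why the question of where that wealth could come from (without AI capabilities profits) matters.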
How much of an effect would it have? Could it buy us a few more years, or would new orgs immediately fill the gap? Would it be possible to pay existing AI capabilities researchers to sign legally binding contracts not to work on similar projects (at least for a set period of time)?