Firstly, it’s encouraging that AI is being discussed as a threat at the highest international body dedicated to ensuring global peace and security. This seemed like a remote possibility just 4 months ago.
However, throughout the meeting, (possibly near-term) extinction risk from uncontrollable superintelligent AI was the elephant in the room. It got perhaps ~1% of the air time, when it needs to be ~99%, given the venue and its power to stop it. Let’s hope future meetings improve on this. Ultimately, we need the UNSC to put together a global non-proliferation treaty on AGI if we are to stand a reasonable chance of making it out of this decade alive.
There was plenty of mention of using AI for peacekeeping. However, this seems naive in light of the offence-defence asymmetry facilitated by generative AI (especially when it comes to threats like bio-terror/engineered pandemics and cybercrime/warfare). And in the limit of outsourcing intelligence gathering and strategy recommendations to AI (whilst still keeping a human in the loop), you get scenarios like this.
Highlights:
China mentioned Pause: “The international community needs to… ensure that risks beyond human control don’t occur… We need to strengthen the detection and evaluation of the entire lifecycle of AI, ensuring that mankind has the ability to press the pause button at critical moments”. (Zhang Jun, representing China at the UN Security Council meeting on AI)
Mozambique mentioned the Sorcerer's Apprentice, human loss of control, recursive self-improvement, accidents, and catastrophic and existential risk: “In the event that credible evidence emerges indicating that AI poses an existential risk, it’s crucial to negotiate an intergovernmental treaty to govern and monitor its use.” (Manuel Gonçalves, Deputy Minister for Foreign Affairs of Mozambique, at the UN Security Council meeting on AI)
(A bunch of us protesting about this outside the UK Foreign Office last week.)
(PauseAI's comments on the meeting on Twitter.)
(Discussion with Jack Clark on Twitter re his lack of mention of x-risk. Note that the post-war atomic settlement, the Baruch Plan, would probably have been quite different if the first nuclear detonation had been assessed to have a significant chance of igniting the entire atmosphere!)
(My Tweet version of this post. I'm Tweeting more as I think it's time for mass public engagement on AGI x-risk.)
Greg - thanks for this helpful overview of the UN meeting on AI.
Interesting that Mozambique seems savvier about AI X-risk than many bigger countries.
I suspect that there's a potential narrative that could be developed (e.g. by the AI Pause community, or the AGI moratorium community) that runaway AGI research involves big rich countries -- especially the US -- imposing extinction risk on smaller, poorer countries. It's yet another example of rich-country hubris, or a kind of 'X-risk colonialism', where the key AI countries are charging ahead, doing their thing, imposing huge 'risk externalities' on other countries, civilizations, and cultures without their consent.
It's also striking that when AI industry advocates talk about the benefits of AI, they generally focus on US-centric issues such as promoting longevity, advanced-country prosperity, automation, space colonization, etc., rather than addressing the kinds of issues that poorer countries might care more about -- e.g. promoting the rule of law, property rights, stable currencies, public health, basic education, government integrity, etc. So, if I were a bright young person living in Brazil, Nigeria, India, or Morocco, the AI industry would seem like it's trying to solve first-world problems while imposing huge and scary risks on my people and my country.
I suspect that this 'AI X-risk neo-colonialism' narrative would be difficult for the AI industry to deal with, since so many of the AI leaders and researchers seem to be living in a Bay Area culture bubble that gives little thought to the risks (and benefits) they're imposing on the 96% of humans who don't live in the U.S.
trevor - I'm not sure I follow your point here. Could you expand on it a little bit? Thanks!