AI governance is ignoring hard-won climate lessons on systemic risk, lock-in, and precaution, and it is doing so at greater speed, with a narrower window to act.
For roughly three decades, climate governance has been our most sustained real-world experiment in managing a slow-moving, civilisation-scale risk. Not because it has worked especially well, but because it has forced institutions to confront something genuinely hard: acting when harms are unevenly distributed across time and geography, and when feedback from decision-making arrives only after the damage has already begun.
What strikes me is how little this accumulated experience informs contemporary AI safety debates. The two communities are remarkably siloed, and given that AI is advancing far faster than climate change ever did, that seems like a problem.
Read my article over at Kairos.fm.
https://kairos.fm/what-ai-governance-can-learn-from-climate/

Working on climate with an interest in AI, I found this a fascinating read.
But I'm left a bit wanting: what are the learnable lessons that would help the AI community act better than the climate community did?
Could you articulate this?
(I think a lot of the parallels you cite are real, but I don't think they offer many actionable implications; they feel more like negative updates on how hard it is to act wisely on fast-moving coordination problems with deep uncertainty and heavy politicization.)