AI systems require centralized infrastructure that is also vulnerable. This creates a dangerous combination: increasing concentration of power alongside increasing fragility.
Selectorate theory[1] predicts that as AI shrinks the winning coalition a leader needs (fewer soldiers, bureaucrats, and workers whose loyalty must be bought), political systems will reorganize to privilege ever-smaller empowered groups and disenfranchise everyone else. This doesn't happen gradually; it accelerates, as stakeholders race to consolidate power before the infrastructure's vulnerability becomes critical.
The effective altruism community talks carefully about AI extinction risk, but rarely addresses the political-economy dynamics that make certain outcomes more likely than others. Will MacAskill's recent post[2] gestures at this obliquely. But the logic is straightforward:
If AI infrastructure is a soft target, and if power is concentrating around that infrastructure, then the infrastructure itself becomes a focal point for global conflict. Not because anyone wants violence, but because the structural incentives make conflict increasingly likely.
This suggests our timeline for collective action is shorter than AI capability timelines alone would indicate. We're not just racing against AGI development; we're also racing against the political instability that the AI race itself creates.
I don't have a solution. I'm writing this because bearing witness to a structural problem is worth doing even when solutions aren't clear. If others see these dynamics and have responses I haven't considered, I want to hear them.
The question isn't whether learning techniques could help humans keep up with AI. The question is whether we have the political stability to reach a future where that matters.