I have an idea for an organisation that I think ought to exist.
I'd welcome feedback on this from the community in particular:
- Is the concept sound? How could it be better/what's missing?
- Is it needed? Are there other organisations already doing this that I'm not aware of?
I am not yet sure whether it is worth investing time in this, so would love candid feedback from people who know more than me. I would like to contribute something useful, but I don't know if I am barking up the wrong tree here!
Sorry if any of this is blindingly obvious - it's very much a half-baked idea at the moment, and I may simply be going over ground that is already well covered.
0. The Context
Predicted timelines for Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI) have shortened considerably in recent years, with experts bringing their forecasts forward as AI capabilities continue to advance.
Forecasts from industry leaders now cluster around 2028, with most of their estimates falling between 2025 and 2035; academic surveys remain considerably more conservative.
Industry Leaders (typically predicting 2025-2030):
• Sam Altman (OpenAI): Stated in 2025 that “we know how to build AGI” and has suggested arrival by 2026
• Demis Hassabis (Google DeepMind): Predicts AGI by 2030
• Dario Amodei (Anthropic): Expects “human-level AI” within 2-3 years, suggesting 2026-2027
• Shane Legg (DeepMind): Maintains his prediction of 50% probability by 2028
Academic Researchers (typically predicting 2040-2060):
• Most surveys indicate a 50% probability of achieving AGI between 2040 and 2061, with some estimating that superintelligence could follow within a few decades.
Artificial Superintelligence (ASI) Predictions:
• ASI predictions are inherently more speculative, but researchers who address the topic generally suggest ASI will follow AGI within a couple of years.
1. The Problem: AI Governance is Fragmented, Slow, and Captured
AI systems are becoming increasingly influential, shaping markets, societies, and global security dynamics.
The stakes are high, and the risks extensive. Malicious actors could weaponise AGI to design novel bioweapons, automate cyberwarfare, or conduct large-scale psychological operations. The same systems could undermine global stability by accelerating WMD development, enabling authoritarian surveillance regimes, or triggering mass economic dislocation through the rapid displacement of human labour. A comprehensive assessment of AI risk would run to many pages.
Yet the institutions responsible for coordinating the development, deployment, and oversight of these systems are underdeveloped. There is no shared coordination substrate to translate emerging norms into common practice. Instead of a global governance infrastructure, we have a patchwork of PDFs, speeches, and closed-door agreements.
This situation incentivises race dynamics, obstructs trust-building, and blocks the emergence of governance mechanisms.
2. The Solution: The AI Coordination Forum
The AI Coordination Forum (AICF) will serve as a supranational, independent institution to coordinate AI governance between states, labs, and civil society.
AICF is not a think tank, not a lobbying group, and not an advocacy NGO. It is a coordination layer. Its work is not driven by ideology or regulation. It is driven by the need to make distributed actors legible to one another and to allow credible commitments around frontier AI systems to take form. It will design, maintain, and steward protocols, registries, and disclosure mechanisms that promote transparency, safety, and interoperability across jurisdictions and technical ecosystems.
AICF exists to support the safe deployment of advanced AI by:
- Facilitating cooperation between frontier labs, governments, civil society, and regulators
- Creating non-binding but widely adopted standards and registries for model transparency, evaluation, and risk disclosure
- Supporting the capacity of lower-resourced states and institutions to engage in frontier AI governance
- Bridging legal, technical, and operational gaps in global AI safety infrastructure
AICF is not a regulator, nor a lobbying body. It is an independent, neutral platform for technical policy coordination.
The Forum’s role is to fill the institutional void where no single actor can lead, and where centralisation would break trust.
3. What AICF Will Do: Starting Objectives (Ideas)
1. Launch and maintain the Safety Disclosure Protocol (SDP)
Why: This sets immediate norms for safe deployment and creates de facto standards that can influence both open-source developers and leading labs. If widely adopted, it shapes the release landscape for frontier models.
Impact: Medium-term containment of misuse risk; long-term scaffolding for international regulation.
Feasibility: High, if pitched as a collaborative and flexible standard.
Critical path: Adoption by 2–3 influential actors (e.g. Anthropic, Mistral, policy bodies). A rough sketch of what a machine-readable disclosure record could look like is included after this list.
2. Run red-team hackathons and evaluations on frontier models
Why: Provides actionable insights into model capabilities and misuse vectors. Forces labs to confront risks and correct failure modes pre-deployment. Helps the public and policymakers calibrate their threat models.
Impact: Direct contribution to model alignment, responsible disclosure, and hardening.
Feasibility: High, especially with academic and civil society collaboration.
Critical path: Access to frontier models under controlled conditions.
3. Track AGI-relevant compute and training runs across jurisdictions
Why: No one has a credible, open-source map of global compute trends. Surveillance of this space enables early warning and accountability. (A sketch of a possible registry entry is also included after this list.)
Impact: High, particularly if used to detect race dynamics or rogue development.
Feasibility: Medium to low, due to data opacity and cooperation requirements.
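To make objective 1 slightly more concrete: as I currently imagine it, the SDP would be a voluntary, standardised format for disclosing a model's capabilities, evaluations, and known risks at release. Below is a minimal, purely illustrative sketch (in Python, with field names I have invented - nothing here is an existing standard) of what a single machine-readable disclosure record might contain:

```python
from dataclasses import dataclass, field

# Purely illustrative sketch of what an SDP-style disclosure record might carry.
# Every field name here is a placeholder of my own, not an existing standard.
@dataclass
class SafetyDisclosure:
    model_id: str                  # e.g. "example-lab/frontier-model-v1"
    developer: str                 # organisation releasing the model
    release_type: str              # "api", "open-weights", "internal-only", ...
    training_compute_flop: float   # self-reported estimate of training compute
    known_risk_areas: list[str] = field(default_factory=list)    # e.g. ["cyber", "bio-uplift"]
    eval_summaries: dict[str, str] = field(default_factory=dict) # benchmark / red-team results
    mitigations: list[str] = field(default_factory=list)         # safeguards in place at release
    disclosure_contact: str = ""   # responsible-disclosure contact point

# A public registry would then simply be a versioned collection of such records
# that labs, regulators, and researchers can query in a common format.
```

The only point of the sketch is that disclosures could live as structured, queryable data rather than as PDFs and press releases.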
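Similarly, for objective 3 - again purely as an illustration, with an invented reporting threshold and invented field names - a public compute-tracking registry might store entries like the following and flag training runs above some disclosure threshold:

```python
from dataclasses import dataclass

# Hypothetical entry in a public registry of large training runs. The threshold
# and fields below are assumptions for illustration, not existing rules or data.
REPORTING_THRESHOLD_FLOP = 1e25  # assumed disclosure threshold, not a real rule

@dataclass
class TrainingRunRecord:
    run_id: str            # registry-assigned identifier
    operator: str          # lab or compute provider running the job
    jurisdiction: str      # where the compute is physically located
    estimated_flop: float  # self-reported or independently estimated training compute
    start_date: str        # ISO 8601 date, e.g. "2026-01-15"

def requires_disclosure(record: TrainingRunRecord) -> bool:
    """Flag runs above the (assumed) threshold for early-warning review."""
    return record.estimated_flop >= REPORTING_THRESHOLD_FLOP
```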
4. Conclusion
The AI Coordination Forum is designed to fill a vacuum: a supranational, agile, and effective body capable of navigating the fragmented global landscape of AI governance. While existing institutions move slowly and national strategies remain parochial, AICF would establish early legitimacy by creating useful coordination primitives, high-trust dialogues, and functional multistakeholder processes. It is a bet on institutional innovation as a lever for long-term global stability and a future in which advanced AI is governed wisely.
AICF can become the coordination infrastructure that enables others to speak the same language, share credible information, and act responsibly across borders.
If you think this idea is useful and would like to chat with me about potentially working on it, feel free to comment or DM me.
Also I hope I have followed all the forum rules - I did read them in advance but this is my first post!