TL;DR

Even if AGI doesn’t arrive this decade, the “grand challenges” outlined in Preparing for the Intelligence Explosion by Will MacAskill and Finn Moorhouse will emerge in weaker but still dangerous forms.

If we fail to govern them, we increase the probability that catastrophic or existential risks from AGI will manifest, by destabilising societies, entrenching bad values, and eroding our collective capacity for safe decision-making.

The “Grand challenges”

In Preparing for the Intelligence Explosion, William MacAskill and Finn Moorhouse ask how society could prepare for a rapid, self-reinforcing increase in machine intelligence: a scenario where artificial general intelligence (AGI) triggers transformational economic, social, and political change.

They identify a set of “grand challenges”, high-level strategic and governance problems humanity would need to solve to manage such an event safely. These include controlling destructive technologies, preventing power concentration and value lock-in, governing autonomous AI agents, maintaining epistemic integrity, managing abundance, and preparing for unknown unknowns.

The authors emphasise that an “intelligence explosion” would not just pose a single risk but multiply existing ones, interacting with warfare, inequality, governance, and moral progress. They conclude that the key task is institutional preparedness: building systems of coordination, trust, and foresight robust enough to navigate transformative change.

My argument takes that same list of challenges but asks what they mean if AGI doesn’t arrive soon, i.e. not until 2030 or beyond. Even without superintelligence, some of these pressures are already visible at the current level of AI technology.

What if AGI doesn’t happen soon?

Many forecasts within the AI, AI safety, and EA communities anticipate continued rapid progress in AI through the 2020s. The current Metaculus forecast places the median prediction for AGI around July 2033, with a long-tailed distribution extending much further into the future.

Of course, not everyone agrees with these near-term horizons. James Fodor’s Why I Am Still Skeptical About AGI by 2030 critiques the 3–5 year timelines discussed in Preparing for the Intelligence Explosion as overly reliant on simple extrapolations of scaling laws. But as he says, “The coming few years will undoubtedly see continued progress and ongoing adoption of LLMs in various economic sectors”.

However, I don’t want those of us with longer timelines to miss a central point: even if AGI doesn't arrive within 3–5 years, current and near-future AI systems will still reshape governance, economics, and security. Many of the grand challenges outlined in that paper remain pressing, and its actionable insights are still relevant. Importantly, if we fail to manage these challenges, we increase the chance of existential risk materialising when AGI does arrive.

The authors themselves acknowledge this uncertainty, urging us to “Prepare despite uncertainty.”  Here I think there are many areas of common ground with those who expect shorter timelines. Whatever our forecasts, we converge on many of the same methods for mitigating risk and improving institutional readiness.

What I mean by Non-AGI

I use the term to describe steady, cumulative advances in current AI systems (language models, multimodal systems, and agentic frameworks) that continue to improve but never reach full general intelligence. This is consistent with the definition recently proposed by Dan Hendrycks et al. (2025): systems that demonstrate task-specific competence, learning, and adaptation without exhibiting general cognitive scope or autonomy.

This definition covers two categories under discussion here:

  1. Current AI – The systems we already have. These are increasing automation and transforming productivity. They affect how organisations think, write, analyse, and communicate today.
     
  2. Near-future non-AGI – The next 10 to 30 years of advancement if AGI does not emerge. These systems could exceed human performance in critical domains such as engineering, science, logistics, intelligence analysis, and design, but without becoming generally intelligent. They will be faster, cheaper, and more agentic, but still bounded.

From my own experience leading AI automation projects in an engineering consultancy, I can already see these trends in action. We are building automated pipelines for traditionally high-skilled “white-collar” knowledge work. Given current models’ capabilities and a year or two to integrate them effectively, I estimate we could automate 50–60% of the tasks and services our company provides. That’s before the next generation of models arrives. Multiply that across industries, and the economic and organisational impact becomes transformative even without AGI.

If AGI takes 30 years rather than 3, it doesn’t mean little changes until then. It means the transformation happens in stages, through systems that are narrow but increasingly powerful. Those systems will shape institutions, incentives, and values long before general intelligence appears.

With this framing in mind, we can now revisit the grand challenges identified in Preparing for the Intelligence Explosion. Some of them depend on AGI-level capabilities, but many are already visible in emerging forms today and will become steadily more important as non-AGI systems advance. In the next section, I look at which of these challenges remain most relevant in a world where progress continues but AGI itself is still decades away.

The Grand Challenges Without AGI

I will go through the grand challenges that I feel remain relevant even if AGI does not emerge soon, or at all. I don’t think every challenge applies equally in a non-AGI world, but many still capture dynamics already shaping global stability, coordination, and values.

The challenges I focus on are:

  • Highly destructive technologies
  • Power-concentrating mechanisms
  • Value lock-in mechanisms
  • New competitive pressures
  • Abundance

Each of these, in different ways, continues to influence how societies organise power, manage risk, and prepare for future technologies, whether or not AGI is realised soon.

Highly Destructive Technologies

Explosive technological progress could vastly increase humanity’s destructive power through engineered pathogens, drone swarms, nuclear expansion, or atomically precise manufacturing. A sudden surge in capability could raise both the severity and the likelihood of catastrophe, as new weapons proliferate faster than defences or norms can adapt.

However, even without AGI, this dynamic is already visible. Synthetic biology, cyberwarfare, and autonomous weapons are advancing quickly with the help of today’s AI systems, lowering the barriers to large-scale harm. The same geopolitical traps the authors of Preparing for the Intelligence Explosion describe (arms races, pre-emptive pressures, and fragile deterrence) are playing out today. Failing to manage the risks that narrow AI creates by accelerating destructive technologies increases our fragility and leaves us less prepared for more advanced AI later.

Power-Concentrating Mechanisms

Rapid technological progress from an intelligence explosion could centralise global power; politically, militarily, and economically. AI-enabled militaries, automated bureaucracies, and extreme first-mover advantages could allow a single company, state, or individual to entrench dominance. Economic inequality could deepen as automation shifts wealth from labour to capital, and early technological leaders could convert temporary advantages into lasting control.

But in today’s world, a few corporations already control frontier AI models, compute, and data. And governments use AI surveillance to strengthen political control. These trends weaken democracy, reduce competition, and make global coordination harder. Managing how narrow AI shapes power distributions is still highly relevant in a world where AGI does not arrive soon.

Value Lock-in Mechanisms

Preparing for the Intelligence Explosion warns that rapid technological progress could let powerful actors permanently entrench their own values. Advanced AI, automated militaries, surveillance, and commitment technologies could allow regimes to enforce absolute loyalty and eliminate dissent. Preference-shaping tools or even global governance structures might fix particular moral or political orders indefinitely. What begins as stability could become stagnation, locking humanity into sub-optimal or harmful value systems for centuries.

Narrow AI may still radically affect digital infrastructure and social systems. Today, algorithms already reinforce ideologies and governments use AI surveillance to suppress opposition. These softer forms of lock-in make societies less adaptable, reducing moral flexibility just as technology accelerates. If we fail to preserve open institutions and pluralism now, future AGI will arrive in a world where moral and political evolution has stalled.

New Competitive Pressures

The paper argues that explosive technological growth could create extreme competitive pressures between states and firms. Safety/growth trade-offs may reward the most reckless actors, leading to “races to the bottom” in ethics and risk management. AI-enabled blackmail and strategic manipulation could empower aggressive or uncooperative groups. Over time, competition might even drive societies to delegate more functions to AI systems, gradually eroding human control and values.

These dynamics are already visible. States and firms are locked in a competition to deploy ever-more capable AI systems, often prioritising strategic advantage and speed over safety and transparency. “Safety-performance trade-off” models illustrate how actors under competitive pressure may accept higher risks in pursuit of greater capability, undermining overall safety even before catastrophe occurs. Meanwhile, military and economic rivalries are accelerating automation, compressing decision cycles, and creating new escalation and deterrence fragilities. In a world without AGI, competition still rewards speed over caution.
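
To make that intuition concrete, here is a minimal toy sketch of a safety-performance trade-off. It is my own illustration, not a model from the paper: each competing actor chooses how much effort to put into safety, capability is assumed to be whatever effort remains, the most capable actor “wins” the race, and accident risk falls with the winner’s safety investment. The functional forms and numbers are illustrative assumptions only.

```python
def race_outcome(safety_levels, accident_exponent=2.0):
    """Toy safety-performance trade-off (illustrative assumptions only).

    Each actor devotes a fraction of its effort to safety; capability is
    assumed to be whatever effort is left (1 - safety). The most capable
    actor "wins" the race, and the chance of an accident falls with the
    winner's safety investment:
        accident_prob = (1 - winner_safety) ** accident_exponent
    """
    capabilities = [1.0 - s for s in safety_levels]
    winner = max(range(len(safety_levels)), key=lambda i: capabilities[i])
    accident_prob = (1.0 - safety_levels[winner]) ** accident_exponent
    return winner, accident_prob


# Compare a uniformly cautious field with one containing a single actor
# willing to cut safety for capability.
for label, field in [("all cautious", [0.6, 0.6, 0.6]),
                     ("one racer", [0.6, 0.6, 0.1])]:
    winner, p = race_outcome(field)
    print(f"{label}: winner invests {field[winner]:.1f} in safety, "
          f"accident probability ~ {p:.2f}")
```

Even in this crude setup, a single actor willing to cut corners ends up setting the field’s effective safety level, which is the race-to-the-bottom dynamic these trade-off models point to.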

Abundance

The authors argue that an intelligence explosion could generate extraordinary material and cultural wealth, what they call radical shared abundance. Automated labour, near-zero-cost production, and new coordination technologies could make life vastly richer and safer, as rising prosperity tends to support stability, democracy, and risk reduction. However, these gains are not guaranteed. Without careful design, abundance could be captured by a small elite, or mismanaged in ways that entrench inequality and conflict rather than cooperation.

Without AGI, the gains will be more modest; however, existing AI systems have the potential to transform productivity, creative work, and global trade. Wealth concentration and job displacement may undermine the safety, stability, and trust needed to govern future technologies. Ensuring that AI-driven growth benefits society broadly is therefore part of existential risk reduction. If early abundance is distributed fairly and strengthens governance, future AI transitions become easier to manage.

Conclusions

The less we manage today’s and tomorrow’s sub-AGI risks, the less capable we’ll be when AGI arrives. Several of the grand challenges identified in Preparing for the Intelligence Explosion—highly destructive technologies, power-concentrating mechanisms, value lock-in mechanisms, new competitive pressures, and abundance—will remain relevant even if AGI does not emerge in the near term. Each is already being accelerated by the growing adoption of current AI systems and continued advances in their capabilities.

Even if we disagree on when, or even whether, AGI will appear, many of the actions that matter most, such as improving governance and aligning technology with public values, are relevant regardless of one’s forecast. Timelines shape how urgently we act and how we allocate current resources, but not what ultimately needs to be done. Longer timelines don’t reduce the urgency of AI governance. The longer we have before AGI, the more time we have to strengthen global resilience. If we fail to use that time wisely, we may enter the AGI era fragmented and unstable, conditions in which catastrophic risks are more likely to manifest.

For me, one of the most useful parts of Preparing for the Intelligence Explosion is that it offers practical steps to improve collective decision-making and institutional readiness before any intelligence explosion occurs. These insights are just as relevant today, even if AGI arrives later than many expect, or not at all.

  • Accelerating good uses of AI: We can deliberately steer AI toward improving truth-seeking, forecasting, and policymaking.
  • Value loading: Even current models benefit from clearer ethical specifications. Setting transparent norms for what AI systems should and shouldn’t do builds trust and limits misuse.
  • Empowering competent decision-makers: Governments and institutions need technically literate, adaptive leaders. Training programmes, improved recruitment, and access to secure AI tools can help bridge this capability gap.
  • Increasing understanding and awareness: Raising awareness helps society to plan, coordinate, and respond to the transformative shifts we are already seeing.

Whether AGI arrives in 3 years or 30, the same capabilities determine how safely we navigate both current and future AI transitions.

Thanks to Paul Knott for reviewing drafts of this blog and offering helpful suggestions.
