Advanced AI Futures



Leon Oliver Wolf
October 3, 2025 - 3 min read

An analysis by the Centre for Future Generations shows how advanced AI development could unfold across five distinct scenarios, designed as a framework to help policymakers navigate the uncertainties of AI's technical progress and governance. The scenarios rest on two key variables: the pace of AI progress (advancing rapidly towards autonomous, general-purpose AI, or plateauing at narrower applications) and the distribution of AI development power (centralised under a few actors, or decentralised among many). Because risks and concerns manifest differently depending on which development pathway AI follows, the scenarios were chosen not only for plausibility but as thought-provoking, diverse jumping-off points for smarter policy planning and decision-making.

Five Pathways

  1. Take-off: AI systems self-improve at extreme rates, compressing decades into months. Unlocks unprecedented capabilities but risks outpacing human oversight entirely.
  2. Big AI: Dominant companies deploy capable AI agents, transforming productivity while consolidating market control and reshaping labor markets.
  3. Diplomacy: Safety concerns trigger international cooperation, creating stable governance frameworks (though at the cost of innovation speed and with enforcement fragility).
  4. Arms Race: US-China competition treats AI as existential national security, driving rapid but secretive progress that fragments the global ecosystem and heightens conflict risk.
  5. Plateau: Technical limitations force AI into narrow applications. The least disruptive path, yet current capabilities still reshape work and institutions significantly.

The framework systematically maps how different AI development pathways could unfold under optimistic or pessimistic conditions. For example, the Take-off scenario, where AI capabilities advance rapidly, results in either a "Cognitive revolution," where AI amplifies human intelligence and solves major challenges, or "Loss of control," where AI systems become too powerful to govern effectively and act against human interests. Similarly, Big AI, where a few dominant companies control AI development, leads to either "The agent economy," where AI assistants seamlessly handle complex tasks and boost productivity, or "Silicon blackmail," where these companies leverage their AI monopolies to extract unfair concessions from governments and society.

Navigating Uncertainty

As critical junctures lie ahead, the researchers prioritise resilience over prediction. Essentially, effective strategies must be adaptable and work across multiple scenarios as initial conditions evolve. Their approach acknowledges uncertainty while providing actionable guidance: identify the governance mechanisms, safety protocols, and institutional structures that will remain robust regardless of which scenario unfolds.

For those interested in how these scenarios were developed, the Centre for Future Generations provides detailed methodological notes explaining the step-by-step process behind the framework.

Source: Centre for Future Generations


Artificial Intelligence · AI Governance · Future Planning · Policy Making