The first type of transformative AI?

18th December 2025

Introduction

“AI will be transformative” is now a pretty mainstream view. Indeed, it will be transformative in many different ways. Which of these should we pay most attention to? 
Serious discussion often focuses on the biggest challenges AI might bring — an intelligence explosion, say, or scheming AI systems. But AI will likely change many aspects of the world. By the time we face the biggest challenges, things might already be very different. If we want to navigate the transition well, it seems useful to understand this shifting landscape. So: in what way(s) will AI be a big deal first?
Some possible answers:
  • Economic
    • Greater economic growth and wealth
    • Advanced robotics reshaping manufacturing and household services
    • Altered resource bottlenecks (e.g. energy scarcity or glut)
    • Cyborg teams outperforming humans in many industries
  • Scientific
    • Accelerating medical research
    • Rapid uptick in technological R&D
    • Automation of AI research leading to intelligence explosion
  • Epistemic
    • Making strategic foresight more like weather forecasting or web search
    • Permitting — or fighting — personalized misinformation at scale
    • Replacing (& improving?) the way that people coordinate to make decisions
  • Power & influence
    • Mass unemployment & widening economic inequality
    • Control of militaries becoming much more centralized 
    • Strengthening democracy by enabling greater participation and more meaningful oversight
    • Shifts in the equilibrium of power between states, or between companies and states
    • Widespread access to potentially dangerous knowledge (e.g. bioweapons, cyber) or cheap offensive equipment (e.g. drones)
  • ~Existential
    • Emergence of AI systems that might reasonably be seen as moral patients
    • Brain emulation tech that strains our notions of personhood 
    • Risk of takeover by misaligned agentic AI systems

Early transformations change the strategic landscape

On their own, some of the “transformations” above would matter a lot less than others. But early changes could affect who has power, how people think about AI, what tools are available to us, and so on — all of which, in turn, affect how subsequent transitions play out. So knowing the likely order of transformations could change what’s good to do.
[Figure: hand-drawn graph of AI-driven change over time, rising gradually and then accelerating before AGI, highlighting uncertainty about how much the world changes before AGI and why the sequence of pre-AGI transformations matters. Caption: Our perspective in a nutshell.]

Some illustrative trajectories

Compare, for instance, the following hypothetical paths: 
  • Silent intelligence explosion (silent IE)
    • AI has modest impacts on the world, until a breakthrough triggers a rapid intelligence explosion
    • The world is suddenly faced with extremely advanced (possibly rogue) agentic superintelligence
  • Turbocharged economy 
    • AI systems that can execute increasingly ambitious tasks drive growth across many sectors of the economy
      • We learn, through trial and error, to handle human-level-ish autonomous AI agents
      • This leads to more billionaires and trillionaires, while large numbers of people are made redundant
    • Research speeds up across the board (but still needs humans)
      • The world absorbs and learns to handle a broad variety of advanced technologies
      • Social structures shift at unprecedented speed
    • Later an intelligence explosion begins in earnest[1]
  • Epistemics first
    • AI gets embedded into how we find information, make decisions, and coordinate
      • This is more infrastructure (something that works quietly in the background and enables what is built on top of it) than tool (something brought out for a specific purpose at a specific time) or agent (something that independently operates towards a specified goal)
      • For better or for worse, more and more powerful AI systems are seamlessly built into our social platforms — increasingly, AI determines what info people see and trust
    • As AI systems come online that can do larger and larger tasks independently, they are smoothly integrated with this infrastructure layer
    • AI-created knowledge also integrates into these layers, helping many actors to better track the strategic situation they are in
Learning more about which transitions are likely to come earlier should change our prioritization. For example, if we knew we were on one of these particular trajectories:
  • On the silent IE path:
    • AGI[2] arrives in a world pretty similar to our own — key decision-making institutions won’t have changed drastically
    • Preparing to automate safety research seems especially important
    • Preparing for coordinated slowdown to stretch out the critical period may also be worth a shot
    • While other transformations may follow soon after, it makes more sense to defer preparation for them to a world with superintelligent AI
  • On the turbocharged economy path:
    • By the time we’re able to build AGI —
      • AI will be a much bigger social/political issue than it is today (but it’s unclear which AI-related issues will command the most attention)
      • Governments may struggle to keep up with progress in industry; more generally, quite different groups may be in power
    • Possible implications include —
  • On the epistemics first path:
    • The world will have more and better affordances:
      • Those who make use of the right AI systems have a much better picture of the strategic situation than anyone is able to have today
      • Major unforced errors will be technically avoidable
      • Choosing to deliberate or coordinate on large scales may become feasible
    • We should lay the groundwork now for plans which involve ambitious coordination to handle more advanced AI, and deprioritize plans that rely on specific collective sense-making and decision-making processes as they exist today
    • We should do more threat modelling on ways this could go wrong (e.g. many people actively misled by their AI assistants), and safeguard against those

Shouldn’t we just focus on AGI? 

You might think that if AGI is ultimately the most important part of this picture, we should just keep our eye on that ball, and avoid getting distracted by other transformations. This is an implicit bet on something pretty close to the silent IE path (or on the view that the same interventions are still pretty much what you’d want on other paths — which we’ve just argued against).
Might this bet be justified? The strongest case we can see for focusing heavily on the silent IE trajectory would rest on some combination of the following beliefs:
  • Rapid AGI: AI is unlikely to change the world much until we’re facing extremely advanced agents, because progress towards AGI will be too fast (or too sharp) to allow for development and diffusion of other potentially transformative technologies
  • Leverage: We’re especially well positioned to help in worlds in which we are in fact on the silent intelligence explosion path, since AI challenges would get little attention from others in this scenario, we’d be facing an extremely high-stakes situation with a reasonably predictable strategic background, etc.
Ultimately, we don’t find these arguments very compelling:
  • On rapid AGI:
    • Today’s AI systems seem close to the kinds that could power important transformations (this is a theme we’ll return to in subsequent articles)
    • Technological progress rarely has large discontinuities, which suggests we should have a prior that pre-AGI systems will be capable enough to cause a lot of change[3]
    • While diffusion may cause real delays, uptake can still be swift and feel sudden after the tech reaches a certain quality level (and there are pathways around some obvious blockers)[4]
  • On leverage:
    • The neglectedness of AI safety on the silent IE path does mean our work could have an outsized influence, but we think there’s extra leverage on some of the other trajectories, too (see discussion below)
    • And whatever the theoretical case for leverage may be, in practice it seems[5] like we’re often not that well positioned to help in silent IE scenarios
Overall, while we don’t think it’s clearly wrong for someone to focus on silent-IE-type worlds, we don’t see the case for it as very robust — and we worry that it receives excess attention because it is in many ways easier to visualize than worlds which have undergone more radical transformation by the time AGI arrives.

Early transformations could represent important intervention points

There are two different ways that we might try to affect early transformations to make things go better:
[Figure: hand-drawn diagram comparing two strategies for early AI transitions, improving how individual transformations go versus influencing their order, and noting that earlier transitions are more predictable and more neglected than later ones. Caption: Key strategies for helping via early AI transitions.]

Changing the order of AI transformations

We’ve been treating the order of transformations as something to be discovered, but it might be pretty malleable. If a particular sequence gives us an easier path than the others — because the earlier changes would equip us to handle later challenges — then differentially speeding things up to make that sequence more likely could help a lot. For instance:
  • The silent IE path seems unusually perilous
    • The risks of superintelligent AI are especially daunting, and we might be better positioned to tackle them if we had the capabilities and affordances granted by other AI transformations (e.g. infrastructure to help people coordinate, or to robustly automate safety work)
    • So if that path is somewhat likely but not overdetermined, trying to get off it by differentially speeding up other AI changes (which are more peripheral to progress towards superintelligence) could be a good idea
  • Epistemic infrastructure changes, meanwhile, seem generally desirable to have early
We could try to make our desired sequencing more likely by differentially investing in some kinds of technology over others, working to accelerate adoption of some technologies or slow adoption of others, etc.

Improving how earlier AI transformations play out

Early AI changes might have a big impact on the world’s competence for handling subsequent challenges. And since transitions are moments of flux, there may be opportunities to help them land in good places. To improve how an early transformation plays out, we could generally try to help people understand what’s going on, and then accelerate better versions of the relevant technologies, improve key institutions, help society to integrate technology in wiser ways, etc.
For instance: 
  • Superintelligent systems that come early on the silent IE trajectory will be able to take subsequent challenges off our hands. If we’re definitely on this path, our job is to ensure that those early superintelligent systems are aligned, safe, and pointed in good directions.
  • On the turbocharged economy trajectory:
    • The economic and scientific shifts this brings seem like they could go multiple ways:
      • They could mean that we’ve experimented and learned good methods for deploying advanced tech, and that society is richer, less corrupt, more aware of AI, and more willing to invest in safety, etc.
      • But they could also leave us in turmoil, with a growing unemployed class and key decisions made opaquely by small groups of elites; powerful new technologies might trigger wars, etc.
    • So to steer towards better equilibria, we might want to e.g.
      • Build out technical and legal frameworks for handling ~human-level AI agents 
      • Focus on things which help to maintain healthy balances of power among humans, help liberal governments stay relevant and legitimate, etc.
  • If it’s epistemics first:
    • If we’re not careful, AI systems that are integrated into our platforms and decision-making could manipulate or mislead instead of improving our thinking. But if we get things right, the world might be much better at handling new challenges
    • We might therefore work to ensure that we land in high-truth equilibria, to find ways of measuring which uplift tools are trustworthy, and to drive their adoption
We may also have more leverage to improve earlier transformations (compared to improving later ones):
  • People who come later can work on later transitions; only we can work on the early ones
    • In the allocation of farsighted risk-mitigation work across time, people living before the first transformation have a comparative advantage in working on that transformation, while people living after it are better placed to work on later ones
    • This consideration is stronger because the number of people paying attention to AI will probably go up over time
    • Moreover, work on later transitions may be swamped by huge amounts of AI labour being directed to the same questions once it’s possible to automate that kind of work (or a substitute for it)
  • We can make better predictions about earlier transformations (and how to affect them), and will have to pay bigger nearsightedness penalties for later transitions
Of course, this doesn’t mean that we should necessarily work on the earliest changes! For one thing, there is a spectrum of how transformative things will be (and we will tend to see small transformations before we see big ones). For another, sometimes we might need to get started early to prepare for later transitions.

Information about the likely sequencing of transformations would be valuable

Since (it seems to us) some of the best interventions available to people today may look like shaping early transformations, we think better information could have a large and direct impact on prioritization.
From our perspective, people are really dropping the ball here. We think understanding early AI transformations would be at least as action-guiding as understanding AI timelines. But questions of early transformations appear to have had much less explicit attention — instead, many people planning around the future of AI appear anchored to some implicit or received wisdom which emphasises only the most rapid and decisive transformations.[6] Others (perhaps reactively) dismiss the prospect of rapid transformations entirely.
As with AI timelines, this is a hard question to approach. But, as with AI timelines, careful analysis and consideration of different models can provide meaningful insight, and we think the question is worth some effort.
Some factors to consider when thinking about sequencing[7]:
  • Technical requirements: What technological capabilities are needed for each transformation? (Are there shorter paths? What breakthroughs are needed? Thinking about AI timelines is relevant here, but it may be ideal to examine the models underlying timeline predictions.)
  • Attention and investment: Are people already pushing towards the tech? Is anyone incentivised to do so, later? Might there be barriers, like social or political pushback?
  • Time-to-transformation: How rapidly would a tech be finessed, deployed, adopted? Are there other constraints or delays to reshaping the world? (Note that it can be hard to properly anticipate big transitions![8])
  • Comparison: Does the existence of one tech imply some probability of another already existing or being in use?
  • Deliberate strategic effort: Could technologists and funders deliberately direct attention, investment, and other prerequisites toward one or another tech pathway, because of its anticipated effects? Could societal conversations shape demand and therefore differential viability?

Closing thoughts

In future articles, we plan to go deeper into object-level analysis of some potential early transformations. In particular, we’ve come to believe that, before we see anything like agentic superintelligence, AI technologies could reshape the way people make sense of the world and make decisions. We might see exploratory, buggy tech giving way to applications that (like social media) change people’s interactions on a broad scale but don’t much change the global gameboard — and then to something more extraordinary, in the reference class of literacy, computers, or liberal democracy.
In our view, these shifts in epistemics and coordination are key early transformations to better understand and shape — ones that can too easily be obscured when we orient directly to agentic AGI and its precursors. But we are more confident in the importance of the sequencing question than in our tentative answers, so wanted to make the case for it stand alone.
In short, our case has been:
  1. There are multiple plausible candidates for “early big impacts of AI”.
  2. The best opportunities to make the world better depend a lot on this sequencing.
    1. Because early transformations change what looks effective for targeting later transformations.
    2. And because we may have better leverage on the earlier transformations.
  3. Despite this, there has been relatively little attention so far to the question of early impacts.
Questions of technological prediction are, of course, difficult. But the stakes here are high. We hope the question of early transformations can and will receive more serious analysis.
Thanks to Raymond Douglas, Max Dalton, Lukas Finnveden, Owain Evans, Will MacAskill, Rose Hadshar, Max Daniel, Tom Davidson, and Ben Goldhaber for helpful comments on earlier drafts.
