A global convention to govern the intelligence explosion

Released on 26th January 2026

This note was written as part of a research avenue that I don’t currently plan to pursue further. It’s more of a work in progress than Forethought’s usual publications, but I’m sharing it because I think some audiences may find it useful.

Introduction

AI might well lead to a much faster rate of technological development (including development of AI capabilities): an “AI revolution” that results in a speed-up in technological development of a similar magnitude to the agricultural and industrial revolutions. If that happens, there will be a huge number of challenges that humanity will have to face in a short space of time — potentially over just a few months or years. The risk of takeover by misaligned AI is one, but there are many others, too, including:
  • Maintaining democracy even once the military and police forces have been, or are capable of being, automated.
  • Regulating new potentially extremely dangerous technologies (including technologies we haven't yet conceived of).
  • Governing the race to grab new pools of resources that have been unlocked (such as space resources).
  • Giving rights and consideration to potentially-sentient digital beings.
I think that how we govern this period (which I’ll call the intelligence explosion) is crucial, and working this out is one of the highest-leverage things we can do. Below, I sketch out a proposal for governing the intelligence explosion.

A convention to govern the intelligence explosion: an overview 

The plan is as follows:
  1. Define a threshold point that marks the beginning of the intelligence explosion.
    1. This point is characterised by:
      1. A set of technical benchmarks, as guidelines.
        1. These could cover technical AI benchmarks, AI investment measures, and macroeconomic indicators.
      2. A panel of leading experts, who make the ultimate decision as to whether that threshold has been crossed.
    2. Ideally, the point is
      1. Early enough that
        1. We aren’t yet facing serious AI takeover risk, or other catastrophic risk from AI.
        2. We aren’t yet deep enough into the intelligence explosion that one country has far greater power than all others. Conventional threats (such as threats of stealing model weights or algorithmic insights, refusal to provide essential equipment, or even nuclear retaliation) still have force.
      2. Late enough that
        1. It’s clear that an intelligence explosion is potentially very near, and within the time horizons of ordinary political decision-making.
        2. We can make good use of AI deliberative assistance.
  2. In advance, the US commits to a one-month pause of frontier AI development once this threshold is reached. 
    1. In this period, it convenes a one-month convention to figure out governance regimes for the intelligence explosion, and for new currently-unregulated issues that will arise as a result of explosive capabilities progress.
    2. Other countries that also verifiably pause frontier AI development during this period (and don’t engage in other sorts of aggressive action like stealing model weights) can send delegates to attend the convention to provide input. 
      1. I say “other countries” more generally, but the key countries would be the US, some European countries, and China.
      2. Ultimately, for political feasibility reasons, it’s probably still the US calling the shots. (The precise details here are crucial, though.)
      3. An analogy is with the formation of the UN: partly as a reward for allying against the Axis powers, countries could become UN members. The text of the charter was almost wholly written by the US, though. (This isn’t a claim that the UN was a great success — only that it was a near miss for a meaningful global governance regime, and a useful reference point.)
  3. The one-month period involves drafting a series of multilateral treaties for issues that need to be addressed over the course of the intelligence explosion and that currently have little or no regulatory framework. This includes, but is not limited to, regulation of AI itself. Some illustrative examples include:
    1. What restrictions, if any, there are on further AI development.
      1. This could go via restrictions on areas that aren’t directly about AI, such as restrictions on energy use.
    2. What investment there is in AI safety. 
    3. What restrictions, if any, there are on AI proliferation: whether superintelligence is in the hands of many or just a few; what guardrails AI systems have to have before being deployed.
    4. Governance of newly-valuable Earth-based resources, for example:
      1. Deuterium in the oceans for fusion power.
      2. Ocean surface area for solar power.
    5. Governance of newly-valuable space resources (such as solar power). 
    6. Governance of new weapons of mass destruction, or other highly disruptive technologies (such as very advanced persuasion or surveillance).
    7. Measures to protect democracy once human labour (including a human military and police force) is obsolete. 
    8. Measures to give people economic power once human labour is obsolete.  
    9. What rights, if any, digital beings have; what restrictions there are on the types of AI systems one can create.
    10. New issues that we haven’t yet thought of, but which become clear when we’re closer to the intelligence explosion.

Some arguments in favour of this plan

  1. It turns what is a gradual process (progressively accelerating AI and tech development) into a step-change (“now the intelligence explosion begins”), which I think will make it easier for politicians to take action, easier for civil society to rally around, and so on. 
  2. It pauses AI development around the point in time at which we gain the most from having extra time to think, because
    1. Things otherwise would start to go very quickly indeed, and could be very disruptive.
    2. We will have access to higher-quality AI deliberative assistance than we have now, which could dramatically increase our efficiency at figuring out what to do next.
    3. We will be closer to the intelligence explosion, and have a better understanding of how things will play out.
    4. Signs of an imminent intelligence explosion will be clearer, so a much wider group of people (including political leadership of different countries) will be on board with preparing for it.
  3. It’s a feasible pause.
    1. Assuming that the US has a lead of significantly more than one month over its rivals, it does not give up its lead by implementing this pause.
    2. It’s plausibly in everyone’s self-interest to jointly pause at this point in time, because the intelligence explosion could be so disruptive (including to political leadership), and because gains from collaboration at this point could be so great.
    3. Potentially, the machine learning community could use its leverage to make this pause happen. They could form a “Union of Concerned Computer Scientists,” with an agreement among members that, once the threshold has been crossed, they will continue to do machine learning research only if the US abides by the plan.
  4. It could lead to longer pauses: a one-month pause seems within the realm of possibility, but if it’s clear that more time is needed, then the convention could be extended.
    1. Alternatively, it could lead to the delay of particular moments of lock-in. For example, it could result in restrictions on grabbing space resources, in order to push points of lock-in into the future (as well as potentially slowing the intelligence explosion).
    2. It could also enable AI slowdown via non-standard routes (for example, via restrictions on energy use).
  5. It creates a target for policy analysis and advocacy. Knowing that such an event is planned to occur in the future, think tanks and similar organisations have greater incentives to work on these issues ahead of time.
    1. Note that this convention is not incompatible with, or a replacement for, earlier conventions and conferences on the issues that the convention for governing the intelligence explosion would cover.

Some objections to this plan, and responses

  1. The plan is a very high-level sketch. The devil is in the details, and it might well fail once the details are spelled out. As a comparison, “Create a global federation in order to prevent another world war” probably seemed like an excellent plan in the 1910s. But the League of Nations and the UN have both been pretty toothless; plausibly that’s because there was no way, compatible with the realpolitik of the time, of making them have teeth.
    1. Responses:
      1. We’re in a very different environment from the world of 1920 or 1945, which could change things in a very meaningful way. For example, the Western powers are very closely allied and, combined, have much greater economic and military power than any rivals. The level of distrust between the Western powers and China is considerably less than that between the West and the USSR after the creation of the UN.
      2. More specific multilateral treaties have had much more success than the League of Nations or the UN. These include the Treaty on the Non-Proliferation of Nuclear Weapons, the Montreal Protocol, the Marrakesh Agreement establishing the World Trade Organization, and the Chemical Weapons Convention.
  2. The US will have little incentive to adopt this plan. If it is the frontrunner, then it can get all the power by charging ahead, so it won’t want to pause.
    1. Responses: 
      1. By adopting this plan, the US can reduce the chance of theft of model weights, interruptions to the supply chain, or pre-emptive military action from other countries. This, alone, is a strong reason to like the plan.
      2. I expect political leaders to feel pretty uneasy about the intelligence explosion. They are currently on top within their countries; the effects of the intelligence explosion are hard to predict, and might undermine them.
      3. I don’t think that political leaders act solely in the narrow self-interest of the country they are leading; they are also significantly driven by ideology. Given that the “loss” from such a convention to the US would be small, that the loss would come out of truly enormous gains in wealth, and that there are plausible reasons why having such a convention would be in the US’s best interests, holding a convention and enacting collaborative treaties seems well within the range of acceptable decision-making.
  3. Either people will see the intelligence explosion coming or they won’t. If they do, then there’s no point in advocating for this, as it’ll just replace processes that will more or less happen anyway. If they don’t, then they won’t want to pause or hold a convention even once the threshold has been reached.
    1. Responses:
      1. I think we can significantly increase the chance of political leaders seeing the intelligence explosion coming, taking it seriously, and responding appropriately.
  4. The outputs of the negotiations that happen during the convention will not have teeth, because the US will soon afterward have all the power anyway.
    1. Responses:
      1. The US may well choose to follow the agreements even if it doesn’t “have” to.
      2. As part of the negotiations, there could be guarantees of continued enforcement, together with mechanisms to make that enforcement possible. For example, there could be an agreement that further development of AI should proceed via an international collaboration.
  5. It only helps in “medium-speed” takeoff scenarios. There may well be a time delay between when the threshold is crossed and when the convention actually convenes. In very fast takeoff scenarios, by the time the convention has started, we may already be deep into the intelligence explosion. In very slow takeoff scenarios, a convention won’t be necessary; business-as-usual politics could handle these issues.
    1. Responses:
      1. A “medium-speed” takeoff (which ramps up over a number of months or a small number of years) is my best-guess scenario.
      2. Though it’s true that this proposal doesn’t help if takeoff occurs over the course of days or weeks, that scenario seems very unlikely to me.
      3. If the takeoff is slower, there is at least no significant downside to calling the convention.
      4. With respect to the fast-takeoff scenario: if agreements could be made ahead of time, then the “pause” could be scheduled to begin immediately once the threshold is crossed.
  6. It only helps in scenarios where the frontrunner has a significant lead. If the frontrunner’s lead is only a few months, then a one-month pause might give up too much of that lead.
    1. Response:
      1. In my best-guess scenario, the frontrunner will have more than a one-month lead.
      2. But this is a way in which the plan is imperfect and might fail. 
  7. It gives laggard countries an extra month to try to steal model weights, etc.
    1. Response:
      1. My thought is that, for other countries, the gains from joining the convention and collaboratively hashing out treaties will outweigh the expected payoff of doing something more aggressive.

Alternatives to this plan, with responses

  1. Just try to come up with substantive regulation for all of these issues as soon as possible, rather than using up time on pushing for this “buck-passing” solution. That substantive regulation could be fairly minimal, but it could still be enough to disincentivise racing, theft of model weights, and so on.
    1. Response:
      1. I think that this is the most compelling major alternative to the plan. 
      2. But it loses out on the benefits of AI-assisted deliberation that we could get at the time, the benefits of knowing more clearly how things are going to play out, and the benefit that more people will have woken up to the possibility of a near-term intelligence explosion.
      3. And I expect that if this convention were planned, it would increase the amount of substantive regulation that happens in advance. Analogy: often, most of the work in drafting the constitution of a new country occurs well in advance of the constitutional convention.
  2. Don’t bundle together the different issues (e.g. space governance and digital rights); instead push for separate conventions to discuss treaties.
    1. Responses, in favour of bundling:
      1. Because of explosive capabilities progress, many of these issues will come onto the political radar at around the same time.
      2. Many of the issues will interact with each other; bundling helps avoid bad interactions between treaties.
      3. There might well be wholly new issues to address that we haven’t thought about yet (as a result of rapid intellectual or technological progress), and it would be good to have a catch-all procedure that can deal with them. 
  3. The plan should be more ambitious, in particular in the amount of power given to non-US countries. Rather than merely allowing other countries to have input, and rather than aiming for a series of multilateral treaties, we should instead push for something more substantive.
    1. Responses: 
      1. This is on the table for me. My current take, though, is that this would be much less politically feasible, and that this cost is major enough to be decisive against pushing for it.
  4. Focus instead on figuring out what outcome for future civilisation would actually be best, and then work out ways we might fail to achieve that future, and then work out which targeted interventions could prevent those failure modes.
    1. Response: This seems like a good thing to do, to me, but just seems like a different project. 
  5. Focus instead on ensuring that no one country is able to get a significant lead over any others, so that we maintain a balance of power: for example, by ensuring that the supply chain needed to develop ever-better AI is globally distributed.
    1. Response: I think this is pretty interesting; again, though, I don’t think this is incompatible with the proposed plan.

Thanks to many people for comments and discussion, and to Rose Hadshar for help with editing.

Part 5 of 7

This is a series of papers and research notes on the idea that AGI should be developed as part of an international collaboration between governments. We aim to (i) assess how desirable an international AGI project is; (ii) assess what the best version of an international AGI project (taking feasibility into account) would look like.
