What an international project to develop AGI should look like
Released on 26th January 2026
This note was written as part of a research avenue that I don’t currently plan to pursue further. It’s more like work-in-progress than Forethought’s usual publications, but I’m sharing it as I think some people may find it useful.
Introduction
There have been various proposals to develop AGI via an international project.1 In this note, I:
- Discuss the pros and cons of having an international AGI development project at all, and
- Lay out what I think the most desirable version of an international project would be.
In an appendix, I give a plain English draft of a treaty to set up my ideal version of an international project. Most policy proposals of this scale stay very high-level. This note tries to be very concrete (at the cost of being almost certainly off-base in the specifics), in order to envision how such a project could work, and assess whether such a project could be feasible and desirable.
I tentatively think that an international AGI project is feasible and desirable. More confidently, I think that it is valuable to develop the best versions of such a project in more detail, in case some event triggers a sudden and large change in political sentiment that makes an international AGI project much more likely.
Is an international AGI project desirable?
By “AGI” I mean an AI system, or collection of systems, that is capable of doing essentially all economically useful tasks that human beings can do and doing so more cheaply than the relevant humans at any level of expertise. (This is a much higher bar than some people mean when they say “AGI”.)
By an “international AGI project” I mean a project to develop AGI (and from there, superintelligence) that is sponsored by and meaningfully overseen by the governments of multiple countries. I’ll particularly focus on international AGI projects that involve a coalition of democratic countries, including the United States.
Whether an international AGI project is desirable depends on what the realistic alternatives are. I think the main alternatives are 1) a US-only government project, 2) private enterprise (with regulation), and 3) a UN-led global project.
Comparing an international project with each of those alternatives, here are what I see as the most important considerations:
| Compared to... | Pros of an international AGI project | Cons of an international AGI project |
|---|---|---|
| A US-only government project | Greater constraints on the power of any individual country, reducing the risk of an AI-enabled dictatorship. More legitimate. More likely to result in some formal benefit-sharing agreement with other countries. Potentially a larger lead over competitors (due to consolidation of resources across countries). | More bureaucratic. More actors, which could make infosecurity harder. |
| Private enterprise with regulation | Greater likelihood of a monopoly on the development of AGI, which could reduce racing and leave more time to manage misalignment and other risks. More government involvement, which could lead to better infosecurity. | More centralised. |
| A UN-led global project | More feasible. Fewer concessions to authoritarian countries. Less vulnerable to stalemate in the Security Council. | Less legitimate. Less likely to include China, which could lead to racing or conflict. |
My tentative view is that an international AGI project is the most desirable feasible proposal to govern the transition to superintelligence, but I’m not confident in this view.2 My main hesitations are around how unusual this governance regime would be, risks from worse decision-making and bureaucracy, and risks of concentration of power, compared to well-regulated private development of AGI.3
For more reasoning that motivates an international AGI project, see AGI and World Government.
If so, what kind of international AGI project is desirable?
Regardless of whether an international project to develop AGI is the most desirable option, there’s value in figuring out in advance what the best version of such a project would be, in case at some later point there is a sudden change in political sentiment, and political leaders quickly move to establish an international project.
Below, I set out:
- Some general desiderata I have for an international AGI project, and
- A best guess design proposal for an international project.
I’m sure many of the specifics are wrong, but I hope that by being concrete, it’s easier to understand and critique my reasoning, and move towards something better.
General desiderata
In approximately descending order of importance, here are some desiderata for an international AGI project:
- It’s politically feasible.
- It gives at least a short-term monopoly on the development of AGI, in order to give the developer:
- Breathing space to slow AI development down over the course of the intelligence explosion (where even the ability to pause for a few months at a time could be hugely valuable).
- The opportunity to differentially accelerate less economically/militarily valuable uses of AI, and to outright ban certain particularly dangerous uses of AI.
- An easier time securing the model weights.
- No single country ends up with control over superintelligence, in order to reduce the risk of world dictatorship.
- I especially favour projects which are governed by a coalition of democratic countries, because I think that:
- Governance by democratic countries is more likely to lead to extensive moral reflection, compromise and trade than governance by authoritarian countries.
- Coalitions are less likely to become authoritarian than a single democratic country, since participating countries will likely demand that checks and balances are built into the project. This is because (i) each country will fear disempowerment by the others; and (ii) the desire for authoritarianism among leaders of democracies is fairly unusual, so the median of several democratic political leaders is much less likely to aspire to authoritarianism than a single randomly selected democratic political leader is (see the illustrative calculation after this list).
- Non-participating countries (especially ones that could potentially steal model weights, or corner-cut on safety, in order to be competitive) actively benefit from the arrangement, in order to disincentivise them from bad behaviour like model weights theft, espionage, racing, or brinkmanship. (This is also fairer, and will improve people’s lives.)
- Where possible, it avoids locking in major decisions.
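To illustrate point (ii) above: if each leader independently has some small chance of aspiring to authoritarianism, the median of a coalition of leaders is far less likely to do so than one randomly selected leader, because a majority of the coalition would have to share the aspiration. The numbers below (p and n) are purely hypothetical, chosen only to show the shape of the effect:

```python
from math import comb

# Hypothetical numbers, for illustration only (not from this note):
p = 0.05  # chance that any one democratic leader aspires to authoritarianism
n = 9     # coalition size (nine founding members in the proposal below)

# A single randomly selected leader is a risk with probability p.
single_leader_risk = p

# The coalition's median leader aspires to authoritarianism only if a
# majority of the n leaders do, which is binomially unlikely for small p.
median_leader_risk = sum(
    comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n // 2 + 1, n + 1)
)

print(f"single leader: {single_leader_risk:.2%}")          # 5.00%
print(f"median of {n} leaders: {median_leader_risk:.4%}")  # 0.0033%
```

On these (illustrative) numbers, governance that tracks the median of nine leaders is over a thousand times less likely to be captured by an aspiring autocrat than governance by one leader.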
My view is that most of the gains come from having an international AGI project that (i) has a de facto or de jure monopoly on the development of AGI, and (ii) curtails the ability of the front-running country to slide into a dictatorship. I think it’s worth thinking hard about what the most-politically-feasible option is that satisfies both (i) and (ii).
A best guess proposal
In this section I give my current best guess proposal for what an international AGI project should look like (there’s also a draft of the relevant treaty text in the appendix). My proposal draws heavily from Intelsat, which is my preferred model for international AGI governance.
I’m not confident in all of my suggestions, but I hope that by being concrete, it’s easier to understand and critique my reasoning, and move towards something better. Here’s a summary of the proposal:
- Membership:
- Five Eyes countries (the US, the UK, Canada, Australia, and New Zealand), plus the essential semiconductor supply chain countries (the Netherlands, Japan, South Korea, and Germany), but not including Taiwan.
- They are the founding members, and invite other countries to join as non-founding members.
- Members invest in the project, and receive equity returns on their investments. They agree to ban all frontier training runs outside of the project.
- Non-member countries in good standing receive equity and other benefits. Countries not in good standing are cut out of any AI-related trade.
- Governance:
- Voting share is determined by equity. The US gets 52% of votes.4 Most decisions require a simple majority, but major decisions require a ⅔ majority and agreement from ⅔ of founding members.
- AI development:
- The project contracts out AI development to a company or companies, and funds significantly larger training runs.
- Project datacenters are distributed across member countries, with 50% located in the US, and have kill switches to which the leadership of every founding member has access.
- Model weights are encrypted, with parts distributed to each of the founding members, and with very strong infosecurity for the project as a whole. (One possible mechanism is sketched after this list.)
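The note doesn’t specify a mechanism for distributing encrypted weights, so the following is only a sketch of one natural reading: n-of-n secret sharing of the decryption key, under which reconstructing the weights requires every founding member’s share. The XOR scheme and all parameters here are illustrative assumptions, not part of the proposal.

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n_parties: int) -> list[bytes]:
    """n-of-n XOR secret sharing: every share is needed to recover the key."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n_parties - 1)]
    # The final share is chosen so that XORing all shares together yields the key.
    final_share = reduce(xor_bytes, shares, key)
    return shares + [final_share]

def recover_key(shares: list[bytes]) -> bytes:
    return reduce(xor_bytes, shares)

weights_key = secrets.token_bytes(32)         # e.g. an AES-256 key for the weight files
shares = split_key(weights_key, n_parties=9)  # one share per founding member
assert recover_key(shares) == weights_key     # all nine shares recover the key
```

A real scheme would more plausibly use threshold (k-of-n) sharing, such as Shamir’s, so that a single unresponsive member could not block a legitimate reconstruction.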
More detail, with my rationale:
| Proposal | Rationale |
|---|---|
| How the project comes about | |
| Name: Intelsat for AGI5 | |
| Aims: “To develop advanced AI for the benefit of all humanity, while preventing destructive or destabilising applications of AI technology.” | |
| Membership | |
| Non-members | |
| Governance structure: board of governors consisting of representatives from all countries with more than 1% of investment in the project. | |
| Vote distribution: decisions are made by weighted voting based on equity. | |
| Voting rule | |
| AI development | On larger training runs: |
| Compute | |
| Infosecurity | |
Why would the US join this project?
The Intelsat for AGI plan allows the US to entrench its dominance in AI by creating a monopoly on the development of AGI which it largely controls. There are both “carrot” and “stick” reasons to do this rather than to go solo. The carrots include:
- Cost-sharing with the other countries.
- Gaining access to talent from other countries, who might feel increasingly nervous about helping to develop AGI if the US would end up wholly dominant.
- Significantly helping with recruitment of the very best domestic talent into the project (who might be reluctant to work for a government project), and securing their willingness to go along with arduous infosec measures.10
- Securing the supply chain:
- Ensuring that they are first in line for cutting-edge chips (and that Nvidia is a first-in-line customer for TSMC, etc), even if there are intense supply constraints.
- Getting commitments to supply a certain number of chips to the US. Potentially, even guaranteeing that sufficiently advanced chips are only sold to the US-led international AGI project, rather than to other countries.
- Enabling a scale-up in production capacity by convincing ASML and TSMC to scale up their operations.
- Getting commitments from these other non-member countries not to supply to countries that are not in good standing (e.g. potentially China, Russia).
- If this were combined with an overall massive increase in demand for chips, it wouldn’t be a net loss for the relevant companies.
- Guaranteeing the security of the supply chain by imposing greater defense (e.g. cyber-defense) requirements on the relevant companies.
- An international project could provide additional institutional checks and balances, reducing the risk of excessive concentration of power in any single actor.
The sticks include:
- For the essential semiconductor supply chain countries:
- Threatening not to supply the US with GPUs, extreme ultraviolet lithography machines, or other equipment essential for semiconductor manufacturing.
- Threatening to supply to China instead.
- Threatening to ban their citizens from working on US-based AI projects.
- Threatening to ban access to their markets for AI and AI-related products.
- Some countries could threaten cyber attacks, kinetic strikes on data centers, or even war if the US were to pursue a solo AGI project.
- Threatening to build up their own AGI program — e.g. an “Airbus for AGI”.11
- In longer timeline worlds, non-US democratic countries could in fact build up their own AI capabilities, and then threaten to cut corners and race faster, or threaten to sell model weights to China.
- Or non-US democratic countries could buy as many chips as they can, and say that they would only use them as part of an international AGI project.
Many of these demands might seem far-fetched, since they are well outside what countries currently contemplate. However, the strategic situation would be very different if we were close to AGI. In particular, if the relevant countries know that the world is close to AGI, and that a transition to superintelligence may well follow very soon afterwards, then they know they risk total disempowerment if some other country develops AGI before them. This would put them in an extremely different situation from the one they are in now, and we shouldn’t assume that countries will behave as they do today. What’s more, insofar as the asks being made of the US in the formation of an international project are not particularly onerous (the US still controls the vast majority of what happens), these threats might not even need to be particularly credible.12
It’s worth dividing the US-focused case for an international AGI project into two scenarios. In the first scenario, the US political elite don’t overall think that there’s an incoming intelligence explosion. They think that AI will be a really big deal, but “only” as big a deal as, say, electricity or flight or the internet. In the second scenario, the US political elite do think that an intelligence explosion is a real possibility: for example, a leap forward in algorithmic efficiency of five orders of magnitude within a year is on the table, as is a new growth regime with a one-year doubling time.
In the first scenario, cost-sharing has comparatively more weight; in the second scenario, the US would be willing to incur much larger costs, as they believe the gains are much greater. Many of the “sticks” become more plausible in the second scenario, because it’s more likely that other countries will do more extreme things.
The creation of an international AGI project is more likely in the second scenario than in the first; however, I think that the first scenario (or something close to it) is more likely than the second. One action people could take is trying to make the political leadership of the US and other countries more aware of the possibility of an intelligence explosion in the near term.
Why would other countries join this project?
If the counterfactual is that the US government builds AGI solo (either as part of a state-sponsored project, a public-private partnership, or wholly privately), then other countries would be comparatively shut out of control over AGI and AGI-related benefits if they don’t join. At worst, this risks total disempowerment.
Appendix: a draft treaty text
This appendix gives a plain English version of a treaty that would set up a new international organisation to build AGI, spelling out my above proposal in further detail.
Preamble
This treaty’s purpose is to create a new intergovernmental organisation (Intelsat for AGI) to build safe, secure and beneficial AGI.
“Safe” means:
- The resulting models have been extensively tested and evaluated in order to minimise the risk of loss of control to AGI.
- The resulting models are not able to help rogue actors carry out plans that carry catastrophic risks, such as plans to develop or use novel bioweapons.
“Secure” means:
- The models cannot be stolen even as a result of state-level cyberattacks.
- The models cannot realistically be altered into models that are not safe.
“Beneficial” means:
- AGI that helps improve the reasoning, decision-making and cooperative ability of both individuals and groups.
- AGI that is broadly beneficial to society, helping to protect individual rights and helping people live longer, happier and more flourishing lives.
“AGI” means:
- An AI system, or collection of systems, that is capable of doing essentially all economically useful tasks that human beings can do and doing so more cheaply than the relevant humans at any level of expertise.
- If necessary, the moment at which this line has been crossed will be decided by an expert committee.
This treaty forms the basis of an interim arrangement. Definitive arrangements will be made not more than five years after the development of AGI or in 2045, whichever comes sooner.
Founding members
Five Eyes countries:
- US, UK, Canada, Australia, New Zealand.
Essential semiconductor supply chain countries (excluding Taiwan):
- The Netherlands, Japan, South Korea, Germany.
Non-founding members
All other economic areas (primarily countries) and major companies (with a market cap above $1T) are invited to join as members. This includes China, the EU, and Chinese Taipei.
Obligations on member countries
Member countries agree to contribute to AGI development via financing and/or in-kind services or products.
They agree to:
- Only train AI models above a specified compute threshold, measured in FLOP (a minimal sketch of this rule follows this list), if:
- They receive approval from Intelsat for AGI.
- They agree to oversight to verify that, in training their AI, they are not giving it the capability to meaningfully automate AI R&D.
- (Note that the above compute threshold might vary over time, as we learn more about AI capabilities and given algorithmic efficiency improvements; this is up to Intelsat for AGI.)
- Abide by the other agreements of Intelsat for AGI.
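As a concreteness check, here is a minimal sketch of the licensing rule above. The threshold value of 1e26 FLOP and all names here are hypothetical placeholders, since the treaty deliberately leaves the actual threshold for Intelsat for AGI to set and adjust:

```python
from dataclasses import dataclass

@dataclass
class TrainingRun:
    compute_flop: float        # total training compute for the proposed run
    approved_by_project: bool  # approval from Intelsat for AGI
    oversight_in_place: bool   # verified not to give the model the capability
                               # to meaningfully automate AI R&D

def run_is_permitted(run: TrainingRun, threshold_flop: float) -> bool:
    """Runs at or below the threshold are unrestricted; runs above it
    require both project approval and capability oversight."""
    if run.compute_flop <= threshold_flop:
        return True
    return run.approved_by_project and run.oversight_in_place

THRESHOLD_FLOP = 1e26  # hypothetical; adjustable over time by Intelsat for AGI

print(run_is_permitted(TrainingRun(5e26, False, False), THRESHOLD_FLOP))  # False
print(run_is_permitted(TrainingRun(5e26, True, True), THRESHOLD_FLOP))    # True
```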
Benefits to member countries
In addition to the benefits received by non-members in good standing, member countries receive:
- A share of profit from Intelsat for AGI in proportion to their investment in Intelsat for AGI.
- Influence over the decisions made by Intelsat for AGI.
- Ability to purchase frontier chips (e.g. H100s or better).
Benefits to non-country members
Companies and individuals can purchase equity in Intelsat for AGI. They receive a share of profit from Intelsat for AGI in proportion to their investment in Intelsat for AGI, but do not receive voting rights.
Benefits to non-member countries
There are non-members in good standing, and non-members that are not in good standing.
Non-members that are in good standing:
- Have verifiably not themselves tried to train AI models above the specified compute threshold without permission from Intelsat for AGI.
- Have not engaged in cyber attacks or other aggressive action against member countries.
They receive:
- A fraction of the profits from Intelsat for AGI.
- Ability to purchase API access to models that Intelsat for AGI have chosen to make open-access, and ability to trade for new products developed by AI.
- Commitments not to use AI, or resulting capabilities, to violate any country’s national sovereignty, or to persecute people on its own soil.
- Commitments to give a fair share of newly-valuable resources.
Countries that are not in good standing do not receive these benefits, and are cut out of any AI-related trade.
Management of Intelsat for AGI
Intelsat for AGI contracts one or more companies to develop AGI.
Governance of Intelsat for AGI
Intelsat for AGI distinguishes between major decisions and all other decisions. Major decisions include:
- Which lab to contract AGI development to.
- Constraints on the constitution that the AI is aligned with, i.e. the “model spec”.
- For example, these constraints should prevent the AI from being loyal to any single country or any single person.
- When to deploy a model.
- Whether to pass intellectual property rights (and the ability to commercialise) to a private company or companies.
- When to release the weights of a model.
- Whether a potential member should be excluded from membership, despite their stated willingness to offer necessary investment and abide by the rules set by Intelsat for AGI.
- Whether a non-member is in good standing or not.
- Amendments to the Intelsat for AGI treaty.
- Enforcement of Intelsat for AGI’s agreements.
Decisions are made by weighted voting, with vote share in proportion to equity. Major decisions are made by supermajority (⅔) vote share. All other decisions are made by majority of vote share.
Equity is held as follows. The US receives 52% of equity, and the other founding members receive 15% between them. 10% of equity is reserved for all countries that are in good standing (5% distributed equally on a per-country basis, and 5% distributed on a population-weighted basis). Non-founding members, including companies, can buy the remaining 23% of equity in stages; companies do not receive voting rights.
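As a check on the arithmetic: the four equity tranches above sum to exactly 100%, and the US’s 52% is enough to carry ordinary decisions alone but not major ones. A minimal sketch follows; only the 52/15/10/23 split and the voting thresholds come from the treaty text, the grouping labels are mine, and (since companies hold equity without votes) vote share can differ from equity share:

```python
# Only the 52 / 15 / 10 / 23 split and the voting thresholds come from the
# treaty text; the grouping labels are just bookkeeping.
equity = {
    "US": 52.0,
    "other founding members (shared)": 15.0,
    "good-standing countries (5% equal + 5% population-weighted)": 10.0,
    "non-founding members and companies (purchasable, in stages)": 23.0,
}
assert sum(equity.values()) == 100.0  # tranches must exhaust the equity

def decision_passes(vote_share_in_favour: float, major: bool) -> bool:
    """Major decisions need a two-thirds supermajority of vote share;
    everything else needs a simple majority. (The summary earlier in this
    note additionally requires agreement from 2/3 of founding members for
    major decisions.)"""
    threshold = 200.0 / 3.0 if major else 50.0
    return vote_share_in_favour > threshold

print(decision_passes(52.0, major=False))  # True: the US alone carries ordinary decisions
print(decision_passes(52.0, major=True))   # False: major decisions need broader support
```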
50% of all Intelsat for AGI compute is located on US territory, and the remaining 50% on the territory of one or more other founding members.
The intellectual property of work done by Intelsat for AGI, including the resulting models, is owned by Intelsat for AGI.
AI development will follow a responsible scaling policy, to be agreed upon by a supermajority of voting share.
Thanks to many people for comments and discussion, and to Rose Hadshar for help with editing.