Expression of interest (power concentration)

Applications close on 1 February 2026.
We are looking to support people who want to contribute to reducing risks of extreme power concentration or AI-enabled coups.
We encourage you to express your interest even if you don’t have obviously relevant expertise or a concrete idea of what to work on. However, please note that we may not get back to you.

How we might support you

We’re open to different kinds of collaborations or support. Here are some concrete examples:
  • Funding, e.g., contracting/consulting arrangement, seed funding, introductions to funders
  • Expertise, e.g., calls with internal experts, introductions to external ones, visiting our Oxford offices
  • Operational support for setting up projects, e.g., introductions to fiscal sponsors, expertise from our operations staff

Who we’re looking for

We are particularly interested in supporting or working with people with some of the following attributes:
  • Expertise in one or more relevant areas (e.g., national security, public-private partnerships / B2G dynamics, information security, machine learning & alignment science, AI auditing & evaluations, democratic backsliding and resilience, antitrust),
  • Willingness to consider scenarios with very advanced AI capabilities (e.g., automation of AI R&D or command-and-control systems),
  • Entrepreneurialism and the ability to make progress in poorly-scoped areas.

What you could work on

We’re open to a wide range of possible approaches (e.g., research, policy development, education, or advocacy), and you don’t need a concrete idea to express your interest.
In our report on AI-enabled coups, we list several promising mitigations (the list is not exhaustive):
  • Establishing rules for legitimate use of AI (e.g., via model specs, terms of service/use, principles for government use)
  • Implementing technical measures to enforce these rules (e.g., guardrails, audits, infosecurity, stress-testing)
  • Empowering multiple actors to prevent AI-enabled coups (e.g., transparency into capabilities, model specs, and compute usage; distributed decision-making authority over AI development; distributed AI capabilities)

Apply
