Donate

Forethought is a non-profit that researches how to navigate the transition to a world with superintelligent AI systems.
Our research to date has highlighted the risk of AI-enabled coups, analyzed the likelihood of an intelligence explosion, and made the case that AI means we need to urgently address many challenges besides alignment.
This research has already fed into AI companies’ plans, shaped 80,000 Hours’ list of top problem profiles, helped catalyze a safety-focused VC fund, and been viewed more than 100,000 times.
You can learn more about our mission and team here, and our research here. Additional funds will allow us to expand our core research team and translate our ideas into policy change. You can donate to us using the button below.

Donations are tax-deductible in the US, UK and the Netherlands. For further details, see below.

Why Forethought?

Research into extremely neglected and important topics has a surprisingly strong track record. For instance, early work on existential risk and AI risk (e.g. from Nick Bostrom, Carl Shulman, Eliezer Yudkowsky and Paul Christiano) was crucial for catalyzing much of the present work to mitigate AI risk. In those cases, good ideas really did seem to win out, and surprisingly quickly. (This is not to say that these people got everything, or even most things, right in their early work, but they got some important things right and helped build a foundation for others.)
And we think the path to impact from further work in this vein is potentially stronger now: there is already a network of people in influential positions (e.g. in the US government, senior lab roles, and key AI regulators) who are interested in the ideas we develop and can put them into practice.
Unfortunately, many of the organizations that focused on these issues (like the Future of Humanity Institute, the Global Priorities Institute, and the former Open Philanthropy Worldview Investigations team) no longer exist. We are inspired by their efforts, and hope to build an institution with high intellectual standards, a strong core team, and the space to focus on important big-picture questions.

Some testimonials

I'm a fan of Forethought. They've got great people and are asking the important questions. Basically a spiritual successor to the FHI and the OpenPhil worldview investigations team. If I wasn't at AI Futures Project I'd probably want to work at Forethought.
— Daniel Kokotajlo, author of AI 2027
I'm excited to recommend Forethought. They're tackling the crucial but neglected question of how to ensure transformative AI leads to genuinely good outcomes—not just human survival. Will, Tom, and Max have assembled an exceptional team that's already producing high-impact research and influencing how AI labs think about risks like AI-enabled coups. They represent exactly the kind of foundational thinking we need as we navigate the intelligence explosion.
— Zach Freitas-Groff, AI Programme Officer at Longview Philanthropy
I think Forethought is currently the best institution in the world doing full-throated, scope-sensitive macrostrategy. They've assembled a great team, and they've already made substantial contributions to the discourse (I'm especially excited about the AI-assisted coups report and the Better Futures series), and I think that the overall research agenda (e.g., work on the various grand challenges that the intelligence explosion could create) is quite promising. What's more, I think the general niche Forethought is filling is both highly neglected and crucially important – and in particular, important to our ability to notice ways in which the overall strategic and prioritization landscape may be different from what we thought, and to adjust accordingly. I think they're well worth funding, and I'm excited to see where their work goes from here.
— Joe Carlsmith, formerly Senior Advisor at Open Philanthropy, now Anthropic

Theory of change & achievements so far

We research neglected topics in AI strategy, then share those findings to inform work on preparing for advanced AI.

Research

If AI drives explosive growth, there will be an enormous number of challenges we have to face. In addition to misalignment risk and biorisk, these potentially include: how to govern the development of new weapons of mass destruction; what rights to give digital beings; how to govern an automated military; and how to avoid dictatorship or authoritarianism.
To date, most work has focused on reducing the risk of misalignment, in part because of foundational work making the case for this area. While misalignment risk is important, we tend to think that many non-alignment areas of work (e.g. avoiding AI-enabled coups) are particularly neglected, and we hope to scope out these areas and catalyze more work on them.
Another distinctive perspective we bring is a focus on achieving a near-best future. Most explicitly longtermist work to date has focused on avoiding existential catastrophe, but achieving a near-best future might be even more important. Research avenues here include mapping out what a desirable “viatopia” would look like (i.e. a state of the world that is very likely to lead to a very good future), figuring out how space resources should be allocated, and addressing issues around AI epistemics and persuasion.
In our first year, we’ve published over 20 research papers, which you can see here.

Impact

To date we have mostly focused on research, but we have been pleased by the impact our work has already had.
Stefan Torges, our Head of Strategic Initiatives, works to ensure that action-relevant insights from our research are implemented, by finding and empowering specific people to take them forward. Stefan is currently focused on mitigating AI-enabled coups: developing policy proposals, developing technical agendas for auditing models for secret loyalties, and ensuring that infosecurity efforts prioritize systems integrity.

Our team

Meet the full team

Our plans and budget

For details of our plans and budget for 2026, see our 2025 fundraiser page.

How to donate

  • You can donate via the button below.
  • Donations via the button above are processed via Giving What We Can.
    • Such donations are tax-deductible for donors in the US, UK (via Gift Aid) and the Netherlands.
    • Donations can be made via credit/debit card, bank transfer, direct debit (UK only), cryptocurrencies, or stocks (US only).
  • You can also donate via Every.org using this link.
    • Note that this is our preferred donation method for large donations (>$100k) from the US.
    • Such donations are tax-deductible for donors in the US only.
    • Donors based in the US can give via credit card, bank transfer, PayPal, Venmo, cryptocurrency, stocks, or from a donor-advised fund (DAF).
  • If you have any questions, if you’d like to make a donation above $20,000 via a method not listed above, or if you need tax-deductibility in another country, please contact us and we can look into it.
