Better Futures

Supplement: The Basic Case for Better Futures

Released on 3rd August 2025
William MacAskill
Philip Trammell

Introduction

This report introduces a simplified model for evaluating actions aimed at producing long-term good outcomes: the “SF model”, in which the expected value of the future is approximated by the product of two variables, Surviving (S) and Flourishing (F). Surviving represents the probability of avoiding a near-total loss of value this century (an “existential catastrophe”), while Flourishing represents the expected value of the future conditional on our survival. Using this model and the “scale, neglectedness, tractability” framework, we argue that interventions aimed at improving Flourishing are of comparable priority to those focused on Surviving.

The SF model

We’ll define value as the difference that Earth-originating intelligent life makes to the value of the universe.
Insofar as we’re aiming to do as much good as possible, we want to maximise expected value. We can break the expected value of an action into two components:1

 
EV = EV(\text{near-term}) + EV(\text{future})
We’ll let “near-term” refer to “between now and 2100”. If we accept longtermism,2

we accept that the best action we can take must be near-best with respect to the latter component.
We can further decompose EV(future) as follows:
EV(\text{future}) = P(\text{survival this century}) * EV(\text{future} | \text{survival this century}) + P(\text{not-survival this century}) * EV(\text{future} | \text{not-survival this century})
We’ll come back to the definition of “survival this century”, but for now we’ll say that: (i) by stipulation, outcomes involving the total extinction of Earth-originating life are of 0 value; (ii) the best feasible long-run outcomes are of value 1;3

and (iii) “survival this century” occurs if nothing has happened by 2100 to lock us into a 0-value future. That is, EV(future | not-survival this century) = 0. So our formula reduces to:
EV(\text{future}) = P(\text{survival this century}) * EV(\text{future} | \text{survival this century})
Then we can further break this up:
EV(\text{future}) = P(\text{survival this century}) * P(\text{survival in future centuries} | \text{survival this century}) * EV(\text{future} | \text{survival in both periods}) + P(\text{survival this century}) * P(\text{not-survival in future centuries} | \text{survival this century}) * EV(\text{future} | \text{survival this century and not-survival in future centuries})
Where “survival in future centuries” refers to survival for a much longer time period — say, the next billion years — but not indefinitely (which is likely impossible). 
If we take seriously, per longtermism, that almost all expected value is in the far distant future, then EV(future | survival this century and not-survival in future centuries) ≈ 0, and 
EV(\text{future}) \approx P(\text{survival this century}) * P(\text{survival in future centuries} | \text{survival this century}) * EV(\text{future} | \text{survival in both periods})
We’ll assume that we cannot affect P(survival in future centuries | survival this century),4

and that this probability is constant across compared actions.5

Because, for an expected-value maximiser, value is unique only up to a positive affine transformation, multiplying the value of every option by the same positive constant does not alter the options’ relative expected value. So any agent that maximises EV(future) will also maximise EV'(future) = EV(future) / P(survival in future centuries | survival this century). So we have:
EV'(\text{future}) \approx P(\text{survival this century}) * EV(\text{future} | \text{survival this century and survival in future centuries})
However, if “survival” refers to locking us into a literally 0-value future, then this is not the most useful breakdown, because then, arguably, almost all current longtermist action is focused on EV(future | survival this century and survival in future centuries). For example, from many perspectives, a future involving misaligned AI takeover is very unlikely to have literally 0 value.
Instead, we replace the idea of “survival” with “survival*”, which means that nothing has happened to essentially guarantee that the future will be of near-0 value, for some precisification of nearness. We suggest the following precisification. Let a best feasible future be a future at the 99.99th centile of your probability distribution over value. 100% value is the value of a best feasible future. An outcome is of “near-0” value if it’s within 0.1 percentage points of 0 value. We think this is a natural way of precisifying “no existential catastrophe” — or, at least, of precisifying the concept in the way that it’s actually used. So, someone who internalised Bostrom’s “Maxipok” principle and wanted to minimise existential risk might focus only on increasing P(survival*).6

Given this definition, EV(future | not-survival* this century) ≈ 0, and so we can use the same decomposition and simplification as above. We get to:
EV'(\text{future}) \approx P(\text{survival}^* \text{ this century}) * EV(\text{future} | \text{survival}^* \text{ this century and survival}^* \text{ in future centuries})
Finally, to tidy things up, we can refer to “P(survival* this century)” as “S” for Surviving and “EV(future | survival* this century and survival* in future centuries)” as “F” for Flourishing. So we have:
EV'(\text{future}) \approx S * F
We call this the SF model.
In what follows, we will assume that F is currently positive, so that it is valuable to raise rather than lower S.
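To make the bookkeeping concrete, here is a minimal sketch of the model in Python. All numbers are purely illustrative placeholders, not estimates from this report.

```python
# Minimal sketch of the SF model; all numbers are illustrative placeholders.
S = 0.8        # P(survival* this century) -- illustrative
p_later = 0.5  # P(survival* in future centuries | survival* this century) -- illustrative, taken as fixed
F = 0.1        # EV(future | survival* in both periods), on the 0-1 value scale -- illustrative

# Full decomposition (the non-survival* term is ~0 by construction).
ev_future = S * p_later * F

# Dividing by the constant p_later is a positive rescaling, so it preserves
# the ranking of options; the rescaled quantity is just S * F.
ev_prime_future = ev_future / p_later
assert abs(ev_prime_future - S * F) < 1e-12

print(f"EV(future) = {ev_future:.3f}; EV'(future) = S*F = {ev_prime_future:.3f}")
```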

The basic case for work on Flourishing

First, we note that nothing about the nature of wanting the future to go better favours focusing on Surviving rather than Flourishing. And, at the very least, the two-factor model makes clear that we shouldn’t focus only on increasing S. In general, if variables x and y are not perfectly correlated and there are many options to choose among, the option that maximises x*y is typically not the option that maximises x, nor the one that maximises y: the “tails come apart”.7

So, maxing out on S is very unlikely to be the best course of action.
But there’s also an argument based on the “scale, neglectedness, tractability” framework for thinking that work on Flourishing is in at least the same ballpark of priority as work on Survival. Let’s go through each aspect of the framework in turn.

Scale

The "scale" of a problem refers to the total amount of value at stake. We can assess the relative scale of Survival and Flourishing by examining the potential gains from improving each.
We’ll consider 100% value to be the ceiling on how high F can be. As argued in essays “No Easy Eutopia” and “Convergence and Compromise”, we’re probably far from the ceiling on F. If we are also not far from the ceiling on S, then:  
  1. Making some absolute change to F does more good than making the same absolute change to S. 
  • a. Suppose we think that S is at 80% and F is at 10%.
  • b. Using these numbers, increasing F by one percentage point has 8 times the positive impact of increasing S by one percentage point.8

  2. Making some proportional reduction in the loss of value from the future going poorly does more good than making the same proportional reduction in the risk of existential catastrophe. This is the formulation used in Cotton-Barratt’s formalisation of the scale, neglectedness, tractability framework, and the formulation we prefer.9

  • a. Again using the numbers we just suggested, a 1% reduction in the loss of value from the future going poorly (i.e. a 1% reduction in 1-F) has 36 times the positive impact of a 1% reduction in near-term existential risk (1-S).10

That is, given that we are further from the ceiling of F than we are of S, the problem of non-flourishing futures has greater scale than the problem of existential catastrophe. 
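As a check on the illustrative arithmetic above, here is a short Python sketch computing both formulations of comparative scale for S = 80% and F = 10%; the helper functions are our own framing of the formulas, not part of the formal model.

```python
# Worked check of the illustrative scale comparison, with S = 0.8 and F = 0.1.
# The expected value of the future is modelled as S * F throughout.

def gain_from_pp_increase(S, F, target, pp=0.01):
    """Gain in S*F from raising S or F by `pp` (absolute-change formulation)."""
    return pp * F if target == "S" else pp * S

def gain_from_pct_reduction(S, F, target, pct=0.01):
    """Gain in S*F from shrinking (1-S) or (1-F) by `pct` (proportional formulation)."""
    return pct * (1 - S) * F if target == "S" else pct * (1 - F) * S

S, F = 0.8, 0.1
print(gain_from_pp_increase(S, F, "F") / gain_from_pp_increase(S, F, "S"))      # ~8  -> the "8 times" figure
print(gain_from_pct_reduction(S, F, "F") / gain_from_pct_reduction(S, F, "S"))  # ~36 -> the "36 times" figure
```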
We think that the illustrative numbers we’ve given are reasonable, or even on the conservative side from the perspective of our argument. Below we report what, given our views, we see as a reasonable range of estimates for S and F, and the corresponding comparative scale of the problem of non-flourishing.
S | F | Gains from 1 pp increase in S | Gains from 1 pp increase in F | Ratio (comparative scale)
0.65 | 0.5 | 0.005 | 0.0065 | 1.3
0.95 | 0.05 | 0.0005 | 0.0095 | 19
0.99 | 0.01 | 0.0001 | 0.0099 | 99
Comparative scale, using “absolute change” formulation of scale.
S | F | Gains from 1% reduction in (1-S) | Gains from 1% reduction in (1-F) | Ratio (comparative scale)
0.65 | 0.5 | 0.00175 | 0.00325 | 1.9
0.95 | 0.05 | 0.000025 | 0.009025 | 361
0.99 | 0.01 | 0.000001 | 0.009801 | 9801
Comparative scale, using “proportional change” formulation of scale.
So we see the problem of non-flourishing as having 1.9x–9801x the scale of the problem of non-survival. Our own views lean towards the higher end of this range; given uncertainty over these parameters, we think that the scale of non-Flourishing is on the order of 100x the scale of non-Survival.
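The table rows can be reproduced directly from the formulas; the following sketch (again ours, in Python) simply restates them.

```python
# Reproduces the comparative-scale tables above from the formulas.
rows = [(0.65, 0.5), (0.95, 0.05), (0.99, 0.01)]  # (S, F) pairs used in the tables

for S, F in rows:
    # Absolute-change formulation: gains from a 1 pp increase in S or in F.
    gain_S_abs, gain_F_abs = 0.01 * F, 0.01 * S
    # Proportional formulation: gains from a 1% reduction in (1-S) or in (1-F).
    gain_S_prop, gain_F_prop = 0.01 * (1 - S) * F, 0.01 * (1 - F) * S
    print(S, F,
          round(gain_F_abs / gain_S_abs, 1),    # ratio, absolute-change formulation
          round(gain_F_prop / gain_S_prop, 1))  # ratio, proportional formulation
```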

Two additional arguments regarding Flourishing’s scale

First, at least some sorts of interventions to increase S are less valuable than commonly expected, because if human-originating life goes extinct, then non-human-originating life may settle the stars in its place. This could happen because Homo sapiens goes extinct, but, in the hundreds of millions of years remaining before the Earth is no longer habitable, nonhuman life on Earth evolves higher intelligence and cumulative cultural learning, and some nonhuman civilisation emerges. Or it could happen because extraterrestrial life settles our corner of the cosmos. Of course, it’s hard to know how likely each of these is, though we think it more likely than not that nonhuman life would develop civilisation even if Homo sapiens in isolation died out, and perhaps about even odds that alien civilisations would settle our corner of the cosmos if no Earth-originating intelligent life did so.
To formalize the first point in terms consistent with the framework above: S is higher than commonly understood, because e.g. a pandemic that killed every human but left other species intact would still leave the probability of a “survival” scenario—one in which Earth-originating life makes a significant difference to the value of the universe—significantly positive.
To formalize the second, the possibility of aliens renders F lower than commonly understood. Suppose that, before accounting for aliens, we believe that f* is the value (i.e. the impact of Earth-originating life on the value of the universe) of a best feasible future and f is the expected value of the future without our intervention, so that we believe F = f/f*. Then suppose we come to believe that, in the absence of Earth-originating life, aliens will in expectation use fraction A of the resources in our light cone that we would have used. Suppose for simplicity (i) that if we encounter an alien civilisation, the resources that we each would have used in the other’s absence will, in expectation, be split evenly, (ii) that we and the aliens will in expectation put resources to equally valuable uses, and (iii) that value is linear in resources. Now the value of a best feasible future is f* - fA/2 and the current expected value of the future is f - fA/2, so F = (f - fA/2) / (f* - fA/2) < f/f*.
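As a numerical illustration of this adjustment (with made-up values of f, f* and A, chosen only to show the direction of the effect):

```python
# Illustration of the alien adjustment to F; all inputs are made up.
f_star = 1.0  # value of a best feasible future, before accounting for aliens
f = 0.1       # expected value of the future without our intervention
A = 0.4       # expected fraction of "our" resources that aliens would otherwise use

F_without_aliens = f / f_star                           # 0.100
F_with_aliens = (f - f * A / 2) / (f_star - f * A / 2)  # ~0.082, lower as claimed

print(F_without_aliens, F_with_aliens)
```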
Strictly speaking, introducing the possibility of aliens also raises S considerably, since if AI takes over on Earth and spreads worthless structures through the universe, partially displacing aliens who would otherwise (in expectation) have put those resources to good use, this now constitutes a case of survival*. That is, the impact of Earth-originating life on the value of the universe in this case is not near-zero but very negative. In this case, however, the definition of survival* given above does not preserve the spirit of our argument; we don’t mean to argue for working on Flourishing rather than Survival simply by reclassifying attempts to prevent AI takeover as instances of the former. When comparing the value of working on Flourishing to that of working to reduce the risk that morally worthless AI takes over and spreads, under assumptions in the direction of (i)-(iii) and the assumption that A is non-negligible, it is more natural to define value as the difference that Earth-originating intelligent life makes to the value of the universe above the baseline of AI takeover by AIs that spread, are non-sentient, and have some valueless goal like producing paperclips. This definition preserves the point that the existence of aliens decreases F, and since in the absence of aliens the AI-takeover value-baseline is (near-)zero, it does not change our discussion on any other point.
The second additional argument regarding Flourishing’s scale concerns what our views on S and F should be given that we have successfully raised F or S.11

In particular, it seems to us that your E[F | you successfully raised S] should generally be lower than your unconditional E[F]:12

scenarios where you successfully raised S are probably scenarios where existential risk was high; such scenarios are probably a shambles in other ways, too, and not on track for a truly flourishing future.13

In contrast, we don’t see strong arguments for thinking that E[S | you successfully raised F] is lower than E[S].
To illustrate this quantitatively, suppose the world can be in one of two states: a ‘well-run’ state with Flourishing at 19%, or a ‘badly-run’ state with Flourishing only at 1%. If each state is equally likely, our unconditional expectation for Flourishing is 10%. Now, suppose the risk of an existential catastrophe is just 2% in the well-run state, but 38% in the badly-run state (giving an unconditional risk of 20%). On learning that a catastrophe would have occurred but for our successful intervention (raising S), we should update our beliefs. This new information is strong evidence we are in the badly-run state (a 95% posterior probability).14

Accordingly, our expectation for Flourishing, conditional on having raised S, plummets from 10% to just 1.9%. Using these illustrative numbers, raising S by one percentage point would be 5x less valuable than it would have naively appeared, before we took conditional expectations into account.
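Here is a short sketch of that update, in Python, using exactly the numbers in the text:

```python
# Two-state illustration: conditioning on having successfully raised S lowers E[F].
p_well, p_bad = 0.5, 0.5          # prior over the two world states
F_well, F_bad = 0.19, 0.01        # Flourishing in each state
risk_well, risk_bad = 0.02, 0.38  # existential risk in each state

E_F = p_well * F_well + p_bad * F_bad           # unconditional E[F] = 0.10
E_risk = p_well * risk_well + p_bad * risk_bad  # unconditional risk = 0.20

# Condition on "a catastrophe would have occurred but for our intervention".
post_bad = p_bad * risk_bad / E_risk                             # 0.95
E_F_given_raised_S = (1 - post_bad) * F_well + post_bad * F_bad  # 0.019

naive = 0.01 * E_F                    # naive value of raising S by 1 pp
adjusted = 0.01 * E_F_given_raised_S  # value after taking conditional expectations
print(E_F_given_raised_S, naive / adjusted)  # ~0.019 and ~5.3x
```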
It’s not inconsistent to think that, in some circumstances, your E[F | you successfully raised S] should be higher than your unconditional E[F]. One such argument is that we should, currently, be unsure about how much capability one can in principle reap from a given number of FLOP (i.e. what “optimal algorithmic efficiency” is). If optimal algorithmic efficiency is very high, then: (i) we should expect a faster transition to scary capability levels, and therefore existential risk from AI takeover is higher; but (ii) we should also expect that the maximal amount of value one can produce per FLOP is higher. If so, then conditional on your raising S by preventing AI takeover, you are more likely to be in the “high optimal algorithmic efficiency” scenario, and therefore E[F] is higher, too.
On balance, we think that the former of these arguments is stronger, in particular because it seems unclear to us how strong the correlations in (i) and (ii) really are, but we acknowledge that this is debatable.

Neglectedness

We expect the problem of non-flourishing to be more neglected than the problem of not-surviving, both now and in the near-term future. 
Of course, near-term existential risk is extremely neglected: even just from the perspective of the self-interest of the population of the United States, it gets far less attention than it should. But it has gathered at least a reasonable amount of attention in recent years. And we should expect that amount of attention to increase: most people don’t want to die, and as people progressively realise the extent of the existential risk posed by AI, engineered pathogens, and other dangerous technologies, we should expect them to invest far more time, money, and political attention in reducing those risks than they currently do.
Quantitatively, the willingness to pay to avoid existential catastrophe even just from the United States is truly enormous. The value of a statistical life in the US — used by the US government to estimate how much US citizens are willing to pay to reduce their risk of death — is around $10 million. The willingness to pay from the US as a whole to avoid a 0.1 percentage point risk of a catastrophe that would kill everyone in the US is therefore over $1 trillion.15

We don’t expect these amounts to be spent on existential risk reduction, but they show how much latent desire there is to reduce such risks. We’d expect at least part of this latent desire to become progressively mobilised with increasing indications that various global catastrophic risks, such as biorisks, are real. 
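For concreteness, the back-of-the-envelope calculation behind the “over $1 trillion” figure above looks roughly as follows (the population figure is our approximation):

```python
# Rough willingness-to-pay calculation behind the "over $1 trillion" figure.
value_of_statistical_life = 10e6  # ~$10 million, the figure used by the US government
us_population = 340e6             # approximate US population (our assumption)
risk_reduction = 0.001            # 0.1 percentage points

willingness_to_pay = value_of_statistical_life * us_population * risk_reduction
print(f"${willingness_to_pay:,.0f}")  # ~$3.4 trillion
```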
No comparable latent demand exists for improving Flourishing, at least for many sub-areas within Flourishing.16

The amount of even latent interest in, for example, ensuring that resources outside of our solar system are put to their best use, or that misaligned AI produces a somewhat-better future than it would otherwise have done even if it kills us all, is tiny, and we don’t expect society to mobilise massive resources towards these issues even if there were indications that those issues were pressing. We expect them to remain extraordinarily neglected.
Even areas that seemingly get a lot of attention can still contain highly neglected sub-problems, if attention within the area is not focused on the most effective interventions. For example, the risk of autocracy is well-known and a lot of people focus on it; but very few people focus on how to reduce the risk of AI-enabled military coups, even though interventions there are plausibly among the most effective currently available.
And, even among those who take longtermism and the possibility of an imminent intelligence explosion seriously, there is currently much less attention on F than on S. AI safety and biorisk reduction have, thankfully, gotten a lot more attention and investment in the last few years. There has been less of an uptick in work dedicated to improving the quality of the future, and such work remains extremely neglected even within this community. 
All things considered, we expect Flourishing to be something like 10x–100x more neglected than Survival, in terms of quality-adjusted financial and labour allocation. Of course, this is just a rough guesstimate. 

Tractability

The tractability of work on Flourishing is much less clear and, in our view, if the case for work on Flourishing fails, it probably fails here.
In at least some cases, there are reasons for pessimism about tractability:
  • First, the main argument we made for the relative neglectedness of Flourishing might be an argument against relative tractability. If there’s a lot of latent desire to reduce existential risk, and the blocker is merely that people don’t know how high existential risk really is, then it might be much easier to unlock that latent desire than it is to convince other people to care about something that they don’t, currently, care about.  
    • Even worse, there might be active opposition to certain sorts of work on Flourishing, at least in situations where steering us towards a better future involves acting, to some degree, against the self-interest of the present generation. For example, people might for self-interested reasons want to give AIs no welfare rights, even if granting such rights were the right thing to do in the long term.
  • Second, at least some Flourishing-related issues are less likely to have technical fixes that can be implemented unilaterally than many Surviving-related issues. To reduce pandemic risk, individual actors (like companies, foundations, or countries) can produce stockpiles of PPE, build bunkers, or subsidise the development of physical sterilisation methods like far-UVC lighting. To reduce AI takeover risk, individual actors can invest heavily in alignment research. In contrast, there seem to be fewer actions of this sort for at least many areas to promote Flourishing: the most obvious actions around AI rights, space governance, or collective decision-making require regulation, and perhaps often international regulation. This is generally harder to achieve, because success requires convincing other actors of the importance of the issue and steering other actors to make better decisions on those issues than they otherwise would have done.
  • Third, some Flourishing-related issues (such as deep space governance) arise deeper into the intelligence explosion than threats to Survival. This means that:
    • More quality-adjusted AI labour will go towards the issue.
    • Early work will be more likely to be irrelevant.
    • Early action will be more likely to have difficult-to-predict consequences.
On the last of these points, some similar issues are discussed in “Preparing for the Intelligence Explosion”, section 5.17

  In brief: we think that this often does result in a major haircut on the value of working on these issues, although in many cases even with a 90% haircut the expected value can still be very high. And in many cases early work really is very high-leverage: some challenges arise early; some windows of opportunity close early; and early work can change when and how superintelligent assistance is used on these problems.
Finally, one of the strongest arguments against the tractability of better futures work is that it simply won’t pay off in time. It took many years for the fields of AI safety and AI governance to develop; but by that time, we might well already have superintelligence. Similarly, it may take years for work in these currently-neglected areas to pay off. Work in these particularly-neglected better futures areas, then, looks comparatively better in worlds where AGI comes 3+ years in the future. 
But there are some reasons for optimism on tractability as well. First, there are some quite general reasons:
  • In general, tractability doesn’t vary by as much as importance and neglectedness. 
    • Assuming logarithmic returns, for a problem to be an order of magnitude less tractable than a baseline, we would require approximately 1,000 times the resources to achieve 10% progress (see the sketch after this list). Conversely, problems an order of magnitude more tractable would be nearly fully solved with a single doubling; these must be rare if they are also large-scale and neglected. Thus, problems which seem at all tractable often fall within roughly two orders of magnitude of tractability.
    • Truly intractable problems usually meet specific conditions: they might not be solvable even in theory (like perpetual motion machines), or there's no discernible “plan of attack” (as Hamming noted regarding fundamental physics challenges like time travel). The sub-areas within Flourishing that we highlight don’t seem to meet these criteria for extreme intractability.
  • In cause areas where very little work has been done, it’s hard for expected tractability to be very low. Because of how little we know about tractability in unexplored cause areas, we should often put significant credence on the idea that the cause will turn out to be fairly tractable; this is enough to warrant some investment into the cause area — at least enough to find out how tractable the area is.18

  • There are many distinct sub-areas within better futures work. It seems unlikely that tractability in all of them is very low, and unlikely that their tractability is very highly correlated.
  • In at least some cases, there’s an opportunity for “pulling the rope sideways”, at least for now. For example, because there is very little attention on the governance of extrasolar resources, one might be able to advocate for and get uptake of sensible proposals with very little pushback.
  • There’s a reasonably promising track record of areas with seemingly low tractability turning out to be surprisingly tractable. A decade ago, work on risks from AI takeover and engineered pathogens seemed very intractable: there was very little that one could fund, and very little in the way of promising career paths. But this changed over time, in significant part because of (i) research work improving our strategic understanding, and shedding light on what interventions were most promising; (ii) scientific developments (e.g. progress in machine learning) making it clearer what interventions might be promising; and (iii) the creation of organisations that could absorb funding and talent. All these same factors could well be true for better futures work, too.
  • Early work in a seemingly-intractable area can take the form of field-building, which pays off when it becomes clearer what tractable paths forward there are. Much early work on AI safety paid off in this way; arguably, this is still the main path to impact from current AI safety work.
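To spell out the logarithmic-returns point from the list above, here is a toy sketch; treating “tractability” as the expected fraction of a problem solved per doubling of resources is our own illustrative simplification.

```python
# Toy model of logarithmic returns: progress is proportional to the number of
# doublings of resources, scaled by a per-doubling "tractability" factor.

def resources_multiple_needed(progress_target, progress_per_doubling):
    """Multiple of current resources needed to reach `progress_target` under log returns."""
    doublings = progress_target / progress_per_doubling
    return 2 ** doublings

baseline = 0.10    # baseline problem: roughly solved over ~10 doublings (illustrative)
ten_x_less = 0.01  # a problem an order of magnitude less tractable

print(resources_multiple_needed(1.00, baseline))        # 1024x to roughly solve the baseline
print(resources_multiple_needed(0.10, ten_x_less))      # 1024x, i.e. ~1,000x for just 10% progress
print(resources_multiple_needed(1.00, 10 * baseline))   # 2x: a 10x more tractable problem is
                                                        # ~solved with a single doubling
```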
Of the considerations above, it’s the last two that move us the most. It doesn’t feel long ago that work on AI takeover risk seemed extraordinarily speculative and low-tractability, when there was almost nowhere one could work or donate outside of the Future of Humanity Institute or the Machine Intelligence Research Institute. In the early days, we were personally very sceptical about the tractability of the area. But we’ve been proved wrong. Via years of foundational work — both research figuring out what the most promising paths forward are, and the founding of new organisations squarely focused on the goal of reducing takeover risk or biorisk, rather than on similar but tangential goals — the area has become tractable, and there are now dozens of great organisations that one can work for or donate to. We expect that, with effort, a similar dynamic would play out for better futures work. Foundational research will help us figure out the most promising paths. Founding new organisations will generate great places to work for or donate to.
As well as general considerations around tractability, we can look at specific areas. One issue facing better futures work is that many of the most neglected issues concern technologies or political developments that haven’t occurred yet. However, the same is true for AI takeover: the central challenge, there, is how to align superintelligent AI systems, which are not yet here. The path that AI safety work has taken is to work on “baby” versions of the problem: developing tools and techniques (like mechanistic interpretability) that are both helpful to aligning present-day AI models, and will scale to be useful to aligning superintelligent AIs. Even more importantly, this work also helps to build a vibrant field around AI safety, increasing the amount of skilled labour able to tackle the problem once we are training superintelligence.
We can use the same strategy in many other areas. First, consider AI for persuasion, reasoning, and decision-making. Already there are major worries about AI distorting societal epistemics. Currently, these worries are often misguided or overblown. But as AI becomes more powerful, how AI is incorporated into society’s epistemology might become of critical importance: people could lock in their pre-existing beliefs; they could become convinced of memetically powerful but false ideas; they could simply fail to keep up with all the change that’s occurring over the course of the intelligence explosion; or they could use AI to become much more curious and enlightened.
Second, the alignment target. As well as making sure that AI is aligned, we need to figure out what AI is aligned with. Ideally, we want to design an alignment target such that (i) even if we fail to align the superintelligence and it disempowers humanity, that superintelligence still produces a good future; (ii) if we do succeed at alignment, the superintelligence guides us on the path to viatopia, rather than some sort of catastrophic lock-in. The “baby” version of the problem is to work out what goals or values to align current AI systems with. 
Third, deep space governance. We are not yet at the point of seriously considering how extrasolar resources should be allocated among people and states. But the question of rights over, say, asteroid mining is live and will be decided upon soon. How we answer such questions might inform and set precedents for how larger-scale resource-use questions are settled.
In each of these cases, one can work on baby versions of the challenges today. And work on those issues today sets precedents that will guide the much larger decisions that are yet to come, can generate insights that might be useful to those larger-scale challenges, and builds up a field of people who can work on the most-crucial issues when the time comes. 
Overall, we do think that the tractability of work on Flourishing is lower than work on Surviving. This is the least certain parameter, but we think that a factor of 10x-100x lower is reasonable.  In other words, we think that the neglectedness and tractability considerations approximately cancel out.

Personal Fit

The ideal portfolio of action would include significant work on both Survival and Flourishing, such that, for career choice at least, personal fit will often be the determining factor.
Better futures work seems unusually well-suited to people in or adjacent to the effective altruism movement. It particularly benefits from: having generalist research knowledge; being comfortable in messy pre-paradigm fields; being willing to shift quickly between more theoretical and very concrete work; taking ethics really seriously, including more abstract or unusual ethical ideas like digital rights; and understanding the dynamics of a potential intelligence explosion.
That said, better futures work is highly inchoate and pre-paradigm, which means it’s often hard to contribute meaningfully. The most important work, right now, is on making the area more tractable, whether via fundamental research that helps us figure out which strategies are most promising, or by setting up organisations focused on the most important goals.
For many people, the best strategy right now would be to focus on building up either career capital or financial capital, in order to be able to contribute more within better futures once there are more opportunities to do so. (Better futures work makes financial capital particularly valuable, especially insofar as many of these areas are ones that the foundation Open Philanthropy would not want to fund, unlike AI takeover risk and pandemic risk.) For many other people, the personal fit costs will be too great, and it’ll be higher-impact to focus on the more well-developed areas of existential risk reduction.

Conclusion

Putting these pieces together, we think that Flourishing is at least in the same ballpark of priority as Survival. The SF model suggests a diversified portfolio of interventions is optimal. The greater scale and neglectedness of Flourishing-focused work compensate for its currently lower and more uncertain tractability. For those whose personal fit is good — especially those comfortable with messy, pre-paradigmatic problems — work on Flourishing seems promising.

Bibliography

Scott Alexander, ‘The Tails Coming Apart As Metaphor For Life’, Slate Star Codex, 25 September 2018.
Owen Cotton-Barratt, ‘Cost-effectiveness of research: overview’, The Future of Humanity Institute, 4 December 2014.
Owen Cotton-Barratt, ‘How to treat problems of unknown difficulty’, The Future of Humanity Institute, 29 July 2014.
Owen Cotton-Barratt, ‘Prospecting for Gold’, 2016.
Hilary Greaves and Will MacAskill, ‘The case for strong longtermism’, 14 June 2021.
Will MacAskill and Fin Moorhouse, ‘Preparing for the Intelligence Explosion’, Forethought.
Thrasymachus, ‘Why the tails come apart’, 1 August 2014.
Phil Trammell, ‘Which World Gets Saved’, 9 November 2018.
