Introducing Better Futures

Released on 3rd August 2025
William MacAskill
1. The basic case

Suppose we want the future to go better. What should we do?
One prevailing approach is to try to avoid roughly zero-value futures: reducing the risks of human extinction or of misaligned AI takeover.
This essay series will explore an alternative point of view: making good futures even better. On this view, it’s not enough to avoid near-term catastrophe, because the future could still fall far short of what’s possible. From this perspective, a near-term priority — or maybe even the priority — is to help achieve a truly great future.
That is, we can make the future go better in one of two ways:
  1. Surviving: Making sure humanity avoids near-term catastrophes (like extinction or permanent disempowerment).1

  2. Flourishing: Improving the quality of the future we get if we avoid such catastrophes.
This essay series will argue that work on Flourishing is in the same ballpark of priority as work on Surviving. The basic case for this appeals to the scale, neglectedness and tractability of the two problems, where I think that Flourishing has greater scale and neglectedness, but probably lower tractability. This section informally states the argument; the supplement (“The Basic Case for Better Futures”) makes the case with more depth and precision.

Scale

First, scale. As long as we’re closer to the ceiling on Survival than we are on Flourishing — if there is more room for improvement on the latter — then Flourishing has greater scale.
To illustrate, suppose you think that our chances of survival this century are reasonably high (greater than 80%) but that, if we survive, we should expect a future that falls far short of how good it could be (less than 10% as good as the best feasible futures). These are close to my views; the view about Surviving seems widely held,2 and Fin Moorhouse and I will argue in essays 2 and 3 for something like that view on Flourishing. If so, there’s more room to improve the future by working on Flourishing than by working on Surviving.
[Figure: Comparing the scale of Surviving and Flourishing. A small red area represents the 20% risk of not surviving; a much larger blue area represents the 72% of potential value lost to non-Flourishing, illustrating the 36-fold difference in scale.]

On these numbers, if we completely solved the problem of not-Surviving, we would be 20 percentage points more likely to get a future that's 10% as good as it could be. Multiplying these together, the difference we’d make amounts to 2% of the value of the best feasible future.
In contrast, if we completely solved the problem of non-Flourishing, then we’d have an 80% chance of getting to a 100%-valuable future. The difference we’d make amounts to 72% of the value of the best feasible future — 36 times greater than if we’d solved the problem of not-Surviving. Indeed, increasing the value of the future given survival from 10% to just 12.5% would be as good as wholly eliminating the chance that we don't survive.3

And the upside from work on Flourishing could plausibly be much greater still than these illustrative numbers suggest. If Surviving is as high as 99% and Flourishing as low as 1%, then the problem of non-Flourishing is almost 10,000 times as great in scale as the risk of not-Surviving. So, for priority-setting, the value of forming better estimates of these numbers is high.4

Surviving = probability of avoiding a ~zero-value future; Flourishing = value of the future, as a fraction of the best feasible future, if we avoid such a catastrophe.

Surviving    Flourishing    Relative scale of non-Flourishing to not-Surviving
0.8          0.1            36
0.95         0.05           361
0.99         0.01           9,801

Comparing the value of fully solving non-Flourishing with fully solving not-Surviving, given different default estimates of Surviving and Flourishing.
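The arithmetic behind the table can be reproduced in a few lines of Python (a sketch; the function and variable names are mine, not the essay’s):

```python
def relative_scale(s: float, f: float) -> float:
    """Ratio of the value of fully solving non-Flourishing to the value of
    fully solving not-Surviving.

    s: probability of Surviving (avoiding a ~zero-value future).
    f: default Flourishing level, i.e. the value of the future, as a
       fraction of the best feasible future, given survival.
    """
    # Solving not-Surviving raises the survival probability from s to 1,
    # each extra survival world being worth f:
    gain_from_surviving = (1 - s) * f        # e.g. 0.2 * 0.1 = 0.02 (2%)
    # Solving non-Flourishing raises the value-given-survival from f to 1,
    # in the s worlds where we survive anyway:
    gain_from_flourishing = s * (1 - f)      # e.g. 0.8 * 0.9 = 0.72 (72%)
    return gain_from_flourishing / gain_from_surviving

for s, f in [(0.8, 0.1), (0.95, 0.05), (0.99, 0.01)]:
    print(f"s={s}, f={f}: relative scale = {round(relative_scale(s, f))}")
```

Running this reproduces the table’s third column: 36, 361, and 9,801.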
A further argument about scale comes from considering which worlds are saved by working on Survival, or improved by working on Flourishing. Conditional on successfully preventing an extinction-level catastrophe, you should expect Flourishing to be (perhaps much) lower than otherwise, because a world that needs saving is more likely to be uncoordinated, poorly directed, or vulnerable in the long run. So the value of increasing Survival is lower than it would first appear. On the other hand, there is little reason to believe that worlds where you successfully increase Flourishing are ones in which the chance of Surviving is especially low. So this consideration differentially increases the value of work on Flourishing.5

Neglectedness

Second, neglectedness. Most people in the world today, given both their self-interest and their moral views, care much more about avoiding near-term catastrophe (including risks to their own lives and their families’ lives) than they do about long-term flourishing. So we should expect at least some aspects of Flourishing to be much more neglected by the wider world than risks to Survival.6

Work on Flourishing currently seems more neglected among those motivated by longtermism, too.
This neglect arises in part because the risks of failure in Flourishing are often much more subtle than the risk of near-term catastrophe. The future could even be truly wonderful, compared to the current world, yet still fall radically short of what’s possible. Ask someone to picture utopia, and they might describe a society like ours, but free from its most glaring flaws, and abundant with those things we currently want. But the difference in value between the world today and that common-sense utopia might be very small compared to the difference between that common-sense utopia and the best futures we could feasibly achieve.
[Figure: Comparing the value of possible futures. A value spectrum runs from existential catastrophe at 0 to near-best futures at 1, with the present day and common-sense utopia clustered near the low end and a vast unexplored space above them. The “present-day” future means a future which extends the most relevant features of the world today, for as long as the common-sense utopia lasts, and considering human lives only.]

Tractability

The tractability of work to improve Flourishing is less clear; essays 4 and 5 will discuss this more. I see this as the strongest argument against the better futures perspective, and the reason why I don’t feel confident that work on Flourishing is higher-priority than work on Surviving, rather than merely in the same ballpark.
But at the very least I think we should try to find out how tractable work to improve Flourishing is. Some promising areas include: reducing the risk of human concentration of power; ensuring that advanced AI is not merely corrigible but also loaded with good, reflective values; and improving the quality of decisions that structure the post-AGI world, including around space governance and the rights of digital beings.

2. The series

In the rest of the series, I argue:
  • We are unlikely to get a flourishing future by default even if we avoid catastrophe, because a flourishing future is a narrow target (essay 2) and it’s unlikely that future people will home in on that target (essay 3)7

  • It’s possible to have persistent positive impact on how well the long-run future goes other than by avoiding catastrophe (essay 4)
  • There are concrete things we could do to this end, today (essay 5)
There’s a lot I don’t cover, too, just because of limitations of space and time. For an overview, see this footnote.8

3. What Better Futures is not

Before we dive in, I want to clarify some possible misconceptions.
First, this series doesn’t require accepting consequentialism, which is the view that the moral rightness of an act depends wholly on the value of the outcomes it produces. It’s true that my focus is on how to bring about good outcomes, which is the consequentialist part of morality. But I don’t claim you should always maximize the good, no matter the self-sacrifice, and no matter what means are involved. There are lots of other relevant moral considerations that should be weighed when taking action, including non-longtermist considerations like special obligations to those in the present (which generally favour interventions to increase Survival). But long-term consequences are important, too, and that’s what I focus on.9, 10
Second, this series doesn’t require accepting moral realism, which I’ll define as the view that there are objective facts about value, true independently of what anyone happens to think.11 Whether or not you think there are objective moral facts, you can still care about how the future goes, and worry that the future will not be in line with your own values, or the values you’d have upon careful reflection. I’m aware that this series often uses realist-flavoured language, which is simpler and reflects how I personally tend to think about ethics. But we can usually just translate between realism and antirealism: where the realist speaks of the “correct” moral view, the antirealist could think about “the preferences I’d have given some ideal reflective process”.12

Third, this series isn’t in opposition to work on preventing downsides, like “s-risks” (risks of astronomical amounts of suffering), which also affect “Flourishing” rather than “Survival”. We should take such risks seriously: depending on your values and your takes on tractability, they might be the top priority, and their importance comes up repeatedly in the next two essays. The focus of this series, though, is generally on making good futures even better, rather than avoiding net-negative futures.13

Fourth, the better futures perspective doesn’t mean endorsing some narrow conception of an ideal future, as past utopian visions have often done. Given how much moral progress we should hope to make in the future, and how much we’ll learn about what’s even empirically possible, we should act on the assumption that we have almost no idea what the best feasible futures would look like. Committing today to some particular vision would be a great mistake.
A central concept in my thinking about better futures is that of viatopia, which is a state of the world where society can guide itself towards near-best outcomes, whatever they may be.14

We can describe viatopia even if we have little conception of what the desired end state is. Plausibly, viatopia is a state of society where existential risk is very low, where many different moral points of view can flourish, where many possible futures are still open to us, and where major decisions are made via thoughtful, reflective processes. From my point of view, the key priority in the world today is to get us closer to viatopia, not to some particular narrow end-state. I don’t discuss this concept further in this series, but I hope to write more about it in the future.
With that, let’s jump in.

Better Futures: Article Series, Part 1 of 6