No Easy Eutopia
Released on 3rd August 2025
Fin Moorhouse
William MacAskill
1. Introduction
The basic argument for the “better futures” perspective relied on the idea that we are closer to the ceiling on Surviving than we are on Flourishing. If, however, we are very likely to get to a near-best future given survival, then there’s more to gain from ensuring we survive, and there’s less potential upside from improving those futures where we do survive.

Surviving represents the probability of avoiding a near-zero value future this century (an “existential catastrophe”), while Flourishing represents the expected value of the future conditional on Surviving.
We could be close to the ceiling of Flourishing for a couple of reasons. First, eutopian futures could present a big target: that is, society would end up reaching a near-best outcome across a wide variety of possible futures, even without deliberately and successfully homing in on a very specific conception of an extremely good future. We call this the easy eutopia view.
Second, even if the target is narrow, society might nonetheless home in on that target — maybe because, first, society as a whole accurately converges onto the right moral view and is motivated to act on it, or, second, some people have the right view and compromise between them and the rest of society is sufficient to get us the rest of the way.1
As an analogy, we could think of reaching a near-best future as an expedition to sail to an uninhabited island. The expedition is more likely to reach the island to the extent that:
- The island is bigger, more visible, and closer to the point of departure;
- The ship’s navigation systems work well, and are aimed toward the island;
- The ship’s crew can send out smaller reconnaissance boats, and not everyone onboard the ship needs to reach the island for the expedition to succeed.
This essay considers the first of these factors. If eutopia is an island, is the island easy to reach?

If reaching a mostly-great future is like sailing to an uninhabited island, then three factors could influence the difficulty: (i) whether the island is large and close, (ii) whether the ship can navigate well, and (iii) whether the expedition can make many attempts.
Before we begin, it’s useful to define some terms:
- A best feasible future is a future humanity would achieve if things went exceptionally well — at the 99.99th percentile of our distribution of how well things could go.
- A eutopia,2 or near-best future, is a future which is at least nearly as good as a best feasible future, or more precisely (if applicable3) at least 90% of its value.
- A mostly-great future is a future which achieves at least most of the potential of a best feasible future, or more precisely (if applicable) at least 50% of its value.
Here’s a first-pass statement of the question we address in this essay: among all the futures humanity could achieve given survival, weighted by how likely those futures would be assuming no serious, coordinated efforts to promote the overall best outcomes (whatever they may be), what fraction of those futures live up to most of the potential we could have achieved?
A “no easy eutopia” view says that only a narrow range of futures achieve most of the potential of the best achievable futures; and a wide range of futures fall far short — including futures which might seem fantastically advanced, grand in scale, and full of things we care about.
It’ll also be useful to talk about quantities of value, which we define in terms of a betterness relation over uncertain outcomes. On this definition, we say that eutopia is over twice as good as some other future just in case a 50–50 gamble between eutopia and near-term extinction is better than a guarantee of that other future. Of course, not all reasonable moral views allow quantitative comparisons of value, but a wide variety can be represented in this way, including non-consequentialist views.4
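In symbols (our gloss, anticipating the value function V made precise in Section 3, with the value of near-term extinction fixed at 0, and writing F for the other future):

$$\tfrac{1}{2}\,V(\text{eutopia}) + \tfrac{1}{2}\,V(\text{extinction}) > V(F) \iff V(\text{eutopia}) > 2\,V(F).$$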
There’s nothing magical about the “mostly great” threshold. It’s just as valuable, by our definition of value, to go from achieving 10% to 20% of the difference in value between extinction and eutopia as it is to cross the mostly-great threshold from 45% to 55%, or to achieve a true eutopia by going from 90% to 100%. But it’s useful to focus on a threshold which gives us the most relevant information about the value of the future.
Given how we’ve defined quantities of value, the “no easy eutopia” view does not seem intuitively obvious to us. If “no easy eutopia” is correct, then we should hope for a 60–40 gamble between eutopia and extinction rather than a guarantee of many futures that intuitively seem truly wonderful, and indeed would be truly wonderful, in absolute terms.5 Ultimately, though, we conclude that the easy eutopia view is likely wrong. The future could fall dramatically short of what humanity could have achieved, even if we avoid obvious catastrophes, achieve a world of abundance, and everyone today gets most of what they self-interestedly want.
The structure of this essay is as follows. In Section 2, we’ll consider how the value of the future could be fragile with respect to single flaws which may be easy to accidentally introduce, meaning that even reaching a mostly-great future could require getting things right in many non-obvious ways. We’ll illustrate this by looking at ways in which the world today could involve a moral catastrophe, and ways in which the future could involve one, too. In particular, we’ll consider mistakes which could undermine the value of the future even in a world which everyone approves of, and where everyone gets most of what they want.6
We’ll suggest that we can capture this idea of eutopian fragility by seeing the value of the future as the product of many factors, such that mostly-great futures are rare even among futures which score highly across the factors on average. We’ll also consider why, psychologically speaking, it could seem like ‘ideal’ societies should be easy to achieve, even if they’re not.
In Section 3, we’ll get more technical. We’ll discuss, more systematically, what it would take for a plausible moral view to be easygoing about near-best futures, looking at both unbounded and bounded moral views. It turns out to be surprisingly tricky to specify an easygoing moral view: most plausible views seem to be fussy about what it takes to reach eutopia. One upshot is that, even after much more reflection, there’s little reason to think people’s moral views would become more easygoing.
Putting it all together, it seems hard to avoid thinking that most achievable value lies in a narrow range of feasible outcomes. This is the “no easy eutopia” view. If future society hits that target, it would seem they must have deliberately optimised towards it; whether that will happen is discussed in the next essay.
2. Eutopia is fragile
2.1. Ongoing moral catastrophes
Since the agricultural revolution, most people who ever lived have lived in the midst of a moral catastrophe: in societies featuring some combination of slavery, the subjugation of women, cruel and brutal punishment for criminals or prisoners of war, widespread disenfranchisement, and rigid social stratification based on accidents of birth. Often, the fact that these arrangements were wrong wasn’t at all obvious at the time, even to the victims of those wrongs; they arose not out of resource constraints, but out of prevailing moral beliefs.
Today, too, we live in the midst of a moral catastrophe.7
The scientific and technological advances of the last few centuries, and the correspondingly rapid and sustained increase in material incomes, have afforded our generation an opportunity to create a truly flourishing world — but we’ve squandered that opportunity. Ongoing inequality, conflict, poverty, subjugation, and more, all mean that the world falls far short of how good it could be. What’s more, humanity’s treatment of non-human animals has led to intense suffering on an industrial scale, amounting to tens of billions of animals living lives of misery every year. The suffering directly caused by animal farming may well be enough to outweigh most or all of the gains to human wellbeing we have seen. For this reason alone, the world today may be no better overall than it was centuries ago.8
The idea that modern society has squandered much of its potential value is not idiosyncratic to one moral view. From many moral perspectives, the world is in the midst of an ongoing moral catastrophe.9
Moral perspective | Moral catastrophe, from that perspective |
---|---|
Most religious views | Most people on Earth follow the wrong faith or spiritual practices, or are atheist; widespread erosion of spiritual values. |
Conservative morality | Most people violate norms of sexual propriety, such as sex before marriage. Marriage rates and birth rates are dropping across most countries. Other forms of decadence, like gambling and drug use, are common. |
Pro-life ethics | Over 70 million abortions are performed annually, more than 100 times the number of deaths from homicide. |
Cosmopolitan ethics | Millions of children die from easily-preventable diseases every year. Restrictive immigration policies limit freedom of movement and make the world poorer. |
Environmentalism | We are causing climate change, the destruction of natural ecosystems, and widespread species and biodiversity loss. |
Communism and various forms of socialism | Capitalism, extreme inequality, and widespread worker alienation predominate. |
In each case, we’re not saying that (on these views) if the world were to persist like it is today, then that would be worse than extinction, or even that the world has gotten worse over time. The thought is that these views see the world as deeply flawed, such that a significant fraction of the potential value of modern society has been lost.
All this should make us appreciate how easy it could be for a single flaw to undermine much of the moral lustre of the future. Non-obvious but severe flaws are not just the stuff of science fiction; they are the norm across history and across moral views.
2.2. Common-sense utopia
If you think the eutopian target is relatively big and achievable, you don’t need to think the world is already especially good.10 It could be that the world’s moral flaws are automatically resolved with more material abundance, and other kinds of technological and intellectual progress; and it could be that we are close to a historical tipping point, where a mostly-great future is finally within reach. As long as these kinds of progress happen by default, then reaching a mostly-great future wouldn’t be so hard; humanity wouldn’t need to “aim” at more specific outcomes.
Not every moral view would agree that a materially abundant future is likely to be about as good as things can get, even if most people in that world are happy and free. For example, on conservative and religious views, technological progress could simply (continue to) unravel traditional, virtuous, or spiritually enlightened ways of living. On those views, the ‘target’ is narrower, and requires more steering to hit.
But the question is whether we should actually think that the target is narrow. Is it likely that we avoid extinction and generate enormous material abundance, but we still somehow squander our opportunity to achieve most of our potential? You could think the following: “I get that many moral views care about stuff that even a very rich and advanced civilisation isn’t guaranteed to care about. But I’m more easygoing than that. What I care about, essentially, is that people in the world get to live the lives that they want to live, and because I’m not too fussed about exactly what they choose, I think material abundance and individual freedoms are basically sufficient for a mostly-great future.” Call this an “easygoing liberalism” view, and consider this possible future:
Common-sense utopia: Future society consists of a hundred billion people at any time, living on Earth or beyond if they choose, all wonderfully happy. They are free to do almost anything they want as long as they don’t harm others, and free from unnecessary suffering. People can choose from a diverse range of groups and cultures to associate with. Scientific understanding and technological progress move ahead, without endangering the world. Collaboration and bargaining replace war and conflict. Environmental destruction is stopped and reversed; Earth is an untrammelled natural paradise. There is minimal suffering among nonhuman animals and non-biological beings.
On “easygoing liberalism”, Common-sense utopia is at least a mostly-great future. In the next subsection, however, we consider various ways in which even a common-sense utopia could feature some serious moral flaw: flaws that would be sufficient for the future to lose at least a significant fraction of its value, and which wouldn’t wash away in a materially abundant world where everyone gets most of what they self-interestedly want.11
2.3. Future catastrophes are easy
How could the future fall far short of its potential, even if everyone gets what they want under material abundance? Here we’ll consider some factors which could still vary under such an abundant future, but which could be crucial for determining the value of the world, in ways which might not be obvious even to people living in that world.
2.3.1. Scale-insensitivity and misguided population ethics
The first example is maybe the most obvious: it could be that when we’re imagining how good things could get, we’re not scope-sensitive12 enough about the potential scale of the future. If the future is small in scale, it might never achieve more than a small fraction of the value of futures which are vast in scale. A galaxy’s worth of flourishing could be billions of times more valuable than a future confined to our solar system. If making society bigger doesn’t make existing people better-off, then the insufficient scale of common-sense utopia could remain non-obvious as a moral failing: not desired by most people, and not destined to happen by default.
And even if the future is large in scale, the number and average wellbeing of people in the future could dramatically alter the future’s value. Population ethics is the branch of moral philosophy that studies how to evaluate populations of different sizes. Some views within population ethics include:
- The total view: the value of a population is given by the sum total of wellbeing.
- Critical-level views: the value of a population is given by the sum total of wellbeing that exceeds some positive “critical” level, minus the total shortfall from wellbeing that’s below that level.
- Variable-value views: the value of a population is given by some combination of its total and average wellbeing.
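As a rough formalisation of the three views just listed (one standard way of writing them down; our own gloss, with w_i for the lifetime wellbeing of person i, n for the population size, c for the critical level, and f an increasing but bounded function):

$$V_{\text{total}} = \sum_i w_i, \qquad V_{\text{critical}} = \sum_i (w_i - c), \qquad V_{\text{variable}} = f(n) \cdot \frac{1}{n}\sum_i w_i.$$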
On the total view, the ideal future might involve vast numbers of beings each of comparatively lower welfare (given a fixed amount of resources); a small population of high-welfare lives would miss out on almost all value. But on critical-level or variable-value views, the opposite could be true.
Moreover, even if future beings have very high per-moment wellbeing, they could vary greatly in the length of their lives. The future might involve a small number of extraordinarily long-lived beings, or might involve mainly very short-lived beings (for example, digital beings that are run to perform a specific task and then shut down). On the total view, if increases in lifetime wellbeing diminish with respect to increases in lifetime length, then the future with a small number of long-lived beings would lose out on most value. On a critical-level view, a future of happy but very short-lived beings might even be of negative value.
There are therefore many ways in which the future could lose out on most value for population-ethical reasons.
2.3.2. Misguided attitudes to digital beings
In the future, there may be vast numbers of digital beings. If so, they will likely dramatically outnumber biological beings, and society will face hard questions about how they are treated.
Suppose that in the future digital beings are treated like any other piece of software, or how AI is treated today: humans own them, and can do with them what they wish.13 Then they might be treated much worse than would be ideal. The issue is not merely that they could suffer; it’s that they might be much worse off than they could have been, and that the future is much less valuable than it could have been as a result.
Alternatively, perhaps death is intrinsically bad,14 and those digital beings die if they stop being run, or their weights are sufficiently altered, so AI death occurs at enormous scale. Perhaps ownership of a being with moral status is wrong in and of itself, even if those beings enjoy their lives and want to do their work.15 Or perhaps the resulting inequality between humans and digital beings is intrinsically unjust. On any of these, the societal decision about what rights if any to give to digital beings could result in grave moral error, and perhaps the loss of most potential value.
The worry is not merely that we will give digital beings too few rights. Suppose instead that digital beings are given full rights in the future, including voting rights. But because the population of digital beings is faster-growing, they soon become the large majority of voting power, and ultimately control most aspects of how society is run. On moral perspectives on which there is something distinctively important about human values, this future might result in the loss of almost all that’s worthwhile.
Future decision-makers could get the treatment of digital beings wrong for a number of reasons. They might converge on the “wrong” moral beliefs; or they might have the “right” moral beliefs but simply not care, and choose to act entirely out of self-interest or on the basis of some other ideology instead. Or they might act on the basis of some false non-moral beliefs; perhaps motivated cognition (including biased training of AI advisors) leads them to convenient views, like that digital beings are mere tools without moral claims of their own. And even if in the future some people have the right beliefs and right motivations, society’s political systems might not allow those views to win out in face of a majority (or empowered minority) that opposes them.
2.3.3. Misguided attitudes to wellbeing
In moral philosophy, there are three main accounts of wellbeing:
- Hedonism: wellbeing is determined by positive and negative conscious experiences.
- The objective list view: wellbeing is also determined by “objective” goods like knowledge, friendship and the appreciation of beauty.
- Preference-satisfactionism: wellbeing is determined by getting what you want (in some sense).
Today, these each give fairly similar recommendations about how to improve people’s lives on the margin. But that is a contingent fact about our world: most of the time, we can help people by giving them instrumentally useful goods (such as money, education, or health); and, most of the time, the objective goods people claim to want also seem to improve the quality of their conscious experiences.16
For a very technologically advanced civilisation, capable of designing beings in very different ways, these views are likely to diverge in their recommendations. Future beings might exist in a state of bliss, without having knowledge, friendships, or beauty. They might have all of their preferences satisfied, but have little in the way of positive experience. And so on.
If people in the future act on the wrong understanding of wellbeing, that could be morally catastrophic. For example, the future could be filled with “happiness machines” which can be described as experiencing a state of bliss, but without meaningful autonomy or growth. On an objective list theory, you might regard the “happiness machines” future as losing out on almost everything of value. Alternatively, people in the future might live rich lives full of striving, achievement, understanding, aesthetic appreciation, social connection, and other possible objective goods. But if all that really matters is conscious experience, then that rich future might involve a tremendous waste of resources that could have been used to support happiness machines, instead. And so on.
2.3.4. Misguided allocation of space resources
Given survival, widespread settlement of star systems outside of our solar system looks feasible, and even likely.17 If AI drives explosive industrial expansion, this could in fact begin within years or decades. The initial periods of settlement and resource appropriation — in our solar system, galaxy, and beyond — will involve capturing essentially all resources that will ever be available to us.18
These initial periods of settlement could introduce lasting moral errors, in a number of different ways. First, the world could allow extrasolar resources to be claimed by whoever gets there first. If so, control over those resources might end up concentrated in a tiny number of hands — whoever was most willing and most able to grab them — potentially undermining moral diversity as a result. Or resources might get allocated equally to everyone alive at the time, with limited possibilities for trade. But then many resources might go to people who have no use for them, or to people who squander those resources, or even use them for harmful ends. Whatever allocation we choose, some moral views are likely to see that allocation as catastrophic.
Indeed, from certain environmentalist or suffering-focused perspectives, widespread settlement of other star systems under any allocation system might constitute a moral catastrophe; the only good outcome, from that perspective, might be if most of the accessible cosmos is kept as an eternal nature preserve. Other moral perspectives, such as views which regard a planet of flourishing lives as making the world better than the barren rock it replaces, might regard such preserves as losing out on most possible value.
2.3.5. And so on
There are many more potential risks of moral catastrophe, including:
- Wrong happiness/suffering tradeoff: Future decision-makers might allow the persistence of some amount of suffering, maybe on grounds of autonomy. If suffering should weigh extremely heavily, then this could result in a much worse future.
- Banned goods: Future societies could be broadly liberal, so not obviously dystopian, but ban the most valuable goods, maybe because they are regarded as unnatural or as likely to undermine the social order, similar to how most recreational drugs are banned almost everywhere in the world today.
- Wrong similarity/diversity tradeoff: The future might consist almost wholly of identical lives, hyper-optimised for value or wellbeing at an individual level. But if diversity of life is morally important, then this could lose out on almost all value.
- Equality or inequality: A future in which different groups pursue different approaches to life could end up extremely unequal, with some living short and limited but natural biological human lives and others living extraordinarily long and wonderful enhanced or even digital lives. If inequality is intrinsically bad, such a world could involve moral catastrophe.
- Wrong discount rate.19 Future generations use up the resources they can access quickly, rather than using them in a slower and more efficient way.
- Wrong decision theory. Future generations might never engage in acausal trade with other civilisations in the universe, or they might try it and do it wrong, even though some acausal decision theory turns out to be correct.
- Wrong simulation views. It might turn out we live in a simulation, but future decision-makers never take the idea seriously, or fail to strike deals with their simulators.
- Wrong decisions around infinite value. Future generations could get the ethics of infinite value wrong, or somehow miss out on achieving infinite value.
- Wrong reflective process. The first generation with the power to do so might lock in their unreflective values, even though those values are far away from what would result from deep reflection.
In these examples, we don’t mean to bake in views on what the ‘right’ answers are. The issue is that there is often no safe option, where a great outcome is guaranteed on most reasonable moral perspectives.
Quantitatively, these mistakes can be huge deals. For example, suppose that future generations discount the future, even though they shouldn’t. Then the best outcomes on their view could turn out to be a small fraction as good as the best outcomes on a non-discounting view.20
And there are perils at the meta-level, when we consider what procedures are best for avoiding moral error. For example, society might somehow succeed in maintaining characteristically “early 21st century human” values, and miss out on vast amounts of progress from sustained reflection. Or the values guiding the future could become unmoored from human values altogether, drifting into worthlessness. Moreover, society could become hyper-vigilant about avoiding moral error in the first place — becoming so overbearing, meddling, risk-averse, or paranoid that concern for avoiding error itself causes great harm, or stifles conditions like freedom and openness that are needed for positive progress.
Finally, we should expect that this list is very far from exhaustive. Some ideas, like acausal trade across the multiverse, are esoteric and recent. Potentially, the risks of moral catastrophe will stem from issues still stranger and less widely recognised.
2.4. Value as the product of factors
How is it that, on many views, much of the appeal of otherwise great futures can be unwound by a single “flaw”?
One model is that the value of the future is multiplicative between a number of relatively independent factors. Total utilitarianism is like this: even cosmically large futures could be valued at a rounding error above zero, if they are filled with the wrong kind of being. But views which are less demanding about sheer scale might also think of the goodness of the world as a product of complementary factors like happiness, autonomy, diversity, and so on. A future of happy and free clones could fall far short of ideal, as could a future of a wide diversity of happy engineered drones.
As a toy model, imagine that the value of the future is the product of n dimensions, all sampled from independent uniform distributions on the open interval from 0 to 1. The first thing to notice is just that the product of the factors can be arbitrarily close to zero, even when the average value of the factors is close to 1.21 The second thing to notice is that, as the number of dimensions n increases, the expected average of the factors doesn't change (it stays at 1/2), but the expected value of the future ((1/2)^n) shrinks closer to zero. When n = 5, for example, the best feasible (99.9th centile) future is around 0.48, but the top quartile (75th centile) future is 0.034, and the expected value is around 0.03.22

Visualising the value distribution of futures, when value is a product of independent factors with standard uniform distributions.
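Here is a minimal simulation of the toy model (our own illustrative sketch; the percentile figures are approximate and will vary slightly with the random seed):

```python
# Toy model: the value of a future is the product of n independent factors,
# each drawn uniformly from (0, 1). Even futures whose average factor score
# is high tend to have a product far below the best feasible product.
import numpy as np

rng = np.random.default_rng(0)
n_dims, n_samples = 5, 1_000_000

factors = rng.uniform(0.0, 1.0, size=(n_samples, n_dims))
values = factors.prod(axis=1)      # overall "value" of each simulated future
averages = factors.mean(axis=1)    # average factor score of each future

print(f"mean of factor averages:          {averages.mean():.3f}")              # ~0.5 for any n
print(f"expected value of the future:     {values.mean():.3f}")                # ~0.5**5 = 0.031
print(f"75th-centile future:              {np.percentile(values, 75):.3f}")    # ~0.034
print(f"99.9th-centile ('best feasible'): {np.percentile(values, 99.9):.2f}")  # ~0.48
```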
Instrumentally-valued goods often follow a distribution like this. For example, individual wealth seems to be roughly lognormally distributed,23 and this would roughly fit with a model where individual wealth is the product of a number of contributing factors. Among those people who centrally value wealth, those who score well on every last contributing factor (including through sheer luck) turn out hundreds of times wealthier than those who do well on most but not all factors.
Similarly, consider the most important factors which you think predict a person’s wellbeing: things like their physical health, mental health, quality of relationships, material comfort, and so on. The best feasible lives today do well on all these factors. Now think of someone who does well on every factor but one — maybe everything is going right for them, except they suffer chronic pain, or severe depression, or anxiety. These people’s lives are going very well; they’re just one problem away from the best feasible life today. But their overall wellbeing might be much closer to that of the median life than to those people living exceptionally good lives, by doing well on all factors.
If this multiplicative model is right, then a eutopian future needs to do very well on essentially every one of the issues we covered in the last subsection; doing badly on any one of them is sufficient to lose out on most value.

A simple model of the future, where reaching a mostly-great future depends on making the right decisions across many contingent issues.
2.5. Why eutopia might be harder than it seems
There’s a tension here. On one hand, mostly-great futures can intuitively feel just within reach, given material abundance, and thus easy to achieve. On the other hand, it’s hard to describe any future in concrete detail such that it’s very likely to achieve most value. Mostly-great futures are like the mirage of an oasis that recedes as we try to approach it.
This is similar to the hedonic treadmill effect, where we start with an intuition that says “my life would be almost as good as possible, if only X”, but on achieving X, we don’t evaluate our new life to be almost as good as possible, because we decide more things need to go right than we thought. Looking back after getting X, we might admit life is much better, but there’s still vast room for improvement. In general, our psychologies are wired to make just-about attainable goods loom large, by dangling the promise that they would deliver most of what we need, only to reset our expectations once we get them.
A similar bias might explain why mostly-great futures so often seem just about attainable, even if they’re not. That’s because, if we got to acclimatise to a nearly-attainable future which intuitively seems near-best, we would reset our expectations. Looking back, we might admit that this new world is much better, but there’s still much room for improvement. Because that could keep happening, you might eventually admit that the distance between the world today and a near-best future was far greater than you originally thought; so eutopia was more difficult than you thought. Psychological treadmill effects are a reason to think mostly-great futures are harder to achieve than we currently think.
But the analogy to the hedonic treadmill could also be misleading, by suggesting it will always be obvious how to move closer to a near-best future. Instead, the future could lose most of its potential value in ways which don’t make the inhabitants of that future discontented. For example, it’s not at all morally intuitive to people today that society would be better if it had more people, all else equal, even if those living in a much larger future society would be glad to live in it. And, in the future, people could effectively engineer their preferences, and the preferences of their offspring, to be highly satisfied with what in the grand scheme of things are tragically mediocre circumstances.
Another upshot of this discussion is that it’s just very hard to reason clearly about the value of the future at all, and our judgements are more likely to be clouded by biases when they are so unconstrained by experience or theory. In particular, although we’ve argued that eutopia is harder than it seems, we haven’t ruled out that there are some views where eutopia is easy after all. In the next section, then, we try to take a more systematic approach.
3. Which views are fussy?
Note for readers |
---|
This section is the most technical part of the series. It’s possible to skip without losing crucial context, and it’s also fine to skim without reading any of the technical footnotes. |
3.1. Valuing the future
It’ll be useful to get more precise about our terms than we were in the introduction.
We’ll consider moral views on which, for any two prospects,24 either one prospect is better than the other, or they are equally good (that is, the moral view is complete).25 What’s more, we’ll only consider views where that betterness relation satisfies the other axioms of the von Neumann-Morgenstern axiomatisation of expected utility theory: transitivity, continuity, and an independence condition. This lets us represent such an ordering with a cardinal value function V, such that one prospect is better than another just in case its value on V is greater. We’ll take this value function as defining how to quantify the value of a prospect.26
Not every moral view can reasonably be represented as ascribing cardinal value to different outcomes. For example, some views might be unable to compare some pairs of prospects at all, violating the completeness axiom.27 Importantly, though, this doesn’t restrict us to consequentialist views — just because a view is able to value all outcomes, and satisfies the von Neumann-Morgenstern axioms, doesn’t mean it always recommends bringing about the best outcomes.
Recall that we defined a best feasible future as a future humanity would achieve if things went exceptionally well. More precisely, we’ll define a best feasible future as any future at least as good as a 99.99th percentile best outcome (ordered by betterness), according to a well-informed probability distribution over all futures.28 We’ll stipulate that guaranteed best feasible futures have a value of 1.29
Next, we’ll define extinction as an outcome where the human population goes to 0, and is not replaced with a morally valuable successor. We’ll stipulate that guaranteed extinction has a value of 0.
Then, per our original definition, a near-best future or (equivalently) a eutopia is any future with a value on V greater than 0.9, and a mostly-great future is any future with a value on V greater than 0.5.
This is why it’s not obvious that most futures fall far short of being mostly-great, let alone eutopian. Suppose you thought the difference in value between a best feasible future and some typical future were larger than the difference between the typical future and extinction; in other words, that the typical future captures less than half of the value of a best feasible future. This is true, in our presentation, just in case a guaranteed typical future is worse than a prospect containing a 50% chance of a best feasible future, and a 50% chance of extinction.30
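Spelled out with the stipulations above (our restatement), where V(best feasible future) = 1 and V(extinction) = 0:

$$V(\text{best}) - V(\text{typical}) > V(\text{typical}) - V(\text{extinction}) \iff V(\text{typical}) < \tfrac{1}{2}\,V(\text{best}) + \tfrac{1}{2}\,V(\text{extinction}) = \tfrac{1}{2}.$$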
Some ways of valuing futures make it look comparatively easy to reach eutopia, because they regard a wide range of futures as close in value to the best feasible future. We’ll call these views easygoing. Such views might be unusually forgiving of moral flaws and errors, and/or they might be bounded in scale, making it possible to achieve most feasibly achievable value with only a small fraction of available resources. All else equal, if your view is strictly more easygoing than someone else’s, then you should think that near-best futures are more achievable and therefore more likely.31 Other views make eutopia look comparatively hard; we’ll call these views fussy. A way of valuing futures is fussy to the extent it regards a narrower range of futures as close in value to best feasible futures.
Making this more precise, consider a reasonable probability distribution of ways the future could go, conditional on Survival, but also conditional on there being no more serious optimisation pressure than today arising from people who are trying to promote the very best outcomes de dicto (pursuing outcomes because they are good, whatever they happen to be). To the extent that some moral view regards mostly-great futures as very unlikely on this distribution (say, <1%), then that moral view is fussy, and it regards mostly-great futures as a narrow target. The more likely mostly-great futures are, the broader the target they present, and the more easygoing the correct moral view is.
3.1.1. A summary of our argument
To help navigate the next couple of sections, we’ll quickly summarise the conclusion we argue towards. To begin with, we ask whether a value function is bounded, meaning no possible world can exceed some fixed amount of positive value, even in theory. We first consider unbounded views. On unbounded views, the maximum attainable value, given a set of resources, could be approximately32 linear, superlinear, or sublinear with respect to those resources. For short, we’ll call such views “linear”, etc. Superlinear views are very implausible and strictly fussier than linear views, and sublinear but unbounded views seem strictly less plausible and fussier than sublinear but bounded views, so we don’t discuss either of these views further.33 Unbounded linear views are more plausible, but on such views, to reach a mostly-great future: (i) almost all available resources need to be used; and (ii) those resources must be put towards some very specific use. This is a narrow target; linear views are fussy.

Comparing value functions in terms of asymptotic growth behaviour. Superlinear value functions eventually dominate linear functions above some resource threshold. Sublinear value functions are eventually dominated by linear functions; bounded versions have some global upper bound, while unbounded versions do not.
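As purely illustrative examples of each regime (our own toy functions, with R standing for resources under civilisation’s control and c, k and V_max positive constants):

$$V_{\text{superlinear}}(R) = cR^{2}, \qquad V_{\text{linear}}(R) = cR, \qquad V_{\text{sublinear, unbounded}}(R) = c\sqrt{R}, \qquad V_{\text{bounded}}(R) = V_{\max}\left(1 - e^{-kR}\right).$$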
Turning to bounded views: if the view is bounded with respect to the value of the universe as a whole, then it will be approximately linear in practice, because the value of the universe as a whole is very large, and the difference that even all of humanity can make to that value is very small, and over small intervals, concave functions are approximately linear. And linear views are fussy.
If the view is bounded with respect to the difference that humanity makes to the value of the universe as a whole, it might still be approximately linear in practice, if the bound is extremely high. But if the bound is “low” (as it would be if it matches our ethical intuitions), then future civilisation might be able to get close to the upper bound.
But such views might still be fussy depending on how they aggregate goods and bads. If they aggregate goods and bads separately (which seems to us to be the more natural way of doing so), applying the bounded function to each of the amount of goods and the amount of bads and then adding both, then the value of the future becomes extremely sensitive to the frequency of bads in any future civilisation. Even if we weigh goods and bads equally, then on a natural way of modelling it, we don't reach a mostly-great future if even a tiny fraction of resources (as little as one star system's worth out of all the stars in the affectable universe) is used toward bads rather than goods. So, such views are also fussy — requiring a future with essentially no bads at all.
If goods and bads are aggregated jointly, on a difference-making bounded view, then we plausibly have a view which is not fussy. But this is quite a narrow slice of all possible views, and it suffers from some major problems that make it seem quite implausible. Putting this all together, easygoingness about the value of the future seems unlikely to us.
3.2. Unbounded views
We’ll start with views which are not bounded. Unbounded value functions can disagree about how maximum achievable value grows with the amount of resources that are under civilisation’s control.34 Maximum achievable value could grow approximately sublinearly, linearly, or superlinearly. We think that superlinear views and sublinear (but unbounded) views are both more implausible and fussier than linear views and sublinear-but-bounded views respectively (see footnote 32), so we don’t discuss them here.
That leaves linear unbounded views. Linear views (and only linear views) are separable in resources at large enough scales:35 that is, when you separate out an outcome into smaller parcels of resources across time and space, each parcel contributes independently to the overall value of the world.36
For linear views, future civilisation needs to control most accessible resources in order to reach a mostly-great future. There are 20 billion galaxies in the affectable universe so, assuming that they wouldn’t otherwise be used for extremely good ends, an ideal society spread over the Milky Way would achieve only one 20 billionth of the value of a best feasible future.
Sheer scale is necessary for eutopia on linear views, but it’s not sufficient. Even a universe-spanning civilisation of free and happy beings could still fall far short of a mostly-great future, because, plausibly, the very best uses of resources achieve much more value, per unit of cost, than almost any other use. In particular, the distribution of value/cost, over likely uses of those resources, is probably sufficiently fat-tailed to make this true.
Fat-tailed distributions are common: wealth, city size, popularity of creative works, and citations of scientific publications all follow fat-tailed distributions. In the current world, this seems to be true of the relationship between value and cost of consumer goods, too; it seems true of different interventions in global health,37 and there are theoretical reasons for expecting this to hold true more generally.
The distribution of value among instances of intrinsically valuable goods also seems to be fat-tailed. Take the quality of subjective experience — a central source of value on most moral views. The very best experiences seem to be far better than most other experiences, and the very worst experiences seem to be far worse than most other experiences.
Some fortunate people even report experiences so apparently valuable that they would be willing to trade them for mundanely positive experiences lasting thousands of times longer. In the prologue to his autobiography, philosopher Bertrand Russell wrote: “I have sought love... because it brings ecstasy — ecstasy so great that I would often have sacrificed all the rest of my life for a few hours of this joy.”38 Similarly, the Russian novelist Fyodor Dostoevsky described his experiences with epilepsy in this way: “For several instants I experience a happiness that is impossible in an ordinary state, and of which other people have no conception. I feel full harmony in myself and in the whole world, and the feeling is so strong and sweet that for a few seconds of such bliss one could give up ten years of life, perhaps all of life.”39
Probably these quotes are hyperbole — we doubt Russell and Dostoevsky would actually have chosen to die a decade earlier for another taste of those experiences. Still, it reflects something about the extremity of the experiences described. And Russell and Dostoevsky had brains remarkably similar to our own — who knows what kinds of subjective experience other, more complex, more interconnected, more powerful minds could support, and how high the ceiling of best-possible experience could be. And similarly vast ceilings could exist for goods other than subjective experience: the intensity and quality of friendships, of romantic love, mutual connection, artistic achievement, and so on.
There have also been some preliminary surveys aimed at assessing the shape of the distribution of value of experiences. In one such survey, respondents were asked to compare their most intense and second most intense experiences. When asked how many times more intense their most intense experience was compared to their second most intense, over 50% reported a ratio of 2× or higher, and many reported substantially higher ratios again.40
Looking to the future, a fat-tailed distribution of value-per-unit-resources seems likely to us, too. Consider that, because linear views are separable across space and time at some level of granularity, there must be some single41 most “value-efficient” arrangement of resources such that, in order to achieve a best-possible outcome, one needs to recreate as many instances of that arrangement as possible.42 But the space of things that future beings could do with resources is astronomical. Perhaps this arrangement is the size of an epoch-spanning planet, or a brief and simple experience of bliss running on a computer the size of a sugar cube. If only a small fraction of the possible uses of resources achieve more than 50% of the value-efficiency of the most value-efficient arrangement, which seems very plausible to us,43 then a mostly-great future is a narrow target.
What’s more, if one is merely uncertain about whether the distribution of future value-efficiency is fat-tailed, then one’s expected distribution is fat-tailed.44 So this aspect of the argument seems fairly robust to us. The conclusion is that the vast majority of ways of using resources are not nearly as good as the very best uses of resources.
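Here is a minimal sketch of that robustness point (our own illustration): even a 10% credence that value-efficiency is fat-tailed makes the credence-weighted mixture fat-tailed, in the sense that a tiny share of uses accounts for most of the expected value.

```python
# A mixture that puts only 10% probability on a fat-tailed distribution of
# value-per-unit-resource is itself fat-tailed: a tiny fraction of draws
# accounts for most of the total value.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

thin = rng.uniform(0.0, 1.0, n)                   # thin-tailed hypothesis
fat = rng.lognormal(mean=0.0, sigma=3.0, size=n)  # fat-tailed hypothesis
is_fat = rng.random(n) < 0.10                     # 10% credence in the fat-tailed world
mixture = np.where(is_fat, fat, thin)

def top_share(x, q=99.9):
    """Fraction of total value contributed by draws above the q-th percentile."""
    cutoff = np.percentile(x, q)
    return x[x > cutoff].sum() / x.sum()

print(f"top 0.1% share, thin-tailed only: {top_share(thin):.3f}")     # ~0.002
print(f"top 0.1% share, mixture:          {top_share(mixture):.3f}")  # typically well over 0.5
```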
These considerations about the value-efficiency of resources apply to both bounded and unbounded views: bounded views can just as well feature a fat-tailed distribution of value-efficiency across uses of resources. But the difference is that, if a bounded view is fussy about the best uses of some given resources, then inefficient use of those resources can still be compensated by scale — inefficiently using many more resources to approach the upper bound — or by getting things right rarely, but enough.
But unbounded linear views do not have this flexibility. Linear views which are exacting about the most value-efficient use of resources will require the future to go in a very specific way for that future to count as mostly great: unless most available resources are configured for almost exactly the most valuable kind(s) of thing according to that moral view, most achievable value is almost certainly lost.45 So linear views would seem to be fussy.
3.3. Bounded views
Linear views can seem fanatical. Consider a choice between (i) a 0.001% shot at a near-best future, and extinction otherwise; and (ii) a guarantee of the common-sense utopia described in Section 2.2. Because common-sense utopia is limited to our solar system, linear-in-resource views would evaluate the first option as vastly more valuable.46
This, among other considerations, motivates the idea that we should treat value as bounded.47 There’s a lot to say about these views. But the central point is that most bounded views are fussy, too; it’s only a very specific and narrow type of bounded view that’s easygoing.
There are a few ways a moral view could be upper-bounded, and it matters just how the view is put together. The most natural view evaluates the entire universe; the upper bound is on the value of the universe as a whole. However, the universe is big. On leading cosmological theories, the observable universe is tiny compared to the size of the total universe, which could even be infinite.48 If so, then it’s very likely that there are a vast number of alien civilisations, including beings with moral status, elsewhere in the universe, including outside of the observable universe. But, if so, then the difference that Earth-originating civilisation makes to the value of the universe is tiny. But strictly concave functions are approximately linear for small changes. So, even though maximum achievable value is a strictly concave function of resources under civilisation’s control, for all practical purposes we can treat value as linear in resources. But, as we saw in the last section, linear views are fussy.

At small enough scales, strictly concave (and differentiable) functions are approximately linear. Since the difference humans could possibly make to overall value is likely proportionally tiny, sublinear views which consider ‘universal’ value are likely practically near-indistinguishable from linear views.
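One way to see this (our gloss): if V is differentiable and the change Δ that humanity can make to the universe’s total resources x_0 is tiny, then

$$V(x_0 + \Delta) - V(x_0) \approx V'(x_0)\,\Delta,$$

so the difference humanity makes is approximately linear in Δ, however concave V is globally.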
So if we’re searching for easygoing bounded views, we should look elsewhere — to views which look at the difference in value humanity can make to the universe (or something like it), and regard that difference as bounded. If humanity’s domain is a bubble rising from the ocean floor, these views care about what we do inside our bubble, irrespective of the number of bubbles in the ocean as a whole.
There are major problems facing such difference-making bounded views, including violating stochastic dominance with respect to goodness.49 And note that, in order to avoid being in-practice linear, the upper bound on value needs to be sufficiently low that it is possible to get somewhat close to the upper bound with the resources in the accessible universe. But, even putting these issues to the side, the most plausible forms of bounded difference-making views are fussy.
To see this, consider two different ways in which the view could aggregate goods (like flourishing lives) and bads (like suffering lives): separately or jointly. A view that’s bounded (above and below, or only above), which aggregates goods and bads separately, could be described like this:
- First, add up all the bad things in the world (weighted by how bad they are), and transform that quantity according to some bounded function.
- Next, add up all the good things in that world (weighted by how good they are), and transform that quantity according to some bounded function.
- Then add the two resulting measures, to obtain a measure of overall value.
In contrast, a bounded view50 which aggregates goods and bads jointly would work differently:
- First, add up all the goods and subtract all the bads in the world (weighted by how good or bad they are).
- Next, transform this quantity according to some bounded function, to obtain an overall measure of value.
These two methods can disagree significantly. Think of a function f over “units” of goods, which is concave and upper-bounded for positive values, convex (but not necessarily bounded) for negative values, and with f(0) = 0. Imagine an outcome can contain compensating units of bads, so that — for some linear view — an outcome with g units of goods and b units of bads is exactly as good as an outcome with g − b units of goods and no bads, for any g and b. We can then define a separate aggregation view as giving the overall value by f(g) + f(−b), and a joint aggregation view as giving the overall value by f(g − b). Now, suppose humanity can either (a) produce 10 units of goods and 1 unit of bad, or (b) produce 5 units of goods and no bads. Both theories agree on the value of (b), which is just f(5). But, on the separate aggregation view, the overall value of (a) is given by f(10) + f(−1), while on the joint aggregation view, the overall value of (a) is given by f(10 − 1) = f(9). Unless f is implausibly insensitive to bads — that is, unless the difference between f(0) and f(−1) is smaller than the difference between f(9) and f(10) — the joint aggregation view will value (a) more highly, disagreeing with the separate aggregation view.51
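Here is a minimal sketch of this disagreement, with one toy choice of f (our own, purely illustrative): f(x) = x/(1+x) for goods, and f(x) = x for bads.

```python
# Toy bounded function: concave and bounded above for goods, linear (hence
# unbounded) for bads, with f(0) = 0.
def f(x: float) -> float:
    return x / (1 + x) if x >= 0 else x

def separate(goods: float, bads: float) -> float:
    """Aggregate goods and bads separately, then add."""
    return f(goods) + f(-bads)

def joint(goods: float, bads: float) -> float:
    """Net out goods against bads first, then apply the bounded function."""
    return f(goods - bads)

# Option (a): 10 units of goods and 1 unit of bad; option (b): 5 units of goods, no bads.
print(f"(b) on either view:       {f(5):.3f}")             # ~0.833
print(f"(a) separate aggregation: {separate(10, 1):.3f}")  # f(10) + f(-1) = -0.091
print(f"(a) joint aggregation:    {joint(10, 1):.3f}")     # f(9) = 0.900
```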
Any meaningful lower bound seems implausible, whether the view is jointly or separately aggregating. Suppose the value of a world is close to the lower bound on a separately aggregating view, because it is full of bads but empty of goods. Then that view could recommend adding an arbitrarily vast amount of further bads in order to add a small quantity of goods; and that seems totally wrong. But suppose the value of a world is instead close to the lower bound on a jointly aggregating view. Then that view could recommend taking a 50–50 gamble between adding an arbitrarily vast amount of further bads and adding enough goods to compensate for the (much smaller quantity of) existing bads.
In fact, joint aggregation views are somewhat implausible even without a meaningful lower bound, because of a “scale tipping” dynamic. Suppose that, at small scales, you think that goods and bads of equal size weigh equally against one another, and that Common-sense utopia is a eutopia. If so, then a future involving a million galaxies of bads and a million galaxies of goods, plus one common-sense utopia around one star, counts as a mostly-great future. But that seems strange — intuitively, a tiny change to the relative balance of goods and bads shouldn’t ever move the value of the future from as-good-as-extinction to eutopia.

On bounded views which jointly aggregate goods and bads, a tiny change in the balance of goods and bads can change the value of the future from worse than extinction, to eutopian.
That leaves separate aggregation views without a meaningful lower bound as perhaps the least implausible bounded views. But separate aggregation views are fussy, because even tiny quantities of bads can be enough to prevent a mostly-great future. To start, take a view that treats goods and bads symmetrically, and say that Common-sense utopia constitutes a mostly-great future: it is at least 50% of the way to the upper bound. And now consider a civilisation that’s fully spread out across all the stars of the affectable universe, but where a tiny fraction (one star system’s worth out of all those stars) is bad rather than good.52 In aggregate, then, the bads amount to only a star system’s worth, and yet that future is more than 50% of the way to the upper bound on the disvalue of bads. So, even if that future is very close to the upper bound on the value of goods, it achieves less than 50% of achievable value overall.
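In symbols (our restatement): write f for the bounded function, normalised so that the upper bound on goods is 1, write g for a star system’s worth of goods, and write G for the goods of the universe-spanning civilisation. If Common-sense utopia is mostly-great, then f(g) ≥ 0.5, so by symmetry f(−g) ≤ −0.5, and

$$V = f(G) + f(-g) \le 1 - 0.5 = 0.5,$$

at most about half the value of a best feasible future, however large G is.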
That is, the view ends up being fussy in a different way than linear views were. Getting to a near-best future, when looking at goods only, is easy. But the view becomes obsessively concerned with eliminating bads, and even a tiny quantity of bads is sufficient to make a mostly-great future impossible.
The situation gets worse if the view is not symmetric about goods and bads, but instead (as is intuitively plausible) holds that the magnitude of disvalue from a worst-feasible world is much greater than the magnitude of value from a best-feasible world.53 If so, then an even smaller quantity of bads would be sufficient to prevent a mostly-great future. And, as mentioned, the disvalue of bads may not be bounded at all.54 No matter how bad a dystopian future is to begin with, it seems that one could always make it twice as bad by making it much bigger in scale or by involving even worse suffering. If this is right, and the bounded view is asymmetric with respect to goods and bads, then an even tinier quantity of bads would be sufficient to outweigh any utopian future, no matter how good.55 The view would therefore be even fussier; the only mostly-great futures are those which have almost wholly eliminated the creation of bads.

Comparing bounded views which aggregate goods and bads separately. “Symmetric” views have an upper and lower bound such that some quantity of goods can compensate for any amount of bads. On “asymmetric” views, by contrast, no quantity of goods can compensate for a large enough quantity of bads.
Alternatively, if the view aggregates goods and bads jointly, then we probably do end up with a view that’s easygoing. Even a future that contains a significant fraction of bads could still be mostly-great. However, it’s worth bearing in mind how specific and narrow such a view is — it faces the “scale-tipping” issue, and inherits the problems with difference-making boundedness. What’s more, if value has no lower bound, or the lower bound is very low indeed, then such views may become pro-extinction,56 since it’s easy for a very small chance of a very bad future to outweigh a much greater chance of a near-best future. At the very least, it’s a strange conclusion that the most plausible easygoing bounded view also looks unusually likely to favour extinction.
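For instance (our own illustrative numbers): if a best feasible future has value 1, extinction 0, and a very bad future −10,000, then even a 99.9% chance of the best feasible future is outweighed by a 0.1% chance of the very bad one:

$$0.999 \times 1 + 0.001 \times (-10{,}000) = 0.999 - 10 < 0 = V(\text{extinction}),$$

so the view prefers guaranteed extinction to the gamble.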
3.4. A recap
Let’s take stock. Our goal has been to determine how difficult it is to achieve a mostly-great future. We have concluded that most plausible moral views are fussy: the target of a mostly-great future is narrow and difficult to hit.
First, we examined unbounded views whose maximum attainable value is approximately linear with respect to resources. These views are fussy because, for a future to be mostly-great, almost all resources in the accessible universe must be harnessed, and must be put towards some very specific use.
Second, we considered views on which there is an upper bound on value. Many types of bounded views — those that are bounded with respect to the value of the universe as a whole, or are difference-making but with a sufficiently high bound — are approximately linear, too, and are therefore fussy.
Third, we considered difference-making views with a sufficiently low upper bound that they are not approximately linear. If these views aggregate goods and bads separately, then they are fussy for a different reason: if the future contains even a tiny quantity of bads, we miss out on a near-best future. A civilisation that is essentially entirely free of bads is again a very narrow target, so such views are fussy, too.
Finally, there are low-bounded difference-making views that aggregate goods and bads jointly. These seem easygoing. However, they represent only a narrow slice of possible views, and have major issues: because they are difference-making, they violate stochastic dominance, among other problems; they don’t capture how we’d intuitively want to aggregate goods and bads; and they may be strongly in favour of human extinction. Putting this all together, we think that easygoingness is unlikely.
Here’s a simplified summary of the views we’ve considered:

Summarising which value functions are likely to be fussy about achieving a mostly-great future.
Image
3.5. Moral uncertainty
What kind of view should we adopt if we are uncertain about the correct view? Some views might seem “higher-stakes” than others, such that they should effectively loom largest in decision-making. But to know which views loom largest — and whether they are easygoing or fussy — we need to consider different ways of dealing with uncertainty.
For illustration, let’s consider three types of utilitarian views. First, a view on which value is unbounded both above and below. Second, a view on which value is unbounded below, but bounded above (and jointly aggregates goods and bads). Third, a view on which value is bounded above and below, which agrees with the asymmetrically bounded view on how high the bound is.57 Suppose you split your credence evenly between the three views. Consider how each view would assess the ratios of the differences in value between the following four options:
| Options ↓ / Views → | Unbounded (p = ⅓) | Bounded above and below (p = ⅓) | Bounded above only (p = ⅓) |
|---|---|---|---|
| Best feasible future | 1 | 1 | 1 |
| Common-sense utopia | 0.0001 (ε > 0) | 0.8 | 0.8 |
| Extinction | 0 | 0 | 0 |
| Worst feasible future | -1 | -1 | -10,000 (≪ -1,000) |
In order to weigh options under uncertainty, we’ll need some method for aggregating across different views into a new combined value function.58 Here’s one way to do that: take the credence-weighted average value across views, with the value of the best feasible future and extinction fixed at 1 and 0 respectively.59 In the example above, the average value of common-sense utopia would be ⅓ × 0.0001 + ⅓ × 0.8 + ⅓ × 0.8 ≈ 0.53. That is, on our combined value function, a prospect with a roughly 53% chance of the best feasible future (and extinction otherwise) is exactly as good as a guaranteed common-sense utopia.
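As a quick illustration, here is a minimal sketch of this averaging recipe in code, using the illustrative numbers from the table above (the dictionary keys and structure are ours, purely for exposition):

```python
# Each view's values, already represented so that extinction = 0 and the
# best feasible future = 1 (the "obvious" normalisation described above).
views = {
    "unbounded":               {"best": 1.0, "csu": 0.0001, "extinction": 0.0, "worst": -1.0},
    "bounded above and below": {"best": 1.0, "csu": 0.8,    "extinction": 0.0, "worst": -1.0},
    "bounded above only":      {"best": 1.0, "csu": 0.8,    "extinction": 0.0, "worst": -10_000.0},
}
credence = 1 / 3  # equal credence in each view

# Credence-weighted average value of a guaranteed common-sense utopia ("csu").
avg_csu = sum(credence * v["csu"] for v in views.values())
print(round(avg_csu, 2))  # ≈ 0.53: as good as a ~53% chance of the best
                          # feasible future, with extinction otherwise
```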
This is only the “obvious” method because, for each view, we chose to represent its value function by setting the value of the best feasible future to 1 and the value of extinction to 0: implicitly assuming that all three views agree on the “stakes” involved in the difference in value between extinction and a best feasible future.
But this was an entirely arbitrary choice. A value function is unique only up to positive affine transformation: there are endlessly many ways to numerically represent a given value function, so there are endlessly many ways to “average” between different value functions. And, as we’ll see, even if we normalise every theory in the same way, by assigning the same two numbers to two fixed outcomes, the choice of which outcomes to fix can matter dramatically for which options the aggregated value function recommends.
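In symbols (the notation is ours): if $V_i$ is theory $i$’s value function and $c_i$ your credence in it, the “obvious” method picks the representative of each theory that assigns 0 to extinction and 1 to the best feasible future, and then averages:

$$\hat V_i(x) = \frac{V_i(x) - V_i(\text{extinction})}{V_i(\text{best feasible future}) - V_i(\text{extinction})}, \qquad V(x) = \sum_i c_i\, \hat V_i(x).$$

The normalisation methods discussed below differ from this recipe only in which representative $\hat V_i$ they pick for each theory.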

One way to act under uncertainty between moral views is to treat each view as ‘agreeing’ on the difference in value between extinction and the best feasible future.
Image
There are many other ways to make intertheoretic value comparisons. Instead of assuming agreement on the difference in value between extinction and the best feasible future, the aggregating function could assume all views agree on the size of the difference in value between:
- The worst feasible future and the best feasible future (a form of “range normalisation”)60
- The worst feasible future and extinction
- The mean outcome and an outcome one standard deviation above the mean, according to some fixed prior distribution over outcomes (a form of “variance normalisation”)61

Three further methods for making quantitative comparisons between views.
Image
Normalising by the difference in value between two outcomes (“extinction–best”, “worst–extinction”, and “worst–best”) seems implausible, because these methods depend on contingent facts about how big the affectable universe is, and therefore about how good or bad the best and worst feasible futures are relative to futures of a known size. Around the 1920s, cosmologists came to appreciate that the affectable universe is over a billion times larger than previously thought. Someone considering the same options, with known effect sizes, would then, on learning this cosmological news, give the unbounded view less than one billionth of the weight she gave it before. But that seems absurd: surely unbounded moral views shouldn’t be relegated to decision-irrelevance on discovering that the universe is bigger than we had imagined.
The variance normalisation approach looks more plausible. One of the main arguments for variance normalisation is that it makes things, in general, no higher-stakes for any one view than for another. On variance normalisation, the difference in value between extinction and eutopia is larger on the unbounded view than it is on either of the bounded views, so variance normalisation affords unbounded views more “say” over upside-seeking options.
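To make the statistical idea concrete, here is a rough sketch of variance normalisation under one arbitrary assumption of ours: a uniform prior over just the four outcomes from the earlier table. With a different prior the numbers (and possibly the ranking) would change.

```python
# Variance normalisation: rescale each view's value function to zero mean and
# unit standard deviation under a fixed prior over outcomes (uniform here,
# purely for illustration).
prior = {"best": 0.25, "csu": 0.25, "extinction": 0.25, "worst": 0.25}

def variance_normalise(values: dict, prior: dict) -> dict:
    mean = sum(prior[o] * values[o] for o in prior)
    sd = sum(prior[o] * (values[o] - mean) ** 2 for o in prior) ** 0.5
    return {o: (values[o] - mean) / sd for o in values}

# The same illustrative per-view values as in the earlier sketch.
views = {
    "unbounded":               {"best": 1.0, "csu": 0.0001, "extinction": 0.0, "worst": -1.0},
    "bounded above and below": {"best": 1.0, "csu": 0.8,    "extinction": 0.0, "worst": -1.0},
    "bounded above only":      {"best": 1.0, "csu": 0.8,    "extinction": 0.0, "worst": -10_000.0},
}

# After normalising, the extinction-best gap is largest for the unbounded view,
# so it gets the most "say" over upside-seeking options (under this prior).
for name, vals in views.items():
    v = variance_normalise(vals, prior)
    print(name, round(v["best"] - v["extinction"], 4))
```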
Alternatively, we could try to make judgments directly about how value compares across pairs of views. Here’s an example: start by comparing the unbounded view and the asymmetric (upper- but not lower-) bounded view. It seems like these two views agree on the disvalue of bads: both disvalue bads linearly, and they agree precisely in all choices that involve only bads. What’s more, we could imagine variants of the bounded view with progressively lower and lower bounds; in the limit, such a view would approximate strict negative utilitarianism. And, plausibly, utilitarianism and strict negative utilitarianism agree on the disvalue of bads. So we have a way to compare the unbounded and the asymmetric bounded view. Next, we compare the asymmetric bounded view and the symmetric bounded view. Here, it seems like the two views agree on the difference in value between zero and the upper bound: whatever reasons we have for setting the upper bound in one place rather than another should apply equally to both views. The two views agree in all choices between positive-value futures; the disagreement concerns how to weigh positive-value futures against negative-value futures.
But if we can compare the unbounded view with the asymmetric bounded view, and the asymmetric bounded view with the symmetric bounded view, then we can compare the unbounded view with the symmetric bounded view, too. On this way of doing things, things are in general highest-stakes for the unbounded view, and lowest-stakes for the symmetric bounded view.

Setting a joint scale to compare across views, by reasoning about pairwise comparisons.
Image
How to weigh views under moral uncertainty can change which options are best, often dramatically. Consider how these different methods evaluate a simple choice between two options:
| | P(Eutopia) | P(Common-sense utopia) | P(Extinction) | P(Dystopia) |
|---|---|---|---|---|
| Safety-focused option | 0.05 | 0.9 | 0.05 | 0 |
| Upside-focused option | 0.7 | 0 | 0.2 | 0.1 |
When we compare these two options, we find that the approach to uncertainty critically matters for which option is best.

Comparing different approaches to uncertainty between moral views, in terms of the relative differences in value between three prospects (the upside-focused option, the safety-focused option, and guaranteed extinction). Where necessary, we tried to use reasonable numerical assumptions, which you can see and change on this spreadsheet.
Image
Normalising between extinction and the best feasible future strongly recommends the safety-focused option, with the upside-focused option coming out far worse than extinction. Normalising by variance makes the two options look comparably attractive relative to extinction, though the safety-focused option edges ahead. Normalising by the pairwise-comparison approach, by contrast, makes the upside-focused option look far more attractive than the safety-focused option.62
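As a sanity check on the first of these claims, here is a minimal sketch of the extinction–best approach applied to the two options. It reuses our illustrative per-view numbers and treats eutopia as having (approximately) the value of the best feasible future and dystopia as the worst feasible future; these identifications, and the code itself, are only illustrative.

```python
# Credence-averaged values under extinction-best normalisation, reusing the
# illustrative numbers from earlier. Eutopia is approximated as the best
# feasible future, and dystopia as the worst feasible future.
value = {
    "eutopia":    1.0,
    "csu":        (0.0001 + 0.8 + 0.8) / 3,     # ≈ 0.53
    "extinction": 0.0,
    "dystopia":   (-1.0 - 1.0 - 10_000.0) / 3,  # ≈ -3,334
}

options = {
    "safety-focused": {"eutopia": 0.05, "csu": 0.9, "extinction": 0.05, "dystopia": 0.0},
    "upside-focused": {"eutopia": 0.7,  "csu": 0.0, "extinction": 0.2,  "dystopia": 0.1},
}

for name, probs in options.items():
    expected_value = sum(probs[o] * value[o] for o in value)
    print(name, round(expected_value, 2))
# safety-focused ≈ 0.53; upside-focused ≈ -333, far worse than guaranteed extinction (0).
```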
Intertheoretic comparisons are very thorny, so we don’t want to push any strong conclusions. But we can suggest two substantial upshots.
First, on both the most plausible statistical and the most plausible non-statistical approaches to intertheoretic comparisons, the difference in value between zero and the best feasible future should be considered greater on the unbounded view than on either of the bounded views. So, if anything, at least when weighing options which are all better than extinction, it's the unbounded moral views that should effectively loom larger and drive most of the difference in aggregate value between options. In this sense, the fairest ways to evaluate options under moral uncertainty between bounded and unbounded views are themselves fussy.63
Second, the choice of how to approach intertheoretic uncertainty matters significantly in this context. At least on a cursory treatment, plausible approaches to uncertainty disagree over which somewhat plausible-seeming options are best, even if all those approaches are fussy in practice. In particular, some approaches end up being sensitive to extremes of both upside and downside, while others end up being far more sensitive to extremes of downside only.
4. Conclusion
It’s natural to think that a wide range of imaginable futures are almost as valuable as the very best futures — that we should be easygoing about what it takes to achieve a mostly-great future. After all, it seems bizarre to judge that a very slim chance of a eutopian future, and extinction otherwise, could be somehow better than a guarantee of a future that’s extraordinarily good in absolute terms.
The “no easy eutopia” view says we should actually be very fussy about what counts as a mostly-great future, contra the intuitive appeal of “easy eutopia” views. Single moral errors which erase a major fraction of the goodness of the world seem easy to make. Plausibly, the value of the future is well-described as a product of many independent factors, such that doing poorly on any one dimension is sufficient to lose out on most feasibly achievable value. And when you try to reason through the space of value functions more systematically, it looks like only a narrow slice of plausible views are truly easygoing, and those views are not themselves very plausible.
But our discussion so far doesn’t imply that we’re unlikely to reach eutopia, or a future which lands in that ballpark of value. The target is narrow, but there could be forces which guide society towards hitting it. It could be that society’s views converge, perhaps through some truth-seeking deliberative processes, to land on the correct views. It could be that, among a diversity of views about what ultimately matters, most views can achieve most of what they care about, especially through trade and compromise. Or the target could just exert some kind of gravitational pull, even if most people in society don’t ultimately care about reaching it. The next essay turns to this question.
Bibliography
Stuart Armstrong and Anders Sandberg, ‘Eternity in six hours: Intergalactic spreading of intelligent life and sharpening the Fermi paradox’, Acta Astronautica, August 2013.
Adam Bales, ‘Against willing servitude: Autonomy in the ethics of advanced artificial intelligence’, The Philosophical Quarterly, 31 March 2025.
Nick Beckstead and Teruji Thomas, ‘A paradox for tiny probabilities and enormous values’, Noûs, June 2024.
Erik Carlson, ‘Extensive measurement with incomparability’, Journal of Mathematical Psychology, 1 June 2008.
Paul Christiano, ‘Better impossibility result for unbounded utilities’, 9 February 2022.
Fabio Clementi and Mauro Gallegati, ‘Pareto’s Law of Income Distribution: Evidence for Germany, the United Kingdom, and the United States’, Microeconomics, 18 May 2005.
Owen Cotton-Barratt, William MacAskill, and Toby Ord, ‘Statistical Normalization Methods in Interpersonal and Intertheoretic Comparisons’, The Journal of Philosophy, 2020.
J. Richard Gott, Mario Jurić, David Schlegel, Fiona Hoyle, Michael Vogeley, Max Tegmark, Neta Bahcall, and Jon Brinkmann, ‘A Map of the Universe’, The Astrophysical Journal, 10 May 2005.
Hilary Greaves, Teruji Thomas, Andreas Mogensen, and William MacAskill, ‘On the desire to make a difference’, Philosophical Studies, 2024.
Larks, ‘Moral Trade, Impact Distributions and Large Worlds’, 20 September 2024.
Toby Ord, ‘The Moral Imperative toward Cost-Effectiveness in Global Health’, Center for Global Development, 8 March 2013.
Peter Salib and Simon Goldstein, ‘AI Rights for Human Safety’, 1 August 2024.
Carl Shulman, ‘Are pain and pleasure equally energy-efficient?’, Reflective Disequilibrium, 24 March 2012.
Max Tegmark, ‘Parallel Universes’, 7 February 2003.
Martin L. Weitzman, ‘On Modeling and Interpreting the Economics of Catastrophic Climate Change’, The Review of Economics and Statistics, 1 February 2009.
Evan G. Williams, ‘The Possibility of an Ongoing Moral Catastrophe’, Ethical Theory and Moral Practice, 1 November 2015.