Beyond Existential Risk

Released on 21st January 2026


Abstract

Is mitigating existential risk the top moral priority for those who take a long-term view? Nick Bostrom suggests a principle he calls Maxipok: that existential risk reduction should be the sole priority for temporally impartial altruists. We argue against this principle on the grounds that many interventions can substantially influence the long-term future through channels other than reducing existential risk. Maxipok can therefore lead to suboptimal or even harmful decisions.
We show that exclusive focus on existential risk would be justified if likely futures are dichotomous in value—either extremely good or comparatively worthless. We discuss three potential justifications for Dichotomy: (1) that over time society, if it survives, will very likely converge on a near-best form; (2) that moral value is bounded above; and (3) that the best uses of resources are orders of magnitude better than most others. We find each justification inadequate, and then offer two more general arguments against Dichotomy.
Finally, we argue that events other than existential catastrophes can have persistent effects on astronomical timescales. We conclude that while reducing existential risk remains important, temporally impartial altruists should also prioritize positively influencing the values, institutions, and power distributions that might be locked in during the coming century.

1. Introduction

In recent years, there has been increasing attention to the ideas of strong longtermism and of existential risk. Strong longtermism is the view that the most important feature of the most important decisions that are made today is how those decisions impact the longterm future.1

An existential risk is a risk that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development.2

A natural thought is that, if one endorses strong longtermism, then one should solely focus on mitigating existential risk. This is the view suggested by Nick Bostrom:3

in “Astronomical Waste,” he argued that, for risk-neutral totalist utilitarians, minimizing existential risk should be “priority number one, two, three and four.” Relatedly, he later argued that, whenever we act out of impersonal concern, existential risk should be the dominant consideration, such that the following principle is a useful rule of thumb:
Maxipok: When pursuing the impartial good, one should seek to maximize the probability of an “OK outcome,” where an OK outcome is any outcome that avoids existential catastrophe.4

In this article, we argue against Maxipok on the grounds that, even if one endorses strong longtermism, there are many interventions that rival existential risk reduction in expected value. This fact is not trivial. The resulting discussion brings up deep and interesting issues about the probability of moral convergence over time, about just how good best-possible futures would be, and about the chance that events within our lifetimes will have very long-term persistent effects.
In “Astronomical Waste,” Bostrom was focused primarily on the implications of risk-neutral totalist utilitarianism. Our discussion will be broader. A wide variety of moral views entail strong longtermism; risk-neutrality, totalism, and utilitarianism are not required.5

The structure of this article is as follows. In section 2, we discuss the prima facie case against Maxipok, and what we see as the main defense of it, which we call Dichotomy: that the future is almost certain to either have close to zero value or some specific very high value, and one can positively influence the future only by decreasing the chance of the near-zero trajectories and increasing the chance of the high-value trajectories. In the following sections, we suggest three arguments for Dichotomy, and consider them in turn. In section 3, we discuss the idea that, in the long run, society will very likely converge to a near-best form; in section 4, we discuss the idea that there is an upper bound on moral value; in section 5, we discuss the idea that the goods that are best (per unit resource) are far better than almost all others. Though they each have interesting arguments in their favour, we ultimately find that none of these provide strong grounds to support Dichotomy.
In section 6, we give two general and, in our view, quite powerful arguments against Dichotomy: that resources in the future may get divided among different groups, and that Dichotomy is not robust to moral and empirical uncertainty. In section 7, we discuss a different argument for Maxipok: that the effects of any actions other than existential risk reduction will wash out over time. We respond by arguing that this century might see a lock-in of values, institutions and distributions of resources that could shape civilization's trajectory indefinitely.

2. Dichotomy of value

Despite the connotation of the term, human extinction is neither necessary nor sufficient for existential catastrophe.6

If the world enters a stable global totalitarian regime and never escapes, or if evolutionary processes irrevocably erode away everything of value, or if a misaligned superintelligent AI disempowers humanity and creates a future of near zero value but chooses to conserve Homo sapiens, as we have done with giant pandas, then an existential catastrophe has occurred.7

But, once we’ve started to consider catastrophes that do not involve human extinction, why should we focus only on “drastic” curtailments of humanity’s long-term potential? Aren’t there actions that improve the long-term trajectory of civilization in non-drastic ways? If there are, might not they be more valuable in expectation than existential risk reduction?
In particular, within our lifetimes we may well face moments of lock-in—events where certain distributions of power, values, or institutional arrangements become effectively permanent—or of persistent path-dependence, which significantly affect the ultimate distributions of power, values, or institutional arrangements. One key lock-in moment would be the development and deployment of artificial general intelligence (“AGI”), in particular insofar as this might involve a handover of control (either voluntary or involuntary) from humanity to AI. And once AGI has been developed and deployed, many other aspects of the world could get locked in, too, including:
  1. The creation of a permanent world government or global governing institution.
  2. Decisions about the treatment and legal rights of digital beings.
  3. The allocation or settlement of extraterrestrial resources, both within and beyond the solar system.
We return to the question of whether these events could have very long-lasting persistent effects in section 7, “Persistence.”8

But, given the chance that these events could occur soon and have very persistent effects, there are many actions we could take in order to positively steer the future’s long-term trajectory.9

We could alter the timing of lock-in: deliberately postponing or accelerating irreversible decisions to change the way those decisions are made.10

To illustrate, actions here could include:
  • Altering the timing of the development of AGI, possibly by encouraging international agreements on AI regulations or compute caps, giving humanity more time to deliberate.
  • Implementing explicit reauthorization clauses for major international laws or treaties, particularly those governing digital rights and space resources.
  • Preventing premature allocation of extraterrestrial resources by advocating for an explicit temporary ban on settlement or acquisition of them.
Alternatively, we could improve lock-in: influencing lock-in events to ensure the outcomes chosen are as beneficial as possible. To do this, we could:
  • Advocate for distributed power in the institutions likely to govern AGI, ensuring no single country or company is predominant.
  • Encourage widespread moral reflection, exploration and discussion before any lock-in events occur, maximizing humanity’s chance of discovering and embracing the best values.11

    This could come via AI-assisted moral reflection, or the creation of AI superhumanly capable of novel ethical argument.
  • Ensure that responsible, humble and morally-motivated actors are in charge of designing whatever institutions do end up being locked-in, rather than ideologues or potential dictators.
If it’s true that, via these routes, we can make the longterm future better in non-drastic ways, then following Maxipok might involve missing out on some ways to positively influence the long-term future that are even better than existential risk reduction. What’s more, it means that Maxipok could recommend actions that are actively harmful. To see this, consider:
Strong World Government. In the future, advanced bioweapons have been developed, giving small terrorist groups the capability to kill everyone on Earth. The governments of the world need to decide whether to form a strong world government or not. If they do, then the risk of extinction from bioterrorism this century will go down from 1% to 0%. However, a world government would result in a reduction of cultural and moral diversity, and lock in an increased level of authoritarian control over citizens; this would reduce the value of future civilisation, at all future times, by 5%.12

Maxipok recommends the world government, even though that’s the worse option.
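The arithmetic behind this verdict can be made explicit. The sketch below uses only the numbers stipulated in the example, plus an arbitrary normalisation of the value of an uncompromised future to 100 units:

```python
# Minimal sketch of the Strong World Government example. We normalise the value
# of an uncompromised future (no catastrophe, no world government) to 100 units;
# the scale is arbitrary.

V_future = 100.0

# Option 1: no world government; a 1% extinction risk from bioterrorism remains.
ev_no_wg = 0.99 * V_future              # = 99.0

# Option 2: strong world government; extinction risk falls to 0%, but the value
# of the future is reduced by 5% at all future times.
ev_wg = 1.00 * (0.95 * V_future)        # = 95.0

print(ev_no_wg, ev_wg)   # 99.0 95.0: the world government is worse in expectation,
                         # even though it is strictly better on existential risk alone.
```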
So, in order to justify Maxipok, we need some argument for why the actions to delay or improve lock-in either do not truly impact the long-term future, or do but only via minimising existential risk.
Bostrom has suggested such an argument.13

He wrote that the concept of existential risk is most useful:
To the extent that likely scenarios fall relatively sharply into two distinct categories—very good ones and very bad ones. To the extent that there is a wide range of scenarios that are roughly equally plausible and that vary continuously in the degree to which the trajectory is good, the existential risk concept will be a less useful tool for thinking about our choices.14

In our experience, views of this kind are commonly invoked to support the belief that, in order to improve the expected value of the longterm future, one should follow Maxipok. However, to rigorously evaluate whether this dichotomous picture actually justifies Maxipok, we need to make both concepts more precise.
To do so, we’ll invoke the idea of a “baseline” or “status quo” action — an action where one wastes the time or money under one’s control.15

We can then define the following ideas:
An action $x$'s overall impact ($\Delta EV_x$) is its increase in expected value relative to baseline. We'll let $C$ refer to the state of existential catastrophe, and $b$ refer to the baseline action. We'll define, for any action $x$: $P_x = P[\neg C \mid x]$ and $K_x = E[V \mid \neg C, x]$. We can then break overall impact down as follows:16

$$\Delta EV_x = (P_x - P_b)\,K_b + P_x\,(K_x - K_b)$$

We call $(P_x - P_b)\,K_b$ the action's existential impact and $P_x\,(K_x - K_b)$ the action's trajectory impact. An action's existential impact is the portion of its expected value (relative to baseline) that comes from changing the probability of existential catastrophe; an action's trajectory impact is the portion of its expected value that comes from changing the value of the world conditional on no existential catastrophe occurring.
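As a toy illustration of how the decomposition works, the sketch below uses hypothetical numbers for the baseline and for an action, and assumes (as the decomposition implicitly does) that the value of the world conditional on existential catastrophe is zero:

```python
# A toy check of the decomposition of overall impact into existential impact and
# trajectory impact. The numbers are purely illustrative, and we assume that the
# value conditional on existential catastrophe is zero.

P_b, K_b = 0.80, 100.0   # baseline: P[no catastrophe], E[value | no catastrophe]
P_x, K_x = 0.82, 103.0   # action x: slightly safer, slightly better trajectory

overall_impact     = P_x * K_x - P_b * K_b     # Delta EV_x
existential_impact = (P_x - P_b) * K_b         # from changing P[no catastrophe]
trajectory_impact  = P_x * (K_x - K_b)         # from changing value given survival

print(overall_impact)                           # ~4.46
print(existential_impact + trajectory_impact)   # ~2.0 + ~2.46 = ~4.46
```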
We can illustrate this graphically, where the areas in the graph represent overall expected value, relative to a scenario with a guarantee of catastrophe:
Fig. 1 — The impact of trajectory changes and existential risk reduction on the value of the future.

With these in hand, we can then define:
Maxipok (precisified): In the decision situations that are highest-stakes with respect to the longterm future,17 if an action is near-best on overall impact, then it is close-to-near-best on existential impact.18,19

To illustrate, “near-best” could mean “achieves 90% of maximum possible impact” and “close-to-near-best” could mean “achieves 88% of maximum possible impact”.20

We’ll then call the idea that Bostrom invoked Dichotomy, and make it precise:
Dichotomy: The distribution of the difference21 that human-originating intelligent life makes to the value of the universe over possible futures is very strongly bimodal, no matter what actions we take.
That is, there is significant probability mass clustered in a region of low or near-zero value and significant probability mass clustered in a region of high positive value, and little probability mass between these modes. No action we take can move outcomes to areas outside of these two clusters.
Fig. 2 — The probability density distribution of the value of the future if Dichotomy is true.

Maxipok doesn’t follow directly from Dichotomy. But it does if supplemented with some fairly innocuous assumptions, which we will not challenge in this article. These assumptions are that: (i) we can meaningfully change the total probability of all outcomes falling in the catastrophe cluster; (ii) any difference in value achieved from moving outcomes within a cluster is very small compared to the difference in value between the high-value cluster and the low-value cluster. If so, then the maximum difference that trajectory impact can make compared to existential impact is very small. So, if an action is near-best overall, at most a very small part of that contribution can have come from trajectory impact, so the action must be close-to-near-best on existential impact.22
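A toy numerical version of this argument, with hypothetical cluster values and a hypothetical bound on within-cluster differences, may help:

```python
# Toy illustration of how Dichotomy plus assumptions (i) and (ii) yields Maxipok.
# Hypothetical numbers: the high-value cluster sits around 1000 units, the
# catastrophe cluster around 0, and within-cluster differences are at most 1 unit.

gap            = 1000.0   # value difference between the two clusters
within_cluster = 1.0      # maximum value difference achievable within a cluster

# The most an action's trajectory impact can contribute is ~within_cluster,
# whereas shifting 10 percentage points of probability between clusters is
# worth ~0.10 * gap = 100 units of existential impact.
max_trajectory_impact   = within_cluster   # ~1 unit
existential_impact_10pp = 0.10 * gap       # 100 units

print(max_trajectory_impact / existential_impact_10pp)   # 0.01: trajectory impact is
# negligible, so an action that is near-best overall must be close-to-near-best on
# existential impact.
```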

Dichotomy might seem implausible on its face. Suppose that, in the future, someone becomes dictator of a perennially stable global regime. Couldn’t the value of the resulting society range from “maximally bad” (if the dictator, for some reason, wanted to maximise the bad) to “maximally good” (if the dictator wanted to maximise the good, and was willing, if needed, to change the governance regime so that that happened), and anywhere in between?
However, we think there are interesting arguments that one can muster in favour of Dichotomy, which we discuss in the next three sections. Though we ultimately reject these arguments, we don’t think that the matter is obvious. We’ll turn to these arguments now.

3. A wide basin of attraction

After suggesting Dichotomy, Bostrom made the following comment in support of his view:
Extinction is quite dichotomous, and there is also a thought that many sufficiently good future civilizations would over time asymptote to the optimal track.23

Here, as we understand him, the thought is that any civilisation good enough to achieve (say) 1% of possible value will, over time, go on to achieve (say) 90% of possible value.24

If so, then Dichotomy would hold.
As an analogy, we could think of the basin of gravitational attraction around a planet. An object moving towards the planet will either be captured by the planet's gravity or escape its gravitational force entirely. Once the object falls past a critical point in the planet's gravitational field, it's guaranteed to be pulled into orbit or collide with the planet—it doesn't remain suspended halfway. Similarly, the idea is that if a future society develops beyond some key ethical threshold, it will be naturally drawn toward optimality, while those below that threshold will drift toward a close-to-zero value future.
An extreme version of this view would be that almost all future civilisations that don’t go extinct end up converging into some particular sort of society, with any differences between them of small value. This could happen because people in the future will inevitably figure out what’s morally correct and act to produce an ideal society. But it could also involve convergence to some non-ideal society, for example if the force of cultural evolution pushes the trajectory of civilisation very strongly towards some particular configuration.
Either way, Dichotomy would hold. However, this extreme version of the view seems implausible. Even if we think that some amount of convergence is likely in the future, why should we expect only one attractor state? Again, consider a world in which all power is consolidated into the hands of a single immortal dictator.25

Surely there would be a multitude of ways society could turn out, depending primarily on whatever the dictator’s preferences are. And the dictator would not be bound by evolutionary forces. So the extreme view would have to hold that this dictator will inevitably figure out what is morally valuable and act to promote it.26

But that seems very implausible. The dictator might learn what is morally valuable and simply not care, deciding to do something other than promoting the good. Or, if beliefs about what is good are intrinsically strongly motivating, the dictator could choose to never learn what is morally valuable—precisely so that they don’t end up changing their initial preferences.27

Even if a stable dictatorship never occurs, it seems like there are many possible ways that the future could go. People in the future could in general fail to figure out what is morally valuable, or could find this out but just not care, instead acting on their own self-interest, or in accordance with some other ideology. Or they might not be able to fully implement their moral beliefs; they may be constrained by society’s earlier decisions. And it seems hard to believe that evolutionary forces could be so strong as to make a single sort of society inevitable, especially given that, with the intellectual advances we should expect to see over the coming centuries, future people will likely be able to understand those evolutionary forces and coordinate to prevent them.
You could make an argument that future generations (even dictators) will choose to pursue the moral good, based on the idea that people in general have quasilinear utility; i.e. their preferences are representable as a utility function of the form U = f(x) + g(y), where f(x) is strictly concave and g(y) is at least approximately linear, with f(x) representing the utility from self-interested goods and g(y) the utility from moral goods. The thought is that the utility gained from spending to benefit oneself diminishes rapidly with further resources, whereas the utility gained from spending on moral goods is linear in expenditure, or at least diminishes much more slowly.28

So, when people’s total wealth gets sufficiently large (as it will in the future, given continued technological development), they will shift their use of marginal resources wholly onto maximising g(y), because the utility they would get from a marginal good to further satisfy their self-interested preferences would have become so small. Because of this, if individuals have even a weak preference to promote the good, with extremely large amounts of resources they will want to use almost all their resources to do so. The only individuals who would not do so are those who have essentially no preference to promote the good. But such individuals, so the argument goes, are rare.
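A minimal sketch of this argument may help, with an assumed logarithmic f and an assumed (and arbitrarily small) constant marginal utility of moral spending; both choices are ours, made purely for illustration:

```python
# A minimal sketch of the quasilinear-utility argument. Assume, purely for
# illustration, U = f(x) + g(y) with f(x) = log(x) for self-interested spending x
# and g(y) = epsilon * y for moral spending y. Equating marginal utilities,
# f'(x) = g'(y) gives 1/x = epsilon, so self-interested spending caps out at
# x* = 1/epsilon no matter how large the total budget W becomes.

epsilon = 0.001   # assumed (constant) marginal utility of moral spending

def optimal_split(W):
    x_star = min(W, 1.0 / epsilon)   # spend on yourself only up to the cap
    return x_star, W - x_star        # the rest goes to moral goods

for W in [1e3, 1e6, 1e12]:
    x, y = optimal_split(W)
    print(W, y / W)   # share of the budget spent on moral goods: 0.0, 0.999, ~1.0
```

As the budget grows, the share devoted to moral goods approaches one for anyone with even a weak linear preference for the good.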
The quasilinear-utility argument, at most, establishes that the bulk of the resources of the cosmos will be used to fulfill linear (or, more generally, convex) preferences. However, these preferences may not be preferences for the correct moral view.29

Some people might have approximately linear self-interested preferences, such as a desire to live for as long as possible, or to amass a collection of cosmic resources in the way that stamp collectors desire to collect stamps. Another possibility is self-interested preferences that are individually nonlinear but collectively linear. For example, for an individual, the preference to be richer than as many other people as possible is concave as a function of your own wealth: as your wealth increases, each additional unit of wealth surpasses fewer additional people, and your desire for additional wealth levels off. Once you are the single richest person in existence, your desire for additional wealth will entirely disappear. But if many people each have the desire to be the richest person in the world, this creates unbounded collective preferences.
What’s more, even if, given massive increases in wealth, people switch to using most resources on satisfying moral preferences rather than self-interested preferences, there is no guarantee that those moral preferences are for what is in fact good. They could be misguided about what is in fact good, or have ideological preferences that are stronger than their preferences for what is in fact good; their approximately linear preferences could result in building endless temples to their favoured god, rather than promoting the good.

4. Upper-bounded value

An alternative way of defending Dichotomy is to argue that a near-best future is intrinsically fairly easy to achieve — we'll call these easy eutopia views.30

We'll discuss what we see as the most plausible way to justify an easy eutopia view: if total possible value is bounded above. One example of such a view is bounded total utilitarianism. Let:
  • $i \in \{1, \ldots, n\}$ index all possible individuals
  • $o \in O$ represent a possible outcome
  • $v_i(o)$ represent individual $i$'s wellbeing in outcome $o$, where if $i$ does not exist in $o$ they are assigned wellbeing 0.
The class of bounded total utilitarian theories is defined by:
$$V(o) = f\left(\sum_{i=1}^{n} v_i(o)\right)$$
where $f(\cdot)$ is a strictly increasing function with an upper bound, such as the $\tanh(\cdot)$ function.
Fig. 3 — The mapping from total wellbeing to value on a bounded axiology.

If value is bounded above at a “low” upper bound then, so the thought goes, a very wide variety of possible futures will be close to the upper bound of value. So, to a first approximation, humanity either goes extinct in the near future or reaches close to the upper bound. So, Dichotomy holds.
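To see why a bounded axiology can make eutopia "easy", here is a toy illustration assuming a tanh-shaped value function; the scale parameter s is a hypothetical choice of ours, since nothing in the view itself fixes it:

```python
# A toy illustration of how a bounded axiology can make eutopia "easy". Suppose,
# purely for illustration, that V = tanh(total_wellbeing / s), where s is a
# hypothetical scale parameter at which value starts to saturate.

import math

s = 1.0  # hypothetical scale parameter

for total_wellbeing in [0.5, 2, 5, 20]:
    V = math.tanh(total_wellbeing / s)
    print(total_wellbeing, round(V, 6))
# 0.5 -> ~0.462, 2 -> ~0.964, 5 -> ~0.99991, 20 -> ~1.0
# Any future whose total wellbeing exceeds a few multiples of s is already
# within a fraction of a percent of the upper bound of value.
```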
The idea that there’s an upper bound on value is intuitive. Consider:
Common-sense utopia: Future society consists of a hundred billion people at any time, living on Earth or beyond if they choose, all wonderfully happy. They are free to do almost anything they want as long as they don’t harm others, and free from unnecessary suffering. People can choose from a diverse range of groups and cultures to associate with. Scientific understanding and technological progress move ahead, without endangering the world. Collaboration and bargaining replace war and conflict. Environmental destruction is stopped and reversed; Earth is an untrammelled natural paradise. There is minimal suffering among nonhuman animals and non-biological beings.31

From a common-sense moral perspective, Common-sense utopia appears to have at least a significant fraction of the value that a best-possible future would have. According to our own common-sense intuitions, at least, there is no possible future that is so good that it would be worth taking a 99.9% risk of near-term painless extinction and a 0.1% chance of that even-better future rather than a guarantee of Common-sense utopia.
A bounded axiology is also one way of responding to decision-theoretic paradoxes such as the St. Petersburg Paradox and the “fanaticism” problem.32

If total value is bounded above, then, for some cost c and for some sufficiently low probability p, there is no outcome, no matter how good, such that it is worth bearing a guarantee of c in exchange for p probability of obtaining that (very good) outcome. So we shouldn’t be willing to pay indefinitely large amounts to play the St Petersburg game, or to incur huge costs in exchange for tiny probabilities of enormous amounts of value.
Can this approach vindicate Dichotomy? Note, first, that the upper bound must fall within a relatively narrow range. The upper bound must be low enough that most future trajectories get us close to that upper bound.33

But it must be high enough that we are not already close to the upper bound because, if we are already close to the upper bound, then there is no longer a case for strong longtermism on the basis of the huge potential upside from avoiding existential risk.
And note that there are some already well-known problems with bounded axiologies. They violate separability: they entail that what actions are best, today, depends sensitively on how good or bad the world is across time and space. But it seems absurd to think that the value of ancient societies or alien civilisations is relevant to what it is best, today, to do.34

This sensitivity also means that bounded axiologies may in practice evaluate actions in the same way that a linear axiology does.35

The reason is this: if the background population is very large (which seems especially likely if alien civilisations exist), then any difference you can make to the total value of the world is small. But concave functions approximate linear functions when considering only small changes. So, given that we can only make a very small difference to the total value of the world, someone with a bounded axiology should act very similarly to someone with a linear axiology (though caring much more about losses than gains).
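Here is a toy numerical version of the point, using a simple bounded value function f(T) = T / (1 + T) and a very large background total, both chosen purely for illustration:

```python
# A toy illustration of why a bounded axiology can act like a linear one in
# practice. Assume, hypothetically, that value is f(T) = T / (1 + T) for total
# wellbeing T > 0 (bounded above by 1), and that the background total contributed
# by the rest of the cosmos is enormous.

def f(T):
    return T / (1 + T)

background = 1e6          # hypothetical huge background total
small, large = 10, 20     # two candidate contributions we could make

gain_small = f(background + small) - f(background)
gain_large = f(background + large) - f(background)

print(gain_large / gain_small)   # ~2.0: the bounded view ranks the two contributions
                                 # in (almost exactly) the same ratio as a linear view,
                                 # because f is approximately linear over small changes.
```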
As a final point, bounded views face a dilemma around how to handle the possibility of negative-value futures. The idea that there is a lower bound on value is very implausible. To see this, consider:
Common-sense Infernotopia: In the future, all the resources of the solar system have been used to support a huge number of beings who live very long lives of excruciating suffering.
And suppose that, today, a decision-maker faces the following choice:
  1. Let the future develop into Common-sense Infernotopia.
  2. Create some powerful technology. With probability p, this technology will cause the future to be just like Common-sense Infernotopia, except with ten times as many beings. With probability 1 – p, this technology will cause civilization to come to a painless end before the hellish period begins.
For what p should the decision-maker prefer option 2? It’s not totally obvious. When thinking about truly wonderful societies, it doesn’t seem that making them ten times bigger would make them anything close to ten times better. But an infernotopia that’s ten times bigger seems many times worse. It’s at least fairly plausible that the decision-maker should choose option 2 iff p is less than 0.1; i.e. that hellish futures that are ten times as large are ten times as bad. Moreover, no matter how bad the initial infernotopia was, it seems that one could always make it twice as bad by making it much bigger. So badness is not bounded below.
But if value is bounded above but not below, then two things follow. First, we no longer have an argument for Dichotomy. Some future trajectories could vary continuously from roughly zero value down to extremely negative value, and we no longer have an argument for why the value of those trajectories should cluster at particular points: the best action could involve making some very bad trajectories somewhat less bad.
Second, it becomes plausible that humanity’s continued survival is, in expectation, of negative value. If Common-sense utopia is close to the upper bound of value, then, because there are roughly $10^{22}$ accessible star systems,36 the distance between the upper bound of value and 0 is over $10^{22}$ times smaller than the distance between 0 and the in-practice lower bound, which would be obtained if all the accessible resources in the cosmos were optimised for badness. If one believes that there is more than a minuscule one-in-$10^{22}$ probability that such an infernal scenario will occur, then the expected value of human survival must be negative. So one of the best things to do, on this view, might be to promote human extinction. That is not what Maxipok recommends.
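The back-of-the-envelope arithmetic runs as follows; the probability used for the infernal scenario is purely a hypothetical input, not an estimate:

```python
# A sketch of the asymmetry argument. We normalise the upper bound of value to +1
# (roughly Common-sense utopia) and suppose, per the text, that a resource-maximised
# infernotopia across ~1e22 star systems could be about 1e22 times further below zero
# than the upper bound is above it. The probability p_hell is a hypothetical input.

upper_bound  = 1.0
infernotopia = -1e22    # in-practice lower bound, on the same scale
p_hell       = 1e-22    # hypothetical probability of the infernal scenario

# Even if every other surviving future hit the upper bound exactly:
expected_value_of_survival = (1 - p_hell) * upper_bound + p_hell * infernotopia
print(expected_value_of_survival)   # ~0.0 (up to floating point); any probability
                                    # above 1e-22 makes expected survival value negative.
```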
Putting this all together, it seems like bounded axiology—and the idea of “easy eutopia” more generally—is not a promising strategy to justify Dichotomy.37

5. Extremity of the best

A third argument for Dichotomy holds even if societies don't naturally converge and value isn't bounded. This argument is based on the idea that the difference between best and typical uses of resources is so vast that only near-optimal futures matter:
Extremity of the Best (EOTB): The most efficient use of resources produces far more value, per unit resources, than almost all other likely uses of resources.
Let’s use “valorium” to refer to the hypothetical good (if there is one) that produces the most moral value per unit of resources;38 for instance, if classical hedonism is correct, valorium might be whatever computational substrate most efficiently generates pleasurable experiences per joule of energy. The argument from EOTB to Dichotomy is this. Either future society optimises to produce valorium, or it doesn’t. If it does, it produces an enormously good future. If it doesn't, then, because EOTB is true, almost certainly whatever it produces will be orders of magnitude less valuable. So, relative to the all-valorium future, almost all other futures have close to 0 value.
In this section, we’ll assess whether EOTB is true; in the next section we’ll question whether it truly supports Dichotomy.
Let’s begin by making EOTB more precise. Consider probability distributions over the efficiency (“E”) of uses of resources in the future, assuming no near-term extinction, where “efficiency” refers to how much moral value is produced per unit resource. Let D refer to the probability distribution over future efficiency that is best supported by the evidence we have at this time. Next, consider a new distribution, D*, which is the same as D except that (i) it's restricted to only positive values; and (ii) it doesn’t include any uses of resources that achieve at least 90% of the efficiency of the 99.9th percentile of D. Finally, consider a third distribution, D°, which is restricted to only uses of resources that achieve at least 90% of the efficiency of the most efficient use of resources in D. Then we can state our new definition:39

EOTB (precisified): The expected value of D* is less than a thousandth of the expected value of D°.
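For concreteness, here is one way the precisified test could be run on a sampled efficiency distribution; the log-normal and its parameters are stand-ins chosen by us only to make the code concrete, not claims about the true distribution:

```python
# A sketch of how one could operationalise the precisified EOTB test on a sampled
# distribution of future efficiencies. The log-normal below is a hypothetical
# stand-in; nothing in the text commits to this shape or these parameters.

import numpy as np

rng = np.random.default_rng(0)
D = rng.lognormal(mean=0.0, sigma=3.0, size=1_000_000)   # hypothetical efficiencies

p999 = np.quantile(D, 0.999)   # 99.9th percentile of D
top  = D.max()                 # most efficient sampled use

D_star = D[(D > 0) & (D < 0.9 * p999)]   # D*: positive, below 90% of the 99.9th pct.
D_circ = D[D >= 0.9 * top]               # D°: within 90% of the best sampled use

ratio = D_circ.mean() / D_star.mean()
print(ratio, ratio > 1000)   # EOTB (precisified) holds on this sample iff ratio > 1000
```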
So, what shape of probability distribution over future efficiency should we have? First, we should highlight and discard a spurious argument for EOTB. This is the argument that moves from the claim that the most efficient use of resources will produce far more value, per unit resources, than almost all other possible uses of resources, to the conclusion that it will produce far more value, per unit resources, than almost all other likely uses of resources.
The starting claim is very likely true. Probably, almost all possible uses of resources—i.e. all the ways in which matter and energy could be combined over time—have zero value. But from this it doesn’t follow that almost all likely uses of resources will have zero value, or even low value. (Compare: “Almost all ways of organising matter do not produce a functioning car. Therefore it's extremely unlikely that General Motors will produce a functioning car.”) For the purposes of assessing whether Dichotomy is true, we need to compare the best uses of resources to ways in which resources are likely to be used by future civilisations, which will be a minuscule fraction of all possible uses of resources.40

From the armchair, it seems extremely hard to be confident in anything specific. But there are at least some things we can say:
  • We know that efficiency can be negative: it’s possible to use resources to produce states of negative value, like suffering. This rules out many common heavy-tailed distributions, such as an (untranslated) log-normal distribution or Pareto distribution.
  • The distribution is very likely truncated above, because there are limits on how much value can be produced by a given unit of resources.41

    For example, if value supervenes additively on conscious experiences, and conscious experiences supervene on computational processes, then there must be an upper bound on how much value can be produced from a given unit of resources, because there is an upper bound on how much computation can be performed with a given unit of resources. More generally, the most widely-accepted current physical theories would seem to imply that there is only a finite (though astronomically large) number of ways in which matter and energy can be arranged in the accessible universe.
  • There is additional probability mass at the maximal and minimal values of the future efficiency distribution, because we should expect some possible actors in the future to try to maximise the good, and a much smaller but non-zero number to try to maximise the bad.42

Even given a distribution that could in principle support EOTB—for example, a Pareto or a log-normal distribution—whether EOTB in fact holds depends on the parameter values (such as the standard deviation and the value at the upper limit of efficiency). Therefore, merely arguing that the future efficiency distribution is “heavy-tailed” is insufficient to establish EOTB. A classic hallmark of heavy‐tailed distributions is that small changes in tail thickness can massively change the likelihood of seeing extreme outliers.
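A quick calculation illustrates this sensitivity; again the log-normal is only a stand-in, and the million-fold efficiency threshold is an arbitrary choice of ours:

```python
# A numerical illustration of the tail-thickness point: a modest change in the
# shape parameter of a heavy-tailed distribution changes the probability of
# extreme outliers by many orders of magnitude.

from math import erfc, log, sqrt

def lognormal_tail(threshold, sigma, mu=0.0):
    """P(X > threshold) for X ~ LogNormal(mu, sigma)."""
    z = (log(threshold) - mu) / sigma
    return 0.5 * erfc(z / sqrt(2))

for sigma in (1.5, 2.0, 3.0):
    print(sigma, lognormal_tail(1e6, sigma))
# sigma = 1.5 -> ~2e-20, sigma = 2.0 -> ~2.5e-12, sigma = 3.0 -> ~2e-6:
# the chance of a million-fold-efficiency outlier swings by many orders of magnitude.
```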
So, what does the most relevant available empirical evidence imply about EOTB? We’ll start off by assuming hedonism as a theory of value, before broadening out.
In favour of EOTB, some people report that the value or disvalue of experiences in their lives seems to follow a very heavy-tailed distribution. For example, in the prologue to his autobiography, the philosopher Bertrand Russell wrote: “I have sought love... because it brings ecstasy — ecstasy so great that I would often have sacrificed all the rest of life for a few hours of this joy.”43

Similarly, the Russian novelist Fyodor Dostoevsky described his experiences with epilepsy in this way: “For several instants I experience a happiness that is impossible in an ordinary state, and of which other people have no conception. I feel full harmony in myself and in the whole world, and the feeling is so strong and sweet that for a few seconds of such bliss one could give up ten years of life, perhaps all of life.”44

There have also been some preliminary surveys aimed at assessing the prevalence of experience distributions like those Russell and Dostoevsky reported. In one such survey, respondents were asked how many times more intense their most intense experience was compared to their second most intense; over 50% reported a ratio of 2× or higher, and many reported substantially higher ratios.45

This pattern provides some evidence for a heavy-tailed distribution of the valence of human experiences, which provides some amount of evidence for EOTB.
On the other hand, Weber-Fechner Laws in psychophysics provide some evidence that the distribution of the valence of human experiences has light tails. A Weber-Fechner Law is a logarithmic relationship between the physical intensity of a stimulus and the subjective intensity of corresponding sensations or perceptions. Light, sound, and weight perception roughly follow Weber-Fechner laws.46

Though the best-studied Weber-Fechner Laws are in directly sensory domains, there is some indication that pleasant feelings occur in proportion to a logarithmic transformation of pleasant physical inputs.47

However, a Weber-Fechner Law for pleasurable experiences is not logically inconsistent with the heavy-tailed view. For example, it might be that there is a logarithmic input-output relationship for pleasure within domains, but a heavy-tailed distribution across domains. Such a pattern seems to describe Russell’s and Dostoevsky’s reports well. It is not that some particular loves or epileptic seizures are incomparably more intense than others, but rather that these domains of experience are incomparably more intense than other domains of experience. Holding one’s newborn child may be many orders of magnitude better than eating ice cream, even if holding newborn twins is less than twice as good as holding a newborn singleton.
What do folk intuitions about gambles say about EOTB? They provide some evidence against. From straw polls, we’ve found that people in general are not willing to take a 50/50 chance of death for an indefinitely long life of maximally-positive experiences (although that could be confounded, for example by a desire not to die). You can think this through for yourself by considering two choices:
  • Choice 1 is between either (i) a guarantee of 10 years of life continuously as good as the best experiences of your life, or (ii) a 99.9% chance of 10 years where your net experience is of 0 wellbeing (you would on average be indifferent between living and being unconscious), and a 0.1% chance of 10 years of the best possible experience.
  • Choice 2 is between either (i) a guarantee of 10 years of life continuously as good as the best experiences of your life, or (ii) three days of best-possible experience followed by ten years minus three days where your net experience is of 0 wellbeing.
Intuitively, either the guarantee is the better option, or it’s at least unclear. This suggests that, as far as our current preferences are concerned, we don’t think that the best possible experiences are over a thousand times better than the best experiences that contemporary humans can experience today. Of course, we might just be suffering from a lack of imagination.
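The implicit arithmetic behind the "over a thousand times" figure is as follows, treating wellbeing as additive over time purely for the sake of the calculation:

```python
# The break-even arithmetic behind Choices 1 and 2, treating wellbeing as additive
# over time (an assumption made only for this calculation). Let r be how many times
# better the best possible experience is than the best experience of your life so far.

years = 10
# Choice 1: 10 years at quality 1 for sure vs a 0.1% chance of 10 years at quality r.
# Indifference when 10 * 1 = 0.001 * 10 * r, i.e. r = 1000.
r_choice_1 = 1 / 0.001
# Choice 2: 10 years at quality 1 for sure vs 3 days at quality r (and 0 thereafter).
# Indifference when 10 * 365 * 1 = 3 * r, i.e. r ~ 1217.
r_choice_2 = (years * 365) / 3

print(r_choice_1, round(r_choice_2))   # 1000.0 1217: preferring the guarantees suggests
# we don't regard best-possible experiences as over ~1000x better than our own best.
```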
That said, it’s at best unclear how much evidence about human tradeoffs tells us about tradeoffs between goods in general, including tradeoffs made by beings far in the future. Evolution did not in any way optimise humans for wellbeing, so peak wellbeing might be much better than the best experiences available to contemporary human beings.48

To get a glimpse of this idea, note that, for humans, it is much easier to induce terrible pain than rapturous joy. But that has an obvious explanation in human evolutionary history. Humans can be killed very quickly, reducing their inclusive fitness to zero. Nothing that can realistically happen in the amount of time it takes to kill a human, i.e. minutes or hours, can increase their inclusive fitness by a comparable amount. Therefore, risks loomed far larger than benefits in the “calculus” of human evolution. This suggests that different beings could get peaks of joy at least as high as the troughs of misery are deep.
Putting this all together, it’s at least unclear whether EOTB is correct or not, on the assumption of hedonism. And the issue gets even harder to think about on the assumption of other theories of value. Whether EOTB is true on preference-satisfactionism depends crucially on questions about interpersonal comparisons of utility: if we create a being that desires some wonderful life vastly more than it desires the best human lives today, is that because that being gets enormously more preference-satisfaction than we do from that other life, or because they get less preference-satisfaction than we do from the equivalent of a good contemporary human life? And objective list theories give us even fewer grounds on which to adjudicate whether EOTB is true: if, for example, appreciation of beauty is an objective good, it’s not at all obvious to us how to know whether the deepest-possible appreciation of the most-beautiful-possible object is over a thousand times better than ordinary human appreciation of a sunset.
We think the right attitude to EOTB is one of agnosticism; further investigation into this claim is an important area for further work. However, even if EOTB is true, we do not think it strongly supports Dichotomy. The reasons for this are part of quite general arguments against Dichotomy, which we turn to in the next section.

6. Two general arguments against Dichotomy

In this section, we discuss two considerations that generally undermine the plausibility of Dichotomy.

6.1 Division of resources

First, in the future resources could be divided up among different groups, which might form very different societies or have very different moral values. Space might even get literally divided up into different territories, controlled by different groups in rough proportion with those groups’ power at the time of space settlement. This possibility undermines Dichotomy even if EOTB is true, because some groups might optimise for valorium while other groups do not; even if EOTB is true, the future could have anywhere from close to 0% to 100% of the best-possible value, depending on how much of the cosmos the valorium-optimisers control. This possibility also undermines Dichotomy even if there is a wide basin of attraction towards optimal societies, as long as some groups control societies that fall outside that basin. If so, then some fraction of all resources would end up optimific, and the rest would produce much less value; the overall value of the future would be some intermediate fraction of the best-possible future. Within a given group, individuals might divide their resources between selfish uses, pursuit of moral value correctly understood, and pursuit of moral value incorrectly understood. The possibility of individual-level resource splits of this kind further undermines Dichotomy.
Crucially, the split of resources between groups and within individual budgets is also something that one could in principle affect. Slightly increasing the representation of better moral worldviews (i.e. those more likely to converge towards producing the very best outcomes) at the time of space settlement could therefore slightly increase the expected value of the very long-term future. But this isn’t about reducing existential risks, because it isn’t making a “drastic” change to Earth-originating life’s potential.
It is less clear whether the consideration that resources may be divided among groups undermines Dichotomy if moral value is bounded above. One might think that, if value is bounded above, then as long as some significant resources go towards producing what is of value, then we will get to a future that’s close to the upper bound. However, this only holds if the other groups do not produce bads. If they do, and especially if value is not bounded below, then the overall value of future civilisation may well be negative; depending on what fraction of resources are used to produce bads, that civilisation could be anywhere from “somewhat good” to “almost maximally bad”. So we think that this consideration generally undermines Dichotomy even if value is bounded above, too.

6.2 Uncertainty

As will have been clear from reading this paper, we think we should have deep uncertainty around issues relating to Dichotomy. This uncertainty can be empirical: about the extent to which different future civilisations will converge on the best society, or on some other attractor state; or the likelihood that resources will be divided among groups with very different moral values. The uncertainty can also be moral: about whether value is bounded above, or about how good the very best uses of resources could be.
In general, we think that this uncertainty undermines Dichotomy more than it supports it. The simple argument for thinking so is that if one is uncertain over many different distributions of the value of the future, including a dichotomous distribution, then the expected distribution is non-dichotomous.
Fig. 4 — A dichotomous distribution.
Fig. 5 — A non-dichotomous distribution.
Fig. 6 — The average of the distributions from figs. 4 and 5, which is not itself dichotomous.

However, this simple argument is too quick. If it’s the case that, conditional on Dichotomy, the best outcomes are much better than they would be conditional on non-Dichotomy, then Dichotomy could still effectively hold in expectation, even in the face of uncertainty. So, do we have reason to think that this is the case? There are some arguments one can muster in its favour. We’ll consider each of the three potential ways of justifying Dichotomy in turn.
First, consider the wide basin of attraction view. You might be inclined to the thought that, if there is not a wide basin of attraction toward near-best futures, then moral realism is false. And you might think that, if moral realism is false, then, morally, things in general are much lower-stakes — at least at the level of how we should use cosmic-scale resources. If so, then the best futures, conditional on a wide basin of attraction, are much better than the best futures conditional on no wide basin of attraction.
It’s hard to know how to assess the claim that, if moral realism were false, things would be lower stakes than if it were true. But even putting that to the side, we don’t think this argument works because it’s simply not true that moral realism entails that there is a wide basin of attraction toward near-best futures. For reasons we canvassed in section 3, future civilisation might well miss out on producing a near-best future even if moral realism is true. Suppose that future decision-makers know that, upon hearing moral arguments, they will change their mind and start using their resources to maximise the objective good. They might then set up commitment mechanisms such that the resources must be used for the original purpose, in a way that can’t later be undone. Or they might structure their informational environment such that they are guaranteed to never hear the moral arguments that might make them change their preferences. Or, more likely, future decision makers might just not care enough about morality to fully act on its requirements.
Second, consider the bounded value view. Here, it’s very hard to see why we should think that, if value has an upper bound, then the best futures are far better than if value has no upper bound, and there is some reason to think the opposite. In order to assess this, we will have to make comparisons of value across different axiologies, comparing the size of value-differences on the bounded value view with the size of value-differences on unbounded value views.
There are a variety of ways to make intertheoretic comparisons of value, and there is no consensus in the literature on which is correct (or even if these comparisons are meaningful at all). But we can consider some obvious methods:
  • Range normalises such that all axiologies agree on the value-difference between the best possible and worst possible outcomes.49

  • Best-zero normalises such that all axiologies agree on the value-difference between the best possible outcome and an outcome with zero value.
  • Worst-zero normalises such that all axiologies agree on the value-difference between the worst possible outcome and an outcome with zero value.
  • Variance normalises such that all axiologies agree on the variance of value over outcomes.50

  • Standard value normalises such that all axiologies agree on the value-difference between some two specified outcomes, such as the actual world and a world identical with the actual one except that today I stubbed my toe.51

In each case, “possible” here means possible given the laws of physics and the finite resources we have in-principle access to.
On none of these ways of normalising would the best outcomes, given value that’s bounded above at a reasonable level, be better than the best outcomes, given value that is in-principle unbounded (but in practice bounded by the finite resources we have access to). On Range and Best-zero, the best outcomes would be equally good. And if the bounded value view is unbounded below, or has a lower bound with greater absolute value than the upper bound (so the worst outcomes are worse than the best outcomes are good), then on Variance, Worst-zero and Standard value, the best outcomes on the bounded value view would be worse than the best outcomes given the in-principle unbounded view.
Finally, we can consider the extremity of the best view. In this case, if we use either Variance or Standard value to make the intertheoretic value comparisons, then the very best outcomes are indeed better, on EOTB, than they would be if it weren’t true.
So if those ways of making the intertheoretic comparison are correct, then EOTB may be true in expectation. However, given our current state of knowledge we shouldn't be at all confident in any particular method for intertheoretic comparisons. And there is a particular problem with using these sorts of intertheoretic comparisons to argue that EOTB is true in expectation, which is that they seem to entail fanatical conclusions.
To see this, suppose that you are uncertain between (i) a view on which the most efficient use of resources produces only ten times as much value as the mean of the bottom 90% of uses, and (ii) an EOTB view, on which the most efficient use of resources produces a thousand times as much value as the mean of the bottom 90% of uses. You think (i) is more likely than (ii), but using this intertheoretic comparison argument, you think that EOTB is true in expectation. However, if so you should also consider (iii) an “extreme EOTB” view, on which the most efficient use of resources produces a hundred thousand times as much value as the mean of the bottom 90% of uses. What’s more, given how little we know about these things, it doesn’t seem rational to have one hundred times greater credence in the EOTB view than the extreme EOTB view. So, really, it’s not EOTB that holds under moral uncertainty, it’s extreme EOTB that holds. And the same issue would occur with respect to (iv) an “ultra-extreme EOTB” view, on which the most efficient use of resources produces ten billion times as much value as the mean of the bottom 90% of uses. Again, it seems irrational to be one hundred times more confident in “extreme EOTB” than in “ultra-extreme EOTB”. And so on.
That is, if we do the intertheoretic comparisons in this way, it seems likely that the expected value of the most-efficient use of resources is undefined — greater than any finite number — in much the same way that the expected payoff of the St Petersburg game is undefined. This suggests that these methods for doing the intertheoretic comparisons involve giving too much weight to views that posit extreme upside. And even if we put this problem to the side, the “division of resources” point still holds.
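A stylised version of this regress makes the St Petersburg structure explicit; the particular value multiplier and credence-decay factor below are hypothetical, chosen only to respect the constraint that credences fall by less than the factor by which the posited value increases:

```python
# A toy version of the regress: each successively more extreme EOTB-style view
# multiplies the value of the best use of resources by 100, while (per the
# argument) our credence cannot rationally fall by as much as a factor of 100.
# The decay factor of 30 per step is a hypothetical choice for illustration.

best_use_value = 10.0     # view (i): best use is 10x the typical use
credence       = 0.5      # hypothetical credence in view (i)
decay_per_step = 1 / 30   # credence falls 30x per step, less than the 100x value jump

expected = 0.0
for step in range(60):    # (i), (ii) EOTB, (iii) extreme EOTB, (iv) ultra-extreme, ...
    expected += credence * best_use_value
    best_use_value *= 100
    credence *= decay_per_step

print(expected)   # grows without bound as more extreme views are included,
                  # echoing the St Petersburg structure described in the text.
```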

7. Persistence

So far, we've considered the idea that Maxipok might be justified because Dichotomy is true. A very different line of argument is that Maxipok is justified because only existential catastrophes have persistent effects on the value of the long-term future, so the only way of improving the expected value of the long-term future is by trying to reduce existential risk.
One way of making this argument is to claim that all events other than the extinction of human-originating civilization, no matter how significant they seem at the time, will ultimately wash out over the long arc of history. Even global dictatorships, world wars, or radical changes in cultural values would eventually fade away in their expected impact. Call this view persistence skepticism. It might seem that, if persistence skepticism were correct, then Maxipok would be justified. However, that conclusion does not follow because extinction risk and existential risk are not the same. Instead of justifying Maxipok, persistence skepticism justifies:
Max-PS: When pursuing the impartial good, seek to maximize the probability of the survival of human-originating civilization.
Max-PS is a narrower principle than Maxipok because extinction events are only a subset of existential catastrophes. Switching from Maxipok to Max-PS would be somewhat revisionary in practice. In particular, Max-PS will not lead to work on AI alignment (the highest priority listed by Ord in The Precipice) because an AI revolution, even if it resulted in human extinction, would not result in the extinction of human-originating civilization.
Moreover, we think there is good reason to reject persistence skepticism. We’ll use the term lock-in to refer to the idea of civilization entering a state, or the basin of attraction of a state, that it is extremely unlikely to ever leave. We consider it reasonably likely that lock-in will occur this century—we think it’s at least as likely as extinction. Such a lock-in would create path-dependence that persists for as long as Earth-originating intelligent life exists.
In our view, the two most plausible mechanisms for such lock-in are as follows:
First, AGI could enable the creation and enforcement of perpetually binding institutions, laws, or constitutions. This could occur in two ways. AGI systems themselves might establish self-perpetuating governance structures with goals that remain stable over astronomical timescales. Alternatively, human beings might use AGI to enforce systems that humans design. As Finnveden, Riedel, and Shulman argue, once an AGI system is tasked with enforcing some constitution, it could maintain and reproduce this system indefinitely.52

Unlike humans, digital agents can make exact copies of themselves, store backup copies across multiple locations, and reload original versions to verify goal stability over time. A global hegemon wielding such technology could lock in a specific system of governance indefinitely.
Second, the allocation of extrasolar resources. Once widespread space settlement becomes possible—which might come soon after the development of AGI53—space could be divided among different groups. It seems plausible that star systems at technological maturity are defense-dominant (meaning a group that holds a star system can defend it even against a much more well-resourced aggressor), because of the enormous distances between star systems, the difficulty of targeting specific locations across light-years, the defender's ability to create protective dust clouds, and the possibility of credible threats to destroy resources if attacked. If so, then the initial allocation of resources could persist indefinitely. In such a scenario, the values and governance systems of different groups would be locked in with the territories they initially acquire.
Importantly, even though AGI and space settlement provide the mechanisms for ultimate lock-in, earlier moments in time could very significantly affect how those ultimate lock-ins go. For example, a dictator might initially secure power for only 10 years, but use that time to develop means to retain power for 20 more years, and then reach AGI within that further 20 years, thereby indefinitely entrenching what would otherwise have been only short-term dominance.
This suggests that quite a wide range of events could have an indefinite impact on the future. These might include:
  • The specific values and goals programmed into the first AGI systems that reach human-level capabilities.
  • The configuration of the first global government or global governing institution.
  • The initial legal and moral framework established for digital beings.
  • The principles governing the allocation of extraterrestrial resources, and what proportions of space resources will be controlled by different governance and value systems.
  • Whether the most powerful countries today maintain democratic governance or transition to authoritarian regimes.
Actions in these areas might sometimes involve preventing existential catastrophes, but they might also involve increasing the value of the future in non-drastic ways.

8. Conclusion

In this paper, we have argued against Maxipok—the view that the sole focus of temporally impartial altruists should be to minimize existential risk.
The key empirical premise underpinning Maxipok—Dichotomy—holds that the future is almost certain to be either of close to zero value or of close to some very high value, with little probability mass in between. We have examined three potential justifications for this view: that there may be a wide basin of attraction towards near-best societies; that moral value is bounded above; and that the best goods are far better than almost all others. None of these arguments, we have argued, provide strong grounds for accepting Dichotomy.
Against Dichotomy, we offered two general arguments. First, even if efficiency in producing value could be dichotomous in principle, resources in the future may be divided among different groups with different values—some might optimize for the best uses of resources, while others might use them to produce much less value. Second, moral and empirical uncertainty increase the likelihood that the distribution of the value of the future is non-dichotomous.
Finally, we considered the question of persistence. We argued that events other than existential catastrophes can have persistent effects through various lock-in mechanisms, including AGI-enforced institutions and the defense-dominance of territories in space.
Rather than focusing solely on existential catastrophe, we suggest that temporally impartial altruists should tackle the broader category of grand challenges—events whose surrounding decisions alter the expected value of Earth-originating life by at least one part in a thousand (0.1%).54 These could include issues like the rights of digital beings, space governance, and the institutions in place during the transition to a post-AGI society.
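On one natural reading of this threshold (the formalization and notation here are ours, offered only as an illustrative sketch), an event $E$ counts as a grand challenge when the decisions surrounding it shift the expected value of the future by at least a thousandth of its total:

\[
\bigl|\, \mathbb{E}[V \mid d_E] - \mathbb{E}[V \mid d_E'] \,\bigr| \;\geq\; 0.001 \cdot \mathbb{E}[V],
\]

where $V$ is the value realized by Earth-originating intelligent life, $d_E$ denotes the decisions actually made around $E$, and $d_E'$ a relevant counterfactual set of decisions.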
The accessible universe contains vast resources that could be used to create extraordinary value. How those resources are used will depend in part on the path we take through the coming centuries. The value of the future is not simply a matter of whether Earth-originating intelligent life avoids catastrophe, but also of what values it embodies, how power is distributed, and what institutions govern society. A continuous spectrum of futures lies before us, each with different moral worth, and we have more influence over which one occurs than we might at first imagine. We may even face decision situations like Strong World Government in which decreasing existential risk also decreases the value of the future. We do not deny that existential risk reduction is important, but we have argued that it is also important to attend to ways of affecting the long-term future other than existential risk reduction.
This paper represents a step in the evolution of how we think about the moral effects of present decisions on the far future—a process that long predates the concept of existential risk. Late-twentieth-century scientists' and philosophers' fears of nuclear winter led to attention to human extinction risks in general. Bostrom then widened the aperture further, from extinction risk to existential risk. In the same spirit, we suggest moving from the narrower principle of “minimize existential risk” to the more general and less binary principle of “avoid negative lock-in events, and improve any lock-ins that do occur.”

Acknowledgements

For useful comments and discussion, we thank Loren Fryxell, Hilary Greaves, Finlay Moorhouse, Toby Ord, Pablo Stafforini, Christian Tarsney, and Lizka Vaintrob. We made use of the AI assistants Claude, ChatGPT, and Gemini throughout. We are grateful to Irina Titkova for producing the graphics. Special thanks are due to Philip Trammell for extended discussion, and for prompting us to return to this project after many years of dormancy.

References

Armstrong, Stuart, and Anders Sandberg. 2013. "Eternity in Six Hours: Intergalactic Spreading of Intelligent Life and Sharpening the Fermi Paradox." Acta Astronautica 89 (August):1–13. https://doi.org/10.1016/j.actaastro.2013.04.002.
Baumann, Tobias. 2022. "A Typology of S-Risks." Center for Reducing Suffering (blog). https://centerforreducingsuffering.org/research/a-typology-of-s-risks/.
Beckstead, Nick. 2013. "A Proposed Adjustment to the Astronomical Waste Argument." Effectivealtruism.org (blog). https://www.effectivealtruism.org/articles/a-proposed-adjustment-to-the-astronomical-waste-argument-nick-beckstead.
Beckstead, Nick, and Teruji Thomas. 2024. "A Paradox for Tiny Probabilities and Enormous Values." Noûs 58 (2): 431–55. https://doi.org/10.1111/nous.12462.
Berkovich, Rotem, and Nachshon Meiran. 2024. "Both Pleasant and Unpleasant Emotional Feelings Follow Weber's Law but It Depends How You Ask." Emotion 24 (5): 1180–89. https://doi.org/10.1037/emo0001343.
Bostrom, Nick. 2002. "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards." Journal of Evolution and Technology 9 (1).
Bostrom, Nick. 2003. "Astronomical Waste: The Opportunity Cost of Delayed Technological Development." Utilitas 15 (3): 308–14. https://doi.org/10.1017/s0953820800004076.
Bostrom, Nick. 2006. "What Is a Singleton?" Linguistic and Philosophical Investigations 5 (2): 48–54.
Bostrom, Nick. 2013a. "Existential Risk Prevention as Global Priority." Global Policy 4 (1): 15–31. https://doi.org/10.1111/1758-5899.12002.
Bostrom, Nick. 2013b. "Reply to Beckstead." Effectivealtruism.org (blog).
Bostrom, Nick. 2014. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.
Bostrom, Nick. 2019. "The Vulnerable World Hypothesis." Global Policy 10 (4): 455–76. https://doi.org/10.1111/1758-5899.12718.
Bostrom, Nick. 2024. Deep Utopia. Washington: Ideapress.
Broome, John. 2004. Weighing Lives. Oxford: Oxford University Press.
Carlsmith, Joe. 2023. "Scheming AIs: Will AIs Fake Alignment during Training in Order to Get Power?" arXiv. https://doi.org/10.48550/arXiv.2311.08379.
Erdil, Ege, and Tamay Besiroglu. 2023. "Explosive Growth from AI Automation: A Review of the Arguments." arXiv. https://doi.org/10.48550/arXiv.2309.11690.
Gómez-Emilsson, Andrés, and Chris Percy. 2023. "The Heavy-Tailed Valence Hypothesis: The Human Capacity for Vast Variation in Pleasure/Pain and How to Test It." Frontiers in Psychology 14 (November). https://doi.org/10.3389/fpsyg.2023.1127221.
Greaves, Hilary. 2024. "Concepts of Existential Catastrophe." The Monist 107 (2): 109–29. https://doi.org/10.1093/monist/onae002.
Greaves, Hilary, and William MacAskill. 2019. "The Case for Strong Longtermism." GPI Working Paper. https://globalprioritiesinstitute.org/wp-content/uploads/The-Case-for-Strong-Longtermism-GPI-Working-Paper-June-2021-2-2.pdf.
Kahneman, Daniel. 2011. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
Lockhart, Ted. 2000. Moral Uncertainty and Its Consequences. Oxford: Oxford University Press.
MacAskill, William, Owen Cotton-Barratt, and Toby Ord. 2020. "Statistical Normalization Methods in Interpersonal and Intertheoretic Comparisons." Journal of Philosophy 117 (2): 61–95. https://doi.org/10.5840/jphil202011725.
MacAskill, William, Toby Ord, and Krister Bykvist. 2020. Moral Uncertainty. Oxford: Oxford University Press.
MacAskill, William. 2022. What We Owe the Future. New York: Basic Books.
MacAskill, William. 2025a. "Better Futures." Forethought Research, August 3, 2025. https://www.forethought.org/research/better-futures.
MacAskill, William. 2025b. "How to Make the Future Better." Forethought Research, August 3, 2025. https://www.forethought.org/research/how-to-make-the-future-better.
MacAskill, William. 2025c. "Persistent Path-Dependence." Forethought Research, August 3, 2025. https://www.forethought.org/research/persistent-path-dependence.
MacAskill, William, and Fin Moorhouse. 2025a. "Preparing for the Intelligence Explosion." Forethought Research, March 11, 2025. https://www.forethought.org/research/preparing-for-the-intelligence-explosion.
MacAskill, William, and Fin Moorhouse. 2025b. "No Easy Eutopia." Forethought Research, August 3, 2025. https://www.forethought.org/research/no-easy-eutopia.
Martínez, Eric, and Christoph Winter. 2022. "Ordinary Meaning of Existential Risk." SSRN Scholarly Paper. https://doi.org/10.2139/ssrn.4304670.
Ord, Toby. 2020. The Precipice: Existential Risk and the Future of Humanity. London: Bloomsbury Publishing.
Ord, Toby. 2021. "The Edges of Our Universe." arXiv. https://doi.org/10.48550/arXiv.2104.01191.
Tarsney, Christian, and Teruji Thomas. 2024. "Non-Additive Axiologies in Large Worlds." Ergo: An Open Access Journal of Philosophy 11. https://doi.org/10.3998/ergo.5714.

Footnotes
