Persistent Path-Dependence

Released on 3rd August 2025
William MacAskill

1. Introduction

One of the most common objections to working on better futures is that, over sufficiently long time horizons, the effects of our actions will ‘wash out’.1

This is often combined with the view that extinction is a special case, where the impacts of our actions really could persist for an extremely long time. Taken together, these positions imply that it’s much more important, from a longtermist perspective, to work on reducing extinction risk than to work towards better futures. The future we’ll get given survival might only be a fraction as good as it could be, but we might just be unable to predictably improve on the future we get. So we should focus on Surviving rather than Flourishing.
In this essay, I’ll argue against this view. There are a number of events that are fairly likely to occur within our lifetimes that would result in extremely persistent path-dependent effects of predictable expected value. These include the creation of AGI-enforced institutions, a global concentration of power, the widespread settlement of space, the first immortal beings, the widespread design of new beings, and the ability to self-modify in significant and lasting ways.  
I’m not confident that such events will occur, but in my view they’re likely enough to make work on better futures high in expected value from a long-term perspective. To be more precise, my view is that the expected variance in the value of the future will fall by about a third this century, with the majority of that reduction coming from things other than the risk of human extinction or disempowerment by AI.
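To make clear what that quantitative claim means, here is one way to formalise it (my gloss, not notation from the essay): write V for the value of the long-run future and C for the events that get resolved this century. By the law of total variance:

```latex
% V = value of the long-run future; C = events resolved this century.
\operatorname{Var}(V) =
  \underbrace{\mathbb{E}\!\left[\operatorname{Var}(V \mid C)\right]}_{\text{uncertainty remaining after this century}}
  + \underbrace{\operatorname{Var}\!\left(\mathbb{E}[V \mid C]\right)}_{\text{uncertainty resolved by this century's events}}
```

On this reading, the claim is that the first term is around two-thirds of the total: roughly a third of today’s uncertainty about the future’s value is expected to be resolved by this century’s events, and most of that resolution comes from path-dependent events other than extinction or AI takeover.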
In section 2 of this essay, I’ll explain why the skeptical argument I’m considering is more complicated than it first appears, and doesn’t justify some typical longtermist priorities, like preventing AI takeover. In section 3, I explain the concepts of lock-in and path-dependence. Section 4 is the bulk of the essay, where I discuss mechanisms that could enable persistent path-dependence. In section 5, I introduce the idea of “lock-in escape velocity” as a reason why persistent path-dependence is more likely than you might have thought. In section 6, I argue that it’s fairly likely that, given events in our lifetimes, we can have persistent path-dependent impacts on the trajectory of the future.

2. Human extinction and AI takeover

Extinction is often regarded as a unique case, where actions to reduce the risk of extinction really can predictably affect the value of the very long-term future. The thought is: by reducing the risk of human extinction, we increase the probability that there is any civilisation at all in the long term, in our cosmic neighborhood. So, in order to conclude that reducing extinction risk is good, the only view we need to have is that the expected value of future civilisation is positive rather than negative or neutral. Later in this piece, I’ll argue against the uniqueness of extinction’s (purported) predictable long-run significance. In this section, I’ll make two points about the way in which extinction has long-run significance.
First, you might think that reducing the risk of human extinction by one percentage point increases the probability that there is any civilisation in our cosmic neighborhood by one percentage point. But this isn’t right. If humans go extinct and most other mammals do not, it seems quite likely to me (more than 50%)2 that, in the hundreds of millions of years remaining before the Earth is no longer habitable, some other species will develop higher intelligence, cumulative cultural evolution and technological capability, such that they can rebuild civilisation in our stead.3 And, even if all life on Earth is wiped out, or if higher intelligence never re-evolves, it seems somewhat likely to me (around 50%)4 that alien civilisations will eventually settle our part of the cosmos.5
This means that, in reducing the risk of human extinction, we are mainly affecting who occupies our corner of the cosmos, rather than whether it gets occupied. We are still somewhat increasing the chance that there’s any civilisation in our part of the cosmos, because it’s not certain that non-human or alien civilisations would fill in the gap. But it means that, in order to believe that extinction risk reduction is positive in expectation, you must have the view that these alternative civilisations wouldn’t be much better than human civilisation. The view is not merely resting on the idea that the expected value of future civilisation is positive rather than negative or neutral.
In the spirit of scepticism about predictable long-term effects, you might invoke some principle of indifference and think that these non-human civilisations would be equally as good, in expectation, as human-originating civilisation. If so, then extinction risk reduction still looks positive. But it's meaningfully lower in expected value (e.g. 75% lower) than you would have thought without considering replacement civilisations. 
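A back-of-the-envelope calculation makes the size of this discount concrete. The sketch below uses the illustrative probabilities from the earlier paragraph (treating both as exactly 50%); the numbers are for illustration only:

```python
# Back-of-the-envelope: how much does a 1-percentage-point reduction in
# extinction risk raise the probability that *any* civilisation occupies
# our corner of the cosmos? Illustrative numbers from the text above.

p_reevolution = 0.5   # another species rebuilds civilisation, given human extinction
p_aliens      = 0.5   # aliens eventually settle our region, given no re-evolved civilisation

# Probability that our corner of the cosmos stays empty, given human extinction:
p_empty_given_extinction = (1 - p_reevolution) * (1 - p_aliens)   # = 0.25

# So a 1-percentage-point cut in extinction risk raises P(any civilisation)
# by only 0.25 percentage points, i.e. 75% less than the naive estimate.
naive_gain    = 0.01
adjusted_gain = naive_gain * p_empty_given_extinction

print(f"Naive gain: {naive_gain:.4f}, adjusted gain: {adjusted_gain:.4f}")
print(f"Discount relative to naive estimate: {1 - adjusted_gain / naive_gain:.0%}")
```

Under the indifference assumption just described, where replacement civilisations are equally good in expectation, this 75% discount to the probability gain translates directly into a 75% discount in expected value.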
A second point is on AI takeover. If you have the view that futures with civilisation are overall better than empty futures but that it's hopeless to predict or influence the value of futures with civilisation, then reducing the risk of AI takeover is not a way of predictably positively influencing the long term. If AI disempowers or even kills humanity, then it will (probably) continue to build a growing AI-civilisation afterwards. 
The view that AI takeover is bad in the long term requires the judgment that the AI-civilisation would be worse than the human-controlled civilisation; it’s not a judgment about whether any civilisation is better than none. I suspect most readers will think that the AI-civilisation is indeed worse (and also that alien or other nonhuman civilisations are worse than human-originating civilisations). But it’s not clear why it would be justified to have a view about that, yet not about whether, say, a future where the US becomes truly hegemonic this century is better or worse than one where China becomes hegemonic. In each case, there’s some potential event this century that affects, in a path-dependent way, the quality of the future of civilisation, not just its quantity.
You might think that even very different kinds of human-directed futures are far more similar to one another than they are to AI-directed futures.6 But, even if so, as Fin and I argued in the last two essays, different human-directed futures — equally easy to imagine from our vantage point — likely vary dramatically in value.

3. Lock-in and path-dependence

In response to the “wash out” objection, we need to be able to identify effects that are (i) path dependent; (ii) extremely persistent (comparable to the persistence of extinction); and (iii) predictably influence the expected value of the future.7 That is: (i) the effects could easily have not occurred, if history had gone a different way; (ii) they will, in expectation, last for a meaningful fraction as long as civilisation lasts, assuming that is a long time; and (iii) those effects should, in expectation, change what we think about the value of future civilisation, even into the very distant future. For short, I’ll call this “persistent path-dependence”.
One way of supporting the idea of path-dependence is via the stronger idea of “lock-in”. But it’s hard to define the term in a useful way that isn’t just a strong form of predictable path-dependence. The term itself suggests the idea of a state that society could enter into that it cannot escape, perhaps because there are no good escape options available to anyone.8 But the emphasis on the (im)possibility of leaving a locked-in state doesn’t capture scenarios where agents with power shape society a certain way, and continue to shape it in that way indefinitely. It doesn’t capture those scenarios because those in power could change their views and alter the society — it’s just that they won’t. And, in my view, those are some of the central scenarios that we want to point to with the concept of “lock-in”.
Instead, we could define a “locked-in” state as any state of society (at some level of granularity) which persists over time with high probability.9 But if we are too permissive about what counts as a state persisting, then the definition will include non-examples of lock-in.10 If we are too restrictive, then the definition will omit genuine examples of lock-in, for example where society gets locked in to a narrow range of trajectories which are themselves dynamic, such as boom-and-bust cycles. More importantly, it’s also useful to describe a society as “locked-in” if it has crossed a moment in time which strongly determines what features will ultimately end up occurring. Consider whether the US or China becomes the global hegemon after the creation of aligned superintelligence. We might expect the governance regime to change dramatically over time, in either case, but nonetheless we might think that this does meaningfully affect the expected value of the future over the long run.
The best definition of “lock-in” that I know of (and the best discussion of lock-in more generally) is in ‘AGI and Lock-in’, by Lukas Finnveden, C. Jess Riedel, and Carl Shulman. The reason why “lock-in” is an awkward term is evident in their definition, which is as follows:
We say that such a feature is locked-in at some particular time if:
  • Before that time, there is notable uncertainty about how that feature will turn out in the long run.
  • After that time, the uncertainty has been significantly reduced. In particular, there is a much smaller set of possibilities that have non-trivial probabilities.
They particularly highlight the idea of “global value lock-in”: 
Global value lock-in happens at a time if:
  • Before that time, there are many different values that might end up being adopted by powerful actors.
  • After that time, all powerful actors hold values from a much-reduced subset of the original possibilities, and it is very unlikely that any powerful actor in that civilisation will adopt values from outside that subset.
On this definition, a lock-in event doesn't need to involve some feature coming about and then persisting indefinitely — it’s just that it results in a reduction in uncertainty about how that feature will ultimately turn out.11 And, on this definition, lock-in is basically just a large amount of predictable path-dependence. There is no bright line separating lock-in events from other sorts of path-dependence, because there’s no bright-line definition of what counts as “notable” uncertainty or a “significant” reduction in uncertainty. Like the authors of ‘AGI and Lock-in’, I see persistent path-dependence as the central concept.

4. Mechanisms for persistent path-dependence

One reason you might object to the idea of persistent path-dependence is on the basis of history to date. In the past, civilisation has been in constant flux, and you might think that almost nothing has had persistently path-dependent effects.12 So, shouldn't we expect this flux to continue into the future, making predictable long-term persistence unlikely?
However, the fact that there’s been such flux to date doesn’t entail that there will be a similar amount of flux in the future. Think of a roulette wheel: it’s wildly unpredictable while it spins, but the ball always settles into a single slot. Or consider a ball rolling over a varied landscape: it might roll up and down hills, changing its speed and direction, but it would eventually settle in some chasm or valley, reaching a stable state. 
In fact, there are two main reasons to think that the key underlying drivers of societal flux are set to end. First, people today and in the past have been limited by technology in their ability to control the future: they die, they cannot precisely determine the values of the generation that replaces them, and they cannot set up institutions that will reliably represent their goals after they die. But new technologies could remove those limitations.
Second, near-term developments will also change the extent to which those who want to control the future are able to do so without disruption. Advanced technological capability will likely give the option to drastically reduce the rate of unexpected environmental changes and unforced errors in leaders’ plans. It will give leaders the ability to prevent internal rebellion, too, unless that ability is deliberately constrained. Going further, reaching “technological maturity”, where society has discovered essentially everything it could discover, will mean that there are no upheavals from new technological developments, either. Finally, one source of disruption, for someone who wants to control the future, comes from outsiders such as other countries. But the global hegemony of some group, or strong defense-dominance that enables a perpetual balance of power, could also prevent interference from outsiders.
In this section I’ll cover a number of mechanisms that could drive persistent path-dependence, dividing these to match the two considerations I’ve just described. First, I’ll discuss technologies that give agents more control over the future. These include AGI-based institutions, design of the next generation, immortality, and strong self-modification. Second, there are political or technological developments that reduce the risk of disruption to plans to control the future: these include extreme technological advancement, global concentration of power, and indefinite defense-dominance. 
Throughout, I assume it’s at least reasonably likely that society will develop AGI in our lifetimes, and that this will drive explosive technological development. Because of this explosive development, we will race, over a period of just years or decades, through many of the technologies and societal developments that are relevant to persistent path-dependence. This is why we might, quite suddenly,13 move from a world where the future is highly open to one in which its trajectory seems clear.
I also focus in particular on how values (including values that contain a recipe for how they might reflect and change over time) persist into the future. This, in my view, is much more important than whether some particular individual or regime persists.

4.1. Greater control over the future

4.1.1. AGI-based institutions

The argument for why AGI-based institutions could allow certain values or goals to persist indefinitely is made at length in ‘AGI and Lock-in’ by Lukas Finnveden et al.14 Though it’s worth reading the whole thing, I’ll briefly recap the argument in my own words here.
To make the idea vivid, first, assume that, post-AGI, there is a global hegemon: a single dominant military power (which could be a country or a company), or a single dominant allied coalition of powers, or a global government. Now suppose that this hegemon wants to indefinitely lock in some constitution, which could be very complex. What they can do is:
  • Align an AGI so that it understands that constitution and has the enforcement of that constitution as its goal.
  • Empower that AGI with the ability to enforce the constitution. This could involve the AGI literally running the country; alternatively, all military and law-enforcement AIs and robots could be designed so that they obey this constitution, prevent violations of it (including by surveilling for and preventing attempts to build military and law-enforcement AIs that are not loyal to it), and defer to the Constitutional-AGI in cases of dispute or unclarity.
  • Store copies of the neural weights of the AGI in multiple locations in order to reduce the risk of destruction of any one of the copies.
  • Reload the original Constitutional-AGI to check that any AGIs that are tasked with ensuring compliance with the constitution maintain adherence to their original goals as those AGIs learn and update their neural weights over time. (This would be as if, rather than having the Supreme Court interpret the US Constitution, we could conjure up the ghosts of Madison and Hamilton and ask them directly.)
With these in place, this AGI-enforced constitution could operate indefinitely. 
A global hegemon arising in the decades post-AGI seems reasonably likely to me. But even if there weren’t a global hegemon, individual countries could implement AGI-enabled lock-in within their own country. This could result in indefinite lock-in if that country eventually became the global hegemon, or was able to stably retain a share of global power.  
Moreover, there could be indefinitely binding AGI-enforced treaties between countries, too. The countries involved could implement much the same strategy as I just described. What they would need, in addition, is a verifiable agreement that all law-enforcement and military AIs and robots, in both countries, would be aligned with the treaty. Given future advances in interpretability, such that we can perspicuously understand an AGI’s neural weights, this would be possible in principle at least. And, if possible, some AGI-enforced treaties (though not necessarily indefinitely long-lasting ones) would likely be desirable to both parties, in order to avoid deadweight losses from economic conflict or war.

4.1.2. Immortality 

Throughout history, death has functioned as a natural brake on the persistence of any particular set of values or power structures. Over time, even the most entrenched values eventually change as new generations replace the old. However, post-AGI technology could fundamentally alter this dynamic.
Digital beings would inherently be immune to biological aging and, as we discussed, could persist indefinitely given proper maintenance. When combined with perfect replication and hardware migration capabilities, this creates the possibility of minds whose exact values and decision-making processes could persist unchanged for potentially millions of years.
To make this vivid, imagine if, in the 1950s, Stalin had been able to either upload his mind, or train an AGI that was a very close imitation of his personality. He would therefore have been able both to live indefinitely, and to make numerous copies of himself, so that every member of the Politburo, of law enforcement, and of the military, was a copy. Absent external interference, such a regime could persist indefinitely. 
A similar dynamic could hold for biological immortality. A technological explosion driven by AGI could dramatically extend or effectively eliminate biological constraints on human lifespans through technologies targeting the fundamental mechanisms of aging.
Either way, people would gain a means of influencing the long-term future that they lack today: by still being alive and holding power far into the long-term future. The same beings, with the same foundational values, could remain in power indefinitely — meaning that the specific values of those who first achieve positions of power during the transition to AGI could shape civilisation throughout the entire future.

4.1.3. Designing beings

Even if people choose not to live forever, their values could continue to persist through perfect transmission from one generation to the next. Throughout history, change has happened in part because successive generations do not inherit the same values as their forebears. But this dynamic could change after AGI. Probably, the vast majority of beings that we create will be AI, and they will be products of design — we will be able to choose what preferences they have. And, with sufficient technological capability, we would likely be able to choose the preferences of our biological offspring, too.
This enables lock-in of views and values. When most people think of dictatorial dystopia, they often imagine an enforced dystopia like The Handmaid’s Tale, where much of the populace secretly dislikes the regime. But it’s much more likely that, in a post-AGI dictatorial world, the population will endorse their leader and the regime they live in, because they will have been designed to do so. For that reason, a post-AGI global dictatorship need not involve totalitarianism: there is no need to surveil and control your citizens if you know for certain that they will never rebel. 
But the same mechanism could lock in views and values even without global dictatorship. Max Planck suggested the view that science usually progresses because older generations die off and are replaced by newer generations with better views — one funeral at a time15 — rather than by older generations changing their views. Maybe the same is true of moral progress; in order to make progress, perhaps we need new beings to be trained from scratch without putting too heavy a thumb on the scales regarding what their moral views are. But if so, then we might lose this driver of moral progress: the point at which we can design beings could be a point at which we entrench the prevailing moral norms of the time by ensuring that subsequent generations conform with some or all of those prevailing moral norms.

4.1.4. Strong self-modification

At the moment, we are able to modify our own beliefs and preferences only in clumsy and limited ways. We can modify our preferences (not always predictably) by changing our social circle, changing what media we consume, or through meditation or drugs. Voluntarily changing beliefs is harder, but we can do so, to some degree, by similar mechanisms.
In the future, people will probably be able to modify their own beliefs and preferences to a much stronger degree, such that they can precisely choose what beliefs and preferences to have. This means that not only might people today be able to control society’s future values by living forever; they would also be able to control the values of their future selves. 
The ability to self-modify is clearest for digital people. Digital people’s beliefs and preferences are represented in their neural weights, or code; given a good enough understanding of AI, those neural weights or that code could be modified to give precise changes in beliefs and preferences. But, given a good enough understanding of neuroscience, this could eventually become possible for biological people too, via neural modification and changes to neurotransmitter systems.
This could be a moment of predictable path-dependence because people might choose to fix certain beliefs or preferences of theirs. For example, a religious zealot might choose to have unshakeable certainty that their favoured religion is true (so it becomes impossible for new evidence to ever rationally change that belief); an extremist of a political ideology might, in order to demonstrate the depth of their loyalty to the cause, choose to have an irrevocable and unwavering preference in favour of their political party over any other. The prevalence of such self-modification might not be limited to extremists: there might in general be strong social pressure to adopt unshakeable abhorrence of views regarded as racist or communist or otherwise unpalatable to the prevailing morality within one’s community.
Even if people don’t lock in to particular beliefs or preferences, there could still be strong path-dependency of their final beliefs and preferences based on their initial beliefs and preferences, or based on their initial choices about how to modify those beliefs and preferences. Initial changes to preferences or beliefs might become unlikely to be undone if those preferences are self-protective (e.g. if one chooses the preference, “I want to obey my favoured religious teacher, and I want to keep having this preference.”)16

These dynamics are worrying for the long term in any cases where the people who choose to strongly self-modify will themselves have power for an extremely long time.

4.2. Less disruption

4.2.1. Extreme technological advancement

Throughout history, societal changes have often been driven by technological innovations that disrupt existing power structures. However, as civilisation approaches technological maturity—the hypothetical point at which all major technologies have been invented—this source of disruption would disappear. With sufficiently advanced technological development, all technological discoveries that society will ever make would have already been made. And, even if we don’t reach technological maturity any time soon, the rate of technological change (and resulting societal disruption) would naturally decelerate as the space of possible innovations becomes increasingly explored.
Advanced technology would help prevent other sorts of disruption, too. It would dramatically improve prediction capabilities: advanced AI systems could process vastly more information, model complex systems with greater precision, and forecast outcomes over longer time horizons. So it would be much less likely people would relinquish their influence just by making some mistake. 
Similarly, although environmental changes (such as disease, floods or droughts) have often upended the existing order, society will continue to become more resilient with technological advancement: almost any environmental risks could be predicted and managed.
The combination of technological maturity and superintelligent planning capabilities creates a powerful mechanism for stability. Whereas past regimes were frequently undermined by unforeseen developments—technological, environmental, or social— political leadership at the frontier of technological advancement would face far fewer disruptions. 

4.2.2. Global concentration of power

One of the reasons for change over time is competition between people, companies, countries, and ideologies. If there’s a global concentration of power, this dynamic might cease. In the extreme, global concentration of power would look like a single all-powerful dictator ruling over the world; less extreme versions would involve most power being distributed among a much smaller number of actors, globally, than it is today. 
Even if the world became a global dictatorship, that doesn’t necessarily mean that the world will certainly end up with one specific future: the range of possible futures could in principle still remain open because the dictator might choose to later cede power, or reflect on their values extensively. But I think it would clearly be a persistently path-dependent event. A dictator may well want to entrench their power indefinitely, so the risk of that happening increases if the world has in fact entered a dictatorship. And I think extensive reflection becomes less likely, too. Moral progress often depends on open debate, with social pressure to justify one’s moral views in the face of opposing arguments. A dictator wouldn’t face that pressure and needn’t ever encounter opposing points of view if they didn’t want to.
What’s more, the path-dependent effects are probably particularly bad. The sorts of power-seeking actors who are likely to end up as global dictators are more likely to have dark tetrad traits — sadism, narcissism, Machiavellianism, and psychopathy.17 I think the chance of them producing extremely bad outcomes — for example, torturing their enemies for their entertainment — is higher than it would be if the average person became a dictator. Moreover, dictatorship of any form loses the opportunity to benefit from gains from trade among different moral worldviews, which was discussed in Convergence and Compromise.
Even without dictatorship, world-spanning institutions could be more persistent because they lack external competition or pressure. If there were a one-world government, or if a single country became truly hegemonic, they would lose one historically important source of pressure to change.

4.2.3. Defense-dominance 

In international relations theory, “defense-dominance” refers to a situation where defending territory, resources, or positions of power is significantly easier and less costly than attacking or conquering them.18 When defense-dominant conditions prevail, even relatively weaker entities can maintain control of their territory against stronger aggressors, creating stable power arrangements that resist change.
So, even if no single country, or other group, achieves dominance over all others, there could still be a stable balance of power, if the technological situation remains defense-dominant up until, and at, technological maturity. Whether this would be good or bad overall, it suggests how an important driver of historical dynamism — the shifting pattern of political regimes through conquest — could dry up.
Throughout history, periods of defense-dominance have been temporary — sometimes technological innovations like castles or trench warfare temporarily favored defenders, but these advantages were eventually overcome by new offensive capabilities. Advanced technology, however, could potentially create conditions of extreme and persistent defense-dominance across multiple domains. And in a defense-dominant world, the initial allocation of resources would become disproportionately important, as that distribution could persist indefinitely.19

Plausibly, indefinite defense-dominance could come about as a result of widespread space settlement.20 If star systems are strongly defense-dominant, then the starting distribution of star systems could, in principle, be held onto indefinitely. It might be that, after the initial allocation, there is trade or gifting of some star systems; but even if so, there would still be very strong path-dependence, as the final allocation of star systems would be strongly influenced by the starting allocation.
The process for initially allocating different star systems could go in many different ways. For example, suppose that star systems are allocated on a “finders keepers” basis. Then whichever groups have the most power at the particular point in time of early space settlement will be able to hold onto that power indefinitely, as they will control essentially all resources indefinitely. Similarly, if the star systems were put up for auction, then whoever is richest at the time would be most able to buy them, and would potentially be able to lock in their economic power. Or there could be some principled allocation procedure — but this too might result in bad outcomes if the allocation procedure is itself misguided.
This all might seem particularly sci-fi, but the point in time at which widespread space settlement becomes possible could come surprisingly soon after the intelligence explosion. For example, the amount of energy needed to send small spacecraft at very close to the speed of light is tiny compared to the energy produced by our sun — likely just minutes or hours of total solar output would be needed to send spacecraft at relativistic speeds to all star systems within the Milky Way and all galaxies outside of it. These spacecraft could transport an AGI and general-purpose nano-scale robots that would build up an industrial base, including constructing a radio telescope array in order to receive further instructions.21
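A rough calculation illustrates why the energy cost is so small. The probe mass, speed, and star count below are my own illustrative assumptions, not figures from the cited literature, but the conclusion is robust to varying them by large factors:

```python
# Back-of-the-envelope: energy to launch one small relativistic probe to every
# star system in the Milky Way, compared with the Sun's energy output.
# All inputs are illustrative assumptions, not figures from the cited paper.

c = 3.0e8                  # speed of light, m/s
solar_luminosity = 3.8e26  # W (total power output of the Sun)

probe_mass = 1.0           # kg per probe (generous; gram-scale probes are often assumed)
v_fraction = 0.99          # probe speed as a fraction of c
n_targets  = 1e11          # roughly the number of star systems in the Milky Way

gamma = 1 / (1 - v_fraction**2) ** 0.5
kinetic_energy_per_probe = (gamma - 1) * probe_mass * c**2   # ~5e17 J

total_energy = kinetic_energy_per_probe * n_targets
seconds_of_sunlight = total_energy / solar_luminosity

print(f"Energy per probe: {kinetic_energy_per_probe:.2e} J")
print(f"Total for {n_targets:.0e} probes: {total_energy:.2e} J")
print(f"Equivalent to about {seconds_of_sunlight/60:.0f} minutes of total solar output")
```

Even allowing a few orders of magnitude for heavier probes or propulsion inefficiency, the total remains a small fraction of the Sun’s annual output.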

5. Lock-in escape velocity

The last section discussed mechanisms for persistent path-dependence. One reason why I think persistent path-dependence is likely is that short-term power entrenchment can be “bootstrapped” into long-term lock-in.
Suppose that a one-world government is formed, and the leaders of that government are able to entrench their power for a comparatively short period of time, so they very probably stay in power for 10 years. But, in that time, they are able to make it very likely that they can stay in power for a further 20 years. Then, in that 20 years, they can develop the means to make it very likely that they maintain power for a further 40 years… and so on. Even though, initially, the political leaders were only able to entrench their power for a short period, they could turn that short-term entrenchment into indefinite lock-in;22 they achieved lock-in escape velocity.23
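A toy model (my own illustration, with made-up numbers) shows why this bootstrapping can add up to indefinite lock-in even though each individual stage is finite: if the chance of losing power during each successive, longer stage keeps shrinking, the overall probability of staying in power forever converges to something well above zero.

```python
# Toy model of "lock-in escape velocity": each stage of entrenchment secures a
# horizon twice as long as the last, and the risk of losing power during a
# stage halves each time. Numbers are purely illustrative.

initial_failure_risk = 0.20   # chance of losing power during the first 10-year stage
n_stages = 40                 # enough stages that the product has converged

p_survive_forever = 1.0
for k in range(n_stages):
    stage_failure_risk = initial_failure_risk / 2**k
    p_survive_forever *= (1 - stage_failure_risk)

print(f"Chance of surviving every stage, i.e. indefinite lock-in: {p_survive_forever:.2f}")
# ~0.65: even though the regime is only 80% secure over its first decade,
# it has roughly a two-in-three chance of entrenching itself forever.
```

The particular numbers are unimportant; what matters is that the per-stage risks shrink quickly enough that their sum is finite, so the product of survival probabilities does not dwindle to zero.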

The extent of entrenchment could also increase over time. For example, some group could initially merely ensure that they are in power, and only later start to lock in specific laws. Or the whole world could initially commit only to some minimal moral norms; but those minimal moral norms could inexorably lead to more thoroughgoing lock-in over time. 
The “point of no return”, then, might come well before the mechanisms for persistent path-dependence discussed in the last section are in place. Given how likely AGI is to come in the next few years or decades, it seems likely to me that it’s currently possible to achieve lock-in escape velocity. It even seems possible to me that things will turn out such that the US Founding Fathers successfully caused some of their values to persist for an extremely long time. By enshrining liberal values in the Constitution, they enabled those values to persist (in modified but recognisable form) and gain in power for over 250 years; if the US then wins the race to superintelligence and the post-AGI world order includes AGI-enforced institutions based on those liberal ideas, then they would have had a predictably path-dependent effect, steering the long-term future in a direction that they would have preferred.

6. Persistent path-dependence is likely, soon

Given the mechanisms I’ve described, and the nature of this century, I think that it’s reasonably likely that events in our lifetime will have persistently path-dependent effects.
The probability of reaching AGI this century is high, with most of that probability mass concentrated in the next two decades: as a rough indicator, Metaculus puts the chance of AGI (on one definition) by 2100 at 90%, and by 2045 at 77%. So there is probably ample time this century when the creation of AGI-enforced institutions is possible. And if advanced AI results in explosive technological progress and industrial expansion,24 which I also think is more likely than not, then there are a few further reasons for persistent path-dependence, too.
First, an intelligence explosion seems fairly likely to result in a concentration of power. Even if we avoid concentration of power in the hands of a very small group, I still expect one country, or a coalition of allied countries, to become far more powerful than all others: an intelligence explosion would involve super-exponentially growing capability, such that even a small lead by the leading country or coalition could soon turn into a decisive advantage. And if one country or alliance becomes hegemonic, lock-in measures to protect that hegemony seem likely.25
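To see why a small head start could snowball, here is a toy model (my own, with made-up numbers, not a forecast): suppose each doubling of AI capability takes 20% less time than the last, so growth is super-exponential, and suppose one actor starts six months ahead of another.

```python
# Toy model: why a small lead compounds under super-exponential growth.
# Capability doubles repeatedly; each doubling takes 20% less time than the last.
# The leader starts 6 months ahead. All numbers are illustrative only.

def doublings_completed(months, first_doubling_months=12.0, shrink=0.8, max_doublings=200):
    """How many capability doublings an actor completes within `months`."""
    t, step, n = 0.0, first_doubling_months, 0
    while n < max_doublings and t + step <= months:
        t += step
        step *= shrink
        n += 1
    return n

lead_months = 6.0
for elapsed in (24, 40, 48, 53):
    d_leader  = doublings_completed(elapsed + lead_months)
    d_laggard = doublings_completed(elapsed)
    print(f"After {elapsed} months: leader ahead by {d_leader - d_laggard} doublings "
          f"(capability ratio ~{2**(d_leader - d_laggard):,}x)")
```

The point is not the specific numbers but the shape: as doubling times shrink, a fixed calendar lead corresponds to more and more doublings of capability, which is the sense in which a small lead could become decisive.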

Second, an intelligence explosion will generate strong incentives for those in power to put in place the infrastructure to secure their power at least temporarily. There would be enormous change over the course of the intelligence explosion: new technologies and intellectual discoveries that could result in catastrophe (e.g. via widely accessible bioweapons), or in radical social change that upsets the existing balance of power (e.g. highly persuasive new ideologies). Some of the infrastructure for temporarily securing power, like widespread surveillance, could help those in power reduce some of those risks.
Third, the technology unlocked by an intelligence explosion would allow for indefinite lifespans. As well as giving those in power greater potential control over the future, it would also increase the incentive for those in power to ensure they remain in power, as they would get to reap the benefits of that power for much longer. They wouldn’t need to be motivated by the desire to achieve ideological goals after their death in order to want to preserve the existing social order; mere self-interest would do. What’s more, these people would have superintelligent AI advisors informing them that they could further their ideology or self-interest for as long as they want, and advising them on exactly how to go about it.
Finally, in the case of space settlement, assuming the defense-dominance of star systems, path-dependence occurs by default. Once some group has those resources, it gets to keep them indefinitely (if its members choose not to die), pass them on to its heirs, or trade them away. If there’s a formal allocation system, those who decide how to allocate property rights to star systems might not be concerned with how power is distributed between groups over the long term; nonetheless, the choice about the allocation process will greatly influence how long-term power is determined.
So it seems fairly likely that very extensive control over the future will become possible this century. But, once it’s possible, I think it’s fairly likely that some people (or beings) will in fact try to exert control over the future. Attempts to hold on to power or to entrench specific ideologies are so commonplace throughout history that it seems reasonably likely, on a “business as usual” understanding of how the world works, that people in power would try to do the same, for at least a short time period, once they get the chance. I give some historical examples in What We Owe The Future:
[V]alue systems entrench themselves, suppressing ideological competition. To see this, we can consider the many cultural and ideological purges that have occurred throughout history. Between 1209 and 1229 AD, Pope Innocent III carried out the Albigensian Crusade with the goal of eradicating Catharism, an unorthodox Christian sect, in southern France. He accomplished his goal, in part by killing about 200,000 Cathars, and Catharism was wiped out across Europe by 1350. British history is also replete with examples of monarchs trying to suppress religious opposition: in the 16th century, Mary I had Protestants burned at the stake and ordered everyone to attend Catholic Mass; just a few years later, Elizabeth I executed scores of Catholics and passed the baldly-named Act of Uniformity, which outlawed Catholic Mass and penalised people for not attending Anglican services.
Ideological purges have been common through the 20th century, too. In the Night of the Long Knives, Hitler crushed opposition from within his own party, cementing his position as supreme ruler of Germany. Stalin’s Great Terror between 1936 and 1938 murdered around 1 million people, purging the Communist Party and civil society of any opposition to him. In 1975-6, Pol Pot seized power in Cambodia and turned it into a one-party state known as Democratic Kampuchea. The Khmer Rouge had a policy of state atheism: religions were abolished and Buddhist monks were viewed as social parasites. In 1978, after consolidating his power, Pol Pot reportedly told members of his party that their slogan should be “Purify the Party! Purify the army! Purify the cadres!”
In more recent years, we’ve seen political leadership succeed at entrenching and extending their power in Russia, China, India, Hungary, Turkey and Belarus.
But, once someone has entrenched their own power for a short period of time, why should they not do so for a little bit longer? Whatever you value, it helps to continue to have power into the future in order to protect or promote those values. And because other people want power, you need to fight to maintain and entrench your own. Indeed, not locking in your values might seem morally reckless: would you want to risk society being taken over, at some point in the future, by a fascist regime? And, if you were fascist, would you want to risk your regime ultimately falling to communism or liberal democracy?

7. Conclusion

In this essay, I've addressed a common skeptical challenge to the better futures perspective: the worry that, short of extinction, our actions cannot have predictable and persistent influence on the very long-run future. This view suggests that only extinction prevention truly matters for longtermism, as all other interventions will eventually wash out.
In response, I’ve discussed multiple credible mechanisms through which values and institutional arrangements could become persistently path-dependent. The mechanisms of AGI-enforced institutions, immortality, strong self-modification, extreme technological advancement, global power concentration, and defense-dominance create conditions where initial states could determine long-term outcomes in predictable ways.
Given the high probability of AGI within our lifetimes, persistent path-dependence seems not just possible but reasonably likely. Rather than assuming our influence will fade over cosmic time, we should appreciate that aspects of civilisation’s trajectory may well get determined this century, and recognise the obligation that gives us to try to steer that trajectory in a positive direction.

Bibliography

Stuart Armstrong and Anders Sandberg, ‘Eternity in six hours: Intergalactic spreading of intelligent life and sharpening the Fermi paradox’, Acta Astronautica, August 2013.
Joe Carlsmith, ‘An even deeper atheism’, Joe Carlsmith.
Joe Carlsmith, ‘Value fragility and AI takeover’, 5 August 2024.
Tom Davidson, Lukas Finnveden, and Rose Hadshar, ‘AI-Enabled Coups: How a Small Group Could Use AI to Seize Power’, Forethought.
Lukas Finnveden, Jess Riedel, and Carl Shulman, ‘AGI and Lock-in’, Forethought.
Charles L. Glaser, ‘The Security Dilemma Revisited’, World Politics, October 1997.
Charles L. Glaser and Chaim Kaufmann, ‘What Is the Offense-Defense Balance and How Can We Measure It?’, International Security, April 1998.
Jason G. Goldman, ‘The Psychology of Dictatorship’, Scientific American.
Robin Hanson, Daniel Martin, Calvin McCarter, and Jonathan Paulson, ‘If Loud Aliens Explain Human Earliness, Quiet Aliens Are Also Rare’, The Astrophysical Journal, November 2021.
Robert Jervis, ‘Cooperation under the Security Dilemma’, World Politics, January 1978.
William MacAskill, ‘What We Owe the Future’, 2022.
Will MacAskill and Fin Moorhouse, ‘Preparing for the Intelligence Explosion’, Forethought.
Segreteria di Redazione, ‘Psychopathology of Dictators’, 24 July 2020.
Eliezer Yudkowsky, ‘Value is Fragile’, 29 January 2009.
