I have been reading some very pessimistic books lately, for example “Falter” and “Novacene”. Hence “The Precipice: Existential Risk and the Future of Humanity” by Toby Ord. Humanity’s power has reached a point where we pose a serious risk to our own existence. How we react to this risk is up to us. The problem is not so much an excess of technology as a lack of wisdom. Either humanity takes control of its destiny and reduces the risk to a sustainable level, or we destroy ourselves. Ord puts the existential risk this century at around one in six: Russian roulette.
We have been around for a long time. Our species, Homo sapiens, arose on the savannahs of Africa 200,000 years ago. What set us apart was not physical but mental—our intelligence, creativity and language. Each human’s ability to cooperate with the dozens of other people in their band was unique among large animals. In ecological terms, it is not a human that is remarkable, but humanity.
Iteration and cooperation
Instead of dozens of humans in cooperation, we had tens of thousands cooperating across the generations, preserving and improving ideas through deep time. Little by little, our knowledge and our culture grew. At several points in the long history of humanity, there has been a great transition: a change in human affairs that accelerated our accumulation of power and shaped everything that would follow.
The first was the Agricultural Revolution. The next great transition was the Scientific Revolution. Soon, humanity underwent a third great transition: the Industrial Revolution. We are now in the fourth (digital), fifth (quantum) or sixth (frequency) revolution.
One of the clearest trends is towards the gradual expansion of the moral community, with the recognition of the rights of women, children, the poor, foreigners and ethnic or religious minorities. And in the last sixty years, we have added the environment and the welfare of animals to our standard picture of morality. Every day we are the beneficiaries of uncountable innovations made by people over hundreds of thousands of years.
Mammalian species typically survive for around one million years before they go extinct; our close relative, Homo erectus, survived for almost two million. If we think of one million years in terms of a single, eighty-year life, then today humanity would be in its adolescence—sixteen years old; just coming into our power; just old enough to get ourselves in serious trouble.
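The arithmetic behind that analogy is simple proportion; here is a minimal sketch using the figures from the text (a ~200,000-year-old species, a typical one-million-year mammalian lifespan, and an eighty-year human life):

```python
species_age_years = 200_000         # how long Homo sapiens has existed (from the text)
typical_lifespan_years = 1_000_000  # typical survival time of a mammalian species
human_lifespan_years = 80           # the analogy's single human life

# Map our species' age onto a single eighty-year life
equivalent_age = species_age_years / typical_lifespan_years * human_lifespan_years
print(equivalent_age)  # 16.0, adolescence, as the book says
```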
The question the book asks is how long we could survive on Earth. With the detonation of the first atomic bomb, a new age of humanity began: our rapidly accelerating technological power finally reached the threshold where we might be able to destroy ourselves.
Consider a world in ruins: an immense catastrophe has triggered a global collapse of civilisation, reducing humanity to a pre-agricultural state. During this catastrophe, the Earth’s environment was damaged so severely that it has become impossible for the survivors to ever re-establish civilisation.
Or consider a world in chains: in a future reminiscent of George Orwell’s Nineteen Eighty-Four, the entire world has become locked under the rule of an oppressive totalitarian regime, determined to perpetuate itself. Through powerful, technologically-enabled indoctrination, surveillance and enforcement, it has become impossible for even a handful of dissidents to find each other, let alone stage an uprising. With everyone on Earth living under such rule, the regime is secure against threats, internal and external. If such a regime could be maintained indefinitely, then descent into this totalitarian future would have much in common with extinction: just a narrow range of terrible futures remaining and no way out.
Ord lists the risks:
- Asteroids and comets. Over the next hundred years, the probability of an Earth impact is about one in 120,000 for asteroids between one and ten kilometres across, and about one in 150 million for those above ten kilometres.
- Super-volcanic eruption. A recent review gave a central estimate of one per 20,000 years, with substantial uncertainty.
- Stellar explosions. A supernova or gamma-ray burst close to our Solar System could have catastrophic effects. In an average century, the chance of such an event is about one in 5 million for supernovae and one in 2.5 million for gamma-ray bursts.
- The passage of a star through our Solar System could disrupt planetary orbits, causing the Earth to freeze or boil or even crash into another planet. But this has only a one in 100,000 chance over the next 2 billion years.
- The Earth’s entire magnetic field can shift dramatically and sometimes reverses its direction entirely.
- The chance of a full-scale nuclear war has significantly changed over time. For our purposes, we can divide it into three periods: the Cold War, the present, and the future. Recent years have witnessed the emergence of new geopolitical tensions that may again raise the risks of deliberate war—between the old superpowers or new ones.
- The most important known effect of climate change from the perspective of direct existential risk is probably the most obvious: heat stress. The most extreme climate possibility is known as a ‘runaway greenhouse effect’: an amplifying feedback loop in which warming continues until the oceans have mostly boiled off, leaving a planet incompatible with complex life. From an existential risk perspective, a more serious concern is that the high temperatures (and the rapidity of their change) might cause a large loss of biodiversity and subsequent ecosystem collapse.
- The risk of pandemics. The world has become so interconnected that a truly global pandemic is possible. Evidence suggests that diseases are crossing over into human populations from animals at an increasing rate. And then there is the possibility of engineered pandemics.
- Farming has increased the chance of infections from animals; improved transportation has made it easier to spread to many subpopulations in a short time, and increased trade has seen us utilise this transportation very frequently.
- Biotechnology will bring major improvements in medicine, agriculture and industry. But it will also bring risks to civilisation and humanity itself: both from accidents during legitimate research and from engineered bioweapons. One of the most exciting trends in biotechnology is its rapid democratisation—the speed at which students and amateurs can adopt cutting-edge techniques. Then there are laboratory escapes; the book lists several incidents, from smallpox to anthrax.
- Alongside the threat of accident is the danger of deliberate misuse. Unrestricted DNA synthesis would help bad actors overcome a major hurdle to creating extremely deadly pathogens. Deaths from war and terror appear to follow power laws with especially heavy tails, such that the majority of the deaths happen in the few biggest events. Expect bigger attacks.
- Unaligned artificial intelligence. The most plausible existential risk would come from success in AI researchers’ grand ambition of creating agents with a general intelligence that surpasses our own. There is good reason to expect a sufficiently intelligent system to resist our attempts to shut it down. Such resistance would not be driven by emotions like fear, resentment, or the urge to survive. Instead, it would follow directly from the system’s single-minded preference to maximise its reward: being turned off is a form of incapacitation which would make it harder to achieve high reward, so the system is incentivised to avoid it. The real issue is that AI researchers don’t yet know how to make a system that, upon noticing this misalignment, updates its ultimate values to align with ours rather than updating its instrumental goals to overcome us. How could an AI system seize control? First, it could gain access to the internet and hide thousands of backup copies, scattered among insecure computer systems around the world, ready to wake up and continue the job if the original is removed. Of course, no current AI system can do any of these things. But the question we’re exploring is whether there are plausible pathways by which a highly intelligent AGI system might seize control. And the answer appears to be ‘yes’.
- One of the most transformative technologies that might be developed this century is nanotechnology. Such a powerful technology may pose some existential risk.
- A very different kind of risk may come from our explorations beyond the Earth. This raises the possibility of ‘back contamination’ in which microbes from Mars might compromise the Earth’s biosphere.
- The extra-terrestrial risk that looms largest in popular culture is conflict with a spacefaring alien civilisation.
- Another kind of anthropogenic risk comes from our most radical scientific experiments.
| Existential catastrophe | Chance within the next 100 years |
| --- | --- |
| Asteroid or comet impact | ~ 1 in 1,000,000 |
| Supervolcanic eruption | ~ 1 in 10,000 |
| Stellar explosion | ~ 1 in 1,000,000,000 |
| Total natural risk | ~ 1 in 10,000 |
| Nuclear war | ~ 1 in 1,000 |
| Climate change | ~ 1 in 1,000 |
| Other environmental damage | ~ 1 in 1,000 |
| ‘Naturally’ arising pandemics | ~ 1 in 10,000 |
| Engineered pandemics | ~ 1 in 30 |
| Unaligned artificial intelligence | ~ 1 in 10 |
| Unforeseen anthropogenic risks | ~ 1 in 30 |
| Other anthropogenic risks | ~ 1 in 50 |
| Total anthropogenic risk | ~ 1 in 6 |
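As a rough sanity check on how the individual estimates relate to the total, here is a minimal sketch that naively combines the anthropogenic estimates, assuming the risks are independent. This is an assumption Ord himself does not make (his ~1 in 6 total is a holistic judgement, not a sum), but the naive combination does land in the same neighbourhood:

```python
# Ord's per-risk estimates for the next century, as listed above
anthropogenic_risks = {
    "nuclear war": 1 / 1_000,
    "climate change": 1 / 1_000,
    "other environmental damage": 1 / 1_000,
    "naturally arising pandemics": 1 / 10_000,
    "engineered pandemics": 1 / 30,
    "unaligned artificial intelligence": 1 / 10,
    "unforeseen anthropogenic risks": 1 / 30,
    "other anthropogenic risks": 1 / 50,
}

# P(at least one catastrophe) = 1 - P(none occurs), assuming independence
p_none = 1.0
for p in anthropogenic_risks.values():
    p_none *= 1 - p
total = 1 - p_none

print(f"Naive combined risk: {total:.3f} (~1 in {1 / total:.1f})")
```

The naive total comes out near 0.18, close to, though not exactly, the 1 in 6 (~0.17) Ord gives; the gap is a reminder that his headline figure reflects judgement about correlated and overlapping risks rather than arithmetic over the table.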
We design our societies to deliberately stage adolescents’ access to risky technologies: for example, preventing them from driving a car until they reach an appropriate age and pass a qualifying test. One could imagine applying a similar approach to humanity. Our current predicament stems from the rapid growth of humanity’s power outstripping the slow and unsteady growth of our wisdom.
Sadly, most of the existential risks we’ve considered are neglected, receiving substantially less attention than they deserve. While it is difficult to precisely measure global spending on existential risk, we can state with confidence that humanity spends more on ice cream every year than on ensuring that the technologies we develop do not destroy us. The importance of a problem is the value of solving it. What we do with our future is up to us. Our choices determine whether we live or die.
First, we can’t rely on our current intuitions and institutions, which have evolved to deal with small- or medium-scale risks. The second challenge is that we cannot afford to fail even once. These are extremely challenging circumstances for sound policy-making—perhaps beyond the abilities of even the best-functioning institutions today. The third challenge is one of knowledge. How are we to predict, quantify or understand risks that have never transpired? This creates a need for international coordination on existential risk. But it is very unclear at this stage what form such coordination should take. And 195 countries may mean 195 chances that poor governance precipitates the destruction of humanity.
We can do better
Human life, for all its joys, could be dramatically better than it is today. Our full potential for flourishing remains undreamt. Consider the parts of your life when you brushed up against true happiness. Or consider your peak experiences. Those individual moments where you feel most alive, where you are rapt with wonder or love or beauty. Magic. We have seen enough to know that life can offer something far grander and more alive than the standard fare. If humanity can survive, we may one day learn to dwell more and more deeply in such vitality.
Rising to our full potential for flourishing would likely involve us being transformed into something beyond the humanity of today. We can already see additional avenues for transformation on the horizon, such as implants granting digital extensions to our minds or developments in artificial intelligence allowing us to craft entirely new kinds of beings to join us or replace us.
We are only starting
Consider, for example, how little we know of how ultraviolet light looks to a finch; of how echolocation feels to a bat or a dolphin; of the way that a red fox, or a homing pigeon, experiences the Earth’s magnetic field. Yet how strange it would be if this single species of ape, equipped by evolution with this limited set of sensory and cognitive capacities, after only a few thousand years of civilisation, ended up anywhere near the maximum possible quality of life.