Quantifying the Continuum from Global Catastrophic Risk to Existential Risk

During the Cold War, Herman Kahn wrote about “thinking the unthinkable.” In that era, “the unthinkable” became a euphemism for a massive nuclear exchange, and especially for a MAD (mutually assured destruction) scenario. It is all too easy to surrender before a vision of such horror, but Kahn made an effort to rationally think through the escalation to, and the consequences of, nuclear war. Doing so was unpopular, and Kahn was both criticized and ridiculed for his efforts, but the alternative would have been to hand ourselves over to despair or panic, neither of which serves the public interest.

With the emergence of global catastrophic risk and existential risk as areas of study, we again have the opportunity to rationally think through scenarios that make most of us throw up our hands and prefer death to life in such a world, but think these scenarios through we must. Nothing is gained by avoiding them, and the older our civilization becomes, the more likely we are to encounter risks that no civilization in the past ever had to reckon with.

Global Catastrophic Risk (GCR) has been defined as, “…a risk that might have the potential to inflict serious damage to human well-being on a global scale,” whereas existential risk has been defined as, “…one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development.”

Between inflicting serious damage and the drastic destruction of human potential, and perhaps human extinction, there lies a continuum of risks of increasing magnitude. If, for anthropocentric convenience, we take the welfare of the human population as the single key variable by which to quantify this increasing risk from the catastrophic to the existential, we can break the continuum into chunks that together constitute a scale-based taxonomy. For convenience again, this time decimal convenience, let us divide the scale of risk into ten chunks of ten percent of the population each. For completeness, we will start with the null case, which is neither a GCR nor an existential risk.

A risk may have the potential to adversely affect:

  • 0% of the human population (no risk)
  • 10% of the human population
  • 20% of the human population
  • 30% of the human population
  • 40% of the human population
  • 50% of the human population
  • 60% of the human population
  • 70% of the human population
  • 80% of the human population
  • 90% of the human population
  • 100% of the human population (human extinction)

Now let’s take these percentages and break them down into familiar quantifiers:

  • No risk (0% of the human population affected)
  • Small risk (1–10% of the human population affected)
  • Some risk (10–40% of the human population affected)
  • Moderate risk (40–60% of the human population affected)
  • High risk (60–90% of the human population affected)
  • Extreme risk (90–99% of the human population affected)
  • Extinction risk (100% of the human population affected)
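As a minimal sketch, this bracketing can be expressed as a simple classification function. The function name and the half-open boundary handling below are assumptions of the sketch, since the ranges above share their endpoints:

```python
def risk_bracket(affected_fraction: float) -> str:
    """Map the fraction of the human population affected (0.0 to 1.0)
    to one of the named risk brackets above.

    Boundary handling is an assumption: each bracket is treated as
    half-open (lower bound inclusive, upper bound exclusive), because
    the prose ranges overlap at their endpoints.
    """
    if not 0.0 <= affected_fraction <= 1.0:
        raise ValueError("affected_fraction must be between 0 and 1")
    if affected_fraction == 0.0:
        return "No risk"
    if affected_fraction < 0.10:
        return "Small risk"
    if affected_fraction < 0.40:
        return "Some risk"
    if affected_fraction < 0.60:
        return "Moderate risk"
    if affected_fraction < 0.90:
        return "High risk"
    if affected_fraction < 1.0:
        return "Extreme risk"
    return "Extinction risk"


# Examples drawn from the historical cases discussed below:
print(risk_bracket(0.04))      # 1918 pandemic, roughly 3-5% dead -> "Small risk"
print(risk_bracket(1 / 3))     # Black Death, a third to a half dead -> "Some risk"
print(risk_bracket(1.0))       # total loss -> "Extinction risk"
```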

Now, from the perspective of almost any sane person, a calamity that resulted in the death of half of the human population would not be called “moderate,” but here we are talking about scenarios ranging from global catastrophic risk to existential risk, so our scale of values is shifted accordingly. During the Black Death humanity experienced a scenario of “some risk” in the above table, and while the Black Death was perhaps the most catastrophic demographic event in human history, not only did humanity survive this brush with death, but human civilization survived this catastrophic loss as well, although the culture was changed by the experience.

In each of these brackets of risk, distinct scenarios would play out. Above I gave the example of the Black Death, which killed between a third and a half of the planet’s human population. The Black Death has been extensively studied, so we know much about the social and economic consequences of a dire pandemic. We know, for example, that labor became more expensive, that there was a massive generational transfer of wealth, and that, in some cases, entire villages disappeared and were never repopulated, while the major cities all survived.

There is also the example of the 1918 pandemic, which killed between 3 and 5 percent of the planetary population, an order of magnitude smaller than the Black Death, but because this occurred in the context of a much larger human population, it still entailed a very high number of deaths. Coming in the wake of the First World War, when the world was tired of fighting, the 1918 influenza pandemic arrived under unique conditions that may have prevented it from causing further conflicts. The 1918 pandemic was, in the above table, the realization of a small risk, and clearly a global catastrophic event.

The 1918 pandemic didn’t snowball into something worse, possibly because it occurred immediately after a major war, but more probably because the nature of the catastrophe and the number of deaths did not affect essential social services or the functioning of the economy. It would be easy to construct scenarios in which an event that did not outright kill a large number of human beings triggered a domino effect that killed much larger numbers.

For example, in the event of catastrophic, rapid climate change, in which enough polar ice melted to inundate all of the planet’s coastal cities, the initial death toll would probably be less than ten percent, thus a small risk in the above table. However, this event would be so economically and socially disruptive that subsequent social breakdown and conflict would probably magnify it until the death toll reached 20 to 30 percent of the planetary population. What I mean by this is that many governments would not survive a catastrophe of this magnitude, and the resulting uncontrolled flow of refugees to safe areas, together with the attempts by populations in those areas to prevent refugees from entering and swamping them, would probably result in conflicts that killed more people than the rising ocean levels did.

Generally speaking, any catastrophic event that seriously impacted or degraded the global system of food production and distribution (and, to a lesser extent, access to clean water and electricity) would rapidly lead to conflict over food supplies. The planet can only support its human population of seven billion or more with a stable network of food production and distribution industries, and very little would be required to upset this delicate balance.

At the other end of the spectrum, closer to existential risk than to global catastrophic risk, this particular problem would not be as severe. An event that resulted in the deaths of more than half of humanity would leave behind substantial stores of food and material goods, so that the survivors would be able to live until food production and distribution could be restarted, should this ever occur. The danger at these levels of mortality is that civilization and its institutions would collapse and could not be restored in their present form by a much smaller population. Specialist areas of knowledge would be lost, and industry would revive slowly and on a much smaller scale, if at all.

Recently I listened to a reading of Stephen King’s The Stand (I listened to the longer 1990 edition; I read the original edition not long after it appeared in paperback in 1980), which is, in a sense, an extended thought experiment in what happens when almost everyone dies — that is to say, the realization of an extreme risk event. In the book, the plague kills more than 99% of the population. However, because there were so many human beings to begin with, the survivors, if they gather in a few places, can still fill cities with people. It takes time for this to occur, for electricity to be restored to these cities, and for some semblance of normal life to be reconstituted.

Returning to this book after so many decades, I noticed things that I had not noticed the first time I read it. In terms of material goods, anyone can take whatever they want, and there is plenty of canned food to last the few survivors. So the survivors are not confronted with conflicts over scarce resources, as would happen with any catastrophe that failed to kill large numbers of people at the outset. However, one of the themes of The Stand is that after such a catastrophe there are enormous numbers of weapons of war just lying around, free for anyone to pick up. This has consequences in the novel, as it would in fact if such an event were to come to pass.

It would be possible (and perhaps a salutary intellectual exercise) to go through each quantification of risk and to determine the unique combination of challenges and opportunities facing the survivors of a catastrophic scenario in each risk bracket. Different strategies and different tools would be necessary in each case to salvage civilization and the higher emergent complexities that human beings have generated as a consequence of civilization. And if civilization could not be salvaged, again, different strategies and different tools would be called for to preserve what could be preserved of the record of humanity and its civilization, whether for our distant posterity or for the edification of some other species that might come to study our remains.

Originally published at geopolicraticus.tumblr.com.
