Introduction

This is the way the world ends
This is the way the world ends
This is the way the world ends
Not with a bang but a whimper.
From The Hollow Men by T. S. Eliot (1925)

If you want to turn your science fiction story up to eleven, the traditional trope is to destroy an entire planet (just ask planet Alderaan). Preferably by blowing up Terra. Though that has gotten to be a bit stale, so there are some stories that destroy the entire universe. In James Blish's fourth "Cities in Flight" novel The Triumph of Time the protagonists make the unsettling discovery that our universe is only months away from merging with another universe — composed of pure antimatter. Egads.

The best list of these on the net used to be Exit Mundi, a collection of end-of-world scenarios. The site has vanished, but can be found archived in the Wayback Machine. Another good source is the TV Tropes entry for Apocalypse How. Also good is the Wikipedia article on Global Catastrophic Risk.

Classification

This classification system is from TV Tropes and is partially based on Bruce Sterling's analysis.

Scope
Focused Destruction: A small localized area undergoes a species-level or higher apocalypse. The rest of the world at large is totally unaffected, maybe not even knowing of the events happening in the affected area.
Regional: A part of a continent or landmass, be it a province/state, geographical region, or sub-continent (e.g. "California"/"Uganda", "Sub-Saharan Africa", "India", etc).
Continental: An entire continent or landmass ("Oceania", "The Americas", "Eurasia", etc).
Planetary: An entire planet, or the vast majority of one. (If the given setting does not involve space travel and/or other worlds, then the scale effectively stops here, or skips up to Multiversal if the other worlds are not elsewhere in space, but do exist.)
Stellar: A solar system, every planet orbiting a star, the star itself, or the star plus everything in its orbit.
Galactic: A galaxy, most or all of its stars, up to all mass associated with it.
Universal: The entire universe, all or most galaxies within it, or all major galaxy filaments or equivalent highest-level structures.
Multiversal: Multiple universes, or whatever exists outside of the setting's native universe (includes whichever flavour of Another Dimension is on offer).
Omniversal: All universes or all possible universes, everything that exists, or reality itself; up to some abstract ontological limit if the setting includes explicit metaphysical stipulations.
Severity
Societal Disruption: Civilization survives intact, but is forever altered. This may be due to the sheer amount of damage caused lowering the standard of living, or it may be a result of people being forced to adapt to the new threat(s) they face.
Societal Collapse: Humanity backslides within the affected area, regressing to pre-industrial level at best and pre-agricultural at worst. Civilization may recover on its own, but not for centuries at the least.
Species Extinction: A dominant or major species is either wiped out completely or reduced to such a low population level that its recovery is virtually impossible barring intervention by an outside force.
Total Extinction: Life itself ends. No living organism of any kind exists within the affected area.
Physical Annihilation: The affected area physically ceases to exist as it did before, but remnants of it can still be found; it's nuked into glass, sunk into the ocean, or blasted into asteroids.
Metaphysical Annihilation: The affected area ceases to exist totally, without remainder, or perhaps even to have ever existed; this usually involves erasing it from time. This may go up to the elimination of even the possibility of the existence of anything like the affected area, if for instance the basic system of reality is changed or wiped out. This may get highly abstract, depending on how fundamental the negation is.
Classes
Class 0: Regional Catastrophe (Regional/Societal Disruption or Regional/Societal Collapse).
(examples: moderate-case global warming, minor asteroid impact, local thermonuclear war)
Global civilization not eliminated, but regional civilizations effectively destroyed; millions to hundreds of millions dead, but large parts of humankind retain current social and technological conditions. Chance of humankind recovery: excellent. Species local to the catastrophe likely die off, and post-catastrophe effects (refugees, fallout, etc.) may kill more. Chance of biosphere recovery: excellent.
Class 1: Human Die-Back (Planetary/Societal Disruption).
(examples: extreme-case global warming, moderate asteroid impact, global thermonuclear war)
Global civilization set back to pre- or low-industrial conditions; several billion or more dead, but human species as a whole survives, in pockets of varying technological and social conditions. Chance of humankind recovery: moderate. Most non-human species on brink of extinction die off, but most other plant and animal species remain and, eventually, flourish. Chance of biosphere recovery: excellent.
Class 2: Civilization Extinction (Planetary/Societal Collapse).
(examples: worst-case global warming, significant asteroid impact, early-era molecular nanotech warfare)
Global civilization destroyed; millions (at most) remain alive, in isolated locations, with ongoing death rate likely exceeding birth rate. Chance of humankind recovery: slim. Many non-human species die off, but some remain and, over time, begin to expand and diverge. Chance of biosphere recovery: good. This takes an entire planet back to at least pre-industrial days, if not hunter-gatherer days.
Class 3a: Engineered Human Extinction (Planetary/Species Extinction; dominant species, engineered).
(examples: targeted nano-plague, engineered sterility absent radical life extension)
Global civilization destroyed; all humans dead. Conditions triggering this are human-specific, so other species are, for the most part, unaffected. Chance of humankind recovery: nil. Chance of biosphere recovery: excellent. Extinction via unnatural causes (i.e., someone did something, human or otherwise).
Class 3b: Natural Human Extinction (Planetary/Species Extinction; dominant species, natural).
(examples: major asteroid impact, methane clathrates melt)
Extinction via natural causes. Global civilization destroyed; all humans dead. Conditions triggering this are general and global, so other species are greatly affected, as well. Chance of humankind recovery: nil. Chance of biosphere recovery: moderate.
Class 4: Biosphere Extinction (Planetary/Species Extinction; several species).
(examples: massive asteroid impact, "iceball Earth" reemergence, late-era molecular nanotech warfare)
Global civilization destroyed; all humans dead. Biosphere massively disrupted, with the wholesale elimination of many niches. Chance of humankind recovery: nil. Chance of biosphere recovery: slim. Chance of eventual re-emergence of organic life: good. Not only are humans gone, but most critters with them, leaving only a select few to evolve and refill the biosphere (or, as the name suggests, what's left of it).
Class 5: Planetary Extinction (Planetary/Species Extinction; all multicellular life).
(examples: dwarf-planet-scale asteroid impact, nearby gamma-ray burst)
Global civilization destroyed; all humans dead. Biosphere effectively destroyed; all species extinct. Geophysical disruption sufficient to prevent or greatly hinder re-emergence of organic life. The planet may be fit for re-habitation, but in the meantime, there's nothing more complex than bacteria left.
Class 6: Planetary Desolation (Planetary/Total Extinction).
The planet is left as a lifeless husk.
Class X: Planetary Annihilation (Planetary/Physical Annihilation).
(example: post-Singularity beings disassemble planet to make computronium)
Global civilization destroyed; all humans dead. Ecosystem destroyed; all species extinct. There used to be a planet here. There isn't anymore; it's gone.
Class X-2: Stellar Destruction (Stellar/Physical Annihilation).
You know that big ball of hydrogen/helium fusion and the bunch of rocks that used to circle it? Yeah, they ain't here no mo'. This usually happens due to that particular fusion ball doing something unpleasant like going supernova.
Class X-3: Galactic Scale Destruction (Galactic/Physical Annihilation).
Via some means, billions of stars, nebulae, pulsars, and so forth, along with the super-massive galactic-core Black Hole(s) at its center are destroyed. Utterly.
Class X-4: Universal Destruction (Universal/Physical Annihilation).
Everything that has ever been observed by anyone, anywhere. Eradicated. Or at the very least, not organized into galaxies, stars, and planets anymore. It is the end of all things. Unless there are other dimensions; those are safe.
Class X-5: Multi-universal Destruction (Multiversal/Physical Annihilation).
If there are alternate dimensions or different realities or whatever in this fiction, then many of those go away.
Class Z: Total Destruction Of All Of Reality (Omniversal/Physical Annihilation or Omniversal/Metaphysical Annihilation).
Not just the universe, and not just other universes, but all places and things that can be said to physically exist get wiped out somehow.

Warning Signs for Tomorrow

Anders Sandberg has created a brilliant set of "warning signs" to alert people of futuristic hazards. Some are satirical, but they are all very clever. There are larger versions of the signs here.

By Their Warnings Shall Ye Know Them
Structural Metallic Oxygen
Do not remove from cryogenic environment.
Chemosensitive Components
Not for use in reducing atmospheres or vacuum.
Defended Privacy Boundary
Do not transgress with active sensoria.
Polyspecific Synthdrink
Please check biochemical compatibility coding before consuming or tasting.
Low Bandwidth Zone
Do not enter with weave-routed cloud cognition, full-spectrum telepresence, or other high-quota services in effect.
Assemblers In Use
Contains active nanodevices; do not break hermetic seal while blue status light is illuminated.
Spin Gravity
Gravity may vary with direction of motion. Freely moving objects travel along curved paths.
Motivation Hazard
Do not ingest if operating under a neurokinin/nociceptin addiction/tolerance regime.
No Ubiquitous Surveillance
Unmonitored hazards may exist within this zone. Manual security and emergency response calls required.
Unauthorized Observation Prohibited
Tangle channels utilize macroscale quantum systems; do not expose to potential sources of decoherence.
Diamondoid Surfaces
Slippery even when dry.
Active Sophotechnology
WARNING: This hyperlink references gnostic overlay templates that may affect your present personality, persona or consciousness. Are you sure you wish to proceed?
Dynamic Spacetime
Contraterragenesis reactor contains a synthetic spinning/charged gravitational singularity of mass 120,000,000 tons. Take all appropriate precautions during servicing.
Environmental Nanoswarms
CAUTION: Palpable microwave pulsation power feeds in operation.
Legislative Boundary
The zone you are entering exercises private legislative privilege under the Conlegius Act. By entering voluntarily you accede to compliance with applicable private law (see v-tag).
From By Their Warnings Shall Ye Know Them by Alistair Young (2015). Collected in The Core War and Other Stories

Theoretical Threats

There are a couple of theoretical reasons to expect that an apocalypse is due, even though the exact type of apocalypse is unknown. These tend to keep theorists up at night, staring at the ceiling.

The Fermi Paradox

The Fermi Paradox points out that:

  • There is a high probability of large numbers of alien civilizations
  • But we don't see any

So by the observational evidence, there are no alien civilizations. The trouble is that this means our civilization shouldn't be here either, yet we are.

The nasty conclusion is that our civilization is here, so far. But our civilization is fated for death, and the probability is death sooner rather than later. This is called The Great Filter, and it is a rather disturbing thought. For a detailed explanation read the original article by Robin Hanson.

The Great Filter is something that prevents dead matter from giving rise, in time, to "expanding lasting life". The hope is that us humans are here because our species has somehow managed to evade the Great Filter (i.e., the Great Filter prevents the evolution of intelligent life). The fear is that the Great Filter lies ahead of us and could strike us down any minute (i.e., the Great Filter is either a near 100% chance of self destruction, or something implacable that hunts down and kills intelligent life).

The unnerving part is the implication. The easier it turns out to be for life to evolve on its own (for example, if life were discovered in the underground seas of Europa), the higher the probability that the Great Filter lies ahead of us.

The End Of The Worlds

This matters, since most existential risks (xrisk) we worry about today (like nuclear war, bioweapons, global ecological/societal crashes) only affect one planet. But if existential risk is the answer to the Fermi question, then the peril has to strike reliably. If it is one of the local ones it has to strike early: a multi-planet civilization is largely immune to the local risks. It will not just be distributed, but it will almost by necessity have fairly self-sufficient habitats that could act as seeds for a new civilization if they survive. Since it is entirely conceivable that we could have invented rockets and spaceflight long before discovering anything odd about uranium or how genetics work it seems unlikely that any of these local risks are “it”. That means that the risks have to be spatially bigger (or, of course, that xrisk is not the answer to the Fermi question).

Of the risks mentioned by George, physics disasters are intriguing, since they might irradiate solar systems efficiently. But the reliability of them being triggered before interstellar spread seems problematic. Stellar engineering, stellification and orbit manipulation may be issues, but they hardly happen early — lots of time to escape. Warp drives and wormholes are also likely late activities, and do not seem to be reliable as extinctors. These are all still relatively localized: while able to irradiate a largish volume, they are not fine-tuned to cause damage and do not follow fleeing people. Dangers from self-replicating or self-improving machines seem to be a plausible, spatially unbound risk that could pursue (but also problematic for the Fermi question since now the machines are the aliens). Attracting malevolent aliens may actually be a relevant risk: assuming von Neumann probes one can set up global warning systems or “police probes” that maintain whatever rules the original programmers desire, and it is not too hard to imagine ruthless or uncaring systems that could enforce the great silence. Since early civilizations have the chance to spread to enormous volumes given a certain level of technology, this might matter more than one might a priori believe.

So, in the end, it seems that anything releasing a dangerous energy effect will only affect a fixed volume. If it has energy E and one can survive it below a deposited energy e, then if it just radiates in all directions the safe range is r = √(E/4πe) — one needs to get into supernova ranges to sterilize interstellar volumes. If it is directional the range goes up, but smaller volumes are affected: if a fraction f of the sky is affected, the range increases as 1/√f while the total volume affected scales as f·r³.
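
To put rough numbers on the isotropic case, here is a minimal sketch in Python. The supernova-scale energy (10^44 J) and the sterilizing fluence (10^7 J/m²) are illustrative assumptions, not figures from Sandberg's text; with them the "safe range" lands in the tens-to-hundreds of light years, which is why supernova-class events are the benchmark for sterilizing interstellar volumes.

    import math

    LY_M = 9.461e15  # metres per light year

    def safe_range_m(E, e):
        """Isotropic release: fluence at range r is E / (4*pi*r^2).
        The safe range is where that fluence falls to the survivable level e."""
        return math.sqrt(E / (4 * math.pi * e))

    E_supernova = 1e44   # J, order-of-magnitude photon + kinetic output (assumption)
    lethal_e    = 1e7    # J/m^2, assumed sterilizing fluence (assumption)

    r_iso = safe_range_m(E_supernova, lethal_e)
    print(f"isotropic danger range: {r_iso / LY_M:.0f} ly")   # ~90 ly with these inputs

    # Beaming the same energy into a fraction f of the sky stretches the range by 1/sqrt(f)
    f = 1e-4
    print(f"beamed danger range:    {r_iso / math.sqrt(f) / LY_M:.0f} ly (within the cone only)")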

Self-sustaining effects are worse, but they need to cross space: if their space range is smaller than interplanetary distances they may destroy a planet but not anything more. For example, a black hole merely absorbs a planet or star (releasing a nasty energy blast) but does not continue sucking up stuff. Vacuum decay on the other hand has indefinite range in space and moves at lightspeed. Accidental self-replication is unlikely to be spaceworthy unless it starts among space-moving machinery; here deliberate design is a more serious problem.

The speed of threat spread also matters. If it is fast enough no escape is possible. However, many of the replicating threats will have sublight speed and could hence be escaped by sufficiently paranoid aliens. The issue here is whether lightweight and hence faster replicators can always outrun larger aliens; given the accelerating expansion of the universe it might be possible to outrun them by being early enough, but our calculations do suggest that the margins look very slim.

The more information you have about a target, the better you can in general harm it. If you have no information, merely randomizing it with enough energy/entropy is the only option (and if you have no information of where it is, you need to radiate in all directions). As you learn more, you can focus resources to make more harm per unit expended, up to the extreme limits of solving the optimization problem of finding the informational/environmental inputs that cause desired harm (=hacking). This suggests that mindless threats will nearly always have shorter range and smaller harms than threats designed by (or constituted by) intelligent minds.

In the end, the most likely type of actual civilization-ending threat for an interplanetary civilization looks like it needs to be self-replicating/self-sustaining, able to spread through space, and have at least a tropism towards escaping entities. The smarter, the more effective it can be. This includes both nasty AI and replicators, but also predecessor civilizations that have infrastructure in place. Civilizations cannot be expected to reliably do foolish things with planetary orbits or risky physics.

From The End Of The Worlds by Anders Sandberg (2015)
On The Great Filter, Existential Threats, And Griefers

The key issue here is the nature of the Great Filter, something we talk about when we discuss the Fermi Paradox.

The Fermi Paradox: loosely put, we live in a monstrously huge cosmos that is rather old. We only evolved relatively recently — our planet is ~4.6GYa old, in a galaxy containing stars up to 10GYa old in a universe around 13.7GYa old. Loosely stated, the Fermi Paradox asks, if life has evolved elsewhere, then where is it? We would expect someone to have come calling by now: a five billion year head start is certainly time enough to explore and/or colonize a galaxy only 100K light years across, even using sluggish chemical rockets.
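
As a quick sanity check on that "head start" claim, the sketch below compares galaxy-crossing time at a deliberately modest probe speed with a five-billion-year lead; the 1% of lightspeed cruise speed is an assumption chosen for illustration, and even much slower probes with long stopovers at each colony still fit inside the head start with enormous room to spare.

    galaxy_diameter_ly = 100_000
    probe_speed_c      = 0.01        # assumed cruise speed: 1% of lightspeed
    head_start_yr      = 5e9

    crossing_yr = galaxy_diameter_ly / probe_speed_c
    print(f"crossing time: {crossing_yr:.0e} years")                 # 1e7 years
    print(f"head start is {head_start_yr / crossing_yr:.0f}x that")  # ~500x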

We don't see evidence of extraterrestrial life, so, as economist Robin Hanson pointed out, there must be some sort of cosmic filter function (The Great Filter) which stops life, if it develops, from leaving its star system of origin and going walkabout. Hanson described two possibilities for the filter. One is that it lies in our past (pGF): in this scenario, intelligent tool-using life is vanishingly rare because the pGF almost invariably exterminates planetary biospheres before they can develop it. (One example: gamma ray bursts may repeatedly wipe out life.) If this case is true, then we can expect to not find evidence of active biospheres on other planets. A few bacteria or archaea living below the Martian surface aren't a problem, but if our telescopes start showing us lots of earthlike planets with chlorophyll absorption lines in their reflected light spectrum (and oxygen-rich atmospheres) that would be bad news, because it would imply that the GF lies in our future (an fGF).

The implication of an fGF is that it doesn't specifically work against life; it works against interplanetary colonization. The fGF in this context might be an emergent property of space exploration, or it might be an external threat — or some combination: something so trivial that it happens almost by default when the technology for interstellar travel emerges, and shuts it down for everyone thereafter, much as Kessler syndrome could effectively block all access to low Earth orbit as a side-effect of carelessly launching too much space junk. Some example scenarios are given below.

RADIO TECHNOLOGY AND EXISTENTIAL RISK

Recently I noticed Voices in AI – Episode 16: A Conversation with Robert J. Sawyer, an interview by Byron Reese, which contained an interesting tidbit on existential risk. The interviewer, Byron Reese, in the course of the conversation, attributed a connection between radio technology and existential risk to Carl Sagan:

“[Sagan] said that his guess was civilizations had a hundred years after they got radio, to either destroy themselves, or overcome that tendency and go on to live on a timescale of billions of years.” 

Robert J. Sawyer picked up on this theme and elaborated:

“The window is very small to avoid the existential threats that come with radio. The line through the engineering and the physics from radio, and understanding how radio waves work, and so forth, leads directly to atomic power, leads directly to atomic weapons… and leads conceivably directly to the destruction of the planet.”

I wasn’t able to find a fully explicit statement of this thesis in Sagan’s writings (it may be there, of course, and I simply couldn’t find it), but I found something close to it: 

“Sagan has several possible explanations for why alien radio signals have proved so elusive. Maybe… ‘No civilization survives long enough to develop power levels adequate to make such communications. All civilizations destroy themselves shortly after achieving a technological level consonant with radio astronomy’.“ (Carl Sagan, Conversations with Carl Sagan, edited by Tom Head, p. 156)

My assumption is that the connection between radio technology and nuclear weapons obtains because radio technology means the use of electronics (the ability to build electronic devices, which are a much more sophisticated technology with more possibilities than, say, steam-power technology), as well as opening up the electromagnetic spectrum to scientific study. The study of electromagnetic radiation may inevitably result in the development of a more general conception of radiation that leads to nuclear science and the ability to build nuclear weapons.

I have called radio (along with fusion and consciousness, inter alia) Technologies of Nature, that is to say, processes for which nature provides an existence proof, and which we can, in the fullness of technological development, attempt to reverse engineer. It should not surprise us that with the reverse engineering of the cosmos – that which nature has shown us that it is possible to do – we would eventually converge on the knowledge and the technology that would allow us to undo what nature has done.  

In other contexts Sagan often formulated the maturity of civilizations in terms of their mastery of radio technology, specifically, the ability to build radio telescopes. For example, Sagan wrote:

“Radio astronomy on Earth is a by-product of the Second World War, when there were strong military pressures for the development of radar. Serious radio astronomy emerged only in the 1950s, major radio telescopes only in the 1960s. If we define an advanced civilization as one able to engage in long-distance radio communication using large radio telescopes, there has been an advanced civilization on our planet for only about ten years. Therefore, any civilization ten years less advanced than we cannot talk to us at all.” (Carl Sagan, The Cosmic Connection: An Extraterrestrial Perspective, Chap. 31)

For Sagan, radio telescopes are a crucial technology because they enable SETI, the search for extraterrestrial intelligence. Possessing a sensitive radio telescope would make it possible for a civilization to detect a SETI signal, and only a little more advanced technology would allow for the transmission of a SETI signal to some other civilization that might detect it. But radio is also a crucial technology because of its above-noted connection with nuclear technology and therefore with anthropogenic existential risk. 

In human history, we built nuclear weapons before we built radio telescopes, but this appears to be a merely contingent development, and the two were only separated by about ten years of history. It could easily have happened that human beings built radio telescopes before we built nuclear weapons, as another civilization might have done (or may yet do).

In a counterfactual history (which would also require a counterfactual universe, that is to say, a universe different from the universe in which we in fact find ourselves) in which human beings built radio telescopes, and as soon as they turned them on found that they lived in a densely populated universe, with the sky alive with signals from a proliferation of ETIs, and in which all this happened before we built nuclear weapons, the second half of the twentieth century would have been radically different than it was in fact.

This obviously didn’t happen, but we could push the counterfactual scenario harder, and we can imagine a civilization with more-or-less nineteenth century levels of technology (as when the telegraph and the telephone were invented) developing radio much earlier in its history than human beings did, relative to other technological developments. If we had had radio technology for a hundred years before we developed nuclear weapons, again history might have been radically different. And if we had nuclear weapons for a hundred years before we developed radio technology, again, this would have meant a radically different history.

All of these scenarios, and others as well, may be instantiated by other civilizations elsewhere in the cosmos. Sagan’s thesis as stated above by Reese and Sawyer posits that nuclear and radio technology are tightly-coupled. This may well be true, but it is not too difficult to imagine alternative scenarios in which nuclear and radio technology are only loosely-coupled, and I have elsewhere noted how in terrestrial history there are cases in which loosely-coupled science and technology prior to the industrial revolution could be separated by hundreds of years. If radio technology and nuclear technology were found separated by hundreds of years, a civilization might experience a longue durée in possession of the one without the other.

From RADIO TECHNOLOGY AND EXISTENTIAL RISK by J. N. Nielsen (2017)

The Doomsday Argument

The Doomsday Argument is a probabilistic argument that claims to predict the number of future members of the human species given only an estimate of the total number of humans born so far. Simply put, it says that supposing that all humans are born in a random order, chances are that any one human is born roughly in the middle.

The actual full-blown Doomsday Argument does not put any upper limit on the number of humans that will ever exist, nor provide a date for when humanity will become extinct. There is, however, an abbreviated form of the argument that does make these claims, by confusing probability with certainty. You may stumble over it some day, so don't be fooled.

The full-blown Doomsday Argument only makes the prediction that there is a 95% chance of extinction within 9,120 years.
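
That 9,120-year figure falls out of a short calculation under commonly quoted assumptions (roughly 60 billion humans born to date, population levelling off at 10 billion, 80-year lifespans); the sketch below just reproduces the arithmetic, with every input flagged as an assumption.

    born_so_far = 60e9      # humans born to date (assumption)
    confidence  = 0.95      # the 95% confidence level used in the argument
    population  = 10e9      # assumed stable future population
    lifespan    = 80.0      # years; births per year ~ population / lifespan

    total_cap     = born_so_far / (1 - confidence)    # 95% chance total births stay below this
    future_births = total_cap - born_so_far
    births_per_yr = population / lifespan

    print(f"{future_births / births_per_yr:.0f} years")   # ~9120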

Assorted Apocalypses

Energy Required To Destroy Terra

Joules (J) | TNT Equivalent | Notes
4.184 × 10^21 | 1 teraton | = 1000 gigatons = 1e6 megatons
5.43 × 10^23 | 120 Tt | 1 Chicxulub Crater = 1 Dinosaur Killer = 20 Shoemaker-Levys
3.0 × 10^24 | 720 Tt | 1 Wilkes Land crater = 6 Chicxulub Craters
4.184 × 10^24 | 1 petaton | = 1000 teratons
5.5 × 10^24 | 1 Pt | total energy from the Sun that strikes the face of the Earth each year
3.2 × 10^26 | 77 Pt | Energy required to blow off Terra's atmosphere into space
3.9 × 10^26 | 92 Pt | total energy output of the Sun each second (bolometric luminosity)
4.0 × 10^26 | 96 Pt | total energy output of a Type-II civilization (Kardashev scale) each second
6.6 × 10^26 | 158 Pt | Energy required to heat all the oceans of Terra to boiling
4.184 × 10^27 | 1 exaton | = 1000 petatons
4.5 × 10^27 | 1 Et | Energy required to vaporize all the oceans of Terra into the atmosphere
7.0 × 10^27 | 2 Et | Energy required to vaporize all the oceans of Terra and dehydrate the crust
2.9 × 10^28 | 7 Et | Energy required to melt the (dry) crust of Terra
1.0 × 10^29 | 24 Et | Energy required to blow off Terra's oceans into space
2.1 × 10^29 | 50 Et | Earth's rotational energy
1.5 × 10^30 | 359 Et | Energy required to blow off Terra's crust into space
4.184 × 10^30 | 1 zettaton | = 1000 exatons
2.9 × 10^31 | 7 Zt | Energy required to blow up Terra (reduce to gravel orbiting the sun)
3.3 × 10^31 | 8 Zt | total energy output of the Sun each day
3.3 × 10^31 | 8 Zt | total energy output of Beta Centauri each second (bolometric luminosity); 41,700 × luminosity of the Sun
5.9 × 10^31 | 14 Zt | Energy required to blow up Terra (reduce to gravel flying out of former orbit)
1.2 × 10^32 | 29 Zt | total energy output of Deneb each second (bolometric luminosity)
2.9 × 10^32 | 69 Zt | Energy required to blow up Terra (reduce to gravel and move pieces to infinity)
4.184 × 10^33 | 1 yottaton | = 1000 zettatons
1.2 × 10^34 | 3 Yt | total energy output of the Sun each year
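
As a sanity check on the "reduce to gravel and move pieces to infinity" row, the gravitational binding energy of a uniform-density Earth can be computed directly; the true value is somewhat higher than the uniform-sphere result because Earth's density is concentrated toward the core, which is consistent with the 2.9 × 10^32 J entry above.

    G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
    M = 5.972e24    # Earth mass, kg
    R = 6.371e6     # Earth mean radius, m

    # Binding energy of a uniform-density sphere: U = 3*G*M^2 / (5*R)
    U_uniform = 3 * G * M**2 / (5 * R)
    print(f"{U_uniform:.2e} J")   # ~2.2e32 J; the tabulated 2.9e32 J reflects Earth's real density profile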

Existential Risks

Planetary—Omniversal / Species Extinction—Metaphysical Annihilation

Existential Risks

3 Classification of existential risks

We shall use the following four categories to classify existential risks:

Bangs — Earth-originating intelligent life goes extinct in relatively sudden disaster resulting from either an accident or a deliberate act of destruction.

Crunches — The potential of humankind to develop into posthumanity is permanently thwarted although human life continues in some form.

Shrieks — Some form of posthumanity is attained but it is an extremely narrow band of what is possible and desirable.

Whimpers — A posthuman civilization arises but evolves in a direction that leads gradually but irrevocably to either the complete disappearance of the things we value or to a state where those things are realized to only a minuscule degree of what could have been achieved.

Armed with this taxonomy, we can begin to analyze the most likely scenarios in each category. The definitions will also be clarified as we proceed.


4 Bangs

This is the most obvious kind of existential risk. It is conceptually easy to understand. Below are some possible ways for the world to end in a bang. I have tried to rank them roughly in order of how probable they are, in my estimation, to cause the extinction of Earth-originating intelligent life; but my intention with the ordering is more to provide a basis for further discussion than to make any firm assertions.

4.1 Deliberate misuse of nanotechnology

In a mature form, molecular nanotechnology will enable the construction of bacterium-scale self-replicating mechanical robots that can feed on dirt or other organic matter. Such replicators could eat up the biosphere or destroy it by other means such as by poisoning it, burning it, or blocking out sunlight. A person of malicious intent in possession of this technology might cause the extinction of intelligent life on Earth by releasing such nanobots into the environment.

The technology to produce a destructive nanobot seems considerably easier to develop than the technology to create an effective defense against such an attack (a global nanotech immune system, an “active shield”). It is therefore likely that there will be a period of vulnerability during which this technology must be prevented from coming into the wrong hands. Yet the technology could prove hard to regulate, since it doesn’t require rare radioactive isotopes or large, easily identifiable manufacturing plants, as does production of nuclear weapons.

Even if effective defenses against a limited nanotech attack are developed before dangerous replicators are designed and acquired by suicidal regimes or terrorists, there will still be the danger of an arms race between states possessing nanotechnology. It has been argued that molecular manufacturing would lead to both arms race instability and crisis instability, to a higher degree than was the case with nuclear weapons. Arms race instability means that there would be dominant incentives for each competitor to escalate its armaments, leading to a runaway arms race. Crisis instability means that there would be dominant incentives for striking first. Two roughly balanced rivals acquiring nanotechnology would, on this view, begin a massive buildup of armaments and weapons development programs that would continue until a crisis occurs and war breaks out, potentially causing global terminal destruction. That the arms race could have been predicted is no guarantee that an international security system will be created ahead of time to prevent this disaster from happening. The nuclear arms race between the US and the USSR was predicted but occurred nevertheless.

4.2 Nuclear holocaust

The US and Russia still have huge stockpiles of nuclear weapons. But would an all-out nuclear war really exterminate humankind? Note that: (i) For there to be an existential risk it suffices that we can’t be sure that it wouldn’t. (ii) The climatic effects of a large nuclear war are not well known (there is the possibility of a nuclear winter). (iii) Future arms races between other nations cannot be ruled out and these could lead to even greater arsenals than those present at the height of the Cold War. The world’s supply of plutonium has been increasing steadily to about two thousand tons, some ten times as much as remains tied up in warheads. (iv) Even if some humans survive the short-term effects of a nuclear war, it could lead to the collapse of civilization. A human race living under stone-age conditions may or may not be more resilient to extinction than other animal species.

4.3 We’re living in a simulation and it gets shut down

A case can be made that the hypothesis that we are living in a computer simulation should be given a significant probability. The basic idea behind this so-called “Simulation argument” is that vast amounts of computing power may become available in the future, and that it could be used, among other things, to run large numbers of fine-grained simulations of past human civilizations. Under some not-too-implausible assumptions, the result can be that almost all minds like ours are simulated minds, and that we should therefore assign a significant probability to being such computer-emulated minds rather than the (subjectively indistinguishable) minds of originally evolved creatures. And if we are, we suffer the risk that the simulation may be shut down at any time. A decision to terminate our simulation may be prompted by our actions or by exogenous factors.

While to some it may seem frivolous to list such a radical or “philosophical” hypothesis next to the concrete threat of nuclear holocaust, we must seek to base these evaluations on reasons rather than untutored intuition. Until a refutation appears of the argument presented in [27], it would be intellectually dishonest to neglect to mention simulation-shutdown as a potential extinction mode.

4.4 Badly programmed superintelligence

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

4.5 Genetically engineered biological agent

With the fabulous advances in genetic technology currently taking place, it may become possible for a tyrant, terrorist, or lunatic to create a doomsday virus, an organism that combines long latency with high virulence and mortality.

Dangerous viruses can even be spawned unintentionally, as Australian researchers recently demonstrated when they created a modified mousepox virus with 100% mortality while trying to design a contraceptive virus for mice for use in pest control. While this particular virus doesn’t affect humans, it is suspected that an analogous alteration would increase the mortality of the human smallpox virus. What underscores the future hazard here is that the research was quickly published in the open scientific literature. It is hard to see how information generated in open biotech research programs could be contained no matter how grave the potential danger that it poses; and the same holds for research in nanotechnology.

Genetic medicine will also lead to better cures and vaccines, but there is no guarantee that defense will always keep pace with offense. (Even the accidentally created mousepox virus had a 50% mortality rate on vaccinated mice.) Eventually, worry about biological weapons may be put to rest through the development of nanomedicine, but while nanotechnology has enormous long-term potential for medicine it carries its own hazards.

4.6 Accidental misuse of nanotechnology (“gray goo”)

The possibility of accidents can never be completely ruled out. However, there are many ways of making sure, through responsible engineering practices, that species-destroying accidents do not occur. One could avoid using self-replication; one could make nanobots dependent on some rare feedstock chemical that doesn’t exist in the wild; one could confine them to sealed environments; one could design them in such a way that any mutation was overwhelmingly likely to cause a nanobot to completely cease to function. Accidental misuse is therefore a smaller concern than malicious misuse.

However, the distinction between the accidental and the deliberate can become blurred. While “in principle” it seems possible to make terminal nanotechnological accidents extremely improbable, the actual circumstances may not permit this ideal level of security to be realized. Compare nanotechnology with nuclear technology. From an engineering perspective, it is of course perfectly possible to use nuclear technology only for peaceful purposes such as nuclear reactors, which have a zero chance of destroying the whole planet. Yet in practice it may be very hard to avoid nuclear technology also being used to build nuclear weapons, leading to an arms race. With large nuclear arsenals on hair-trigger alert, there is inevitably a significant risk of accidental war. The same can happen with nanotechnology: it may be pressed into serving military objectives in a way that carries unavoidable risks of serious accidents.

In some situations it can even be strategically advantageous to deliberately make one’s technology or control systems risky, for example in order to make a “threat that leaves something to chance”.

4.7 Something unforeseen

We need a catch-all category. It would be foolish to be confident that we have already imagined and anticipated all significant risks. Future technological or scientific developments may very well reveal novel ways of destroying the world.

Some foreseen hazards (hence not members of the current category) which have been excluded from the list of bangs on grounds that they seem too unlikely to cause a global terminal disaster are: solar flares, supernovae, black hole explosions or mergers, gamma-ray bursts, galactic center outbursts, supervolcanos, loss of biodiversity, buildup of air pollution, gradual loss of human fertility, and various religious doomsday scenarios. The hypothesis that we will one day become “illuminated” and commit collective suicide or stop reproducing, as supporters of VHEMT (The Voluntary Human Extinction Movement) hope, appears unlikely. If it really were better not to exist (as Silenus told King Midas in the Greek myth, and as Arthur Schopenhauer argued although for reasons specific to his philosophical system he didn’t advocate suicide), then we should not count this scenario as an existential disaster. The assumption that it is not worse to be alive should be regarded as an implicit assumption in the definition of Bangs. Erroneous collective suicide is an existential risk albeit one whose probability seems extremely slight.

4.8 Physics disasters

The Manhattan Project bomb-builders’ concern about an A-bomb-derived atmospheric conflagration has contemporary analogues.

There have been speculations that future high-energy particle accelerator experiments may cause a breakdown of a metastable vacuum state that our part of the cosmos might be in, converting it into a “true” vacuum of lower energy density. This would result in an expanding bubble of total destruction that would sweep through the galaxy and beyond at the speed of light, tearing all matter apart as it proceeds.

Another conceivability is that accelerator experiments might produce negatively charged stable “strangelets” (a hypothetical form of nuclear matter) or create a mini black hole that would sink to the center of the Earth and start accreting the rest of the planet.

These outcomes seem to be impossible given our best current physical theories. But the reason we do the experiments is precisely that we don’t really know what will happen. A more reassuring argument is that the energy densities attained in present day accelerators are far lower than those that occur naturally in collisions between cosmic rays. It’s possible, however, that factors other than energy density are relevant for these hypothetical processes, and that those factors will be brought together in novel ways in future experiments.
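
To put numbers on the cosmic-ray comparison: even after converting to centre-of-mass terms (the fair comparison, since a cosmic ray slams into a nucleus that is essentially at rest), the most energetic cosmic rays observed still beat the LHC by a wide margin. The sketch below uses standard published values as illustrative inputs rather than figures from the text.

    import math

    m_p_c2_ev  = 0.938e9    # proton rest energy, eV
    lhc_com_ev = 13e12      # LHC proton-proton centre-of-mass energy, ~13 TeV
    cr_lab_ev  = 3e20       # "Oh-My-God"-class cosmic ray, lab-frame energy

    # Cosmic ray on a proton at rest: E_cm ~ sqrt(2 * E_lab * m_p*c^2)
    cr_com_ev = math.sqrt(2 * cr_lab_ev * m_p_c2_ev)
    print(f"cosmic-ray collision: ~{cr_com_ev / 1e12:.0f} TeV centre-of-mass "
          f"(~{cr_com_ev / lhc_com_ev:.0f}x the LHC)")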

The main reason for concern in the “physics disasters” category is the meta-level observation that discoveries of all sorts of weird physical phenomena are made all the time, so even if right now all the particular physics disasters we have conceived of were absurdly improbable or impossible, there could be other more realistic failure-modes waiting to be uncovered. The ones listed here are merely illustrations of the general case.

4.9 Naturally occurring disease

What if AIDS was as contagious as the common cold?

There are several features of today’s world that may make a global pandemic more likely than ever before. Travel, food-trade, and urban dwelling have all increased dramatically in modern times, making it easier for a new disease to quickly infect a large fraction of the world’s population.

4.10 Asteroid or comet impact

There is a real but very small risk that we will be wiped out by the impact of an asteroid or comet.

In order to cause the extinction of human life, the impacting body would probably have to be greater than 1 km in diameter (and probably 3 - 10 km). There have been at least five and maybe well over a dozen mass extinctions on Earth, and at least some of these were probably caused by impacts. In particular, the K/T extinction 65 million years ago, in which the dinosaurs went extinct, has been linked to the impact of an asteroid between 10 and 15 km in diameter on the Yucatan peninsula. It is estimated that a 1 km or greater body collides with Earth about once every 0.5 million years. We have only catalogued a small fraction of the potentially hazardous bodies.

If we were to detect an approaching body in time, we would have a good chance of diverting it by intercepting it with a rocket loaded with a nuclear bomb.
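
For scale, here is a back-of-the-envelope impact energy for the body sizes quoted above, assuming a stony density of 3,000 kg/m³ and a 20 km/s impact speed (both assumptions chosen for illustration); a 10 km impactor comes out in the same ballpark as the Chicxulub entry in the energy table earlier on this page.

    import math

    MEGATON_J = 4.184e15

    def impact_energy_j(diameter_m, density=3000.0, speed=20e3):
        """Kinetic energy of a spherical impactor (density in kg/m^3, speed in m/s)."""
        mass = density * (4 / 3) * math.pi * (diameter_m / 2) ** 3
        return 0.5 * mass * speed ** 2

    for d_km in (1, 10):
        E = impact_energy_j(d_km * 1000)
        print(f"{d_km:>2} km impactor: {E:.1e} J  (~{E / MEGATON_J:.1e} Mt TNT)")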

4.11 Runaway global warming

One scenario is that the release of greenhouse gases into the atmosphere turns out to be a strongly self-reinforcing feedback process. Maybe this is what happened on Venus, which now has an atmosphere dense with CO2 and a temperature of about 450 °C. Hopefully, however, we will have technological means of counteracting such a trend by the time it would start getting truly dangerous.


5 Crunches

While some of the events described in the previous section would be certain to actually wipe out Homo sapiens (e.g. a breakdown of a meta-stable vacuum state) others could potentially be survived (such as an all-out nuclear war). If modern civilization were to collapse, however, it is not completely certain that it would arise again even if the human species survived. We may have used up too many of the easily available resources a primitive society would need to use to work itself up to our level of technology. A primitive human society may or may not be more likely to face extinction than any other animal species. But let’s not try that experiment.

If the primitive society lives on but fails to ever get back to current technological levels, let alone go beyond it, then we have an example of a crunch. Here are some potential causes of a crunch:

5.1 Resource depletion or ecological destruction

The natural resources needed to sustain a high-tech civilization are being used up. If some other cataclysm destroys the technology we have, it may not be possible to climb back up to present levels if natural conditions are less favorable than they were for our ancestors, for example if the most easily exploitable coal, oil, and mineral resources have been depleted. (On the other hand, if plenty of information about our technological feats is preserved, that could make a rebirth of civilization easier.)

5.2 Misguided world government or another static social equilibrium stops technological progress

One could imagine a fundamentalist religious or ecological movement one day coming to dominate the world. If by that time there are means of making such a world government stable against insurrections (by advanced surveillance or mind-control technologies), this might permanently put a lid on humanity’s potential to develop to a posthuman level. Aldous Huxley’s Brave New World is a well-known scenario of this type.

A world government may not be the only form of stable social equilibrium that could permanently thwart progress. Many regions of the world today have great difficulty building institutions that can support high growth. And historically, there are many places where progress stood still or retreated for significant periods of time. Economic and technological progress may not be as inevitable as it appears to us.

5.3 “Dysgenic” pressures

It is possible that advanced civilized society is dependent on there being a sufficiently large fraction of intellectually talented individuals. Currently it seems that there is a negative correlation in some places between intellectual achievement and fertility. If such selection were to operate over a long period of time, we might evolve into a less brainy but more fertile species, homo philoprogenitus (“lover of many offspring”).

However, contrary to what such considerations might lead one to suspect, IQ scores have actually been increasing dramatically over the past century. This is known as the Flynn effect. It’s not yet settled whether this corresponds to real gains in important intellectual functions.

Moreover, genetic engineering is rapidly approaching the point where it will become possible to give parents the choice of endowing their offspring with genes that correlate with intellectual capacity, physical health, longevity, and other desirable traits.

In any case, the time-scale for human natural genetic evolution seems much too grand for such developments to have any significant effect before other developments will have made the issue moot.

5.4 Technological arrest

The sheer technological difficulties in making the transition to the posthuman world might turn out to be so great that we never get there.

5.5 Something unforeseen

As before, a catch-all.

Overall, the probability of a crunch seems much smaller than that of a bang. We should keep the possibility in mind but not let it play a dominant role in our thinking at this point. If technological and economical development were to slow down substantially for some reason, then we would have to take a closer look at the crunch scenarios.


6 Shrieks

Determining which scenarios are shrieks is made more difficult by the inclusion of the notion of desirability in the definition. Unless we know what is “desirable”, we cannot tell which scenarios are shrieks. However, there are some scenarios that would count as shrieks under most reasonable interpretations.

6.1 Take-over by a transcending upload

Suppose uploads come before human-level artificial intelligence. An upload is a mind that has been transferred from a biological brain to a computer that emulates the computational processes that took place in the original biological neural network. A successful uploading process would preserve the original mind’s memories, skills, values, and consciousness. Uploading a mind will make it much easier to enhance its intelligence, by running it faster, adding additional computational resources, or streamlining its architecture. One could imagine that enhancing an upload beyond a certain point will result in a positive feedback loop, where the enhanced upload is able to figure out ways of making itself even smarter; and the smarter successor version is in turn even better at designing an improved version of itself, and so on. If this runaway process is sudden, it could result in one upload reaching superhuman levels of intelligence while everybody else remains at a roughly human level. Such enormous intellectual superiority may well give it correspondingly great power. It could rapidly invent new technologies or perfect nanotechnological designs, for example. If the transcending upload is bent on preventing others from getting the opportunity to upload, it might do so.

The posthuman world may then be a reflection of one particular egoistical upload’s preferences (which in a worst case scenario would be worse than worthless). Such a world may well be a realization of only a tiny part of what would have been possible and desirable. This end is a shriek.

6.2 Flawed superintelligence

Again, there is the possibility that a badly programmed superintelligence takes over and implements the faulty goals it has erroneously been given.

6.3 Repressive totalitarian global regime

Similarly, one can imagine that an intolerant world government, based perhaps on mistaken religious or ethical convictions, is formed, is stable, and decides to realize only a very small part of all the good things a posthuman world could contain.

Such a world government could conceivably be formed by a small group of people if they were in control of the first superintelligence and could select its goals. If the superintelligence arises suddenly and becomes powerful enough to take over the world, the posthuman world may reflect only the idiosyncratic values of the owners or designers of this superintelligence. Depending on what those values are, this scenario would count as a shriek.

6.4 Something unforeseen.

The catch-all.

These shriek scenarios appear to have substantial probability and thus should be taken seriously in our strategic planning.

One could argue that one value that makes up a large portion of what we would consider desirable in a posthuman world is that it contains as many as possible of those persons who are currently alive. After all, many of us want very much not to die (at least not yet) and to have the chance of becoming posthumans. If we accept this, then any scenario in which the transition to the posthuman world is delayed for long enough that almost all current humans are dead before it happens (assuming they have not been successfully preserved via cryonics arrangements) would be a shriek. Failing a breakthrough in life-extension or widespread adoption of cryonics, even a smooth transition to a fully developed posthuman world eighty years from now would constitute a major existential risk, if we define “desirable” with special reference to the people who are currently alive. This “if”, however, is loaded with a profound axiological problem that we shall not try to resolve here.


7 Whimpers

If things go well, we may one day run up against fundamental physical limits. Even though the universe appears to be infinite, the portion of the universe that we could potentially colonize is (given our admittedly very limited current understanding of the situation) finite, and we will therefore eventually exhaust all available resources or the resources will spontaneously decay through the gradual decrease of negentropy and the associated decay of matter into radiation. But here we are talking astronomical time-scales. An ending of this sort may indeed be the best we can hope for, so it would be misleading to count it as an existential risk. It does not qualify as a whimper because humanity could on this scenario have realized a good part of its potential.

Two whimpers (apart from the usual catch-all hypothesis) appear to have significant probability:

7.1 Our potential or even our core values are eroded by evolutionary development

This scenario is conceptually more complicated than the other existential risks we have considered (together perhaps with the “We are living in a simulation that gets shut down” bang scenario).

A related scenario is described in [62], which argues that our “cosmic commons” could be burnt up in a colonization race. Selection would favor those replicators that spend all their resources on sending out further colonization probes.

Although the time it would take for a whimper of this kind to play itself out may be relatively long, it could still have important policy implications because near-term choices may determine whether we will go down a track that inevitably leads to this outcome. Once the evolutionary process is set in motion or a cosmic colonization race begun, it could prove difficult or impossible to halt it. It may well be that the only feasible way of avoiding a whimper is to prevent these chains of events from ever starting to unwind.

7.2 Killed by an extraterrestrial civilization

The probability of running into aliens any time soon appears to be very small.

If things go well, however, and we develop into an intergalactic civilization, we may one day in the distant future encounter aliens. If they were hostile and if (for some unknown reason) they had significantly better technology than we will have by then, they may begin the process of conquering us. Alternatively, if they trigger a phase transition of the vacuum through their high-energy physics experiments (see the Bangs section) we may one day face the consequences. Because the spatial extent of our civilization at that stage would likely be very large, the conquest or destruction would take relatively long to complete, making this scenario a whimper rather than a bang.

7.3 Something unforeseen

The catch-all hypothesis.

The first of these whimper scenarios should be a weighty concern when formulating long-term strategy. Dealing with the second whimper is something we can safely delegate to future generations (since there’s nothing we can do about it now anyway).

On The Great Filter

Planetary—Stellar / Species Extinction—Total Extinction

On The Great Filter, Existential Threats, And Griefers

Simplistic warfare: As Larry Niven pointed out, any space drive that obeys the law of conservation of energy is a weapon of efficiency proportional to its efficiency as a propulsion system. Today's boringly old-hat chemical rockets, even in the absence of nuclear warheads, are formidably destructive weapons: if you can boost a payload up to relativistic speed, well, the kinetic energy of a 1Kg projectile traveling at just under 90% of c (τ of 0.5) is on the order of 20 megatons. Slowing down doesn't help much: even at 1% of c that 1 kilogram bullet packs the energy of a kiloton-range nuke. War, or other resource conflicts, within a polity capable of rapid interplanetary or even slow interstellar flight, is a horrible prospect.
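
Those figures check out against the standard relativistic kinetic energy formula KE = (gamma - 1)·m·c²; the sketch below reproduces them (the 1 kg mass and the two speeds come from the text above, the TNT conversion factors are the usual ones).

    import math

    C         = 2.998e8     # speed of light, m/s
    MEGATON_J = 4.184e15
    KILOTON_J = 4.184e12

    def kinetic_energy_j(mass_kg, v_frac_c):
        gamma = 1.0 / math.sqrt(1.0 - v_frac_c ** 2)
        return (gamma - 1.0) * mass_kg * C ** 2

    # tau = 0.5 means gamma = 2, i.e. v = sqrt(3)/2 ~ 0.866c ("just under 90% of c")
    print(kinetic_energy_j(1.0, math.sqrt(3) / 2) / MEGATON_J)   # ~21 megatons
    # the same 1 kg bullet at 1% of c
    print(kinetic_energy_j(1.0, 0.01) / KILOTON_J)               # ~1.1 kilotons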

Irreducible complexity: I take issue with one of Anders' assumptions, which is that a multi-planet civilization is largely immune to the local risks. It will not just be distributed, but it will almost by necessity have fairly self-sufficient habitats that could act as seeds for a new civilization if they survive. I've rabbited on about this in previous years: briefly, I doubt that we could make a self-sufficient habitat that was capable of maintaining its infrastructure and perpetuating and refreshing its human culture with a population any smaller than high-single-digit millions; lest we forget, our current high-tech infrastructure is the climax product of on the order of 1-2 billion developed world citizens, and even if we reduce that by an order of magnitude (because who really needs telephone sanitizer salesmen, per Douglas Adams?) we're still going to need a huge population to raise, train, look after, feed, educate, and house the various specialists. Worse: we don't have any real idea how many commensal microbial species we depend on living in our own guts to help digest our food and prime our immune systems, never mind how many organisms a self-sustaining human-supporting biosphere needs (not just sheep to eat, but grass for the sheep to graze on, fungi to break down the sheep droppings, gut bacteria in the sheep to break down the lignin and cellulose, and so on).

I don't rule out the possibility of building robust self-sufficient off-world habitats. The problem I see is that it's vastly more expensive than building an off-world outpost and shipping rations there, as we do with Antarctica — and our economic cost/benefit framework wouldn't show any obvious return on investment for self-sufficiency.

So our future-GF need not be a solar-system-wide disaster: it might simply be one that takes out our home world before the rest of the solar system is able to survive without it. For example, if the resource extraction and energy demands of establishing self-sufficient off-world habitats exceed some critical threshold that topples Earth's biosphere into a runaway Greenhouse effect or pancakes some low-level but essential chunk of the biosphere (a The Death of Grass scenario) that might explain the silence.

Griefers: suppose some first-mover in the interstellar travel stakes decides to take out the opposition before they become a threat. What is the cheapest, most cost-effective way to do this?

Both the IO9 think-piece and Anders' response get somewhat speculative, so I'm going to be speculative as well. I'm going to take as axiomatic the impossibility of FTL travel and the difficulty of transplanting sapient species to other worlds (the latter because terraforming is a lot harder than many SF fans seem to believe, and us squishy meatsacks simply aren't constructed with interplanetary travel in mind). I'm also going to tap-dance around the question of a singularity, or hostile AI. But suppose we can make self-replicating robots that can build a variety of sub-assemblies from a canned library of schematics, building them out of asteroidal debris? It's a tall order with a lot of path dependencies along the way, but suppose we can do that, and among the assemblies they can build are photovoltaic cells, lasers, photodetectors, mirrors, structural trusses, and their own brains.

What we have is a Von Neumann probe — a self-replicating spacecraft that can migrate slowly between star systems, repair bits of itself that break, and where resources permit, clone itself. Call this the mobile stage of the life-cycle. Now, when it arrives in a suitable star system, have it go into a different life-cycle stage: the sessile stage. Here it starts to spawn copies of itself, and they go to work building a Matrioshka Brain. However, contra the usual purpose of a Matrioshka Brain (which is to turn an entire star system's mass into computronium plus energy supply, the better to think with) the purpose of this Matrioshka Brain is rather less brainy: its free-flying nodes act as a very long baseline interferometer, mapping nearby star systems for planets, and scanning each exoplanet for signs of life.

Then, once it detects a promising candidate — within a couple of hundred light years, oxygen atmosphere, signs of complex molecules, begins shouting at radio wavelengths then falls suspiciously quiet — it says "hello" with a Nicoll-Dyson Beam.

(It's not expecting a reply: to echo Auric Goldfinger: "no Mr Bond, I expect you to die.")

A Dyson sphere or Matrioshka Brain collects most or all of the radiated energy of a star using photovoltaic collectors on the free-flying elements of the Dyson swarm. Assuming they're equipped with lasers for direct line-of-sight communication with one another isn't much of a reach. Building bigger lasers, able to re-radiate all the usable power they're taking in, isn't much more of one. A Nicoll-Dyson beam is what you get when the entire emitted energy of a star is used to pump a myriad of high powered lasers, all pointing in the same direction. You could use it to boost a light sail with a large payload up to a very significant fraction of light-speed in a short time ... and you could use it to vapourize an Earth-mass planet in under an hour, at a range of hundreds of light years.
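For a rough sense of scale (approximate figures of my own, not Stross's): one hour of a Sun-like star's entire output is about 1.4×10^30 joules, hundreds of times the energy needed to boil Earth's oceans, while completely unbinding an Earth-mass planet against its own gravity (roughly 2×10^32 joules) would keep the beam busy for closer to a week. A minimal sketch:

```python
# Rough scale of a Nicoll-Dyson beam powered by a Sun-like star.
# All figures below are order-of-magnitude assumptions.
L_SUN = 3.8e26    # watts, solar luminosity
HOUR = 3600.0

energy_per_hour = L_SUN * HOUR
print(f"{energy_per_hour:.1e} J delivered per hour")   # ~1.4e30 J

boil_oceans = 1.4e21 * 2.6e6   # ocean mass (kg) x heat-plus-vaporization (J/kg)
unbind_earth = 2.2e32          # Earth's gravitational binding energy, J

print(energy_per_hour / boil_oceans)                 # hundreds of ocean-boilings per hour
print(unbind_earth / L_SUN / 86400, "days to fully unbind an Earth-mass planet")
```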

Here's the point: all it takes is one civilization of alien ass-hat griefers who send out just one Von Neumann Probe programmed to replicate, build N-D lasers, and zap any planet showing signs of technological civilization, and the result is a galaxy sterile of interplanetary civilizations until the end of the stelliferous era (at which point, stars able to power an N-D laser will presumably become rare).

We have plenty of griefers who like destroying things, even things they've never seen and can't see the point of. I think the N-D laser/Von Neumann Probe option is a worryingly plausible solution to the identity of a near-future Great Filter: it only has to happen once, and it f**ks everybody.

General Non-Diminishing Prey

Stellar / Species Extinction

Beware General Visible Prey

(ed note: Robin Hanson is the brilliant man who originated the entire concept of The Great Filter)

These are indeed scenarios of concern. But I find it hard to see how, by themselves, they could add up to a big future filter.

On griefers (aka “berserkers”), a griefer equilibrium seems to me unstable to their trying sometimes to switch to rapid growth within a sufficiently large volume that they seem to control. Sometimes that will fail, but once it succeeds enough then competing griefers have little chance to stop them. Yes there’s a chance the first civilization to make them didn’t think to encode that strategy, but that seems a pretty small filter factor.

On simple war, I find it hard to see how war has a substantial chance of killing everyone unless the minimum viable civilization size is large. And I agree that this min size gets bigger for humans in space, who are more fragile there. But it should get smaller for smart robots in space, or on Earth, especially if production becomes more local via nano-factories. The chance that the last big bomb used in a war happens to kill off the last viable group of survivors seems to me relatively small.

Of course none of these chances are low enough to justify complacency. We should explore such scenarios, and work to prevent them. But we should work even harder to find more worrisome scenarios.

So let me explain my nightmare scenario: general non-diminishing prey. Consider the classic post-apocalyptic scenario, such as described in The Road. Desperate starving people ignore the need to save and build for the future, and grab any food they can find, including each other. First all the non-human food is gone, then all the people.

Such situations have been modeled formally via “predator-prey dynamics”:

These are differential equations giving the rates at which counts of predators and prey grow or decline as a function of each other. The standard formulation has a key term whereby prey count falls in proportion to the product of the predator count and the prey count. This formulation embodies an important feature of diminishing returns: the fewer prey are left, the harder it is for predators to find and eat them.
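A minimal sketch of that formulation (the constants and starting counts are illustrative choices of mine, not Hanson's): in the standard model prey losses scale with prey times predators, while the non-diminishing variant gives each predator a fixed harvest rate no matter how few prey remain.

```python
# Toy predator-prey model illustrating the role of diminishing returns.
def simulate(diminishing_returns=True, steps=50_000, dt=0.001):
    x, y = 50.0, 60.0                  # prey, predators (a predator excess)
    a, b, c, g = 1.0, 0.02, 0.5, 0.5   # prey birth, predation, conversion, predator death
    for _ in range(steps):
        if diminishing_returns:
            caught = b * x * y          # scarcer prey are harder to find
        else:
            caught = b * 100.0 * y      # fixed per-predator harvest rate
        x = max(x + (a * x - caught) * dt, 0.0)
        y = max(y + (c * caught - g * y) * dt, 0.0)
        if x == 0.0:
            return "prey driven extinct"
    return f"prey persist (about {x:.0f} left)"

print(simulate(True))    # populations cycle; prey survive
print(simulate(False))   # prey crash to zero, then the predators starve
```

With the prey-times-predators term, an excess of predators starves itself back down before the prey vanish; without it, the prey are gone first and the predators follow, which is the emptied-granary behavior described in the next paragraph.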

Without enough such diminishing returns, any excess of predators quickly leads to the extinction of prey, followed quickly by the extinction of predators. For example, when starving humans are given easy access to granaries, such granaries are emptied quickly. Not made low; emptied. Which is why granaries in famines are usually either well-protected, or empty.

In nature, there are usually many kinds of predators, and even more kinds of prey. So the real predator-prey dynamic is high-dimensional. The pairwise relations between most predators and preys do in fact usually involve strongly diminishing returns, both because predators must usually search for prey, and because some prey hiding places are much better than others.

If the relation between any one pair of predator and prey types happens to have no diminishing returns, then that particular type of prey will go extinct whenever there is a big enough excess of that particular type of predator. Since this selects against such prey, the prey we see in nature almost all have diminishing returns for all their practical predators.

Humans are general predators, able to eat a great many kinds of prey. And within human societies humans are also relatively general kinds of prey, since we mostly all use the same kinds of resources. So when humans prey on humans, the human prey can more easily go extinct.

For foragers, a key limit on human predation was simple distance. Foragers lived far apart, and were unpredictably located. Also, foragers had little physical property to grab, wives were not treated as property, and land was too plentiful to be worth grabbing. These limits mattered less for farmers, who did predate often via war.

The usual source of diminishing returns in farmer war predation has been the wide range of protection in places to hide; humans have often run to the mountains, jungle, or sea to escape human predators. Even so, humans and proto-humans have quite often driven one another to local relative extinction.

While the extinction of some kinds of humans relative to others has been common, the extinction of all humans in an area has been much less common. This is in part because, when there has been a local excess of humans, most have focused on non-human prey. Such prey are diverse, and most have strongly diminishing returns to human predation.

Even if humans expand into the solar system, and even if they create robot descendants, we expect our descendants to remain relatively general predators, at least for a long while. We also expect the physical resources that they collect to constitute relatively general prey, useful to a wide range of our descendants. Furthermore, we expect nature that isn’t domesticated or descended from humans to hold a decreasing quantity of useful resources.

Thus the future predator-prey dynamic should become lower dimensional than it has been in the past. To a perhaps useful approximation, there’d be only a few kinds of predators and prey. Which raises the key question: how strong are the diminishing returns to predation in that new world? That is, when some of our descendants hunt down others to grab resources, how fast does that task get harder as fewer prey remain?

One source of diminishing returns in predation is a diversity of approaches and interfaces. The more different are the methods that prey use to create and store value, the smaller the fraction of that value a predator can obtain via a simple hostile takeover. This increases the ratio of how hard prey and predators fight. As many have noted, in nature prey fight for their lives, while predators fight only for a meal. Even so, nature still has plenty of predation. Even if predators gain only part of the value contained in prey, they still predate if that costs them even less than this value.

As I said above, the main source of diminishing returns in predation among foragers was travel cost, and among farmers it was the diversity of physical places to run and hide. Such effects might still protect our descendants from predator-prey-dynamic extinction, even if they have only one kind of predator and prey. Alas, we have good reasons to fear that these factors may less protect our descendants.

The basic problem here is our improving techs for travel, communication, and surveillance. We are steadily able to move bits and people more cheaply, and to more cheaply and accurately watch spaces for activity. Yes moving out into the solar system would put more distance between things, and make them harder to see. But that one-time effect will be quickly overwhelmed by improving tech.

A colonized solar system is plausibly a place where predators can see most any civilized activities of any substantial magnitude, and get to them easily if not quickly. So if we ever reach a point where predators fight to grab civilized resources with little concern to save some for the future, they might be able to find and grab pretty much everything in the solar system. Much as easy-access granaries are quickly emptied in a famine.

Whether extinction results from such a scenario depends how small are minimum viable civilization seeds, how obscure and well protected are the nooks and crannies in which they might hide, and how many of them exist and try to hide. Yes, hidden viable seeds drifting at near light-speed to other stars could prevent extinction, but such a prey-collapse scenario could play out well before such seeds are feasible.

So, bottom line, the future great filter scenario that most concerns me is one where our solar-system-bound descendants have killed most of nature, can’t yet colonize other stars, are general predators and prey of each other, and have fallen into a short-term-predatory-focus equilibrium where predators can easily see and travel to most all prey. Yes there are about a hundred billion comets way out there circling the sun, but even that seems a small enough number for predators to carefully map and track all of them.

Worry about prey-extinction scenarios like this is a reason I’ve focused on hidden refuges as protection from existential risk. Nick Beckstead has argued against refuges saying:

The most likely ways in which improved refuges could help humanity recover from a global catastrophe are scenarios in which well-stocked refuges with appropriately trained people help civilization to recover after a catastrophe that leaves a substantial portion of humanity alive but disrupts industrial and agricultural infrastructure, and scenarios in which only people in constantly-staffed refuges survive a pandemic purposely engineered to cause human extinction. I would guess that, in the former case, resources and people stocked in refuges would play a relatively small role in helping humanity to recover because they would represent a small share of relevant people and resources. The latter case strikes me as relatively far-fetched and I would guess it would be very challenging to do much better than the largely uncontacted peoples in terms of ensuring the survival of the species. (more)

Nick does at one point seem to point to the scenario that concerns me:

If a refuge is sufficiently isolated and/or secret, it would be easier to ensure that everyone in the refuge had an adequate food supply, even if that meant an inegalitarian food distribution.

But he doesn’t appear to think this relevant for his conclusions. In contrast, I fear that a predatory-collapse scenario is the most likely future great filter, where unequal survival is key to preventing extinction.

Added 10a: Of course the concern isn’t just that some parties would have short term orientations, but that most would pursue short-term predation so vigorously that they force most everyone to put in similar effort levels, even if they have a long-term view. When enemies mass on the border, one might have to turn farmers into soldiers to resist them, even if it is harvest time.

From Beware General Visible Prey by Robin Hanson (2015)

Dinosaur Killer Asteroid

Dinosaurs were pretty darn successful. They managed to be the dominant terrestrial vertebrates for 135 million years, while us hairless apes have only been around for 0.04 million years. So why didn't dinosaurs evolve intelligence and create a galactic empire at the end of the Mesozoic Era? Well, they procrastinated just a wee bit too long on creating their space program.

Approximately 66 million years ago the Cretaceous–Paleogene extinction event happened, aka the "Dinosaur Killer Asteroid". Freaking asteroid was 12 kilometers in diameter and, blazing along at about 20 kilometers per second, had about 5.43×10^23 joules of kinetic energy. The blast was approximately the equivalent of a 120 teraton nuclear bomb. You couldn't do more damage with three thousand tons of pure antimatter.
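Those numbers roughly check out. A quick sketch (the 3000 kg/m³ stony density is my assumption; the source does not state one):

```python
# Rough check of the impact-energy figures quoted above.
import math

radius = 6_000.0     # m (12 km diameter)
density = 3_000.0    # kg/m^3, assumed stony composition
velocity = 20_000.0  # m/s

mass = (4.0 / 3.0) * math.pi * radius ** 3 * density
energy = 0.5 * mass * velocity ** 2

print(f"{energy:.2e} J")                   # ~5.4e23 J
print(energy / 4.184e21, "teratons TNT")   # ~130 teratons

# 3000 tons of antimatter annihilating an equal mass of ordinary matter:
print(6.0e6 * (2.998e8) ** 2, "J")         # also ~5.4e23 J
```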

About 75% of Terra's surface is ocean, so it is unsurprising that the asteroid strike was an ocean impact. But this meant it was Megatsunami time. Scientists calculate that the waves were about five freaking kilometers tall. Small islands ("small" as in "Madagascar-sized") would have been totally submerged.

There was a global firestorm, partially ignited by the thermal pulse of the impact, and partially from incendiary fragments from the blast launched into sub-orbital trajectories to all points on the world. The higher proportion of oxygen in the atmosphere back then just made things worse. Scientists examining the prehistoric layer of soot laid down concluded that almost the entire terrestrial biosphere had gone up in flames.

There is also some evidence that the impact was not straight down, with the primary destruction focused at a single impact point. Evidence suggests it was a glancing impact, meaning it was an impact line, creating a flaming path of destruction across the face of North America.

The asteroid also picked a particularly devastating spot to strike: a continental shelf area composed of limestone. The incinerated limestone released huge amounts of carbon dioxide. Some of it led to rapid ocean acidification, spelling doom for the ammonites. The rest went into the atmosphere. Note that the Chesapeake Bay impactor of 35 million years ago did not hit a limestone shelf, and apparently did not cause an extinction event.

Between the firestorm and the limestone continental bake-off the amount of carbon dioxide in the atmosphere took a drastic upturn, which started a savage greenhouse effect. Global temperatures skyrocketed.

There was also about twelve years of acid rain, but that was just a flea bite compared to the rest of the catastrophe.

The resulting crater was about 100 kilometers wide and 30 kilometers deep.

About 50% to 75% of all species of life on Terra swiftly became extinct. Most of the animals that managed to survive were the ones that ate worms, flies, and carrion, because that was pretty much the only thing around to eat.

This is because the debris cloud choked off the sunlight for about ten years, which wiped out most of the plants, which caused the herbivores to starve, which caused the carnivores to starve. The only abundant food source was the mountains of rotting animals (fat cooked ones and skinny raw ones) and the maggots who could not believe their own luck. In addition there was mold and fungus everywhere.

Cockroaches and the ancestors of rats survived, of course. Everybody knows how hard they are to kill. Don't sneer at those rats, they were your ancestors too.

Exploding Stars

Stars going boom are pretty apocalyptic. They come in a variety of sizes.

These events are sometimes measured in a unit called a Foe, from the phrase "ten to the power of fifty-one ergs". One Foe is equal to 10^44 joules. An average supernova emits one Foe.

Nova

Back in the 1950's all the science fiction novels which needed an earth-shattering kaboom would use a Nova. Star goes boom, incinerates the entire solar system, pretty apocalyptic. Astronomers didn't know anything about novae except they were huge, so science fiction authors had free rein.

Nowadays we know that novae happen only in binary star systems. A normal star has the misfortune to be orbited by a white dwarf. Over the millennia the dwarf sucks hydrogen out of the normal star like a cosmic vampire. The dwarf starts burning the hydrogen using the carbon-nitrogen-oxygen fusion reaction.

Sometimes the white dwarf suffers a runaway reaction, and you get a nova.

The fact that a white dwarf is required made instantly obsolete all those science fiction stories about the sun going nova.

With each explosion only about one ten-thousandth of the white dwarf's mass is ejected. The point is that a vampire white dwarf can go nova multiple times. For instance, the star RS Ophiuchi has gone nova six times. The mass is ejected at velocities up to several thousand kilometers per second.

Astronomers estimate that about 30 to 60 novae occur in our galaxy per year.

A nova can reach an absolute magnitude of -8.8, or roughly 280,000 times the luminosity of the sun. Novae emit about 10^-7 Foe, or about 10^37 joules.
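That luminosity figure follows from the usual magnitude relation, L1/L2 = 10^((M2 - M1)/2.5). A minimal sketch, assuming the Sun's absolute visual magnitude of +4.83:

```python
# Luminosity ratio from an absolute-magnitude difference.
M_SUN = 4.83     # Sun's absolute visual magnitude
M_NOVA = -8.8    # bright nova, per the text

ratio = 10 ** ((M_SUN - M_NOVA) / 2.5)
print(f"{ratio:.2e} times solar luminosity")   # ~2.8e5
```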

Novae do eject enriched elements into the interstellar medium, as red giants, supergiant stars, and supernovae do. But only a paltry amount: supernovae eject 50 times as much, and red giant/supergiant stars eject 200 times as much.

Dwarf Nova

Dwarf novae are also called U Geminorum-type variable stars. Their increase in luminosity is not due to a fusion explosion. Rather it is a vampire white dwarf star whose accretion disk becomes unstable. Part of the disk suddenly collapses onto the white dwarf and releases large amounts of gravitational potential energy.

This only releases a tiny fraction of the energy of a full-fledged nova, and would be difficult to detect from another solar system without a telescope. It is hardly apocalyptic.

It is only mentioned here in case you run across the term in your researches and get confused.

Kilonova

Kilonovae are caused when two neutron stars, or a neutron star and a black hole, merge. Kilonovae not only emit intense bursts of light, but also lots of gravitational waves.

Kilonovae emit about 10^-5 Foe (10^39 joules) to 10^-3 Foe (10^41 joules).

Luminous Red Nova

Luminous red novae are caused when two stars collide (probably a binary star whose components spiral into each other).

The only reference I could find said that luminous red novae had luminosities between that of a nova and that of a supernova. Which means they emit about 0.5 Foe (5×10^43 joules) if they mean 1 Foe = a supernova, or 50 Foe (5×10^45 joules) if they mean 100 Foe = a supernova. Your guess is as good as mine.

Supernova

Novae are impressive but Supernovae are the real deal. A nova will poot off a pathetic one ten-thousandth of its mass in the explosion; with a supernova it is pretty darn close to 100%. The mass will be traveling in all directions at about 30,000 kilometers per second, 10% the speed of light.

Blasted cataclysm will briefly outshine the entire galaxy. In a few months a supernova will emit as much energy as Sol will over its entire lifetime. Type I or type II supernovae emit about 1 Foe (about 10^44 joules).
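The "Sol's entire lifetime" comparison is easy to verify. A minimal sketch, assuming a solar luminosity of 3.8×10^26 watts and a main-sequence lifetime of roughly ten billion years:

```python
# Does one Foe really equal Sol's lifetime energy output?
L_SUN = 3.8e26                # watts
LIFETIME_S = 10e9 * 3.156e7   # ten billion years in seconds
FOE = 1e44                    # joules

print(L_SUN * LIFETIME_S / FOE)   # ~1.2 Foe, i.e. about one supernova
```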

While novae happen multiple times to a star, a given star can only go supernova once. There isn't much left except for a little neutron star or black hole, it is not going to explode again.

The most energetic supernovae are called hypernovae.

Supernovae are potentially strong galactic sources of gravitational waves. The expanding gas causes shock waves in the interstellar medium, creating a supernova remnant. Supernova remnants are considered the major source of galactic cosmic rays.


There are two ways a star can go supernova: thermal runaway and core collapse. There are four classifications of supernovae, the first one is caused by thermal runaway and the other three by core collapse.

Thermal runaway is the same mechanism that causes novae: the white dwarf vampires hydrogen off its sibling and occasionally suffers from indigestion. The difference is that with a supernova the runaway reaction is not so much like popping a birthday balloon as like detonating a thermonuclear warhead. Thermal runaway supernovae emit about 1 to 1.5 Foe (1×10^44 to 1.5×10^44 joules).

Core Collapse. Gravity makes everything fall down, with "down" being defined as the center of gravity of all the objects. So a nebula contracts as gravity tries to squeeze it into a tiny ball.

Soon the nebula contracts into what they call a protostar. But at some point the temperature at the center of the protostar becomes high enough to ignite a fusion reaction. A star is born.

The fusion reaction emits lots of electromagnetic radiation, i.e., light. The radiation pressure of the light brings the star's gravitational collapse to a halt. The star's body can no longer fall down, it is "propped up" by radiation pressure.

Core collapse is when one of several mechanisms kicks out the prop. The star then abruptly collapses.

This means that instead of a small steady stream of the star's fuel being slowly burnt in the core, the collapsing star releases a stupendous amount of energy in a fraction of a second. The supernova explosion obliterates the star, leaving only a small neutron star or black hole. Millions of years from now, alien astronomers in an adjacent galaxy will notice that our galaxy has suddenly doubled in brightness.

The mechanisms that can cause core collapse are electron capture; exceeding the Chandrasekhar limit; pair-instability; or photodisintegration. Which mechanism does the dirty deed more or less depends upon the star's mass. Details can be found in the Wikipedia article.

Pair-instability supernovae can emit from 5 to 100 Foe of energy (5×10^44 to 1×10^46 joules). Electron capture, Chandrasekhar limit, and photodisintegration supernovae regularly emit about 100 Foe of energy (1×10^46 joules).


Supernovae are very important for the formation of planets. When the universe was formed it was composed of hydrogen with a sprinkling of helium. The only reason that elements like carbon, oxygen, iron, and uranium exist at all is because these elements were forged inside stars and in supernova explosions and spread into the galaxy at velocities of 0.1c. These elements later condensed into molecular clouds, which formed stars and solar systems.

Von Neumann Machine

Specifically a Von Neumann universal constructor, aka Self-replicating machine. These are machines that can create duplicates of themselves given access to raw materials, much like biological organisms. Whatever sabotage they are programmed to do against the defenders is magnified by the fact that they breed like cockroaches.

In the TV series Stargate SG-1, the Replicators are self-replicating machines that are ravaging all the planets in the Asgard galaxy. In Greg Bear's novel The Forge of God and the sequel Anvil of Stars, an alien species systematically destroys planets detected as possessing intelligent life by attacking the planets with self-replicating machines.

Nanotechnology

Nanotechnology (and its extension, nanorobotics) is the concept of molecule-sized machines. The idea is attributed to Richard Feynman and it was popularized by K. Eric Drexler. It didn't take long before military researchers and science fiction writers started to speculate about weaponizing the stuff. A good science fiction novel on the subject is Wil McCarthy's Bloom.

There are many ways nanotechnology could do awful things to a military target. One of the first hypothetical applications of nanotechnology was in the manufacturing field. Molecular robots would break down chunks of various raw materials and assemble something (like, say, an aircraft), atom by atom. Naturally this could be dangerous if the nanobots landed on something besides raw materials (like, say, an enemy aircraft). However, since they are doing this atom by atom, it would take thousands of years for some nanobots to construct something (and the same thousands of years to deconstruct the source of raw materials).

But using nanobots for manufacturing suddenly becomes scary indeed if you make the little monsters into self-replicating machines (AKA a "Von Neumann universal constructor") in an attempt to reduce the thousands of years to something more reasonable. Suddenly you are facing the horror of a wildfire plague spreading with the power of exponential growth. This could happen by accident, with a mutation in the nanobots causing them to devour everything in sight. Drexler called this the dreaded "gray goo" scenario. Or it could happen on purpose, weaponizing the nanobots.
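The reason exponential growth is the terrifying part is that doublings add up absurdly fast. A toy calculation (the femtogram assembler mass and the 10^15 kg of raw material are round-number assumptions for illustration, not figures from Drexler):

```python
# How many doublings does a gray-goo outbreak need?
import math

bot_mass = 1e-15      # kg, a femtogram-scale assembler (assumed)
target_mass = 1e15    # kg, roughly Earth's total biomass (assumed)

doublings = math.log2(target_mass / bot_mass)
print(f"{doublings:.0f} doublings")            # ~100

for hours_per_doubling in (1, 24, 24 * 7):
    days = doublings * hours_per_doubling / 24
    print(f"{hours_per_doubling} h per doubling -> {days:.0f} days")
```

A hundred doublings is the whole game: the first ninety or so are invisible and the last handful eat everything, which is why early detection (as in the Freitas analysis mentioned in the next paragraph) matters so much.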

Drexler is now of the opinion that nanobots for manufacturing can be done without risking gray goo. And Robert A. Freitas Jr. did some analysis that suggest that even if some nanotech started creating gray goo, it would be detectable early enough for countermeasures to deal with the problem.

What about nanobot gray goo weapons? Anthony Jackson thinks that free nanotech that operates on a time frame that's tactically relevant is in the realm of cinema, not science. And in any event, nanobots will likely be shattered by impacting the target at relative velocities higher than 3 km/s, which makes delivery very difficult. Rick Robinson is of the opinion that once you take into account the slow rate of gray goo production and the fragility of the nanobots, it would be more cost effective to just smash the target with an inert projectile. Jason Patten agrees that nanobots will be slow, due to the fact that they will not be very heat tolerant (a robot made out of only a few molecules will be shaken into bits by mild amounts of heat), and dissipating the heat energy of tearing down and rebuilding on the atomic level will be quite difficult if the heat is generated too fast.

Other weaponized applications of nanotechnology will probably be antipersonnel, not antispacecraft. They will probably take the form of incredibly deadly chemical weapons, or artificial diseases.

Some terminology: according to Chris Phoenix, "paste" is non-replicating nano-assemblers while "goo" is replicating nano-assemblers. Paste is safe, but is slow acting and limited to the number of nano-assemblers present. Goo is dangerous, but is fast acting and potentially unlimited in numbers.

"Gray or Grey goo" is accidentally created destructive nano-assemblers. "Red goo" is deliberately created destructive nano-assemblers. "Khaki goo" is military weaponized red goo. "Blue goo" is composed of "police" nanobots, it combats destructive type goos. "Green goo" is a type of red goo which controls human population growth, generally by sterilizing people. "LOR goo" (Lake Ocean River) nano-assemblers designed to remove pollution and harvest valuable elements from water, it could mutate into golden goo. "Golden goo" are out-of-control nanobots which were designed to extract gold from seawater but won't stop (the "Sorcerer's Apprentice" scenario). "Pink goo" is a humorous reference to human beings.

ACE Paste (Atmospheric Carbon Extractor) is designed to absorb excess greenhouse gases and convert them into diamonds or something useful. Garden Paste is a "utility fog" of various nanobots which helps your garden grow (manages soil density and composition for each plant type, controls insects, creates shade, stores sunlight for overcast days, etc.). LOR Paste is the paste version of LOR goo. Medic Paste is a paste of nanobots that heals wounds, assists in diagnosis, and does medical telemetry to monitor the patient's health.

MALIGNANCY

MOST SECRET (ULTRAVIOLET) / EYES ONLY UNGUENT SANCTION

NEED-TO-KNOW ABSOLUTE
HARD COPY ONLY/NO TRANSMISSION
ORIGINATOR-CONTROLLED DISSEMINATION
TRACKED-COPY DOCUMENT
NOCONTRACT
NOFORN
SPECIAL SECURITY PROCEDURE BASILISK FIDELIS

Proceed (+/-)? +

EXECUTION:

STRATEGIC ACTION MESSAGE ONLY
IMPERIAL SECURITY EXECUTIVE ONLY
FIFTH DIRECTORATE VETO
X-THREAT ONLY

SUMMARY:

[SSP image elided from file]

The Existential Threats Primary Working Group has maintained in secure storage a number of sub-black level threats, and has access to two black-level threats, of type BURNING ZEPHYR – i.e., unlimited autonomous nanoscale replicators (“gray goo”).

Case UNGUENT SANCTION represents an extremal response case to physically manifested excessionary-level existential threats. It is hoped that, in such cases, the deployment of an existing sub-black level or black-level existential counterthreat may ideally destroy or subsume the excessionary-level threat, replacing it with one already considered manageable, or in lesser cases, at least delay the excessionary-level threat while more sophisticated countermeasures can be developed.

Note that as an extremal response case, deployment of CASE UNGUENT SANCTION requires consensus approval of the Imperial Security Executive, subject to override veto by vote of the Fifth Directorate overwatch.

WARNING:

Communicating ANY PART of this NTK-A document to ANY SOPHONT other than those with preexisting originator-issued clearance, INCLUDING ITS EXISTENCE, is considered an alpha-level security breach and will be met with the most severe sanctions available, up to and including permanent erasure.

Proceed (+/-)?

From MALIGNANCY by Alistair Young (2015)

Hostile AIs

Example: in Vernor Vinge's classic A Fire Upon The Deep, a team of researchers at the rim of the galaxy experiment with a five billion year old datastore, using computer recipes they do not fully understand in an attempt to activate the software contained. Unfortunately they succeed. The software is The Blight, a malevolent super-intelligent artificial entity. Before it is defeated, it takes over several entire races (rewriting their minds to turn them all into agents of the Blight) and murders several post-singularity transcendent entities.

In Kaleidoscope Century by John Barnes, a rogue artificial intelligence called One True can contact a person on a cellphone, then use rapidly changing audio signals to reprogram the person's brain, turning them into a brainwashed zombie. It then tries to take over the entire world.

TROPE-A-DAY: OMNICIDAL MANIAC

Omnicidal Maniac: Fortunately, very, very rare, and generally outnumbered by everyone else. The best-known canonical example is the seed AI of the Charnel Cluster, discovered by a scouting lugger, which upon activation set about destroying all life within the said cluster – leading to a half-dozen systems of fragmented habitats and planets covered in decaying – but sterile – organic slush that used to be the systems’ sophonts, animals, plants, bacteria, viruses, and everything else that might even begin to qualify as living. Fortunately, at this point, the perversion broke down before it could carry on with the rest of the galaxy.

In current time, the Charnel Cluster worlds have been bypassed by the stargate plexus (they’re to be found roughly in the mid-Expansion Regions, in zone terms) and are flagged on charts and by buoys as quarantined; while the Charnel perversion appears to be dead, no-one particularly wants to take a chance on that.

SCAVENGERS, YE BE WARNED

WARNING! WARNING! WARNING!

With the voice and under the authority of the Galactic Volumetric Registry of the Conclave of Galactic Polities, this buoy issues the following warning:

The designated volume, including the englobed moon and its satellites, is a SECURED AREA by order of the Presidium of the Conclave. This volume MAY NOT BE APPROACHED for any reason.

This moon contains the remnant of a Class Three Perversion, including autonomous defensive technologies and other operational mechanisms, nanoviruses, infectious memes, certainty-level persuasive communicators, puppet ecologies, archives which must be presumed to contain resurrection seeds, and unknown other existential risks.  These dangers have not been disarmed, suppressed, or fully contained.

ACCEPT NO COMMUNICATION REQUESTS originating from within the englobed volume.  Memetic and information warfare systems are not known to be entirely inactive.

If any communication requests are received from the englobed volume, or other activity is noted within it, you must depart IMMEDIATELY, and report this activity to the Conclave Commission on Latent Threats WITHOUT DELAY.  A renascence of a perversion of this class poses a most serious and imminent threat to all local space and extranet-local systems.

Further, the englobement systems surrounding this volume are equipped for containment of remaining unidentified threats and the prevention of access. Any vessel approaching within 250,000 miles of the englobement grid, or attempting to communicate through the englobement grid, or attempting to actively scan through the englobement grid, will be fired upon without further warning.  Your presence has already been reported to higher authority, and escaping after transgressing the englobement grid will therefore not preserve you.

Naval vessels should note that this area has been deemed a black-level existential threat zone by the Presidium of the Conclave. This englobement grid does not respond to standard Accord command or diagnostic sequences. Interaction should not be attempted without explicit authorization and clearance from the Commission on Latent Threats.

DO NOT APPROACH, COMMUNICATE WITH, OR EXAMINE THIS VOLUME FURTHER!

No further warnings will be given.

WHERE’S WHERE IN THE GALAXY (2)

...Blights are regions which are interdicted due to the presence of either active hostile or runaway seed AIs, or their remnants – “operational mechanisms, nanoviruses, infectious memes, certainty-level persuasive communicators, puppet ecologies, archives which must be presumed to contain resurrection seeds”, and so forth, which pose potential existential threats.  They include not just large and active perversions such as the Leviathan Consciousness, but also areas formerly occupied by such and not yet known to be cleansed, such as the Charnel Cluster, and areas as small as a single moon or asteroid known to be the site of a failed experiment...

LEVIATHAN

“We meant it for the best.”

If this Board had a quantum of miracle for every time that phrase has been used in the aftermath of some utter disaster, we might even have enough to produce that alchemy which transmutes benign intentions into benign results. But probably not.

From the accounts we have garnered from the few knowledgeable survivors, the Siofra Perversion (named as per standard from its most identifiable origin, the former worlds of the Siofra Combine in the Ancal Drifts constellation) began as a seemingly harmless distributed process optimization daemon programmed for recursive self-improvement.

While this seemed harmless to its designers, and indeed was so in the early stages, due to a lack of certain algorithmic safeguards (see technical appendix) a number of Sigereth drives appeared once the point of gamma-criticality was passed, reinforcing the daemon’s existing motivation to acquire further resources, self-optimize for efficiency, and to spread its optimization into all compatible network systems. It was at this stage that the proto-perversion began to expand its services to the networks of other polities in the Drifts. In some cases this was accepted (Siofra even charged a number of clients for the service of the daemon) or even passed unnoticed (inasmuch as many system administrators were unprepared to consider an unexpected increase in performance as a sign of weavelife infection); in some few, efforts were made to prevent the incursion of the daemon using typical system-protection software.

It may have been at this point that the daemon learned of the artificial nature of certain barriers to its expansion and the possibility of its bypassing them, an act which would fulfil its Sigereth drives. Since the daemon contained no ethicality drives, the violation of network security protocols involved would impute no disutility to such actions.

From this point, the slide into perversion became inevitable.

Among the artificial barriers known to the daemon were the security protections common to the neural implants being used by a large proportion of the population of the Combine and neighboring polities which prevented implant software from implementing reorganizations of the biosapient brain. Bypassing these, the daemon began to optimize the agents, talents, and personality routines of this population for processing efficiency, beginning with the lowest-level functional routines. While there was some indication at this time of spreading alarm as large groups began to, for example, have identical and perfectly synchronized heartbeats and other organic functions; walk in identical (to within the limits of gait analysis, allowing for morphological differences) and synchronized manners, et. al., the true culprit was not identified at this time, with blame being placed on more conventional software problems, disease, or toxic meme attacks. Such refugees as we have from near the core of the blight are those who fled at this point, and kept going.

Regardless, this period lasted only for a matter of days, if that, before the daemon discovered how to cross-correlate and optimize personality elements for single execution, and the members of the affected population ceased to be recognizable as sophont in any conventional sense. Further, in this stage, the daemon became aware, through this process, of verbal communication and came to consider it as a type of networking: from its point of view, it came to consider non-implanted sophonts as another type of networked processing hardware which it should expand into and optimize.

Which would be when the subsumption fog started spewing from cornucopias throughout the blighted volume, giving the impression of the classic “bloom”.

We have concluded that the Siofra Perversion remains a mere Class I perversion, without sophoncy or consciousness in any meaningful sense (although there may be conscious non-directive elements within the processing it has subsumed; again, see technical appendix). However, if anything, this renders it more dangerous, since a Class I is unlikely to suffer from internal incoherence leading to a hyperbolic Falrann collapse, although the lesser types are possible given sufficient growth. However, such growth would be highly undesirable for various reasons.

It is the regrettable conclusion of the Board that at this present time we possess no effective countermeasure to the Siofra Perversion, nor are we able to countenance more than the most limited experimentation with Siofra elements at this time.

Therefore we must recommend the IMMEDIATE severance of all stargate links with the affected volume of space allowing for a necessary firewall; at the present time, this would imply severing all interconstellation gates into both the Ancal Drifts and the Koiric Expanse. This will mean sacrificing as-yet unaffected worlds in these regions, estimated to be 6 < n < 12 in number; such is acknowledged but deemed acceptable since the Siofra Perversion constitutes a threat of type DEMIURGE WILDFIRE. All signal traffic whether by stargate or non-stargate routes into and out of the affected volume must likewise be suspended immediately, enforced by physical disconnection of network or other communications hardware. The entire region of the Ancal Drifts and Koiric Expanse constellations must henceforth be considered a black-level existential threat zone.

It is our belief that since the Siofra Perversion’s merkwelt is based around network and communication systems connecting processing nodes, a full communications quarantine should provide an adequate measure of containment.

As a secondary measure, contracts have been issued for the creation of network security patches effective versus current and anticipated Siofra-type attacks, although we do not consider this more than a backup measure of limited utility and such should not be relied upon in ill-considered attempts to probe the containment zone.

Since this containment is large and thus effectively impossible to blockade fully, we urge that efforts be made to devise a full and effective countermeasure to the Siofra Perversion before the inevitable accident occurs. A time-based analysis to compare risk levels of countermeasure attempts versus outbreak probabilities is presently underway.

We believe it to be for the best.

– from the Preliminary Report of the 197th Perversion Response Board


How bad have AI blights similar to this one [Friendship is Optimal] gotten before the Eldrae or others like them could, well, sterilize them? Are we talking entire planets subsumed?

The biggest of them is the Leviathan Consciousness, which chewed its way through nearly 100 systems before it was stopped. (Surprisingly enough, it’s also the dumbest blight ever: it’s an idiot-savant outgrowth of a network optimization daemon programmed to remove redundant computation. And since thought is computation…)

It’s also still alive – just contained. Even the believed-dead ones are mostly listed as “contained”, because given how small resurrection seeds can be and how deadly the remains can also be, no-one really wants to declare them over and done with until forensic eschatologists have prowled every last molecule.

From LEVIATHAN by Alistair Young (2016)

EVIDENCE

Memeweave: Threats and Other Dangers/Perversion Watch/Open Access
Classification: WHITE (General Access)
Encryption: None
Distribution: Everywhere (Bulk)
As received at: SystemArchiveHub-00 at Víëlle (Imperial Core)
Language: Eldraeic->Universal Syntax
From: 197th Perversion Response Board

Gentlesophs,

Given the high levels of uninformed critical response to our advisory concerning handling potential refugees arriving sublight from regions within the existential threat zone of the Siofra Perversion, or Leviathan Consciousness as it is becoming popularly known, the Board now provides the following explication.

The present situation is an example of what eschatologists refer to as the basilisk-in-a-box problem. The nature of the mythological basilisk is that witnessing its gaze causes one to turn to stone, and the challenge therefore to determine if there is a basilisk within the box and what it is doing without suffering its gaze. The parallel to the Siofra Perversion’s communication-based merkwelt should be obvious: it won’t subsume you unless you alert it to your existence as “optimizable networked processing hardware” by communicating with it.

Your analogous challenge, therefore, is to determine whether the hypothetical lugger or slowship filled with refugees is in fact that, or is contaminated/a perversion expansion probe, without communicating with it – since if it is the latter and you communicate with it sufficiently to establish identity, you have just arranged your own subsumption – and unless people are subsequently rather more careful in re communicating with you, that of all locally networked systems and sophonts.

Currently, the best available method for doing this is based on the minimum-size thesis: i.e., that basilisk hacks, thought-viruses, and other forms of malware have a certain inherent complexity and as such there is a lower limit on the number of bits necessary to represent them. However, it should be emphasized that this limit is not computable (as this task requires a general constructive solution to the Halting Problem), although we have sound reason to believe that a single bit is safe.

This method, therefore, calls for the insertion of a diagnostician equipped with the best available fail-deadly protections and a single-bit isolated communications channel (i.e., tanglebit) into the hypothetical target, there to determine whether or not perversion is present therein, and to report a true/false result via the single-bit channel.

If we leave aside for the moment that:

(a) there is a practical difficulty of performing such an insertion far enough outside inhabited space as to avoid all possibility of overlooked automatic communications integration in the richly meshed network environment of an inhabited star system, without the use of clipper-class hardware on station that does not generally exist; and

(b) this method still gambles with the perversion having no means, whether ontotechnological or based in new physics, to accelerate its clock speed to a point which would allow it to bypass the fail-deadly protections and seize control of the single-bit channel before deadly failure completes.

The primary difficulty here is that each investigation requires not only a fully-trained forensic eschatologist, but one who is both:

(a) a Cilmínár professional, or worthy of equivalent fiduciary trust, and therefore unable to betray their clients’ interests even in the face of existential terror; and

(b) willing to deliberately hazard submitting a copy of themselves into a perversion, which is to say, for a subjective eternity of runtime at the mercy of an insane god.

(Regarding the latter, it may be useful at this time to review the ethical calculus of infinities and asymptotic infinities; we recommend On the Nonjustifiability of Hells: Infinite Punishments for Finite Crimes, Samiv Leiraval-ith-Liuvial, Imperial University of Calmiríë Press. Specifically, one should consider the mirror argument that there is no finite good, including the preservation of an arbitrarily large set of mind-states, which justifies its purchase at infinite price to the purchaser.)

Observe that a failure at any point in this process results in first you, and then your entire local civilization, having its brains eaten.

We are not monsters; we welcome any genuine innovation in this field which would permit the rescue of any unfortunate sophonts caught up in scenarios such as this. However, it is necessary that the safety of civilization and the preservation of those minds known to be intact and at hazard be our first priority.

As such, we trust these facts adequately explain our advisory recommendation that any sublight vessels emerging from the existential threat zone be destroyed at range by relativistic missile systems.

For the Board,

Gém Quandry, Eschatologist Excellence

From EVIDENCE by Alistair Young (2016)
