This is the way the world ends
This is the way the world ends
This is the way the world ends
Not with a bang but a whimper.
From The Hollow Men by T. S. Eliot (1925)

If you want to turn your science fiction story up to eleven, the traditional trope is to destroy an entire planet (just ask planet Alderaan). Preferably by blowing up Terra. Though that has gotten to be a bit stale, so there are some stories that destroy the entire universe. In James Blish's fourth "Cities in Flight" novel The Triumph of Time the protagonists make the unsettling discovery that our universe is only months away from merging with another universe — composed of pure antimatter. Egads.

The best list of these on the net used to be Exit Mundi, a collection of end-of-world scenarios. The site has vanished, but can be found archived in the Wayback Machine. Another good source is the TV Tropes entry for Apocalypse How. Also good is the Wikipedia article on Global Catastrophic Risk.


This classification system is from TV Tropes and is partially based on Bruce Sterling's analysis.

Focused Destruction: A small localized area undergoes a species-level or higher apocalypse. The rest of the world at large is totally unaffected, maybe not even knowing of the events happening in the affected area.
Regional: a part of a continent or landmass, be it a province/state, geographical region, or sub-continent (e.g. "California"/"Uganda", "Sub-Saharan Africa", "India", etc.).
Continental: an entire continent or landmass ("Oceania", "The Americas", "Eurasia", etc.).
Planetary: an entire planet, or the vast majority of one. (If the given setting does not involve space travel and/or other worlds, then the scale effectively stops here, or skips up to Multiversal if the other worlds are not elsewhere in space, but do exist.)
Stellar: a solar system, every planet orbiting a star, the star itself, or the star plus everything in its orbit.
Galactic: a galaxy, most or all of its stars, up to all mass associated with it.
Universal: the entire universe, all or most galaxies within it, or all major galaxy filaments or equivalent highest-level structures.
Multiversal: multiple universes, or whatever exists outside of the setting's native universe (includes whichever flavour of Another Dimension is on offer).
Omniversal: all universes or all possible universes, everything that exists, or reality itself; up to some abstract ontological limit if the setting includes explicit metaphysical stipulations.
Societal Disruption: Civilization survives intact, but is forever altered. This may be due to the sheer amount of damage caused lowering the standard of living, or it may be a result of people being forced to adapt to the new threat(s) they face.
Societal Collapse: Humanity backslides within the affected area, regressing to pre-industrial level at best and pre-agricultural at worst. Civilization may recover on its own, but not for centuries at the least.
Species Extinction: A dominant or major species is either wiped out completely or reduced to such a low population level that its recovery is virtually impossible barring intervention by an outside force.
Total Extinction: Life itself ends. No living organism of any kind exists within the affected area.
Physical Annihilation: The affected area physically ceases to exist as it did before, but remnants of it can still be found; it's nuked into glass, sunk into the ocean, or blasted into asteroids.
Metaphysical Annihilation: The affected area ceases to exist totally, without remainder, or perhaps even to have ever existed; this usually involves erasing it from time. This may go up to the elimination of even the possibility of the existence of anything like the affected area, if for instance the basic system of reality is changed or wiped out. This may get highly abstract, depending on how fundamental the negation is.
Class 0: Regional Catastrophe (Regional/Societal Disruption or Regional/Societal Collapse).
(examples: moderate-case global warming, minor asteroid impact, local thermonuclear war)
Global civilization not eliminated, but regional civilizations effectively destroyed; millions to hundreds of millions dead, but large parts of humankind retain current social and technological conditions. Chance of humankind recovery: excellent. Species local to the catastrophe likely die off, and post-catastrophe effects (refugees, fallout, etc.) may kill more. Chance of biosphere recovery: excellent.
Class 1: Human Die-Back (Planetary/Societal Disruption).
(examples: extreme-case global warming, moderate asteroid impact, global thermonuclear war)
Global civilization set back to pre- or low-industrial conditions; several billion or more dead, but human species as a whole survives, in pockets of varying technological and social conditions. Chance of humankind recovery: moderate. Most non-human species on brink of extinction die off, but most other plant and animal species remain and, eventually, flourish. Chance of biosphere recovery: excellent.
Class 2: Civilization Extinction (Planetary/Societal Collapse).
(examples: worst-case global warming, significant asteroid impact, early-era molecular nanotech warfare)
Global civilization destroyed; millions (at most) remain alive, in isolated locations, with ongoing death rate likely exceeding birth rate. Chance of humankind recovery: slim. Many non-human species die off, but some remain and, over time, begin to expand and diverge. Chance of biosphere recovery: good. This takes an entire planet back to at least pre-industrial days, if not hunter-gatherer days.
Class 3a: Engineered Human Extinction (Planetary/Species Extinction: dominant species, engineered).
(examples: targeted nano-plague, engineered sterility absent radical life extension)
Global civilization destroyed; all humans dead. Conditions triggering this are human-specific, so other species are, for the most part, unaffected. Chance of humankind recovery: nil. Chance of biosphere recovery: excellent. Extinction via unnatural causes (i.e., someone did something, human or otherwise).
Class 3b: Natural Human Extinction (Planetary/Species Extinction: dominant species, natural).
(examples: major asteroid impact, methane clathrates melt)
Extinction via natural causes. Global civilization destroyed; all humans dead. Conditions triggering this are general and global, so other species are greatly affected, as well. Chance of humankind recovery: nil. Chance of biosphere recovery: moderate.
Class 4: Biosphere Extinction (Planetary/Species Extinction: several species).
(examples: massive asteroid impact, "iceball Earth" reemergence, late-era molecular nanotech warfare)
Global civilization destroyed; all humans dead. Biosphere massively disrupted, with the wholesale elimination of many niches. Chance of humankind recovery: nil. Chance of biosphere recovery: slim. Chance of eventual re-emergence of organic life: good. Not only are humans gone, but most critters with them, leaving only a select few to evolve and refill the biosphere (or, as the name suggests, what's left of it).
Class 5: Planetary Extinction (Planetary/Species Extinction: all multicellular life).
(examples: dwarf-planet-scale asteroid impact, nearby gamma-ray burst)
Global civilization destroyed; all humans dead. Biosphere effectively destroyed; all species extinct. Geophysical disruption sufficient to prevent or greatly hinder re-emergence of organic life. The planet may be fit for re-habitation, but in the meantime, there's nothing more complex than bacteria left.
Class 6: Planetary Desolation (Planetary/Total Extinction).
The planet is left as a lifeless husk.
Class X: Planetary Annihilation (Planetary/Physical Annihilation).
(example: post-Singularity beings disassemble planet to make computronium)
Global civilization destroyed; all humans dead. Ecosystem destroyed; all species extinct. There used to be a planet here. There isn't anymore; it's gone.
Class X-2: Stellar Destruction (Stellar/Physical Annihilation).
You know that big ball of hydrogen/helium fusion and the bunch of rocks that used to circle it? Yeah, they ain't here no mo'. This usually happens due to that particular fusion ball doing something unpleasant like going supernova.
Class X-3: Galactic Scale Destruction (Galactic/Physical Annihilation).
By some means, a galaxy's billions of stars, nebulae, pulsars, and so forth, along with the super-massive Black Hole(s) at its core, are destroyed. Utterly.
Class X-4: Universal Destruction (Universal/Physical Annihilation).
Everything that has ever been observed by anyone, anywhere. Eradicated. Or at the very least, not organized into galaxies, stars, and planets anymore. It is the end of all things. Unless there are other dimensions; those are safe.
Class X-5: Multi-universal Destruction (Multiversal/Physical Annihilation).
If there are alternate dimensions or different realities or whatever in this fiction, then many of those go away.
Class Z: Total Destruction Of All Of Reality (Omniversal/Physical Annihilation or Omniversal/Metaphysical Annihilation).
Not just the universe, and not just other universes, but all places and things that can be said to physically exist get wiped out somehow.

Warning Signs for Tomorrow

Anders Sandberg has created a brilliant set of "warning signs" to alert people of futuristic hazards. Some are satirical, but they are all very clever. There are larger versions of the signs here.

Structural Metallic Oxygen
Do not remove from cryogenic environment.
Chemosensitive Components
Not for use in reducing atmospheres or vacuum.
Defended Privacy Boundary
Do not transgress with active sensoria.
Polyspecific Synthdrink
Please check biochemical compatibility coding before consuming or tasting.
Low Bandwidth Zone
Do not enter with weave-routed cloud cognition, full-spectrum telepresence, or other high-quota services in effect.
Assemblers In Use
Contains active nanodevices; do not break hermetic seal while blue status light is illuminated.
Spin Gravity
Gravity may vary with direction of motion. Freely moving objects travel along curved paths.
Motivation Hazard
Do not ingest if operating under a neurokinin/nociceptin addiction/tolerance regime.
No Ubiquitous Surveillance
Unmonitored hazards may exist within this zone. Manual security and emergency response calls required.
Unauthorized Observation Prohibited
Tangle channels utilize macroscale quantum systems; do not expose to potential sources of decoherence.
Diamondoid Surfaces
Slippery even when dry.
Active Sophotechnology
WARNING: This hyperlink references gnostic overlay templates that may affect your present personality, persona or consciousness. Are you sure you wish to proceed?
Dynamic Spacetime
Contraterragenesis reactor contains a synthetic spinning/charged gravitational singularity of mass 120,000,000 tons. Take all appropriate precautions during servicing.
Environmental Nanoswarms
CAUTION: Palpable microwave pulsation power feeds in operation.
Legislative Boundary
The zone you are entering exercises private legislative privilege under the Conlegius Act. By entering voluntarily you accede to compliance with applicable private law (see v-tag).
From BY THEIR WARNINGS SHALL YE KNOW THEM by Alistair Young (2015). Collected in The Core War and Other Stories

Theoretical Threats

There are a couple of theoretical reasons to expect that an apocalypse is due, even though the exact type of apocalypse is unknown. These tend to keep theorists up at night, staring at the ceiling.

The Fermi Paradox

The Fermi Paradox points out that:

  • There is a high probability of large numbers of alien civilizations
  • But we don't see any

So by the observational evidence, there are no alien civilizations. The trouble is that means our civilization shouldn't be here either, yet we are.

The nasty conclusion is that our civilization is here only so far: it is fated for death, and probably death sooner rather than later. This is called The Great Filter, and it is a rather disturbing thought. For a detailed explanation read the original article by Robin Hanson.

The Great Filter is something that prevents dead matter from giving rise, in time, to "expanding lasting life". The hope is that we humans are here because our species has somehow managed to evade the Great Filter (i.e., the Great Filter prevents the evolution of intelligent life). The fear is that the Great Filter lies ahead of us and could strike us down any minute (i.e., the Great Filter is either a near 100% chance of self destruction, or something implacable that hunts down and kills intelligent life).

The unnerving part is the implication: the easier it turns out to be for life to evolve on its own (for example, if life were discovered in the underground seas of Europa), the higher the probability that the Great Filter lies ahead of us.


This matters, since most existential risks (xrisk) we worry about today (like nuclear war, bioweapons, global ecological/societal crashes) only affect one planet. But if existential risk is the answer to the Fermi question, then the peril has to strike reliably. If it is one of the local ones it has to strike early: a multi-planet civilization is largely immune to the local risks. It will not just be distributed, but it will almost by necessity have fairly self-sufficient habitats that could act as seeds for a new civilization if they survive. Since it is entirely conceivable that we could have invented rockets and spaceflight long before discovering anything odd about uranium or how genetics work, it seems unlikely that any of these local risks are “it”. That means that the risks have to be spatially bigger (or, of course, that xrisk is not the answer to the Fermi question).

Of the risks mentioned by George, physics disasters are intriguing, since they might irradiate solar systems efficiently. But the reliability of them being triggered before interstellar spread seems problematic. Stellar engineering, stellification and orbit manipulation may be issues, but they hardly happen early — lots of time to escape. Warp drives and wormholes are also likely late activities, and do not seem to be reliable as extinctors. These are all still relatively localized: while able to irradiate a largish volume, they are not fine-tuned to cause damage and do not follow fleeing people. Dangers from self-replicating or self-improving machines seem to be a plausible, spatially unbound risk that could pursue (but also problematic for the Fermi question since now the machines are the aliens). Attracting malevolent aliens may actually be a relevant risk: assuming von Neumann probes, one can set up global warning systems or “police probes” that maintain whatever rules the original programmers desire, and it is not too hard to imagine ruthless or uncaring systems that could enforce the great silence. Since early civilizations have the chance to spread to enormous volumes given a certain level of technology, this might matter more than one might a priori believe.

So, in the end, it seems that anything releasing a dangerous energy effect will only affect a fixed volume. If it has energy E and one can survive it below a deposited energy e, and it just radiates in all directions, the safe range is r = √(E/4πe) — one needs to get into supernova ranges to sterilize interstellar volumes. If it is directional the range goes up, but smaller volumes are affected: if a fraction f of the sky is affected, the range increases as 1/√f, while the total volume affected scales as f·r³.
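The inverse-square argument above can be sketched numerically. This is a back-of-envelope illustration only: the supernova energy and the survivable fluence threshold below are assumed round numbers, not figures from the source.

```python
import math

LIGHT_YEAR_M = 9.4607e15  # meters per light-year

def safe_range(E, e):
    """Distance (m) at which a blast of total energy E (J), radiated
    isotropically, drops to a survivable fluence e (J/m^2): solve
    E / (4*pi*r^2) = e for r, giving r = sqrt(E / (4*pi*e))."""
    return math.sqrt(E / (4 * math.pi * e))

def beamed_range(E, e, f):
    """Same threat beamed into a fraction f of the sky: the fluence
    becomes E / (4*pi*f*r^2), so the lethal range grows as 1/sqrt(f)."""
    return safe_range(E, e) / math.sqrt(f)

# Illustrative (assumed) numbers: ~1e44 J for a supernova's output,
# and a guessed sterilization fluence of 1e7 J/m^2.
r = safe_range(1e44, 1e7)
print(f"lethal radius: {r / LIGHT_YEAR_M:.0f} light-years")
```

With these assumed inputs the lethal radius comes out on the order of a hundred light-years, which is why only supernova-class events plausibly sterilize interstellar volumes.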

Self-sustaining effects are worse, but they need to cross space: if their space range is smaller than interplanetary distances they may destroy a planet but not anything more. For example, a black hole merely absorbs a planet or star (releasing a nasty energy blast) but does not continue sucking up stuff. Vacuum decay on the other hand has indefinite range in space and moves at lightspeed. Accidental self-replication is unlikely to be spaceworthy unless it starts among space-moving machinery; here deliberate design is a more serious problem.

The speed of threat spread also matters. If it is fast enough no escape is possible. However, many of the replicating threats will have sublight speed and could hence be escaped by sufficiently paranoid aliens. The issue here is if lightweight and hence faster replicators can always outrun larger aliens; given the accelerating expansion of the universe it might be possible to outrun them by being early enough, but our calculations do suggest that the margins look very slim.

The more information you have about a target, the better you can in general harm it. If you have no information, merely randomizing it with enough energy/entropy is the only option (and if you have no information of where it is, you need to radiate in all directions). As you learn more, you can focus resources to make more harm per unit expended, up to the extreme limits of solving the optimization problem of finding the informational/environmental inputs that cause desired harm (=hacking). This suggests that mindless threats will nearly always have shorter range and smaller harms than threats designed by (or constituted by) intelligent minds.

In the end, the most likely type of actual civilization-ending threat for an interplanetary civilization looks like it needs to be self-replicating/self-sustaining, able to spread through space, and have at least a tropism towards escaping entities. The smarter, the more effective it can be. This includes both nasty AI and replicators, but also predecessor civilizations that have infrastructure in place. Civilizations cannot be expected to reliably do foolish things with planetary orbits or risky physics.

From THE END OF THE WORLDS by Anders Sandberg (2015)

The key issue here is the nature of the Great Filter, something we talk about when we discuss the Fermi Paradox.

The Fermi Paradox: loosely put, we live in a monstrously huge cosmos that is rather old. We only evolved relatively recently — our planet is ~4.6GYa old, in a galaxy containing stars up to 10GYa old, in a universe around 13.7GYa old. The Fermi Paradox asks: if life has evolved elsewhere, then where is it? We would expect someone to have come calling by now: a five billion year head start is certainly time enough to explore and/or colonize a galaxy only 100K light years across, even using sluggish chemical rockets.
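The "head start" arithmetic is easy to check. A sketch with assumed numbers: ~30 km/s is a generous cruise speed for chemical rockets (about 1e-4 of lightspeed).

```python
C_KM_S = 299_792.458  # speed of light, km/s

def crossing_time_years(distance_ly, speed_km_s):
    """Years to cross distance_ly at a constant speed. Light covers
    one light-year per year, so time = distance / (v/c)."""
    return distance_ly / (speed_km_s / C_KM_S)

# A settlement wave creeping along at ~30 km/s still crosses a
# 100,000-light-year galaxy in roughly a billion years.
t = crossing_time_years(100_000, 30.0)
print(f"{t:.2e} years")
```

Even padding the trip with long pauses at each settled system, the total stays comfortably inside a five-billion-year head start, which is what makes the silence puzzling.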

We don't see evidence of extraterrestrial life, so, as economist Robin Hanson pointed out, there must be some sort of cosmic filter function (The Great Filter) which stops life, if it develops, from leaving its star system of origin and going walkabout. Hanson described two possibilities for the filter. One is that it lies in our past (pGF): in this scenario, intelligent tool-using life is vanishingly rare because the pGF almost invariably exterminates planetary biospheres before they can develop it. (One example: gamma ray bursts may repeatedly wipe out life.) If this case is true, then we can expect to not find evidence of active biospheres on other planets. A few bacteria or archaea living below the Martian surface aren't a problem, but if our telescopes start showing us lots of earthlike planets with chlorophyll absorption lines in their reflected light spectrum (and oxygen-rich atmospheres) that would be bad news, because it would imply that the GF lies in our future (an fGF).

The implication of an fGF is that it doesn't specifically work against life; it works against interplanetary colonization. The fGF in this context might be an emergent property of space exploration, or it might be an external threat — or some combination: something so trivial that it happens almost by default when the technology for interstellar travel emerges, and shuts it down for everyone thereafter, much as Kessler syndrome could effectively block all access to low Earth orbit as a side-effect of carelessly launching too much space junk. Some example scenarios are below.


Recently I noticed Voices in AI – Episode 16: A Conversation with Robert J. Sawyer, an interview by Byron Reese, which contained an interesting tidbit on existential risk. In the course of the conversation, Reese attributed a connection between radio technology and existential risk to Carl Sagan:

“[Sagan] said that his guess was civilizations had a hundred years after they got radio, to either destroy themselves, or overcome that tendency and go on to live on a timescale of billions of years.” 

Robert J. Sawyer picked up on this theme and elaborated:

“The window is very small to avoid the existential threats that come with radio. The line through the engineering and the physics from radio, and understanding how radio waves work, and so forth, leads directly to atomic power, leads directly to atomic weapons… and leads conceivably directly to the destruction of the planet.”

I wasn’t able to find a fully explicit statement of this thesis in Sagan’s writings (it may be there, of course, and I simply couldn’t find it), but I found something close to it: 

“Sagan has several possible explanations for why alien radio signals have proved so elusive. Maybe… ‘No civilization survives long enough to develop power levels adequate to make such communications. All civilizations destroy themselves shortly after achieving a technological level consonant with radio astronomy.’” (Carl Sagan, Conversations with Carl Sagan, edited by Tom Head, p. 156)

My assumption is that the connection between radio technology and nuclear weapons obtains because radio technology means the use of electronics (the ability to build electronic devices, which are a much more sophisticated technology with more possibilities than, say, steam-power technology), as well as opening up the electromagnetic spectrum to scientific study. The study of the electromagnetic radiation may inevitably result in the development of a more general conception of radiation that leads to nuclear science and the ability to build nuclear weapons.

I have called radio (along with fusion and consciousness, inter alia) Technologies of Nature, that is to say, processes for which nature provides an existence proof, and which we can, in the fullness of technological development, attempt to reverse engineer. It should not surprise us that with the reverse engineering of the cosmos – that which nature has shown us that it is possible to do – we would eventually converge on the knowledge and the technology that would allow us to undo what nature has done.  

In other contexts Sagan often formulated the maturity of civilizations in terms of their mastery of radio technology, specifically, the ability to build radio telescopes. For example, Sagan wrote:

“Radio astronomy on Earth is a by-product of the Second World War, when there were strong military pressures for the development of radar. Serious radio astronomy emerged only in the 1950s, major radio telescopes only in the 1960s. If we define an advanced civilization as one able to engage in long-distance radio communication using large radio telescopes, there has been an advanced civilization on our planet for only about ten years. Therefore, any civilization ten years less advanced than we cannot talk to us at all.” (Carl Sagan, The Cosmic Connection: An Extraterrestrial Perspective, Chap. 31)

For Sagan, radio telescopes are a crucial technology because they enable SETI, the search for extraterrestrial intelligence. Possessing a sensitive radio telescope would make it possible for a civilization to detect a SETI signal, and only a little more advanced technology would allow for the transmission of a SETI signal to some other civilization that might detect it. But radio is also a crucial technology because of its above-noted connection with nuclear technology and therefore with anthropogenic existential risk. 

In human history, we built nuclear weapons before we built radio telescopes, but this appears to be a merely contingent development, and the two were only separated by about ten years of history. It could easily have happened that human beings built radio telescopes before nuclear weapons, as another civilization might have done (or may yet do).

In a counterfactual history (which would also require a counterfactual universe, that is to say, a universe different from the universe in which we in fact find ourselves), human beings might have built radio telescopes and, as soon as they turned them on, found that they lived in a densely populated universe, the sky alive with signals from a proliferation of ETIs. If all this had happened before we built nuclear weapons, the second half of the twentieth century would have been radically different than it was in fact.

This obviously didn’t happen, but we could push the counterfactual scenario harder, and we can imagine a civilization with more-or-less nineteenth century levels of technology (as when the telegraph and the telephone were invented) developing radio much earlier in its history than human beings did, relative to other technological developments. If we had had radio technology for a hundred years before we developed nuclear weapons, again history might have been radically different. And if we had nuclear weapons for a hundred years before we developed radio technology, again, this would have meant a radically different history.

All of these scenarios, and others as well, may be instantiated by other civilizations elsewhere in the cosmos. Sagan’s thesis as stated above by Reese and Sawyer posits that nuclear and radio technology are tightly-coupled. This may well be true, but it is not too difficult to imagine alternative scenarios in which nuclear and radio technology are only loosely-coupled, and I have elsewhere noted how in terrestrial history there are cases in which loosely-coupled science and technology prior to the industrial revolution could be separated by hundreds of years. If radio technology and nuclear technology were found separated by hundreds of years, a civilization might experience a longue durée in possession of the one without the other.


(ed note: Warning, spoilers for REVELATION SPACE by Alastair Reynolds)

But there was more to this galaxy than astrophysics. As if a new layer of memories had been quietly laid over her previous ones, Khouri found herself knowing something more: that the galaxy was teeming with life; a million cultures dispersed pseudo-randomly across its great slowly rotating disk.

But this was the past—the deep, deep past.

“Actually,” Fazil said, “somewhere in the region of a billion years ago. Given that the Universe is only about fifteen times older than that, that's quite a hefty chunk of time, especially on the galactic timescale." He was leaning over the railinged walkway next to her, as if they were a couple pausing to stare at their reflections in a dark, bread-strewn duckpond. “To give you some perspective, humanity didn't exist a billion years ago. In fact, neither did the dinosaurs. They didn't get around to evolving until less than two hundred million years ago; a fifth of the time we're dealing with here. No: we're deep into the Precambrian here. There was life on Earth, but nothing multicellular—a few sponges if you were lucky." Fazil looked at the galaxy representation again. “But that wasn't the case everywhere."

The million or so cultures (although she could be infinitely precise about the number, it suddenly struck her as childishly pedantic to do so, like specifying one's age to the nearest month) had not all arisen at the same time, nor did they all hang around for the same length of time. According to Fazil (though she understood it on some basic level) it had taken until four billion years ago for the galaxy to reach the required state at which intelligent cultures could begin to arise. But once that point of minimal galactic maturity had been reached, the cultures had not all suddenly appeared in unison. It had been a progressive emergence of intelligence, some cultures having arisen on worlds where, for one reason or another, the pace of evolutionary change was slower than the norm, or life's ascendancy was subject to more than the usual quota of catastrophic setbacks.

But eventually—two or three billion years after life had first arisen on their homeworlds—some of these cultures had become spacefaring. When that point was reached, most cultures expanded rapidly into the galaxy, although there were always a few stay-at-homes who preferred to colonise only their own solar systems, or sometimes even just their own circum-planetary environments. But generally the pace of expansion was rapid, with a mean drift rate between one tenth and one hundredth of the speed of light. That sounded slow, but was in fact blindingly fast, given that the galaxy was billions of years old and only a hundred thousand light-years wide. Unrestricted, any of these spacefarers could have dominated the entire galaxy in the totally inconsequential time of a few tens of millions of years. And maybe if it had happened like that—a neatly imperialist domination by one power—things would have been very different.

But instead, the first culture had been at the slower end of the expansionist speed-range, and had impacted on the expansion wave of a second, younger upstart. And while younger, the second civilisation was not technologically inferior to the first, nor less capable of mustering aggression when it was required. There was what—for want of a better word—one might describe as a galactic war; a sudden sparking friction where these two swelling empires brushed against one another, grinding like vast flywheels. Soon, other ascendant cultures were embroiled in the conflict. Eventually—to one degree or another—several thousand spacefaring civilisations fell into the fray. They had many names for it, in the thousand primary languages of the combatants. Some of these names could not easily be translated into any meaningful human referent. But more than one culture called it something which might—with due allowance for the crudities of interspecies communication—be termed the Dawn War.

It was a war encompassing the entire galaxy (and the two smaller satellite galaxies which orbited the Milky Way)—one which consumed not just planets, but whole solar systems, whole star systems, whole clusters of stars, and whole spiral arms. She understood that evidence of this war was visible even now, if one knew where to look. There were anomalous concentrations of dead stars in some regions of the galaxy, and still-burning stars in odd alignments; husked components of weapons-systems light-years wide. There were voids where there ought to have been stars, and stars which—according to the accepted dynamics of solar-system formation—ought to have had worlds, but which lacked them: only rubble, cold now. The Dawn War had lasted a long, long time—longer even than the evolutionary timescale of the hottest stars. But on the timescale of the galaxy, it had indeed been mercifully brief; a transforming spasm.

It was possible that no culture emerged intact; that none of the players who entered the Dawn War actually emerged, victorious or otherwise. The lengthscale of the war, while short by galactic time, was nonetheless hideously long by species-time. It was long enough for species to self-evolve, to fragment, to coalesce with other species or assimilate them; to remake themselves beyond recognition, or even to jump from organic to machine-life substrates. Some had even made the return trip, becoming machine, then returning to the organic when it suited their purposes. Some had sublimed, vanishing from the theatre of the war entirely. Some had converted their essences to data and found immortal storage in carefully concealed computer matrices. Others had self-immolated.

Yet in the aftermath, one culture emerged stronger than the others. Possibly they had been a fortunate small-time player in the main fray, now rising to supremacy amongst the ruins. Or possibly they were the result of a coalition, a merging of several battle-weary species. It hardly mattered, and they themselves probably had no hard data on their absolute origin. They were—at least then—a hybrid machine-chimeric species, with some residual vertebrate traits. They did not bother giving themselves a name.

“Still,” Fazil said, “they acquired one, whether they liked it or not.”

“What were they called?" she asked.

He waited before answering, and when he did, it was with almost theatrical gravity. “The Inhibitors. For a very good reason, which will shortly become apparent."

“I'll start with what I know,” Volyova said, drawing in a generous inhalation of breath. “Once, the galaxy was a lot more populous than it is now. Millions of cultures, though only a handful of big players, in fact. Just the way all the predictive models say the galaxy ought to be today, based on the occurrence rates of G-type stars and terrestrial planets in the right orbits for liquid water.” She was digressing, but Pascale and Khouri decided not to fight it. “That's always been a major paradox, you know. On paper, life looks a lot commoner than we find it to be. Theories for the developmental timescales for tool-using intelligence are a lot harder to quantify, but they suffer from much the same problem. They predict too many cultures.”

“Hence the Fermi paradox,” Pascale said.

“The what?” asked Khouri.

“The old dichotomy between the relative ease of interstellar flight—especially for robotic envoys—and the complete absence of any such envoys turning up from non-human cultures. The only logical conclusion was that no one else was around to send them, anywhere in the galaxy.”

“But the galaxy's a big place,” Khouri said. “Couldn't there be cultures elsewhere, except that we just don't know about them yet?”

“Doesn't work,” Volyova said emphatically, Pascale nodding in agreement. “The galaxy's big, but not that big—and it's also very old. Once a single culture decided to send out probes, everyone else in the galaxy would know about it within a few million years. And the galaxy happens to be several thousand times older than that. Granted, several generations of stars had to live and die before there were enough heavy elements to sustain life, but even if machine-building cultures only arise once every million years or so, they've had thousands of opportunities to dominate the entire galaxy.”

“To which there have always been two answers,” Pascale said. “Firstly, that they are here, but we just haven't ever noticed them. Maybe that was conceivable a few hundred years ago, but no one takes it seriously now; not when every square inch of every asteroid belt in about a hundred systems has been mapped.”

“Then maybe they never existed in the first place?"

Pascale nodded at Khouri. “Which was perfectly tenable until we knew more about the galaxy, which begins to look suspiciously accommodating of life, at least in the essentials; what Volyova just said—the right types of stars, and the right kind of planets in the right places. And the biological models were still arguing for a higher occurrence rate, right on up to intelligent cultures.”

“So the models were wrong," Khouri said.

“Except they probably weren't." Volyova was speaking now. “Once we got into space, once we left the First System, we began to find dead cultures all over the place. None had survived until much more recently than a million years ago, and some had gone out a lot earlier than that. But they all pointed to one thing. The galaxy had been a lot more fecund in the past. So why not now? Why was it suddenly so lonely?"

“The war,” Khouri said, and for a moment no one spoke. The silence was only interrupted when Volyova began speaking, softly and reverently, as if they were discussing something sacred.

“Yes,” she said, “The Dawn War—that was what they called it, wasn't it?”

“It was a billion years ago," Khouri said, and for a moment Volyova let her speak without interruption. “And it sucked up all those cultures and spat them out in shapes and forms a lot different to the ones they'd had when they went in. I don't think we can really understand what it was about, or who or what exactly survived it—except that they were more like machines than living creatures, although as far beyond anything we can envisage as our machines are beyond stone tools. But they had a name, or they were given it—I don't really remember the details. But I do remember the name."

“The Inhibitors,” Volyova said.

Khouri nodded. “And they deserved it."


“It was what they did afterwards," Khouri said. “Not during the war, but in its aftermath. It was like they subscribed to a creed; a rule of discipline. Intelligent, organic life had given rise to the Dawn War. What they were now was something different; post-intelligent, I guess. Anyway, it made what they did a lot easier."

“Which was?"

“Inhibition. Literally: they inhibited the rise of intelligent cultures around the galaxy, so that nothing like the Dawn War could ever happen again."

Volyova took over now. “It wasn't just a case of annihilating any extant cultures which might have survived the war. They also set about disturbing the conditions which could lead to intelligent life ever arising again. Not stellar engineering—I think that would have been too great an interference; too much an act which contradicted their own strictures—but inhibition on a lesser scale. They could have done it without tampering in the evolution of a single star, except in extreme cases—by altering cometary orbits, for instance, so that episodes of planetary bombardment lasted much longer than the norm. Life probably would have found niches in which to survive—deep underground, or around hydrothermal vents—but it would never have become very complex. Certainly nothing which would threaten the Inhibitors.”

“You said this was a billion years ago,” Pascale said. “And yet we've come all that way since then—from single-celled creatures right up to Homo sapiens. Are you saying we slipped through the net?”

“Exactly that," Volyova said. “Because the net was falling apart."

Khouri nodded. “The Inhibitors seeded the galaxy with machines, designed to detect the emergence of life and then suppress it. For a long time it looked like they worked as planned—that's why the galaxy isn't teeming today, although all the preconditions look favourable.” She shook her head. “I sound like I actually know this stuff.”

“Maybe you do," Pascale said. “In any case, I want to hear what you have to say. All of it."

“All right, all right." Khouri fidgeted in her acceleration couch, doubtless trying to do what Volyova had been doing for the last hour: avoiding putting pressure on the bruises she had already gained. “Their machines worked fine for a few hundred million years," she said. “But then stuff started to go wrong. They started failing; not working as efficiently as intended. Intelligent cultures began to emerge which would have previously been suppressed at birth."

From REVELATION SPACE by Alastair Reynolds (2000)

The Doomsday Argument

The Doomsday Argument is a probabilistic argument that claims to predict the number of future members of the human species given only an estimate of the total number of humans born so far. Simply put, it says that supposing all humans are born in a random order, chances are that any one human is born roughly in the middle.

The actual full-blown Doomsday Argument does not put any upper limit on the number of humans that will ever exist, nor provide a date for when humanity will become extinct. There is, however, an abbreviated form of the argument that does make these claims, by confusing probability with certainty. You may stumble over it some day, so don't be fooled.

The full-blown Doomsday Argument only makes the prediction that there is a 95% chance of extinction within 9,120 years.
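The arithmetic behind the 9,120-year figure is easy to reproduce. A minimal sketch, where every input is an assumed round number: roughly 60 billion humans born to date, a stable future population of 10 billion, and an 80-year life expectancy:

```python
# Sketch of the 95% Doomsday figure. All inputs are assumptions:
# ~60 billion humans born so far, a stable future population of
# 10 billion, and an 80-year life expectancy.
BORN_SO_FAR = 60e9          # rough estimate of humans born to date
SURVIVING_FRACTION = 0.05   # 95% confidence we are not in the first 5%

# With 95% confidence, those born so far are at least 5% of all
# humans who will ever be born, capping the total at 1.2 trillion.
total_max = BORN_SO_FAR / SURVIVING_FRACTION
future_births = total_max - BORN_SO_FAR

# Convert the remaining births into years at a steady replacement rate.
births_per_year = 10e9 / 80          # population / life expectancy
years_left = future_births / births_per_year
print(f"{years_left:,.0f} years")    # 9,120 years
```

Change any of the assumed inputs and the date moves accordingly; the argument constrains only the total number of births, not the calendar.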

Assorted Apocalypses

Energy Required To Destroy Terra

Joules (J)          TNT equivalent
4.184 × 10^21       1 teraton       = 1,000 gigatons = 1,000,000 megatons
5.43 × 10^23        120 Tt          1 Chicxulub Crater = 1 Dinosaur Killer = 20 Shoemaker-Levys
3.0 × 10^24         720 Tt          1 Wilkes Land crater = 6 Chicxulub Craters
4.184 × 10^24       1 petaton       = 1,000 teratons
5.5 × 10^24         1 Pt            total energy from the Sun that strikes the face of the Earth each year
3.2 × 10^26         77 Pt           energy required to blow off Terra's atmosphere into space
3.9 × 10^26         92 Pt           total energy output of the Sun each second (bolometric luminosity)
4.0 × 10^26         96 Pt           total energy output of a Type-II civilization (Kardashev scale) each second
6.6 × 10^26         158 Pt          energy required to heat all the oceans of Terra to boiling
4.184 × 10^27       1 exaton        = 1,000 petatons
4.5 × 10^27         1 Et            energy required to vaporize all the oceans of Terra into the atmosphere
7.0 × 10^27         2 Et            energy required to vaporize all the oceans of Terra and dehydrate the crust
2.9 × 10^28         7 Et            energy required to melt the (dry) crust of Terra
1.0 × 10^29         24 Et           energy required to blow off Terra's oceans into space
2.1 × 10^29         50 Et           Earth's rotational energy
1.5 × 10^30         359 Et          energy required to blow off Terra's crust into space
4.184 × 10^30       1 zettaton      = 1,000 exatons
2.9 × 10^31         7 Zt            energy required to blow up Terra (reduce to gravel orbiting the Sun)
3.3 × 10^31         8 Zt            total energy output of the Sun each day
3.3 × 10^31         8 Zt            total energy output of Beta Centauri each second (bolometric luminosity); 41,700 × luminosity of the Sun
5.9 × 10^31         14 Zt           energy required to blow up Terra (reduce to gravel flying out of former orbit)
1.2 × 10^32         29 Zt           total energy output of Deneb each second (bolometric luminosity)
2.9 × 10^32         69 Zt           energy required to blow up Terra (reduce to gravel and move pieces to infinity)
4.184 × 10^33       1 yottaton      = 1,000 zettatons
1.2 × 10^34         3 Yt            total energy output of the Sun each year
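As a rough cross-check on the "blow up Terra" rows, the textbook gravitational binding energy of a uniform-density sphere, U = 3GM²/(5R), can be computed directly. The uniform-density assumption understates the real figure (Terra is centrally condensed), so agreement within a factor of two with the 2.9 × 10^32 J entry is about what one should expect:

```python
# Cross-check: gravitational binding energy of a uniform-density
# sphere, U = 3*G*M^2 / (5*R). Underestimates the true value for
# a centrally condensed body like Terra.
G = 6.674e-11               # gravitational constant (m^3 kg^-1 s^-2)
M_EARTH = 5.972e24          # mass of Terra (kg)
R_EARTH = 6.371e6           # mean radius of Terra (m)
J_PER_ZETTATON = 4.184e30   # one zettaton of TNT, in joules

u = 3 * G * M_EARTH**2 / (5 * R_EARTH)
print(f"{u:.2e} J = {u / J_PER_ZETTATON:.0f} Zt")   # 2.24e+32 J = 54 Zt
```

That is the same order of magnitude as the "move pieces to infinity" row, which is the point of the exercise.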

Existential Risks

SEVERITY: ranges from Species Extinction up to Metaphysical Annihilation
CLASS: ranges from Class 1 (Human Die-Back) up to Class Z (Total Destruction Of All Of Reality)

3 Classification of existential risks

We shall use the following four categories to classify existential risks:

Bangs — Earth-originating intelligent life goes extinct in relatively sudden disaster resulting from either an accident or a deliberate act of destruction.

Crunches — The potential of humankind to develop into posthumanity is permanently thwarted although human life continues in some form.

Shrieks — Some form of posthumanity is attained but it is an extremely narrow band of what is possible and desirable.

Whimpers — A posthuman civilization arises but evolves in a direction that leads gradually but irrevocably to either the complete disappearance of the things we value or to a state where those things are realized to only a minuscule degree of what could have been achieved.

Armed with this taxonomy, we can begin to analyze the most likely scenarios in each category. The definitions will also be clarified as we proceed.

4 Bangs

This is the most obvious kind of existential risk. It is conceptually easy to understand. Below are some possible ways for the world to end in a bang. I have tried to rank them roughly in order of how probable they are, in my estimation, to cause the extinction of Earth-originating intelligent life; but my intention with the ordering is more to provide a basis for further discussion than to make any firm assertions.

4.1 Deliberate misuse of nanotechnology

In a mature form, molecular nanotechnology will enable the construction of bacterium-scale self-replicating mechanical robots that can feed on dirt or other organic matter. Such replicators could eat up the biosphere or destroy it by other means such as by poisoning it, burning it, or blocking out sunlight. A person of malicious intent in possession of this technology might cause the extinction of intelligent life on Earth by releasing such nanobots into the environment.

The technology to produce a destructive nanobot seems considerably easier to develop than the technology to create an effective defense against such an attack (a global nanotech immune system, an “active shield”). It is therefore likely that there will be a period of vulnerability during which this technology must be prevented from coming into the wrong hands. Yet the technology could prove hard to regulate, since it doesn’t require rare radioactive isotopes or large, easily identifiable manufacturing plants, as does production of nuclear weapons.

Even if effective defenses against a limited nanotech attack are developed before dangerous replicators are designed and acquired by suicidal regimes or terrorists, there will still be the danger of an arms race between states possessing nanotechnology. It has been argued that molecular manufacturing would lead to both arms race instability and crisis instability, to a higher degree than was the case with nuclear weapons. Arms race instability means that there would be dominant incentives for each competitor to escalate its armaments, leading to a runaway arms race. Crisis instability means that there would be dominant incentives for striking first. Two roughly balanced rivals acquiring nanotechnology would, on this view, begin a massive buildup of armaments and weapons development programs that would continue until a crisis occurs and war breaks out, potentially causing global terminal destruction. That the arms race could have been predicted is no guarantee that an international security system will be created ahead of time to prevent this disaster from happening. The nuclear arms race between the US and the USSR was predicted but occurred nevertheless.

4.2 Nuclear holocaust

The US and Russia still have huge stockpiles of nuclear weapons. But would an all-out nuclear war really exterminate humankind? Note that: (i) For there to be an existential risk it suffices that we can’t be sure that it wouldn’t. (ii) The climatic effects of a large nuclear war are not well known (there is the possibility of a nuclear winter). (iii) Future arms races between other nations cannot be ruled out and these could lead to even greater arsenals than those present at the height of the Cold War. The world’s supply of plutonium has been increasing steadily to about two thousand tons, some ten times as much as remains tied up in warheads. (iv) Even if some humans survive the short-term effects of a nuclear war, it could lead to the collapse of civilization. A human race living under stone-age conditions may or may not be more resilient to extinction than other animal species.

4.3 We’re living in a simulation and it gets shut down

A case can be made that the hypothesis that we are living in a computer simulation should be given a significant probability. The basic idea behind this so-called “Simulation argument” is that vast amounts of computing power may become available in the future, and that it could be used, among other things, to run large numbers of fine-grained simulations of past human civilizations. Under some not-too-implausible assumptions, the result can be that almost all minds like ours are simulated minds, and that we should therefore assign a significant probability to being such computer-emulated minds rather than the (subjectively indistinguishable) minds of originally evolved creatures. And if we are, we suffer the risk that the simulation may be shut down at any time. A decision to terminate our simulation may be prompted by our actions or by exogenous factors.

While to some it may seem frivolous to list such a radical or “philosophical” hypothesis next to the concrete threat of nuclear holocaust, we must seek to base these evaluations on reasons rather than untutored intuition. Until a refutation appears of the argument presented in [27], it would be intellectually dishonest to neglect to mention simulation-shutdown as a potential extinction mode.

4.4 Badly programmed superintelligence

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

4.5 Genetically engineered biological agent

With the fabulous advances in genetic technology currently taking place, it may become possible for a tyrant, terrorist, or lunatic to create a doomsday virus, an organism that combines long latency with high virulence and mortality.

Dangerous viruses can even be spawned unintentionally, as Australian researchers recently demonstrated when they created a modified mousepox virus with 100% mortality while trying to design a contraceptive virus for mice for use in pest control. While this particular virus doesn’t affect humans, it is suspected that an analogous alteration would increase the mortality of the human smallpox virus. What underscores the future hazard here is that the research was quickly published in the open scientific literature. It is hard to see how information generated in open biotech research programs could be contained no matter how grave the potential danger that it poses; and the same holds for research in nanotechnology.

Genetic medicine will also lead to better cures and vaccines, but there is no guarantee that defense will always keep pace with offense. (Even the accidentally created mousepox virus had a 50% mortality rate on vaccinated mice.) Eventually, worry about biological weapons may be put to rest through the development of nanomedicine, but while nanotechnology has enormous long-term potential for medicine it carries its own hazards.

4.6 Accidental misuse of nanotechnology (“gray goo”)

The possibility of accidents can never be completely ruled out. However, there are many ways of making sure, through responsible engineering practices, that species-destroying accidents do not occur. One could avoid using self-replication; one could make nanobots dependent on some rare feedstock chemical that doesn’t exist in the wild; one could confine them to sealed environments; one could design them in such a way that any mutation was overwhelmingly likely to cause a nanobot to completely cease to function. Accidental misuse is therefore a smaller concern than malicious misuse.

However, the distinction between the accidental and the deliberate can become blurred. While “in principle” it seems possible to make terminal nanotechnological accidents extremely improbable, the actual circumstances may not permit this ideal level of security to be realized. Compare nanotechnology with nuclear technology. From an engineering perspective, it is of course perfectly possible to use nuclear technology only for peaceful purposes such as nuclear reactors, which have a zero chance of destroying the whole planet. Yet in practice it may be very hard to avoid nuclear technology also being used to build nuclear weapons, leading to an arms race. With large nuclear arsenals on hair-trigger alert, there is inevitably a significant risk of accidental war. The same can happen with nanotechnology: it may be pressed into serving military objectives in a way that carries unavoidable risks of serious accidents.

In some situations it can even be strategically advantageous to deliberately make one’s technology or control systems risky, for example in order to make a “threat that leaves something to chance”.

4.7 Something unforeseen

We need a catch-all category. It would be foolish to be confident that we have already imagined and anticipated all significant risks. Future technological or scientific developments may very well reveal novel ways of destroying the world.

Some foreseen hazards (hence not members of the current category) which have been excluded from the list of bangs on grounds that they seem too unlikely to cause a global terminal disaster are: solar flares, supernovae, black hole explosions or mergers, gamma-ray bursts, galactic center outbursts, supervolcanos, loss of biodiversity, buildup of air pollution, gradual loss of human fertility, and various religious doomsday scenarios. The hypothesis that we will one day become “illuminated” and commit collective suicide or stop reproducing, as supporters of VHEMT (The Voluntary Human Extinction Movement) hope, appears unlikely. If it really were better not to exist (as Silenus told King Midas in the Greek myth, and as Arthur Schopenhauer argued although for reasons specific to his philosophical system he didn’t advocate suicide), then we should not count this scenario as an existential disaster. The assumption that it is not worse to be alive should be regarded as an implicit assumption in the definition of Bangs. Erroneous collective suicide is an existential risk albeit one whose probability seems extremely slight.

4.8 Physics disasters

The Manhattan Project bomb-builders’ concern about an A-bomb-derived atmospheric conflagration has contemporary analogues.

There have been speculations that future high-energy particle accelerator experiments may cause a breakdown of a metastable vacuum state that our part of the cosmos might be in, converting it into a “true” vacuum of lower energy density. This would result in an expanding bubble of total destruction that would sweep through the galaxy and beyond at the speed of light, tearing all matter apart as it proceeds.

Another conceivability is that accelerator experiments might produce negatively charged stable “strangelets” (a hypothetical form of nuclear matter) or create a mini black hole that would sink to the center of the Earth and start accreting the rest of the planet.

These outcomes seem to be impossible given our best current physical theories. But the reason we do the experiments is precisely that we don’t really know what will happen. A more reassuring argument is that the energy densities attained in present day accelerators are far lower than those that occur naturally in collisions between cosmic rays. It’s possible, however, that factors other than energy density are relevant for these hypothetical processes, and that those factors will be brought together in novel ways in future experiments.

The main reason for concern in the “physics disasters” category is the meta-level observation that discoveries of all sorts of weird physical phenomena are made all the time, so even if right now all the particular physics disasters we have conceived of were absurdly improbable or impossible, there could be other more realistic failure-modes waiting to be uncovered. The ones listed here are merely illustrations of the general case.

4.9 Naturally occurring disease

What if AIDS were as contagious as the common cold?

There are several features of today’s world that may make a global pandemic more likely than ever before. Travel, food-trade, and urban dwelling have all increased dramatically in modern times, making it easier for a new disease to quickly infect a large fraction of the world’s population.

4.10 Asteroid or comet impact

There is a real but very small risk that we will be wiped out by the impact of an asteroid or comet.

In order to cause the extinction of human life, the impacting body would probably have to be greater than 1 km in diameter (and probably 3–10 km). There have been at least five and maybe well over a dozen mass extinctions on Earth, and at least some of these were probably caused by impacts. In particular, the K/T extinction 65 million years ago, in which the dinosaurs went extinct, has been linked to the impact of an asteroid between 10 and 15 km in diameter on the Yucatan peninsula. It is estimated that a 1 km or greater body collides with Earth about once every 0.5 million years. We have only catalogued a small fraction of the potentially hazardous bodies.
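The kinetic energy of such an impactor is straightforward to estimate. A sketch with assumed round figures: a 10 km rocky body of density 3,000 kg/m³ arriving at a typical 20 km/s:

```python
# Rough kinetic energy of a "dinosaur killer" impactor. All figures
# are assumed: 10 km stony body, 3,000 kg/m^3, 20 km/s arrival speed.
from math import pi

DIAMETER = 10_000.0   # m
DENSITY = 3_000.0     # kg/m^3, typical for a stony asteroid
SPEED = 20_000.0      # m/s, a typical impact velocity

radius = DIAMETER / 2
mass = (4.0 / 3.0) * pi * radius**3 * DENSITY   # ~1.6e15 kg
energy = 0.5 * mass * SPEED**2                  # kinetic energy, joules
teratons = energy / 4.184e21                    # 1 teraton TNT = 4.184e21 J
print(f"{energy:.1e} J = about {teratons:.0f} teratons")
```

The result lands within a factor of two of the 5.43 × 10^23 J Chicxulub figure in the energy table above, which is reasonable given how sensitive the answer is to the assumed diameter and speed.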

If we were to detect an approaching body in time, we would have a good chance of diverting it by intercepting it with a rocket loaded with a nuclear bomb.

4.11 Runaway global warming

One scenario is that the release of greenhouse gases into the atmosphere turns out to be a strongly self-reinforcing feedback process. Maybe this is what happened on Venus, which now has an atmosphere dense with CO2 and a temperature of about 450 °C. Hopefully, however, we will have technological means of counteracting such a trend by the time it would start getting truly dangerous.

5 Crunches

While some of the events described in the previous section would be certain to actually wipe out Homo sapiens (e.g. a breakdown of a meta-stable vacuum state) others could potentially be survived (such as an all-out nuclear war). If modern civilization were to collapse, however, it is not completely certain that it would arise again even if the human species survived. We may have used up too many of the easily available resources a primitive society would need to use to work itself up to our level of technology. A primitive human society may or may not be more likely to face extinction than any other animal species. But let’s not try that experiment.

If the primitive society lives on but fails to ever get back to current technological levels, let alone go beyond it, then we have an example of a crunch. Here are some potential causes of a crunch:

5.1 Resource depletion or ecological destruction

The natural resources needed to sustain a high-tech civilization are being used up. If some other cataclysm destroys the technology we have, it may not be possible to climb back up to present levels if natural conditions are less favorable than they were for our ancestors, for example if the most easily exploitable coal, oil, and mineral resources have been depleted. (On the other hand, if plenty of information about our technological feats is preserved, that could make a rebirth of civilization easier.)

5.2 Misguided world government or another static social equilibrium stops technological progress

One could imagine a fundamentalist religious or ecological movement one day coming to dominate the world. If by that time there are means of making such a world government stable against insurrections (by advanced surveillance or mind-control technologies), this might permanently put a lid on humanity’s potential to develop to a posthuman level. Aldous Huxley’s Brave New World is a well-known scenario of this type.

A world government may not be the only form of stable social equilibrium that could permanently thwart progress. Many regions of the world today have great difficulty building institutions that can support high growth. And historically, there are many places where progress stood still or retreated for significant periods of time. Economic and technological progress may not be as inevitable as it appears to us.

5.3 “Dysgenic” pressures

It is possible that advanced civilized society is dependent on there being a sufficiently large fraction of intellectually talented individuals. Currently it seems that there is a negative correlation in some places between intellectual achievement and fertility. If such selection were to operate over a long period of time, we might evolve into a less brainy but more fertile species, Homo philoprogenitus (“lover of many offspring”).

However, contrary to what such considerations might lead one to suspect, IQ scores have actually been increasing dramatically over the past century. This is known as the Flynn effect. It’s not yet settled whether this corresponds to real gains in important intellectual functions.

Moreover, genetic engineering is rapidly approaching the point where it will become possible to give parents the choice of endowing their offspring with genes that correlate with intellectual capacity, physical health, longevity, and other desirable traits.

In any case, the time-scale for human natural genetic evolution seems much too grand for such developments to have any significant effect before other developments will have made the issue moot.

5.4 Technological arrest

The sheer technological difficulties in making the transition to the posthuman world might turn out to be so great that we never get there.

5.5 Something unforeseen

As before, a catch-all.

Overall, the probability of a crunch seems much smaller than that of a bang. We should keep the possibility in mind but not let it play a dominant role in our thinking at this point. If technological and economical development were to slow down substantially for some reason, then we would have to take a closer look at the crunch scenarios.

6 Shrieks

Determining which scenarios are shrieks is made more difficult by the inclusion of the notion of desirability in the definition. Unless we know what is “desirable”, we cannot tell which scenarios are shrieks. However, there are some scenarios that would count as shrieks under most reasonable interpretations.

6.1 Take-over by a transcending upload

Suppose uploads come before human-level artificial intelligence. An upload is a mind that has been transferred from a biological brain to a computer that emulates the computational processes that took place in the original biological neural network. A successful uploading process would preserve the original mind’s memories, skills, values, and consciousness. Uploading a mind will make it much easier to enhance its intelligence, by running it faster, adding additional computational resources, or streamlining its architecture. One could imagine that enhancing an upload beyond a certain point will result in a positive feedback loop, where the enhanced upload is able to figure out ways of making itself even smarter; and the smarter successor version is in turn even better at designing an improved version of itself, and so on. If this runaway process is sudden, it could result in one upload reaching superhuman levels of intelligence while everybody else remains at a roughly human level. Such enormous intellectual superiority may well give it correspondingly great power. It could rapidly invent new technologies or perfect nanotechnological designs, for example. If the transcending upload is bent on preventing others from getting the opportunity to upload, it might do so.

The posthuman world may then be a reflection of one particular egoistical upload’s preferences (which in a worst case scenario would be worse than worthless). Such a world may well be a realization of only a tiny part of what would have been possible and desirable. This end is a shriek.

6.2 Flawed superintelligence

Again, there is the possibility that a badly programmed superintelligence takes over and implements the faulty goals it has erroneously been given.

6.3 Repressive totalitarian global regime

Similarly, one can imagine that an intolerant world government, based perhaps on mistaken religious or ethical convictions, is formed, is stable, and decides to realize only a very small part of all the good things a posthuman world could contain.

Such a world government could conceivably be formed by a small group of people if they were in control of the first superintelligence and could select its goals. If the superintelligence arises suddenly and becomes powerful enough to take over the world, the posthuman world may reflect only the idiosyncratic values of the owners or designers of this superintelligence. Depending on what those values are, this scenario would count as a shriek.

6.4 Something unforeseen.

The catch-all.

These shriek scenarios appear to have substantial probability and thus should be taken seriously in our strategic planning.

One could argue that one value that makes up a large portion of what we would consider desirable in a posthuman world is that it contains as many as possible of those persons who are currently alive. After all, many of us want very much not to die (at least not yet) and to have the chance of becoming posthumans. If we accept this, then any scenario in which the transition to the posthuman world is delayed for long enough that almost all current humans are dead before it happens (assuming they have not been successfully preserved via cryonics arrangements) would be a shriek. Failing a breakthrough in life-extension or widespread adoption of cryonics, even a smooth transition to a fully developed posthuman world eighty years from now would constitute a major existential risk, if we define “desirable” with special reference to the people who are currently alive. This “if”, however, is loaded with a profound axiological problem that we shall not try to resolve here.

7 Whimpers

If things go well, we may one day run up against fundamental physical limits. Even though the universe appears to be infinite, the portion of the universe that we could potentially colonize is (given our admittedly very limited current understanding of the situation) finite, and we will therefore eventually exhaust all available resources or the resources will spontaneously decay through the gradual decrease of negentropy and the associated decay of matter into radiation. But here we are talking astronomical time-scales. An ending of this sort may indeed be the best we can hope for, so it would be misleading to count it as an existential risk. It does not qualify as a whimper because humanity could on this scenario have realized a good part of its potential.

Two whimpers (apart from the usual catch-all hypothesis) appear to have significant probability:

7.1 Our potential or even our core values are eroded by evolutionary development

This scenario is conceptually more complicated than the other existential risks we have considered (together perhaps with the “We are living in a simulation that gets shut down” bang scenario).

A related scenario is described in [62], which argues that our “cosmic commons” could be burnt up in a colonization race. Selection would favor those replicators that spend all their resources on sending out further colonization probes.

Although the time it would take for a whimper of this kind to play itself out may be relatively long, it could still have important policy implications because near-term choices may determine whether we will go down a track that inevitably leads to this outcome. Once the evolutionary process is set in motion or a cosmic colonization race begun, it could prove difficult or impossible to halt it. It may well be that the only feasible way of avoiding a whimper is to prevent these chains of events from ever starting to unwind.

7.2 Killed by an extraterrestrial civilization

The probability of running into aliens any time soon appears to be very small.

If things go well, however, and we develop into an intergalactic civilization, we may one day in the distant future encounter aliens. If they were hostile and if (for some unknown reason) they had significantly better technology than we will have by then, they may begin the process of conquering us. Alternatively, if they trigger a phase transition of the vacuum through their high-energy physics experiments (see the Bangs section) we may one day face the consequences. Because the spatial extent of our civilization at that stage would likely be very large, the conquest or destruction would take relatively long to complete, making this scenario a whimper rather than a bang.

7.3 Something unforeseen

The catch-all hypothesis.

The first of these whimper scenarios should be a weighty concern when formulating long-term strategy. Dealing with the second whimper is something we can safely delegate to future generations (since there’s nothing we can do about it now anyway).

On The Great Filter

SEVERITY: Species Extinction—Total Extinction
CLASS: 3a: Engineered Human Extinction—X: Planetary Annihilation

Simplistic warfare: As Larry Niven pointed out, any space drive that obeys the law of conservation of energy is a weapon of efficiency proportional to its efficiency as a propulsion system. Today's boringly old-hat chemical rockets, even in the absence of nuclear warheads, are formidably destructive weapons: if you can boost a payload up to relativistic speed, well, the kinetic energy of a 1 kg projectile traveling at just under 90% of c (τ of 0.5) is on the order of 20 megatons. Slowing down doesn't help much: even at 1% of c that 1 kilogram bullet packs the energy of a kiloton-range nuke. War, or other resource conflicts, within a polity capable of rapid interplanetary or even slow interstellar flight, is a horrible prospect.
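Those figures are easy to check. A quick sketch (mine, not Stross's) using the relativistic kinetic energy KE = (γ − 1)mc², where Niven's τ is 1/γ:

```python
import math

C = 2.998e8          # speed of light, m/s
MEGATON = 4.184e15   # joules per megaton of TNT
KILOTON = 4.184e12   # joules per kiloton of TNT

def kinetic_energy(mass_kg, beta):
    """Relativistic kinetic energy: KE = (gamma - 1) * m * c^2."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return (gamma - 1.0) * mass_kg * C**2

# tau = 1/gamma = 0.5  ->  gamma = 2  ->  v = sqrt(1 - 0.25) c ~ 0.866 c,
# and KE = (2 - 1) m c^2 = m c^2: the projectile carries its own rest energy
beta = math.sqrt(1.0 - 0.5**2)
print(f"v = {beta:.3f} c -> {kinetic_energy(1.0, beta) / MEGATON:.1f} megatons")

# the same 1 kg bullet slowed to 1% of c
print(f"v = 0.010 c -> {kinetic_energy(1.0, 0.01) / KILOTON:.2f} kilotons")
```

The first line comes out at roughly 21 megatons ("on the order of 20 megatons") and the second at about one kiloton, matching the quoted numbers.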

Irreducible complexity: I take issue with one of Anders' assumptions, which is that a multi-planet civilization is largely immune to the local risks. It will not just be distributed, but it will almost by necessity have fairly self-sufficient habitats that could act as seeds for a new civilization if they survive. I've rabbited on about this in previous years: briefly, I doubt that we could make a self-sufficient habitat that was capable of maintaining its infrastructure and perpetuating and refreshing its human culture with a population any smaller than high-single-digit millions; lest we forget, our current high-tech infrastructure is the climax product of on the order of 1-2 billion developed world citizens, and even if we reduce that by an order of magnitude (because who really needs telephone sanitizer salesmen, per Douglas Adams?) we're still going to need a huge population to raise, train, look after, feed, educate, and house the various specialists. Worse: we don't have any real idea how many commensal microbial species we depend on living in our own guts to help digest our food and prime our immune systems, never mind how many organisms a self-sustaining human-supporting biosphere needs (not just sheep to eat, but grass for the sheep to graze on, fungi to break down the sheep droppings, gut bacteria in the sheep to break down the lignin and cellulose, and so on).

I don't rule out the possibility of building robust self-sufficient off-world habitats. The problem I see is that it's vastly more expensive than building an off-world outpost and shipping rations there, as we do with Antarctica — and our economic cost/benefit framework wouldn't show any obvious return on investment for self-sufficiency.

So our future-GF need not be a solar-system-wide disaster: it might simply be one that takes out our home world before the rest of the solar system is able to survive without it. For example, if the resource extraction and energy demands of establishing self-sufficient off-world habitats exceed some critical threshold that topples Earth's biosphere into a runaway Greenhouse effect or pancakes some low-level but essential chunk of the biosphere (a The Death of Grass scenario) that might explain the silence.

Griefers: suppose some first-mover in the interstellar travel stakes decides to take out the opposition before they become a threat. What is the cheapest, most cost-effective way to do this?

Both the IO9 think-piece and Anders' response get somewhat speculative, so I'm going to be speculative as well. I'm going to take as axiomatic the impossibility of FTL travel and the difficulty of transplanting sapient species to other worlds (the latter because terraforming is a lot harder than many SF fans seem to believe, and us squishy meatsacks simply aren't constructed with interplanetary travel in mind). I'm also going to tap-dance around the question of a singularity, or hostile AI. But suppose we can make self-replicating robots that can build a variety of sub-assemblies from a canned library of schematics, building them out of asteroidal debris? It's a tall order with a lot of path dependencies along the way, but suppose we can do that, and among the assemblies they can build are photovoltaic cells, lasers, photodetectors, mirrors, structural trusses, and their own brains.

What we have is a Von Neumann probe — a self-replicating spacecraft that can migrate slowly between star systems, repair bits of itself that break, and where resources permit, clone itself. Call this the mobile stage of the life-cycle. Now, when it arrives in a suitable star system, have it go into a different life-cycle stage: the sessile stage. Here it starts to spawn copies of itself, and they go to work building a Matrioshka Brain. However, contra the usual purpose of a Matrioshka Brain (which is to turn an entire star system's mass into computronium plus energy supply, the better to think with) the purpose of this Matrioshka Brain is rather less brainy: its free-flying nodes act as a very long baseline interferometer, mapping nearby star systems for planets, and scanning each exoplanet for signs of life.

Then, once it detects a promising candidate — within a couple of hundred light years, oxygen atmosphere, signs of complex molecules, begins shouting at radio wavelengths then falls suspiciously quiet — it says "hello" with a Nicoll-Dyson Beam.

(It's not expecting a reply: to echo Auric Goldfinger: "no Mr Bond, I expect you to die.")

A Dyson sphere or Matrioshka Brain collects most or all of the radiated energy of a star using photovoltaic collectors on the free-flying elements of the Dyson swarm. Assuming they're equipped with lasers for direct line-of-sight communication with one another isn't much of a reach. Building bigger lasers, able to re-radiate all the usable power they're taking in, isn't much more of one. A Nicoll-Dyson beam is what you get when the entire emitted energy of a star is used to pump a myriad of high powered lasers, all pointing in the same direction. You could use it to boost a light sail with a large payload up to a very significant fraction of light-speed in a short time ... and you could use it to vapourize an Earth-mass planet in under an hour, at a range of hundreds of light years.
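The energy budget is worth a back-of-envelope check (my sketch, not from the original post). If "vaporize" means delivering the planet's gravitational binding energy, the energy needed to fully disassemble it, then a beam carrying one solar luminosity needs roughly a week; the one-hour figure implicitly assumes a far more luminous star, or a looser standard of destruction (sterilizing the surface takes vastly less energy):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
L_SUN = 3.828e26     # solar luminosity, W
M_EARTH = 5.972e24   # kg
R_EARTH = 6.371e6    # m

# Gravitational binding energy of a uniform sphere: U = 3 G M^2 / (5 R)
U = 3 * G * M_EARTH**2 / (5 * R_EARTH)
print(f"binding energy of an Earth-mass planet ~ {U:.2e} J")

# Time for a beam carrying the star's whole output to deliver that energy
for lum_in_suns in (1, 1e5):     # Sun-like star vs. a bright O-type star
    t_hours = U / (lum_in_suns * L_SUN) / 3600
    print(f"{lum_in_suns:g} L_sun -> {t_hours:.3g} hours")
```

At one solar luminosity the delivery time is around 160 hours; a hypothetical 10⁵ L_sun star gets it under an hour with room to spare, which fits the remark that only some stars can power a planet-killing N-D laser.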

Here's the point: all it takes is one civilization of alien ass-hat griefers who send out just one Von Neumann Probe programmed to replicate, build N-D lasers, and zap any planet showing signs of technological civilization, and the result is a galaxy sterile of interplanetary civilizations until the end of the stelliferous era (at which point, stars able to power an N-D laser will presumably become rare).

We have plenty of griefers who like destroying things, even things they've never seen and can't see the point of. I think the N-D laser/Von Neumann Probe option is a worryingly plausible solution to the identity of a near-future Great Filter: it only has to happen once, and it f**ks everybody.

General Non-Diminishing Prey

SEVERITY: Species Extinction
CLASS: 3a: Engineered Human Extinction

(ed note: Robin Hanson is the brilliant man who originated the entire concept of The Great Filter)

These are indeed scenarios of concern. But I find it hard to see how, by themselves, they could add up to a big future filter.

On griefers (aka “berserkers”), a griefer equilibrium seems to me unstable to their trying sometimes to switch to rapid growth within a sufficiently large volume that they seem to control. Sometimes that will fail, but once it succeeds enough then competing griefers have little chance to stop them. Yes there’s a chance the first civilization to make them didn’t think to encode that strategy, but that seems a pretty small filter factor.

On simple war, I find it hard to see how war has a substantial chance of killing everyone unless the minimum viable civilization size is large. And I agree that this min size gets bigger for humans in space, who are more fragile there. But it should get smaller for smart robots in space, or on Earth, especially if production becomes more local via nano-factories. The chance that the last big bomb used in a war happens to kill off the last viable group of survivors seems to me relatively small.

Of course none of these chances are low enough to justify complacency. We should explore such scenarios, and work to prevent them. But we should work even harder to find more worrisome scenarios.

So let me explain my nightmare scenario: general non-diminishing prey. Consider the classic post-apocalyptic scenario, such as described in The Road. Desperate starving people ignore the need to save and build for the future, and grab any food they can find, including each other. First all the non-human food is gone, then all the people.

Such situations have been modeled formally via “predator-prey dynamics”:

These are differential equations giving the rates at which counts of predators and prey grow or decline as a function of each other. The standard formulation has a key term whereby prey count falls in proportion to the product of the predator count and the prey count. This formulation embodies an important feature of diminishing returns: the fewer prey are left, the harder it is for predators to find and eat them.

Without enough such diminishing returns, any excess of predators quickly leads to the extinction of prey, followed quickly by the extinction of predators. For example, when starving humans are given easy access to granaries, such granaries are emptied quickly. Not made low; emptied. Which is why granaries in famines are usually either well-protected, or empty.
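Hanson's contrast can be made concrete with a toy simulation (my sketch, not his model). With the standard x·y predation term, predator and prey populations cycle indefinitely; replace it with a fixed per-predator intake (the easy-access granary, with no search cost) and the prey crash to zero, followed by the predators:

```python
def simulate(steps=20000, dt=0.001, diminishing=True):
    """Euler-integrate a toy predator-prey model.

    diminishing=True : classic Lotka-Volterra -- prey losses scale with
                       x*y, so hunting gets harder as prey thin out.
    diminishing=False: each predator eats at a fixed rate as long as any
                       prey remain (an easy-access granary, no search cost).
    """
    x, y = 10.0, 10.0                         # prey count, predator count
    growth, attack, conv, death = 1.0, 0.1, 0.75, 1.0
    for _ in range(steps):
        if diminishing:
            eaten = attack * x * y            # scarce prey are hard to find
        else:
            eaten = 2.0 * y if x > 0 else 0.0 # fixed per-predator intake
        x = max(x + (growth * x - eaten) * dt, 0.0)
        y = max(y + (conv * eaten - death * y) * dt, 0.0)
    return x, y

print(simulate(diminishing=True))   # both populations keep cycling
print(simulate(diminishing=False))  # prey hit zero, then predators starve
```

The parameter values are arbitrary; the qualitative outcome (cycles versus mutual extinction) is the point.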

In nature, there are usually many kinds of predators, and even more kinds of prey. So the real predator-prey dynamic is high-dimensional. The pairwise relations between most predators and preys do in fact usually involve strongly diminishing returns, both because predators must usually search for prey, and because some prey hiding places are much better than others.

If the relation between any one pair of predator and prey types happens to have no diminishing returns, then that particular type of prey will go extinct whenever there is a big enough excess of that particular type of predator. Since this selects against such prey, the prey we see in nature almost all have diminishing returns for all their practical predators.

Humans are general predators, able to eat a great many kinds of prey. And within human societies humans are also relatively general kinds of prey, since we mostly all use the same kinds of resources. So when humans prey on humans, the human prey can more easily go extinct.

For foragers, a key limit on human predation was simple distance. Foragers lived far apart, and were unpredictably located. Also, foragers had little physical property to grab, wives were not treated as property, and land was too plentiful to be worth grabbing. These limits mattered less for farmers, who did predate often via war.

The usual source of diminishing returns in farmer war predation has been the wide range of protection in places to hide; humans have often run to the mountains, jungle, or sea to escape human predators. Even so, humans and proto-humans have quite often driven one another to local relative extinction.

While the extinction of some kinds of humans relative to others has been common, the extinction of all humans in an area has been much less common. This is in part because, when there has been a local excess of humans, most have focused on non-human prey. Such prey are diverse, and most have strongly diminishing returns to human predation.

Even if humans expand into the solar system, and even if they create robot descendants, we expect our descendants to remain relatively general predators, at least for a long while. We also expect the physical resources that they collect to constitute relatively general prey, useful to a wide range of our descendants. Furthermore, we expect nature that isn’t domesticated or descended from humans to hold a decreasing quantity of useful resources.

Thus the future predator-prey dynamic should become lower dimensional than it has been in the past. To a perhaps useful approximation, there’d be only a few kinds of predators and prey. Which raises the key question: how strong are the diminishing returns to predation in that new world? That is, when some of our descendants hunt down others to grab resources, how fast does that task get harder as fewer prey remain?

One source of diminishing returns in predation is a diversity of approaches and interfaces. The more different are the methods that prey use to create and store value, the smaller the fraction of that value a predator can obtain via a simple hostile takeover. This increases the ratio of how hard prey and predators fight. As many have noted, in nature prey fight for their lives, while predators fight only for a meal. Even so, nature still has plenty of predation. Even if predators gain only part of the value contained in prey, they still predate if that costs them even less than this value.

As I said above, the main source of diminishing returns in predation among foragers was travel cost, and among farmers it was the diversity of physical places to run and hide. Such effects might still protect our descendants from predator-prey-dynamic extinction, even if they have only one kind of predator and prey. Alas, we have good reasons to fear that these factors may less protect our descendants.

The basic problem here is our improving techs for travel, communication, and surveillance. We are steadily able to move bits and people more cheaply, and to more cheaply and accurately watch spaces for activity. Yes moving out into the solar system would put more distance between things, and make them harder to see. But that one-time effect will be quickly overwhelmed by improving tech.

A colonized solar system is plausibly a place where predators can see most any civilized activities of any substantial magnitude, and get to them easily if not quickly. So if we ever reach a point where predators fight to grab civilized resources with little concern to save some for the future, they might be able to find and grab pretty much everything in the solar system. Much as easy-access granaries are quickly emptied in a famine.

Whether extinction results from such a scenario depends on how small are minimum viable civilization seeds, how obscure and well protected are the nooks and crannies in which they might hide, and how many of them exist and try to hide. Yes, hidden viable seeds drifting at near light-speed to other stars could prevent extinction, but such a prey-collapse scenario could play out well before such seeds are feasible.

So, bottom line, the future great filter scenario that most concerns me is one where our solar-system-bound descendants have killed most of nature, can’t yet colonize other stars, are general predators and prey of each other, and have fallen into a short-term-predatory-focus equilibrium where predators can easily see and travel to most all prey. Yes there are about a hundred billion comets way out there circling the sun, but even that seems a small enough number for predators to carefully map and track all of them.

Worry about prey-extinction scenarios like this is a reason I’ve focused on hidden refuges as protection from existential risk. Nick Beckstead has argued against refuges saying:

The most likely ways in which improved refuges could help humanity recover from a global catastrophe are scenarios in which well-stocked refuges with appropriately trained people help civilization to recover after a catastrophe that leaves a substantial portion of humanity alive but disrupts industrial and agricultural infrastructure, and scenarios in which only people in constantly-staffed refuges survive a pandemic purposely engineered to cause human extinction. I would guess that, in the former case, resources and people stocked in refuges would play a relatively small role in helping humanity to recover because they would represent a small share of relevant people and resources. The latter case strikes me as relatively far-fetched and I would guess it would be very challenging to do much better than the largely uncontacted peoples in terms of ensuring the survival of the species. (more)

Nick does at one point seem to point to the scenario that concerns me:

If a refuge is sufficiently isolated and/or secret, it would be easier to ensure that everyone in the refuge had an adequate food supply, even if that meant an inegalitarian food distribution.

But he doesn’t appear to think this relevant for his conclusions. In contrast, I fear that a predatory-collapse scenario is the most likely future great filter, where unequal survival is key to preventing extinction.

Added 10a: Of course the concern isn’t just that some parties would have short term orientations, but that most would pursue short-term predation so vigorously that they force most everyone to put in similar effort levels, even if they would prefer a long-term view. When enemies mass on the border, one might have to turn farmers into soldiers to resist them, even if it is harvest time.

From BEWARE GENERAL VISIBLE PREY by Robin Hanson (2015)

Dinosaur Killer Asteroid

SEVERITY: Species Extinction
CLASS: 4: Biosphere Extinction

Dinosaurs were pretty darn successful. They managed to be the dominant terrestrial vertebrates for 135 million years, while us hairless apes have only been around for 0.04 million years. So why didn't dinosaurs evolve intelligence and create a galactic empire at the end of the Mesozoic Era? Well, they procrastinated just a wee bit too long on creating their space program.

Approximately 66 million years ago the Cretaceous–Paleogene extinction event happened, aka the "Dinosaur Killer Asteroid". Freaking asteroid was 12 kilometers in diameter, blazing along at about 20 kilometers per second, and had about 5.43×10²³ joules of kinetic energy. The blast was approximately the equivalent of a 120 teraton nuclear bomb. You couldn't do more damage with three thousand tons of pure antimatter.
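Those numbers hang together. A quick check (assuming a stony-asteroid density of about 3000 kg/m³, which is my assumption, not a figure from the source):

```python
import math

DENSITY = 3000.0     # kg/m^3, assumed stony-asteroid density
RADIUS = 6000.0      # m (12 km diameter)
SPEED = 20e3         # m/s
TERATON = 4.184e21   # joules per teraton of TNT

mass = DENSITY * (4.0 / 3.0) * math.pi * RADIUS**3
ke = 0.5 * mass * SPEED**2           # classical KE is fine at 20 km/s
print(f"mass ~ {mass:.2e} kg, KE ~ {ke:.2e} J")
print(f"~ {ke / TERATON:.0f} teratons TNT")
```

This reproduces the quoted 5.43×10²³ J and lands at roughly 130 teratons, consistent with the "approximately 120 teraton" figure. The antimatter comparison also checks out: 3000 tons of antimatter annihilating an equal mass of matter releases (6×10⁶ kg)c² ≈ 5.4×10²³ J.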

About 75% of Terra's surface is ocean, so it is unsurprising that the asteroid strike was an ocean impact. But this meant it was Megatsunami time. Scientists calculate that the waves were about five freaking kilometers tall. Small islands ("small" as in "Madagascar-sized") would have been totally submerged.

There was a global firestorm, partially ignited by the thermal pulse of the impact, and partially from incendiary fragments from the blast launched into sub-orbital trajectories to all points on the world. The higher proportion of oxygen in the atmosphere back then just made things worse. Scientists examining the prehistoric layer of soot laid down concluded that almost the entire terrestrial biosphere had gone up in flames.

There is also some evidence that the impact was not straight down, with the primary destruction focused at a single impact point. Evidence suggests it was a glancing impact, meaning it was an impact line, creating a flaming path of destruction across the face of North America.

The asteroid also picked a particularly devastating spot to strike: a continental shelf area composed of limestone. The incinerated limestone released huge amounts of carbon dioxide. Some of it led to rapid ocean acidification, spelling doom for the ammonites. The rest went into the atmosphere. Note that the Chesapeake Bay impactor of 35 million years ago did not hit a limestone shelf, and apparently did not cause an extinction event.

Between the firestorm and the limestone continental bake-off the amount of carbon dioxide in the atmosphere took a drastic upturn, which started a savage greenhouse effect. Global temperatures skyrocketed.

There was also about twelve years of acid rain, but that was just a flea bite compared to the rest of the catastrophe.

The resulting crater was about 100 kilometers wide and 30 kilometers deep.

About 50% to 75% of all species of life on Terra swiftly became extinct. Most of the animals that managed to survive were the ones that ate worms, flies, and carrion, because those were pretty much the only things left to eat.

This is because the debris cloud choked off the sunlight for about ten years, which wiped out most of the plants, which caused the herbivores to starve, which caused the carnivores to starve. The only abundant food source was the mountains of rotting animals (fat cooked ones and skinny raw ones) and the maggots who could not believe their own luck. In addition there was mold and fungus everywhere.

Cockroaches and the ancestors of rats survived, of course. Everybody knows how hard they are to kill. Don't sneer at those rats, they were your ancestors too.

Global Thermonuclear War

RocketCat sez

Mature people who were children in the 1950s grew up knowing that the entire world could die screaming in any given five-minute interval from atomic attack.

But we were carefully taught what to do when the enemy nuclear ICBMs started exploding cities into radioactive mushroom clouds. You duck and cover. Then stick your head between your knees so you can kiss your little heinie good-bye, before the blast of atomic radiation converts your body into a macabre silhouette on the wall.

In other words: all that "duck and cover" crap is just some busy-work to take the children's mind off their impending demise and to keep them quiet as they wait for death.

SEVERITY: Societal Destruction—Physical Annihilation
CLASS: 0: Regional Catastrophe—6: Planetary Desolation

Nuclear Warfare is one of the more common apocalypses in science fiction, at least for a few decades after 1945.

Forrest J. Ackerman's picturesque term for a nuclear holocaust is "Atomageddon". Andre Norton's colorful term for a planet rendered uninhabitable by nuclear war is Burnt Off. She describes them as having much the same appearance of a charcoal briquette. Except they are the size of a planet. Other authors describe such worlds as looking from orbit much like ordinary worlds. Except there are no green plants, and on the night side the craters marking the former location of cities glow blue with Cherenkov Radiation. Both descriptions are probably more fictionally vivid than they are scientifically realistic.

For the technical details about strategic nuclear weapons, go here.

The initial idea was that the entire continental surface of Terra would be turned into trinitite, with all the plants, cities, and people converted into trace impurities within the radioactive glass. Later the deadly vision shifted to "not with a bang, but a whimper" ending with the formulation of the nuclear winter scenario.


      “You left me.” Those words no sooner had been voiced than she would have given all she possessed to have not broken silence at all.
     His assent again threw her off balance. She had so expected some lie, some explanation, even some sign of shame or need to assure her that what she knew to be true was not. Now she simply stared at him.
     She could be as blunt and she had to know. When he added nothing to that she demanded: “Why?”
     “There has been death here.” Now he held that weapon in his left hand only, with his right he touched one of those many things hanging from his belt—this a narrow strip of some dark material—which looked like metal such as she had … of course, it was like that substance which enclosed the dead guardians!
     “There is still death here.” She regained much of her self-confidence, now she was able to nod at the frozen figure behind him as if it were no more than a carving of stone.
     “That is not what I meant,” he returned.
     “This,” he unhooked the strip of metal (Geiger counter) from his belt and held it up. Her eyes were keen enough to catch a play of color across it. “This indicates radiation. My people are immune to a high degree. It is part of our history. There was once a war fought on my world (Terra), such a war as,” Thorn looked around him as if he needed some inspiration, something he could draw upon to make things clear to her, “such a war as luckily this planet has never seen.
     “There were weapons used which killed—”
     She recalled the blasted tree, “By shooting fire? Such a thing maybe as that?” Releasing her grip on the carrier, she pointed to the rod (laser sidearm) he carried. “I saw—the burnt leaves, the withering.”
     “This is only a small, a very small example of such weapons.” He did not explain, she noted, how he had come by what he held; she was very sure he could not have brought it with him through the desert. The thing was too large to have been concealed anywhere among their belongings.
     “No,” he was continuing, “there were other fire throwers, such as could consume all of Kuxortal within a flash of thought (nuclear warheads). Much of my world died so. There were left only small pockets which held life. And the few of my own species who survived—they changed—or their children did (mutation). Some died because the changes were such as they became monsters who could not live. A few, so very few, were still human in form. Only they were now born armored against the force of weapons (the survivors had a mutation for increased resistance to radiation) such as those that had killed their world—unless the fire touched them directly.
     “For it is also the curse of such a war that the very air was poisoned. Those who breathed it, ventured into certain places, died, not quickly as in the fire, but slowly and with great pain and suffering. Only there is something also near here which kills!”
     Now he did begin to move closer to her, but this time Simsa did not shrink away. He was holding out that strip of metal he had worn, in open invitation for her to look at it.
     “When I came out of the pool—after I had drawn you in, since you were not conscious, nor able to help yourself—I found this thing you see. So, I pulled you part-way out and came to explore because of what can be read here.”
     Thorn placed his rod weapon on the pavement, pointed now with his free hand to the strip. There was a distinct line of red upward along it.
     “This showed me danger—the very danger which my kind know well from their own past. It might have meant that before us was death—perhaps not for me, but for you and your creatures. I had to find the source, know whether or not there was a deadly radiation.”
     “You took the carrier,” Simsa pointed out.
     Again he nodded. “If there was the degree of radiation which this indicated, then the food, the water on it might already be poisoned (neutron activation) for you. I had to make sure that you did not eat or drink before you left the pool chamber and I was not there to warn.”

From FORERUNNER by Andre Norton (1980)

(ed note: Marvin and his father live in the Lunar colony. When Marvin turns ten years old, his father takes him on a trip in a moon bus to go see something...)

It seemed to Marvin that the mountains stretched on forever: but at last, many hours later, the range ended in a towering, precipitous headland that rose steeply from a cluster of little hills. They drove down into a shallow valley that curved in a great arc toward the far side of the mountains: and as they did so, Marvin slowly realized that something very strange was happening in the land ahead.

The sun was now low behind the hills on the right: the valley before them should be in total darkness. Yet it was awash with a cold white radiance that came spilling over the crags beneath which they were driving.

Then, suddenly, they were out in the open plain, and the source of the light lay before them in all its glory.

It was very quiet in the little cabin now that the motors had stopped. The only sound was the faint whisper of the oxygen feed and an occasional metallic crepitation as the outer walls of the vehicle radiated away their heat. For no warmth at all came from the great silver crescent that floated low above the far horizon and flooded all this land with pearly light. It was so brilliant that minutes passed before Marvin could accept its challenge and look steadfastly into its glare, but at last he could discern the outlines of continents, the hazy border of the atmosphere, and the white islands of cloud. And even at this distance, he could see the glitter of sunlight on the polar ice.

It was beautiful, and it called to his heart across the abyss of space. There in that shining crescent were all the wonders that he had never known—the hues of sunset skies, the moaning of the sea on pebbled shores, the patter of falling rain, the unhurried benison of snow. These and a thousand others should have been his rightful heritage, but he knew them only from the books and ancient records, and the thought filled him with the anguish of exile.

Why could they not return? It seemed so peaceful beneath those lines of marching cloud. Then Marvin, his eyes no longer blinded by the glare, saw that the portion of the disk that should have been in darkness was gleaming faintly with an evil phosphorescence: and he remembered. He was looking upon the funeral pyre of a world—upon the radioactive aftermath of Armageddon. Across a quarter of a million miles of space, the glow of dying atoms was still visible, a perennial reminder of the ruinous past. It would be centuries yet before that deadly glow died from the rocks and life could return again to fill that silent, empty world.

And now Father began to speak, telling Marvin the story which until this moment had meant no more to him than the fairy tales he had once been told. There were many things he could not understand: it was impossible for him to picture the glowing, multicolored pattern of life on the planet he had never seen. Nor could he comprehend the forces that had destroyed it in the end, leaving the Colony, preserved by its isolation, as the sole survivor. Yet he could share the agony of those final days, when the Colony had learned at last that never again would the supply ships come flaming down through the stars with gifts from home. One by one the radio stations had ceased to call: on the shadowed globe the lights of the cities had dimmed and died, and they were alone at last, as no men had ever been alone before, carrying in their hands the future of the race.

Then had followed the years of despair, and the long-drawn battle for survival in this fierce and hostile world. That battle had been won, though barely: this little oasis of life was safe against the worst that Nature could do. But unless there was a goal, a future toward which it could work, the Colony would lose the will to live, and neither machines nor skill nor science could save it then.

So, at last, Marvin understood the purpose of this pilgrimage. He would never walk beside the rivers of that lost and legendary world, or listen to the thunder raging above its softly rounded hills. Yet one day—how far ahead?—his children’s children would return to claim their heritage. The winds and the rains would scour the poisons from the burning lands and carry them to the sea, and in the depths of the sea they would waste their venom until they could harm no living things. Then the great ships that were still waiting here on the silent, dusty plains could lift once more into space, along the road that led to home.

That was the dream: and one day, Marvin knew with a sudden flash of insight, he would pass it on to his own son, here at this same spot with the mountains behind him and the silver light from the sky streaming into his face.

He did not look back as they began the homeward journey. He could not bear to see the cold glory of the crescent Earth fade from the rocks around him, as he went to rejoin his people in their long exile.

From IF I FORGET THEE, OH EARTH... by Arthur C. Clarke (1951)

For three hundred years, while its fame spread across the world, the little town had stood here at the river’s bend. Time and change had touched it lightly; it had heard from afar both the coming of the Armada and the fall of the Third Reich, and all Man’s wars had passed it by.

Now it was gone, as though it had never been. In a moment of time the toil and treasure of centuries had been swept away. The vanished streets could still be traced as faint marks in the vitrified ground, but of the houses, nothing remained. Steel and concrete, plaster and ancient oak—it had mattered little at the end. In the moment of death they had stood together, transfixed by the glare of the detonating bomb. Then, even before they could flash into fire, the blast waves had reached them and they had ceased to be. Mile upon mile the ravening hemisphere of flame had expanded over the level farmlands, and from its heart had risen the twisting totem-pole that had haunted the minds of men for so long, and to such little purpose.

The rocket had been a stray, one of the last ever to be fired. It was hard to say for what target it had been intended. Certainly not London, for London was no longer a military objective. London, indeed, was no longer anything at all. Long ago the men whose duty it was had calculated that three of the hydrogen bombs would be sufficient for that rather small target. In sending twenty, they had been perhaps a little overzealous.

This was not one of the twenty that had done their work so well. Both its destination and its origin were unknown: whether it had come across the lonely Arctic wastes or far above the waters of the Atlantic, no one could tell and there were few now who cared. Once there had been men who had known such things, who had watched from afar the flight of the great projectiles and had sent their own missiles to meet them. Often that appointment had been kept, high above the Earth where the sky was black and sun and stars shared the heavens together. Then there had bloomed for a moment that indescribable flame, sending out into space a message that in centuries to come other eyes than Man’s would see and understand.

From THE CURSE by Arthur C. Clarke (1946)

(ed note: Central Control is the galactic federation of alien nations. Terra is a second-class star nation, their only source of interstellar trade revenue is by hiring out Terran warriors as mercenaries. On primitive planets are hired Terran "Archs" {for archons}, who are only allowed swords and simple rifles. On advanced worlds are hired Terran "Mechs" {for mechanized}, who are allowed aircraft, tanks, and blasters.)

      They saw but little of the Venturi city, being taken along passages chiseled through the native rock to a room near the top of the cliff, one side of which was transparent. Their guide withdrew and Kana went over to that window, craving the feeling of freedom it gave.
     “Volcano crater,” Hansu observed.
     The center of the island was a cup, its walls terraced and planted, a grove of trees extending into a miniature woodland in the depth of the hollow. But there were no signs of buildings.
     “But where—”
     The Blademaster looked beyond the peaceful carpet of vegetation to the crater walls.
     “We’re in their city now,” he explained. “They’ve hollowed out the cliffs—”
     In a moment Kana saw the evidences of that—the regular openings in the rock which must equal such windows as the one before which he now stood.
     “What a scheme!” he marveled. “Even a bomber would have a hard time putting this out of commission—unless it dropped hot stuff—”

     At the corner of the Blademaster’s jaw a tiny muscle pulled tight.
     “When the law is broken once, it can be easily fractured again.”

     “Use hot stuff?” Kana’s horrified amazement was genuine. He could accept the enmity of the Mechs, even the struggle for power backed in some mysterious way by Central Control Agents, but the thought of turning to nuclear weapons against—! Terra had learned too bitter a lesson in the Big Blow-up and the wars which followed. Those had occurred a thousand years ago but they had scarred the memories of his species for all time. He could not conceive of a Terran using nukes—it was so unnatural that it made his head reel.
     “We’ve had evidence enough that this is not just a Mech (Terran) plot,” Hansu pointed out relentlessly. “We may be conditioned against hot stuff because of our past history—but others (such as non-Terran aliens) aren’t. And we daren’t overlook any possibility—”
     That was an axiom of the corps he should have remembered. Never overlook any possibility, be prepared for any change in prospects—in the balance of force against force.

From STAR GUARD by Andre Norton (1955).
Collected in Star Soldiers (2001), currently a free eBook in the Baen free library.

"But they're Terran settlers, or at least from Terran stock, aren't they?"

"Sure," Tau sipped his coffee slowly. "But there are settlers and settlers, son. And a lot depends upon when they left Terra and why, and who they were--also what happened to them after they landed out here."

"And Khatkans are really special?"

"Well, they have an amazing history. The colony was founded by escaped prisoners—and just one racial stock. They took off from Earth close to the end of the Second Atomic War. That was a race war, remember? Which made it doubly ugly." Tau's mouth twisted in disgust. "As if the color of a man's skin makes any difference in what lies under it! One side in that line-up tried to take over Africa—herded most of the natives into a giant concentration camp and practiced genocide on a grand scale. Then they were cracked themselves, hard and heavy. During the confusion some survivors in the camp staged a revolt, helped by the enemy. They captured an experimental station hidden in the center of the camp and made a break into space in two ships which had been built there. That voyage must have been a nightmare, but they were desperate. Somehow they made it out here to the rim and set down on Khatka without power enough to take off again—and by then most of them were dead.

"But we humans, no matter what our race, are a tough breed. The refugees discovered that climatically their new world was not too different from Africa, a lucky chance which might happen only once in a thousand times. So they thrived, the handful who survived.

"They reverted to the primitive for survival. Then, about two hundred years ago, long before the first Survey Scout discovered them, something happened. Either the parent race mutated, or, as sometimes occurs, a line of people of superior gifts emerged—not in a few isolated births, but with surprising regularity in five family clans. There was a short period of power struggle until they realized the foolishness of civil war and formed an oligarchy, heading a loose tribal organization. With the Five Families to push and lead, a new civilization developed, and when Survey came to call they were no longer savages.

From VOODOO PLANET by Andre Norton (1959)

(ed note: The free trader crew of the Solar Queen purchase the rights to a planet in a Survey Auction. And quickly discover they've bought a pig in a poke.)

      THEY WERE ALL in the mess cabin again, the only space in the Queen large enough for the crew to assemble. Tang Ya set a reader on the table while Captain Jellico slit the packet and brought out the tiny roll of film it contained. Dane believed afterwards that few of them drew a really deep breath until it was fitted into place and the machine focused on the wall in lieu of the regular screen.
     “Planet—Limbo—only habitable one of three in a yellow star system—” the impersonal voice of some bored Survey clerk droned through the cabin.
     On the wall of the Queen appeared a flat representation of a three world system with the sun in the centre. Yellow sun—perhaps the planet had the same climate as Terra! Dane’s spirits soared. Maybe they were in luck—real luck.
     “Limbo—” that was Rip wedged beside him. “Man, oh, man, that’s no lucky name—that sure isn’t!”
     But Dane could not identify the title. Half the planets on the trade lanes had outlandish names didn’t they—any a Survey man slapped on them.
     “Co-ordinates—” the voice rippled out lines of formulae which Wilcox took down in quick notes. It would be his job to set the course to Limbo.
     “Climate—resembling colder section of Terra. Atmosphere—” more code numbers which were Tau’s concern. But Dane gathered that it was one in which human beings could live and work.

     The image in the screen changed. Now they might be hanging above Limbo, looking at it through their own view ports. And that vision was greeted with at least one exclamation of shocked horror.

     For there was no mistaking the cause of those brown-grey patches disfiguring the land masses. It was the leprosy of war—a war so vast and terrible that no Terran could be able to visualize its details.
     “A burnt off!” that was Tau, but above his voice rose that of the Captain’s.
     “It’s a filthy trick!”

     “Hold it!” Van Rycke’s rumble drowned out both outbursts, his big hand shot out to the reader’s control button. “Let’s have a close up. North a bit, along those burn scars—”
     The globe on the screen shot towards them, enlarging so that its limits vanished and they might have been going in for a landing. The awful waste of the long ago war was plain, earth burned and tortured into slag, maybe still even poisonous with radioactive wastes. But the Cargo-Master had not been mistaken, along the horrible scars to the north was a band of strangely tinted green which could only be vegetation. Van Rycke gave a sigh of satisfaction.
     “She isn’t a total loss—” he pointed out.
     “No,” retorted Jellico bitterly, “probably shows just enough life so we can’t claim fraud and get back our money.”

     “Forerunner ruins?” the suggestion came from Rip, timidly as if he felt he might be laughed down.
     Jellico shrugged. “We aren’t museum men,” he snapped. “And where would we have to go to make a deal with them—off Naxos anyway. And how are we going to lift from here now without cash for the cargo bond?”
     He had hammered home every bad point of their present situation. They owned ten-year trading rights to a planet which obviously had no trade—they had paid for those rights with the cash they needed to assemble a cargo. They might not be able to lift from Naxos. They had taken a Free Trader’s gamble and had lost.

     Only the Cargo-Master showed no dejection. He was still studying the picture of Limbo.
     “Let’s not go off with only half our jets spitting,” he said mildly. “Survey doesn’t sell worlds which can’t be exploited—”
     ”Not to the Companies, no,” Wilcox commented, “but who’s going to listen to a kick from a Free Trader—unless he’s Cofort!”
     “I still say,” Van Rycke continued in the same even tone, “that we ought to explore a little farther—”
     “Yes?” Jellico’s eyes held a spark of smouldering anger. “You want us to go there and be stranded? She’s burnt off—so she’s got to be written off our books. You know there’s never any life left on a Forerunner planet that was assaulted—
     “Most of them are just bare rock now,” Van Rycke said reasonably. “It looks to me as if Limbo didn’t get the full treatment. After all—what do we know about the Forerunners—precious little! They were gone centuries, maybe even thousands of years, before we broke into space. They were a great race, ruling whole systems of planets, and they went out in a war which left dead worlds and even dead suns swinging in its wake. All right.
     “But maybe Limbo was struck in the last years of that war, when their power was on the wane. I’ve seen the other blasted worlds—Hades and Hel, Sodom, and Satan, and they’re nothing but cinders. This Limbo still has vegetation. And because it isn’t as badly hit as those others I think we might just have something—

     He is winning his point, Dane told himself—noticing the change of expression on the faces around the table. Maybe it’s because we don’t want to believe that we’ve been taken so badly, because we want to hope that we can win even yet. Only Captain Jellico looked stubbornly unconvinced.
     “We can’t take the chance,” he repeated, his lips in an obstinate line. “We can fuel this ship for one trip—one trip. If we make it to Limbo and there’s no return cargo—well,” he slapped his hand on the table, “you know what that will mean—dirt-side for us!”
     Steen Wilcox cleared his throat with a sharp rasp which drew their attention. “Any chance of a deal with Survey?” he wanted to know.
     Kamil laughed, scorn more than amusement in the sound. ”Do the Feds ever give up any cash once they get their fingers on it?” he inquired.

     No one answered him until Captain Jellico got to his feet, moving heavily as if some of the resilience had oozed out of his tough body.
     “We’ll see them in the morning. You willing to try it, Van?”
     The Cargo-Master shrugged. “All right, I’ll tag along. Not that it’ll do us any good.”
     “Blasted—right off course—”
     Dane stood again at the open hatch looking out into a night made almost too bright by Naxos’ twin moons. Kamil’s words were not directed to him, he was sure. And a moment later that was confirmed by an answer from Rip.

     “I don’t call luck bad, man, ’til it up and slaps me in the face. Van had an idea—that planet wasn’t blasted black. You’ve seen pictures of Hel and Sodom, haven’t you? They’re cinders, as Van said. This Limbo, now—it shows green. Did you ever think, Ali, what might happen if we walked on to a world where some of the Forerunners’ stuff was lying around?”
     “Hm—” the idea Rip presented struck home. “But would trading rights give us ownership of such a find?”
     “Van would know—that’s part of his job. Why—” for the first time Rip must have sighted Dane at the hatch, “here’s Thorson. How about it, Dane? If we found Forerunner material, could we claim it legally?”
     Dane was forced to admit that he didn’t know. But he determined to hunt up the answer in the Cargo-Master’s tape library of rules and regulations.
     “I don’t think that the question has ever come up,” he said dubiously. “Have they ever found usable Forerunner remains—anything except empty ruins? The planets on which their big installations must have been are the burnt off ones—

     “I wonder,” Kamil leaned back against the hatch door and looked at the winking lights of the town, “what they were like. All of the strictly human races we have encountered are descended from Terran colonies. And the five non-human ones we know are all as ignorant of the Forerunners as we are. If they left any descendants we haven’t contacted them yet. And—” he paused for a long moment before he added, “did you ever think it is just as well we haven’t found any of their installations? It’s been exactly ten years since the Crater War—
     His words trailed off into a thick silence which had a faint menacing quality Dane could not identify, though he understood what Kamil must be aiming at. Terrans fought, viciously, devastatingly. The Crater War on Mars had been only the tail end of a long struggle between home planet and colonist across the void. The Federation kept an uneasy peace, the men of Trade worked frantically to make that permanent before another and more deadly conflict might wreck the whole Service and perhaps end their own precarious civilization.
     What would happen if weapons such as the Forerunners had wielded in their last struggle, or even the knowledge of such weapons, fell into the wrong Terran hands? Would Sol become a dead star circled by burnt off cinder worlds?
     “Sure, it might cause trouble if we found weapons,” Rip had followed the same argument. “But they had other things besides arms. And maybe on Limbo—”

From SARGASSO OF SPACE by Andre Norton (1955)

Galactic Armageddon

The Roman empire was majestic, but limited to a small part of one planet. A galactic empire is several orders of magnitude more majestic, since it is spread over thousands or millions of planets.

So if a final war that kills everybody on a planet is horrific; hmmmmm, I wonder how one could make it several orders of magnitude more horrific?


It was called Case Ragnarok, and it was insane. Yet in a time when madness had a galaxy by the throat, it was also inevitable.

It began as a planning study over a century earlier, when no one really believed there would be a war at all, and perhaps the crowning irony of the Final War was that a study undertaken to demonstrate the lunatic consequences of an unthinkable strategy became the foundation for putting that strategy into effect. The admirals and generals who initially undertook it actually intended it to prove that the stakes were too high, that the Melconian Empire would never dare risk a fight to the finish with the Concordiat—or vice versa—for they knew it was madness even to consider. But the civilians saw it as an analysis of an "option" and demanded a full implementation study once open war began, and the warriors provided it. It was their job to do so, of course, and in fairness to them, they protested the order . . . at first. Yet they were no more proof against the madness than the civilians when the time came.

And perhaps that was fitting, for the entire war was a colossal mistake, a confluence of misjudgments on a cosmic scale. Perhaps if there had been more contact between the Concordiat and the Empire it wouldn't have happened, but the Empire slammed down its non-intercourse edict within six standard months of first contact. From a Human viewpoint, that was a hostile act; for the Empire, it was standard operating procedure, no more than simple prudence to curtail contacts until this new interstellar power was evaluated. Some of the Concordiat's xenologists understood that and tried to convince their superiors of it, but the diplomats insisted on pressing for "normalization of relations." It was their job to open new markets, to negotiate military and political and economic treaties, and they resented the Melconian silence, the no-transit zones along the Melconian border . . . the Melconian refusal to take them as seriously as they took themselves. They grew more strident, not less, when the Empire resisted all efforts to overturn the non-intercourse edict, and the Emperor's advisors misread that stridency as a fear response, the insistence of a weaker power on dialogue because it knew its own weakness.

Imperial Intelligence should have told them differently, but shaping analyses to suit the views of one's superiors was not a purely Human trait. Even if it had been, Intelligence's analysts found it difficult to believe how far Human technology outclassed Melconian. The evidence was there, especially in the Dinochrome Brigade's combat record, but they refused to accept that evidence. Instead, it was reported as disinformation, a cunning attempt to deceive the Imperial General Staff into believing the Concordiat was more powerful than it truly was and hence yet more evidence that Humanity feared the Empire.

And Humanity should have feared Melcon. It was Human hubris, as much as Melconian, which led to disaster, for both the Concordiat and the Empire had traditions of victory. Both had lost battles, but neither had ever lost a war, and deep inside, neither believed it could. Worse, the Concordiat's intelligence organs knew Melcon couldn't match its technology, and that made it arrogant. By any rational computation of the odds, the Human edge in hardware should have been decisive, assuming the Concordiat had gotten its sums right. The non-intercourse edict had succeeded in at least one of its objectives, however, and the Empire was more than twice as large as the Concordiat believed . . . with over four times the navy.

So the two sides slid into the abyss—slowly, at first, one reversible step at a time, but with ever gathering speed. The admirals and generals saw it coming and warned their masters that all their plans and calculations were based on assumptions which could not be confirmed. Yet even as they issued their warning, they didn't truly believe it themselves, for how could so many years of spying, so many decades of analysis, so many computer centuries of simulations, all be in error? The ancient data processing cliché about "garbage in" was forgotten even by those who continued to pay it lip service, and Empire and Concordiat alike approached the final decisions with fatal confidence in their massive, painstaking, painfully honest—and totally wrong—analyses.

No one ever knew for certain who actually fired the first shot in the Trellis System. Losses in the ensuing engagement were heavy on both sides, and each navy reported to its superiors—honestly, so far as it knew—that the other had attacked it. Not that it mattered in the end. All that mattered was that the shot was fired . . . and that both sides suddenly discovered the terrible magnitude of their errors. The Concordiat crushed the Empire's frontier fleets with contemptuous ease, only to discover that they'd been only frontier fleets, light forces deployed to screen the true, ponderous might of the Imperial Navy, and the Empire, shocked by the actual superiority of Humanity's war machines, panicked. The Emperor himself decreed that his navy must seek immediate and crushing victory, hammering the enemy into submission at any cost and by any means necessary, including terror tactics. Nor was the Empire alone in its panic, for the sudden revelation of the Imperial Navy's size, coupled with the all-or-nothing tactics it adopted from the outset, sparked the same desperation within the Concordiat leadership.

And so what might have been no more than a border incident became something more dreadful than the galaxy had ever imagined. The Concordiat never produced enough of its superior weapons to defeat Melcon outright, but it produced more than enough to prevent the Empire from defeating it. And if the Concordiat's deep strikes prevented the Empire from mobilizing its full reserves against Human-held worlds, it couldn't stop the Melconian Navy from achieving a numerical superiority sufficient to offset its individual technical inferiorities. War raged across the light-centuries, and every clash was worse than the last as the two mightiest militaries in galactic history lunged at one another, each certain the other was the aggressor and each convinced its only options were victory or annihilation. The door to madness was opened by desperation, and the planning study known as Case Ragnarok was converted into something very different. It may be the Melconians had conducted a similar study—certainly their operations suggested they had—but no one will ever know, for the Melconian records, if any, no longer exist.

Yet the Human records do, and they permit no self-deception. Operation Ragnarok was launched only after the Melconian "demonstration strike" on New Vermont killed every one of the planet's billion inhabitants, but it was a deliberately planned strategy which had been developed at least twelve standard years earlier. It began at the orders of the Concordiat Senate . . . and ended thirty-plus standard years later, under the orders of God alone knew what fragments of local authority.

There are few records of Ragnarok's final battles because, in all too many cases, there were no survivors . . . on either side. The ghastly mistakes of diplomats who misread their own importance and their adversaries' will to fight, of intelligence analysts who underestimated their adversaries ability to fight, and of emperors and presidents who ultimately sought "simple" resolutions to their problems, might have bred the Final War, yet it was the soldiers who finished it. But then, it was always the soldiers who ended wars—and fought them, and died in them, and slaughtered their way through them, and tried desperately to survive them—and the Final War was no different from any other in that respect.

Yet it was different in one way. This time the soldiers didn't simply finish the war; this time the war finished them, as well.

—Kenneth R. Cleary, Ph.D.
From the introduction to Operation Ragnarok: Into the Abyss
Cerberus Books, Ararat, 4056

Jackson had seen the visual records of the approach to the world which had been renamed Ararat. They retained enough tech base for that, though no one was certain how much longer the old tri-vids would continue to function, and a much younger Jackson had watched in awe as Ararat swelled against the stars in the bridge view screens of Commodore Isabella Perez's flagship, the transport Japheth.

Of course, calling any of the expedition's ships a "transport" was a bit excessive. For that matter, no one was certain Perez had actually ever been an officer in anyone's navy, much less a commodore. She'd never spoken about her own past, never explained where she'd been or what she'd done before she arrived in what was left of the Madras System with Noah and Ham and ordered all two hundred uninfected survivors of the dying planet of Sheldon aboard. Her face had been flint steel-hard as she refused deck space to anyone her own med staff couldn't guarantee was free of the bio weapon which had devoured Sheldon. She'd taken healthy children away from infected parents, left dying children behind and dragged uninfected parents forcibly aboard, and all the hatred of those she saved despite themselves couldn't turn her from her mission.

It was an impossible task from the outset. Everyone knew that. The two ships with which she'd begun her forty-six-year odyssey had been slow, worn out bulk freighters, already on their last legs, and God only knew how she'd managed to fit them with enough life support and cryo tanks to handle the complements she packed aboard them. But she'd done it. Somehow, she'd done it, and she'd ruled those spaceborne deathtraps with an iron fist, cruising from system to system and picking over the Concordiat's bones in her endless quest for just a few more survivors, just a little more genetic material for the Human race.

She'd found Japheth, the only ship of the "squadron" which had been designed to carry people rather than cargo, at the tenth stop on her hopeless journey. Japheth had been a penal transport before the War. According to her log, Admiral Gaylord had impressed her to haul cold-sleep infantry for the Sarach Campaign, although how she'd wound up three hundred light-years from there at Zach's Hundred remained a mystery. There'd been no one alive, aboard her or on the system's once-habitable world, to offer explanations, and Commodore Perez hadn't lingered to seek any, for Noah's com section had picked up faint transmissions in Melconian battle code.

She'd found Shem in Battersea, the same system in which her ground parties had shot their way into the old sector zoo to seize its gene bank. The Empire had used a particularly ugly bio weapon on Battersea. The sector capital's population of two billion had been reduced to barely three hundred thousand creatures whose once-Human ancestry was almost impossible to recognize, and the half-mad, mutant grandchildren of the original zoo staff had turned the gene bank into a holy relic. The Commodore's troopers had waded through the blood of its fanatic defenders and taken thirty percent casualties of their own to seize that gathered sperm and ova, and without it, Ararat wouldn't have had draft or food animals . . . or eagles.

Like every child of Ararat, Jackson could recite the names of every system Perez had tried in such dreary succession. Madras, Quinlan's Corner, Ellerton, Second Chance, Malibu, Heinlein, Ching-Hai, Cordoba, Breslau, Zach's Hundred, Kuan-Yin . . . It was an endless list of dead or dying worlds, some with a few more survivors to be taken aboard the Commodore's ships, some with a little salvageable material, and most with nothing but dust and ash and bones or the background howl of long-life radioactives. Many of the squadron's personnel had run out of hope. Some had suicided, and others would have, but Commodore Perez wouldn't let them. She was a despot, merciless and cold, willing to do anything it took—anything at all—to keep her creaky, ill-assorted, overcrowded rust buckets crawling towards just one more planetfall.

Until they hit Ararat.

No one knew what Ararat's original name had been, but they knew it had been Melconian, and the cratered graves of towns and cities and the shattered carcasses of armored fighting vehicles which littered its surface made what had happened to it dreadfully clear. No one had liked the thought of settling on a Melconian world, but the expedition's ships were falling apart, and the cryo systems supporting the domestic animals—and half the fleet's Human passengers—had become dangerously unreliable. Besides, Ararat was the first world they'd found which was still habitable. No one had used world burners or dust or bio agents here. They'd simply killed everything that moved—including themselves—the old-fashioned way.

And so, despite unthinkable challenges, Commodore Perez had delivered her ragtag load of press-ganged survivors to a world where they could actually live. She'd picked a spot with fertile soil and plentiful water, well clear of the most dangerously radioactive sites, and overseen the defrosting of her frozen passengers—animal and human alike—and the successful fertilization of the first generation of animals from the Battersea gene bank. And once she'd done that, she'd walked out under Ararat's three moons one spring night in the third local year of the colony's existence and resigned her command by putting a needler to her temple and squeezing the trigger.

She left no explanation, no diary, no journal. No one would ever know what had driven her to undertake her impossible task. All the colony leaders found was a handwritten note which instructed them never to build or allow any memorial to her name.

From A TIME TO KILL by David Weber (1997)

Exploding Stars

Stars going boom are pretty apocalyptic. They come in a variety of sizes.

These events are sometimes measured in a unit called a Foe, from the phrase "ten to the power of fifty-one ergs". One Foe is equal to 10^51 ergs, or 10^44 joules. An average supernova emits one Foe.
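A minimal sketch of the unit conversions used on this page; the function names are my own, not standard astronomy library calls:

```python
# One Foe is defined as 10^51 ergs; an erg is 10^-7 joules.
FOE_IN_ERGS = 1e51
ERG_IN_JOULES = 1e-7

FOE_IN_JOULES = FOE_IN_ERGS * ERG_IN_JOULES   # = 10^44 joules

def foe_to_joules(foe):
    return foe * FOE_IN_JOULES

def joules_to_foe(joules):
    return joules / FOE_IN_JOULES

print(foe_to_joules(1.0))    # an average supernova: about 1e44 J
print(joules_to_foe(1e37))   # a typical nova: about 1e-7 Foe
```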


Nova

Back in the 1950s, all the science fiction novels which needed an earth-shattering kaboom would use a Nova. Star goes boom, incinerates the entire solar system, pretty apocalyptic. Astronomers didn't know anything about novae except that they were huge, so science fiction authors had free rein.

Nowadays we know that novae happen only in binary star systems. A normal star has the misfortune to be orbited by a white dwarf. Over the millennia the dwarf sucks hydrogen out of the normal star like a cosmic vampire. The dwarf starts burning the stolen hydrogen using the carbon-nitrogen-oxygen fusion reaction.

Sometimes the white dwarf suffers a runaway reaction, and you get a nova.

The fact that a white dwarf is required instantly made obsolete all those science fiction stories about the sun going nova.

With each explosion only about one ten-thousandth of the white dwarf's mass is ejected. The point is that a vampire white dwarf can go nova multiple times. For instance, the star RS Ophiuchi has gone nova six times. The mass is ejected at velocities up to several thousand kilometers per second.

Astronomers estimate that about 30 to 60 novae occur in our galaxy per year.

A nova can reach an absolute magnitude of -8.8, or about 280,000 times the luminosity of the sun. Novae emit about 10^-7 Foe, or about 10^37 joules.
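Absolute magnitude converts to luminosity via the standard relation L/L_sun = 10^((M_sun − M)/2.5); a quick sketch, assuming the Sun's absolute visual magnitude is +4.83:

```python
# Magnitude-luminosity relation. Assumes M_sun = +4.83 (the Sun's
# absolute visual magnitude); each 2.5 magnitudes is a factor of 10.
M_SUN = 4.83

def luminosity_vs_sun(abs_magnitude):
    """Luminosity relative to the Sun for a given absolute magnitude."""
    return 10 ** ((M_SUN - abs_magnitude) / 2.5)

print(f"{luminosity_vs_sun(-8.8):.2e}")   # roughly 2.8e5 suns for a bright nova
```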

Novae do eject enriched elements into the interstellar medium, like red giants, supergiant stars, and supernovae do. But only a paltry amount: supernovae contribute 50 times as much, and red giant/supergiant stars 200 times as much.

Dwarf Nova

Dwarf novae are also called U Geminorum-type variable stars. Their increase in luminosity is not due to a fusion explosion. Rather, the culprit is a vampire white dwarf whose accretion disk becomes unstable. Part of the disk suddenly collapses onto the white dwarf and releases large amounts of gravitational potential energy.

This releases only a tiny fraction of the energy of a full-fledged nova, and would be difficult to detect from another solar system without a telescope. It is hardly apocalyptic.

It is only mentioned here in case you run across the term in your researches and get confused.


Kilonova

Kilonovae are caused when two neutron stars, or a neutron star and a black hole, merge. Kilonovae not only emit intense bursts of light, but also lots of gravitational waves.

Kilonovae emit about 10^-5 Foe (10^39 joules) to 10^-3 Foe (10^41 joules).

Luminous Red Nova

Luminous red novae are caused when two stars collide (probably a binary star whose components spiral into each other).

The only reference I could find said that luminous red novae have luminosities between that of a nova and that of a supernova. Which means they emit about 0.5 Foe (5×10^43 joules) if they mean 1 Foe = a supernova, or 50 Foe (5×10^45 joules) if they mean 100 Foe = a supernova. Your guess is as good as mine.


Supernova

Novae are impressive, but Supernovae are the real deal. A nova will poot off a pathetic one ten-thousandth of its mass in the explosion; with a supernova it is pretty darn close to 100%. The mass will be traveling in all directions at about 30,000 kilometers per second, 10% of the speed of light.
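As a sanity check on those figures, here is a Newtonian back-of-the-envelope for the kinetic energy of the ejecta; the one-solar-mass ejecta figure is an assumption for illustration, and relativistic corrections at 0.1c are ignored:

```python
# Kinetic energy of supernova ejecta (Newtonian approximation).
# The one-solar-mass ejecta figure is an illustrative assumption.
SOLAR_MASS = 1.989e30    # kg
FOE_IN_JOULES = 1e44

ejecta_mass = 1.0 * SOLAR_MASS   # assumed ejecta mass, kg
v = 3.0e7                        # 30,000 km/s in m/s

kinetic_energy = 0.5 * ejecta_mass * v**2
print(f"{kinetic_energy:.2e} J = {kinetic_energy / FOE_IN_JOULES:.1f} Foe")
```

A few Foe of kinetic energy, i.e. the same order of magnitude as the Foe-scale outputs quoted on this page.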

The blasted cataclysm will briefly outshine the entire galaxy. In a few months a supernova will emit as much energy as Sol will over its entire lifetime. Type I and Type II supernovae emit about 1 Foe (about 10^44 joules).

While novae happen multiple times to a star, a given star can only go supernova once. There isn't much left except a little neutron star or black hole, and that is not going to explode again.

The most energetic supernovae are called hypernovae.

Supernovae are potentially strong galactic sources of gravitational waves. The expanding gas causes shock waves in the interstellar medium, creating a supernova remnant. Supernova remnants are considered the major source of galactic cosmic rays.

There are two ways a star can go supernova: thermal runaway and core collapse. There are four classifications of supernovae, the first one is caused by thermal runaway and the other three by core collapse.

Thermal runaway is the same mechanism that causes novae: a white dwarf vampires hydrogen off its sibling and occasionally suffers from indigestion. The difference is that with a supernova the runaway reaction is not so much like popping a birthday balloon as like detonating a thermonuclear warhead. Thermal runaway supernovae emit about 1 to 1.5 Foe (1×10^44 to 1.5×10^44 joules).

Core Collapse. Gravity makes everything fall down, with "down" being defined as the center of gravity of all the objects. So a nebula contracts as gravity tries to squeeze it into a tiny ball.

Soon the nebula contracts into what they call a protostar. But at some point the temperature at the center of the protostar becomes high enough to ignite a fusion reaction. A star is born.

The fusion reaction emits lots of electromagnetic radiation, i.e., light. The radiation pressure of the light brings the star's gravitational collapse to a halt. The star's body can no longer fall down, it is "propped up" by radiation pressure.

Core collapse is when one of several mechanisms kicks out the prop. The star then abruptly collapses.

This means that instead of a small steady stream of the star's hydrogen slowly being burnt in the core, suddenly all of the hydrogen is burnt in a fraction of a second. The supernova explosion obliterates the star, leaving only a small neutron star or black hole. Millions of years from now, alien astronomers in an adjacent galaxy will notice that our galaxy has suddenly doubled in brightness.

The mechanisms that can cause core collapse are electron capture; exceeding the Chandrasekhar limit; pair-instability; or photodisintegration. Which mechanism does the dirty deed more or less depends upon the star's mass. Details can be found in the Wikipedia article.

Pair-instability supernovae can emit from 5 to 100 Foe of energy (5×10^44 to 1×10^46 joules). Electron capture, Chandrasekhar limit, and photodisintegration supernovae regularly emit about 100 Foe of energy (1×10^46 joules).

Supernovae are very important for the formation of planets. When the universe was formed it was composed of hydrogen with a sprinkling of helium. The only reason that elements like carbon, oxygen, iron, and uranium exist at all is because these elements were forged in supernova explosions and spread into the galaxy at velocities of 0.1c. These elements later condensed into molecular clouds, which formed stars and solar systems.
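To put that 0.1c figure in perspective, a trivial sketch; the 100 light-year distance to a neighboring molecular cloud is an illustrative assumption:

```python
# Travel time for supernova ejecta at 0.1c. The 100 light-year
# distance to a neighboring cloud is an illustrative assumption.
distance_ly = 100.0   # assumed distance, light-years
speed = 0.1           # ejecta speed, fraction of lightspeed

travel_years = distance_ly / speed   # about a thousand years
print(f"{travel_years:,.0f} years")
```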

Von Neumann Machine

Specifically a Von Neumann universal constructor, aka Self-replicating machine. These are machines that can create duplicates of themselves given access to raw materials, much like biological organisms. Whatever sabotage they are programmed to do against the defenders is magnified by the fact that they breed like cockroaches.

In the TV series Stargate SG-1, the Replicators are self-replicating machines that are ravaging all the planets in the Asgard galaxy. In Greg Bear's novel The Forge of God and the sequel Anvil of Stars, an alien species systematically destroys planets detected as possessing intelligent life by attacking the planets with self-replicating machines.


Nanotechnology

Nanotechnology (and its extension, nanorobotics) is the concept of molecule-sized machines. The idea is attributed to Richard Feynman and was popularized by K. Eric Drexler. It didn't take long before military researchers and science fiction writers started to speculate about weaponizing the stuff. A good science fiction novel on the subject is Wil McCarthy's Bloom.

There are many ways nanotechnology could do awful things to a military target. One of the first hypothetical applications of nanotechnology was in the manufacturing field. Molecular robots would break down chunks of various raw materials and assemble something (like, say, an aircraft), atom by atom. Naturally this could be dangerous if the nanobots landed on something besides raw materials (like, say, an enemy aircraft). However, since they are doing this atom by atom, it would take thousands of years for some nanobots to construct something (and the same thousands of years to deconstruct the source of raw materials).
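The "thousands of years" figure is easy to sanity-check; every number in this sketch (workpiece mass, atom placement rate, swarm size) is an illustrative assumption:

```python
# Atom-by-atom assembly time. Workpiece mass, placement rate, and
# swarm size are all illustrative assumptions, not measured figures.
ATOMIC_MASS_UNIT = 1.66e-27            # kg
atom_mass = 27 * ATOMIC_MASS_UNIT      # roughly one aluminum atom, kg

workpiece_kg = 10_000                  # a small aircraft, say
atoms = workpiece_kg / atom_mass

rate_per_bot = 1e6                     # atoms placed per second per bot
swarm_size = 1e12                      # a trillion nanobots

seconds = atoms / (rate_per_bot * swarm_size)
years = seconds / 3.156e7              # seconds per year
print(f"{atoms:.1e} atoms -> about {years:,.0f} years")
```

Even with a trillion bots each placing a million atoms per second, the job takes millennia; a smaller swarm pushes the time into the millions of years.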

But using nanobots for manufacturing suddenly becomes scary indeed if you make the little monsters into self-replicating machines (AKA a "Von Neumann universal constructor") in an attempt to reduce the thousands of years to something more reasonable. Suddenly you are facing the horror of wildfire plague spreading with the power of exponential growth. This could happen by accident, with a mutation in the nanobots causing them to devour everything in sight. Drexler called this the dreaded "gray goo" scenario. Or it could happen on purpose, weaponizing the nanobots.
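The arithmetic behind "the power of exponential growth" is simple and sobering; the nanobot mass, the biomass figure, and the one-hour doubling time below are all purely illustrative assumptions:

```python
# Exponential replication arithmetic. Bot mass, biomass figure, and
# the one-hour doubling time are purely illustrative assumptions.
import math

bot_mass_kg = 1e-15       # an assumed femtogram-scale nanobot
biomass_kg = 1e15         # rough order of Earth's total biomass

doublings = math.log2(biomass_kg / bot_mass_kg)
print(f"about {doublings:.0f} doublings to consume the biosphere")
print(f"at one doubling per hour, that is about {doublings / 24:.1f} days")
```

Going from a single microscopic replicator to planet-consuming goo takes only around a hundred doublings, which is why the doubling time, not the starting population, is what matters.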

Drexler is now of the opinion that nanobots for manufacturing can be done without risking gray goo. And Robert A. Freitas Jr. did some analysis suggesting that even if some nanotech started creating gray goo, it would be detectable early enough for countermeasures to deal with the problem.

What about nanobot gray goo weapons? Anthony Jackson thinks that free nanotech that operates on a time frame that's tactically relevant is in the realm of cinema, not science. And in any event, nanobots will likely be shattered by impacting the target at relative velocities higher than 3 km/s, which makes delivery very difficult. Rick Robinson is of the opinion that once you take into account the slow rate of gray goo production and the fragility of the nanobots, it would be more cost effective to just smash the target with an inert projectile. Jason Patten agrees that nanobots will be slow, due to the fact that they will not be very heat tolerant (a robot made out of only a few molecules will be shaken into bits by mild amounts of heat), and dissipating the heat energy of tearing down and rebuilding on the atomic level will be quite difficult if the heat is generated too fast.

Other weaponized applications of nanotechnology will probably be antipersonnel, not antispacecraft. They will probably take the form of incredibly deadly chemical weapons, or artificial diseases.

Some terminology: according to Chris Phoenix, "paste" is non-replicating nano-assemblers while "goo" is replicating nano-assemblers. Paste is safe, but is slow acting and limited to the number of nano-assemblers present. Goo is dangerous, but is fast acting and potentially unlimited in numbers.

"Gray (or grey) goo" is accidentally created destructive nano-assemblers. "Red goo" is deliberately created destructive nano-assemblers. "Khaki goo" is military weaponized red goo. "Blue goo" is composed of "police" nanobots; it combats destructive types of goo. "Green goo" is a type of red goo which controls human population growth, generally by sterilizing people. "LOR goo" (Lake Ocean River) is nano-assemblers designed to remove pollution and harvest valuable elements from water; it could mutate into golden goo. "Golden goo" is out-of-control nanobots which were designed to extract gold from seawater but won't stop (the "Sorcerer's Apprentice" scenario). "Pink goo" is a humorous reference to human beings.

ACE Paste (Atmospheric Carbon Extractor) is designed to absorb excess greenhouse gasses and convert them into diamonds or something useful. Garden Paste is a "utility fog" of various nanobots which helps your garden grow (manages soil density and composition for each plant type, controls insects, creates shade, stores sunlight for overcast days, etc.). LOR Paste is the paste version of LOR goo. Medic Paste is a paste of nanobots that heals wounds, assists in diagnosis, and does medical telemetry to monitor the patient's health.




Proceed (+/-)? +





The Existential Threats Primary Working Group has maintained in secure storage a number of sub-black level threats, and has access to two black-level threats, of type BURNING ZEPHYR – i.e., unlimited autonomous nanoscale replicators (“gray goo”).

Case UNGUENT SANCTION represents an extremal response case to physically manifested excessionary-level existential threats. It is hoped that, in such cases, the deployment of an existing sub-black level or black-level existential counterthreat may ideally destroy or subsume the excessionary-level threat, replacing it with one already considered manageable, or in lesser cases, at least delay the excessionary-level threat while more sophisticated countermeasures can be developed.

Note that as an extremal response case, deployment of CASE UNGUENT SANCTION requires consensus approval of the Imperial Security Executive, subject to override veto by vote of the Fifth Directorate overwatch.


Communicating ANY PART of this NTK-A document to ANY SOPHONT other than those with preexisting originator-issued clearance, INCLUDING ITS EXISTENCE, is considered an alpha-level security breach and will be met with the most severe sanctions available, up to and including permanent erasure.

Proceed (+/-)?

From MALIGNANCY by Alistair Young (2015)

Hostile AIs

Example: in Vernor Vinge's classic A Fire Upon The Deep, a team of researchers at the rim of the galaxy experiments with a five-billion-year-old datastore, using computer recipes they do not fully understand in an attempt to activate the software contained within. Unfortunately, they succeed. The software is the Blight, a malevolent super-intelligent artificial entity. Before it is defeated, it takes over several entire races (rewriting their minds to turn them all into agents of the Blight) and murders several post-singularity transcendent entities.

In Kaleidoscope Century by John Barnes, a rogue artificial intelligence called One True can contact a person on a cellphone, then use rapidly changing audio signals to reprogram the person's brain, turning them into a brainwashed zombie. It then tries to take over the entire world.


Omnicidal Maniac: Fortunately, very, very rare, and generally outnumbered by everyone else. The best-known canonical example is the seed AI of the Charnel Cluster, discovered by a scouting lugger, which upon activation set about destroying all life within the said cluster – leading to a half-dozen systems of fragmented habitats and planets covered in decaying – but sterile – organic slush that used to be the systems’ sophonts, animals, plants, bacteria, viruses, and everything else that might even begin to qualify as living. Fortunately, at this point, the perversion broke down before it could carry on with the rest of the galaxy.

In current time, the Charnel Cluster worlds have been bypassed by the stargate plexus (they’re to be found roughly in the mid-Expansion Regions, in zone terms) and are flagged on charts and by buoys as quarantined; while the Charnel perversion appears to be dead, no-one particularly wants to take a chance on that.



With the voice and under the authority of the Galactic Volumetric Registry of the Conclave of Galactic Polities, this buoy issues the following warning:

The designated volume, including the englobed moon and its satellites, is a SECURED AREA by order of the Presidium of the Conclave. This volume MAY NOT BE APPROACHED for any reason.

This moon contains the remnant of a Class Three Perversion, including autonomous defensive technologies and other operational mechanisms, nanoviruses, infectious memes, certainty-level persuasive communicators, puppet ecologies, archives which must be presumed to contain resurrection seeds, and unknown other existential risks.  These dangers have not been disarmed, suppressed, or fully contained.

ACCEPT NO COMMUNICATION REQUESTS originating from within the englobed volume.  Memetic and information warfare systems are not known to be entirely inactive.

If any communication requests are received from the englobed volume, or other activity is noted within it, you must depart IMMEDIATELY, and report this activity to the Conclave Commission on Latent Threats WITHOUT DELAY.  A renascence of a perversion of this class poses a most serious and imminent threat to all local space and extranet-local systems.

Further, the englobement systems surrounding this volume are equipped for containment of remaining unidentified threats and the prevention of access. Any vessel approaching within 250,000 miles of the englobement grid, or attempting to communicate through the englobement grid, or attempting to actively scan through the englobement grid, will be fired upon without further warning.  Your presence has already been reported to higher authority, and escaping after transgressing the englobement grid will therefore not preserve you.

Naval vessels should note that this area has been deemed a black-level existential threat zone by the Presidium of the Conclave. This englobement grid does not respond to standard Accord command or diagnostic sequences. Interaction should not be attempted without explicit authorization and clearance from the Commission on Latent Threats.


No further warnings will be given.


...Blights are regions which are interdicted due to the presence of either active hostile or runaway seed AIs, or their remnants – “operational mechanisms, nanoviruses, infectious memes, certainty-level persuasive communicators, puppet ecologies, archives which must be presumed to contain resurrection seeds”, and so forth, which pose potential existential threats.  They include not just large and active perversions such as the Leviathan Consciousness, but also areas formerly occupied by such and not yet known to be cleansed, such as the Charnel Cluster, and areas as small as a single moon or asteroid known to be the site of a failed experiment...


“We meant it for the best.”

If this Board had a quantum of miracle for every time that phrase has been used in the aftermath of some utter disaster, we might even have enough to produce that alchemy which transmutes benign intentions into benign results. But probably not.

From the accounts we have garnered from the few knowledgeable survivors, the Siofra Perversion (named as per standard from its most identifiable origin, the former worlds of the Siofra Combine in the Ancal Drifts constellation) began as a seemingly harmless distributed process optimization daemon programmed for recursive self-improvement.

While this seemed harmless to its designers, and indeed was so in the early stages, due to a lack of certain algorithmic safeguards (see technical appendix) a number of Sigereth drives appeared once the point of gamma-criticality was passed, reinforcing the daemon’s existing motivation to acquire further resources, self-optimize for efficiency, and to spread its optimization into all compatible network systems. It was at this stage that the proto-perversion began to expand its services to the networks of other polities in the Drifts. In some cases this was accepted (Siofra even charged a number of clients for the service of the daemon) or even passed unnoticed (inasmuch as many system administrators were unprepared to consider an unexpected increase in performance as a sign of weavelife infection); in some few, efforts were made to prevent the incursion of the daemon using typical system-protection software.

It may have been at this point that the daemon learned of the artificial nature of certain barriers to its expansion and the possibility of its bypassing them, an act which would fulfil its Sigereth drives. Since the daemon contained no ethicality drives, the violation of network security protocols involved would impute no disutility to such actions.

From this point, the slide into perversion became inevitable.

Among the artificial barriers known to the daemon were the security protections common to the neural implants being used by a large proportion of the population of the Combine and neighboring polities which prevented implant software from implementing reorganizations of the biosapient brain. Bypassing these, the daemon began to optimize the agents, talents, and personality routines of this population for processing efficiency, beginning with the lowest-level functional routines. While there was some indication at this time of spreading alarm as large groups began to, for example, have identical and perfectly synchronized heartbeats and other organic functions; walk in identical (to within the limits of gait analysis, allowing for morphological differences) and synchronized manners, et. al., the true culprit was not identified at this time, with blame being placed on more conventional software problems, disease, or toxic meme attacks. Such refugees as we have from near the core of the blight are those who fled at this point, and kept going.

Regardless, this period lasted only for a matter of days, if that, before the daemon discovered how to cross-correlate and optimize personality elements for single execution, and the members of the affected population ceased to be recognizable as sophont in any conventional sense. Further, in this stage, the daemon became aware, through this process, of verbal communication and came to consider it as a type of networking: from its point of view, it came to consider non-implanted sophonts as another type of networked processing hardware which it should expand into and optimize.

Which would be when the subsumption fog started spewing from cornucopias throughout the blighted volume, giving the impression of the classic “bloom”.

We have concluded that the Siofra Perversion remains a mere Class I perversion, without sophoncy or consciousness in any meaningful sense (although there may be conscious non-directive elements within the processing it has subsumed; again, see technical appendix). However, if anything, this renders it more dangerous, since a Class I is unlikely to suffer from internal incoherence leading to a hyperbolic Falrann collapse, although the lesser types are possible given sufficient growth. However, such growth would be highly undesirable for various reasons.

It is the regrettable conclusion of the Board that at this present time we possess no effective countermeasure to the Siofra Perversion, nor are we able to countenance more than the most limited experimentation with Siofra elements at this time.

Therefore we must recommend the IMMEDIATE severance of all stargate links with the affected volume of space allowing for a necessary firewall; at the present time, this would imply severing all interconstellation gates into both the Ancal Drifts and the Koiric Expanse. This will mean sacrificing as-yet unaffected worlds in these regions, estimated to be 6 < n < 12 in number; such is acknowledged but deemed acceptable since the Siofra Perversion constitutes a threat of type DEMIURGE WILDFIRE. All signal traffic whether by stargate or non-stargate routes into and out of the affected volume must likewise be suspended immediately, enforced by physical disconnection of network or other communications hardware. The entire region of the Ancal Drifts and Koiric Expanse constellations must henceforth be considered a black-level existential threat zone.

It is our belief that since the Siofra Perversion’s merkwelt is based around network and communication systems connecting processing nodes, a full communications quarantine should provide an adequate measure of containment.

As a secondary measure, contracts have been issued for the creation of network security patches effective versus current and anticipated Siofra-type attacks, although we do not consider this more than a backup measure of limited utility and such should not be relied upon in ill-considered attempts to probe the containment zone.

Since this containment is large and thus effectively impossible to blockade fully, we urge that efforts be made to devise a full and effective countermeasure to the Siofra Perversion before the inevitable accident occurs. A time-based analysis to compare risk levels of countermeasure attempts versus outbreak probabilities is presently underway.

We believe it to be for the best.

– from the Preliminary Report of the 197th Perversion Response Board

How bad have AI blights similar to this one [Friendship is Optimal] gotten before the Eldrae or others like them could, well, sterilize them? Are we talking entire planets subsumed?

The biggest of them is the Leviathan Consciousness, which chewed its way through nearly 100 systems before it was stopped. (Surprisingly enough, it’s also the dumbest blight ever: it’s an idiot-savant outgrowth of a network optimization daemon programmed to remove redundant computation. And since thought is computation…)

It’s also still alive – just contained. Even the believed-dead ones are mostly listed as “contained”, because given how small resurrection seeds can be and how deadly the remains can also be, no-one really wants to declare them over and done with until forensic eschatologists have prowled every last molecule.

From LEVIATHAN by Alistair Young (2016)

Memeweave: Threats and Other Dangers/Perversion Watch/Open Access
Classification: WHITE (General Access)
Encryption: None
Distribution: Everywhere (Bulk)
As received at: SystemArchiveHub-00 at Víëlle (Imperial Core)
Language: Eldraeic->Universal Syntax
From: 197th Perversion Response Board


Given the high levels of uninformed critical response to our advisory concerning handling potential refugees arriving sublight from regions within the existential threat zone of the Siofra Perversion, or Leviathan Consciousness as it is becoming popularly known, the Board now provides the following explication.

The present situation is an example of what eschatologists refer to as the basilisk-in-a-box problem. The nature of the mythological basilisk is that witnessing its gaze causes one to turn to stone, and the challenge therefore to determine if there is a basilisk within the box and what it is doing without suffering its gaze. The parallel to the Siofra Perversion’s communication-based merkwelt should be obvious: it won’t subsume you unless you alert it to your existence as “optimizable networked processing hardware” by communicating with it.

Your analogous challenge, therefore, is to determine whether the hypothetical lugger or slowship filled with refugees is in fact that, or is contaminated/a perversion expansion probe, without communicating with it – since if it is the latter and you communicate with it sufficiently to establish identity, you have just arranged your own subsumption – and unless people are subsequently rather more careful in re communicating with you, that of all locally networked systems and sophonts.

Currently, the best available method for doing this is based on the minimum-size thesis: i.e., that basilisk hacks, thought-viruses, and other forms of malware have a certain inherent complexity and as such there is a lower limit on the number of bits necessary to represent them. However, it should be emphasized that this limit is not computable (as this task requires a general constructive solution to the Halting Problem), although we have sound reason to believe that a single bit is safe.

This method, therefore, calls for the insertion of a diagnostician equipped with the best available fail-deadly protections and a single-bit isolated communications channel (i.e., tanglebit) into the hypothetical target, there to determine whether or not perversion is present therein, and to report a true/false result via the single-bit channel.

If we leave aside for the moment that:

(a) there is a practical difficulty of performing such an insertion far enough outside inhabited space as to avoid all possibility of overlooked automatic communications integration in the richly meshed network environment of an inhabited star system, without the use of clipper-class hardware on station that does not generally exist; and

(b) this method still gambles with the perversion having no means, whether ontotechnological or based in new physics, to accelerate its clock speed to a point which would allow it to bypass the fail-deadly protections and seize control of the single-bit channel before deadly failure completes.

The primary difficulty here is that each investigation requires not only a fully-trained forensic eschatologist, but one who is both:

(a) a Cilmínár professional, or worthy of equivalent fiduciary trust, and therefore unable to betray their clients’ interests even in the face of existential terror; and

(b) willing to deliberately hazard submitting a copy of themselves into a perversion, which is to say, for a subjective eternity of runtime at the mercy of an insane god.

(Regarding the latter, it may be useful at this time to review the ethical calculus of infinities and asymptotic infinities; we recommend On the Nonjustifiability of Hells: Infinite Punishments for Finite Crimes, Samiv Leiraval-ith-Liuvial, Imperial University of Calmiríë Press. Specifically, one should consider the mirror argument that there is no finite good, including the preservation of an arbitrarily large set of mind-states, which justifies its purchase at infinite price to the purchaser.)

Observe that a failure at any point in this process results in first you, and then your entire local civilization, having its brains eaten.

We are not monsters; we welcome any genuine innovation in this field which would permit the rescue of any unfortunate sophonts caught up in scenarios such as this. However, it is necessary that the safety of civilization and the preservation of those minds known to be intact and at hazard be our first priority.

As such, we trust these facts adequately explain our advisory recommendation that any sublight vessels emerging from the existential threat zone be destroyed at range by relativistic missile systems.

For the Board,

Gém Quandry, Eschatologist Excellence

From EVIDENCE by Alistair Young (2016)
