Drake Equation

Back in 1961, a scientific conference on the search for extraterrestrial intelligence was held at the Green Bank facility. There the host, Dr. Frank Drake, presented his now-famous "Drake Equation". The equation calculates N, the number of civilizations in our galaxy that it would be possible to communicate with by radio. After all, the equation was invented for a conference about communicating with aliens by radio.

It is a pity that we have not got a clue about the values of the last four parameters.

This means that the equation is pretty worthless for calculating the actual number of radio-using aliens out there. But it can be useful to study how proposed values for the parameters will affect N.

Note that N is the number of radio-using alien civilizations. Science fiction authors have been using the Drake Equation to calculate the number of alien civilizations, which is not quite the same thing. But close.

Authors can start off with a desired value for N, and work backwards to find values for the other parameters that will give the desired result. Or use their personal best guess for the parameters and see what value of N pops out.

The Drake Equation is:

N = R* × ƒp × ne × ƒl × ƒi × ƒc × L


  • N = the number of civilizations in our galaxy with which radio-communication might be possible
  • R* = the average rate of star formation in our galaxy
  • ƒp = the fraction of those stars that have planets
  • ne = the average number of planets/moons that can potentially support life per star that has planets
  • ƒl = the fraction of planets that could support life that actually develop life at some point
  • ƒi = the fraction of planets with life that actually go on to develop intelligent life (civilizations)
  • ƒc = the fraction of civilizations that develop a technology that releases detectable signs of their existence into space
  • L = the length of time for which such civilizations release detectable signals into space
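In code, the equation is just a product of seven numbers. A minimal sketch (the sample values below are arbitrary placeholders, not estimates):

```python
def drake_n(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """N = R* x fp x ne x fl x fi x fc x L -- a straight product of the seven factors."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Purely illustrative guesses -- swap in your own:
n = drake_n(r_star=7, f_p=1.0, n_e=0.4, f_l=1.0, f_i=0.5, f_c=0.5, lifetime=1000)
print(n)  # roughly 700 detectable civilizations for this particular set of guesses
```

Note how sensitive N is: halve any single factor and N halves with it, which is why the SWAG parameters dominate the answer.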

Current Estimates

R* the average rate of star formation in our galaxy
NASA and ESA data suggest the current rate of star formation is about seven stars per year.
ƒp the fraction of those stars that have planets
Microlensing surveys suggest this is pretty close to 1.
ne the average number of planets/moons that can potentially support life per star that has planets
0.4 if you are an optimist, 0.1 if you are a pessimist.
0.4 is based on the probability that a planet lies in its star's habitable zone, determined by solar heating. 0.1 is based on the galactic habitable zone, determined by the regions of the galaxy with enough heavy elements and a lack of nearby deadly supernovae.
Things get more uncertain when you consider that many moons (such as Europa or Titan) might support life. This drastically increases the number of habitable sites in a given solar system.
And proponents of the Rare Earth hypothesis say that in order for their hypothesis to be true, it must be so close to zero that Terra is the only one. Which violates the mediocrity principle and the Copernican principle, as well as being no fun at all for science fiction authors.
ƒl the fraction of planets that could support life that actually develop life at some point
The first SWAG (Scientific Wild-Ass Guess) parameter.
1.0 if you are an optimist, 0.13 if you are a pessimist.
1.0 is based on the fact that life arose on Terra almost immediately after favorable conditions appeared. 0.13 comes from an estimate by Charles H. Lineweaver and Tamara M. Davis, derived from a statistical argument about the length of time life took to evolve on Terra.
ƒi the fraction of planets with life that actually go on to develop intelligent life (civilizations)
The second SWAG parameter.
The value of this parameter is controversial, which is a code word for "who the heck knows?" Pretty much every value between 0.0 and 1.0 has been proposed, depending upon the proposer's particular axe to grind.
ƒc the fraction of civilizations that develop a technology that releases detectable signs of their existence into space
The third SWAG parameter.
Also controversial. Some civilizations who have the technology to communicate might be paranoid enough that they keep silent. Yet other civilizations might not have the technology to communicate, but do have technology sufficiently noisy that it can be detected. Again: "who the heck knows?"
L the length of time for which such civilizations release detectable signals into space
The fourth SWAG parameter.
Most controversial of all. At its most innocuous, this could measure how long it takes for a civilization to become paranoid about giving away their position. At its most controversial, this could measure the average lifetime of a technological civilization, which is where the debate turns ugly. Over population, global warming, global thermonuclear war, and other terms for the Four Horsemen of the Apocalypse start being thrown around, and the discussion rapidly goes downhill from there.
More science-fictionally, L could measure how long it takes a civilization to be cut short in an unexpected apotheosis by a Vingean Singularity.
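Plugging in the estimates above, with arbitrary placeholder values for the three pure-SWAG parameters (ƒi, ƒc, and L), gives a feel for the spread:

```python
# Optimist vs. pessimist N using the estimates above.
# f_i, f_c, and L are pure SWAG; these placeholders are arbitrary.
f_i, f_c, L = 0.1, 0.1, 10_000

optimist  = 7 * 1.0 * 0.4 * 1.00 * f_i * f_c * L
pessimist = 7 * 1.0 * 0.1 * 0.13 * f_i * f_c * L
print(optimist, pessimist)  # roughly 280 vs. roughly 9 civilizations
```

A factor of thirty between optimist and pessimist, before the SWAG parameters even enter the argument.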

There have been several suggested modifications to the Drake Equation.

Alien civilizations might colonize other worlds. In the paper The Great Silence — The Controversy Concerning Extraterrestrial Intelligent Life, David Brin derives three equations to calculate the effects of this on N. The equations require calculus, so I'm not going to bother writing about them here. You can find them in the paper.

A given planet might give rise to several alien civilizations. An additional parameter is added for the Reappearance Factor, the average number of times a planet engenders alien civilizations. Like the other parameters, this is very hard to estimate. A lot depends upon what kills off a given civilization, specifically how much it spoils the planet for making a new civilization. A little thing like global thermonuclear war and nuclear winter would eradicate a civilization, but the planet would totally recover in a few million years. But if the primary star grew so swollen that it vaporized the planet, that would be the end. Another factor is that the first civilization to arise on a planet might use up all the fossil fuels and easily reached ores, leaving subsequent civilizations at a disadvantage. They have to jump directly to off-shore oil drilling instead of just shooting a bullet into the ground like Jed Clampett.

An alien civilization, perfectly capable of sending radio messages, just might be paranoid enough that they keep silent. There might be civilization-killers lurking about, no sense attracting their attention. This is called the METI factor, for Messaging to ExtraTerrestrial Intelligence.

The July 2013 issue of Popular Science, in an article about the TV show Doctor Who, adds the parameter ƒd, the fraction of civilizations that can survive an alien attack from space. The "d" is for "Dalek".

Contact Motivation

An alien civilization of similar technological advancement could contact Terra first. The standard motives from 1950's SF novels are, according to Solomon Golomb:

  • Help!
  • Buy!
  • Convert!
  • Vacate!
  • Negotiate!
  • Work!
  • Discuss!

Sir Arthur C. Clarke notes that the nasty little short story "To Serve Man" by Damon Knight adds an eighth motive: Serve!

But don't forget the ever popular Interstellar Trading.

Contact Anti-motivation

There are also anti-motivations. Even if the human race does not want to go all genocidal on a newly discovered alien civilization's posterior, neither do you want to make it easy for them to kill you. As far back as Murray Leinster's classic "First Contact" (1945) the warning is that when one of your starships encounters an alien starship, neither can let the other discover the location of its home planet, at least not without learning the other's location as well. If the Terran starship stupidly lets the Blortch starship find the location of Terra, well, Terra is at the Blortch's mercy. The Blortch can send their entire star fleet to blow Terra to Em-Cee-Squared, secure in the knowledge that the Terran star fleet has no idea where in the universe to dispatch a retaliation task force.

This only happens when mutually alien ships encounter each other in deep space. Naturally, if the Terran exploration ship encounters the Blortch ship while both are orbiting the Blortch homeworld, the cat is already out of the bag. Then the problem is how the Terran ship gets the vital information back to Terra without leading the Blortch there.

Things can get quite ugly. In Michael McCollum's Antares Passage (1998), all ships have explosive charges on their navigation computers, and the astrogators have been brainwashed to commit suicide if they are in danger of being captured by the enemy. In the aforementioned "First Contact", the human and alien ships try to destroy each other in battle, knowing that neither one dares run for home.

If you are really desperate, you will have to trigger the ship's self-destruct mechanism.

“Blasters, sir? What for?”

The skipper grimaced at the empty visiplate.

“Because we don’t know what they’re like and can’t take a chance! I know!” he added bitterly. “We’re going to make contacts and try to find out all we can about them—especially where they come from. I suppose we’ll try to make friends—but we haven’t much chance. We can’t trust them a fraction of an inch. We daren’t! They’ve locators. Maybe they’ve tracers better than any we have. Maybe they could trace us all the way home without our knowing it! We can’t risk a nonhuman race knowing where Earth is unless we’re sure of them! And how can we be sure? They could come to trade, of course—or they could swoop down on overdrive with a battle fleet that could wipe us out before we knew what happened. We wouldn’t know which to expect, or when!”

Tommy’s face was startled.

“It’s all been thrashed out over and over, in theory,” said the skipper. “Nobody’s ever been able to find a sound answer, even on paper. But you know, in all their theorizing, no one considered the crazy, rank impossibility of a deep-space contact, with neither side knowing the other’s home world! But we’ve got to find an answer in fact! What are we going to do about them? Maybe these creatures will be aesthetic marvels, nice and friendly and polite—and, underneath, with the sneaking brutal ferocity of a mugger. Or maybe they’ll be crude and gruff as a farmer—and just as decent underneath. Maybe they’re something in between. But am I going to risk the possible future of the human race on a guess that it’s safe to trust them? God knows it would be worthwhile to make friends with a new civilization! It would be bound to stimulate our own, and maybe we’d gain enormously. But I can’t take chances. The one thing I won’t risk is having them know how to find Earth! Either I know they can’t follow me, or I don’t go home! And they’ll probably feel the same way!

He pressed the sleeve-communicator button again.

“Navigation officers, attention! Every star map on this ship is to be prepared for instant destruction. This includes photographs and diagrams from which our course or starting point could be deduced. I want all astronomical data gathered and arranged to be destroyed in a split second, on order. Make it fast and report when ready!”

He released the button. He looked suddenly old. The first contact of humanity with an alien race was a situation which had been foreseen in many fashions, but never one quite so hopeless of solution as this. A solitary Earth-ship and a solitary alien, meeting in a nebula which must be remote from the home planet of each. They might wish peace, but the line of conduct which best prepared a treacherous attack was just the seeming of friendliness. Failure to be suspicious might doom the human race—and a peaceful exchange of the fruits of civilization would be the greatest benefit imaginable. Any mistake would be irreparable, but a failure to be on guard would be fatal.

From FIRST CONTACT by Murray Leinster (1945)

“Yes. The region of the galaxy from which you have come is that which we call the desert. It is an area almost entirely devoid of planets. Would you mind telling me which star is your home?”

Cohn stiffened.

“I’m afraid our government would not permit us to disclose any information concerning our race.”

“As you wish. I am sorry you are disturbed. I was curious to know — ” He waved a negligent hand to show that the information was unimportant. We will get it later, he thought, when we decipher their charts.

“There are no charts,” he grumbled, “no maps at all. We will not be able to trace them to their home star.”

The reports were on his desk and he regarded them with a wry smile. There was indeed no way to trace them back. They had no charts, only a regular series of course-check coordinates which were preset on their home planet and which were not decipherable. Even at this stage of their civilization they had already anticipated the consequences of having their ship fall into alien hands.

From ALL THE WAY BACK by Michael Shaara (1952)

(ed note: a human ("monster") ship has surprised the alien Ryall planet and the Ryall ship Space Swimmer)

     “I have a message for you from Ossfil of Space Swimmer.”
     “Proceed with the message.”
     “‘The monsters have me surrounded and I am unable to reach the gateway. I am taking evasive action, but will not be able to escape. Request instructions. Ossfil, commanding Space Swimmer.’“
     Varlan muttered a few deep imprecations to the evil star before replying. “Transmit the following: ‘From Varlan of the Scented Waters to Ossfil of Space Swimmer. As a minimum, you will destroy your astrogation computer and trigger the amnesia of your astrogator. After that is done, you may act on your own initiative.’“

     “What of the astronomical data in his computer?”
     “I have given the order that he destroy his computer and trigger his astrogator’s amnesia. Failing that, of course, he will destroy his ship.”

     “It is regrettable, Varlan of the Scented Waters, but I still have considerable astronomical data in my brain, including knowledge of the positions of many of the gateways throughout the hegemony.”
     Varlan “blinked” in horror at Salfador’s revelation.

     “You must have been fitted with an amnesia spell. Give me your trigger code and I will excise the knowledge from your brain,” she said.
     The miserable look on Salfador’s features was all the answer she needed. Even so, he said, “I’m afraid that I was never fitted with such. I had not intended to be an astrogator on a starship, and therefore, had no need.”

     To ask a philosopher to camp in the woods like a barbarian was unthinkable. Even more unthinkable, however, was allowing Salfador to fall into the grasp of the monsters. Finally, she said: “You know what you must do, of course.”
     Salfador signaled his agreement. “I have already done so. There are many poisons in the medical kits. I injected myself with one before coming here. Do not fear. My death will be quite painless.”

(ed note: the humans have captured the alien ship Space Swimmer, and are puzzling over the alien's strange behavior)

     “Naw. Shot him with a dart. He’ll be all right, ‘cept that he’s crazy as a high plateau jumper.”
     “How so?”
     “I found him amidships in one of the equipment rooms. He had this big bar he’d ripped out of some machinery and was using it to beat holy hell out of some access panel. Looked to me like he wanted to get through it and into the machinery beyond.

     “What did you say just now, Corporal?” he asked.
     “I said this damned crazy centaur attacked me, sir...”
     “No, about his trying to smash a machine. What machine?”
     “‘Fraid I don’t recognize this alien machinery too good, sir.”
     “Take me to it.”
     Sayers led the way, followed by Philip Walkirk and Sergeant Barthol. They moved through gloomy corridors until they reached a small compartment almost at the very center of the spherical ship.
     “Yonder machine over there, sir!” Sayer said, playing the beam from his hand lamp over a dented access panel.
     Philip gazed at the panel, blinked, and then emitted a low whistle.
     “This thing important, sir?” Barthol asked.
     “You might say that,” Philip replied. “What Corporal Sayers refers to as ‘yonder machine’ is their astrogation computer. The fact that he was trying to beat it to death may mean that their normal destruct mechanism failed to operate properly.”
     “That good, sir?”
     Philip Walkirk’s sudden laughter startled the two noncoms. “That box, Sergeant, may well contain information vital to the conduct of the war.”
     “What information, sir?”
     “If we’ve been very, very lucky, we may just dredge up a foldspace topology chart for the whole damned Ryall hegemony!”

(ed note: an alien species called the Makers needs a faster-than-light starship drive. Over a period of several thousand years they send out STL robot space probes with artificial intelligence to seek out other species, and ask them about FTL.

Probe 53935 was passing by the solar system in 2065 CE when it spotted the signature of an FTL starship around Procyon. It didn't have enough reaction mass to head to Procyon, so it decided to visit Terra and bargain for some remass. Ordinarily it would have ignored Terra because they were too technologically primitive to be worthwhile.

Due to some unfortunate squabbles between have and have-not nations, the probe was destroyed by the Pan-African alliance. A part of the probe's AI survived. It decided to commit suicide and told the Terran forces to back out of the blast radius.

The United Nations convinced the AI to make a bargain. The UN would mount an STL expedition to Procyon and transport the AI there. It would then help the AI fulfil its prime directive and carry the secret of FTL back to the Maker civilization. In exchange the AI would give the UN some of the Maker's technology.

About 300 years later the colonists at Procyon, called "Alphans", find the secret of FTL and head back to Terra in their first starship. They bring along the AI. They treat the original bargain made with the AI as a sacred trust, and are committed to bringing the secret of FTL back to the Makers.

Unfortunately the government of Terra is apprehensive, and feels zero obligation to honor the bargain.)

      “Williams is concerned that there are a great many species in this galaxy who would regard a starship full of humans the way we would look upon an ownerless cow with a bag of gold strapped to its neck. He fears an attack on Earth should the Alphans lose the secret of the FTL drive to aliens.”
     “Why attack us?”
     “We are potential competitors.”
     “Surely there are security measures we could take to hide the location of our home system, Sergei. Hypnosis, drugs, orders for astrogators to suicide on capture, that sort of thing.”
     Vischenko shook his head. “It might not be that simple, Executive. An FTL starship leaves a radiation wake wherever it goes. Using the proper instruments, this trail can be detected decades later. That was how the probe knew that Procyon was the site of an FTL base. It is also the reason we were so quick to detect the Alphans’ arrival. By the way, we are still tracking their wake. Scientists tell me we’ll be able to watch it all the way back to Procyon, some twelve light-years distant.”
     Duval considered Vischenko’s words for long seconds, and then nodded slowly. “I’m beginning to see your point. We’ll proceed with caution, at least until we know what we’re up against.”

     “Well,” (Henri) Duval (the equivalent of a World President) asked, turning to his one-time mentor, “what do you think?”
     Josip Betrain was an old man, even by contemporary standards. He was also a sick man. He suffered from a degenerative nerve disorder that caused his body to twitch continuously. After fifteen decades of life, Betrain had not long to live. That fact gave him an unusually clear view of things, and made him a particularly valued advisor to the Chief Executive.
     “You are trying to decide whether we should participate in this adventure of theirs to search out the probe’s creators?”
     “And you want my advice?”
     “I presume that was a rhetorical question since you know damned well that I do.”
     “My advice to you, Executive, is simple. You are going to end up going out to the stars whether you like it or not. You had best like it.”
     Duval sighed. “You know, Josip, I sometimes think you overplay your role as my Oracle of Delphi. Would you care to explain your last remark?”
     “Nothing mysterious about it. If you decide to allow the expedition, you will have to build a fleet of ships of your own to accompany them. It is patently obvious that they will never find the Maker world on their own. The logistics are far too great for their society to handle. And, Henri, if you decide that fulfilling this ‘Promise’ of theirs endangers the Earth, then you will need that star fleet even more.
     “I don’t follow you.”
     “Sure you do. However, when the facts are unpleasant, even a Chief Executive tends to avert his eyes. Therefore, let history record that it was from my lips that the fateful words first fell.” Betrain drew himself up and seemed to gather strength from somewhere within. When he spoke, the usual hoarse whisper was replaced by a reedy monotone.
     “If Professor Williams’ scenario is correct, Executive, it is your solemn duty to prevent the Alphans from giving the FTL secret to any alien species whatsoever! That includes the beings that built the probe. Once the secret is out, it is out!  Obviously, the only way to stop the spread of a dangerous technology is at the source. To do that, you will have to take control of Procyon VII itself. You may well be forced into becoming that which you have long despised — an imperialist aggressor who launches an unprovoked attack against people who have done nothing to deserve it.”
     “I take it then, Josip, that you agree with Professor Williams’ assessment of the danger?”
     “I do not!” the old man growled. There was a deep rattle in his chest and he was overcome with a wracking cough. After the spasm passed, he lifted himself upright, stared at Duval with rheumy eyes, and continued. “Study your history, man! Every forward step the race has ever taken was opposed by someone afraid of what we would find. So far, the ‘naysayers’ have been 100% wrong. As a result, history does not speak kindly of leaders who lose their nerve at critical moments.
     “But my opinion doesn’t count. Neither does yours. We cannot lose sight of the possibility that the Colin Williams’s of this world might be right this time. Maybe the galaxy is full of alien monsters just waiting for a shipload of hayseed humans to blunder into their clutches.”

From PROCYON'S PROMISE by Michael McCollum (1985)

The Fermi Paradox

Sooner or later one has to confront the Fermi Paradox. A good overview of the problem is David Brin's Xenology: The Science of Asking Who's Out There and The 'Great Silence': the Controversy Concerning Extraterrestrial Intelligent Life. For more detail, try Where Is Everybody?: Fifty Solutions to the Fermi Paradox and the Problem of Extraterrestrial Life by Stephen Webb.

The Fermi Paradox points out that:

  • There is a high probability of large numbers of alien civilizations
  • But we don't see any

So by the observational evidence, there are no alien civilizations. The trouble is that means our civilization shouldn't be here either, yet we are.

The nasty conclusion is that our civilization is here, so far. But our civilization is fated for death, and the probability is death sooner rather than later. This is called The Great Filter, and it is a rather disturbing thought.

And the problem is not just that we see no alien civilizations. It is the fact that humans exist at all. Terra should by rights be an alien colony, with the aliens using dinosaurs as beasts of burden.

Using slower-than-light starships, it would be possible to colonize the entire galaxy in 5 million to 50 million years. By one alien civilization alone. Naturally the time goes down as the number of colonizing civilizations goes up.

So during the current life-span of our galaxy, it would have been possible for it to be totally colonized 250 to 2500 times. At a minimum.
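The arithmetic behind those numbers is just division, assuming a galaxy roughly 13 billion years old:

```python
galaxy_age = 13e9        # years: rough age of the galaxy
slow, fast = 50e6, 5e6   # pessimistic / optimistic colonization times, in years

print(galaxy_age / slow)  # 260.0  -> ~250 complete colonizations, minimum
print(galaxy_age / fast)  # 2600.0 -> ~2500 at the fast end
```

Even the pessimistic figure leaves room for hundreds of complete galactic takeovers, which is exactly what makes the Paradox bite.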

The Fermi Paradox asks: why isn't Terra an alien colony right now?

Granted, an alien civilization might not be interested in colonization. There might be thousands of civilizations, all content on their home planets, with nary a thought of colonization at all. But remember: it only takes one. For anti-colonization bias to be a solution to the Fermi paradox, every single freaking civilization would have to share it with no exceptions at all. If there is even one exception, the galaxy is colonized in the blink of a galactic eye.

Again, the problem with no alien civilizations existing is that it implies our civilization should not exist either. A galaxy with 400 billion stars and 13.8 billion years of time to play with should have produced either millions of civilizations or zero civilizations, but not just one. Violates the mediocrity principle and the Copernican principle, that does. Every single time people have theorized that Terra has a central, specially favored position in the universe, it has turned out to be ludicrously wrong.

Which means our civilization exists so far, but it is due to become extinct quite soon.

This means it is going to be real bad news if we discover any alien life forms at all in our solar system, even bacteria. It will imply that life is common in the universe, life on Terra is not special, The Great Filter must have wiped out all the other civilizations, and we are next.

Naturally there are quite a few solutions proposed. Stephen Webb's book has fifty of them. Some examine the Drake Equation's parameters with an eye towards finding unexpected constraints on the values.

The Wikipedia article has a broad outline of various classifications the solutions fall into. Refer to that article for details.

  • Few, if any, other civilizations currently exist
    • No other civilizations have arisen (see also Rare Earth Hypothesis)
    • It is the nature of intelligent life to destroy itself
    • It is the nature of intelligent life to destroy others (Berserker Hypothesis)
    • Life is periodically destroyed by naturally occurring events
    • Human beings were created alone
    • Inflation hypothesis and the youngness argument (multiple universes with synchronous gauge probability distribution)
  • They do exist, but we see no evidence
    • Communication is improbable due to problems of scale
    • Intelligent civilizations are too far apart in space or time
    • It is too expensive to spread physically throughout the galaxy
    • Human beings have not been searching long enough
    • Communication is improbable for technical reasons
    • Humans are not listening properly
    • Aliens aren't monitoring Earth because Earth is not superhabitable
    • Civilizations broadcast detectable radio signals only for a brief period of time
    • They tend to experience a technological singularity
    • They are too busy online
    • They are too alien
    • They are non-technological
    • The evidence is being suppressed (the Conspiracy Theory)
    • They choose not to interact with us
    • They don't agree among themselves (no talking with Terrans until Galactic UN is in agreement)
    • Earth is deliberately not contacted (Zoo Hypothesis)
    • Earth is purposely isolated (Planetarium Hypothesis)
    • It is dangerous to communicate
    • The Fermi paradox itself is what prevents communication (implies that communication is lethal)
    • They are here unobserved

Dr. Geoffrey A. Landis has a possible solution based on Percolation Theory. A more depressing solution is in Toolmaker Koan by John McLoughlin. It argues that any intelligent species that invents tools starts a process of accelerated progress that inevitably leads to extinction by warfare over dwindling resources.

A nastier solution appears in the classic The Killing Star by Charles Pellegrino and George Zebrowski, Run To The Stars by Michael Scott Rohan, and Antares Dawn by Michael McCollum (see below). It boils down to a variant on the Berserker Hypothesis.

In A Fire Upon The Deep, Vernor Vinge postulates a solution based upon Terra being located in a less desirable region of the galaxy.

Fermi's Nightmare

When we’re considering what kind of tropes to use in a science-fictional setting, we need to be aware of an observation most commonly called the Fermi Paradox. It goes something like this:

  • The galaxy is very large (at least a hundred billion stars) and very old (billions of years).
  • Stars with planets appear to be very common, and it seems reasonable to assume that many of those planets provide conditions suitable for life.
  • Given enough time, there seems to be a significant probability that any planet supporting life will eventually give rise to an intelligent species capable of tool use and a high-technology civilization.
  • If a high-technology civilization becomes capable of interstellar travel, even using very slow methods, it should be able to colonize the entire galaxy within a few million years. If easier or faster interstellar travel turns out to be possible, that process could take considerably less time.
  • Therefore, we should see evidence of previous visits to and colonization of our own solar system. Possibly a lot of such evidence.
  • We don’t. Where is everybody?

An astute reader will notice that there’s all manner of hand-waving in that argument. When Enrico Fermi walked through it back in 1950, we didn’t know very much about the galaxy around us. Most of the probabilities and quantities implicit in the argument were unclear. Today we have more evidence for a few items – we know that most stars probably have planets, for example, because we’ve detected thousands of them in recent years. Still, at a lot of points we’re arguing from a sample size of one – our own situation – and that’s always dangerous.

It’s entirely possible that Fermi’s observation isn’t a paradox at all. Perhaps life is much rarer than we assume. Or perhaps complex life is vanishingly rare – the universe may be crammed full of bacteria, with the appearance of big tool-using animals like us as an aberration. Or perhaps high-technology civilizations almost never figure out the trick of interstellar travel, either because they don’t survive long enough, or because interstellar travel is even harder than we think. We need more data.

On the other hand, when we want to design a space-operatic setting, we have to implicitly assign values to several of those quantities. So it behooves us to assign values that make sense together, and don’t run us straight into Fermi’s Paradox at warp speed.

I’d like to suggest the following rule of thumb:

If a given science-fiction setting has multiple interstellar civilizations, and the typical civilization undergoes territorial expansion at a rate of 1% per year, then no civilization should be expected to survive longer than 1,000 years. For every factor of ten by which the growth rate is reduced, the allowable lifespan for interstellar civilizations will increase by a factor of ten.

The reasoning here is fairly straightforward.

Starting with a single fully occupied star system, a civilization which grows at 1% per year doubles its territory in not quite 70 years. Now it fully occupies two star systems (or, more likely, it has that one home system and small colonies in several other star systems). In another 70 years, it fully occupies four systems. In another 70 years, it fully occupies eight. The power of compound-interest expansion: in about 2,500 years that civilization has fully occupied one hundred billion star systems, and at that point the Milky Way is full to bursting.
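You can check the compound-interest arithmetic yourself; 100 billion systems at 1% growth per year works out to:

```python
import math

growth = 0.01                                   # 1% per year
doubling = math.log(2) / math.log(1 + growth)   # years to double territory
fill = math.log(1e11) / math.log(1 + growth)    # years to occupy 1e11 systems

print(round(doubling))  # ~70 years per doubling
print(round(fill))      # ~2545 years -- "about 2,500"
```

Thirty-seven doublings take you from one star system to a full Milky Way; compound interest does not care how big the galaxy is.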

If there are multiple interstellar cultures around, and that kind of growth is typical for them, then we have a problem. In the past billion years, Earth should have been overrun many times over. The Fermi Paradox is in full force, unless something comes along to eat civilizations for dinner long before they reach that point. That could be a recurring natural disaster, or an intelligent super-cultural force that cuts young civilizations short. Or maybe civilizations tend to stop their territorial expansion, turn to other concerns, and then die out. It’s your setting, your choice.

For the purpose of this rule of thumb, I stipulate that the lifespan limit is only 1,000 years. This is a nice round number, and it permits us to assume the presence of many interstellar civilizations at any given time, all of them following the same dynamics of growth and decay.

If we want star-faring cultures to live longer, then we have to adjust the other parameter in the model, their typical rate of growth. Given how the math works, if we divide the growth rate by ten, the allowable lifespan in turn grows almost exactly by a factor of ten. So if we want our interstellar civilizations to last on the order of ten thousand years, we need to assume a growth rate of 0.1% per year. A hundred thousand years, 0.01% per year.
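(ed note: the factor-of-ten scaling can be checked directly. This sketch computes the time to fill a hundred-billion-system galaxy at several growth rates; note that it yields the generous upper bound of roughly 2,500 years at 1%, where the author's rule of thumb conservatively allows only 1,000.)

```python
import math

GALAXY_SYSTEMS = 1e11  # ~100 billion star systems

def allowable_lifespan(growth_rate):
    """Years for one fully occupied system to fill the galaxy
    at the given compound growth rate per year."""
    return math.log(GALAXY_SYSTEMS) / math.log(1 + growth_rate)

for rate in (0.01, 0.001, 0.0001):
    print(f"{rate:.2%}/yr -> ~{allowable_lifespan(rate):,.0f} years")
```

Dividing the growth rate by ten multiplies the fill time almost exactly by ten, as the essay states.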

Notice what this says about interstellar cultures, assuming that we aren’t living in a “Rare Earth” universe in which there just aren’t any intelligent beings other than ourselves. The Fermi Paradox seems to suggest that longevity requires very slow growth. The growth rates required to permit the existence of million-year-old civilizations are so low that they’re just about indistinguishable from a steady state.

Perhaps this shouldn’t surprise us. After all, on our little planet and for most of human history, our own population growth rates were very low. Only the Industrial Revolution, and subsequent improvements in agricultural technology, sanitation, and medicine, permitted us to undergo a period of rapid expansion. Human population growth peaked at a little over 2% in the early 1960s, is currently back down to a little over 1%, and may not be sustainable at even that pace for very long. The galaxy as a whole is a much bigger field of endeavor . . . but given even a little time, compound interest has a way of overwhelming such differences in scale.

Now, notice one other implication: none of this should be a surprise to any culture that manages to figure out the trick of interstellar travel. By the time such a culture has been out among the stars for a while, it should have a good estimate for every parameter in the relevant mathematics. Which means that if our characters live in a fast-growing interstellar civilization, or they know of other such cultures, they should be very worried.

Why? Well, let’s look at a specific science-fictional universe that I’ve been playing with for some time: the one in the popular Mass Effect series of video games.

In Mass Effect, humanity emerges out into the galaxy in the mid-22nd century, to find a number of other interstellar civilizations already well-established. The oldest of these civilizations date back to about three thousand years ago. We learn of dozens or even hundreds of colony worlds settled in that time, some of them with populations in the billions, which would suggest a typical territorial growth rate that’s modest but still significant – say, about 0.2% or so per year. We also learn that there have been plenty of former interstellar cultures, all of them now extinct.

In the course of the first Mass Effect game, the protagonist discovers the existence of the Reapers, a force of godlike sentient machines that periodically sweep the Milky Way and exterminate all advanced civilizations. The story details the effort to delay the return of the Reapers, and then to defeat them and win the survival of galactic civilization once they do return.

It’s all very well-done space opera, with plenty of attention given to a plausible setting and plot. But there’s one detail that should set off alarms for us: the protagonist has a great deal of difficulty persuading anyone in authority that the Reapers even exist, until it’s far too late.

From one perspective, of course, this is fine. The hero forced to act on her own, because those in authority don’t take a threat seriously, is a perfectly useful trope to apply. Yet after we’ve given the Fermi Paradox some thought, we have to ask how the rulers of any galactic civilization could remain fully ignorant of the implications.

Notice what our rule of thumb suggests for this universe. Interstellar cultures growing at about 0.2% per year tell us that the maximum lifespan for any civilization is somewhere around five thousand years. Most of that time has already passed. Just about everyone who’s paying attention should probably be looking around with a great deal of apprehension right now.

We know that interstellar travel is easy, and that civilizations can grow with significant speed. We know that there have been other interstellar cultures before our own. Why wasn’t the galaxy already full when we arrived? Why are all those other cultures extinct?

What made them extinct? What might be waiting to make us extinct before we manage to fill up the galaxy with our own colonies? Shouldn’t we be trying to find out?

If you’re a potential author, maybe your fictional galaxy won’t have anything in it like the Reapers. But there has to be something to keep the galaxy from becoming over-crowded, many times over during its long history. You need to take a moment and consider what that might be.

From Fermi's Nightmare by Sharrukin (2015)

Exterminomachy and Consequences

RocketCat sez

This is really nasty, but far too plausible. There are too many human beings who would find the paranoid logic to be perfectly reasonable.

It's sort of like a self-fulfilling Berserker Hypothesis.


Little is known of the culture, former civilization, and even biology of the skrandar species. Extreme xenophobes, they had little interaction with the species of the Worlds even post-contact. The destruction of their homeworld along with the rest of Skranpen (Charred Waste)’s¹ inner system in the self-induced nova of their sun (on detecting the relativistic approach of the Serene Fleet) has left little archaeological evidence available for study. Even the name of the Skranpen system, like that of the species, is phonemically generated and institute-assigned. What little is known of the skrandar is based on abstractions from damaged and disabled examples of the skrandar berserker probes and the two identified replication sites captured in the Exterminomachy.

What has been extracted from these sources (see declassified reports tagged PYRETIC PHAGE) suggests that the skrandar were in the grip of a peculiar type of madness at the end. It is believed among crypto-archaeologists that the skrandar had a preexisting cultural obsession with the Precursor Paradox: namely, why, when we see evidence of elder races and Precursor civilizations aplenty, and both life and intelligence appear to be relatively common within the Starfall Arc, has the galaxy not been colonized and/or hegemonized long since by ancient civilizations?

(Indeed, given the relative isolation of the Skranpen system, this paradox must have weighed even more heavily on the minds of the skrandar than on those species which originated in more populous galactic neighborhoods.)

The leading hypothesis, therefore, is that xenognosis came as a severe trauma to the skrandar; upon seeing the impossible, in the light of a presumed filter preventing starfaring civilizations from existing, they collectively went mad. If, they reasoned, there was – must be – some reason for the destruction of starfaring civilizations, then they themselves could only escape that fate by becoming that reason. And so they turned as a species to the manufacture of berserker probes designed to cull all other sapient, starfaring life.

It is easy for us today, looking back on the Exterminomachy, to attribute the tragedy of the skrandar solely to some inherent flaw in the species. But consider this: the skrandar were isolated, by their own choice. They had the opportunity, therefore, to go mad quietly, unknown to the rest of the civilized galaxy, hearing no voices but their own unreason.

For this reason, among others, the Exploratory Service at this time maintains its pro-contact, pro-intervention, pro-socialization policy towards emerging species. Whatever the short-term cultural impact of xenognosis might be, in the longer term, it very much endorses the view that an ounce of prevention today is better than a gigaton of cure tomorrow.

1. While identified here as a system of the Charred Waste constellation, the Skranpen system is not connected to the stargate plexus; it is, however, located centrally in the constellation in real space.

(ed note: OMRD stands for "Office of Military Research and Development")

Radio Silence


That is the expected number of intelligent civilizations in our galaxy, according to Drake’s famous equation. For the last 78 years, we had been broadcasting everything about us – our radio, our television, our history, our greatest discoveries – to the rest of the galaxy. We had been shouting our existence at the top of our lungs to the rest of the universe, wondering if we were alone. Thirty-six million civilizations, yet in almost a century of listening, we hadn’t heard a thing. We were alone.

That was, until about five minutes ago.

The transmission came on every transcendental multiple of hydrogen’s frequency that we were listening to. Transcendental harmonics – things like hydrogen’s frequency times pi – don’t appear in nature, so I knew it had to be artificial. The signal pulsed on and off very quickly with incredibly uniform amplitudes; my initial reaction was that this was some sort of binary transmission. I measured 1679 pulses in the one minute that the transmission was active. After that, the silence resumed.

The numbers didn’t make any sense at first. They just seemed to be a random jumble of noise. But the pulses were so perfectly uniform, and on a frequency that was always so silent; they had to come from an artificial source. I looked over the transmission again, and my heart skipped a beat. 1679 – that was the exact length of the Arecibo message sent out 40 years ago. I excitedly started arranging the bits in the original 73 x 23 rectangle. I didn’t get more than halfway through before my hopes were confirmed. This was the exact same message. The numbers in binary, from 1 to 10. The atomic numbers of the elements that make up life. The formulas for our DNA nucleotides. Someone had been listening to us, and wanted us to know they were there.

Then it came to me – this original message was transmitted only 40 years ago. This means that life must be at most 20 lightyears away. A civilization within talking distance? This would revolutionize every field I have ever worked in – astrophysics, astrobiology, astro-

The signal is beeping again.

This time, it is slow. Deliberate, even. It lasts just under five minutes, with a new bit coming in once per second. Though the computers are of course recording it, I start writing them down. 0. 1. 0. 0. 0. 0. 1. 0... I knew immediately this wasn’t the same message as before. My mind races through the possibilities of what this could be. The transmission ends, having transmitted 248 bits. Surely this is too small for a meaningful message. What great message to another civilization can you possibly send with only 248 bits of information? On a computer, the only files that small would be limited to…


Was it possible? Were they really sending a message to us in our own language? Come to think of it, it’s not that out of the question – we had been transmitting pretty much every language on Earth for the last 70 years… I begin to decipher with the first encoding scheme I could think of – ASCII. 0. 1. 0. 0. 0. 0. 1. 0. That’s B... 0. 1. 1. 0. 0. 1. 0. 1. E…
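(ed note: the decoding step the narrator performs, grouping the bits into 8-bit bytes and reading each byte as an ASCII character, can be sketched as below. The bitstring here is an illustrative fragment, not the story's message.)

```python
def decode_ascii_bits(bits):
    """Split a bitstring into 8-bit groups and decode each as ASCII."""
    return "".join(
        chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8)
    )

# 01000010 is 66 ('B'), 01100101 is 101 ('e')
print(decode_ascii_bits("0100001001100101"))  # -> Be
```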

As I finish piecing together the message, my stomach sinks like an anchor. The words before me answer everything.


From Radio Silence, credited to bencbartlett

The Dark Forest Rule

In 2006, author Liu Cixin published a novel named 三体 (The Three-Body Problem), which won the Chinese Science Fiction Galaxy Award that year and the 2015 Hugo Award for Best Novel. It proposed a solution to the Fermi Paradox plausible enough to be analyzed in a paper published in the Journal of the British Interplanetary Society (The Dark Forest Rule: One Solution to the Fermi Paradox).

It is very similar to the scenario set out in Pellegrino & Zebrowski's The Killing Star.

The Dark Forest Rule has two basic hypotheses:

  1. Survival is the primary requirement of civilization
    (civilizations that don't care if they live or die won't last long)

  2. Civilization grows and expands continuously, whereas the total cosmic materials remain constant
    (all warfare boils down to two monkeys and one banana)

and two basic concepts:

  1. Suspicion Chain: poor communication between different civilizations in the universe results in civilizations distrusting each other
    (you can't trust that mysterious tribe who lives over the hills, I'll bet you they have horns growing out of their heads and eat human flesh)

  2. Technology Explosion: technologies in civilizations may achieve explosive breakthroughs and development at any time, which are beyond the accurate estimation of any distant civilization with its own technological level
    (you never know when some other nation will unexpectedly invent the Ultimate Weapon while your back is turned)

The result of combining these hypotheses and concepts:

  1. Civilizations in the universe are competing for resources
    (there ain't enough to share)

  2. Civilization cannot trust other civilizations
    (because they have horns growing out of their heads and eat human flesh)

  3. Civilizations cannot be confident about the advancement of their technology
    (at any moment those horned flesh eaters who want our galactic resources might invent a Nicoll-Dyson Beam and kill us all)

In other words, it's The Law Of The Jungle: "every man for himself," "anything goes," "the needs of the one outweigh the needs of the many," "survival of the strongest," "survival of the fittest," "kill or be killed," "dog eat dog," and "eat or be eaten." Basically the "state of nature" as proposed by Thomas Hobbes.


“The universe is a dark forest. Every civilization is a hunter with gun in hand, sneaking through the forest. He must be careful, for there are other hunters in the forest. If he discovers other life, he can do only one thing: shoot it. In this forest, other lives are hell and a constant threat. Any life that exposes its existence will soon be killed. This is the picture of civilization in the universe.”

From The Three-Body Problem by Liu Cixin (2006)

The hypothesis "survival is the primary requirement of civilization" is not saying that there are no alien civilizations that are moral-advocating or selfless. The hypothesis is saying that such civilizations will be slaughtered by survivalist civilizations, much as the Roman civilization was defeated by the Goths and the Song Dynasty was defeated by Mongolian cavalry. In other words, "nice guys finish last."

The hypothesis "civilization grows and expands continuously, whereas the total cosmic materials remain constant" does not mean that civilizations will fight because they are greedy for all the resources. It means civilizations will fight because they do not know how many resources will ensure survival; the only safe assumption is "all of them." In a broader sense, all activities of all living civilizations increase entropy (by the second law of thermodynamics). Therefore other civilizations must be killed to slow down the heat death of the universe, thus prolonging the time this civilization has to live.

The limit of the speed of light means that two-way communication between civilizations can take years to centuries. This helps create the suspicion chain. But even with rapid communication you will instantly find yourself in the middle of The Prisoner's Dilemma which is the suspicion chain raised to the second power.

In addition, even limited communication with another civilization might inadvertently trigger in them a technological explosion, and suddenly you will find that you've brought a knife to a gun fight.

Given all that, the only safe strategy is to instantly try and kill any alien civilizations you come across. Especially since chances are they will have followed the same train of logic, and will instantly try to kill you once they discover you exist.

If you kill them, the worst thing that might happen is you'll discover they were a non-expanding, moral-advocating race. Which is a shame, but even so they were creating entropy.

If you leave them alone, you are rolling the dice and risking the extinction of your entire species.

Of course when you try to kill them, you'll have to be covert about it. Just in case they turn out to be more powerful than you are. You'll have to try to avoid letting them discover that your species even exists in the first place. Keeping in mind that if they are incredibly more advanced than you are, they will have found you first and you are doomed. But there isn't much you can do about that. Given the "Apes or Angels" scenario, chances are there will be a huge technological inequality between the two civilizations. Meeting between civilizations of the same tech level will be rare.

The big drawback to instantly killing a newly discovered alien civilization is that an ultra-highly advanced alien civilization (that is, one vastly more advanced than you are) might notice your attack. Then they will probably obliterate you. If you are worried about that, the best strategy is to try and hide your entire civilization, and avoid contact with anyone.

Unless the ultra-highly advanced alien civilization does not obliterate you, for fear that a third ultra-ultra-ultra-highly advanced alien civilization will notice the attack and obliterate them.

So, the answer to the Fermi Paradox is either:

  1. All the other civilizations have been killed except for a couple of bloodthirsty ones

  2. All the other civilizations are doing their best to hide, either to avoid attracting the attention of a killer civilization or because they are a killer civilization which thinks that killing is too much of a risk

The Killing Star

From The Killing Star by Charles Pellegrino and George Zebrowski (you really should read this book):

The great silence (i.e. absence of SETI signals from alien civilizations) is perhaps the strongest indicator of all that high relativistic velocities are attainable and that everybody out there knows it.

The sobering truth is that relativistic civilizations are a potential nightmare to anyone living within range of them. The problem is that objects traveling at an appreciable fraction of light speed are never where you see them when you see them (i.e., light-speed lag). Relativistic rockets, if their owners turn out to be less than benevolent, are both totally unstoppable and totally destructive. A starship weighing in at 1,500 tons (approximately the weight of a fully fueled space shuttle sitting on the launchpad) impacting an earthlike planet at "only" 30 percent of lightspeed will release 1.5 million megatons of energy -- an explosive force equivalent to 150 times today's global nuclear arsenal... (ed note: this means the freaking thing has about nine hundred mega-Ricks of damage!)
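(ed note: the 1.5-million-megaton figure checks out against the relativistic kinetic energy formula KE = (γ − 1)mc². A sketch, using the quote's 1,500-ton mass and 0.3c, plus the standard 4.184×10¹⁵ joules-per-megaton conversion.)

```python
import math

C = 299_792_458.0     # speed of light, m/s
MEGATON_J = 4.184e15  # joules per megaton of TNT

mass_kg = 1.5e6       # 1,500 metric tons
beta = 0.30           # 30% of lightspeed

gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
kinetic_energy = (gamma - 1.0) * mass_kg * C ** 2  # joules

print(f"{kinetic_energy / MEGATON_J:,.2e} megatons")  # ~1.5 million megatons
```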

I'm not going to talk about ideas. I'm going to talk about reality. It will probably not be good for us ever to build and fire up an antimatter engine. According to Powell, given the proper detecting devices, a Valkyrie engine burn could be seen out to a radius of several light-years and may draw us into a game we'd rather not play, a game in which, if we appear to be even the vaguest threat to another civilization and if the resources are available to eliminate us, then it is logical to do so.

The game plan is, in its simplest terms, the relativistic inverse to the golden rule: "Do unto the other fellow as he would do unto you and do it first."...

When we put our heads together and tried to list everything we could say with certainty about other civilizations, without having actually met them, all that we knew boiled down to three simple laws of alien behavior:


    If an alien species has to choose between them and us, they won't choose us. It is difficult to imagine a contrary case; species don't survive by being self-sacrificing.


    No species makes it to the top by being passive. The species in charge of any given planet will be highly intelligent, alert, aggressive, and ruthless when necessary.

    They will assume that the first two laws apply to us.

Your thinking still seems a bit narrow. Consider several broadening ideas:

  1. Sure, relativistic bombs are powerful because the antagonist has already invested huge energies in them that can be released quickly, and they're hard to hit. But they are costly investments and necessarily reduce other activities the species could explore. For example:
  2. Dispersal of the species into many small, hard-to-see targets, such as asteroids, buried civilizations, cometary nuclei, various space habitats. These are hard to wipe out.
  3. But wait -- while relativistic bombs are readily visible to us in foresight, they hardly represent the end point in foreseeable technology. What will humans of, say, two centuries hence think of as the "obvious" lethal effect? Five centuries? A hundred? Personally I'd pick some rampaging self-reproducing thingy (mechanical or organic), then sneak it into all the biospheres I wanted to destroy. My point here is that no particular physical effect -- with its pluses, minuses, and trade-offs -- is likely to dominate the thinking of the galaxy.
  4. So what might really aged civilizations do? Disperse, of course, and also not attack new arrivals in the galaxy, for fear that they might not get them all. Why? Because revenge is probably selected for in surviving species, and anybody truly looking out for long-term interests will not want to leave a youthful species with a grudge, sneaking around behind its back...

I agree with most parts of points 2, 3, and 4. As for point 1, it is cheaper than you think. You mention self-replicating machines in point 3, and while it is true that relativistic rockets require planetary power supplies, it is also true that we can power the whole Earth with a field of solar cells adding up to barely more than 200-by-200 kilometers, drawn out into a narrow band around the Moon's equator. Self-replicating robots could accomplish this task with only the cost of developing the first twenty or thirty machines. And once we're powering the Earth practically free of charge, why not let the robots keep building panels on the Lunar far side? Add a few self-replicating linear accelerator-building factories, and plug the accelerators into the panels, and you could produce enough anti-hydrogen to launch a starship every year. But why stop at the Moon? Have you looked at Mercury lately? ...
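(ed note: the 200-by-200-kilometer claim is at least the right order of magnitude. A sketch; the solar flux of ~1361 W/m² is the measured value at 1 AU, while the 20% cell efficiency is this editor's assumption, not a number from the quote.)

```python
AREA_M2 = (200e3) ** 2   # 200 km x 200 km of panels, in square meters
SOLAR_FLUX = 1361.0      # W/m^2 at 1 AU, above any atmosphere
EFFICIENCY = 0.20        # assumed cell efficiency

power_watts = AREA_M2 * SOLAR_FLUX * EFFICIENCY
print(f"{power_watts / 1e12:.1f} TW")  # ~11 TW
```

Eleven terawatts is of the same order as humanity's total power consumption, so the quote's claim is plausible under these assumptions.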

Dr. Wells has obviously bought into the view of a friendly galaxy. This view is based upon the argument that unless we humans conquer our self-destructive warlike tendencies, we will wipe out our species and no longer be a threat to extrasolar civilizations. All well and good up to this point.

But then these optimists make the jump: If we are wise enough to survive and not wipe ourselves out, we will be peaceful -- so peaceful that we will not wipe anybody else out, and as we are below on Earth, so other people will be above.

This is a non sequitur, because there is no guarantee that one follows the other, and for a very important reason: "They" are not part of our species.

Before we proceed any further, try the following thought experiment: watch the films Platoon and Aliens together and ask yourself if the plot lines don't quickly blur and become indistinguishable. You'll recall that in Vietnam, American troops were taught to regard the enemy as "Charlie" or "Gook," dehumanizing words that made "them" easier to kill. In like manner, the British, Spanish, and French conquests of the discovery period were made easier by declaring dark- or red- or yellow-skinned people as something less than human, as a godless, faceless "them," as literally another species.

Presumably there is some sort of inhibition against killing another member of our own species, because we have to work to overcome it...

But the rules do not apply to other species. Both humans and wolves lack inhibitions against killing chickens.

Humans kill other species all the time, even those with which we share the common bond of high intelligence. As you read this, hundreds of dolphins are being killed by tuna fishermen and drift netters. The killing goes on and on, and dolphins are not even a threat to us.

As near as we can tell, there is no inhibition against killing another species simply because it displays a high intelligence. So, as much as we love him, Carl Sagan's theory that if a species makes it to the top and does not blow itself apart, then it will be nice to other intelligent species is probably wrong. Once you admit interstellar species will not necessarily be nice to one another simply by virtue of having survived, then you open up this whole nightmare of relativistic civilizations exterminating one another.

It's an entirely new situation, emerging from the physical possibilities that will face any species that can overcome the natural interstellar quarantine of its solar system. The choices seem unforgiving, and the mind struggles to imagine circumstances under which an interstellar species might make contact without triggering the realization that it can't afford to be proven wrong in its fears.

Got that? We can't afford to wait to be proven wrong.

They won't come to get our resources or our knowledge or our women or even because they're just mean and want power over us. They'll come to destroy us to insure their survival, even if we're no apparent threat, because species death is just too much to risk, however remote the risk...

The most humbling feature of the relativistic bomb is that even if you happen to see it coming, its exact motion and position can never be determined; and given a technology even a hundred orders of magnitude above our own, you cannot hope to intercept one of these weapons. It often happens, in these discussions, that an expression from the old west arises: "God made some men bigger and stronger than others, but Mr. Colt made all men equal." Variations on Mr. Colt's weapon are still popular today, even in a society that possesses hydrogen bombs. Similarly, no matter how advanced civilizations grow, the relativistic bomb is not likely to go away...

We ask that you try just one more thought experiment. Imagine yourself taking a stroll through Manhattan, somewhere north of 68th street, deep inside Central Park, late at night. It would be nice to meet someone friendly, but you know that the park is dangerous at night. That's when the monsters come out. There's always a strong undercurrent of drug dealings, muggings, and occasional homicides.

It is not easy to distinguish the good guys from the bad guys. They dress alike, and the weapons are concealed. The only difference is intent, and you can't read minds.

Stay in the dark long enough and you may hear an occasional distant shriek or blunder across a body.

How do you survive the night? The last thing you want to do is shout, "I'm here!" The next to last thing you want to do is reply to someone who shouts, "I'm a friend!"

What you would like to do is find a policeman, or get out of the park. But you don't want to make noise or move towards a light where you might be spotted, and it is difficult to find either a policeman or your way out without making yourself known. Your safest option is to hunker down and wait for daylight, then safely walk out.

There are, of course, a few obvious differences between Central Park and the universe.

There is no policeman.

There is no way out.

And the night never ends.

From THE KILLING STAR by Charles Pellegrino and George Zebrowski

Regarding your discussion of the "Killing Star" hypothesis, I have an interesting analysis I haven't seen elsewhere. It relies on the fact that technology is not static, and the likelihood that it's a lot easier to build a relativistic interceptor operating over planetary distances than it is to build a relativistic missile operating over interstellar ones.

If two interstellar civilizations A and B discover one another (and we assume no faster-than-light travel), there are three possible relative technology levels: A is more advanced, B is more advanced, or A and B are at an exactly equivalent level.

If A is more advanced, they have nothing to fear from B. (Consider the immense difference in capabilities between an F-15 and an F-22, despite the fact that the F-22 is only a couple of decades more advanced.) A's defenses can probably stop any attack by B and their retaliation for any attack would likely wipe B out. So A has no reason to launch a pre-emptive strike against B, because it's more cost-effective to just build interceptors (which can also be used against any hypothetical C, D, etc.). And as with nuclear war, the interceptors don't have to be 100% effective — just effective enough to make B uncertain that his attack will leave A unable to retaliate. By the same token it would be suicide for B to try an attack against a more advanced A.

If B is more advanced, the same reasoning applies in reverse.

If A and B are (miraculously) at exactly the same level of technology, then A might decide it's necessary to launch a pre-emptive attack against B. But here's the kicker: if A launches a fleet of relativistic kill vehicles, by the time they arrive they are obsolete by at least as many years as the distance in light-years between A and B. A's attack is likely to fail. B's retaliation will, too. Consequently they have no reason to attempt strikes against each other.

While this doesn't necessarily refute the Zebrowski "kill them on detection" hypothesis, it does introduce enough uncertainty that rational alien civilizations might decide it's less risky to try peaceful coexistence first.

From James Cambias (2006)

Attacking with relativistic rockets may be a good idea if there are only two technological species, but if there are two, it seems to me likely that there will be more. Using a relativistic rocket to destroy a planet will reveal your position AND indicate that you are hostile to any possible third race that is out there.

To extend the Central Park analogy, the muzzle flash when you fire off your gun reveals your position and identifies that you are hostile to anyone else out there.

Bill Seney

After The Killing Star I found a flaw in Pellegrino's logic, called him, explained it to him, and he conceded the point.

Here it is: OK, you've detected radio signals from X light years away and, following the logic, prepare to send a planet-killer at the source. Only ... will the civilization still be killable by a single fractional-C strike when you get there? If it isn't, you now have definitely pissed-off neighbors that want you dead. And civilizations advance.

By the time you've registered the signal and can send a planet-killer back, optimistic assumptions about the speed of your NAFAL (not as fast as light) drives (say, 0.2c) suggest it's been a minimum of 6 * X years since the target civilization's radio output became detectable (which will be some time after the discovery of radio).

Everything hinges on your estimate of the interval between the commencement of large-scale radio emissions and self-sufficient offworld colonies; call that N. The radius in lightyears beyond which you don't dare try for the kill is N/6. For a reasonably conservative assumption about N (say, 300 years), that comes to 50 lightyears. Not a large distance.

Go ahead and play with the assumptions — speed of NAFAL drives, the radio-to-colonies interval. It's pretty hard to come up with a plausible scenario in which launching the civ-killer looks like a good idea.

Eric Raymond (2015)
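Raymond's arithmetic is easy to parameterize. Here is a minimal sketch under the quote's own assumptions: the strike lands X + X/v years after emissions begin, where X is the distance in light-years and v the NAFAL speed as a fraction of c, and the target becomes unkillable once N years have passed.

```python
def kill_radius_ly(n_years: float, nafal_speed_c: float) -> float:
    """Largest distance X (light-years) at which a first strike can
    still land before the target has self-sufficient offworld colonies.

    The signal takes X years to reach you and the projectile takes
    X / v years to fly back, so the strike arrives X * (1 + 1/v) years
    after detectable emissions began.  Require that to be less than N
    and solve for X."""
    return n_years / (1.0 + 1.0 / nafal_speed_c)

# Raymond's numbers: N = 300 years, 0.2c drives -> a 50 light-year radius.
print(kill_radius_ly(300, 0.2))  # 50.0
```

Playing with the assumptions, as the quote suggests, only moves the radius modestly: even 0.5c drives with the same 300-year interval give a radius of 100 light-years.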

Run To The Stars

From Run To The Stars by Michael Scott Rohan (1982). The heroes have discovered the Dreadful Secret that the BC world government is hiding: explorers have discovered the first known alien species, and BC is sending a huge missile to kill all the aliens.

"Alien," muttered Ryly, and coughed rackingly, unpleasant in the confined space. "The Colony - people, that was different, but - Bellamy, hey, hold on. Think a minute. So what you say's true - couldn't the BC still be right? I mean, these're aliens, man! Better we'd never contacted them, but now they've found us - hell, we can't trust them! We can't be sure! It's the human race at stake."

"Ye're sayin' that genocide - worse than that, even - that ye like the idea?" demanded Kirsty.

"Hell, no, think I'm Stalin or somethin'? Like I said - better we'd laid low, shut up, kept to ourselves, safe, Earth and the Colony both. But these things, we can't afford to take a risk with them! Better the missile cleans the mistake off the slate, things quiet down an' we're safe again. I don't like it, I hate it - but then I'm not so wild about some of the things you feel you were justified in doin' either..."

..."Ryly, you're no fool, but you're bloody well talking like one. That missile can be tracked, man! With the mass it'll have by the time it connects it'll leave a wake of gravitational disturbance - on interstellar radiation, for a start - pointing right back this way. That's why it's a one-shot weapon - no second chances! Safe? What's safe? As if we could somehow hide away from the rest of the universe. Not as long as we use any kind of broadcast communication, we can't Think of it! Just round here, in our own little neighborhood, three planets inhabited, two with intelligent life, two with roughly the same kind of life! There must be millions of inhabited worlds out there, whatever the experts spout. Some like us, some not. Sooner or later one of them's bound to track back our communications overspill and find us. What then? Under the bed?"

"If that missile hits the target," said Kristy venomously, "we'll have tae hide. Shrink back into our own wee system, never make a noise, never stir outside it. What if any other race ever found out what we'd done? Then we'd never be safe. They'd never trust us. Not for an instant. There's bound to be some of them who think like you, Ryly. We'd be giving them grand evidence, wouldn't we? They'd wipe us out like plague germs and feel good about it!"

My own imagination was striking sparks off Kirsty's and kindling an evil flame. "Unless..." I began, and actually had trouble shaping the thought. "Unless we got them first. At once, on first contact. A pre-emptive strike, before they could possibly have a chance to find out about us. Hellfire, isn't that a glorious future history for us! A race of paranoid killers, skulking in our own backwater system when we might have had the stars! Clamping down on exploration, communications, anything that might lead someone else to us and make us stain our hands again with the same old crime... Carrying that weight down the generations. What would that make of us?"

"Predators," breathed Kirsty, "Carrion-eaters - no, worse, ghouls, vampires, killing just tae carry on our own worthless shadow-lives."

From Run To The Stars by Michael Scott Rohan (1982)


Daniel Krouse brought to my attention some important new ideas on this matter:

Peter Watts wrote a book, "Blindsight" that covers a first contact scenario from a new and interesting angle ... I wanted to share with you an excerpt that I feel would serve as a good example on your Aliens page. It has a lot in it actually, as the whole of it tackles first contact from an evolutionary and game theory POV and raises some good points (such as the possibility that even our TV signals could be considered a hostile action). But my favorite bit and the part I include here is where he expands on Powell and Pellegrino's 3 assumptions with a 4th one: Technology implies belligerence.

Daniel Krouse

Once there were three tribes. The Optimists, whose patron saints were Drake and Sagan, believed in a universe crawling with gentle intelligence — spiritual brethren vaster and more enlightened than we, a great galactic siblinghood into whose ranks we would someday ascend. Surely, said the Optimists, space travel implies enlightenment, for it requires the control of great destructive energies. Any race which can't rise above its own brutal instincts will wipe itself out long before it learns to bridge the interstellar gulf.

Across from the Optimists sat the Pessimists, who genuflected before graven images of Saint Fermi and a host of lesser lightweights. The Pessimists envisioned a lonely universe full of dead rocks and prokaryotic slime. The odds are just too low, they insisted. Too many rogues, too much radiation, too much eccentricity in too many orbits. It is a surpassing miracle that even one Earth exists; to hope for many is to abandon reason and embrace religious mania. After all, the universe is fourteen billion years old: if the galaxy were alive with intelligence, wouldn't it be here by now?

Equidistant to the other two tribes sat the Historians. They didn't have too many thoughts on the probable prevalence of intelligent, spacefaring extraterrestrials — but if there are any, they said, they're not just going to be smart. They're going to be mean.

It might seem almost too obvious a conclusion. What is Human history, if not an ongoing succession of greater technologies grinding lesser ones beneath their boots? But the subject wasn't merely Human history, or the unfair advantage that tools gave to any given side; the oppressed snatch up advanced weaponry as readily as the oppressor, given half a chance. No, the real issue was how those tools got there in the first place. The real issue was what tools are for.

To the Historians, tools existed for only one reason: to force the universe into unnatural shapes. They treated nature as an enemy, they were by definition a rebellion against the way things were. Technology is a stunted thing in benign environments, it never thrived in any culture gripped by belief in natural harmony. Why invent fusion reactors if your climate is comfortable, if your food is abundant? Why build fortresses if you have no enemies? Why force change upon a world which poses no threat?

Human civilization had a lot of branches, not so long ago. Even into the twenty-first century, a few isolated tribes had barely developed stone tools. Some settled down with agriculture. Others weren't content until they had ended nature itself, still others until they'd built cities in space. We all rested eventually, though. Each new technology trampled lesser ones, climbed to some complacent asymptote, and stopped — until my own mother packed herself away like a larva in honeycomb, softened by machinery, robbed of incentive by her own contentment. (ed note: Read the book for that bit to make sense)

But history never said that everyone had to stop where we did. It only suggested that those who had stopped no longer struggled for existence. There could be other, more hellish worlds where the best Human technology would crumble, where the environment was still the enemy, where the only survivors were those who fought back with sharper tools and stronger empires. The threats contained in those environments would not be simple ones. Harsh weather and natural disasters either kill you or they don't, and once conquered — or adapted to — they lose their relevance. No, the only environmental factors that continued to matter were those that fought back, that countered new strategies with newer ones, that forced their enemies to scale ever-greater heights just to stay alive.

Ultimately, the only enemy that mattered was an intelligent one.

And if the best toys do end up in the hands of those who've never forgotten that life itself is an act of war against intelligent opponents, what does that say about a race whose machines travel between the stars? The argument was straightforward enough. It might even have been enough to carry the Historians to victory — if such debates were ever settled on the basis of logic, and if a bored population hadn't already awarded the game to Fermi on points. But the Historian paradigm was just too ugly, too Darwinian, for most people, and besides, no one really cared any more. Not even the Cassidy Survey's late-breaking discoveries changed much. So what if some dirtball at Ursae Majoris Eridani had an oxygen atmosphere? It was forty-three light years away, and it wasn't talking; and if you wanted flying chandeliers and alien messiahs, you could build them to order in Heaven. (ed note: Again, read the book to understand Heaven) If you wanted testosterone and target practice you could choose an afterlife chock-full of nasty alien monsters with really bad aim. If the mere thought of an alien intelligence threatened your worldview, you could explore a virtual galaxy of empty real estate, ripe and waiting for any God-fearing earthly pilgrims who chanced by. It was all there, just the other side of a fifteen-minute splice job and a cervical socket. Why endure the cramped and smelly confines of real-life space travel to go visit pond scum on Europa?

And so, inevitably, a fourth Tribe arose, a Heavenly host that triumphed over all: the Tribe that Just Didn't Give A Sh*t. They didn't know what to do when the Fireflies showed up. So they sent us, and — in belated honor of the Historian mantra — they sent along a warrior, just in case. It was doubtful in the extreme that any child of Earth would be a match for a race with interstellar technology, should they prove unfriendly. Still, I could tell that Bates' presence was a comfort, to the Human members of the crew at least. If you have to go up unarmed against an angry T-rex with a four-digit IQ, it can't hurt to have a trained combat specialist at your side.

At the very least, she might be able to fashion a pointy stick from the branch of some convenient tree.

From Blindsight by Peter Watts (2006)

We Know You Are Out There


We made a mistake. That is the simple, undeniable truth of the matter, however painful it might be. The flaw was not in our Observatories, for those machines were as perfect as we could make them, and they showed us only the unfiltered light of truth. The flaw was not in the Predictor, for it is a device of pure, infallible logic, turning raw data into meaningful information without the taint of emotion or bias. No, the flaw was within us, the Orchestrators of this disaster, the sentients who thought themselves beyond such failings. We are responsible.

It began a short while ago, as these things are measured, less than 66 Deeli ago, though I suspect our systems of measure will mean very little by the time anyone receives this transmission. We detected faint radio signals from a blossoming intelligence 214 Deelis outward from the Galactic Core, as photons travel. At first crude and unstructured, these leaking broadcasts quickly grew in complexity and strength, as did the messages they carried. Through our Observatories we watched a world of strife and violence, populated by a barbaric race of short-lived, fast-breeding vermin. They were brutal and uncultured things which stabbed and shot and burned each other with no regard for life or purpose. Even their concepts of Art spoke of conflict and pain. They divided themselves according to some bizarre cultural patterns and set their every industry to the cause of death.

They terrified us, but we were older and wiser and so very far away, so we did not fret. Then we watched them split the atom and breach the heavens within the breadth of one of their single, short generations, and we began to worry. When they began actively transmitting messages and greetings into space, we felt fear and horror. Their transmissions promised peace and camaraderie to any who were listening, but we had watched them for too long to buy into such transparent deceptions. They knew we were out here, and they were coming for us.

The Orchestrators consulted the Predictor, and the output was dire. They would multiply and grow and flood out of their home system like some uncountable tide of Devourer worms, consuming all that lay in their path. It might take 68 Deelis, but they would destroy us if left unchecked. With aching carapaces, we decided to act, and sealed our fate.

The Gift of Mercy was 84 strides long with a mouth 2/4 that in diameter, filled with many 44 weights of machinery, fuel, and ballast. It would push itself up to 2/8th of light speed with its onboard fuel, and then begin to consume interstellar Primary Element 2/2 to feed its unlimited acceleration. It would be traveling at nearly light speed when it hit. They would never see it coming. Its launch was a day of mourning, celebration, and reflection. The horror of the act we had committed weighed heavily upon us all; the necessity of our crime did little to comfort us.

The Gift had barely cleared the outer cometary halo when the mistake was realized, but it was too late. The Gift could not be caught, could not be recalled or diverted from its path. The architects and work crews, horrified at the awful power of the thing upon which they labored, had quietly self-terminated in droves, walking unshielded into radiation zones, neglecting proper null-pressure safety, or simply ceasing their nutrient consumption until their metabolic functions stopped. The appalling cost in lives had forced the Orchestrators to streamline the Gift's design and construction. There had been no time for the design or implementation of anything beyond the simple, massive engines and the stabilizing systems. We could only watch in shame and horror as the light of genocide faded in infrared against the distant void.

They grew, and they changed, in a handful of lifetimes. They abolished war, abandoned their violent tendencies and turned themselves to the grand purpose of life and Art. We watched them remake first themselves, and then their world. Their frail, soft bodies gave way to gleaming metals and plastics, they unified their people through an omnipotent communications grid and produced Art of such power and emotion, the likes of which the Galaxy had never seen before, and never will again, because of us.

They converted their home world into a paradise (by their standards) and many 10⁶s of them poured out into the surrounding system with a rapidity and vigor that we could only envy. With bodies built to survive every environment from the day-lit surface of their innermost world, to the atmosphere of their largest gas giant and the cold void in between, they set out to sculpt their system into something beautiful. At first we thought them to be simple miners, stripping the rocky planets and moons for vital resources, but then we began to see the purpose to their construction, the artworks carved into every surface, and traced across the system in glittering lights and dancing fusion trails. And still, our terrible Gift approached.

They had less than 22 Deeli to see it, following so closely on the tail of its own light. In that time, oh so brief even by their fleeting lives, more than 10¹⁰ sentients prepared for death. Lovers exchanged last words, separated by worlds and the tyranny of light speed. Their planet-side engineers worked frantically to build sufficient transmission capacity to upload countless masses with the necessary neural modification, while those above dumped lifetimes of music and literature from their databanks to make room for passengers. Those lacking the required hardware, or the time to acquire it, consigned themselves to death, lashed out in fear and pain, or simply went about their lives as best they could under the circumstances.

The Gift arrived suddenly, the light of its impact visible in our skies, shining bright and cruel even to the unaugmented ocular receptor. We watched and we wept for our victims, dead so many Deelis before the light of their doom had even reached us. Many 64s of those who had been directly or even tangentially involved in the creation of the Gift sealed their spiracles as a final penance for the small roles they had played in this atrocity. The light dimmed, the dust cleared, and our Observatories refocused upon the place where their shining blue world had once hung in the void, and found only dust and the pale gleam of an orphaned moon, wrapped in a thin, burning wisp of atmosphere that had once belonged to its parent.

Radiation and relativistic shrapnel had wiped out much of the inner system, and continent-sized chunks of molten rock carried screaming ghosts outward at interstellar escape velocities, damned to wander the great void for an eternity. The damage was apocalyptic, but not complete. From the shadows of the outer worlds, tiny points of light emerged, thousands of fusion trails of single ships and world ships and everything in between, many 10⁶s of survivors in flesh and steel and memory banks, ready to rebuild. For a few moments we felt relief, even joy, and we were filled with the hope that their culture and Art would survive the terrible blow we had dealt them. Then came the message, tightly focused at our star, transmitted simultaneously by hundreds of their ships.

"We know you are out there, and we are coming for you."


From We Know You Are Out There by Panini's Cupcake (2008)

Why I Don't Worry


Last time I explained why I don't lie awake nights worrying that contact with extraterrestrial civilizations will lead to humanity getting conquered by invading alien armies. It's just too hard to be worth the effort.

But what if the aliens don't want to conquer the Earth? What if they just want to wipe us out?

It's actually easier to destroy rival civilizations across interstellar distances than it is to conquer them. Conquest, after all, involves fairly large armies, supplies — and above all, deceleration when the starships reach the target system. Relativistic warheads don't need any reaction mass or energy to slow down (though they might need a motor for terminal guidance). A few bricks hitting the Earth at 99.9 percent of lightspeed would do as much damage as the asteroid which killed the dinosaurs. (And those bricks could be intelligently targeted to maximize the harm they do.)
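The damage estimate can be sanity-checked with the relativistic kinetic energy formula, KE = (γ - 1)mc². A minimal sketch (the 3 kg brick mass is an assumption for illustration):

```python
import math

C = 299_792_458.0            # speed of light, m/s
TNT_J_PER_MEGATON = 4.184e15 # joules per megaton of TNT

def relativistic_ke_joules(mass_kg: float, beta: float) -> float:
    """Kinetic energy (gamma - 1) * m * c^2 for speed beta = v/c."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return (gamma - 1.0) * mass_kg * C ** 2

# A 3 kg "brick" at 99.9% of lightspeed (illustrative numbers only):
ke = relativistic_ke_joules(3.0, 0.999)
print(f"{ke:.2e} J, about {ke / TNT_J_PER_MEGATON:,.0f} megatons of TNT")
```

Each such brick arrives carrying gigaton-range energy, which is the point of the argument; intelligent targeting then multiplies the harm.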

At least one conference, featuring the likes of Isaac Asimov and Jill Tarter (I can't find a link to reference it), proposed that launching a salvo of relativistic projectiles would be the optimum strategy for any species which so much as detects another advanced civilization nearby. Get them before they get us!

As with interstellar conquest, I don't buy it. There are sound logical reasons why first strikes across interstellar distances are very bad ideas.

Hi There! Launching an interstellar death-barrage is not something you can hide. The energy output of a relativistic rocket is very bright. Any civilization with immense space-based telescopes (like the kind we're planning to use for detecting extrasolar planets) can spot them at arbitrarily large distances. If the projectiles are launched using some kind of ground-based laser or maser system, the launching beam is like a beacon shining in the direction of the target.

This is important for two reasons.

First, the target world gets some warning that the strike is on its way. That means they could launch a counter-strike during the flight time of the projectiles.

(How do they know it's a salvo of projectiles rather than a fleet of friendly starships? The color and brightness of the exhaust reveals the energy output involved; the Doppler shift reveals the acceleration profile and thus the mass of the payload. If someone's launching small objects at very high velocities, it's not a friendly gesture. About the only way to fool the target is to go all-in and launch a very big projectile, at a velocity which might match the mission profile for a starship. But when the "starship" fails to start decelerating, the target system still gets a clue something is amiss, and a slower vehicle could be intercepted.)

Second, other civilizations can also see this happening — civilizations the would-be genocidal lunatics don't know about. And even if you're kind of on the fence about the wisdom of firing off pre-emptive genocidal relativistic kill-vehicle attacks on other civilizations, watching someone else do it would overcome a lot of objections, and move the demonstrated genocidal lunatics to the top of everyone else's hit list. So … don't do it.

Time Lag, Again. I think we all agree that attacking a superior civilization is a bad idea, right? Some piddly Kardashev Class I outfit decides to take pot-shots at a big-time Kardashev II crew, they're gonna get messed up but good. Stands to reason.

This means, of course, that if you are a big-time Kardashev II civilization, you really don't have to worry much about the Kardashev I peons bothering you. They may be primitive, but they're not stupid. They know you can mess them up if they start something.

So nobody's going to be shooting at superior civilizations, and nobody's going to be shooting at inferior civilizations, either. What does that leave? Well, you can target civilizations at about your level of technology, just in case they have similar ideas about you.

See the problem yet? Time lag! Suppose you launch your attack at a peer civilization 50 light-years away. Even if the missiles are going nearly the speed of light, they're still going to spend half a century getting there. During which time the target civilization has half a century of technological progress. So if your aggressors are Stalin's Russia, launching the missiles in 1950, the weapons arrive in Clinton's America in 2000, equipped with interceptors which can easily handle them.

This is especially important because of course you can't know just how much progress a distant civilization will make during the transit time. Maybe you're advanced enough to catch them off-guard and wipe them out … but maybe you aren't, and now you've got an implacable enemy.

Of course, the odds of even detecting a peer or near-peer civilization are remote. Given the immense age of the universe and the long time-scale of life on Earth, it's highly improbable that any aliens we detect would be close to us in technology. It's far more likely that we'll pick up indications of Godlike Kardashev II civilizations, or send out probes which observe species just figuring out how to make tools, than beings with similar capabilities to our own.

The gods are safe from us, and we're safe from the primitives. So nobody has to get pre-emptively violent.

Deterrence. Once you're capable of building weapons which can strike across interstellar distances, you're also capable of hiding weapons across interstellar distances — tucking away a little counter-strike force in nearby star systems, or deep in the Kuiper Belt. There's no way for an attacker to know in advance if you've got some entertaining surprises in store — and the consequences for that attacker are likely to be dire if you do.

This means that the rational assumption, for anyone interested in keeping their species and civilization intact, is to assume all possible adversaries have just such a counterforce in being. So don't attack them.

If I can think of these things, so can alien strategic planners. And really, it's difficult to imagine any being with the ability to construct interstellar vehicles thinking it's a good idea to launch unprovoked attacks on newly-discovered civilizations. There are simply too many unknowns.

So that's why I don't worry about E.T. trying to wipe us out. Back to whatever you were doing.


For stories about primitive and super-advanced extraterrestrials, buy my ebook Outlaws and Aliens!

The Prisoner's Dilemma

The problem of whether to commit genocide upon an alien race or not is vaguely related to the famous "prisoner's dilemma".

The problem is that the Prisoner's Dilemma makes it all too likely that Paranoia beats reason. For those unfamiliar with it... here's the Space version.

Race A and B both have roughly comparable technology, but don't understand each other. Each race has 2 options: Launch missiles or Ignore each other.

If both races open fire, both races are devastated but not destroyed.

If one race opens fire and the other ignores it, they're utterly exterminated.

If both races ignore each other, they live in peace and are fine.

The problem is, neither can really communicate with the other. And although the cooperative choice of ignoring each other is best, the risk that they fire first while you ignore them is too great. Thus this scenario, via game theory, will always result in missiles being exchanged.

Laura 'Nephtys' Reynolds
                 | Race B Ignores                                  | Race B Attacks
Race A Ignores   | Both live in constant fear                      | Race A exterminated; Race B lives free of fear
Race A Attacks   | Race A lives free of fear; Race B exterminated  | Both are devastated but not destroyed

As the Wikipedia article shows, the dilemma comes when you assume that each race is trying to maximize its survival.

Say you are Race A. If Race B ignores you, your best outcome is to attack. Then you do not have to live in fear, spend resources on building defenses, and so on. If race B attacks, your best outcome is still to attack, since the alternative is extermination.

And since Race B will make the same determination, both races will attack and be devastated but not destroyed.

An outside observer will note that if the two races are taken as a group, the best outcome of the group is for both races to cooperate. If either attacks, the outcome for the group will be worse. And if both attack, both races receive a worse outcome than if they had both ignored each other.

So if both races selfishly look out for themselves, both will attack and the result is devastation. If both races altruistically think about the group, both will ignore and both will live. And if one race is selfish while the other is altruistic, yet again it will be proven that nice guys finish last.

And it actually doesn't matter if they can communicate with each other or not, a given race cannot be sure if the other is being truthful. If the two races can communicate, they run into the "cooperation paradox". Each race must convince the other that they will take the altruistic option despite the fact that the race could do better for themselves by taking the selfish option.

Simple matrix:

             | Cooperate            | Defect
Cooperate    | win some / win some  | lose all / win all
Defect       | win all / lose all   | lose some / lose some

Detailed matrix:

             | Cooperate | Defect
Cooperate    | D, D      | C, B
Defect       | B, C      | A, A

Of course the prisoner's dilemma is a very artificial set-up; in real life the results would not be quite so clean-cut. Above are two formulations of the prisoner's dilemma matrix.

In the Detailed matrix, A, B, C, and D are various outcomes, and the relative value of the outcomes are B > D > A > C. If those relative values are true, the prisoner's dilemma is present. In the first example, B = alive and free from fear, D = alive but in constant fear, A = alive but devastated and C = exterminated.
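The dominance argument above can be sketched directly. The payoff numbers below are illustrative placeholders; only their ordering (B > D > A > C) matters:

```python
# Payoffs to the row player ("you"); bigger is better.  The ordering
# from the text is B > D > A > C; the actual values are placeholders.
B, D, A, C = 3, 2, 1, 0

payoff = {                       # (your move, their move) -> your payoff
    ("ignore", "ignore"): D,     # alive, but in constant fear
    ("ignore", "attack"): C,     # exterminated
    ("attack", "ignore"): B,     # alive and free from fear
    ("attack", "attack"): A,     # devastated but not destroyed
}

# "Attack" strictly dominates "ignore": whatever the other race does,
# attacking pays the row player more.  Hence the grim equilibrium.
for their_move in ("ignore", "attack"):
    assert payoff[("attack", their_move)] > payoff[("ignore", their_move)]
print("attack strictly dominates ignore")
```

Since the game is symmetric, Race B reaches the same conclusion, and mutual attack is the unique equilibrium even though mutual ignoring is better for both.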

The prisoner's dilemma does have some vague similarities to the old cold war doctrine of Mutual Assured Destruction, though they are actually not very closely related. The prisoner's dilemma also does not work in those cases where what is bad for one player is equally bad for the other. An example is the game of "chicken" as seen in the 1955 film Rebel Without A Cause, where the drivers of both cars race to a deadly cliff and the first one to "chicken out" loses. But game theorists are working on a new approach called "Drama Theory" (warning: commercial website. No endorsement implied.)

"Gently, Sandy," First Lieutenant Cargill interjected. "Dr. Horvath, I take it you've never been involved in military intelligence? No, of course not. But you see, in intelligence work we have to go by capabilities, not by intentions. If a potential enemy can do something to you, you have to prepare for it, without regard to what you think he wants to do."

From The Mote In God's Eye by Larry Niven and Jerry Pournelle (1975)
