In science fiction universes such as Star Wars, the existence of zillions of species of aliens is taken for granted. But when a society that has zero knowledge of alien species actually meets one for the very first time, well, that's a catastrophic event. Dare I say a paradigm shift or epistemological rupture?
And that occurs with the mere knowledge that aliens exist.
Seconds later things become real tense. While the possibilities of wonderful advances in science and inter-species trade are astronomical, sadly the same can be said for the chances of the extinction of the human race. When the existence of your species is on the line, you'd better step real cautiously and take as few risks as possible.
Subsequent first contacts with other alien species are likely not to be quite as wrenching on human society, but the danger of human extinction is always going to be at full strength no matter how routine first contact situations become. You never know if the next new alien species is the one that is a xenophobic axe murderer, itching to eradicate the human race with a relativistic bombardment or something.
Naturally The Encyclopedia of Science Fiction has an extensive article on the topic, including lots of examples.
An alien civilization of similar technological advancement to Terra could contact us Earth-folk first. In pulp SF this was traditionally by an alien flying saucer landing on the US White House lawn. The standard motives from 1950's SF novels are, according to Solomon Golomb:
Sir Arthur C. Clarke notes that the nasty little short story by Damon Knight adds an eighth motive: Serve!
Any or all of these motivations could apply to explorers from Terra, deciding to openly contact some alien civilization they become aware of.
But don't forget the ever popular Interstellar Trading.
There are also anti-motivations. Even if the human race does not want to go all genocidal on a newly discovered alien civilization's posterior, neither do you want to make it easy for them to kill you. As far back as Murray Leinster's classic "First Contact" (1945) the warning is that when one of your starships unexpectedly encounters an alien starship, neither can let the other discover the location of its home planet. At least, not without learning the alien homeworld's location as well. If the Terran starship stupidly lets the Blortch starship find the location of Terra, well, Terra is at the Blortch's mercy. The Blortch can send their entire star fleet to blow Terra to Em-Cee-Squared, secure in the knowledge that the Terran star fleet has no idea where in the universe to dispatch a retaliation task force.
This only happens when mutually alien ships stumble over each other in deep space. Naturally if the Terran exploration ship encounters the Blortch ship while both are orbiting the Blortch homeworld, well the cat is already out of the bag. Then the problem is how the Terran ship gets the vital information back to Terra without leading the Blortch back home.
Things can get quite ugly. In Michael McCollum's Antares Passage (1998) all ships have explosive charges on their navigation computers and the astrogators have been brainwashed to commit suicide if they are in danger of being captured by the enemy. In the aforementioned "First Contact", the human and alien ship try to destroy each other in battle, knowing that neither one dare run for home.
If you are really desperate, you will have to trigger the ship's self-destruct mechanism.
Mid-last-century a popular theme was your best friend who lives next door suddenly having their entire personality utterly transformed by Invaders From Outer Space. Most analyses hold that this is a thinly disguised allegory for the Second Red Scare in the US.
The original Invasion of the Body Snatchers had the hapless humans killed and replaced with perfect duplicates. In this section we are concerned with the creepy sub-trope of the victims not being killed, but merely mentally enslaved and controlled like meat puppets. Bonus horror points for the controlling aliens actually burrowing into the victim's body in order to take control of the brain.
Examples include the neural parasite from ST:TNG "Conspiracy", the flying parasites in ST:TOS "Operation: Annihilate!", Redjac in ST:TOS "Wolf in the Fold", The Invisibles from The Outer Limits, The Vang series by Christopher Rowley, the slug from The Hidden, Hivers in A Hat Full of Sky, the Goa'uld in Stargate SG-1, the Drakh Keepers from Babylon 5, the Vaylen in the Iron Empires graphic novels, the energy being from Kronos, and Robert Heinlein's The Puppet Masters. Arguable examples include the Ceti eel from Star Trek II: The Wrath of Khan and the Centaurian slug from Star Trek (2009).
The benign version is when the burrowing alien is not a parasite but actually a symbiont. One that helps the human: repairing bodily damage, curing disease, and granting superhuman powers (often psionic). And no enslaving at all, the alien is a partner. Technically the correct term is "symbiont" even though science fiction authors mistakenly use the term "symbiote".
Examples of symbioses include Needle, Through the Eye of a Needle, and Insidekick. The alien symbiont in Needle helped the protagonist survive being stabbed through the heart. The symbiont in Insidekick gave the protagonist incidental powers like a prolonged life span and teleportation.
Back in 1961, there was a scientific conference held at the Green Bank facility about the search for extraterrestrial intelligence. In it, the host Dr. Frank Drake presented his now-famous "Drake Equation". The equation calculates N, which is the number of civilizations in our galaxy that it would be possible to communicate with by radio. After all, this equation was invented for a conference about communicating with aliens by radio.
It is a pity that we have not got a clue about the values of the last four parameters.
This means that the equation is pretty worthless for calculating the actual number of radio-using aliens out there. But it can be useful to study how proposed values for the parameters will affect N.
Note that N is the number of radio-using alien civilizations. Science fiction authors have been using the Drake Equation to calculate the number of alien civilizations, which is not quite the same thing. But close.
Authors can start off with a desired value for N, and work backwards to find values for the other parameters that will give the desired result. Or use their personal best guess for the parameters and see what value of N pops out.
The Drake Equation is:
N = R* × ƒp × ne × ƒl × ƒi × ƒc × L
- N = the number of civilizations in our galaxy with which radio-communication might be possible
- R* = the average rate of star formation in our galaxy
- ƒp = the fraction of those stars that have planets
- ne = the average number of planets/moons that can potentially support life per star that has planets
- ƒl = the fraction of planets that could support life that actually develop life at some point
- ƒi = the fraction of planets with life that actually go on to develop intelligent life (civilizations)
- ƒc = the fraction of civilizations that develop a technology that releases detectable signs of their existence into space
- L = the length of time for which such civilizations release detectable signals into space
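Since the equation is just straight multiplication, a back-of-the-envelope calculator is easy to sketch. The Python below is purely illustrative; every parameter value in it is an assumed guess for demonstration, not a consensus estimate:

```python
# Drake Equation: N = R* * fp * ne * fl * fi * fc * L
# Every parameter value below is an illustrative assumption, not a
# consensus scientific estimate.

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Number of detectable radio-using civilizations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

n = drake(
    r_star=1.5,       # stars formed per year in the galaxy
    f_p=1.0,          # essentially all stars have planets
    n_e=0.4,          # potentially habitable worlds per planet-bearing star
    f_l=1.0,          # optimist: life always arises where it can
    f_i=0.01,         # 1% of biospheres produce intelligence
    f_c=0.1,          # 10% of those leak detectable signals
    lifetime=1000.0,  # signaling era lasts 1,000 years
)
print(round(n, 3))  # 0.6, i.e. less than one chatty civilization at a time
```

Authors working backwards from a desired N can simply solve for any one parameter by dividing N by the product of all the others.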
For ne: 0.4 is based on the probability that a planet is in its star's habitable zone, determined by solar heating. 0.1 is based on the galactic habitable zone, determined by the regions of the galaxy with enough heavy elements and a lack of nearby deadly supernovae.
Things get more uncertain when you consider that many moons (such as Europa or Titan) might support life. This drastically increases the number of habitable sites in a given solar system.
And proponents of the Rare Earth hypothesis say that in order for their hypothesis to be true, this value must be so close to zero that Terra is the only habitable world. Which violates the mediocrity principle and the Copernican principle, as well as being no fun at all for science fiction authors.
For ƒl: 1.0 if you are an optimist, 0.13 if you are a pessimist.
The 1.0 is based on the fact that life arose on Terra almost immediately after favorable conditions arose. The 0.13 comes from Charles H. Lineweaver and Tamara M. Davis, who made a statistical estimate from the length of time life took to evolve on Terra.
The value of ƒi is controversial, which is a code word for "who the heck knows?" Pretty much every value between 0.0 and 1.0 has been proposed, depending upon the proposer's particular axe to grind.
ƒc is also controversial. Some civilizations that have the technology to communicate might be paranoid enough to keep silent. Other civilizations might not have the technology to communicate, but do have technology sufficiently noisy that it can be detected. Again: "who the heck knows?"
L is the most controversial of all. At its most innocuous, it could measure how long it takes for a civilization to become paranoid about giving away its position. At its most controversial, it could measure the average lifetime of a technological civilization, which is where the debate turns ugly. Overpopulation, global warming, global thermonuclear war, and other names for the Four Horsemen of the Apocalypse start being thrown around, and the discussion rapidly goes downhill from there.
More science-fictionally, L could measure how long it takes a civilization to be cut short in an unexpected apotheosis by a Vingean Singularity.
There have been several suggested modifications to the Drake Equation.
Alien civilizations might colonize other worlds. In the paper The Great Silence: The Controversy Concerning Extraterrestrial Intelligent Life, David Brin derives three equations to calculate the effects of this on N. The equations require calculus, so I'm not going to bother writing about them; you can find them in the paper.
A given planet might give rise to several alien civilizations. An additional parameter is added for the Reappearance Factor, the average number of times a planet engenders alien civilizations. Like the other parameters this is very hard to estimate. A lot depends upon what kills off a given civilization, specifically how much it spoils the planet for making a new civilization. A little thing like global thermonuclear war and nuclear winter would eradicate a civilization but the planet would totally recover in a few million years. But if the primary star grew so swollen that it vaporized the planet, that would be the end. Another factor is that the first civilization to arise on a planet might use up all the fossil fuels and easily reached ores. The subsequent civilizations are at a disadvantage. They have to jump directly to off-shore oil drilling instead of just shooting a bullet in the ground like Jed Clampett.
An alien civilization, perfectly capable of sending radio messages, just might be paranoid enough that they keep silent. There might be civilization-killers lurking about, no sense attracting their attention. This is called the METI factor, for Messaging to ExtraTerrestrial Intelligence.
The July 2013 issue of Popular Science, in an article about the TV show Doctor Who, adds the parameter ƒd, which is the fraction of civilizations that can survive an alien attack from space. The "d" is for "Dalek".
Sooner or later one has to confront the Fermi Paradox. A good overview of the problem is David Brin's Xenology: The Science of Asking Who's Out There and The 'Great Silence': the Controversy Concerning Extraterrestrial Intelligent Life. For more detail, try Where Is Everybody?: Fifty Solutions to the Fermi Paradox and the Problem of Extraterrestrial Life by Stephen Webb.
The Fermi Paradox points out that:
- There is a high probability of large numbers of alien civilizations
- But we don't see any
So by the observational evidence, there are no alien civilizations. The trouble is that means our civilization shouldn't be here either, yet we are.
The nasty conclusion is that our civilization is here, so far. But our civilization is fated for death, probably sooner rather than later. This is called The Great Filter, and it is a rather disturbing thought.
And the problem is not just that we see no alien civilizations. It is the fact that humans exist at all. Terra should by rights be an alien colony, with the aliens using dinosaurs as beasts of burden and all pre-humans exterminated eons ago as pests.
Using slower-than-light starships it would be possible to colonize the entire galaxy in 5 million to 50 million years. By one alien civilization. Naturally the time goes down as the number of colonizing civilizations goes up.
So during the current life-span of our galaxy, it would have been possible for it to be totally colonized 250 to 2500 times. At a minimum.
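The "250 to 2500 times" figure falls straight out of dividing the galaxy's age by the length of one colonization wave. A quick sanity check (the 13-billion-year galactic age is a rough assumed round number):

```python
# How many complete end-to-end colonizations fit in the galaxy's lifetime?
GALAXY_AGE_YEARS = 13e9  # rough assumed age of the Milky Way

fast_wave = 5e6    # optimistic: 5 million years per galaxy-wide colonization
slow_wave = 50e6   # pessimistic: 50 million years

print(GALAXY_AGE_YEARS / fast_wave)  # 2600.0
print(GALAXY_AGE_YEARS / slow_wave)  # 260.0
```

Close enough to the quoted 250 to 2500, and remember those are minimums.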
The Fermi Paradox asks: why isn't Terra an alien colony right now?
Granted an alien civilization might not be interested in colonization. There might be thousands of civilizations all content on their home planets, with nary a thought of colonization at all. But remember it only takes one. For anti-colonization bias to be a solution to the Fermi paradox, every single freaking civilization would have to share it, with no exceptions at all. If there is even one exception, the galaxy is colonized in the blink of a galactic eye.
Again, the problem with no alien civilizations existing is that it implies our civilization should not exist either. A galaxy with 400 billion stars and 13.8 billion years of time to play with should have produced either millions of civilizations or zero civilizations, but not just one civilization. Violates the mediocrity principle and the Copernican principle, that does. Every single time people have theorized that Terra has a central, specially favored position in the universe, it has turned out to be ludicrously wrong.
Which means our civilization exists so far, but it is due to become extinct quite soon.
This means it is going to be real bad news if we discover any alien life forms at all in our solar system, even bacteria. It will imply that life is common in the universe, life on Terra is not special, The Great Filter must have wiped out all the other civilizations, and we are next.
The Great Filter does not necessarily mean 100% of all species become extinct around our level of development. A more merciful solution would be some filter that makes 100% of all species at our level of development stop trying to communicate with other species, and never ever attempt interstellar colonization. A candidate filter is The Singularity, if you can make post-singularity species 100% non-communicating and non-colonizing (i.e., Fermi Invisible). Therein lies the rub. Since a singularity is literally a point beyond which prediction is impossible, making all singularities for all species 100% anything is probably impossible.
Other Fermi Paradox Solutions
Naturally there are quite a few solutions proposed. Stephen Webb's book has fifty of them. Some examine the Drake Equation's parameters with an eye towards finding unexpected constraints on the values.
A recent paper, Dissolving the Fermi Paradox, makes a strong case that the probability of us being the only intelligent civilization in the entire universe is not as low as previously thought.
The Wikipedia article has a broad outline of various classifications the solutions fall into. Refer to that article for details.
- Few, if any, other civilizations currently exist
- No other civilizations have arisen (see also Rare Earth Hypothesis)
- It is the nature of intelligent life to destroy itself
- It is the nature of intelligent life to destroy others (Berserker Hypothesis)
- Life is periodically destroyed by naturally occurring events
- Human beings were created alone
- Inflation hypothesis and the youngness argument (multiple universes with synchronous gauge probability distribution)
- They do exist, but we see no evidence
- Communication is improbable due to problems of scale
- Intelligent civilizations are too far apart in space or time
- It is too expensive to spread physically throughout the galaxy
- Human beings have not been searching long enough
- Communication is improbable for technical reasons
- Humans are not listening properly
- Aliens aren't monitoring Earth because Earth is not superhabitable
- Civilizations broadcast detectable radio signals only for a brief period of time
- They tend to experience a technological singularity
- They are too busy online
- They are too alien
- They are non-technological
- The evidence is being suppressed (the Conspiracy Theory)
- They choose not to interact with us
- They don't agree among themselves (no talking with Terrans until Galactic UN is in agreement)
- Earth is deliberately not contacted (Zoo Hypothesis)
- Earth is purposely isolated (Planetarium Hypothesis)
- It is dangerous to communicate
- The Fermi paradox itself is what prevents communication (implies that communication is lethal)
- They are here unobserved
Several of the "Few, if any, other civilizations currently exist" entries are possible candidates for The Great Filter, if you can figure out how to make them 100% lethal.
Dr. Geoffrey A. Landis has a possible non-Great Filter solution based on Percolation Theory. In A Fire Upon The Deep, Vernor Vinge postulates a solution based upon Terra being located in a less desirable region of the galaxy; like the Percolation solution, this is a location-based solution.
A Great Filter solution is in Toolmaker Koan by John McLoughlin. It argues that any intelligent species that invents tools starts a process of accelerated progress that inevitably leads to extinction by warfare over dwindling resources.
A nastier Great Filter solution appears in the classic The Killing Star by Charles Pellegrino and George Zebrowski, Run To The Stars by Michael Scott Rohan, and Antares Dawn by Michael McCollum (see below). It boils down to a variant on the Berserker Hypothesis.
This is from a paper called Upper limits on the probability of an interstellar civilization arising in the local Solar neighborhood by Daniel Cartin.
The researcher was examining the part of the Fermi Paradox that asks why the solar system was not colonized by aliens billions of years ago. Heck, after a couple of decades of examination by NASA et al. we have not even found the battered remains of an alien space probe, or any other signs of an extraterrestrial visit.
Dr. Geoffrey A. Landis proposed a possible solution based on Percolation Theory. But that analysis assumed the stars were arranged on a simplistic cubic lattice, which seemed to Cartin to be a spherical cow. In an earlier paper, Cartin decided to use an actual dataset of the real positions of all known star systems within 40 parsecs (130 light-years).
In the earlier paper Cartin also turned the analysis on its head. He asked: supposing each star system is the point of origin for a space-faring civilization with a given sociological drive and technological prowess, how many could reach the Solar System with their colonization process?
Cartin defined p as the probability that an alien civilization arising on a given star will colonize a particular "neighbor" star system, and Dmax as the maximum travel distance of the civilization's spacecraft (in parsecs), which determines which star systems are "adjacent." A Monte Carlo simulation was used to calculate the number of alien origin systems that could reach the Solar System, as a function φ(p, Dmax).
However, for this paper, Cartin asked a different question: for this calculated function φ(p, Dmax) and the evident lack of visitors to the Solar System, what are the upper limits on the likelihood of spacefaring life originating on a given habitable planet, as a function of p and Dmax? In other words, since Terra don't got no alien tourists in all of history, can we figure the odds of starship riding aliens?
Cartin figures that the expected number of alien civilizations reaching the solar system must be equal to the number of alien origin stars that could possibly reach us, multiplied by the probability of an alien star-faring civilization arising at each star system. Since there ain't no evidence that any aliens actually reached us, the probability of an alien species willing and able to execute a colonization program can be at most the inverse of the number of origin systems that could have reached us. Cartin uses a break-down of relevant factors in much the same format as the Drake equation in order to solve for the probability, and he also used the real database of known stars within 40 parsecs.
The results are in the graph above.
Remember, p is the probability that an alien civilization arising on a given star will colonize a particular "neighbor" star system, and Dmax is the maximum travel distance of the civilization's spacecraft (in parsecs).
The contour lines are the probability of ƒℓ times ƒS, where:
- ƒℓ = the fraction of planets in the habitable zone of the spectral class of the star in question that develop life
- ƒS = the fraction of life-bearing planets that go on to develop a star-faring civilization
(the two probabilities are multiplied because that's how you calculate the probability of both of them happening)
So the graph displays the contour lines of the maximum values of ƒℓ × ƒS of a star-faring alien civilization which actually reached our Solar System, for given values of p and Dmax. The lines are maximum values given the fact that apparently no alien civilizations actually reached the solar system.
In other words, you pick your best guess at alien-interstellar-empire-lust p, and starship-range Dmax, and the graph will tell you the maximum possible odds for the birth of an alien civilization that could have reached us.
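Cartin's real analysis uses the catalog of known stars within 40 parsecs, but the Monte Carlo machinery is easy to sketch. The toy Python below scatters stars at random in a cube (an assumption; using real star positions instead is the paper's whole point), treats each pair of stars within Dmax of each other as adjacent, opens each colonization route with probability p, and counts how many star systems could reach us hop by hop. The function name and all default values are made up for illustration:

```python
import random

def origins_reaching_us(n_stars=300, box=20.0, p=0.3, d_max=4.0, seed=1):
    """Toy percolation Monte Carlo in the spirit of Cartin's analysis.

    Stars are scattered uniformly in a cube (an assumption; the real
    analysis uses the catalog of known stars within 40 parsecs). Two
    systems are "adjacent" if they lie within d_max of each other, and
    a civilization actually colonizes an adjacent system with
    probability p. We sit at the origin (0, 0, 0).
    """
    rng = random.Random(seed)
    stars = [tuple(rng.uniform(-box, box) for _ in range(3))
             for _ in range(n_stars)]
    us = (0.0, 0.0, 0.0)
    nodes = [us] + stars

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    # Bond percolation: each adjacent pair is an open colonization
    # route with probability p.
    open_edge = {}
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            if dist(nodes[i], nodes[j]) <= d_max:
                open_edge[(i, j)] = rng.random() < p

    # Flood fill from us: every star in our percolation cluster is a
    # potential origin system whose colonization wave could reach us.
    frontier, visited = [0], {0}
    while frontier:
        i = frontier.pop()
        for (a, b), is_open in open_edge.items():
            if is_open and i in (a, b):
                other = b if i == a else a
                if other not in visited:
                    visited.add(other)
                    frontier.append(other)
    return len(visited) - 1  # don't count ourselves
```

Given the resulting count N of origin systems that could have reached us, and no visitors on record, the upper bound on ƒℓ × ƒS is then roughly 1/N.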
In 2006 author Liu Cixin wrote a novel named 三体 (The Three-Body Problem), which won the Chinese Science Fiction Galaxy Award that year and the 2015 Hugo Award for Best Novel. It proposed a solution to the Fermi Paradox which was plausible enough to get analyzed in a paper published in the Journal of the British Interplanetary Society (The Dark Forest Rule: One Solution to the Fermi Paradox).
It is very similar to the scenario set out in Pellegrino & Zebrowski's The Killing Star.
The Dark Forest Rule has two basic hypotheses:
- Survival is the primary requirement of civilization
(civilizations that don't care if they live or die won't last long)
- Civilization grows and expands continuously, whereas the total cosmic materials remain constant
(all warfare boils down to two monkeys and one banana)
and two basic concepts:
- Suspicion Chain: poor communication between different civilizations in the universe results in civilizations distrusting each other
(you can't trust that mysterious tribe who lives over the hills, I'll bet you they have horns growing out of their heads and eat human flesh)
- Technology Explosion: technologies in civilizations may achieve explosive breakthroughs and development at any time, which are beyond the accurate estimation of any distant civilization with its own technological level
(you never know when some other nation will unexpectedly invent the Ultimate Weapon while your back is turned)
The result of combining these hypotheses and concepts:
- Civilizations in the universe are competing for resources
(there ain't enough to share)
- Civilization cannot trust other civilizations
(because they have horns growing out of their heads and eat human flesh)
- Civilizations cannot be confident about the advancement of their technology
(at any moment those horned flesh eaters who want our galactic resources might invent a Nicoll-Dyson Beam and kill us all)
In other words, it's The Law Of The Jungle. "Every man for himself," "anything goes," "the needs of the one outweigh the needs of the many," "survival of the strongest," "survival of the fittest," "kill or be killed," "dog eat dog" and "eat or be eaten." Basically the "state of nature" as proposed by Thomas Hobbes.
The hypothesis "survival is the primary requirement of civilization" is not saying that there are no alien civilizations that are moral-advocating or selfless. The hypothesis is saying that such civilizations will be slaughtered by survivalist civilizations. Much like how the Roman civilization was defeated by the Goths and the Song Dynasty was defeated by Mongolian cavalry. In other words, "nice guys finish last."
The hypothesis "civilization grows and expands continuously, whereas the total cosmic materials remain constant" does not mean that civilizations will fight because they are greedy for all the resources. It means civilization will fight because they do not know how many resources will ensure survival, the only safe assumption is "all of them." In a broader sense all activities of all living civilizations increase entropy (by the second law of thermodynamics). Therefore other civilizations must be killed to slow down the heat death of the universe, thus prolonging the time for this civilization to live.
The limit of the speed of light means that two-way communication between civilizations can take years to centuries. This helps create the suspicion chain. But even with rapid communication you will instantly find yourself in the middle of The Prisoner's Dilemma which is the suspicion chain raised to the second power.
In addition, even limited communication with another civilization might inadvertently trigger in them a technological explosion, and suddenly you will find that you've brought a knife to a gun fight.
Given all that, the only safe strategy is to instantly try and kill any alien civilizations you come across. Especially since chances are they will have followed the same train of logic, and will instantly try to kill you once they discover you exist.
If you kill them, the worst thing that might happen is you'll discover they were a non-expanding, moral-advocating race. Which is a shame, but even so, they were creating entropy.
If you leave them alone, you are rolling the dice and risking the extinction of your entire species.
Of course when you try to kill them, you'll have to be covert about it. Just in case they turn out to be more powerful than you are. You'll have to try to avoid letting them discover that your species even exists in the first place. Keeping in mind that if they are incredibly more advanced than you are, they will have found you first and you are doomed. But there isn't much you can do about that. Given the "Apes or Angels" scenario, chances are there will be a huge technological inequality between the two civilizations. Meetings between civilizations at the same tech level will be rare.
The big draw-back to instantly killing a newly discovered alien civilization is that another ultra-highly advanced alien civilization will notice your attack (that is, a civilization vastly more advanced than you are). Then they will probably obliterate you. If you are worried about that, the best strategy is to try and hide your entire civilization, and avoid contact with anyone.
Unless the ultra-highly advanced alien civilization does not obliterate you, for fear that a third ultra-ultra-ultra-highly advanced alien civilization will notice the attack and obliterate them.
So, the answer to the Fermi Paradox is either:
- All the other civilizations have been killed except for a couple of bloodthirsty ones
- All the other civilizations are doing their best to hide, either to avoid attracting the attention of a killer civilization or because they are a killer civilization which thinks that killing is too much of a risk
From Run To The Stars by Michael Scott Rohan (1982). The heroes have discovered the Dreadful Secret that the BC world government is hiding: explorers have discovered the first known alien species, and BC is sending a huge missile to kill all the aliens.
Daniel Krouse brought to my attention some important new ideas on this matter:
The problem of whether to commit genocide upon an alien race or not is vaguely related to the famous "prisoner's dilemma".
| | Race B Ignores | Race B Attacks |
|---|---|---|
| Race A Ignores | Both live in constant fear | Race A exterminated; Race B lives free of fear |
| Race A Attacks | Race A lives free of fear; Race B exterminated | Both are devastated but not destroyed |
As the Wikipedia article shows, the dilemma comes when you assume that each race is trying to maximize its survival.
Say you are Race A. If Race B ignores you, your best outcome is to attack. Then you do not have to live in fear, spend resources on building defenses, and so on. If Race B attacks, your best outcome is still to attack, since the alternative is extermination.
And since Race B will make the same determination, both races will attack and be devastated but not destroyed.
An outside observer will note that if the two races are taken as a group, the best outcome of the group is for both races to cooperate. If either attacks, the outcome for the group will be worse. And if both attack, both races receive a worse outcome than if they had both ignored each other.
So if both races selfishly look out for themselves, both will attack and the result is devastation. If both races altruistically think about the group, both will ignore and both will live. And if one race is selfish while the other is altruistic, yet again it will be proven that nice guys finish last.
And it actually doesn't matter if they can communicate with each other or not, a given race cannot be sure if the other is being truthful. If the two races can communicate, they run into the "cooperation paradox". Each race must convince the other that they will take the altruistic option despite the fact that the race could do better for themselves by taking the selfish option.
The Simple matrix:

| | Cooperate | Defect |
|---|---|---|
| Cooperate | win some / win some | lose all / win all |
| Defect | win all / lose all | lose some / lose some |

The Detailed matrix:

| | Cooperate | Defect |
|---|---|---|
| Cooperate | D, D | C, B |
| Defect | B, C | A, A |
Of course the prisoner's dilemma is a very artificial set-up; in real life the results would not be quite so clean-cut. Above are two formulations of the prisoner's dilemma matrix.
In the Detailed matrix, A, B, C, and D are various outcomes, and the relative values of the outcomes are B > D > A > C. If those relative values hold, the prisoner's dilemma is present. In the first example, B = alive and free from fear, D = alive but in constant fear, A = alive but devastated, and C = exterminated.
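The dominant-strategy argument can be checked mechanically. Below is a small sketch using the B > D > A > C ordering from the Detailed matrix; the numeric payoffs are arbitrary stand-ins that merely preserve that ordering:

```python
# Payoff ordering from the text: B > D > A > C, where
#   B = alive and free from fear, D = alive but in constant fear,
#   A = alive but devastated,     C = exterminated.
# Numeric values are arbitrary stand-ins that preserve the ordering.
B, D, A, C = 3, 2, 1, 0

# payoff[(my_move, their_move)] = (my payoff, their payoff)
payoff = {
    ("ignore", "ignore"): (D, D),
    ("ignore", "attack"): (C, B),
    ("attack", "ignore"): (B, C),
    ("attack", "attack"): (A, A),
}

def best_response(their_move):
    """Return the move that maximizes my payoff against a fixed opponent."""
    return max(("ignore", "attack"),
               key=lambda mine: payoff[(mine, their_move)][0])

# Attacking is a dominant strategy: it is the best response either way...
print(best_response("ignore"))   # attack
print(best_response("attack"))   # attack

# ...yet mutual ignoring beats the mutual-attack outcome for both races.
print(payoff[("ignore", "ignore")][0] > payoff[("attack", "attack")][0])  # True
```

This is exactly the structure that makes the dilemma a dilemma: the individually rational move leads both players to the jointly worse outcome.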
The prisoner's dilemma does have some vague similarities to the old cold war doctrine of Mutual Assured Destruction, though they are actually not very closely related. The prisoner's dilemma also does not work in those cases where what is bad for one player is equally bad for the other. An example is the game of "chicken" as seen in the 1955 film Rebel Without A Cause, where the drivers of both cars race to a deadly cliff and the first one to "chicken out" loses. But game theorists are working on a new approach called "Drama Theory" (warning: commercial website. No endorsement implied.)