The two main functions of sensors are navigational and tactical.
Navigational sensors are used by the astrogator to determine the spacecraft's current position, vector, and heading. They are also used by the pilot to perform the maneuvers calculated by the astrogator. Arguably a chronometer or other instrument to locate the spacecraft's current position in time is also a navigational sensor.
Tactical sensors are used to watch the region around the spacecraft. This is mostly to monitor nearby objects (such as meteors on a collision course or enemy spacecraft). Arguably this also includes solar-storm warnings which detect deadly incoming proton events.
Navigational and Tactical sensors are generally found on all spacecraft, unless the designer is trying really hard to economize. There are some more specialized sensors only found on more specialized spacecraft:
Remote Sensing suites are used to scan and analyze the surface of a planet, moon, or asteroid. These are found on specialized spacecraft such as exploration vessels, mine prospecting ships, survey ships, customs and other hunter-type ships, and spy ships.
Combat sensors have two main types. Strategic combat sensors detect hostile spacecraft at long range, giving advance warning of enemy attack. (remember that There Ain't No Stealth In Space). Tactical combat sensors work at close range in a battle, guiding your weapons to the enemy targets (a "firing solution"), detecting incoming enemy weapons, and analyzing the enemy for weakness.
There are two broad classes of sensors: passive and active.
Passive sensors just detect any emissions from the target, i.e., they passively look for the target. Passive sensors include telescopes and heat sensors.
In some SF novels, passive sensors are called "sensors" while active sensors are called "scanners."
Note that in a combat situation, using active sensors means you will be instantly detected and targeted by hostile spacecraft. If you don't care whether you are detected, or if they already know you are there, active sensors can reveal valuable information about hostile spacecraft.
Mass detectors are used to detect objects by their gravity. Typically in science fiction, they detect objects that are invisible (usually in science fiction written before the invention of radar). Dr. Robert Forward actually invented such a detector. He suggests using it to detect asteroid-mass black holes lurking in the center of asteroids.
In C.J. Cherryh's Company Wars universe, ships use both radar and something called Longscan for detection and tactical information. Longscan helps cope with the lightspeed lag of radar. Its primary purpose is for spacecraft combat, but it has some civilian uses.
Remember the light-speed lag. Light moves quickly, but not at infinite speed. It takes about eight minutes to travel one astronomical unit. So if you are in orbit around Terra and you observe a spacecraft near the Sun with a telescope or radar, you are actually seeing where the ship was eight minutes ago. By the same token, if you change course it will be eight minutes until the Sun-grazer ship knows.
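To put a number on it: the lag is just distance divided by the speed of light. A minimal sketch (only the one-AU/eight-minute figure comes from the text; everything else is standard constants):

```python
# Light-speed lag: how stale is your sensor picture of a distant ship?
AU = 1.495978707e11   # astronomical unit, meters
C = 2.99792458e8      # speed of light, m/s

def lag_seconds(distance_m):
    """Seconds between an event happening and your sensors seeing it."""
    return distance_m / C

# A ship one AU away is seen as it was about 8.3 minutes ago.
print(lag_seconds(AU) / 60.0)
```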
Remote Sensing is obtaining information about an object or phenomenon without touching it. Remote sensing is found on specialized spacecraft such as exploration vessels, survey ships, customs and other hunter-type ships, and spy ships. Remote sensing is used by asteroid miners trying to figure out the locations of valuable lumps of ore on an asteroid or moon, survey ships assessing potential colony planets, military spacecraft trying to identify hostile contacts, or when Mr. Spock is scanning for life-signs.
A Gamma ray spectrometer is often used by NASA in their space probes to map chemical element and isotope regions on a planet, moon, or asteroid. Such an instrument would be incredibly useful for an asteroid miner. Note that it can only detect elements, not compounds. The spectrometer cannot, for instance, detect water (H2O). It can, however, detect suspiciously large amounts of hydrogen in the same area as oxygen which may suggest the presence of water.
Current NASA instruments can probe to a depth of about 0.1 meters, have a range of a close orbit about one body radius from the surface, and require several months to gather enough readings to make worthwhile maps of the elemental composition. Presumably this performance can be improved. Of course the gamma signal strength can be increased if the range is reduced, say, by somebody on the ground using a man-portable instrument. Trying to do detection from orbit weakens the signal due to the inverse square law.
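The inverse-square penalty is easy to quantify. A quick sketch (the range ratios are illustrative, not taken from any real instrument):

```python
# Inverse square law: gamma signal strength versus range to the surface.
def relative_signal(range_ratio):
    """Count rate relative to a reference range; signal scales as 1/r^2.
    range_ratio = new_range / reference_range."""
    return 1.0 / range_ratio**2

# Doubling the orbital altitude cuts the count rate to a quarter...
print(relative_signal(2.0))
# ...while a prospector on the ground at 1/100 the range gets 10,000x.
print(relative_signal(0.01))
```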
Detecting valuable deposits of elements is done by analyzing cosmogenic gamma rays (that is, gamma rays created by cosmic rays). Galactic cosmic rays (mostly high-energy protons) from outer space bombard the upper 0.1 meter layer of the asteroid or other celestial object. When a cosmic ray proton hits an atom of the object, it splits it, creating among other things a shower of fast neutrons. The neutrons collide with other atoms (inelastic collision) and are eventually absorbed by another atom (radiative neutron capture). Both of which cause the atom involved to emit a gamma ray (γ).
The important point is that the energy (equivalently, the frequency) of the gamma ray depends upon what element the atom is. In other words, the energy is the "fingerprint" of that element. If you see a gamma ray with an energy of 6 MeV, you know it came from an oxygen atom.
All the gamma ray spectrometer does is detect gamma ray photons and note their energy and where on the asteroid they came from. The energies reveal what elements are at a given location, and the relative counts at different energies reveal the relative concentrations of the various elements. For instance, if you are getting twice as many 6 MeV gamma rays as 3 MeV gamma rays from a location, it means that location has oxygen and aluminum, and there is twice as much oxygen as aluminum.
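The bookkeeping the spectrometer does can be sketched in a few lines. This toy version uses the 6 MeV oxygen / 3 MeV aluminum assignments from the example above; a real instrument bins thousands of energy channels:

```python
from collections import Counter

# Toy spectrometer tally: each detected gamma ray is binned by energy,
# and the energy "fingerprint" maps back to an element. The 6 MeV
# oxygen and 3 MeV aluminum assignments follow the example above.
FINGERPRINT = {6.0: "oxygen", 3.0: "aluminum"}

def element_counts(gamma_energies_mev):
    """Tally detected gamma-ray energies (MeV) into element counts."""
    tally = Counter()
    for energy in gamma_energies_mev:
        tally[FINGERPRINT.get(energy, "unknown")] += 1
    return tally

# Twice as many 6 MeV as 3 MeV photons: twice as much oxygen as aluminum.
readings = [6.0, 3.0, 6.0, 6.0, 3.0, 6.0]
print(element_counts(readings))
```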
Note that since this technique depends upon cosmic rays, it doesn't work well on planets where cosmic rays are sparse. On planets with thick atmospheres (e.g., Terra and Venus) the atmosphere stops most of the cosmic rays from reaching the surface (which is good news if you are living there). I suppose on such planets one could generate fingerprint gamma rays by spraying the ground with a proton particle beam weapon. But that obviously has problems.
- The most energetic cosmic ray protons ever detected carry some 40 million times the energy of particles accelerated by the Large Hadron Collider.
- Particle beam weapons are power hogs. Presumably an effective cosmogenic particle accelerator would have power requirements on the order of 40 million times that of the Large Hadron Collider.
- Particle beam weapons are, well, weapons.
- It is hard to set the proton strength high enough to get a good gamma signal but low enough so the gamma radiation doesn't kill the user and everybody standing nearby.
- If the proton strength is too high it will slice up the terrain, which could tend to upset people.
A gamma ray spectrometer is a scintillation counter rigged to be a spectroscope. That is, it is composed of a crystal which makes flashes of light when radiation strikes it, and a photomultiplier tube which watches the crystal and counts the flashes.
To operate as a spectrometer, the instrument has to be able to tell the energy of each gamma-ray photon. The brightness of each flash indicates the energy of the gamma ray that caused it (the higher the energy of the gamma-ray photon, the more visible-light photons are created per gamma ray, which makes the flash brighter).
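The brightness-to-energy conversion amounts to a calibration curve. A minimal sketch with a made-up linear calibration (real instruments calibrate against standard gamma sources, and real crystal responses are not perfectly linear):

```python
# Converting flash brightness to gamma-ray energy is a calibration
# problem. This sketch fits a made-up linear calibration between two
# known reference lines; the count/MeV numbers are invented.
def make_calibration(flash_lo, energy_lo, flash_hi, energy_hi):
    """Return a function mapping photomultiplier brightness to MeV."""
    slope = (energy_hi - energy_lo) / (flash_hi - flash_lo)
    def flash_to_energy(flash):
        return energy_lo + slope * (flash - flash_lo)
    return flash_to_energy

# Hypothetical: 100 counts of brightness = 1 MeV, 600 counts = 6 MeV.
cal = make_calibration(100.0, 1.0, 600.0, 6.0)
print(cal(350.0))   # a mid-range flash reads as 3.5 MeV
```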
Lanthanum bromide (LaBr3) works poorly as a gamma detector crystal. It seems that some of the lanthanum atoms are radioactive isotopes, which means the blasted detector crystal is generating gamma rays. These internal gamma rays drown out the gamma signal from the planet or moon you are surveying.
The Lunar Prospector probe used a crude bismuth germanate (BGO) gamma detector crystal. The advantage was that the crystal did not require cryogenic cooling and was relatively inexpensive. It does require high-voltage vacuum tube photomultipliers due to the type of light flashes it emits.
The Kaguya probe used an ultra-high-resolution high-purity germanium (HPGe) detector crystal (the technical term is "high spectral resolution"). HPGe crystals are considered to be the "gold standard" among gamma detector crystals. The drawback is that they require cryogenic cooling, which costs mass for the cooling equipment (remember Every Gram Counts) and limits the gamma detector's usable lifespan to the on-board supply of coolant. The crystal is also much more expensive. And it too requires high-voltage vacuum tube photomultipliers.
The cutting edge, hot new advanced gamma ray detector crystal is europium-doped strontium iodide (SrI2). Pretty good spectral resolution, not too expensive, and does not require cryogenic cooling. Another advantage is that its light flashes do not require high-voltage vacuum tube photomultipliers; you can use small, low-powered silicon photomultipliers. This allows the construction of an orbital element scanner that is small and low-powered enough to fit into a CubeSat. The drawback is that its spectral resolution is not as good as an HPGe crystal. But it is much better than a BGO crystal.
Burger Lab at Fisk University made a prototype of a CubeSat version of a SrI2 gamma-ray spectrometer built from off-the-shelf components that weighs only half a kilogram, fills 0.001 cubic meters of space and consumes about three watts of electricity yet can do the job of a full lab system that weighs 90 kilograms and fills 0.3 cubic meters of space. The crystal is only five centimeters long.
Note this technique only works in an oxygen atmosphere.
In Star Trek, when the Enterprise approaches a starship or planet, one of the first things Captain Kirk does is order a scan for life signs. This will reveal if anybody is lurking there. "Scanning for life-forms, Captain. We are reading life signs for nine Humans and a Klingon."
In Stargate Atlantis, the Lanteans in the Pegasus Galaxy have nifty little hand-held units called Life signs detectors. They can detect all sentient aliens within about 100 meters, as long as the user has the Ancient Technology Activation gene and they are not trying to detect a hibernating Wraith.
This is obviously a very useful sensor to have, but how the heck does it work?
Here are some possibilities.
The Star Fleet Medical Reference Manual says life signs sensors detect Kirlian radiation from the entity's Kirlian Aura. This would be a fine explanation, were the Kirlian Aura not revealed to be a steaming pile of Phlogiston back in 1979. This is just another form of the old notion that living creatures are living due to the presence of some sort of "life force" which is as-yet undetected by science. This discredited idea is called Vitalism.
One quick and dirty way to detect life is the fact that generally living things move. Even if they are not walking around, their hearts pump blood and their lungs pump air. Plants move somewhat slowly, and microorganisms move over short distances.
A slightly less quick and dirty technique is to somehow detect the presence of DNA. Which means the sensor will be oblivious to any life form with a biochemistry that doesn't use it.
Vulcans and humans (for instance) can be distinguished by their biosignatures. There will be differences in their heart rates (or whatever the acoustics are from their fluid pumping organs), heat signatures, breathing rates, exhaled gases, elements and organic molecules composing their bodies, biological chemical reactions, and the like. This would require remote sensing of sound and chemical composition. As an example, the human will have a sizable amount of iron in their bodies from the hemoglobin in their blood, while Vulcan blood is based on copper.
Laser fluorescence can detect and identify certain organic molecules on the surface of a planet (or the surface of an alien's skin).
In 2007 a company called Kai Sensors obtained a contract from the US Army to develop a unit called a LifeReader. This would use doppler radar and sophisticated computer algorithms to detect and monitor multiple subjects by their individual heart rates, even through walls. Unfortunately the Kai Sensors company appears to have vanished.
Goddard Space Flight Center scientist Sam Floyd is working on the Neutron/Gamma ray Geologic Tomography (NUGGET) instrument. A beam of neutrons is focused through a neutron lens at a specific point inside the target object. As atoms inside the object at that point absorb neutrons, they produce a characteristic gamma-ray signal for that atom's element. NUGGET detects the gamma-ray signature, thus identifying the element at the target point. By sweeping the focus through the object in a regular pattern, a three-dimensional elemental plot of the object can be created. It can also measure the relative amounts of various element pairs (if there is more copper than iron you might have detected a Vulcan).
A partnership between the Department of Homeland Security's Science and Technology Directorate and the National Aeronautics and Space Administration's Jet Propulsion Laboratory designed a device called FINDER (Finding Individuals for Disaster and Emergency Response). It uses microwaves (1150 MHz or 450 MHz, L or S band) to detect the heartbeats of victims trapped in wreckage (for instance a collapsed building after an earthquake). When a microwave beam is aimed at a pile of earthquake rubble covering a human subject, or through a barrier obstructing one, the beam penetrates the rubble or barrier to reach the subject. The reflected wave is then modulated by the subject's body movements, including breathing and heartbeat. FINDER can detect heartbeats from people behind [A] 6 meters of solid concrete, [B] 9 meters of rubble, or [C] 30 meters of open space. After the April 25, 2015 earthquake in Nepal, two prototype FINDERs managed to locate four men in two different locations who had been trapped under 3 meters of bricks for several days.
Range-R uses a similar principle to LifeReader. It is a motion detector using radio waves that can detect the presence of people, and movements as small as human breathing, at a range of 15 meters or so, even through solid walls. It will penetrate most common building wall, ceiling, or floor types including poured concrete, concrete block, brick, wood, stucco, glass, adobe, dirt, etc. However, it will not penetrate metal or walls saturated with water. These are actually currently being used by US police, which has raised alarms about possible Fourth Amendment abuses by law enforcement personnel.
Dan Slater is the lead technologist working on a new microphone technology called the remote acoustic sensor (RAS), which is capable of capturing sounds within extreme and often inaccessible aerospace environments. It is sensitive enough to detect the sound of microbes moving around. Things that are constantly moving are probably alive. The RAS could probably hear the sound of a Klingon's heartbeat, breathing, blood turbulence, and gastrointestinal rumbling. Probably in enough detail to distinguish the Klingon from a Vulcan or a Human.
Chen-Chia Wang et al. have utilized something called an optical speckle-tolerant photo-EMF pulsed laser vibrometer (PPLV) for the detection of human heartbeats, breathing, and gross physical movement from essentially any part of a human subject's surface, even in the presence of clothing, all without limiting the interrogation points to specific locations like the chest and carotid areas.
Spy rays or spy beams are a jolly science fiction idea, apparently invented by the legendary E. E. "Doc" Smith in his novel Triplanetary (1934). Adjust the setting and you too can see and hear everything that happens inside an enclosed room at a remote location. Currently they do not exist, but certain remote sensing technologies are getting real close.
Spy rays are popular with spies (of course), combat spacecraft trying to get intel on their opponents, military intelligence, criminal gangsters trying to get the inside dope on their targets and/or rival gangs, police especially in the same situations where they'd have an agent "wearing a wire", astromilitary trying to obtain the details of the enemy's new secret weapon breakthrough, industrial espionage, and so on. In E. E. "Doc" Smith's Lensman series, warships would use spy rays to locate the enemy ship's crew at their control panels, then use needlebeams to vaporize the control panels (and probably the hapless crewmember at each panel).
Naturally this leads to an arms race, with the creation of "spy-ray blocks" to foil spy rays. And improved spy rays to defeat the spy-ray blocks.
Spy-ray blocks are naturally popular with the targets of the activities listed in the previous paragraph. Note that the target of spies includes diplomats, top-secret development labs, military planning offices, and enemy spies.
Technologies that are almost spy rays include:
- LifeReader would use doppler radar and sophisticated computer algorithms to detect and monitor multiple subjects by their individual heart rates, even through walls. The project was apparently discontinued.
- Range-R uses a similar principle to LifeReader. It is a motion detector using radio waves that can detect the presence of people, and movements as small as human breathing, at a range of 15 meters or so, even through solid walls. It will penetrate most common building wall, ceiling, or floor types including poured concrete, concrete block, brick, wood, stucco, glass, adobe, dirt, etc. However, it will not penetrate metal or walls saturated with water. These are actually currently being used by US police, which has raised alarms about possible Fourth Amendment abuses by law enforcement personnel.
- Laser Microphones use a remote laser beam to monitor the vibrations of an object inside a room, creating an impromptu microphone suitable for obtaining intel on drug deals and other conspiracies. Rippled glass windows can defeat laser microphones. But in theory the principle can be adapted to use microwaves, which means it can only be defeated by acoustically isolating the room and surrounding it with a Faraday Cage.
- Time-of-Flight Microwave Cameras use a parabolic antenna as a lens to actually capture crude images through walls.
Patterns of specific readings from one or more sets of sensors can indicate the presence of an object or event, giving meaning to the raw readings. These are called Signatures.
For instance, if your seismometer indicates a small earthquake and the atmospheric radiation meter records an abrupt rise in atmospheric radiation, you can be pretty sure that a nuclear explosion has happened. The two readings correlated in time is the signature of a nuclear detonation.
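In software, a signature is just a correlation rule over sensor events. A toy sketch (the sensor names and the ten-second window are invented for illustration):

```python
# A signature is a correlation rule over raw sensor events: every
# required reading must appear within one time window. Sensor names
# and the window length here are invented for illustration.
def matches_signature(events, required, window_seconds):
    """events: list of (timestamp_seconds, sensor_name) tuples."""
    times = {name: t for t, name in events if name in required}
    if set(times) != set(required):
        return False               # some required sensor never fired
    return max(times.values()) - min(times.values()) <= window_seconds

NUKE_SIGNATURE = ("seismic_spike", "radiation_spike")

# Quake reading and radiation spike 1.5 seconds apart: detonation.
events = [(100.0, "seismic_spike"), (101.5, "radiation_spike")]
print(matches_signature(events, NUKE_SIGNATURE, 10.0))   # True

# A quake followed hours later by a radiation blip does not match.
events = [(100.0, "seismic_spike"), (7300.0, "radiation_spike")]
print(matches_signature(events, NUKE_SIGNATURE, 10.0))   # False
```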
A spectral signature is a spectrum of intensities at various frequencies of electromagnetic radiation that identifies elements and compounds. The signature is the spectral "fingerprint" of an element. The instrument used is called an optical spectrometer or spectroscope.
In other words Meteor Mike the rock-rat can point his ship's spectroscope at an asteroid and say "Hot Rockets! Thar's gold in them thar rocks!"
In 1835 positivist French philosopher Auguste Comte foolishly defied Clarke's first law and stated: "We will never know how to study by any means the chemical composition (of stars), or their mineralogical structure."
Comte should have known better. Joseph von Fraunhofer had discovered the beginnings of how to do just that in 1814, twenty-one years earlier. The discovery is now called the Fraunhofer lines; in finding them, Fraunhofer had basically invented the spectroscope.
The point is if you are a scientist in those primitive days before rocket propelled space probes, the only thing you can get from planets, the Sun, and the stars is electromagnetic radiation. Since that is all you get, you would do well to analyze that radiation with a fine-tooth comb. Which is what a spectroscope does.
Back in the 1660s people knew that you could make a rainbow by passing a ray of white light through a prism, and you should know that too if you have a copy of Pink Floyd's Dark Side Of The Moon album. But back then they figured that white light was white and the glass prism was somehow staining it with various colors. In 1666 super-genius Isaac Newton proved that the prism wasn't staining anything, the white light ray was actually composed of a mix of colored light. For one thing you could use one prism to turn a white ray into a rainbow, then use a second upside-down prism to turn the rainbow back into a white ray. This doesn't make sense if the prism is staining the light. Newton wrote up his results in a book called Opticks which is considered to be one of the greatest works of science in history. Arguably the greatest is Newton's other work Philosophiæ Naturalis Principia Mathematica, which among other things contains his law of universal gravitation and his laws of motion so near and dear to spacecraft astrogators. But I digress.
The point is the prism is taking all the various frequencies of light in the ray and separating them. Which allows you to analyze the ray with a fine-tooth comb. You can check which light frequencies are present and which are absent, and the relative strengths of each. This is the basis of the spectroscope.
Glass prisms have limitations when used in a spectroscope, so they were eventually replaced with diffraction gratings once the latter had been invented.
Elements heated inside a blazing star emit a unique pattern of frequencies which is the signature (fingerprint) of that element. These are the Fraunhofer lines. Thus the spectroscope can stick its tongue out at Comte and routinely determine the chemical composition of stars and planets.
As a matter of fact, the element Helium was discovered by spectroscope on the Sun before it was discovered on Terra. Several astronomers spotted a previously unknown Fraunhofer line at a wavelength of 587.49 nanometers in the solar corona spectrum. It was named "helium" after the Greek word for the Sun: ἥλιος (helios). It wasn't discovered on Terra until 27 years later. Chemist Sir William Ramsay discovered it in a chunk of cleveite when his spectroscope spotted the tell-tale Fraunhofer line.
Sometimes this backfires, though. In 1869 astronomers spotted another previously unknown line at 530.3 nanometers in the solar corona spectrum. Aha! Obviously another unknown element, discovered by the awesome power of spectroscopy! They named it Coronium, though Dmitri Mendeleev renamed it Newtonium.
However in the 1930s researchers discovered that the unknown line was actually due to highly ionized iron, not from a mystery element. Up until then scientists could not ionize elements quite as extremely as obtains in the solar corona. Bye-bye Coronium. This also explained quite a few other mystery lines that had been observed.
I first encountered Coronium in Fletcher Pratt's novel Alien Planet, which was written before Coronium was discovered to be a myth. In the novel the alien visitor needs Coronium to refuel his spaceship. Since it does not exist on Terra, he makes do with helium for a short hop to the planet Mercury. There he harvests the mythical Coronium from the solar wind. Which goes to show that reading science fiction can be educational, but you need a crib sheet to separate the good science from the obsolete science.
If a line is bright it is an emission line; if it is black it is an absorption line. Either way they are the fingerprint signatures of the elements. When looking at the spectra, it doesn't matter whether a line is bright or black; the position is the important thing.
Since objects like the Sun are composed of lots of elements, all their signatures will be overlaying each other. The spectrum will look like a herd of rainbow zebras. But astronomers have become quite skilled at untangling the signatures.
But spectroscopes can do so much more than just identify the elemental composition of the object.
In theory astronomers can tell if a star is approaching, receding, rotating, or orbiting another star by observing the Doppler effect with a spectroscope. This is impossible to detect if you have a featureless spectrum of light from the star in question. Kind of like trying to measure something using a featureless ruler with no indicator marks.
Fraunhofer lines to the rescue! They put identifiable marks on the star's spectrum. Suddenly your ruler has lines on it. Now you can see a red shift or blue shift by looking at the position of an element's Fraunhofer lines.
So you can take a photograph through a spectrograph of a star's spectrum while simultaneously photographing the spectrum of (say) some burning sodium in the lab (a "comparison spectrum"), side by side on the same piece of photographic film (yes, kiddies, back in the days when dinosaurs roamed the planet people used photographic film instead of digital cameras to take their selfies). On the photo you can then measure the displacement between the lab's sodium signature and the sodium signature in the star's spectrum. A quick calculation and you know how fast the star is moving relative to Terra.
Obviously nowadays they use electronic photosensors instead of photographic film, but the principle is the same. Instead of a photo, the spectrum is displayed as a jagged line in a graph: more accurate, but nowhere near as pretty as a rainbow.
Fraunhofer used sodium lines for his lab comparison spectrum because they are easily produced by sprinkling common table salt into a Bunsen burner flame. Electronic photosensors do not need a comparison spectrum because they can directly measure the exact frequency of a given Fraunhofer line.
If the signature is shifted toward the red end of the spectrum (with respect to the comparison spectrum), the object has a "red-shift" and is moving away from you (technically its vector has a radial component if you are nit-picky). Shifting the other way is a "blue-shift", meaning the object is approaching.
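The velocity calculation is simple at everyday (non-relativistic) speeds. A sketch using a sodium line as the fingerprint (the 589.1 nm observation is a made-up example):

```python
C_KMS = 299792.458   # speed of light, km/s

def radial_velocity_kms(observed_nm, rest_nm):
    """Non-relativistic Doppler: positive = receding (red-shift),
    negative = approaching (blue-shift)."""
    return C_KMS * (observed_nm - rest_nm) / rest_nm

# A sodium line sits at 589.0 nm in the lab. Seeing it at 589.1 nm
# in the star's spectrum means the star is receding at roughly 51 km/s.
print(radial_velocity_kms(589.1, 589.0))

# Seeing it at 588.9 nm instead would be a blue-shift: approaching.
print(radial_velocity_kms(588.9, 589.0))
```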
Due to Hubble's Law, for objects like galaxies which are further away than 10 megaparsecs or so, you can use the red-shift to measure the distance to the galaxy. Which is real convenient; other measurement techniques are a pain in the posterior to utilize, and give fuzzy results.
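For nearby galaxies the Hubble's Law distance estimate is a one-liner. A sketch (the Hubble constant here is a commonly quoted round value, and this crude linear formula only holds at low red-shift):

```python
H0 = 70.0            # Hubble constant, km/s per megaparsec (round value)
C_KMS = 299792.458   # speed of light, km/s

def hubble_distance_mpc(redshift_z):
    """Crude low-red-shift distance: recession velocity v = c*z,
    then d = v / H0. Only sensible for z well below 1."""
    return C_KMS * redshift_z / H0

# A galaxy red-shifted by z = 0.023 is roughly 100 megaparsecs away.
print(hubble_distance_mpc(0.023))
```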
Not only can you use red/blue shift on objects, but also on parts of objects. Say Planet X is spinning according to the right hand rule. When you look at it through a telescope, if "north" is upward, then the right edge of the planet will be receding from you, and the left edge of the planet will be approaching you. So if you measure the red/blue shift of each limb of the planet, you can calculate how rapidly it is spinning.
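Here is the spin calculation sketched out, using made-up limb velocities that happen to give a Terra-like answer:

```python
import math

def equatorial_speed_kms(receding_limb_kms, approaching_limb_kms):
    """Half the spread between the two limb radial velocities is the
    equatorial rotation speed (the planet's bulk motion toward or away
    from you cancels out of the difference)."""
    return (receding_limb_kms - approaching_limb_kms) / 2.0

def rotation_period_hours(radius_km, speed_kms):
    """Period = equatorial circumference / equatorial speed."""
    return 2.0 * math.pi * radius_km / speed_kms / 3600.0

# Made-up limb readings: +/-0.46 km/s on a planet of radius 6371 km
# (Terra-sized) gives a day of about 24 hours.
v = equatorial_speed_kms(+0.46, -0.46)
print(rotation_period_hours(6371.0, v))
```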
There are some binary stars where the two stars are so close that the telescope cannot resolve them (translation: it looks like a single star to the scope). But a spectroscope can reveal the truth. Using the spectroscope, astronomers will see not one but two sets of Fraunhofer lines. By observing how the two sets move back and forth relative to each other, the two stars' orbital period can be determined. Such stars are called Spectroscopic binaries.
You can even tell if the star has a strong magnetic field. The Zeeman effect is when the signature Fraunhofer lines are split in the presence of a magnetic field. The stronger the field, the wider the split.
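For the normal Zeeman effect the splitting is proportional to the field strength, via the Larmor frequency. A sketch (the 0.3 tesla figure is a typical sunspot-scale field, used here just for illustration):

```python
import math

# Normal Zeeman effect sketch: a spectral line splits by the Larmor
# frequency, delta_nu = e*B / (4*pi*m_e). Convert that to a wavelength
# shift with delta_lambda = (lambda**2 / c) * delta_nu.
E_CHARGE = 1.602176634e-19    # electron charge, coulombs
M_E = 9.1093837015e-31        # electron mass, kg
C = 2.99792458e8              # speed of light, m/s

def zeeman_shift_nm(wavelength_nm, b_tesla):
    delta_nu = E_CHARGE * b_tesla / (4.0 * math.pi * M_E)
    wavelength_m = wavelength_nm * 1e-9
    return (wavelength_m**2 / C) * delta_nu * 1e9

# A sunspot-scale field of 0.3 tesla splits a 500 nm line by a few
# thousandths of a nanometer: tiny, but measurable.
print(zeeman_shift_nm(500.0, 0.3))
```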
This is why books about amateur astronomy tell you a plain old telescope is only useful for seeing stars as bright dots (or watching the co-eds undress through the dormitory windows). But add a spectroscope to your telescope and suddenly you've got a real live scientific instrument that you can do real science with.
Some patterns of sensor readings can reveal the presence of a technological civilization on a planet. This is very important, because contacting an alien civilization means you are gambling with the extinction of the human species.
Other patterns of sensor readings can reveal the remains of annihilated civilizations. These could be the gravesites of Forerunners, with the promise/threat of valuable/civilization-killing paleotechnology.
The paper is focused on refining the "L" factor in the famous Drake equation, the average lifetime of a technological civilization. So the paper estimates some scenarios where a planetary civilization can destroy itself, and tries to figure out what their sensor signatures are. Then astronomers can see if they can spot any. If they see lots, it might mean L is quite short.
But for our purposes, keep in mind that many of these sensor signatures will also work if a civilization has been exterminated by external alien invaders.
…The aim of this paper is to use the Earth as a test case in order to categorise the potential scenarios for complete civilisational destruction, quantify the observable signatures that these scenarios might leave behind, and determine whether these would be observable with current or near-future technology.
The variety of potential apocalyptic scenarios are essentially only limited in scope by imagination and in plausibility according to our current understanding of science. However, the scenarios considered here are limited to those that: are self inflicted (and therefore imply the development of intelligence and sufficient technology); technologically plausible (even if the technology does not currently exist); and that totally eliminate the (in the test case) human civilisation.
Only a few plausible scenarios fulfil these criteria:
- complete nuclear, mutually-assured destruction
- a biological or chemical agent designed to kill either the human species, all animals, all eukaryotes, or all living things
- a technological disaster such as the “grey goo” scenario, or
- excessive pollution of the star, planet or interplanetary environment
Other scenarios, such as an extinction level impact event, dangerous stellar activity or ecological collapse could occur without the intervention of an intelligent species, and any signatures produced in these events would not imply intelligent life…
2.1 Nuclear Annihilation
Current estimates of nuclear weapons held around the world are of the order 6 million kilotonnes (kt) (2.5 x 10^19 J)…
…Nuclear weapons produce a short, intense burst of gamma radiation with a characteristic double peak over several milliseconds. These gamma flashes could be detected using the same techniques as for the detection of gamma ray bursts (GRB)…
…Given that the world’s nuclear arsenal is equivalent to around 10^19 J of energy, the resulting radiation from its combined detonation would be much fainter than a typical GRB. If we assume that the energy is released on a similar timescale and with a similar spectrum to a GRB, a nuclear apocalypse is equivalent in bolometric flux to a GRB detonating around a trillion times closer than its typical distance. If we take a nearby GRB such as GRB 980425 which is thought to have detonated around 40 Mpc away, then we would expect a global nuclear detonation event to produce a similar amount of bolometric flux only 8 AU away!
Therefore, for us to be able to detect nuclear detonation outside the Solar system, the total energy of detonation must be at least nine orders of magnitude larger, i.e. the ETIs responsible for the event must engage in massive weapon proliferation and concurrent usage.
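You can sanity-check the paper's "trillion times closer" figure with a one-liner: divide 40 Mpc by 10^12 and convert to astronomical units:

```python
# Sanity check on the paper's figures: a 40 Mpc GRB matched in
# bolometric flux by a detonation "a trillion times closer" should
# put that detonation about 8 AU away.
MPC_IN_M = 3.0857e22       # meters per megaparsec
AU_IN_M = 1.495978707e11   # meters per astronomical unit

d_m = 40.0 * MPC_IN_M / 1.0e12   # a trillion times closer than 40 Mpc
d_au = d_m / AU_IN_M
print(d_au)   # about 8 AU, matching the paper
```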
However, the production of fallout from terrestrial size payloads, which persists for much longer timescales, may make itself visible in studies of extrasolar planet atmospheres.
For the purposes of estimating fallout, the weapon impacts are assumed to be evenly distributed across the entire land area of the planet (1.5 × 10^8 km²). This gives an equivalent of approximately one 25 t (10^11 J) weapon per square kilometre of land area. This is of the same order of magnitude as the weapon used in the Semipalatinsk Nuclear Test, for which the effects of radioactive fallout were measured over time. However, given the local climatic conditions at this site (which were very windy) and the fact that our estimates include a nuclear event in every square kilometre, the effects are likely to be much worse than the results of this test. From measurements of soil at a town near the test site and modelling of radionuclide decay chains, the dose rate due to fallout from the weapon test (not the dose from the blast itself) was shown to begin at around 10^3 microgray/hour, decaying to background levels after around 100 days.
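As a consistency check, dividing the arsenal energy by the land area reproduces the energy density used in the fallout estimate (a back-of-envelope Python sketch using only the figures above):

```python
# Energy deposited per square kilometre if the global arsenal is spread
# evenly over the planet's land area.
KT = 4.184e12               # joules per kilotonne of TNT
arsenal = 6e6 * KT          # ~6 million kt -> ~2.5e19 J
land_area = 1.5e8           # land area in km^2

per_km2 = arsenal / land_area
tonnes_tnt = per_km2 / (KT / 1000.0)
print(f"{per_km2:.1e} J/km^2 (~{tonnes_tnt:.0f} t TNT per km^2)")
```

This gives ~1.7 × 10^11 J per square kilometre, i.e. a few tens of tonnes of TNT, of order the ~10^11 J figure used above.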
Fallout products of fusion weapons are typically non-radioactive, though they do produce a low yield of energetic protons and electrons. Most fallout products from fission weapons are beta emitters that decay to other beta-emitting isotopes. Some radioisotopes produced by fission weapons are gamma emitters, but these have short half-lives. Ignoring the effects on the health of humans or other lifeforms (which would be severe), the deposition of a large amount of beta-radioactive material into the atmosphere would have a significant effect on atmospheric chemistry, quickly ionising many atmospheric species; high-altitude nuclear tests have increased local electron density several times over. This would give ionised air the distinct blue or green glow of nitrogen and oxygen emission. Given that spacecraft and Earth-based telescopes have detected (faint) night-time airglow on Venus and Mars, it may be possible to measure what would be considerably brighter airglow features on exoplanets: an order-of-magnitude increase in electron density caused by a nuclear war would generate an order-of-magnitude increase in airglow brightness. The brightest airglow feature in the visible spectrum on an Earth-like exoplanet would be the green oxygen line at 558 nm, which would be enhanced by global nuclear war to a photon flux of up to 1400 rayleighs.
IR emission from exoplanets in their secondary-eclipse phase has been measured by space-based telescopes, so in theory these measurements could be extended into the visible part of the spectrum in future, though this would require exquisite precision in our knowledge of the host star’s properties and would most likely be dominated by reflected light from the planet itself, especially in the blue-green spectral region. A tenfold increase in brightness at 558 nm would potentially be observable with only a modest increase in sensitivity over instruments observing exoplanets today, especially since the airglow maximum occurs well above the tropopause and would therefore be observable even above a very cloudy planet. Airglow caused by fallout products would last for several years before decaying to unobservable levels.
The thermal effects of nuclear explosions also affect atmospheric chemistry. For every megatonne of yield, approximately 5000 tonnes of nitrogen oxides are produced by the blast itself. Blasts from higher-yield weapons will carry these nitrogen oxides high into the stratosphere, where they are able to react with and significantly deplete the ozone layer. Ozone can be detected in the ultraviolet transmission spectrum of an exoplanet, as can other oxygen molecules, and so the disruption of an exoplanetary ozone layer presents another potential observational signature.
Global nuclear war therefore potentially offers several spectral signatures that could be observed: a gamma flash, followed by UV/visible airglow and the depletion of ozone signatures. However, the aftermath of a global nuclear war will also act to obscure these spectral signatures. Ground-burst nuclear explosions generate a significant amount of dust that will be lofted into the atmosphere. Air-burst explosions do not generate dust, but still introduce particulates into the atmosphere. Atmospheric effects of nuclear warfare have been extensively modelled in climate simulations, the global consequences being known as “nuclear winter”. Recent simulations have shown that, even with reduced modern nuclear arsenals, severe climate effects are felt for at least ten years after a global conflict, especially due to the long lifetime of aerosols lofted into the stratosphere. They show that the atmospheric optical depth is increased severalfold for several years. The worst effects are confined to the northern hemisphere because the modelled conflict takes place over the US and Russia, though the entire planet is affected to a lesser extent.
A nuclear winter would dramatically increase the opacity of the atmosphere, and this process itself would be observable: if a planet with a previously transparent atmosphere (perhaps with an Earth-like spectroscopic signature) were observed again and its atmosphere found to be opaque, this would be a sign of a large dust event. However, such an event could also be caused by a large impact and therefore would not imply that a civilisation had caused the disaster (though it would be interesting in itself). If the atmosphere had not been observed before the event, the planet would simply seem to have an extremely dusty atmosphere. What would be crucial is measuring the relative change in the atmosphere as a result of nuclear detonation, ideally with the added bonus of identifying a weak gamma-ray or other high-energy emission in the vicinity of the planet.
Hence, to confirm that a planet had been subject to a global nuclear catastrophe would require the observation of several independent signatures in short succession. One signature on its own is unlikely to be sufficient, and could easily be caused by any number of other processes on planets with potentially no biological activity whatsoever. There are means of destruction beyond a global nuclear catastrophe that a spacefaring civilisation might inflict on itself, given that the destructive energy at its disposal would be far greater than that of nuclear weapons; these include redirecting asteroids. Such methods would be far more destructive than nuclear warfare but would generate observable signatures different from those of a naturally occurring impact event.
2.2 Biological Warfare
Biological warfare involves the use of naturally occurring, or artificially modified, bacteria, viruses or other biological agents to intentionally cause illness or death. The use of a naturally occurring pathogen in a global conflict would probably have a limited net effect on a global population. The destruction would be self-limiting: once a population is reduced in size, transmission from host to host becomes more difficult and the epidemic eventually ends. Artificially modified or created biological agents, however, could potentially push a civilisation to extinction…
…Assuming a global conflict took place that made use of this method of warfare on a planet that hosts an intelligent civilisation, we pose the question of whether the self-destruction of that species, via this method, could be remotely observable.
If we assume that the time between the release of the engineered virus and its global spread is very short and that the virus is potent enough that a civilisation becomes fully extinct, the environmental impacts of this scenario can be assessed. The actions of anaerobic organisms cause biomass to decay, releasing methanethiol, CH3SH (via production of methionine), as one of the products. This can be spectrally inferred and has no abiotic source. For a population with a similar biomass to the present human population (currently, in terms of carbon biomass, ~2.8 × 10^11 kg), the decay products can be estimated. Since the dry mass of a cell is approximately 50% carbon, the total human biomass would be 5.6 × 10^11 kg. With an estimated cell sulphur content of 0.3-1%, the maximum amount of S available to form CH3SH would be 5.6 × 10^9 kg. If 10% of this S is incorporated into methionine, all of which is then converted into methanethiol, this would result in a total CH3SH flux of ~10^8 kg.
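This chain of estimates can be reproduced in a few lines (an illustrative Python sketch of the arithmetic above; the molar-mass conversion from S to CH3SH is an added step that does not change the order of magnitude):

```python
# Back-of-envelope reproduction of the decay-product estimate above.
carbon_biomass = 2.8e11              # kg C, present human population
dry_biomass = carbon_biomass / 0.5   # cells ~50% carbon -> 5.6e11 kg

for s_frac in (0.003, 0.01):         # 0.3-1% cell sulphur content
    s_total = dry_biomass * s_frac   # S available, up to 5.6e9 kg
    s_methionine = 0.1 * s_total     # 10% of S incorporated into methionine
    # CH3SH (48.1 g/mol) is ~1.5x heavier than its S atom (32.06 g/mol):
    ch3sh = s_methionine * 48.1 / 32.06
    print(f"S fraction {s_frac:.1%}: CH3SH ~ {ch3sh:.1e} kg")
```

Both bounds give a CH3SH flux of a few × 10^8 kg, consistent with the order-of-magnitude figure quoted above.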
At the current biological production rate on Earth, this would be released to the atmosphere over a period of a year and would rapidly photodissociate, making this a very short-lived biosignature. One of the products of the decay of methanethiol is ethane (C2H6), which can be spectrally detected, but which has an atmospheric lifetime under Earth-like conditions of < 1 year, leaving a short window of time for detection. Additionally, if carrion-eating species were unaffected, they would reduce the amount of organic matter available for microbial decay, further reducing the final biosignature.
However, if the engineered virus could cross species barriers, then the total amount of dead biomass could be as high as 6 × 10^13 kg (the total animal biomass on Earth), potentially producing 10^11 kg of CH3SH, which would enter the atmosphere over a period of ~30 years. It is likely that, due to its short atmospheric lifetime, this atmospheric CH3SH would still not produce a detectable signature. However, the associated C2H6 absorption signature between 11-13 μm may lend itself to remote detection. This signature would be deeper and therefore more readily detectable if the CH3SH production rate were higher.
Other decay products include CH4, H2S, NH3 and CO2. The most promising biosignature gas for global bioterrorism is CH4. The CH4 flux to the atmosphere is related to ethane production, potentially increasing the C2H6 absorption signature… For the case where only humans can be infected, both signatures are short-lived, requiring observations to take place at exactly the right time for a detection to be made. In the case where the virus can cross species barriers, leading to the total annihilation of animal life, persistently high levels of these gases could make a detection more likely.
2.3 Destruction via ‘Grey Goo’
The terrestrial biosphere offers many examples of naturally occurring nanoscale machines. Feynman extolled the advantages of engineering at atomic scales. In Engines of Creation, Drexler described “nanotechnology” as a means of fabricating structures at nanoscales using chemical machinery. While the word now has a broader meaning, we can still consider the possibility that such a machine can be sufficiently general-purpose to be able to make a copy of itself.
Following Phoenix and Drexler we define an engineered system that can duplicate itself exactly in a resource-limited environment as a self-replicator. (NB: this strict definition excludes biological replicators, as they are not engineered.) The engineers of such machines have two broad choices as to what resources the self-replicator might use: resources that are naturally occurring in the biosphere, and resources that are not. Engineers that make the former choice run the risk of a “grey goo” scenario, where uncontrolled self-replication converts a large fraction of available biomass into self-replicators, collapsing the biosphere and destroying life on a world. This may be an accident or failure of oversight, or it may be due to a deliberate attack, where the replicators are specifically designed to destroy biomass (what Freitas refers to as “goodbots” and “badbots” respectively). In Engines of Creation, Drexler notes: “Replicators can be more potent than nuclear weapons: to devastate Earth with bombs would require masses of exotic hardware and rare isotopes, but to destroy all life with replicators would require only a single speck made of ordinary elements. Replicators give nuclear war some company as a potential cause of extinction, giving a broader context to extinction as a moral concern.”
Freitas places some important limitations on the ability of replicators to convert the biosphere into “grey goo” (land-based replicators), “grey lichen” (chemolithotrophic replicators), “grey plankton” (ocean-borne replicators) and “grey dust” (airborne replicators). With conservative estimates based on contemporary technology, it is suggested that if the replicators are carbon-rich, around a quarter of the Earth’s biomass could be converted in as little as a few weeks. Equally, Freitas estimates the energy dissipated by carbon conversion, implying that the resulting thermal signatures (local heating and local changes to atmospheric opacity) would be sufficient to trigger local defence systems to combat gooification. For example, in the case of malevolent airborne replicators, a possible defensive strategy is the deployment of non-self-replicating “goodbots” which unfurl a dragnet to remove them from the atmosphere.
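The speed of biosphere conversion follows from simple exponential doubling. In the sketch below the seed mass and doubling time are illustrative assumptions, not Freitas’s figures; the target mass corresponds to a quarter of a biomass of order 10^15 kg:

```python
import math

# Toy illustration of why uncontrolled self-replication is so fast.
seed_mass = 1e-12        # kg, a single microscopic replicator (assumed)
doubling_time_hr = 4.0   # hours per generation (assumed)
target_mass = 2.5e14     # kg, roughly a quarter of Earth's biomass

doublings = math.log2(target_mass / seed_mass)
weeks = doublings * doubling_time_hr / (24 * 7)
print(f"{doublings:.0f} doublings -> ~{weeks:.1f} weeks")
```

Even starting from a single microscopic seed, fewer than ninety doublings are needed, so a doubling time of hours converts the biomass in weeks, as Freitas’s estimate suggests.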
Phoenix and Drexler emphasise that all these variants of the grey goo scenario are easily avoidable, provided that engineers design wisely (and that military powers exercise restraint). Indeed, they indicate that fully autonomous self-replicating units are not likely to be the most efficient design choice for manufacturing, and that having a central control computer guiding production is likely to be safer and more cost-effective. Provided that the control computer is not separated by distances large enough to introduce time-lag, as would be the case on interplanetary scales, this seems to be reasonable.
However, this still leaves the risk of replicator technology being weaponised. We will assume, as we do throughout this paper, that prudence is not a universal trait in galactic civilisations, and that grey goo is a potential death channel that might be detected.
So what signatures might a grey goo scenario produce? If a quarter of the Earth’s biomass is converted into micron-sized objects, how would this affect spectra of Earth-like planets? This situation shares several parallels with the nuclear winter scenario described previously. In the case of grey goo, we may expect there to be a substantially larger amount of “dust”, as well as a fixed grain size. This will be deposited as sand dunes or suspended in the atmosphere, with similar spectral signatures to those previously discussed.
Depending on the grain size of the dunes, it may be possible to observe a brightness increase as the angle measured by the observer between the illumination source (the host star) and the planet decreases towards zero on the approach to secondary eclipse.
Surfaces that are composed of a large number of relatively small elements packed together will produce significant shadowing. This shadowing increases as the angle between the surface and an illumination source increases; as the angle decreases towards zero, the shadows disappear, resulting in a net increase in brightness. This is sometimes described as the opposition surge effect, or the Seeliger effect after Hugo von Seeliger, who first described it. Seeliger saw this shadow-hiding mechanism in Saturn’s rings, which grow brighter at opposition relative to the planetary disc. Coherent backscattering of light also plays a role in this brightening effect.
This phenomenon is observed in the lunar regolith, so it seems reasonable to expect that it would also act in artificially generated regoliths such as those we might expect from a grey goo incident. During exoplanet transits, it may be possible to detect an increase in the brightness of the system as the planet enters secondary eclipse. The Moon’s brightness increases by around 40% as it moves towards the peak of opposition surge, so it may well be the case that grey-goo planets produce opposition surges of similar magnitude. Buratti notes that the wavelength dependence of the surge is relatively weak, which suggests that near-IR observations may be sufficient to observe this phenomenon.
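A toy shadow-hiding model illustrates the effect. The functional form below is a Hapke-style opposition surge term; the amplitude B0 = 0.4 is chosen to reproduce the ~40% lunar surge quoted above, and the angular width h is illustrative:

```python
import math

def opposition_surge(alpha_deg, B0=0.4, h=0.05):
    """Toy Hapke-style shadow-hiding surge: relative brightness boost as
    the phase angle alpha approaches zero. B0 (amplitude) and h (angular
    width) are illustrative parameters, not fitted values."""
    alpha = math.radians(alpha_deg)
    return 1.0 + B0 / (1.0 + math.tan(alpha / 2.0) / h)

print(opposition_surge(0.0))    # 1.4 -> 40% brighter at zero phase
print(opposition_surge(30.0))   # surge mostly gone at large phase angle
```

The brightening is strongly concentrated near zero phase, which is why it would appear as a spike on the approach to secondary eclipse rather than as a gradual trend.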
On what timescale might we expect this artificial nano-sand to persist on a planetary surface? If the planet has an active hydrological cycle, airborne replicators will be incorporated into precipitation and delivered to the planet’s surface. The Earth’s Sahara desert transports away of order a billion tonnes of sand per year. Deposition into rivers and streams may deliver the material to oceans, and eventually the seabed, effectively removing it from view at interstellar distances. This material will be subducted into the mantle and reprocessed on geological timescales, removing all trace of engineering. Using Freitas’s estimate of available biomass, and assuming the nano-sand can be processed out of view at a few billion tonnes per year (which we propose as an upper limit), this suggests that a goo-ified planet may require several thousand years to refresh its surface. Processing rates may be accelerated or impeded by other physical processes, but it seems to be the case that goo-ified planets remain characterisable as such over timescales comparable to that of recorded human history.
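The resurfacing timescale amounts to dividing the converted mass by the removal rate. In this sketch the total biomass is an assumed round figure (estimates vary by orders of magnitude), while the removal rate is the Saharan transport figure quoted above:

```python
# Rough resurfacing timescale for a goo-ified planet.
biomass = 1e16           # kg, assumed order-of-magnitude global biomass
converted = 0.25 * biomass   # a quarter converted to nano-sand (Freitas)
removal_rate = 1e12      # kg/yr, ~a billion tonnes (Saharan transport rate)

years = converted / removal_rate
print(f"~{years:.0f} years")   # several thousand years
```

With these round figures the surface takes a few thousand years to refresh; a larger removal rate or smaller biomass shortens this, but the timescale remains long compared to recorded human history.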
2.5 Total Planetary Destruction
Finally, it is not inconceivable that a civilisation capable of harnessing large amounts of energy could unbind a large fraction (or all) of a planet’s mass. Kardashev Type II civilisations wishing to build a Dyson sphere require this capability to generate raw materials for the sphere — it is estimated that to create a Dyson sphere in the Solar System with radius 1 AU would require the destruction of Mercury and Venus to supply sufficient raw materials.
Equally, civilisations with access to this level of energy control and manipulation may decide to use it maliciously, destroying large parts of a planetary habitat while it is still occupied, and in the extreme case destroying the planet completely. This would release a significant fraction of the planet’s gravitational binding energy.
The Earth’s binding energy is of order 10^39 erg. This is again several orders of magnitude fainter than a typical supernova or GRB at 10^51 erg, but is large compared to the solar output — the Sun would require several days to radiate the same quantity of energy. Planetary destruction would likely produce a gamma-ray signature even stronger than that expected from the global nuclear war scenario described previously, and we may expect afterglows similar to those observed in other astrophysical explosions.
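The comparison with the solar luminosity is straightforward (illustrative Python, using standard values):

```python
# How long would the Sun take to radiate the Earth's binding energy?
E_BIND = 2.2e32      # J (~2e39 erg), Earth's gravitational binding energy
L_SUN = 3.828e26     # W, solar luminosity

days = E_BIND / L_SUN / 86400
print(f"~{days:.0f} days")   # ~7 days: "several days", as stated above
```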
The destruction of an orbiting body will produce a ring of debris around the central star, in a manner analogous to the production of rings when solid bodies cross the Roche limit of a larger body.
The subsequent evolution of this material will be similar to that of debris discs. As remnants of the planet-formation process, debris discs have been detected around a variety of stars, and the behaviour of grains of differing sizes under gravity and radiation pressure has been modelled in detail. It is likely that, if a terrestrial planet has been destroyed, the debris will be principally composed of silicates, and as such any detection of refined or engineered materials is unlikely, even if such matter survives the planet’s demise untouched.
The fate of the material depends largely on the local gravitational potential and the local radiation field, as well as the grain size distribution of the debris. Grains below the “blowout” size — typically a few microns — will be removed from the system via radiation pressure. Neighbouring planets may collect some of the remaining debris in resonances, while the rest grinds down into material of sufficiently small grain size that it either loses angular momentum through Poynting-Robertson drag and is consumed by the central star or a neighbouring planet, or gains momentum through radiative forces and is removed from the system.
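The blowout size follows from balancing radiation pressure against gravity (beta = F_rad/F_grav; grains released on circular orbits with beta > 0.5 are unbound). The sketch below uses solar values and assumed typical silicate grain properties:

```python
import math

# Blowout grain radius for a Sun-like star.
G, C = 6.674e-11, 2.998e8             # gravitational constant, speed of light
L_STAR, M_STAR = 3.828e26, 1.989e30   # solar luminosity (W) and mass (kg)
RHO, Q_PR = 2500.0, 1.0               # silicate density (kg/m^3), radiation
                                      # pressure efficiency (assumed values)

# beta = 3 L Q / (16 pi G M c rho a); setting beta = 0.5 gives:
a_blow = 3 * L_STAR * Q_PR / (8 * math.pi * G * M_STAR * C * RHO)
print(f"blowout radius ~ {a_blow * 1e6:.2f} micron")
```

This gives a sub-micron blowout radius for compact silicate spheres; porous or less dense grains push the figure up towards the few-micron scale quoted above.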
In any case, this death channel does not appear to be amenable to detection by Earth astronomers. If we are fortunate enough to witness the instant of destruction, then we may be able to speculate on the energies released in the event and search for a natural progenitor of such energy, i.e. another celestial body. Giant impacts between planet-sized bodies will produce the energies required to unbind or destroy one of the objects, as was the case for the impact which formed the Earth-Moon system. If such efforts fail, and no other explanation fits the observations, then we may tentatively consider extraterrestrial foul play.
The timescale for observing destruction as it happens will be short — perhaps a few days. The debris can be expected to persist for several centuries, but observing this is unlikely to elucidate its origins as a destroyed planet.
Prospects for Observing Civilisation Destruction

| Death Channel | Detection Method | Detectable Now? | Detectable in Future? | Signature Timescale |
| --- | --- | --- | --- | --- |
| Bioterrorism | Transit spectroscopy | Y | Y | 0-5 years |
| Grey Goo | Transit spectroscopy | Y | Y | 1-30 years |
| Stellar Pollution | Asteroseismology, … | N | Y | >1,000 years |
| Total Planetary Destruction | Debris Disk Imaging | Y | Y | <100,000 years |