The two main functions of sensors are navigational and tactical.
Navigational sensors are used by the astrogator to determine the spacecraft's current position, vector, and heading. They are also used by the pilot to perform the maneuvers calculated by the astrogator. Arguably a chronometer or other instrument to locate the spacecraft's current position in time is also a navigational sensor.
Tactical sensors are used to watch the region around the spacecraft. This is mostly to monitor nearby objects (such as meteors on a collision course or enemy spacecraft). Arguably this also includes solar-storm warnings which detect deadly incoming proton events.
Navigational and Tactical sensors are generally found on all spacecraft, unless the designer is trying really hard to economize. There are some more specialized sensors only found on more specialized spacecraft:
Remote Sensing suites are used to scan and analyze the surface of a planet, moon, or asteroid. These are found on specialized spacecraft such as exploration vessels, mine prospecting ships, survey ships, customs and other hunter-type ships, and spy ships.
Combat sensors have two main types. Strategic combat sensors detect hostile spacecraft at long range, giving advance warning of enemy attack (remember that There Ain't No Stealth In Space). Tactical combat sensors work at close range in a battle, guiding your weapons to the enemy targets (a "firing solution"), detecting incoming enemy weapons, and analyzing the enemy for weakness.
There are two broad classes of sensors: passive and active.
Passive sensors just detect any emissions from the target, i.e., they passively look for the target. Passive sensors include telescopes and heat sensors.
In some SF novels, passive sensors are called "sensors" while active sensors are called "scanners."
Note that in a combat situation, using active sensors allows you to be instantly detected and targeted by hostile spacecraft. If you don't care whether you are detected, or if they already know you are there, active sensors can reveal valuable information about hostile spacecraft.
Tactical sensors are used to watch the region around the spacecraft. This is mostly to monitor nearby objects (such as meteors on a collision course, enemy spacecraft, or incoming weapons). Arguably this also includes solar-storm warnings which detect deadly incoming proton events.
These are used to detect objects by their gravity. Typically in science fiction, they detect objects that are invisible (usually in science fiction written before the invention of radar). Dr. Robert Forward actually invented such a detector. He suggests using it to detect asteroid-mass black holes lurking in the centers of asteroids.
In C.J. Cherryh's Company Wars universe, ships use both radar and something called Longscan for detection and tactical information. Longscan helps cope with the lightspeed lag of radar. Its primary purpose is for spacecraft combat, but it has some civilian uses.
Remember the light-speed lag. Light moves quickly, but not at infinite speed. It takes about eight minutes to travel one astronomical unit. So if you are in orbit around Terra and you observe a spacecraft near the Sun with a telescope or radar, you are actually seeing where the ship was eight minutes ago. By the same token, if you change course it will be eight minutes before the Sun-grazing ship knows.
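As a back-of-the-envelope sketch, the lag is just distance divided by the speed of light (a minimal Python example; the speed-of-light and AU figures are standard values):

```python
# Light-speed lag: how stale is your sensor picture?
C = 299_792_458.0        # speed of light in vacuum, m/s
AU = 1.495978707e11      # one astronomical unit, m

def lag_seconds(distance_m):
    """Seconds between an event happening and your sensors seeing it."""
    return distance_m / C

print(lag_seconds(AU) / 60.0)  # about 8.3 minutes for one astronomical unit
```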
Remote Sensing is obtaining information about an object or phenomenon without touching it. Remote sensing is found on specialized spacecraft such as exploration vessels, survey ships, customs and other hunter-type ships, and spy ships. Remote sensing is used by asteroid miners trying to figure out the locations of valuable lumps of ore on an asteroid or moon, survey ships assessing potential colony planets, military spacecraft trying to identify hostile contacts, or when Mr. Spock is scanning for life-signs.
A Gamma ray spectrometer is often used by NASA in their space probes to map chemical element and isotope regions on a planet, moon, or asteroid. Such an instrument would be incredibly useful for an asteroid miner. Note that it can only detect elements, not compounds. The spectrometer cannot, for instance, detect water (H2O). It can, however, detect suspiciously large amounts of hydrogen in the same area as oxygen which may suggest the presence of water.
Current NASA instruments can probe to a depth of about 0.1 meters, have a range of a close orbit about one body radius from the surface, and require several months to gather enough readings to make worthwhile maps of the elemental composition. Presumably this performance can be improved. Of course the gamma signal strength can be increased if the range is reduced, say, by somebody on the ground using a man-portable instrument. Trying to do detection from orbit weakens the signal due to the inverse square law.
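The inverse-square penalty is easy to quantify. This sketch compares signal strength at two ranges; the specific ranges in the example are illustrative, not NASA figures:

```python
# Inverse-square law: gamma signal strength falls off as 1/range².
def relative_signal(range_m, reference_range_m=1.0):
    """Signal strength at range_m, relative to reference_range_m."""
    return (reference_range_m / range_m) ** 2

# A hypothetical man-portable unit 1 m from the rock, vs. an orbiter 500 km up:
print(relative_signal(500_000.0))  # the orbiter's signal is weaker by a factor of 2.5e11
```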
Detecting valuable deposits of elements is done by analyzing cosmogenic gamma rays (that is, gamma rays created by cosmic rays). Galactic cosmic rays (mostly high-energy protons) from outer space bombard the upper 0.1 meter layer of the asteroid or other celestial object. When a cosmic ray proton hits an atom of the object, it splits it, creating among other things a shower of fast neutrons. The neutrons collide with other atoms (inelastic collision) and are eventually absorbed by another atom (radiative neutron capture). Both of which cause the atom involved to emit a gamma ray (γ).
The important point is that the energy (and hence the frequency) of the gamma ray depends upon which element the atom belongs to. In other words, the energy is the "fingerprint" of that element. If you see a gamma ray with an energy of 6 MeV, you know it came from an oxygen atom.
All the gamma ray spectrometer does is detect gamma-ray photons, noting the energy of each and where on the asteroid it came from. The energies reveal what elements are at a given location, and the relative counts at different energies reveal the relative concentrations of the various elements. For instance, if you are getting twice as many 6 MeV gamma rays as 3 MeV gamma rays from a location, it means that location has oxygen and aluminum, and there is twice as much oxygen as aluminum.
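A toy readout illustrating the idea (the line energies follow the text's 6-MeV-oxygen / 3-MeV-aluminum example; a real spectrometer bins thousands of energy channels and corrects for detector response):

```python
# Toy gamma-ray spectrometer readout: match photon energies (MeV) against
# element "fingerprint" lines, then turn photon counts into relative abundances.
LINES = {6.0: "oxygen", 3.0: "aluminum"}  # illustrative lines from the text's example

def identify(counts_by_energy, tolerance=0.1):
    """counts_by_energy: {energy_MeV: photon_count} -> {element: fraction}."""
    totals = {}
    for energy, count in counts_by_energy.items():
        for line_energy, element in LINES.items():
            if abs(energy - line_energy) <= tolerance:
                totals[element] = totals.get(element, 0) + count
    grand_total = sum(totals.values())
    return {element: n / grand_total for element, n in totals.items()}

# Twice as many 6 MeV photons as 3 MeV photons from one map location:
print(identify({6.0: 2000, 3.0: 1000}))  # oxygen ≈ 0.67, aluminum ≈ 0.33
```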
Note that since this technique depends upon cosmic rays, it doesn't work well on planets where cosmic rays are sparse. On planets with thick atmospheres (e.g., Terra and Venus) the atmosphere stops most of the cosmic rays from reaching the surface (which is good news if you are living there). I suppose on such planets one could generate gamma rays to fingerprint if you sprayed the ground with a proton particle beam weapon. But that obviously has problems.
- Typical cosmic ray protons contain 40 million times the energy of particles accelerated by the Large Hadron Collider.
- Particle beam weapons are power hogs. Presumably an effective cosmogenic particle accelerator would have power requirements on the order of 40 million times that of the Large Hadron Collider.
- Particle beam weapons are, well, weapons.
- It is hard to set the proton strength high enough to get a good gamma signal but low enough so the gamma radiation doesn't kill the user and everybody standing nearby.
- If the proton strength is too high it will slice up the terrain, which could tend to upset people.
Gamma ray spectrometers are a scintillation counter rigged to be a spectroscope. That is, they are composed of a crystal which makes flashes of light when radiation strikes it, and a photomultiplier tube which watches the crystal and counts the flashes.
To operate as a spectrometer, the instrument has to be able to tell the energy of each gamma-ray photon. The brightness of each flash indicates the energy of the gamma ray that caused it (the higher the frequency of a gamma-ray photon, the higher its energy, and the more visible photons per gamma ray are created, which makes the flash brighter).
Lanthanum bromide (LaBr3) works poorly as a gamma detector crystal. It seems that some of the lanthanum atoms are radioactive isotopes, which means the blasted detector crystal is generating gamma rays. These internal gamma rays drown out the gamma signal from the planet or moon you are surveying.
The Lunar Prospector probe used a crude bismuth germanate (BGO) gamma detector crystal. The advantage was that the crystal did not require cryogenic cooling and was relatively inexpensive. It does require high-voltage vacuum tube photomultipliers due to the type of light flashes it emits.
The Kaguya probe used an ultra-high-resolution high-purity germanium (HPGe) detector crystal (the technical term is "high spectral resolution"). HPGe crystals are considered to be the "gold standard" among gamma detector crystals. The drawback is that they require cryogenic cooling, which requires mass for the cooling equipment (remember Every Gram Counts) and limits the gamma detector's usable lifespan to the on-board supply of coolant. The crystal is also much more expensive. It also requires high-voltage vacuum tube photomultipliers.
The cutting edge, hot new advanced gamma-ray detector crystal is europium-doped strontium iodide (SrI2). Pretty good spectral resolution, not too expensive, and it does not require cryogenic cooling. Another advantage is that its light flashes do not require high-voltage vacuum tube photomultipliers; you can use small, low-powered silicon photomultipliers. This allows the construction of an orbital element scanner that is small and low-powered enough to fit into a CubeSat. The drawback is that its spectral resolution is not as good as an HPGe crystal. But it is much better than a BGO crystal.
Burger Lab at Fisk University made a prototype of a CubeSat version of a SrI2 gamma-ray spectrometer built from off-the-shelf components that weighs only half a kilogram, fills 0.001 cubic meters of space and consumes about three watts of electricity yet can do the job of a full lab system that weighs 90 kilograms and fills 0.3 cubic meters of space. The crystal is only five centimeters long.
In Star Trek, when the Enterprise approaches a starship or planet, one of the first things Captain Kirk does is order a scan for life signs. This will reveal if anybody is lurking there. "Scanning for life-forms, Captain. We are reading life signs for nine Humans and a Klingon."
In Stargate Atlantis, the Lanteans in the Pegasus Galaxy have nifty little hand-held units called Life signs detectors. They can detect all sentient aliens within about 100 meters, as long as the user has the Ancient Technology Activation gene and they are not trying to detect a hibernating Wraith.
This is obviously a very useful sensor to have, but how the heck does it work?
Here are some possibilities.
The Star Fleet Medical Reference Manual says life signs sensors detect Kirlian radiation from the entity's Kirlian Aura. This would be a fine explanation, were the Kirlian Aura not revealed to be a steaming pile of Phlogiston back in 1979. This is just another form of the old notion that living creatures are living due to the presence of some sort of "life force" which is as-yet undetected by science. This discredited idea is called Vitalism.
One quick and dirty way to detect life is the fact that generally living things move. Even if they are not walking around, their hearts pump blood and their lungs pump air. Plants move somewhat slowly, and microorganisms move over short distances.
A slightly less quick and dirty technique is to somehow detect the presence of DNA. Which means the sensor will be oblivious to any life form with a biochemistry that doesn't use it.
Vulcans and humans (for instance) can be distinguished by their biosignatures. There will be differences in their heart rates (or whatever the acoustics are from their fluid pumping organs), heat signatures, breathing rates, exhaled gases, elements and organic molecules composing their bodies, biological chemical reactions, and the like. This would require remote sensing of sound and chemical composition. As an example, the human will have a sizable amount of iron in their bodies from the hemoglobin in their blood, while Vulcan blood is based on copper.
Laser fluorescence can detect and identify certain organic molecules on the surface of a planet (or the surface of an alien's skin).
In 2007 a company called Kai Sensors obtained a contract from the US Army to develop a unit called a LifeReader. This would use Doppler radar and sophisticated computer algorithms to detect and monitor multiple subjects by their individual heart rates, even through walls. Unfortunately the Kai Sensors company appears to have vanished.
Goddard Space Flight Center scientist Sam Floyd is working on the Neutron/Gamma ray Geologic Tomography (NUGGET) instrument. A beam of neutrons is focused through a neutron lens at a specific point inside the target object. As atoms inside the object at the point absorb neutrons, they produce a characteristic gamma-ray signal for that atom's element. NUGGET detects the gamma-ray signature, thus identifying the element at the target point. By sweeping the focus through the object in a regular pattern, a three-dimensional elemental plot of the object can be created. It can also measure the relative amounts of various element pairs (if there is more copper than iron you might have detected a Vulcan).
A partnership between the Department of Homeland Security's Science and Technology Directorate and the National Aeronautics and Space Administration's Jet Propulsion Laboratory designed a device called FINDER (Finding Individuals for Disaster and Emergency Response). It uses microwaves (1150 MHz or 450 MHz, L or S band) to detect the heartbeats of victims trapped in wreckage (for instance, a collapsed building after an earthquake). A microwave beam aimed at a pile of earthquake rubble, or at some other barrier hiding a human subject, can penetrate to reach the subject. The reflected wave from the subject is then modulated by the subject's body movements, including breathing and heartbeat. FINDER can detect heartbeats from people behind [A] 6 meters of solid concrete, [B] 9 meters of rubble, or [C] 30 meters of open space. After the April 25, 2015 earthquake in Nepal two prototype FINDERs managed to locate four men in two different locations who had been trapped under 3 meters of bricks for several days.
LifeReader would use Doppler radar and sophisticated computer algorithms to detect and monitor multiple subjects by their individual heart rates, even through walls. The project was apparently discontinued.
Range-R uses a similar principle to LifeReader. It is a motion detector using radio waves that can detect the presence of people and movements as small as human breathing at a range of 15 meters or so, even through solid walls. It will penetrate most common building wall, ceiling, or floor types including poured concrete, concrete block, brick, wood, stucco, glass, adobe, dirt, etc. However, it will not penetrate metal or walls saturated with water. These are actually currently being used by US police, which has raised alarms about possible Fourth Amendment abuses by law enforcement personnel.
Dan Slater is the lead technologist working on a new microphone technology called the remote acoustic sensor (RAS), which is capable of capturing sounds within extreme and often inaccessible aerospace environments. It is sensitive enough to detect the sound of microbes moving around. Things that are constantly moving are probably alive. The RAS could probably hear the sound of a Klingon's heartbeat, breathing, blood turbulence, and gastrointestinal rumbling. Probably in enough detail to distinguish the Klingon from a Vulcan or a Human.
Chen-Chia Wang et al have utilized something called an optical speckle-tolerant photo-EMF pulsed laser vibrometer (PPLV) for the detection of human heartbeats, breathing, and gross physical movement from essentially any part of a human subject's surface, even in the presence of clothing, all the while without limiting the interrogation points to specific locations like the chest and carotid areas.
Spy rays or spy beams are a jolly science fiction idea, apparently invented by the legendary E. E. "Doc" Smith in his novel Triplanetary (1934) and of course stolen by Gene Roddenberry for Star Trek. Adjust the setting on the spy-ray projector and you too can see and hear everything that happens inside an enclosed room at a remote location. It is like you have a magic invisible intangible TV camera you can position anywhere, regardless of intervening walls and obstacles. Currently they do not exist, but certain remote sensing technologies are getting real close.
Spy rays are popular with spies (of course), combat spacecraft trying to get intel on their opponents, military intelligence, criminal gangsters trying to get the inside dope on their targets and/or rival gangs, police especially in the same situations where they'd have an agent "wearing a wire", astromilitary trying to obtain the details of the enemy's new secret weapon breakthrough, industrial espionage, and so on. Not to mention peeping toms. In E. E. "Doc" Smith's Lensman series, warships would use spy rays to locate the enemy ship's crew at their control panels, then use needlebeams to vaporize the control panels (and probably the hapless crewmember at each panel).
Naturally this leads to an arms race, with the creation of "spy-ray blocks" to foil spy rays (sort of like a jammer designed to defeat hidden listening devices), and improved spy rays to defeat the spy-ray blocks.
Spy-ray blocks are naturally popular with the targets of the activities listed in the previous paragraph. Note that the targets of spies include diplomats, top-secret development labs, organized-crime mob bosses, military planning offices, and enemy spies.
Technologies that are almost spy rays include:
- LifeReader would use Doppler radar and sophisticated computer algorithms to detect and monitor multiple subjects by their individual heart rates, even through walls. The project was apparently discontinued.
- Range-R uses a similar principle to LifeReader. It is a motion detector using radio waves that can detect the presence of people and movements as small as human breathing at a range of 15 meters or so, even through solid walls. It will penetrate most common building wall, ceiling, or floor types including poured concrete, concrete block, brick, wood, stucco, glass, adobe, dirt, etc. However, it will not penetrate metal or walls saturated with water. These are actually currently being used by US police, which has raised alarms about possible Fourth Amendment abuses by law enforcement personnel.
- Laser Microphones use a remote laser beam to monitor the vibrations of an object inside a room, creating an impromptu microphone suitable for obtaining intel on drug deals and other conspiracies. Rippled glass windows can defeat laser microphones. But in theory the principle can be adapted to use microwaves, which means it can only be defeated by acoustically isolating the room and surrounding it with a Faraday Cage.
- Time-of-Flight Microwave Cameras use a parabolic antenna as a lens to actually capture crude images through walls.
Patterns of specific readings from one or more sets of sensors can indicate the presence of an object or event, giving meaning to the raw readings. These are called Signatures.
For instance, if your seismometer indicates a small earthquake and the atmospheric radiation meter records an abrupt rise in atmospheric radiation, you can be pretty sure that a nuclear explosion has happened. The two readings correlated in time is the signature of a nuclear detonation.
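The correlation logic can be sketched in a few lines (the ten-second window and the two-sensor rule are illustrative assumptions, not real monitoring parameters):

```python
# Sketch: flag a nuclear-detonation signature when a seismic spike and an
# atmospheric-radiation spike correlate in time.
def detonation_signature(seismic_events, radiation_events, window=10.0):
    """Each argument is a list of spike timestamps (seconds).
    Returns the seismic timestamps that have a radiation spike within `window`."""
    return [t for t in seismic_events
            if any(abs(t - r) <= window for r in radiation_events)]

# Seismic spikes at t=120 s and t=900 s; radiation spike at t=123.5 s:
print(detonation_signature([120.0, 900.0], [123.5]))  # -> [120.0]
```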
Spectral signatures are a spectrum of intensities of various frequencies of electromagnetic radiation which identifies elements and compounds. The signature is the spectral "fingerprint" of an element. The instrument used is called an optical spectrometer or spectroscope.
In other words Meteor Mike the rock-rat can point his ship's spectroscope at an asteroid and say "Hot Rockets! Thar's gold in them thar rocks!"
In 1835 positivist French philosopher Auguste Comte foolishly defied Clarke's first law and stated: "We will never know how to study by any means the chemical composition (of stars), or their mineralogical structure."
Comte should have known better. Joseph von Fraunhofer had discovered the beginning of how to do just that in 1814, twenty-one years earlier. The discovery is now called the Fraunhofer lines; Fraunhofer had basically invented the spectroscope.
The point is if you are a scientist in those primitive days before rocket propelled space probes, the only thing you can get from planets, the Sun, and the stars is electromagnetic radiation. Since that is all you get, you would do well to analyze that radiation with a fine-tooth comb. Which is what a spectroscope does.
Back in the 1660s people knew that you could make a rainbow by passing a ray of white light through a prism, and you should know that too if you have a copy of Pink Floyd's Dark Side Of The Moon album. But back then they figured that white light was white and the glass prism was somehow staining it with various colors. In 1666 super-genius Isaac Newton proved that the prism wasn't staining anything; the white light ray was actually composed of a mix of colored light. For one thing you could use one prism to turn a white ray into a rainbow, then use a second upside-down prism to turn the rainbow back into a white ray. This doesn't make sense if the prism is staining the light. Newton wrote up his results in a book called Opticks, which is considered to be one of the greatest works of science in history. Arguably the greatest is Newton's other work Philosophiæ Naturalis Principia Mathematica, which among other things contains his law of universal gravitation and his laws of motion so near and dear to spacecraft astrogators. But I digress.
The point is the prism is taking all the various frequencies of light in the ray and separating them. Which allows you to analyze the ray with a fine-tooth comb. You can check which light frequencies are present and which are absent, and the relative strengths of each. This is the basis of the spectroscope.
Glass prisms have limitations when used in a spectroscope; they were eventually replaced with diffraction gratings once the latter had been invented.
Elements heated inside a blazing star emit a unique pattern of frequencies which is the signature (fingerprint) of that element. These are the Fraunhofer lines. Thus the spectroscope can stick its tongue out at Comte and routinely determine the chemical composition of stars and planets.
As a matter of fact, the element Helium was discovered by spectroscope on the Sun before it was discovered on Terra. Several astronomers spotted a previously unknown Fraunhofer line at a wavelength of 587.49 nanometers in the solar corona spectrum. It was named "helium" after the Greek word for the Sun: ἥλιος (helios). It wasn't discovered on Terra until 27 years later. Chemist Sir William Ramsay discovered it in a chunk of cleveite when his spectroscope spotted the tell-tale Fraunhofer line.
Sometimes this backfires, though. In 1869 astronomers spotted another previously unknown line at 530.3 nanometers in the solar corona spectrum. Aha! Obviously another unknown element, discovered by the awesome power of spectroscopy! They named it Coronium, though Dmitri Mendeleev renamed it Newtonium.
However in the 1930s researchers discovered that the unknown line was actually due to highly ionized iron, not from a mystery element. Up until then scientists could not ionize elements quite as extremely as obtains in the solar corona. Bye-bye Coronium. This also explained quite a few other mystery lines that had been observed.
I first encountered Coronium in Fletcher Pratt's novel Alien Planet, which was written before Coronium was discovered to be a myth. In the novel the alien visitor needs Coronium to refuel his spaceship. Since it does not exist on Terra, he makes do with helium for a short hop to the planet Mercury. There he harvests the mythical Coronium from the solar wind. Which goes to show that reading science fiction can be educational, but you need a crib sheet to separate the good science from the obsolete science.
If a line is bright it is an emission line; if it is black it is an absorption line. But either way they are the fingerprint signatures of the elements. When looking at the spectrographs, it doesn't matter if a line is bright or black; the position is the important thing. An emission spectrum has a black background with bright emission lines in various colors. An absorption spectrum has a rainbow background with black lines in various positions.
Since objects like the Sun are composed of lots of elements, all their signatures will be overlaying each other. The spectrum will look like a herd of rainbow zebras. But astronomers have become quite skilled at untangling the signatures.
Another important point is that the pattern of signatures from a given star is almost as unique as a fingerprint. Thus while the set of lines are fingerprints of an individual element, the set of all the lines (and their relative intensities) is the fingerprint of a star.
This can be important in science fiction stories where a starship winds up in an unknown location (because of the normal operation of a "jump" drive, or by some technobabble hyperspace accident). The spectrum of three stars will identify those stars. The astrogator's navigational tables will have the exact location of those stars in the galaxy. By measuring the angles between the stars, the starship's location can be calculated by triangulation.
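A crude sketch of the idea: given catalog positions for three identified stars and the measured angles between them, pick the candidate position whose predicted angles best match. Real astrogation software would use a proper least-squares solver rather than a list of guesses; this is just to show the geometry.

```python
import math

def angle_between(p, star_a, star_b):
    """Angle (radians) at position p between the directions to two stars."""
    va = [a - q for a, q in zip(star_a, p)]
    vb = [b - q for b, q in zip(star_b, p)]
    dot = sum(x * y for x, y in zip(va, vb))
    mag = math.dist(star_a, p) * math.dist(star_b, p)
    return math.acos(max(-1.0, min(1.0, dot / mag)))

def fix_position(stars, measured_angles, candidates):
    """stars: catalog coordinates of the three identified stars.
    measured_angles: observed angles [theta01, theta02, theta12].
    candidates: positions to test; returns the best-fitting one."""
    pairs = [(0, 1), (0, 2), (1, 2)]
    def error(p):
        return sum((angle_between(p, stars[i], stars[j]) - m) ** 2
                   for (i, j), m in zip(pairs, measured_angles))
    return min(candidates, key=error)
```

With catalog stars at (10,0,0), (0,10,0), (0,0,10) and the ship actually at (1,2,3), feeding the angles measured from (1,2,3) back into `fix_position` picks (1,2,3) out of a set of candidate positions.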
But spectroscopes can do so much more than just identify the elemental composition of the object.
In theory astronomers can tell if a star is approaching, receding, rotating, or orbiting another star by observing the Doppler effect with a spectroscope. This is impossible to detect if you have a featureless spectrum of light from the star in question. Kind of like trying to measure something using a featureless ruler with no indicator marks.
Fraunhofer lines to the rescue! They put identifiable marks on the star's spectrum. Suddenly your ruler has lines on it. Now you can see a red shift or blue shift by looking at the position of an element's Fraunhofer lines.
So you can take a photograph through a spectrograph of a star's spectrum while simultaneously photographing the spectrum of (say) some burning sodium in the lab (a "comparison spectrum"), side by side on the same piece of photographic film (yes, kiddies, back in the days when dinosaurs roamed the planet people used photographic film instead of digital cameras to take their selfies). On the photo you can then measure the displacement between the lab's sodium signature and the sodium signature in the star's spectrum. A quick calculation and you know how fast the star is moving relative to Terra.
Obviously nowadays they use electronic photosensors instead of photographic film, but the principle is the same. Instead of a photo, the spectrum is displayed as a jagged line in a graph: more accurate, but nowhere near as pretty as a rainbow.
Fraunhofer used sodium lines for his lab comparison spectrum because they are easily produced by sprinkling common table salt into a Bunsen burner flame. Electronic photosensors do not need a comparison spectrum because they can directly measure the exact frequency of a given Fraunhofer line.
If the signature is shifted toward the red end of the spectrum (with respect to the comparison spectrum), the object has a "red-shift" and is moving away from you (technically its vector has a radial component, if you are nit-picky). Shifting the other way is a "blue-shift", meaning the object is approaching.
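The quick calculation is just the non-relativistic Doppler formula (fine for ordinary stellar speeds); the sodium wavelength and shift in the example are illustrative numbers:

```python
# Radial velocity from the shift of a known Fraunhofer line.
C = 299_792.458  # speed of light, km/s

def radial_velocity(lab_nm, observed_nm):
    """Positive = red-shifted = receding; negative = approaching."""
    return C * (observed_nm - lab_nm) / lab_nm

# A sodium line at 589.0 nm in the lab, observed at 589.1 nm in the star:
print(radial_velocity(589.0, 589.1))  # about +50.9 km/s, receding
```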
Due to Hubble's Law, for objects like galaxies which are farther away than 10 megaparsecs or so, you can use the red-shift to measure the distance to the galaxy. Which is real convenient; other measurement techniques are a pain in the posterior to utilize, and give fuzzy results.
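Hubble's Law is just distance = recession velocity divided by the Hubble constant. The H0 value below is an assumed round figure; the measured value is still being argued over:

```python
# Distance from red-shift via Hubble's Law: d = v / H0.
H0 = 70.0  # Hubble constant, km/s per megaparsec (assumed round value)

def hubble_distance_mpc(recession_km_s):
    """Distance in megaparsecs from the recession velocity."""
    return recession_km_s / H0

# A galaxy receding at 7,000 km/s:
print(hubble_distance_mpc(7000.0))  # -> 100.0 megaparsecs
```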
Not only can you use red/blue shift on objects, but also on parts of objects. Say Planet X is spinning according to the right hand rule. When you look at it through a telescope, if "north" is upward, then the right edge of the planet will be receding from you, and the left edge of the planet will be approaching you. So if you measure the red/blue shift of each limb of the planet, you can calculate how rapidly it is spinning.
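The limb-shift arithmetic can be sketched directly; the 500 nm line and the Jupiter-like shift values are illustrative numbers, not a real observation:

```python
# Planetary spin rate from limb Doppler shifts: the approaching limb is
# blue-shifted and the receding limb red-shifted; half the limb-to-limb
# velocity difference is the equatorial rotation speed.
C = 299_792.458  # speed of light, km/s

def equatorial_speed_km_s(lab_nm, approaching_limb_nm, receding_limb_nm):
    v_blue = C * (approaching_limb_nm - lab_nm) / lab_nm  # negative (approaching)
    v_red = C * (receding_limb_nm - lab_nm) / lab_nm      # positive (receding)
    return (v_red - v_blue) / 2.0

# A 500 nm line shifted by ±0.021 nm at the two limbs:
print(equatorial_speed_km_s(500.0, 499.979, 500.021))  # about 12.6 km/s, Jupiter-like
```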
There are some binary stars where the two stars are so close that the telescope cannot resolve them (translation: it looks like a single star to the scope). But a spectroscope can reveal the truth. Using the spectroscope, astronomers will see not one but two sets of Fraunhofer lines. By observing how the two sets move back and forth relative to each other, the two stars' orbital period can be determined. Such stars are called spectroscopic binaries.
You can even tell if the star has a strong magnetic field. The Zeeman effect is when the signature Fraunhofer lines are split in the presence of a magnetic field. The stronger the field, the wider the split.
This is why books about amateur astronomy tell you a plain old telescope is only useful for seeing stars as bright dots (or watching the co-eds undress through the dormitory windows). But add a spectroscope to your telescope and suddenly you've got a real live scientific instrument that you can do real science with.
Patterns of sensor readings that will detect the detonation of a nuclear weapon
Project Vela was a 1950s DARPA research program that was accelerated with the advent of the 1963 Partial Test Ban Treaty. The Vela satellites were designed to monitor compliance with the treaty, detecting the signature of nuclear tests.
The first 41 nuclear detonations detected by the Vela Hotel satellites were all confirmed. Detection number 42, the South Atlantic Flash or Vela Incident, is still highly disputed to this day.
The Vela satellites carried 12 external X-ray detectors and 18 internal neutron and gamma-ray detectors. They were also equipped with sensors which could detect the electromagnetic pulse from an atmospheric explosion.
Finally they had two non-imaging silicon photodiode sensors called bhangmeters which monitored light levels over sub-millisecond intervals. They could determine the location of a nuclear explosion to within about 3,000 miles. Atmospheric (not vacuum) nuclear explosions produce a unique signature, often called a "double-humped curve": a short and intense flash lasting around 1 millisecond, followed by a second, much more prolonged and less intense emission of light taking a fraction of a second to several seconds to build up. The effect occurs because the surface of the early fireball is quickly overtaken by the expanding atmospheric shock wave, composed of ionised gas. Although the shock wave emits a considerable amount of light itself, it is opaque and prevents the far brighter fireball from shining through. As the shock wave expands, it cools down and becomes more transparent, allowing the much hotter and brighter fireball to become visible again.
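The double-hump test can be sketched as simple curve logic (the half-peak dip threshold and the brighter-second-hump rule are illustrative, not actual Vela flight parameters):

```python
# Sketch of bhangmeter logic: flag a "double-humped" light curve --
# a first peak, a dip while the shock front masks the fireball, then a
# second, brighter rise as the shock wave turns transparent.
def double_humped(samples):
    """samples: light intensity at fixed sample intervals."""
    if len(samples) < 3:
        return False
    i = 0
    while i + 1 < len(samples) and samples[i + 1] >= samples[i]:
        i += 1                      # climb to the first peak
    first_peak = samples[i]
    dipped = False
    for level in samples[i + 1:]:
        if level < first_peak / 2:
            dipped = True           # shock front has masked the fireball
        elif dipped and level > first_peak:
            return True             # fireball shines through again, brighter
    return False

print(double_humped([1, 5, 9, 4, 2, 3, 8, 15, 20]))  # -> True (two humps)
print(double_humped([1, 5, 9, 7, 5, 3, 2, 1, 0]))    # -> False (single flash)
```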
No single natural phenomenon is known to produce this double-humped curve signature, although there was speculation that the Velas could record exceptionally rare natural double events, such as a meteoroid strike triggering a lightning superbolt in the Earth's atmosphere.
Patterns of sensor readings that will detect the presence of a technological civilization on a planet. This is very important, because contacting an alien civilization means you are gambling with the extinction of the human species. The term "technosignature" was apparently coined by Dr. Jill Tarter.
These are the signatures of a slower than light starship traveling from star to star.
This paper examines the possibility of detecting extraterrestrial civilizations by means of searching for the spectral signature of their interstellar transportation systems. Four methods of interstellar propulsion are considered: antimatter rockets, fusion rockets, fission rockets, and magnetic sails. The types of radiation emitted by each of these propulsion systems are described, and the signal strength for starships of a characteristic mass of 1 million tons traveling at speeds and acceleration levels characteristic of the various propulsion systems is estimated.
It is shown that for the power level of ships considered, the high energy gamma radiation emitted by the antimatter, fusion and fission propulsion systems would be undetectable at interstellar distances. Bremsstrahlung radiation from the plasma confinement systems of fusion devices might be detectable at distances of about 1 light-year. Visible light emitted from the radiators of an antimatter driven photon rocket might be detectable by the Hubble Space Telescope at a distance of several hundred light-years provided the rocket nozzle is oriented towards the Earth.
The most detectable form of starship radiation is found to be the low frequency radio emissions of cyclotron radiation caused by interaction of the interstellar medium with a magnetic sail. A space-based antenna with a 6 km effective diameter could detect the magsail emission of a characteristic starship at distances of up to several thousand light-years. Both photon rockets and magnetic sails would emit a signal that could easily be distinguished from natural sources.
We conclude that the detection of extraterrestrial civilizations via the spectral signature of their spacecraft is possible in principle.
ASSUMED STARSHIP CHARACTERISTICS
For purposes of the present analysis we consider four methods of interstellar propulsion, the principles of which are fairly well understood. These are antimatter rockets, fusion rockets, and fission rockets, all of which can be used to either accelerate or decelerate a spacecraft, and magnetic sails, which can be used to decelerate a spacecraft by creating drag against the interstellar medium. We also assume that the extraterrestrials have a physiological scale and lifespan comparable to humans.
The temporal and physical parameters of the extraterrestrials help define the desirable speed and size of their starships. If it is desired that an interstellar journey be completed within the working lifetime of adults who commence it, and if the characteristic distance between stars is about 6 light years, then a starship should be able to attain a velocity on the order of 10% of the speed of light (0.1 c), which also implies rocket exhaust velocities of the same order. If excessive time is not to be wasted accelerating and decelerating, then it is desirable that the acceleration and deceleration phases each require no more than about 25% of the trip time, which would imply average accelerations on the order of 0.1 m/s². Finally, since technological creatures must also be social creatures, a fair-sized crew may be desired for such a long voyage and the tasks of colonization to follow.
These considerations, combined with the need for shielding the crew against both cosmic rays and near relativistic interstellar particles, imply that the optimum vessel for interstellar travel may be a ship of considerable mass. The maximum size spacecraft that humans can seriously consider assembling on orbit today is one on the order of 1,000 tons. Such small size is due, however, to current launch vehicle limitations; once in-space manufacturing of components is developed, much larger spacecraft will become practical. A better yardstick for estimating the scale of spacecraft of an advanced spacefaring civilization would be humanity’s recent progress in the construction of ships to sail the Earth’s oceans. This is illustrated in Table 1.
Table 1. Progress in Nautical Engineering

| Ship | Date | Tonnage |
| --- | --- | --- |
| Santa Maria | 1492 | 150 tons |
| San Martin | 1588 | 1,100 tons |
| HMS Victory | 1803 | 2,200 tons |
| HMS Dreadnought | 1900 | 20,000 tons |
| USS Missouri | 1943 | 65,000 tons |
| USS Enterprise | 1965 | 100,000 tons |
| current supertanker | 1990 | 400,000 tons |
On the basis of the trend illustrated in Table 1, we postulate a mass for a “typical” starship on the order of 1,000,000 tons. Taking this assumption together with the exhaust velocity (0.1 c) and acceleration performance (0.01 g’s), we find that such a starship would require a thrust of 100 MN (22.4 Mlbf) and a power of 1,500 TW.
While the thrust required of this starship is only three times that employed on a Saturn V, the amount of power used is remarkable, equal to 0.9% of the total amount of sunlight falling on the Earth, and only 11 orders of magnitude less than the total output of the Sun. For purposes of comparison, humanity today collectively uses about 12 TW, and the most powerful propulsion system ever built, that on the S-IC stage of the Saturn V, had a power output of 0.1 TW. On the other hand, humanity's power production is currently growing at a rate of 2.6% per year. If this rate continues, we will produce 1,500 TW around the year 2180, and 30,000 TW in 2300. Furthermore, the maximum size of individual power plants has been growing at a rate of 2 orders of magnitude per century for the past two hundred years. Thus if present trends continue, the apparently astronomical power required of our standard starship should be common in 3 or 4 centuries, a blink of an eye on the cosmic time scale.
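The extrapolation dates can be checked with compound growth. A quick sketch, taking the paper's 12 TW baseline and 2.6% annual growth, and assuming a base year of 1992 (roughly the paper's era; the excerpt does not state it):

```python
import math

# Compound-growth check: when does 12 TW at 2.6%/yr reach starship power?
BASE_TW = 12.0      # present world power use, from the text
GROWTH = 0.026      # 2.6% per year, from the text
BASE_YEAR = 1992    # assumption: approximate publication era

def year_reached(target_tw):
    years = math.log(target_tw / BASE_TW) / math.log(1 + GROWTH)
    return BASE_YEAR + years

print(round(year_reached(1500)))    # ~2180, as the text states
print(round(year_reached(30000)))   # ~2300
```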
FORMS OF STARSHIP RADIATION
Depending upon the propulsion system employed, a starship could reveal itself via various forms of radiation.
If antimatter is employed, then after several intermediate, but very short time scale, reactions, about 40% of the total energy will be released in the form of very hard gamma rays with energies between 130 and 350 MeV. It would be both difficult and undesirable to attempt to block all of these rays from escaping the starship structure, and thus the primitive proton-proton annihilation spectrum could be expected to be radiated to space. To obtain the high specific impulse necessary for interstellar flight, the antimatter would have to either be used to heat a plasma, presumably magnetically confined, or used to heat a radiator to produce thrust in the form of photons. If a plasma confinement system is used, there will be both cyclotron and bremsstrahlung radiation, which will be broadcast to space. In order to obtain the maximum specific impulse in an antimatter-fed plasma drive, the plasma will be heated to several MeV, and will thus produce bremsstrahlung gammas in this energy range. The cyclotron radiation frequency is determined solely by the strength of the magnetic field employed. If the field strength is 5 tesla, then there will be electron cyclotron radiation at 140 GHz and higher harmonics, along with ion cyclotron radiation at 80 MHz and higher harmonics.
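Those two frequencies follow directly from the cyclotron relation f = qB/(2πm). A quick check, assuming the ion line is set by protons:

```python
import math

# Cyclotron frequency f = qB / (2*pi*m) for the 5 T confinement field above.
Q = 1.602e-19      # elementary charge, C
M_E = 9.109e-31    # electron mass, kg
M_P = 1.673e-27    # proton mass, kg (assumption: hydrogen plasma)

def cyclotron_hz(b_tesla, mass_kg):
    return Q * b_tesla / (2 * math.pi * mass_kg)

print(f"electron: {cyclotron_hz(5, M_E) / 1e9:.0f} GHz")  # ~140 GHz
print(f"proton:   {cyclotron_hz(5, M_P) / 1e6:.0f} MHz")  # ~76 MHz; the text rounds to 80
```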
If photon propulsion is employed, about half of the hard gamma radiation plus all of the rest of the annihilation energy will be thermalized to heat which will be radiated to space by a set of radiators. Because the amount of power that can be radiated goes as temperature to the fourth power, it is highly advantageous to run the radiator as hot as possible. The maximum temperature of the system is governed by the long duration temperature limits of materials, which based upon our current knowledge would be about 2,400 K (for tungsten). Radiators operating at this temperature will emit strongly in both the visible and infrared portions of the spectrum. In order to maximize the useful thrust, reflectors will be used to channel the emitted photons into as small a cone angle as possible.
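The "run the radiator as hot as possible" argument can be made concrete with the Stefan-Boltzmann law. A sketch, assuming a perfect blackbody radiating from one side only (both simplifications):

```python
# Power radiated per unit area goes as T^4, so the 2,400 K tungsten limit
# sets how much radiator area a photon rocket needs.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def flux_w_per_m2(t_kelvin):
    return SIGMA * t_kelvin ** 4

hot = flux_w_per_m2(2400)          # ~1.9e6 W/m^2 at the tungsten limit
area_km2 = 1.2e17 / hot / 1e6      # area to reject 120,000 TW, in km^2
print(hot, area_km2)
```

Even at the material limit the 120,000 TW photon rocket of Table 2 needs tens of thousands of square kilometers of radiator, which is why the radiators dominate the ship's visible signature.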
If thermonuclear fusion power is employed, there will be cyclotron radiation, and bremsstrahlung whose frequency will be governed by the plasma temperature spectrum. The optimum fusion reaction for interstellar rocket propulsion may well be D-He3, since nearly all of the energy it releases is in the form of charged particles whose momentum can be converted to thrust. The products of this reaction, a proton and an alpha particle, are released with energies of 18 and 4.5 MeV respectively, and thus some gamma rays may be expected with energies in this range. However, the optimum power/mass ratio fusion reactor will be realized if the plasma temperature is kept such that the ratio of the reaction rate parameter divided by the square of the temperature, ⟨σv⟩/T², is maximized. For the D-He3 reaction, this will occur at an average plasma temperature of about 60 keV. The bremsstrahlung emittance from such a reactor will thus be dominated by X-rays in this energy range.
Fission could be employed either as an electric propulsion system or as a sort of plasma drive using a variety of techniques. Whatever the conversion system, the unique signature of a fission drive would be the well known spectrum of prompt and delayed fission gamma rays, in the 0.5 to 5 MeV range, collectively accounting for about 14% of the total output produced by the reactor. If fission is used in a plasma drive it could also be easily distinguished from either a fusion or an antimatter system by its ion cyclotron radiation, about 2 orders of magnitude lower in frequency than that of the alternatives due to the high atomic mass of the magnetically directed fission products. If the fission source is used for electric propulsion, then its radiators will operate at a lower temperature compared to those possible in a photon rocket, because the efficiency of the electric propulsion conversion system will go to zero as the radiator temperature approaches the temperature of the hot side of the energy conversion cycle. It can be shown analytically that the optimum ratio of the maximum temperature (absolute) to the radiator temperature in a space electric conversion system is 4:3. Thus an advanced electric propulsion system would operate with a radiator temperature at around 1,800 K, emitting in the visible and IR.
A magnetic sail (or “magsail” ) would be of unique value to an interstellar spacecraft because of its ability to decelerate a ship without the use of propellant. The magnetosphere of the magsail will create a standoff collisionless bowshock, which will heat the interstellar medium it encounters to hundreds of keV to a few MeV, depending upon the ship’s velocity. The plasma so created will then encounter the magnetic field of the magsail, where it will emit cyclotron radiation.
The cyclotron frequency emitted by a magsail is not a function of spacecraft design, but only of the ship’s velocity and the density of ions in the interstellar medium. At a density of 1 ion/cc, a ship traveling at 0.1 c would produce electron cyclotron radiation at a frequency of about 12 kHz.
MAGNITUDE OF STARSHIP RADIATION
The magnitude of the radiation that a starship will emit is a function of the magnitude of the power of its rocket engine, as well as of the engine design. For some types of engines, the fraction of jet power emitted as certain types of radiation can be calculated accurately. Where such information is lacking, we have assumed that the magnitude of a major type of radiation generic to an engine is 10% of the jet power.
The jet power is calculated as follows: if we assume a characteristic distance of 6 light-years for an interstellar voyage, and let the acceleration time equal 1/4 the trip time, then since 1 g is approximately the acceleration required to reach the speed of light in a year, we have

S = 6 ly = V(t/2) + (t/2)(V/2) = (3/4)Vt, or t = 8/V

A = 4V/t = V²/2

where V is the maximum cruise velocity (in units of c), t is the trip time (in years), S is the trip distance, and A is the time-averaged acceleration (in g's).
We also assume that a more advanced starship technology would only be employed to achieve performances substantially beyond what a more primitive technology might do. Thus since fusion can achieve a velocity of 0.1 c, we place a demand on an antimatter plasma rocket that it be used to achieve 0.2 c. Combining these with
Pjet = FU/2 = MAU/2

where F is the engine thrust and U is the exhaust velocity. Assuming a time-averaged mass during the burn, M, of 10^9 kg, we find the characteristic power level required of each of the technologies discussed.
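The sizing chain above (A = V²/2 in g's with V in units of c, then Pjet = MAU/2) can be checked numerically for the rocket rows of Table 2. A sketch, using the paper's 10^9 kg averaged ship mass:

```python
# Reproduce the rocket-row jet powers of Table 2 from t = 8/V, A = V^2/2.
C = 3.0e8   # speed of light, m/s
G = 9.81    # 1 g, m/s^2
M = 1.0e9   # time-averaged ship mass, kg (1 million tons)

def jet_power_tw(u_frac, v_frac):
    a = (v_frac ** 2 / 2) * G            # time-averaged acceleration, m/s^2
    return M * a * (u_frac * C) / 2 / 1e12

for name, u, v in [("fission electric",  0.02, 0.02),
                   ("fusion plasma",     0.08, 0.10),
                   ("antimatter plasma", 0.20, 0.20),
                   ("antimatter photon", 1.00, 0.40)]:
    print(f"{name}: {jet_power_tw(u, v):,.0f} TW")
# Reproduces Table 2's 6 / 600 / 6,000 / 120,000 TW to within rounding.
```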
DETECTION OF STARSHIP RADIATION
Table 2. Characteristic Power Levels of Interstellar Propulsion Systems

| Technology | U/c | V/c | A (g's) | Pjet (TW) | Prad (TW) |
| --- | --- | --- | --- | --- | --- |
| fission electric | 0.02 | 0.02 | 0.0002 | 6 | 1 at 1-5 MeV; 18 at visible, IR |
| fusion plasma | 0.08 | 0.1 | 0.005 | 600 | 60 at 1-100 keV; 60 at cyclotron |
| antimatter plasma | 0.2 | 0.2 | 0.02 | 6,000 | 4,000 at 200 MeV; 600 at 20 MeV; 600 at cyclotron |
| antimatter photon | 1.0 | 0.4 | 0.08 | 120,000 | 40,000 at 200 MeV; 120,000 at visible, IR |
| magsail-fission | — | 0.02 | 0.0003 | 18 | 2 at 80 keV; 2 at 2.4 kHz |
| magsail-fusion | — | 0.1 | 0.026 | 780 | 80 at 2 MeV; 80 at 12 kHz |
| magsail-AM plasma | — | 0.2 | 0.0065 | 4,000 | 400 at 8 MeV; 400 at 24 kHz |
| magsail-AM photon | — | 0.4 | 0.0166 | 20,000 | 2,000 at 32 MeV; 2,000 at 48 kHz |

- U/c: exhaust velocity as a fraction of lightspeed
- V/c: ship velocity as a fraction of lightspeed
- A (g's): time-averaged acceleration in Earth gravities
- Pjet (TW): exhaust jet power in terawatts
- Prad (TW): radiated power in terawatts, with its characteristic emission band
It can be seen in Table 2 that certain potential starship propulsion systems, notably the very high power antimatter drives, emit vast quantities of energy in the form of gamma rays. The problem with detecting such radiation, however, is that since each gamma ray carries a large amount of energy, the number of photons from even a very high powered starship at a characteristic interstellar distance impacting each square meter of collection area is quite small, and thus extremely difficult to distinguish from instrument noise and background radiation. For example, a starship emitting 10,000 TW of 200 MeV gamma radiation at a distance of 1 light-year from Earth will cause 7.5 photons per year to impact on a 1 square meter collection device. This would obviously be undetectable.
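That photons-per-year figure is simple inverse-square arithmetic; a sketch that reproduces its order of magnitude:

```python
import math

# Photons per m^2 per year from a 10,000 TW source of 200 MeV gammas at 1 ly.
EV = 1.602e-19            # joules per electron-volt
LY = 9.461e15             # meters per light-year
YEAR = 3.156e7            # seconds per year

power = 1e16              # 10,000 TW, in watts
e_photon = 200e6 * EV     # 200 MeV per photon, in joules
rate = power / e_photon   # photons emitted per second

flux = rate / (4 * math.pi * LY ** 2) * YEAR
print(flux)  # a handful of photons per m^2 per YEAR, same order as the text's 7.5
```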
Because their characteristic energies are about 4 orders of magnitude less than gamma rays, starship X-ray emissions offer some promise of increasing the photon count to detectable levels. On the other hand, the magnitude of the power source on those types of propulsion systems that emit X-rays appears to be about 2 orders of magnitude less than those characteristic of antimatter drives. A starship at 1 light-year emitting X-rays at a rate of 1 TW/keV (i.e., 50 TW at 50 keV, etc.) would impact a collection device in Earth orbit at a rate of about 0.02 photons per hour, which would still be statistically undetectable. However, a portion of the X-ray emissions would be less than 2 keV, and such radiation could be concentrated by an X-ray grazing incidence telescope. If such a telescope could be constructed that would focus a 1 m diameter aperture down to a 1 cm diameter collection area, then a 1 TW/keV (2 TW at 2 keV) source at 1 light-year would cause about 160 impacts per hour on the collection plate, which would be statistically detectable. One light-year is not a very great detection distance capability, however. At 10 light-years, a neighborhood within which there are less than a dozen target stars, the impact rate would be about 2 per hour, and the signal would fade into the noise background.
While the gamma ray emissions from the engine of an antimatter photon rocket would be undetectable, the visible radiation composing its exhaust is another story. If we consider the sample photon rocket in Table 2, with a jet power of 120,000 TW, and assume that it uses a reflective nozzle to focus the emitted light to a half angle of 30 degrees, then it will shine in the direction of its exhaust with an effective irradiated power of 1,800,000 TW. Such an object at a distance of 1 light-year would be seen from Earth as a 17th magnitude light source, and could be detected on film by a first class amateur telescope. The 200 inch telescope on Mount Palomar could image it at 20 light-years, and the Hubble Space Telescope at a distance of about 300 light-years. Curves of apparent magnitude versus distance are shown in Figure 1, as are the number of stellar systems (N = R³/80) within range. Since, at least for the upper-end telescopes considered, the number of stellar systems within range is significant (100,000 stars are within 200 light-years of Earth), this approach offers some hope for a successful search. The light from the photon rocket could be distinguished from that of a dim star by the lack of hydrogen lines in the rocket's emissions.
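Both the beaming gain and the apparent magnitude can be checked. A sketch that converts the beamed flux to a magnitude by comparison with the Sun (using bolometric flux, which lands within about a magnitude of the text's 17th-magnitude figure):

```python
import math

# 120,000 TW focused into a 30-degree half-angle cone, seen from 1 light-year.
P_JET = 1.2e17                 # jet power, W
LY = 9.461e15                  # meters per light-year
SUN_MAG = -26.74               # apparent magnitude of the Sun
SUN_FLUX = 1361.0              # solar flux at Earth, W/m^2

half_angle = math.radians(30)
cone_sr = 2 * math.pi * (1 - math.cos(half_angle))   # solid angle of the cone
p_eff = P_JET * (4 * math.pi / cone_sr)              # effective isotropic power
print(p_eff / 1e12, "TW")      # ~1,800,000 TW, matching the text

flux = p_eff / (4 * math.pi * LY ** 2)               # W/m^2 at 1 ly
mag = SUN_MAG - 2.5 * math.log10(flux / SUN_FLUX)
print(round(mag, 1))           # roughly 17th-18th magnitude
```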
Radio waves may be emitted from a starship as a result of plasma interaction with either the magnetic confinement field of a plasma drive engine or the deceleration field of a magnetic sail. Plasma drive engines produce electron and ion cyclotron radiation with frequencies on the order of a hundred GHz and a hundred MHz, respectively. Magsails produce electron cyclotron radiation with frequencies of tens of kHz and ion cyclotron radiation with frequencies of tens of Hz. The frequency of plasma drive engines is thus high enough to penetrate the Earth's ionosphere and be detected on the ground by radio telescopes, while magsail radiation is below the cutoff frequency and can only be detected by antennas positioned in space.
The signal-to-noise ratio (SNR) of a radio receiver is given by:

SNR = P·Ar / (4πR²·B·k·T)

where Ar is the area of the receiver antenna, R is the distance from the source, P is the radiated power of the source, B is the bandwidth, k is the Boltzmann constant, and T is the absolute temperature of the receiver. Since plasma confinement fields and magsail fields both vary by factors greater than 2 over the region of plasma contact, the bandwidth required to capture a large percentage of the signal must be a sizable fraction of the signal's peak frequency. Thus with the same size antenna and power source, magsail radiation can produce an SNR six orders of magnitude greater than that possible from a plasma drive. Furthermore, since the low frequency magsail radiation has very long wavelengths (12 kHz = 25 km wavelength), huge collection areas can be created with very little mass by orbiting dishes or antennas made of sparsely placed wires or crossed tethers. For these reasons, magsail cyclotron radiation will be much easier to detect than that from plasma engines.
If we assume an SNR of 2 and a bandwidth of 1 kHz, sufficient to capture a significant fraction of the electron cyclotron radiation emitted by a magsail (we assume 10% of the emitted electron cyclotron radiation falls within this bandwidth), and orbiting antennas with effective equivalent radii of 6 km and 30 km respectively, then the power that needs to be emitted by a magsail for it to be detectable from Earth is shown in Figure 2.
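Inverting the SNR equation gives the detection range, R = sqrt(P·Ar / (4π·SNR·B·k·T)). A sketch under the stated assumptions; the excerpt does not give the receiver noise temperature, so the 200 K used here is our own assumption (which happens to reproduce the quoted ranges):

```python
import math

K = 1.38e-23        # Boltzmann constant, J/K
LY = 9.461e15       # meters per light-year

def range_ly(p_in_band, antenna_radius_m, snr=2, bandwidth=1e3, t_noise=200):
    """Detection range for in-band power p_in_band, from SNR = P*Ar/(4*pi*R^2*B*k*T)."""
    area = math.pi * antenna_radius_m ** 2
    r = math.sqrt(p_in_band * area / (4 * math.pi * snr * bandwidth * K * t_noise))
    return r / LY

# Magsail-fusion row of Table 2: 80 TW of cyclotron radiation, 10% in band.
print(range_ly(0.1 * 80e12, 6e3))    # a few hundred ly (cf. 400 ly in the text)
# Magsail-AM photon row: 2,000 TW, 10% in band.
print(range_ly(0.1 * 2e15, 6e3))     # ~2,000 ly, as quoted for the 6 km antenna
```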
It can be seen that the magsail radiation of a characteristic fusion starship being decelerated from a cruise velocity of 0.1c could be detected by a 6 km orbiting antenna from a distance of 400 light-years, while that emitted by a characteristic antimatter photon rocket in its deceleration phase could be seen as far away as 2,000 light-years. There are about 100,000,000 stellar systems to be found within the latter distance. This extended range detection capability combined with magsail radiation’s unique time-dependent frequency spectrum appears to make a search for magsail radiation the most promising option for extraterrestrial starship detection.
It may be noted that our estimate of starship mass is a speculative guess. However, since the signal detectability is proportional to the mass of the ship divided by the square of the distance, a decrease of ship mass by two orders of magnitude only results in a decrease in detectability distance by one order of magnitude. Thus even if the true characteristic mass of starships is 10,000 tons, and not the 1,000,000 tons we have postulated as a baseline, magsail radiation would still be detectable by the 6 km antenna at 40 light-years, and by the 30 km antenna at 200 light-years.
We have considered a variety of potential technologies that may be in current use by advanced extraterrestrial civilizations for interstellar propulsion, and find that of those considered, the ones most likely to be detectable are photon rockets and magnetic sails. Photon rockets could be detected by currently existing orbital equipment at distances of several hundred light-years, while magsails could be detected by near-term deployable orbital equipment at several thousand light-years. Both photon rockets and magnetic sails would emit a signal that could easily be distinguished from natural sources. We therefore conclude that the detection of extraterrestrial civilizations via the spectral signature of their spacecraft is possible in principle and recommend that the approach be studied further.
If the starship is traveling at more than about 14% of the speed of light (V/c = 0.14 in table) you will have to start worrying about relativity. Gamma (γ) is the relativistic factor.
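The relativistic factor is γ = 1/√(1 − (v/c)²); at 0.14 c it is about a 1% correction, which is roughly where it starts to matter:

```python
import math

def gamma(v_over_c):
    """Lorentz factor for a given speed as a fraction of c."""
    return 1 / math.sqrt(1 - v_over_c ** 2)

print(gamma(0.14))  # ~1.01: about a 1% correction
print(gamma(0.4))   # cruise speed of the antimatter photon rocket in Table 2
```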
The paper Observational Signatures of Self-Destructive Civilisations is about patterns of sensor readings that will detect the remains of annihilated civilizations. These could be the gravesite of Forerunners, with the promise/threat of valuable and/or civilization-killing paleotechnology.
The paper is focused on refining the value of the "L" factor in the famous Drake equation, the average lifetime of a technological civilization. So the paper estimates some scenarios where a planetary civilization can destroy itself, and tries to figure out what their sensor signatures are. Then astronomers can see if they can spot any.
If they see lots, it might mean L is quite short. Which would mean that the L for our civilization is likely to be quite short as well.
But for our science-fiction writing purposes, keep in mind that many of these sensor signatures will also work if a civilization has been exterminated by external alien invaders.
…The aim of this paper is to use the Earth as a test case in order to categorise the potential scenarios for complete civilisational destruction, quantify the observable signatures that these scenarios might leave behind, and determine whether these would be observable with current or near-future technology.
The variety of potential apocalyptic scenarios is essentially limited only in scope by imagination and in plausibility by our current understanding of science. However, the scenarios considered here are limited to those that: are self-inflicted (and therefore imply the development of intelligence and sufficient technology); are technologically plausible (even if the technology does not currently exist); and totally eliminate the (in the test case, human) civilisation.
Only a few plausible scenarios fulfil these criteria:
- complete nuclear, mutually-assured destruction
- a biological or chemical agent designed to kill either the human species, all animals, all eukaryotes, or all living things
- a technological disaster such as the “grey goo” scenario, or
- excessive pollution of the star, planet or interplanetary environment
Other scenarios, such as an extinction level impact event, dangerous stellar activity or ecological collapse could occur without the intervention of an intelligent species, and any signatures produced in these events would not imply intelligent life…
2.1 Nuclear Annihilation
Current estimates of nuclear weapons held around the world are of the order of 6 million kilotonnes (kt) (2.5 × 10^19 J)…
…Nuclear weapons produce a short, intense burst of gamma radiation with a characteristic double peak over several milliseconds. These gamma flashes could be detected using the same techniques as for the detection of gamma ray bursts (GRB)…
…Given that the world's nuclear arsenal is equivalent to around 10^19 J of energy, the resulting radiation from its combined detonation would be much fainter than a typical GRB. If we assume that the energy is released on a similar timescale and with a similar spectrum to a GRB, a nuclear apocalypse is equivalent in bolometric flux to a GRB detonating around a trillion times closer than its typical distance. If we take a nearby GRB such as GRB 980425, which is thought to have detonated around 40 Mpc away, then we would expect a global nuclear detonation event to produce a similar amount of bolometric flux only 8 AU away!
Therefore, for us to be able to detect nuclear detonation outside the Solar system, the total energy of detonation must be at least nine orders of magnitude larger, i.e. the ETIs responsible for the event must engage in massive weapon proliferation and concurrent usage.
However, the production of fallout from terrestrial size payloads, which persists for much longer timescales, may make itself visible in studies of extrasolar planet atmospheres.
For the purposes of estimating fallout, the weapon impacts are assumed to be evenly distributed across the entire land area of the planet (1.5 × 10^8 km²). This gives an equivalent of approximately one 25 kt (10^11 J) weapon per square kilometre of land area. This is of the same order of magnitude as the weapon used in the Semipalatinsk Nuclear Test, for which the effects of radioactive fallout were measured over time. However, given the local climatic conditions at this site (which were very windy) and the fact that our estimates include nuclear events every square kilometre, the effects are likely to be much worse than the results of this test. From measurements of soil at a town near the test site and modelling of radionuclide decay chains, the dose rate due to fallout from the weapon test (not the dose from the blast itself) was shown to begin around 10^3 microgray/hour, decaying to background levels after around 100 days.
Fallout products of fusion weapons are typically non-radioactive, though they do produce a low yield of energetic protons and electrons. Most fallout products from fission weapons are beta emitters and decay to other beta-emitting isotopes. Some radioisotopes produced by fission weapons are gamma emitters, but these have short half-lives. Ignoring the effects on the health of humans or other lifeforms (which would be severe), the deposition of a large amount of beta-radioactive material into the atmosphere would have a significant effect on atmospheric chemistry and would quickly ionise many atmospheric species, with high altitude nuclear tests increasing local electron density several times over. This would give ionised air the distinct blue or green glow of nitrogen and oxygen emission. Given that spacecraft and Earth-based telescopes have detected (faint) nighttime airglow on Venus and Mars, it may be possible to measure what would be considerably brighter airglow features on exoplanets, given that the order of magnitude increase in electron density caused by a nuclear war would generate an order of magnitude increase in airglow brightness. The brightest airglow feature in the visible spectrum on an Earthlike exoplanet would be the green oxygen line at 558 nm, which would be enhanced by global nuclear war to a photon flux of up to 1400 rayleighs.
IR emission from exoplanets in their secondary eclipse phase has been measured by space-based telescopes, so in theory these measurements could be extended into the visible part of the spectrum in future, though this would require exquisite precision in our knowledge of the host star's properties, and would most likely be dominated by reflected light from the planet itself, especially in the blue-green spectral region. A tenfold increase in brightness at 558 nm would potentially be observable with only a modest increase in sensitivity over instruments observing exoplanets today, especially since the airglow maximum occurs well above the tropopause and would therefore be observable above even a very cloudy planet. Airglow caused by fallout products would last for several years before decaying to unobservable levels.
The thermal effects of nuclear explosions also affect atmospheric chemistry. For every kilotonne in yield, approximately 5000 tonnes of nitrogen oxides are produced by the blast itself. Blasts from higher yield weapons will carry these nitrogen oxides high into the stratosphere, where they are able to react with and significantly deplete the ozone layer. Ozone can be detected in the ultraviolet transmission spectrum of an exoplanet, as can other oxygen molecules, and so the disruption of an exoplanetary ozone layer presents another potential observational signature.
Global nuclear war therefore potentially offers several spectral signatures that could be observed: a gamma flash, followed by UV/visible airglow and the depletion of ozone signatures. However, the aftermath of a global nuclear war will also act to obscure these spectral signatures. Groundburst nuclear explosions generate a significant amount of dust that will be lofted into the atmosphere. Airburst explosions do not generate dust, but still introduce particulates into the atmosphere. Atmospheric effects of nuclear warfare have been extensively modelled in climate simulations, the global consequences being known as “nuclear winter”. Recent simulations have shown that even with reduced modern nuclear arsenals severe climate effects are felt for at least ten years after a global conflict, especially due to the long lifetime of aerosols lofted into the stratosphere. They show that the atmospheric optical depth is increased several times for several years. The worst effects are confined to the northern hemisphere given that the model includes conflict over the US and Russia, though the entire planet is affected to a lesser extent.
A nuclear winter would dramatically increase the opacity of the atmosphere. This process itself would be observable: if a planet previously observed to have a transparent atmosphere (perhaps with an Earthlike spectroscopic signature) was observed again and the atmosphere was now opaque, this would be a sign of a large dust event. However, such an event could also be caused by a large impact and therefore would not imply a civilisation had caused the disaster (though it would be interesting in itself). If the atmosphere had not been observed before the event, it would simply seem that the planet had an extremely dusty atmosphere. What would be crucial is measuring the relative change in the atmosphere as a result of nuclear detonation, hopefully with the added bonus of identifying a weak gamma ray or other high energy emission in the vicinity of the planet.
Hence, to confirm that a planet had been subject to a global nuclear catastrophe would require the observation of several independent signatures in short succession. One on its own is unlikely to be sufficient, and could easily be caused by any number of other processes on planets with potentially no biological activity whatsoever. There are also catastrophes beyond a global nuclear exchange that a spacefaring civilisation might inflict on itself, given that the destructive energy at its disposal would be far greater than that of nuclear weapons; redirecting asteroids is one example. These would be far more destructive than nuclear warfare but would generate observable signatures different from those of a naturally occurring impact event.
2.2 Biological Warfare
Biological warfare involves the use of naturally occurring, or artificially modified, bacteria, viruses or other biological agents to intentionally cause illness or death. The use of a naturally occurring pathogen in a global conflict would probably have a limited net effect on a global population. The destruction would be self-limiting; once a population is reduced in size, transmission from host to host would become more difficult and the epidemic eventually ends. Artificially modified or created biological agents, however, could potentially push a civilisation to extinction…
…Assuming a global conflict took place that made use of this method of warfare on a planet that hosts an intelligent civilisation, we pose the question of whether the self-destruction of that species, via this method, could be remotely observable.
If we assume that the time between the release of the engineered virus and its global spread is very short, and that the virus is potent enough that a civilisation becomes fully extinct, the environmental impacts of this scenario can be assessed. The actions of anaerobic organisms cause biomass to decay, releasing methanethiol, CH3SH (via production of methionine), as one of the products. This can be spectrally inferred and has no abiotic source. For a population with a similar biomass to the present human population (currently ~2.8x10^11 kg in terms of carbon biomass), the decay products can be estimated. Since the dry mass of a cell is approximately 50% carbon, the total human biomass would be 5.6x10^11 kg. With an estimated cell sulphur content of 0.3-1%, the maximum amount of S available to form CH3SH would be 5.6x10^9 kg. If 10% of this S is incorporated into methionine, all of which is then converted into methanethiol, this would result in a total CH3SH flux of ~10^8 kg.
At the current biological production rate on Earth, this would be released to the atmosphere over a period of a year and would rapidly photodissociate, making this a very short-lived biosignature. One of the products of the decay of methanethiol is ethane (C2H6), which can be spectrally detected but has an atmospheric lifetime under Earth-like conditions of < 1 year, leaving a short window of time for detection. Additionally, if carrion-eating species were unaffected, this would reduce the amount of organic matter available for microbial decay, further reducing the final biosignature.
However, if the engineered virus could cross species barriers, the total amount of dead biomass could be as high as 6x10^13 kg (the total animal biomass on Earth), potentially producing 10^11 kg of CH3SH, which would enter the atmosphere over a period of ~30 years. Due to its short atmospheric lifetime, this atmospheric CH3SH would likely still not produce a detectable signature. However, the associated C2H6 absorption signature between 11-13 μm may lend itself to remote detection. This signature would be deeper, and therefore more readily detectable, if the CH3SH production rate were higher.
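The order-of-magnitude arithmetic above can be reproduced in a few lines. This is a sketch of our estimate only: the molar-mass conversion from S to CH3SH is an added step, and the sulphur and methionine fractions are the upper-end values quoted above.

```python
M_S = 32.06      # molar mass of sulphur (g/mol)
M_CH3SH = 48.11  # molar mass of methanethiol (g/mol)

def ch3sh_yield_kg(dry_biomass_kg, s_fraction=0.01, methionine_fraction=0.10):
    """Estimate the total CH3SH mass released by decay of a given dry biomass."""
    sulphur = dry_biomass_kg * s_fraction           # upper-end cell S content (~1%)
    s_in_methionine = sulphur * methionine_fraction  # 10% of S ends up in methionine
    return s_in_methionine * (M_CH3SH / M_S)         # one CH3SH per methionine S atom

# Human-only case: 2.8e11 kg of carbon at ~50% of dry mass -> 5.6e11 kg dry biomass
human_case = ch3sh_yield_kg(2.8e11 / 0.5)   # ~10^8-10^9 kg of CH3SH
# Cross-species case: total animal biomass ~6e13 kg
animal_case = ch3sh_yield_kg(6e13)          # ~10^11 kg of CH3SH
```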
Other decay products include CH4, H2S, NH3 and CO2. The most promising biosignature gas for global bioterrorism is CH4. The CH4 flux to the atmosphere is related to ethane production, potentially increasing the C2H6 absorption signature… For the case where only humans can be infected, both signatures are short-lived, requiring observations to take place at exactly the right time for a detection to be made. In the case where the virus can cross species barriers, leading to the total annihilation of animal life, persistently high levels of these gases could make a detection more likely.
2.3 Destruction via ‘Grey Goo’
The terrestrial biosphere offers many examples of naturally occurring nanoscale machines. Feynman extolled the advantages of engineering at atomic scales. In Engines of Creation, Drexler described “nanotechnology” as a means of fabricating structures at nanoscales using chemical machinery. While the word now has a broader meaning, we can still consider the possibility that such a machine could be sufficiently general-purpose to be able to make a copy of itself.
Following Phoenix and Drexler we define an engineered system that can duplicate itself exactly in a resource-limited environment as a self-replicator. (NB: This strict definition excludes biological replicators, as they are not engineered). The engineers of such machines have two broad choices as to what resources the self-replicator might use: resources that are naturally occurring in the biosphere, and resources that are not. Engineers that make the former choice run the risk of a “grey goo” scenario, where uncontrolled self-replication converts a large fraction of available biomass into self-replicators, collapsing the biosphere and destroying life on a world. This may be an accident or failure of oversight, or it may be due to a deliberate attack, where the replicators are specifically designed to destroy biomass (what Freitas refers to as “goodbots” and “badbots” respectively). In Engines of Creation, Drexler notes: “Replicators can be more potent than nuclear weapons: to devastate Earth with bombs would require masses of exotic hardware and rare isotopes, but to destroy all life with replicators would require only a single speck made of ordinary elements. Replicators give nuclear war some company as a potential cause of extinction, giving a broader context to extinction as a moral concern.”
Freitas places some important limitations on the ability of replicators to convert the biosphere into “grey goo” (land-based replicators), “grey lichen” (chemolithotrophic replicators), “grey plankton” (ocean-borne replicators) and “grey dust” (airborne replicators). With conservative estimates based on contemporary technology, it is suggested that if the replicators are carbon-rich, around a quarter of the Earth’s biomass could be converted in as little as a few weeks. Equally, Freitas estimates the energy dissipated by carbon conversion, implying that the resulting thermal signatures (local heating and local changes to atmospheric opacity) would be sufficient to trigger local defence systems to combat gooification. For example, in the case of malevolent airborne replicators, a possible defensive strategy is the deployment of non-self-replicating “goodbots” which unfurl a dragnet to remove them from the atmosphere.
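For intuition, the few-week conversion figure is consistent with simple exponential doubling. The seed mass, total biomass, and doubling time below are illustrative assumptions of ours, not Freitas’s (whose limits derive from energy dissipation and material transport, not raw doubling):

```python
import math

SEED_KG = 1.0                  # assumed initial replicator mass: a single ~kg "speck"
BIOMASS_KG = 4e15              # assumed global biomass, order of magnitude
TARGET_KG = 0.25 * BIOMASS_KG  # a quarter of the biosphere

def conversion_time_days(doubling_time_hours):
    """Days for an exponentially doubling replicator population to reach the target."""
    doublings = math.log2(TARGET_KG / SEED_KG)  # ~50 doublings from 1 kg to 1e15 kg
    return doublings * doubling_time_hours / 24.0

t_days = conversion_time_days(24.0)  # ~50 days with a one-day doubling time
```

Even with a leisurely one-day doubling time, conversion completes in weeks rather than years, which is why resource transport and heat dissipation, not replication speed, set the practical limits.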
Phoenix and Drexler emphasise that all these variants of the grey goo scenario are easily avoidable, provided that engineers design wisely (and that military powers exercise restraint). Indeed, they indicate that fully autonomous self-replicating units are not likely to be the most efficient design choice for manufacturing, and that having a central control computer guiding production is likely to be safer and more cost-effective. Provided that the control computer is not separated by distances large enough to introduce time-lag, as would be the case on interplanetary scales, this seems to be reasonable.
However, this still leaves the risk of replicator technology being weaponised. We will assume, as we do throughout this paper, that prudence is not a universal trait in galactic civilisations, and that grey goo is a potential death channel that might be detected.
So what signatures might a grey goo scenario produce? If a quarter of the Earth’s biomass were converted into micron-sized objects, how would this affect spectra of Earth-like planets? This situation shares several parallels with the nuclear winter scenario described previously. In the case of grey goo, we may expect a substantially larger amount of “dust”, as well as a fixed grain size. This material would be deposited as sand dunes or suspended in the atmosphere, with spectral signatures similar to those previously discussed.
Depending on the grain size of the dunes, it may be possible to observe a brightness increase as the angle measured by the observer between the illumination source (the host star) and the planet decreases towards zero on the approach to secondary eclipse.
Surfaces composed of a large number of relatively small elements packed together will produce significant shadowing. This shadowing increases as the angle between the surface and an illumination source increases. As the angle decreases towards zero, these shadows disappear, resulting in a net increase in brightness. This is sometimes described as the opposition surge effect, or the Seeliger effect after Hugo von Seeliger, who first described it. Seeliger observed this shadow-hiding mechanism in Saturn’s rings, which grow brighter at opposition relative to the planetary disc. Coherent backscattering of light also plays a role in this brightening effect.
This phenomenon is observed in the lunar regolith, so it seems reasonable to expect that it would also act in artificially generated regoliths such as those we might expect from a grey goo incident. During exoplanet transits, it may be possible to detect an increase in the brightness of the system as the planet enters secondary eclipse. The Moon’s brightness increases by around 40% as it moves towards the peak of opposition surge, so it may well be the case that grey-goo planets produce opposition surges of similar magnitude. Buratti notes that the wavelength dependence of the surge is relatively weak, which suggests that near-IR observations may be sufficient to observe this phenomenon.
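A toy model of the surge can be written using the standard Hapke-style shadow-hiding term. The angular-width parameter h below is an illustrative assumption; the zero-phase amplitude is set to the ~40% lunar figure quoted above.

```python
import math

def surge_factor(alpha_deg, B0=0.4, h=0.05):
    """Relative brightness at phase angle alpha (degrees) due to shadow-hiding.
    Hapke-style opposition term: B(alpha) = B0 / (1 + tan(alpha/2) / h)."""
    alpha = math.radians(alpha_deg)
    return 1.0 + B0 / (1.0 + math.tan(alpha / 2.0) / h)

at_opposition = surge_factor(0.0)   # 1.4: the full ~40% surge at zero phase
at_ten_deg = surge_factor(10.0)     # surge is largely gone a few degrees off zero
```

The narrow angular width of the surge is what ties the signal to the approach to secondary eclipse, when the star-planet-observer phase angle passes through zero.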
On what timescale might we expect this artificial nano-sand to persist on a planetary surface? If the planet has an active hydrological cycle, airborne replicators will be incorporated into precipitation and delivered to the planet’s surface. The Earth’s Sahara desert transports of order a billion tonnes of sand per year. Deposition into rivers and streams may deliver the material to oceans, and eventually the seabed, effectively removing it from view at interstellar distances. This material will be subducted into the mantle and reprocessed on geological timescales, removing all trace of engineering. Using Freitas’s estimate of available biomass, and assuming the nano-sand can be processed out of view at a few billion tonnes per year (which we propose as an upper limit), a goo-ified planet may require several thousand years to refresh its surface. Other physical processes may accelerate or impede this rate, but it appears that goo-ified planets remain characterisable as such over timescales comparable to that of recorded human history.
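The residence-time estimate is a single division; the global biomass figure below is an assumed order-of-magnitude value rather than Freitas’s exact number:

```python
TOTAL_BIOMASS_KG = 4e15        # assumed global biomass, order of magnitude
GOO_FRACTION = 0.25            # quarter of the biosphere converted to nano-sand
REMOVAL_RATE_KG_PER_YR = 1e12  # ~a billion tonnes/year, Sahara-like transport rate

residence_years = GOO_FRACTION * TOTAL_BIOMASS_KG / REMOVAL_RATE_KG_PER_YR
# of order 10^3 years; a few-billion-tonne rate shortens this,
# slower geological processing lengthens it
```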
2.5 Total Planetary Destruction
Finally, it is not inconceivable that a civilisation capable of harnessing large amounts of energy could unbind a large fraction (or all) of a planet’s mass. Kardashev Type II civilisations wishing to build a Dyson sphere require this capability to generate raw materials for the sphere — it is estimated that to create a Dyson sphere in the Solar System with radius 1 AU would require the destruction of Mercury and Venus to supply sufficient raw materials.
Equally, civilisations with access to this level of energy control and manipulation may decide to use it maliciously, destroying large parts of a planetary habitat while it is still occupied, and in the extreme case destroying the planet completely. This would release a significant fraction of the planet’s gravitational binding energy.
The Earth’s binding energy is of order 10^39 ergs. This is again several orders of magnitude fainter than a typical supernova or GRB at 10^51 ergs, but is large compared to the solar output: the Sun would require several days to radiate the same quantity of energy. This would likely produce a gamma-ray signature even stronger than that expected from the nuclear winter scenario described previously, and we may expect afterglows similar to those observed in other astrophysical explosions.
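These figures follow from the gravitational binding energy of a uniform-density sphere, U = 3GM²/5R; a quick check with standard constants:

```python
G = 6.674e-11        # gravitational constant (m^3 kg^-1 s^-2)
M_EARTH = 5.972e24   # Earth mass (kg)
R_EARTH = 6.371e6    # Earth radius (m)
L_SUN = 3.828e26     # solar luminosity (W)

U_joules = 3 * G * M_EARTH**2 / (5 * R_EARTH)  # binding energy, uniform sphere
U_ergs = U_joules * 1e7                        # ~2e39 erg: "of order 10^39 ergs"

days_to_radiate = U_joules / L_SUN / 86400.0   # ~7 days of total solar output
```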
The destruction of an orbiting body will produce a ring of debris around the central star, in a manner analogous to the production of rings when solid bodies cross the Roche limit of a larger body.
The subsequent evolution of this material will be similar to that of debris discs. The remnants of the planet formation process, debris discs have been detected around a variety of stars, and the behaviour of grains of differing sizes under gravity and radiation pressure has been modelled in detail. If a terrestrial planet has been destroyed, the debris will likely be principally composed of silicates, and as such any detection of refined or engineered materials is unlikely, even if such matter survives the planet’s demise untouched.
The fate of the material depends largely on the local gravitational potential and the local radiation field, as well as the grain size distribution of the debris. Grains below the “blowout” size — typically a few microns — will be removed from the system via radiation pressure. Neighbouring planets may collect some of the remaining debris in resonances while the debris grinds into material of sufficient grain size that it either loses angular momentum through Poynting-Robertson drag and is consumed by the central star or a neighbouring planet, or gains momentum through radiative forces and is removed from the system.
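The blowout size follows from balancing radiation pressure against gravity. For idealised, perfectly absorbing grains in the geometric-optics limit, β = 3L/(16πGMcρs), and blowout occurs for β > 0.5. This crude sketch (the silicate grain density is an assumed value) lands at a few tenths of a micron for a Sun-like star; realistic grain optical properties push the figure toward the micron scale quoted above.

```python
import math

G = 6.674e-11          # gravitational constant (m^3 kg^-1 s^-2)
C = 2.998e8            # speed of light (m/s)
L_SUN = 3.828e26       # solar luminosity (W)
M_SUN = 1.989e30       # solar mass (kg)
RHO_SILICATE = 3000.0  # assumed silicate grain density (kg/m^3)

def blowout_radius_m(L=L_SUN, M=M_SUN, rho=RHO_SILICATE):
    """Grain radius at which beta = 0.5 for a perfectly absorbing sphere."""
    return 3.0 * L / (8.0 * math.pi * G * M * C * rho)

s_blow = blowout_radius_m()  # ~4e-7 m around a Sun-like star
```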
In any case, this death channel does not appear to be amenable to detection by Earth astronomers. If we are fortunate enough to witness the instant of destruction, we may be able to estimate the energies released in the event and search for a natural progenitor of such energy, i.e. another celestial body. Giant impacts between planet-sized bodies produce the energies required to unbind or destroy one of the objects, as was the case for the impact which formed the Earth-Moon system. If such efforts fail, and no other explanation fits the observations, then we may tentatively consider extraterrestrial foul play.
The timescale for observing destruction as it happens will be short — perhaps a few days. The debris can be expected to persist for several centuries, but observing this is unlikely to elucidate its origins as a destroyed planet.
Prospects for Observing Civilisation Destruction

| Death Channel | Detection Method | Detectable? | Signature timescale |
|---|---|---|---|
| … | … | Y Y | 0-5 years |
| Bioterrorism | Transit spectroscopy | Y Y | 1-30 years |
| Grey Goo | Transit spectroscopy | N Y | >1,000 years |
| Stellar Pollution | Asteroseismology, … | Y Y | >100,000 years |
| … | … | Y Y | 10-100,000 years |
| … | … | Y Y | <100,000 years |
| Total Planetary Destruction | Debris Disk Imaging | Y Y | <100,000 years |