This page is for the generalized impact of technological advances on society. The next page focuses on specific technologies.
Technology: the progressive's dream and the reactionary's nightmare. Advances in technology have been creating upheavals in society all the way back to the start of the Bronze Age and further.
But things really shifted into high gear with the Industrial Revolution. Technology started industrializing the United States around 1790, changing it from an agricultural society into a manufacturing society.
Things seemed to settle down until some clown invented television and the inhabitants of sleepy little United States towns had their minds blown by the realization that people who lived in other places were (gasp) different! That was just awful! Well, actually it was just change, which was bad enough to these folk. They didn't even notice the real problem: technology was starting the transformation of the United States from a manufacturing society into a service society.
The reactionaries started freaking out with the exodus of good-paying jobs from rural areas into the cities. And they started foaming at the mouth when technology gave us the internet.
Understand that the angry reactionaries are not just hicks and rubes living in the sticks. Many of them are very rich and sophisticated people who happen to be buggy-whip magnates and are upset that the basis of their wealth just evaporated.
The point is, if you the science fiction writer postulate lots of technological advances in your novels, you must at least pay lip service to the sad fact that they will make a sizable segment of your society very angry.
On the other tentacle, progressives will find things bewildering as well.
As of this writing (2017) a person in their 50s will find much about current life to be quite different from when they were young. Nowadays land-line telephones are increasingly rare while mobile cell phones are proliferating. Children do not understand technologies like printed encyclopedias and telephone directories. For us old geezers a "computer" is a box with a monitor and a keyboard; increasingly, a computer is a smartphone. Jerry Pournelle predicted that in the far future people could use something like an internet to find answers to their questions, but failed to predict that people would be angry if the answer took longer than three seconds to appear (drat, Google is slow today). There are even jobs that did not exist a couple of decades ago (search engine optimization expert?).
Funny thing about society in general and people in specific. Back in the 1750's this new thing called "Science" really started coming into its own. It was amazing the things it could discover, and so many of them with marvelously practical uses! It seemed like there was nothing science could not do. Science was going to bring us to a grand and glorious Utopian future. Even now there is some nostalgia for this view, the technical term is "Retro-Futurism".
This all turned to worms in the early 1900's. Suddenly science revealed its dark side. Science unleashed unspeakable horrors, there were things man was not meant to know, and one started to see more and more dystopias in science fiction literature.
Science didn't change, it can't. The change was in the attitude of society.
So what happened? Yes, I know most of you suddenly shouted "The invention of the atom bomb, you moron!". BZZZT! You're wrong, thank you for playing. It was already in full swing long before 1945. So what's the answer?
I believe that master science fiction author and science explainer Isaac Asimov has the answer. He wrote about it in a 1969 essay entitled The Sin of the Scientist (collected in The Stars In Their Courses). He was speculating on what a "scientific sin" would be. Turns out it would be an act that would blacken the very name of science itself.
A milder but more tech-hostile form of this comes from powerful people whose basis of power is threatened with technological disruption. If you are an ultra-rich oil baron for whom petroleum is the basis of all your wealth and power, you are going to fight the solar power industry like you were a cornered wolverine. Just try to find a CEO of a telephone-directory, newspaper, encyclopedia, or magazine publisher who has anything nice to say about the advent of the internet. All of those publishers are rapidly going bankrupt.
Such powerful people want the status quo ante, thank you very much. Not for deep-seated psychological reasons, it is just about the money. They will use every tool at their disposal. Everything from buying all the rights to the tech and suppressing it, to forcing their bribed politicians to pass laws outlawing the disruptive technology. Remember all those urban legends about the guy who invented an automobile that would run on water, and how he mysteriously vanished never to be seen again? Most likely a legend, but doesn't it seem all too possible that a corporation would send a stealth team of elite assassins to kill the researchers developing the technology and burn all the research notes?
On the other hand there are 'powerful people' wannabes who hope to seize power by exploiting a new disruptive technology. They are more or less at war with the status quo group. Examples include Steve Jobs, Bill Gates, and Elon Musk, not to mention every corporation that has made its profits skyrocket by utilizing this new thing called "the internet."
Science fiction writers sometimes use this as a plot idea. Indeed, the oil industry's fight against solar power was predicted in Robert Heinlein's short story "Let There Be Light" (1940). In the story the two protagonists Douglas and Martin prevail over the Power Syndicate. On a cynical note, Heinlein made a time-line on which to place all his stories and characters, and on that time-line I noticed that Douglas and Martin died on the same day. I suspect that they were assassinated in revenge by the Power Syndicate.
Another science fiction example of disruptive technology used to destroy a corrupt establishment can be found in Gilpin's Space by Reginald Bretnor.
Eccentric but brilliant scientist Saul Gilpin invents a magic hyperspace faster-than-light propulsion system / antigravity surface-to-orbit gadget which can be cobbled together from parts available from your local hardware store. He mounts it on a submarine and has instant starship. Then he and the submarine depart for parts unknown.
This makes the totalitarian government very unhappy. They want to use this technology, but they do not want citizens getting their hands on it. It makes it far too easy to escape the totalitarian state. Then they find out that Gilpin has mailed blueprints of the gadget to quite a few people. Hilarity ensues.
Technological Unemployment is when a machine steals your job.
(But the term "sabotage" did not come from Luddites tossing their wooden clog sabots into the machinery. That is not supported by the etymology. I don't care what Lt. Valeris said in Star Trek VI. It is a common story, though.)
Anyway the economists will assure you that history proves there is nothing to worry about. Yes there will be some short-term pain as all the buggy-whip making jobs vanish, but in the long-term the march of technology will create more new jobs than were originally lost. Believing otherwise means you are an economic ignoramus making the mistake of falling for the Luddite Fallacy.
But around 2013 more and more economists became alarmed that this time it was different.
Up until now, machines were taking away jobs by replacing human strength. Now they were taking away jobs by replacing human intellect. Yuval Harari said “Humans only have two basic abilities — physical and cognitive. When machines replaced us in physical abilities, we moved on to jobs that require cognitive abilities. ... If AI becomes better than us in that, there is no third field humans can move to.”
It started slow. Personal computers with word-processing software drastically reduced the number of secretarial jobs. Income tax preparation software drastically reduced the number of tax preparation companies. Currently many fast food franchises are replacing food preparation workers with robots.
But that's OK said the economists. The displaced workers just need some more education so they can find jobs which have not been computerized yet. And they will be higher paying jobs, just you wait and see!
The economists got a rude shock when computers started taking away high-education jobs. That wasn't supposed to happen. It was also a chilling wake-up call to those with high-education jobs who had been smugly saying their jobs were safe.
For example, a new company called Enlitic applied Google's deep learning software TensorFlow to the task of diagnosing lung cancer by examining lung CT scans. They easily trained the software to do the work. Then they did a test where a panel of four of the world's top human radiologists competed with the software. The results were dramatic. The human radiologists had a false positive rate (incorrectly diagnosing cancer) of 66%. The software had a false positive rate of only 47%. What is worse, the human radiologists had a false negative rate (missing a cancer diagnosis) of 7% while the software had a false negative rate of zero.
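For readers who want the percentages above made concrete: false positive and false negative rates are just ratios over a confusion matrix. Here is a minimal Python sketch, using hypothetical case counts (100 healthy scans, 100 cancerous scans) invented purely to reproduce the reported rates; only the formulas are standard.

```python
# Standard confusion-matrix error rates.
# The case counts below are hypothetical illustrations, not Enlitic's data.

def false_positive_rate(fp, tn):
    """Fraction of healthy scans incorrectly flagged as cancer."""
    return fp / (fp + tn)

def false_negative_rate(fn, tp):
    """Fraction of cancerous scans the reader missed."""
    return fn / (fn + tp)

# Hypothetical panel of 100 healthy and 100 cancerous scans:
human_fpr    = false_positive_rate(fp=66, tn=34)   # 0.66, like the radiologists
software_fpr = false_positive_rate(fp=47, tn=53)   # 0.47, like the software
human_fnr    = false_negative_rate(fn=7,  tp=93)   # 0.07
software_fnr = false_negative_rate(fn=0,  tp=100)  # 0.0

print(human_fpr, software_fpr, human_fnr, software_fnr)
```

Note that the two rates trade off against each other: what made the result dramatic is that the software beat the humans on both at once.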
Which means that once Enlitic trains their software on the other diseases, human radiologists will suddenly find themselves out of a job. The software will be cheaper than a radiologist's salary ($286,000/year), and can work 24-7. OK Mr. Economist, what sort of education would you suggest so these suddenly unemployed radiologists can find a better-paying job? Preferably a job that will NOT become lost to computer software before they even complete their education.
Such software is also making inroads into stealing jobs like writing sports stories, journalism, computer programming, sewing garments, marketing, money management, writing legal briefs, and doing the work of junior lawyers by sorting through previous court cases and legal resources to find precedents. Not to mention financial analysts. And it is just a matter of time before general medical diagnosis falls as well.
As one commenter noted: It's not that complicated. Automation will go from freeing us up to do what we're good at, to being better than us at what we're good at. That's why it's different.
The mood among economists is becoming grim. While many still maintain that new jobs will eventually replace the vanished ones, their pronouncements are starting to sound a bit hollow. The economists who believe the jobs will not be coming back used to be a tiny minority, but a 2014 Pew Research survey revealed such economists are now more like 48%. Technology is now destroying more jobs than it creates. The Luddite Fallacy is on very shaky ground.
Oxford academics Carl Benedikt Frey and Michael A. Osborne published a study with the findings that almost half of U.S. jobs are at high risk of computerization over the next 20 years. Positions that are particularly vulnerable to automation include telemarketers, tax preparers, watch repairers, insurance underwriters, cargo and freight agents, and mathematical technicians. Driving jobs on mining sites are already being automated and long-distance truck drivers, forklift operators and agricultural drivers could be replaced within five to 10 years.
A more recent McKinsey report suggested today's technology could feasibly replace 45% of jobs right now.
And for lower-education jobs lost to automation, even if they are eventually replaced in the long term, the short term can wreck the entire US economy if the number of jobs lost is huge enough. It can be a disaster if the transition is too fast. The advent of autonomous cars and driverless trucks could put five million people in the US out of a job. The point being that the US economy cannot create five million new jobs fast enough to employ these people.
There are those who say: but what about creative jobs? A robot might be stronger and a computer might be smarter, but can they make art? The first point is if you actually think you can solve the unemployment problem by teaching the unemployed to be artists, well good luck with that. The second point is yes, computers are starting to make art.
Taken to its logical extreme, eventually there won't be any more jobs. None; everything will be done by robots and computers. Which is a problem, since in modern society one needs money in order to avoid starving to death. And there are not a lot of ways to get money without a job. Not legal ways, at any rate. The only people with money will be the ones who own the robots, have income from stocks, or are independently wealthy.
Yes, corporations that manufacture goods for sale are shooting themselves in the foot by firing all their employees and replacing them with robots. This reduces the number of potential customers (ones who have money to purchase your product at any rate). However this is a "tragedy of the commons" situation. Basically each company figures the declining number of customers is Somebody Else's Problem, not their problem. Even worse, if a company decides to virtuously hang on to their workers to maintain the number of consumers, the company will find itself at a competitive disadvantage with respect to all their evil competitors who use robots. The virtuous companies will go bankrupt from the unfair competition from the evil companies.
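The trap described above has the shape of a classic prisoner's dilemma, and that shape can be shown in a few lines. Here is a toy payoff model; every number is invented for illustration, and only the game-theoretic structure matters: automating is the dominant strategy for each individual firm, yet when every firm automates, the shrunken customer base leaves all of them worse off than if none had.

```python
# Toy payoff model of the automation "tragedy of the commons."
# All numbers are made up; only the ordering of the payoffs matters.

def profit(i_automate: bool, rivals_automate: bool) -> float:
    # If rivals automate, their fired workers can't buy anybody's products,
    # so everyone's revenue shrinks.
    revenue = 100.0 if not rivals_automate else 60.0
    # Robots are cheaper than employees.
    labor_cost = 10.0 if i_automate else 40.0
    return revenue - labor_cost

# Whatever the rivals do, automating pays better for the individual firm...
assert profit(True, False) > profit(False, False)   # 90 > 60
assert profit(True, True)  > profit(False, True)    # 50 > 20
# ...yet universal automation is worse for everyone than universal restraint:
assert profit(True, True)  < profit(False, False)   # 50 < 60
```

The "virtuous" firm that keeps its workers while rivals automate gets the worst payoff of all, which is exactly why virtue doesn't survive the competition.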
But the big point is any society is only three missed meals away from violent anarchy. If widespread technological unemployment increases, the problem will be solved either elegantly by government and society, or it will solve itself inelegantly by natural forces. Probably food riots and angry hungry people setting up lots of guillotines to take care of the robot owners. The French Revolution was over 200 years ago, but the situation is much the same and if we are unlucky so will be the solution. Everything old is new again.
And obviously the food riots are not going to hold off until 100% unemployment happens. They will start much sooner than that.
So what are the elegant solutions?
And there are those who say that the rich should foot the bill for a solution, telling them that this is the fee for "guillotine insurance."
But a commentator named Kalin said: The elites will share their wealth only insofar as it's cheaper to do that (bread and circuses) than it is to keep the proles at bay through force. What Marx saw as an inexorable trend towards socialism may have in fact just been a temporary consequence of the industrial revolution, wherein labor was especially important and the power of an individual worker was large in historical terms. It's not impossible to imagine a sort of "Neo Feudalism" where a small minority of elites find it cheaper to maintain control via technological force-multipliers than to share their earnings such that everyone is actually happy or nearly so.
In other words, the rich will do the math and may well discover that a private army is cheaper than funding a Basic Income.
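Since "do the math" is literal here, the comparison can be sketched as a back-of-envelope calculation. Every figure below is a made-up placeholder, not a forecast; the point is only that the choice is arithmetic, and the answer flips depending on the assumptions you plug in.

```python
# Crude cost comparison: buy off the proles, or hire guards to hold them off.
# All inputs are hypothetical placeholders for illustration.

def basic_income_cost(population: int, annual_stipend: int) -> int:
    """Total annual cost of paying every citizen a stipend."""
    return population * annual_stipend

def private_army_cost(soldiers: int, annual_pay: int,
                      equipment_per_soldier: int) -> int:
    """Total annual cost of paying and equipping a security force."""
    return soldiers * (annual_pay + equipment_per_soldier)

ubi = basic_income_cost(population=300_000_000, annual_stipend=12_000)
army = private_army_cost(soldiers=2_000_000, annual_pay=50_000,
                         equipment_per_soldier=100_000)

print(f"basic income: ${ubi:,}/yr, private army: ${army:,}/yr")
# With these placeholder numbers the army comes out an order of magnitude
# cheaper -- which is precisely the grim possibility Kalin describes.
```

Of course the real calculation would include costs that don't fit in two functions, such as what happens to the elites' own markets when nobody can buy anything, which is the tragedy-of-the-commons problem all over again.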
For dishonest people, technological advances can provide new and improved ways to steal money. A fact learned the hard way by the victims of internet fraud, since the internet was not commonly available prior to 1993.
While some crimes are just an updating of age-old scams (the Nigerian Email fraud is a re-hash of the 1800's "Spanish Prisoner" confidence trick), some are new. Such as credit card skimmers and ransomware style cyberattacks.
The point is that such cutting-edge scams require intelligent criminals, since they have to quickly learn the new technology, spot weak points that can be adapted to a scam, and implement the attack. Since such attacks seem to explode with each technological advance, the implication is that there are lots and lots of smart crooks.
One of the many ways of classifying personality types in twain is into "Neophiles" and "Neophobes." The former love and enjoy changes and new things; the latter hate and feel threatened by the same. Neophobes are hostile to any series of changes, with responses ranging from "Stop The World, I Want To Get Off" to full-blown Reactionary feelings.
And when the changes start coming faster and faster (i.e., the rate of change increases), Neophobes become more and more frantic. Which makes the current world situation a pretty dire place for Neophobes, since accelerating change is exactly what is happening. None of the Neophobe attempts to turn back the clock have any effect (generally because large corporations are making too much money exploiting the changes). At some point a given Neophobe is going to snap.
This threatens advancement along a tech tree since technological advancement is by definition a series of changes. Such technological changes always have a social impact. Just ask anybody who used to have a job on an automobile assembly line. Or people forced to be caregivers for their elderly parents who were granted longer lifespans by advancements in medical technology (welcome to the Sandwich Generation).
You also see this in any hierarchical organization, such as a corporation or an academy of science. Physicist Max Planck observed "A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it." Or more tersely "Truth never triumphs—its opponents just die out."
Thomas Kuhn wrote about much the same thing in his award-winning monograph The Structure of Scientific Revolutions.
The hidebound, fossilized, longtime, conservative members of such an organization are usually called the "Old Guard." The agile, free-thinking, new, innovative members of such an organization are usually called the "Young Turks." I remember some old aphorism about how a new idea will never become accepted until the last of the old guard opposing it has died of old age, but Google has failed me.
One of the potential problems with immortality is that the old guard never dies, which puts the brakes on progress.
The concept (and the very term itself) of Future Shock was popularized by Alvin Toffler in his 1970 book. As of this writing (2016) Toffler's book has been shown to be quite accurate by current events. There is a worryingly large segment of the population that is so oppressed by Future Shock that they apparently have undergone a psychological break, and now refuse to accept facts from science and indeed from reality in general.
Characteristically, science fiction authors have some future-shock aversion themselves, because it makes writing science fiction so much more difficult. There are many literary methods of coping.
This is when a technological advance is so powerful and destructive that some idiot will eventually use it to cause massive destruction. Things are bad enough when some human researcher stumbles over the advance, but it can be lots worse when the advance is some ultra-high-tech paleotechnology from a long-extinct Forerunner species. The classic example is the Krell technology from the movie Forbidden Planet.
The important point to note is that the technology is not bad or evil per se, only in the hands of a primitive emotional race such as human beings. Once the human race reaches maturity such technology is safe. Imagine a type of nanotechnology that can be hacked into a form that can turn the entire planet Terra into gray goo, yet simple enough to be made by a bright teenager in their parents' garage. Terra wouldn't last five minutes before it started to dissolve. Some idiot would try it, probably thousands of idiots simultaneously. Morons who want to see what happens, angry people who want to make the entire world pay, depressed people who want to really end it all, those who think such a corrupt world needs a do-over; I'm sure you can think of many others.
However, you cannot child-proof the entire universe. The long term solution is not to suppress technology, but to uplift humanity. Because suppressing technology never works in the long term. When it is steam-engine time, it is steam-engine time.
But sometimes people try. The TV Trope is Keeper of Forbidden Knowledge. An example is the colonists sent to planet Topaz in Andre Norton's The Defiant Agents. The Western Alliance and Greater Russia are both frantically sending interstellar colonists to every planet they can find. They hope to find valuable paleotechnology from the forerunner galactic empire that collapsed about ten thousand years ago. Topaz is supposed to be a Western planet but Russia infiltrates some of their own. Paleotechnology is discovered. However, both sets of colonists realize that if either the Alliance or Russia gets their hands on the alien tech, Terra will be destroyed in the resulting war. So they set themselves up as keepers of forbidden knowledge, with three colonists on each side knowing the secret, and faking the failure of the colony.
This is sort of the inverse of You Are Not Ready. Terran space explorers may run across a planet-bound, technologically primitive alien species. Naïve and idealistic explorers may succumb to the temptation to help out the aliens by giving them a few technological tips.
This is generally a very bad idea:
- The aliens' technological development will become distorted. At the least there will be horrible technological disruption; at worst the species may inadvertently (or advertently) kill themselves off.
- Do you want Space Barbarians? Because this is how you get space barbarians.
This is more or less the reason for Star Trek's famed Prime Directive. Author Sylvia Engdahl maintains that she thought of her version of the Prime Directive in 1950 (before Star Trek) though her first novel featuring it was not published until 1970. Though the first example in written science fiction appears to be Olaf Stapledon's classic 1937 novel Star Maker.
In most incarnations of the non-interference clause, the space explorers are forbidden to give the primitive aliens any technological tips. In the extreme version the space explorers are forbidden to let the aliens know that the explorers exist. Either the explorers have to restrict their explorations to observing the planet from orbit, or, if it is possible for the explorers to disguise themselves as the aliens in question, they must not make any revealing slips.
When does a civilization graduate to being "ready" and no longer subject to the Prime Directive? Generally there is some benchmark. In Star Trek the primitive civilization graduates when they invent a faster-than-light starship. In the CoDominium Universe it is when the level of technology advances to the point where they can put an astronaut into space. In the Anthropology Service Universe it is when the species evolves to the point where they spontaneously develop psionic powers.
As opposed to the above, this is technology that will be forever bad and evil, no matter how mature humanity becomes. This is not quite as common in science fiction, since science generally does not admit the existence of any such thing. It is generally found only in stories based around one moral system or another, so the evilness is relative to the morality, not absolute. It is more at home in fantasies like H. P. Lovecraft's tales of dread Cthulhu.
A mild version of this sometimes appears in science fiction as a dystopian background, or as something the author uses to deliberately put the brakes on technological development so the novel's background does not confuse the readers.