Robots work in warehouses, explore Mars, assist police, clean floors, and serve as companions for kids and adults alike. But before Furbies there was Frankenstein, before Roombas there was R.U.R., and before androids we had Asimov.
Frankenstein is generally regarded as the first work of science fiction. Published in 1818, Mary Shelley’s novel exposes the dangers of the Industrial Revolution, as well as man’s arrogance in using technology to thwart nature. While Frankenstein’s monster isn’t a robot, it is fiction’s first artificial life form, and thus paves the way for the robots that now dominate the genre. Shelley was the first author to engage the question of whether humans—or more specifically, men—can create life. Her answer is that it’s possible, but just because one can doesn’t mean one should. Frankenstein establishes two motifs that now define sci-fi—the human-like and “uncanny” nature of artificial life, and the possibility that such life will take over.
The first fictional robots appear in Karel Čapek’s 1920 play R.U.R. (Rossum’s Universal Robots). Čapek coined the word “robot,” derived from the Czech robota, meaning “forced labor.” In his play, robots free humans from work. Interestingly, the humanoid robots are organic:
I will show you the kneading trough…the pestle for beating up the paste. In each one we mix the ingredients for a thousand Robots at one operation. Then there are the vats for the preparation of liver, brains, and so on. Then you will see the bone factory. After that I’ll show you the spinning mill…[f]or weaving nerves and veins. Miles and miles of digestive tubes pass through it at a time.
Perhaps Čapek couldn’t envision mechanical humanoid robots, but given that he had recently witnessed the technological advancements of World War I, it’s more likely that he anticipated humans would choose to make robots in their image, both inside and out.
The “Uncanny Valley” is the point where robots’ apparent humanness makes people uneasy. The term was coined by Japanese roboticist Masahiro Mori in 1970, building on the concept of the uncanny described by German psychiatrist Ernst Jentsch in 1906. Jentsch pointed to wax dolls, which often prompt double-takes: uncertainty about whether something is animate or human creeps us out. Čapek was the first to exemplify Jentsch’s theory in fiction: “They make me feel so strange,” says human Helena of the robots. When she feels the skin of a robot called Sulla, she says, “Oh, no, no,” and upon closely inspecting it, she says, “Oh, stop, stop.” A certain likeness to humans inspires kinship, but when the line blurs, that kinship turns to fear.
Though technically human, Frankenstein’s monster inspires more revulsion and horror than Čapek’s robots because of its grotesque appearance. Its uncanniness comes from its ability to feel, or for lack of a better word, its soul. The monster forages for food and refuses to eat anything it has to kill. It learns how to give voice to its emotions: “Believe me, Frankenstein, I was benevolent; my soul glowed with love and humanity; but am I not alone, miserably alone? You, my creator, abhor me; … [others] spurn and hate me.” The monster feels the way any ostracized human would, which juxtaposed with its ghastly visage creates an uncomfortable dissonance.
Human behavior is at the root of the monster’s murderous revolt. Victor bolts out of his apartment as soon as the monster becomes animate, and the monster is rejected by countless others. It’s only after Victor reneges on his promise to create a mate that the monster becomes vengeful: “On you it rests whether I quit forever the neighbourhood of man and lead a harmless life, or become the scourge of your fellow creatures and the author of your own speedy ruin.”
R.U.R. extends the revenge plotline to robots, establishing the well-worn narrative: humans create robots, robots serve humans, robots rebel and take over. R.U.R. also establishes the now-familiar motive for such rebellion: resentment at being enslaved by an inferior species and the desire to avenge that skewed dynamic. Radius, the robot that leads the revolt, sums it up: “You are not as strong as the Robots. You are not as skillful as the Robots. The Robots can do everything. You only give orders. You do nothing but talk…I don’t want a master. I want to be master.” You can guess the way this story ends.
Isaac Asimov, best known for his short story collection I, Robot, challenged the robot-takeover trope, which he called the “Frankenstein Complex.” To Asimov, the solution was obvious: build “safeguards” into robots the same way humans build them into machines. Hence, Asimov developed the Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence except where such protection would conflict with the First or Second Law.
Asimov’s robots follow these laws—though not without a few snags and loopholes—and function as tools, not as existential threats. In fact, Asimov depicts robots as often more ethical than man, and as “essentially decent.”
Asimov later added a Zeroth Law that echoes the First but replaces “a human being” with “humanity,” giving robots the ability to act in the interest of the common good, even if it means causing harm (e.g., saving a crowd by taking down a gunman). This also allows robots to override human orders when they have a better understanding of what’s best for humanity. Asimov even sets the stage for robot rights. In “The Bicentennial Man,” the robot protagonist lobbies for its liberation, arguing that “only someone who wishes for freedom can be free”—though the Three Laws always apply.
Our relationship—both fictional and now factual—with our artificially animated counterparts has always been complicated. What’s consistent among these works is that as the creators of artificial life, humans, like all parents, play a crucial role in what their “offspring” become.