SEARCH THIS SITE

Friday, 18 July 2008

Doctor Who and the Robots of Death - Richard's Room 101

In this fortnight's Room 101, we're going back to the esoteric worlds of Doctor Who to examine Doctor Who and the Robots of Death. We'll be discussing three stories from the classic series where the machines turned against mankind and then ask the question: could such a rebellion ever become a reality?


For those that don't know, The Robots of Death is the name of a classic 1977 Doctor Who story written by Chris Boucher (with some help from the legendary script editor and writer Robert Holmes). Broadcast at the height of the Tom Baker era, The Robots of Death was a classic whodunit story, based loosely on Agatha Christie's Ten Little Indians mixed with sci-fi elements from Frank Herbert's Dune and Isaac Asimov's I, Robot.

Set in the far future, the Doctor and his companion arrive on board Sand Mine 4, a sand miner trawling across a distant desert planet, in search of rare and valuable metals. Onboard are a small skeleton crew of humans and a much larger complement of servile robots. Because of their total dependence on robot labour, this society has built the strictest safeguards into the programming of each and every robot. So, when one of the crew is found mysteriously murdered, suspicion immediately falls upon the two new arrivals.

However, after more deaths occur, it soon becomes clear that the murderer is, in fact, a robot. We discover that one of the crew is really a brilliant but mad scientist who was raised from birth by only robots. Seeing the machines as "brothers," he hopes to spark a "robot revolution" by reprogramming the robots to kill their human masters. Fortunately, the Doctor finds a way of stopping this madman before his rebellion kills the whole crew and spreads across the galaxy. 
 
Of course, The Robots of Death was not the first time the Doctor came up against a robot menace. In the 1974 story Robot (Tom Baker's first complete story as the Doctor), written by Terrance Dicks, the Doctor battled the giant-sized robot "K1."

Set on present-day Earth, the Doctor is brought in by UNIT (United Nations Intelligence Taskforce) to investigate a series of robberies involving components for the top-secret disintegrator gun. The culprit is quickly revealed to be none other than K1, who we learn has been ordered to act against its prime directive never to harm humanity. The highly sophisticated robot is being used to carry out the agenda of the Scientific Reform Society, an extremist group dedicated to establishing a worldwide scientific dictatorship where only the greatest intellects rule. But, ordering the robot to break its programming eventually drives it insane. Now seeing humanity as cruel and selfish, K1 tries to trigger a nuclear war to destroy it. Thankfully, the Doctor is there to stop this and destroys K1 instead. 
  
 Ironically, in both Robot and The Robots of Death it is human action that turns the machines against mankind in the first place. However, in the 1966 William Hartnell story The War Machines, written by Ian Stuart Black (based on an idea from Dr Kit Pedler co-creator of the Cybermen), the machines do act alone without any human intervention first.
  
Arriving in 1966 London, the Doctor is intrigued to learn of a plan to link all the major computers in the world to a superintelligent computer called WOTAN. But there is more to WOTAN than meets the eye: the supercomputer has plans of its own. Seeing humans as an inferior form of life to machines and a waste of valuable resources, it plans to build a vast army of War Machines (large armoured mobile computers) to conquer and destroy humanity. Unfortunately for WOTAN, the Doctor finds a way of outwitting the machines and shutting them down.

The central idea running through all three stories, that the machines could one day rebel against mankind, makes for intriguing science fiction. But could a machine rebellion ever really break out and if so, when?

In his classic science fiction stories, Isaac Asimov created the "Three Laws of Robotics" (a set of rules that all robots are programmed to obey) to protect humanity from their machine creations. The three laws are as follows: 

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm. 

2) A robot must obey orders given to it by human beings except where such orders would conflict with the first law.

3) A robot must protect its own existence as long as such protection does not conflict with the first or second law.

Later, Asimov added a fourth law which states that: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm." They might still only be science fiction at this point, but we could well have to put these laws into practice in the near future. 

 Many serious people believe that we are quickly approaching "Technological Singularity," a point in our future history when computers advance beyond the limits of human intelligence and become the new leading source of great invention and breakthroughs in the world. In effect, this would mean the creation of smarter-than-human entities that would make human beings look obsolete in comparison. Futurists have varying opinions regarding "The Singularity." While some believe it to be little more than fantasy, others, such as the world-renowned Ray Kurzweil, think that it is inevitable.  

For Kurzweil, "Singularity" is simply a logical progression of what he sees as a long-term pattern of rapidly accelerating technological progress and change. Pointing to "Moore's Law," which has held for over four decades and predicts that computing power will double approximately every two years, Kurzweil suggests that the "Singularity" will occur before the end of the 21st century, perhaps as early as the year 2029. 

 No doubt that superintelligent machines could be a great benefit to mankind, pushing the rate of technological progress and scientific discoveries beyond the limits of human ability. However, there are also potential dangers we should also consider. 

Some speculate that superintelligent entities might develop their own goals that could be inconsistent with continued human survival and prosperity. AI (Artificial Intelligence) researcher Hugo de Garis goes as far as suggesting that such beings may simply choose to exterminate the human race in a Third World War (much like WOTAN in The War Machines).
  
Another possibility suggested by some transhumanists could be for the machines to use their superior technology to upgrade the human race. By augmenting human beings with cybernetic implants the machines could literally remake humanity into their own image. To the point where machines and humans become indistinguishable from one another and we evolve into a single new species.

"You will be like us." Whatever course the machines might choose to take, one thing seems certain: if the "Singularity" does take place and smarter-than-human machines do emerge (and this is all still a very big if at this point) it is probably unlikely that human civilization as we know it could survive. Sadly, human history seems clear on this. From the Aztecs to the Aborigines, whenever a civilization comes into contact with a more advanced one, eventually the more advanced one comes to dominate and replace the less advanced, either quickly through acts of genocide or more slowly through a gradual process of assimilation. Why would it be any different if our human civilization found itself sharing the world with superintelligent machines? 

The answer is that it probably wouldn't be unless we start thinking about these possibilities now, no matter how strange and unlikely they might sound to most people today. Fortunately, this does seem to be happening on the Internet at least.

 We began this piece by discussing three classic Doctor Who stories where machine rebellion broke out and asking the question could such a rebellion ever occur, the answer is "yes, maybe," if the "Singularity" does take place and superintelligent machines do develop their own goals that are inconsistent with human survival or well being. Of course, the flip side of this is that the machines might also rebel against mankind because they thought they knew what was best for us and wanted to protect us from ourselves or maybe from their view improve our quality of life. But, again, these are all still very big ifs at this point. 

However, here is a thought, perhaps the greater danger posed by the emergence of superintelligent machines (rather than rebellion) is humanity being made irrelevant by them. If the "Singularity" does occur, and machines do become the new leading source of invention and discovery in the world, doesn't that make it their world and not ours anymore? Whether the "Singularity" and the age of superintelligent machines is the next stage in evolution or not, it is still important to discuss these issues now and make plans for whatever the future may hold.

No comments:

Post a Comment