
Friday 26 December 2008

Mac Tonnies - A Room 101 Interview with a Transhumanist

This fortnight in Room 101, we're breaking new ground with a special non-UFO interview with Mac Tonnies. Tonnies is the author of After The Martian Apocalypse, an excellent book on Mars anomalies. He is perhaps better known in UFO circles, though, for his controversial cryptoterrestrial hypothesis. Both these topics, of course, were covered in depth during his appearance on BoA: Audio, so in this interview, we're going to focus more on his views and beliefs as a Transhumanist instead. In particular, we'll be getting his take on many of the problems people have with the whole idea of Transhumanism. (You can read about my views on the topic in Doctor Who and the Robots of Death.)
   

So is Mac Tonnies a real-life Davros-like mad scientist or the next Arthur C. Clarke? Maybe a bit of both, you decide ... 
  

Richard Thomas: First things first, thank you so much for agreeing to do this interview. I've really enjoyed your appearances on BoA: Audio and other podcasts and am really looking forward to finally getting the chance to ask you some questions myself.
  
In this interview, I want to mainly get your take on the Transhumanist movement and some concerns many (myself included) have about the whole idea of upgrading humanity. But first, there is something else I've been wanting to ask you about that kind of relates to transhumanism a little bit. 
  
I'm a huge fan of Nigel Kneale's Quatermass serials and films, particularly Quatermass and the Pit. What do you think of the central premise of the story: "That we owe our human condition here to the intervention of insects"? 
  
Mac Tonnies: Cultures all over the world seem to have a special affinity with insect intelligence, a theme we seem to see reiterated in Western pop culture's iconic image of the "Gray" alien. "Trippers" who ingest DMT sometimes describe similar insect-like entities. The question that naturally arises is whether we're indeed making contact with an intelligence external to our own minds or else tapping into some neural legacy.
   
Colony collapse disorder is at least as disturbing, albeit for different reasons. The global die-off of bees reminds us how intricately connected we are with the planet. Ultimately, there are no dispassionate, clinical observers; we're embedded in the experiment with no clear sight of its purpose -- assuming, of course, that it has one.
  
Richard Thomas: For people who don't know, what is "transhumanism" and why do you support the idea? 
   
Mac Tonnies: Transhumanism is a simple blanket term for people who view technology as a means by which to augment and expand human prowess -- physically, cognitively and perhaps even spiritually. We're already knee-deep in an era of smart drugs, genetic therapies and molecular manufacturing, so it's not exactly rash to attempt to anticipate future breakthroughs. For instance, there's reason to suspect that ageing itself will eventually come to be viewed as a degenerative disease, much as we currently view diseases like polio or cancer. Given the ability to avert disease, relatively few among us will refuse to take advantage of new cures. So I suspect most of us are "closet transhumanists," whether we're explicitly familiar with the philosophical arguments or not.
  
Richard Thomas: Sci-Fi is littered with examples of what might be called transhumans or post-humans: from the Daleks and Cybermen of Doctor Who to the Borg and Augments of Star Trek. But how do you imagine these future creations? For example, do you think some might have a group consciousness like the Borg, or perhaps have removed their emotions like the Cybermen?
  
Mac Tonnies: The Borg is a wonderful cautionary metaphor: the transhumanist equivalent to the Party in Orwell's "1984." Could transhumanist technologies be used unwisely? Certainly. But the same could be said for any technology, old or new. As with any endeavour with the potential to fundamentally alter our relationship with ourselves, we need to apply caution and forethought, which is what much of contemporary science fiction represents.
  
Richard Thomas: I'm all for giving sight to the blind, replacing missing limbs and that kind of thing. Restoring or making up for lost ability seems fine since we're already doing it with things like false teeth and eyeglasses, but I have to draw the line at trying to make "improvements" or "upgrading" people. Trying to create better or even "perfect" beings suggests there is something wrong, or worse, inferior about people now. Historically, this is a very, VERY dangerous idea. What are your thoughts on this? 
  
Mac Tonnies: I would argue that we're all "inferior" in the sense that we're ill-adapted to essentially any lifestyle other than the one in which we happened to evolve. (Ask an astronaut.) I don't think any transhumanist thinkers want to create a "perfect" being; the operative goal is to empower the human species on an individual level. In a foreseeable future scenario, instead of being saddled with the genome one blindly inherits, one can choose to become an active participant -- and I find that possibility incredibly liberating and exciting. Transhumanism is not eugenics. 
   
Richard Thomas: The whole idea of the post-human seems dangerously close to Friedrich Nietzsche's concept of the Übermensch or Superman. How do we prevent transhumanism from being hijacked and turned into something evil the way Nietzsche's ideas were by Hitler and the Nazis? 
   
Mac Tonnies: That's a legitimate risk. As with the "digital divide," it's likely that, at first, only the relatively wealthy will have access to modification technology -- whether a brain-computer interface, anti-senescence treatment or access to intelligence-expanding pharmaceuticals. But one of the appealing outgrowths of digital manufacturing is the ability to build on the atomic level: the sort of technology that could mature into a nanotech "assembler" that can produce desired goods from scratch. Machines like this could do an immeasurable amount of good for the developing world; one hopes they're inevitable, like the now-ubiquitous cellphone. 
   
Richard Thomas: Human beings seem to find it hard enough to get on with other humans, never mind post-humans. What sort of relationship do you think will exist between us and post-humans? Will they be our slaves or will we be their pets?
  
Mac Tonnies: Neither. A posthuman civilization will probably have enough to think about without harassing its neighbours -- especially if they pose no threat. When I see the Amish, I'm tempted to speculate along similar lines. Almost invariably, some of us will eschew transhumanism for various philosophical or metaphysical reasons, but that doesn't necessarily entail antagonism or hostility. 
   
Richard Thomas: Closely paralleling transhumanism, of course, is the whole idea of a "Technological Singularity": a point in our future history when computers advance beyond the limits of human intelligence and become the new leading source of great invention and breakthroughs in the world. How likely do you think Ray Kurzweil's predictions are that it will occur in the next few decades?
   
Mac Tonnies: I think Kurzweil's overly optimistic -- and naive in a sort of endearingly infectious way. Specifically, I don't think the post-biological future will arrive as abruptly as Kurzweil suspects. While I think many of his forecasts will indeed happen more or less as advertised, I foresee a more gradual -- and markedly less utopian -- transition. On the other hand, we might direly need the technologies Kurzweil describes in order to survive the excesses and hazards of the next century, and necessity is often the mother of invention.
   
Richard Thomas: Do you think the Singularity is something we should be preparing for in case it really does take place? For instance, do you think we need any new laws or other safeguards to prevent any possible dangers? (e.g. Robot rebellion.)
  
Mac Tonnies: Absolutely. We can continue to engage in a healthy dialogue about when and how the Singularity might arrive -- if ever -- but there's enough momentum to suggest some very real challenges in coming decades. Possible dangers include "designer" viruses and weaponized nanotech: inventions that could conceivably render us extinct. I don't think that's a risk we can afford to underestimate, regardless of one's intellectual biases. 
  
Richard Thomas: Some speculate that superintelligent machines might develop their own goals that could be inconsistent with continued human survival and prosperity. What do you think of AI (Artificial Intelligence) researcher Hugo de Garis's warning that such entities may simply choose to exterminate the human race? 
  
Mac Tonnies: Roboticist Hans Moravec thinks the opposite is more likely: our mechanical offspring will think of us as parents and allow us to join them or perish of our own accord. Perhaps it seems cold, but that's evolution. If Homo sapiens is ultimately usurped by something wiser and more capable, that's quite OK with me.
  
Richard Thomas: What are your plans for the future? I understand you've been working on a book on your cryptoterrestrial hypothesis. When do you think we might expect that?
  
Mac Tonnies: I'm fascinated by accounts of apparent UFO occupants and have been rethinking who or what we might be dealing with. I'm of the opinion that the extraterrestrial interpretation is incomplete. Could we be interacting with indigenous humanoids? That's the question I'm posing in the book I'm writing. Time will tell if it helps resolve the UFO enigma; I'll be satisfied if it makes readers a little less complacent.
  
Richard Thomas: Thanks again, I look forward to your future projects.
  

Friday 18 July 2008

Doctor Who and the Robots of Death - Richard's Room 101

In this fortnight's Room 101, we're going back to the esoteric worlds of Doctor Who to examine Doctor Who and the Robots of Death. We'll be discussing three stories from the classic series where the machines turned against mankind and then ask the question: could such a rebellion ever become a reality?


For those that don't know, The Robots of Death is the name of a classic 1977 Doctor Who story written by Chris Boucher (with some help from the legendary script editor and writer Robert Holmes). Broadcast at the height of the Tom Baker era, The Robots of Death was a classic whodunit story, based loosely on Agatha Christie's Ten Little Indians mixed with sci-fi elements from Frank Herbert's Dune and Isaac Asimov's I, Robot.

Set in the far future, the Doctor and his companion arrive on board Storm Mine 4, a sandminer trawling across a distant desert planet in search of rare and valuable metals. On board are a skeleton crew of humans and a much larger complement of servile robots. Because of their total dependence on robot labour, this society has built the strictest safeguards into the programming of each and every robot. So, when one of the crew is found mysteriously murdered, suspicion immediately falls upon the two new arrivals.

However, after more deaths occur, it soon becomes clear that the murderer is, in fact, a robot. We discover that one of the crew is really a brilliant but mad scientist who was raised from birth only by robots. Seeing the machines as "brothers," he hopes to spark a "robot revolution" by reprogramming the robots to kill their human masters. Fortunately, the Doctor finds a way of stopping this madman before his rebellion kills the whole crew and spreads across the galaxy.
 
Of course, The Robots of Death was not the first time the Doctor came up against a robot menace. In the 1974 story Robot (Tom Baker's first complete story as the Doctor), written by Terrance Dicks, the Doctor battled the giant-sized robot "K1."

Set on present-day Earth, the Doctor is brought in by UNIT (United Nations Intelligence Taskforce) to investigate a series of robberies involving components for a top-secret disintegrator gun. The culprit is quickly revealed to be none other than K1, which, we learn, has been ordered to act against its prime directive never to harm humanity. The highly sophisticated robot is being used to carry out the agenda of the Scientific Reform Society, an extremist group dedicated to establishing a worldwide scientific dictatorship where only the greatest intellects rule. But ordering the robot to break its programming eventually drives it insane. Now seeing humanity as cruel and selfish, K1 tries to trigger a nuclear war to destroy it. Thankfully, the Doctor is there to stop this and destroys K1 instead.
  
Ironically, in both Robot and The Robots of Death it is human action that turns the machines against mankind in the first place. However, in the 1966 William Hartnell story The War Machines, written by Ian Stuart Black (based on an idea from Dr Kit Pedler, co-creator of the Cybermen), the machines act alone, without any human intervention.
  
Arriving in 1966 London, the Doctor is intrigued to learn of a plan to link all the major computers in the world to a superintelligent computer called WOTAN. But there is more to WOTAN than meets the eye: the supercomputer has plans of its own. Seeing humans as an inferior form of life to machines and a waste of valuable resources, it plans to build a vast army of War Machines (large armoured mobile computers) to conquer and destroy humanity. Unfortunately for WOTAN, the Doctor finds a way of outwitting the machines and shutting them down.

The central idea running through all three stories, that the machines could one day rebel against mankind, makes for intriguing science fiction. But could a machine rebellion ever really break out and if so, when?

In his classic science fiction stories, Isaac Asimov created the "Three Laws of Robotics" (a set of rules that all robots are programmed to obey) to protect humanity from its machine creations. The three laws are as follows:

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm. 

2) A robot must obey orders given to it by human beings except where such orders would conflict with the first law.

3) A robot must protect its own existence as long as such protection does not conflict with the first or second law.

Later, Asimov added a "zeroth law," which takes precedence over the other three: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm." These laws might still only be science fiction at this point, but we could well have to put something like them into practice in the near future.
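To make the idea concrete, here is a minimal sketch of how a strict priority ordering like Asimov's might be encoded in software. Everything in it (the Action fields, the permitted function) is a hypothetical illustration rather than any real robotics API, and it quietly waves away the genuinely hard part: predicting whether an action would actually harm a human.

```python
# A toy encoding of Asimov-style laws as a strict priority ordering.
# The boolean flags are stand-ins for what would really be difficult
# predictive judgements about consequences.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool = False              # would this action injure a human?
    allows_harm_by_inaction: bool = False  # would it let a human come to harm?
    ordered_by_human: bool = False         # was it commanded by a human?
    risks_self: bool = False               # does it endanger the robot itself?

def permitted(action: Action) -> bool:
    """Check an action against the Three Laws, highest priority first."""
    # First Law: never injure a human, or allow harm through inaction.
    if action.harms_human or action.allows_harm_by_inaction:
        return False
    # Second Law: obey human orders, now that Law One is satisfied.
    if action.ordered_by_human:
        return True
    # Third Law: otherwise act only if self-preservation is not at stake.
    return not action.risks_self

print(permitted(Action("fetch the tools", ordered_by_human=True)))   # True
print(permitted(Action("strike the intruder", harms_human=True,
                       ordered_by_human=True)))                      # False
```

Notice that the ordering does all the work: a human order is never even consulted until the harm checks have passed, which is exactly why the mad scientist in The Robots of Death has to rewrite the robots' programming rather than simply command them to kill.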

Many serious people believe that we are quickly approaching a "Technological Singularity": a point in our future history when computers advance beyond the limits of human intelligence and become the new leading source of great invention and breakthroughs in the world. In effect, this would mean the creation of smarter-than-human entities that would make human beings look obsolete in comparison. Futurists have varying opinions regarding "The Singularity." While some believe it to be little more than fantasy, others, such as the world-renowned Ray Kurzweil, think that it is inevitable.

For Kurzweil, the "Singularity" is simply the logical progression of what he sees as a long-term pattern of rapidly accelerating technological progress and change. Pointing to "Moore's Law," which has held for over four decades and predicts that computing power will double approximately every two years, Kurzweil suggests that the "Singularity" will occur before the end of the 21st century, perhaps as early as the year 2029.
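It is worth pausing on just how fast that compounding is. Treating the two-year doubling period as exact (a simplifying assumption for illustration), the growth factor after a given number of years is 2 raised to half that number, which a few lines of code make vivid:

```python
# Back-of-the-envelope Moore's Law projection: if capability doubles
# every two years, the growth factor after `years` years is 2 ** (years / 2).
# The fixed doubling period is a simplifying assumption, not a physical law.

def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    return 2 ** (years / doubling_period)

print(f"{growth_factor(20):,.0f}x")  # two decades: about a 1,024-fold increase
print(f"{growth_factor(40):,.0f}x")  # four decades: roughly a million-fold
```

On that curve, the computers of 2029 would be thousands of times more powerful than those of the late 2000s, which is the arithmetic behind Kurzweil's optimism.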

No doubt superintelligent machines could be a great benefit to mankind, pushing the rate of technological progress and scientific discovery beyond the limits of human ability. However, there are also potential dangers we should consider.

Some speculate that superintelligent entities might develop their own goals that could be inconsistent with continued human survival and prosperity. AI (Artificial Intelligence) researcher Hugo de Garis goes as far as suggesting that such beings may simply choose to exterminate the human race in a Third World War (much like WOTAN in The War Machines).
  
Another possibility, suggested by some transhumanists, is that the machines could use their superior technology to upgrade the human race. By augmenting human beings with cybernetic implants, the machines could literally remake humanity in their own image, to the point where machines and humans become indistinguishable from one another and we evolve into a single new species.

"You will be like us." Whatever course the machines might choose to take, one thing seems certain: if the "Singularity" does take place and smarter-than-human machines do emerge (and this is all still a very big if at this point) it is probably unlikely that human civilization as we know it could survive. Sadly, human history seems clear on this. From the Aztecs to the Aborigines, whenever a civilization comes into contact with a more advanced one, eventually the more advanced one comes to dominate and replace the less advanced, either quickly through acts of genocide or more slowly through a gradual process of assimilation. Why would it be any different if our human civilization found itself sharing the world with superintelligent machines? 

The answer is that it probably wouldn't be unless we start thinking about these possibilities now, no matter how strange and unlikely they might sound to most people today. Fortunately, this does seem to be happening on the Internet at least.

We began this piece by discussing three classic Doctor Who stories in which machine rebellion broke out, and by asking whether such a rebellion could ever really occur. The answer is "yes, maybe": if the "Singularity" does take place, superintelligent machines might well develop their own goals that are inconsistent with human survival or well-being. Of course, the flip side is that the machines might also rebel against mankind because they thought they knew what was best for us and wanted to protect us from ourselves, or, from their point of view, improve our quality of life. But, again, these are all still very big ifs at this point.

However, here is a thought: perhaps the greater danger posed by the emergence of superintelligent machines is not rebellion but humanity being made irrelevant by them. If the "Singularity" does occur, and machines do become the new leading source of invention and discovery in the world, doesn't that make it their world and not ours anymore? Whether the "Singularity" and the age of superintelligent machines is the next stage in evolution or not, it is still important to discuss these issues now and make plans for whatever the future may hold.