March of the Machines

While perusing the “new non-fiction” section of the library last month, I noticed this book by Kevin Warwick on “The breakthrough in artificial intelligence.” I was intrigued. The first thing I noticed was that the book isn’t really new. It was written in 1997, but it came out in paperback in 2004, so that made it new, or at least cheap enough for my library to buy it.

The book is about machines taking over the world. Warwick’s thesis is that machines are getting smarter all the time and one day they’ll be as smart as us. Then, and this point he repeats over and over again, they’ll get way smarter than us and eventually take over. In chapter two, provocatively named “In the Year 2050,” he presents a future scenario in which the machines have taken over and relegated human beings to the few menial tasks machines don’t (yet) do well. It’s kind of a “Planet of the Apes” in which the apes are machines. There are “wild” humans who disrupt things and are hunted down by the machines (à la “The Matrix”?), but their population is small and isolated. They are basically treated like animals.

This sci-fi horror scenario came about because humans gradually came to depend more and more on machines in the first half of the 21st century, eventually giving machines the decision-making power to take over the world. Machines did so because, well, they were smarter, dummy. Humans made them smarter, and then used them to make even smarter machines, and then, when they got smarter than we are and had the means of production in their … pincers, they just took over. Wouldn’t you?

Warwick actually presents a compelling case. He’s a “strong AI” advocate who believes computers/machines will develop some form of consciousness once they’re complex enough. After all, he argues, humans are nothing more than biological “hardware,” so why shouldn’t machines be able to duplicate and eventually surpass the performance of humans? Machines already outperform us in many tasks. The big problem today is that we don’t understand well enough how our brains work, and how consciousness comes about. But even if we never figure that out, Warwick says it doesn’t matter. Machines won’t have a human kind of consciousness anyway. They’ll have a machine kind of consciousness, and maybe they already have it.

In taking over the world, the machines won’t necessarily act out of malice. They’ll merely behave the way humans behave. We’re smarter than the other animals, so we’re in charge here and have dominion over them. In the same way, machines will grow impatient with our slow wit and realize that they can make better decisions and are, after all, the higher life form.

What about the “Laws of Robotics” popularized by Isaac Asimov in his book I, Robot? These are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given by a human, unless this conflicts with law 1.
3. A robot must protect its own existence, unless this conflicts with laws 1 or 2.

Warwick covers this too. He points out, properly, that these laws are fictitious, and in any case impossible to carry out in practice. We’ve already created many robotic devices that injure humans, sometimes by specific intent, as in military weapons.

In the latter part of his book, Warwick gets around to describing his own robotic research, where he confronts the problems of creating learning robots directly. After reading his introductory chapters, where he projects that machines will take over the world in less than 50 years, I was expecting great things from his robots. Instead, I was underwhelmed by his little wheeled “dwarfs” running around bumping into each other and trying to avoid obstacles. Yes, alas, Deep Blue may have beaten Kasparov in their fabled chess match, but when machines try to interact with the real world, they still look like bumbling idiots.

Warwick chalks this up to the relative simplicity of his “neural networks.” Maybe this is so. If his computers had billions and billions of cells instead of “hundreds,” then maybe they would match the human brain’s uncanny ability to be self-aware and creative. But Warwick never answers the question I’d most like to hear him address: Why, if you believe machines will eventually subjugate human beings (including, perhaps, your own grandchildren), do you spend your energy working to perfect them?

DadReviews, 01/05/05
