
Not to Worry–AI Isn’t Going to Take Over

AI hype isn't new. Here's Robert J. Marks writing on the topic in 2017

[The AI hype isn’t new. The history of exaggerating its potential goes back decades. In this article, Robert J. Marks echoes many of the views covered in more detail in his 2022 book Non-Computable You: What You Do That Artificial Intelligence Never Will. Today we share it with you, originally written on October 3rd, 2017, and published at The Stream.]

A.I. is transforming our world. Should we worry about that?

Entrepreneur billionaire Elon Musk is worried.

Woody Allen once said, “What if everything is an illusion and nothing exists? In that case, I definitely overpaid for my carpet.” Musk thinks he overpaid for his carpet. He believes there’s a good chance the world as we know it is a sophisticated computer simulation designed by a Super Programmer and we humans are intelligent agents in that simulation. I guess we can say Musk believes we are the product of intelligent design.

And we, as intelligently designed simulations, are generating our own A.I. (artificial intelligence) simulations. We are, in Musk’s universe, simulations writing simulations. Concerning the A.I. we create, Musk warns: “I think we should be very careful about artificial intelligence. If I had to guess what our biggest existential threat is, it’s probably that. So we need to be very careful.”

Musk seems unconcerned that we might pose an existential threat to the Super Programmer that wrote us. And if Musk is right, won’t the A.I. simulations we create ultimately be destroyed by the simulations our simulations write? So maybe there is nothing for Musk to worry about.

To keep A.I. from further screwing up humanity, Musk believes it needs to be controlled. He’s right on this point. Dangers accompany all new technology. There were threats from the introduction of the automobile, home electricity and the microwave oven. Alfred Nobel was concerned about the threat posed by his invention of dynamite. He salved his conscience by founding the Nobel Prizes — including the Nobel Peace Prize.

And then there are atomic and hydrogen bombs, the closest we’ve come to inventing doomsday devices.

What, Me Worry?

So should we worry about a computer like Skynet in the Terminator movies becoming self-aware and trying to wipe us out? Or, as in The Matrix, maybe computers will compel us to bathe in a slimy embryonic soup while computer programs hardwired to our brains take us to a make-believe world of virtual reality.

Researchers concerned about the so-called singularity — the point where computers gain intelligence beyond that of man — take this threat seriously. When that happens, the thinking goes, computers will take over and the best life available to you and me will be as a robot’s pampered pet. Should we worry?

The short answer is no. Many legitimate concerns can be raised about A.I., but not these. None of these things will happen.

There’s more and more evidence that computers won’t ever become conscious or gain understanding. Gregory Chirikjian, Director of Johns Hopkins Whiting School’s Robot Laboratory, adds, “Nor will robots be able to exhibit any form of creativity or sentience.”

But wait a minute! We know of something that does all these things. It’s somehow contained in the three pounds of fatty meat between our ears. Because of our brain, presumably, we’re creative, conscious and sentient. So why can’t A.I. someday do what we do?

Will A.I. Ever Achieve Consciousness?

A show-stopping reason that artificial intelligence and robots will never gain the higher abilities of humans is that features such as consciousness, understanding, sentience and creativity are beyond the reach of what we currently define as computers. Alan Turing invented the Turing machine in the 1930s. The Church-Turing thesis states that anything that can be done on a computer today can be done on Turing’s original machine. It might take a billion or a trillion times as long, but it can be done. Therefore, operations that can’t be performed by a Turing machine can’t be performed by today’s supercomputers.

Turing showed there are many deterministic operations beyond the powers of the computer. For example, no computer program can be written that always determines what another arbitrary computer program will do. Will an arbitrarily selected program eventually stop, or will it run forever? Turing showed that no computer can solve this problem, known as the halting problem. The Turing machine, and therefore every computer today, has fundamental limits on what it can do. In terms of understanding, our brains function beyond Turing machines in many ways.
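To see why, consider Turing’s diagonal argument, sketched below in Python. The names halts and contrarian are invented for illustration; the punch line is that no real implementation of halts can exist.

```python
# Sketch of Turing's diagonal argument. "halts" is a hypothetical
# oracle; Turing proved no algorithm can actually implement it.

def halts(program, data):
    """Pretend oracle: True if program(data) eventually stops,
    False if it runs forever."""
    raise NotImplementedError("No algorithm can decide this in general.")

def contrarian(program):
    """Do the opposite of whatever the oracle predicts the program
    will do when run on its own source."""
    if halts(program, program):
        while True:   # oracle says it halts, so loop forever
            pass
    return            # oracle says it loops, so halt immediately

# Ask: does contrarian(contrarian) halt? If the oracle answers True,
# contrarian loops forever; if it answers False, contrarian halts.
# Either answer is wrong, so no such oracle can exist.
```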

Searle’s Chinese Room

Philosopher John Searle offered another reason in his Chinese Room argument. Imagine a room with a little man named Pudge. He receives messages in Chinese slipped through a slot in the door. Pudge looks at the message and goes to a large bank of file cabinets in the room where he looks for an identical or similar message. Each folder in the file cabinet has two sheets of paper. On one is written the message that might match the message slipped through the door slot. The second sheet of paper in the file is the corresponding response to that message. Once Pudge matches the right message, he copies the corresponding response. After refiling the folder and closing the file drawer, Pudge walks back to the slot in the door through which he delivers the response and his job is done.

Here’s the takeaway.

Does Pudge understand the question or the response? No. Pudge does his job and doesn’t even read Chinese! He’s simply matching patterns. It might look from the outside like Pudge understands Chinese, but he doesn’t. He’s simply following an algorithm – a step-by-step procedure for accomplishing some goal.

When one follows a step-by-step procedure to bake a cake, i.e., a recipe, one is executing an algorithm. That’s all a computer can do: follow the instructions of an algorithm.
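In fact, the whole Chinese Room fits in a few lines of code. Here is a minimal Python sketch, with invented messages and canned replies: table lookup produces fluent-looking answers with zero understanding.

```python
# A toy Chinese Room: Pudge reduced to a dictionary lookup.
# The messages and replies below are invented examples.

RESPONSES = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "Fine, thanks."
    "今天天气怎么样？": "今天天气晴朗。",  # "How's the weather?" -> "Sunny."
}

def pudge(message: str) -> str:
    # Pure pattern matching: no Chinese is read or understood.
    return RESPONSES.get(message, "对不起，我不明白。")  # "Sorry, I don't understand."

print(pudge("你好吗？"))  # looks fluent from outside the room
```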

I Lost on Jeopardy, Baby

Remember when IBM’s Watson supercomputer beat everyone at the game show Jeopardy!? I can imagine Pudge in the Chinese room being reassigned to the Wikipedia room. When Watson is asked a question, Pudge goes to a Wikipedia file cabinet, retrieves the right response and slips it through the slot to the outside. Watson the computer doesn’t understand the questions or the answers. Watson is following a preprogrammed algorithm. It’s not conscious.

So what allows our brain, or rather us, to do things computers can’t? What makes us different?

Some researchers are seeking a materialistic explanation of our remarkable brains. Focusing on quantum effects in the brain’s microtubules, Sir Roger Penrose and Dr. Stuart Hameroff propose a quantum mechanical model. Hameroff notes their microtubule theory of the brain “is in conflict with a major premise of [strong] AI and Singularity.”

The theory of Penrose and Hameroff proposes a physical brain process that is nonalgorithmic. Computers are limited to executing algorithms. Since nonalgorithmic means noncomputable, what Penrose and Hameroff are proposing cannot be simulated on a computer. If the Penrose-Hameroff theory or other work on so-called quantum consciousness is successful and can be engineered into a working model, we will be able to build machines that do what the brain does. This new technology will not be a computer. We’ll need to give it another name.

If we can build a human-like brain, be afraid. Be very afraid. Skynet might be right around the corner. But as long as computers simply get faster and use more memory, there’s no reason to worry on this account.

Still, Don’t Be Reckless

Even if we accept the limits of computers, and therefore A.I., there’s still substance to Musk’s fears. He’s worried that we might “produce something evil by accident.” He’s right. Computers do what they’re told to do.

The celebrated science-fiction novel and movie 2001: A Space Odyssey features a high-level computer named HAL. In the story, HAL tries to kill the astronauts because they are interfering with the primary goal of the mission.

This is not so much a computer gone rogue as careless programming. Even if we grant the fiction that HAL is conscious, the fault lies with HAL’s programmer, who failed to specify that human beings were more important than the mission.

Science fiction writer Isaac Asimov made a stab at A.I. regulation in a 1942 sci-fi pulp magazine story, later grouped with other like-themed stories and published as the book I, Robot. Asimov proposed three laws to ensure the subservience of A.I. robots to humans. The first of Asimov’s three laws is:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

If HAL had been programmed with the first of Asimov’s laws, HAL would never have tried to take over the mission by murdering the astronauts. But, then again, the movie would be less interesting.

A.I. lawmakers and regulators need to think more broadly than Asimov. What would an A.I. robot do to a policeman attempting to tase a fleeing murder suspect? Following the first law, which allows no human being to come to harm, the robot would disarm the policeman and the murderer would escape. The first law needs amending.
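The deadlock is easy to see in code. Below is a minimal sketch, with an invented encoding of the taser scenario: read verbatim, the first law forbids every available option and gives the robot no answer at all.

```python
# Toy encoding of the taser dilemma. The option names and the
# harms_human flag are invented for illustration.

def first_law_allows(option):
    """Verbatim first law: reject any option under which a human comes to harm."""
    return not option["harms_human"]

options = [
    {"name": "stand by while the policeman tases the suspect", "harms_human": True},
    {"name": "disarm the policeman so the murderer escapes",   "harms_human": True},
]

permitted = [o["name"] for o in options if first_law_allows(o)]
print(permitted)  # [] : every option is forbidden; the rule can't weigh one harm against another
```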

Some say A.I. will someday be in a position to make moral and ethical judgments before acting. Nope. The moral and ethical judgments will have been made beforehand in the computer program.

Thinking about the consequences of rules is what lawmakers and regulators do. They often do so poorly. I hope any resulting regulation of A.I. does not necessitate a horde of bureaucrats snooping around in everybody’s code. Holding companies responsible for the actions of their A.I. seems like a better idea. Lawmakers with better microtubules than mine are needed to formulate such rules.

A.I. will continue replacing workers and changing our world. Algorithms are replacing travel agents, toll booth workers, bank tellers, checkout clerks and brick-and-mortar stores. But the growth of A.I. has also created jobs for information technology specialists, bloggers, data analysts, programmers and webmasters.

Any new technology will give rise to this sort of transition. Economists like to invoke the proverbial buggy whip factories, which were replaced by car factories.

Watch Out for the Hype

But beware of A.I. hype motivated by marketing, fame, ignorance or the pursuit of clicks on a website. In the early 1900s, when attempting to pitch his direct-current technology, Thomas Edison electrocuted animals at state fairs using Nikola Tesla’s competing alternating current. Edison even had poor Topsy the elephant zapped to death on Coney Island in 1903. He did this so people would adopt his direct current rather than Tesla’s alternating current.

According to Edison, Tesla’s alternating current was a potential existential threat. But Edison was not interested in truth. Because he thought it would improve his business, Edison spewed hype. As is evident from the output of the wall plugs in your home today, Tesla won the battle even though there were and are problems with his alternating current. Occasional accidental electrocutions still occur and there are still house fires due to frayed electrical insulation. But do we even consider eliminating these risks by living without electric power?

Every new technology has consequences. My wonderful cell phone allows me to access the web and the knowledge of the world but, in doing so, I have sacrificed my privacy to Google and maybe the NSA. There are always trade-offs. Personally, I can’t wait to program my car to drive from McGregor, Texas to our house near Charleston, West Virginia, push GO and crawl into the back seat to take a nap. There’s possible danger here. I could mistakenly enter Charleston, North Carolina.

Here’s the bottom line: filter the hype, understand the limits of A.I. and ignore electrocuted elephants. Nearly all A.I. forecasts are hyperbolic. Like the National Enquirer, they’re motivated by something other than truth. Heed the advice of Niels Bohr: “Prediction is very difficult, especially about the future.”

