AI Is Not Nearly Smart Enough to Morph Into the Terminator

Computer engineering prof Robert J. Marks offers some illustrations in an ITIF think tank interview
In a recent podcast, Walter Bradley Center director Robert J. Marks spoke with Robert D. Atkinson and Jackie Whisman at the prominent AI think tank, the Information Technology and Innovation Foundation, about his recent book, The Case for Killer Robots—a plea for American military brass to see that AI is an inevitable part of modern defense strategies, to be managed rather than avoided. It may be downloaded free here.
In this second part ( here’s Part 1), the discussion (starts at 6:31) turned to what might happen if AI goes “rogue.” The three parties agreed that AI isn’t nearly smart enough to turn into the Terminator:
A portion of the transcript follows. The whole transcript is here. Notes and links follow below.

Jackie Whisman: Well, opponents of so-called killer robots, of course, argue that the technologies can’t be trusted and will turn into the Terminator and kill innocent people. What’s your view there?
Robert J. Marks: I think that here we need to separate science fiction from science fact. As was mentioned in the introduction, artificial intelligence will never be sentient. It will never be creative. It will never understand. And currently, it has no common sense. It can’t even parse simple flubbed headlines. One of my favorites is
Seven Foot Doctors Sue Hospital
You see there that we have an ambiguity. It’s either doctors that are seven feet tall or doctors that specialize in the foot. And there’s a whole list of these. There’s a yearly competition called the Winograd Schema Challenge, where artificial intelligence tries to parse ambiguous sentences. And according to the economics professor Gary Smith at Pomona College, it turns out that these Winograd schemas can only be cracked about 50% of the time. So ambiguity is a really difficult problem in artificial intelligence.
Note: Prof. Marks has collected a number of such headlines, including “Students cook and serve grandparents” and “Two Sisters Reunited After 18 Years at Checkout Counter.” The test for detecting common sense in AI is named after Stanford’s Terry Winograd. Software engineer Brendan Dixon explains the challenge the schema addresses here. Gary Smith, author of The AI Delusion, talks about AI’s overall problem with ambiguity here.
Robert J. Marks: In fact, you remember IBM Watson, which won at Jeopardy? IBM thought it’d be a great idea in the medical field to mine data from the medical literature to help physicians. They were commissioned by MD Anderson to do that, but they weren’t able to, and MD Anderson ended up firing them. The bottom line, condensed into a single sentence: IBM Watson had no common sense to do this mining.
Note: Statistics prof Gary Smith explains the problems with these medical software systems this way: “They don’t understand which [data] are more meaningful than others, which medical articles are reasonable and which are bull and so a lot of doctors have become disillusioned with Watson. And a lot of hospitals have literally pulled the plug.” Essentially, the ability to crunch data is not at all the same thing as the ability to understand its significance. AI has a great future in medicine for rapid diagnostics, but that can’t be extrapolated to a general program for replacing doctors.
Rob Atkinson: We couldn’t agree more that AI is almost seen as like a magic pixie dust now. Put AI on something and it’ll do these amazing things. Yet I get frustrated when I… just simple things. When I go on Amazon and I click on an order, it doesn’t know that it’s me. It thinks it’s my wife, even when I’m on my phone and I have to put the order to come to me, how simple is that to do? And here we are living in a world of AI!
Next: Are universities falling behind in AI research? Prof. Marks offers some disturbing reflections.
Here’s Part 1: Is the U.S. military falling behind in artificial intelligence? What is the likely outcome of allowing those with very different value systems to have control of global AI warfare technology? Robert J. Marks told Information Technology and Innovation Foundation, an AI think tank, that AI superiority can deter or shorten wars, thus reducing overall casualties.
Part 3: Is the research money for AI spent at universities just wasted? A computer engineering prof tells an AI think tank about the time a four-star general visited his university. Robert J. Marks, author of the forthcoming book Supply Side Academics, says that the strong focus on publishing papers in journals doesn’t lead to advances in the discipline.
Part 4: Computer prof: Feds should avoid universities, seek entrepreneurs. Too much time at the U is wasted on getting papers into theoretical journals, and not enough time is spent on innovation, he contends. Robert J. Marks, author of The Case for Killer Robots and the forthcoming Supply Side Academics, wants a bigger focus on developing practical technologies.
You may also wish to look at:
Russia is systematically copying U.S. military AI robotics. In Russia’s top-down system, the military and corporations are essentially part of the same enterprise.
Podcast timestamps:

- 01:19 | Introduction to the podcast topic
- 02:13 | Introducing Dr. Robert J. Marks
- 03:38 | AI in military applications
- 05:07 | Staying ahead in development
- 06:31 | Major areas of AI in the military
- 07:10 | Drone swarms
- 09:26 | Will AI be sentient?
- 11:30 | Autonomous weapons
- 16:07 | Ethics
- 17:48 | The state of AI research
- 20:31 | Top priority in tech policy
- Get a free copy of The Case for Killer Robots by Robert J. Marks
- Original podcast at ITIF
- ITIF’s website
- Walter Bradley Center on Natural and Artificial Intelligence