Mind Matters Natural and Artificial Intelligence News and Analysis

Why an AI pioneer thinks Watson is a “fraud”

The famous Jeopardy contest in 2011 worked around the fact that Watson could not grasp the meaning of anything

In Robert J. Marks’s latest podcast with Pomona College economics professor Gary N. Smith, the discussion turned to “the inability of AI to understand puns, lyrics, context, or anything at all,” with folksinger Bob Dylan providing the focal point:

“Time Passes, Love Fades, But What Does ‘It’ All Mean?”

Robert J. Marks: Let’s talk about IBM Watson, which beat the world champions at Jeopardy. I learned from your book that the contest was gamed in a way, in the sense that Watson’s programmers had certain types of questions that they did not want Watson to be asked. Tell us about that.

Gary N. Smith: There were two kinds of gaming going on. One was that the human contestants weren’t allowed to buzz in until the light went on, and it took them a fraction of a second to see the light and respond. Watson couldn’t see lights, so it was sent an electronic signal when it was okay to buzz in, and that signal got there faster and was processed faster. And so Watson was repeatedly able to buzz in faster than the humans were. It wasn’t that the humans didn’t know the answer; they just didn’t have the reflexes.

The other gaming was that computers don’t really understand words… So you ask, “Who was the sixteenth president of the United States?” The computer doesn’t know what “sixteenth” and “president of the United States” mean. But it can go rummage through Wikipedia-like sources, find those words, match them to a president, Abraham Lincoln, and come back with “Who is Abraham Lincoln?”

But then you put in anything that’s like a pun or a joke or a riddle or sarcasm, something you can’t look up in Wikipedia, and computers are helpless. For example, in the first round, one of the final Jeopardy clues was, “Its largest airport is named for a World War II hero; its second largest for a World War II battle.” And the correct answer was “Chicago.” And Watson guessed “Toronto,” apparently because it was confused about what “it” referred to in the second part of that sentence. And that is a common problem with computers. (See: Why did Watson think Toronto was in the U.S.A.?)
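The word-matching Smith describes can be illustrated with a toy sketch. This is hypothetical code, not IBM’s actual pipeline: the two-entry corpus, the scoring rule, and the `best_match` helper are all invented for illustration. The point is that keyword overlap alone gets the Lincoln clue right without any understanding of what the words mean.

```python
# Hypothetical illustration: answering a clue by keyword overlap, not
# understanding. The corpus and scoring are toy inventions, not Watson's
# real system.

corpus = {
    "Abraham Lincoln": "Abraham Lincoln was the sixteenth president of the United States",
    "Toronto": "Toronto has an airport and is the largest city in Canada",
}

def best_match(clue: str) -> str:
    """Return the corpus entry sharing the most words with the clue."""
    clue_words = set(clue.lower().replace("?", "").split())
    return max(
        corpus,
        key=lambda name: len(clue_words & set(corpus[name].lower().split())),
    )

print(best_match("Who was the sixteenth president of the United States?"))
```

The clue shares seven words with the Lincoln entry and only one (“the”) with the Toronto entry, so the overlap score picks Lincoln; no meaning is involved anywhere.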

Terry Winograd is a computer scientist at Stanford who thought up this test of computer knowledge. The question is, “What does ‘it’ refer to in this sentence?”

I can’t chop down that tree with this axe because it is too small.

Does “it” refer to the tree or the axe? We humans immediately know, because we know what a tree is and what an axe is, we know what “chop down” means, and we know that small axes are not going to bring down big trees. But computers can’t answer that question because they don’t know what “it” refers to.

See also: AI is no match for ambiguity.

There’s a semiannual Winograd Schema Challenge in which computers try to get 90% of the questions right. Right now they get about 50% right, no better than guessing, because they really do not understand what words mean.

So in the Jeopardy thing, the IBM people did not want questions where there was any kind of ambiguity about what was being asked, because ambiguity is very hard for computers to deal with. [11:27–14:28]
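The chance-level Winograd results Smith mentions are easy to see with a toy heuristic (a hypothetical sketch, not any real challenge entrant): count how often each candidate antecedent appears in the sentence. “Tree” and “axe” each appear exactly once, so a model relying on surface statistics alone has no signal and can only guess, which is exactly the coin-flip accuracy described above.

```python
# Hypothetical sketch: trying to resolve "it" from surface statistics alone.
# Each candidate antecedent appears exactly once, so a word-counting
# heuristic ties and must guess, giving chance-level accuracy.

sentence = "I can't chop down that tree with this axe because it is too small."
words = sentence.lower().strip(".").split()

candidates = ["tree", "axe"]
scores = {c: words.count(c) for c in candidates}

print(scores)  # both candidates score 1: a tie, so the model must guess
```

A human breaks the tie instantly from world knowledge (small axes don’t fell big trees), which is precisely the knowledge the word counts cannot carry.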

Robert J. Marks: One of the things you mentioned, and I think everyone is familiar with it, is the IBM Watson commercial with Bob Dylan, the great songwriting icon and Nobel Prize winner. Watson says, I read 50 billion pages a second, or whatever it is. And then he says, I’ve read all your work and I think that your primary objective is to promote the idea that time passes and love fades. And that was Watson’s summary of Dylan’s work. You pointed out that that was actually pretty shallow.

Gary N. Smith: It was really bad. A friend of mine who is a pioneer in artificial intelligence research, Roger Schank, wrote a blog post about this: “The ad made me laugh, or would have if it had not made me so angry. I will say it clearly: Watson is a fraud.”

You and I are people who know Bob Dylan. What he was writing about was the Vietnam War and civil rights, but he didn’t use those words, and so Watson couldn’t figure out what he was actually writing about. One of the examples I use (it’s Roger Schank’s) is the opening lines to “The Times They Are a-Changin’”:

Gather round, people, wherever you roam

Admit that the waters around you have grown,

And accept it that soon you’ll be drenched to the bone.

And what does that mean? You and I can argue about it, disagree about it, like any great poem or literature, but we have a basic idea of what he is saying, that there’s trouble brewing and the times are changing.

And Watson has no idea. It doesn’t know what any of those individual words mean, and so it comes up with silly things like “time passes and love fades.” That’s not at all what he’s writing about.

Robert J. Marks: So basically, IBM Watson doesn’t understand. It doesn’t have the ability to understand anything that’s metaphorical, that has a double meaning, or that’s meant in whimsy; it takes everything literally, right?

Gary N. Smith: Even deeper than that, it truly does not understand what words mean. Its inability to figure out what “it” means in a sentence comes from not understanding what any of the words in the sentence mean. It can count them, look up definitions, find rhyming words, and spellcheck them, but it doesn’t actually know what they mean.

There’s this guy Nigel Richards, who has won the French Scrabble championship twice without understanding any of the words he spells; for him it’s just a big mathematical puzzle. And that’s what computers do with words…

You’ll see that in some translation programs, even the best deep neural network ones, like Google Translate. Sometimes the stuff that comes back is just absolute gibberish… They try to match up sentence for sentence, word for word, but lose all meaning. They don’t know meaning. [14:59–18:12] …

Note: Gary Smith is the author of The AI Delusion (2018). Watch for the next episode, in which the discussion continues.

Earlier discussions between Robert J. Marks and Gary Smith:

Can AI combat misleading medical research? No, because AI doesn’t address the “Texas Sharpshooter Fallacies” that produce the bad data.

AI delusions: A statistics expert sets us straight. We learn why Watson’s programmers did not want certain Jeopardy questions asked.


The US 2016 election: Why Big Data failed. Economics professor Gary Smith sheds light on the surprise result.

Further reading on “lies, damned lies, and statistics”*:

Big data can lie: Simpson’s Paradox illustrates the importance of human interpretation of the results of data mining. (Robert J. Marks)

Study shows eating raisins causes plantar warts. Sure. Because, if you torture a Big Data enough, it will confess to anything. (Robert J. Marks)

*A proverb among 19th century British politicians, popularized by Mark Twain. “It suggests that statistics can be used to mislead even more than the worst form of untruth.” – The Phrase Finder

Featured image: Meaning/Michail Petrov, Adobe Stock
