
1: IBM’s Watson Is NOT Our New Computer Overlord

AI help, not hype: It won at Jeopardy (with specially chosen “softball” questions) but is not the hoped-for aid to cancer specialists

Watson, a computer system that can answer questions posed in natural language, beat the world’s best Jeopardy champions in 2011. But was it a fair fight? Most people don’t know that, as economics professor Gary Smith recounts in his recent book The AI Delusion, the Watson team asked the Jeopardy question writers to pull their punches:

The IBM (Watson) team was afraid the Jeopardy staff would write clues with puns and double meanings that could trick Watson. That, in and of itself, reveals one big difference between humans and computers. Humans can appreciate puns, jokes, riddles, and sarcasm because we understand words in context. The best that current computers can do is check whether the pun, joke, riddle, or sarcastic comment has been stored in its database. (The AI Delusion (Oxford, 2018), p. 8)

So, as he recounts, the Jeopardy staff agreed to select clues randomly from a stockpile that had been written but never used. That was a fair solution. But in making the request, IBM was tacitly admitting that Watson could be easily fooled by an ordinary level of complexity.

Fred Flintstone once glued his fingers to a bowling ball. Barney Rubble got a big hammer. Fred said, “When I nod my head, hit it.” With the expected result. For you and me, the scene is intended as a joke. But interpreting vague pronouns in context—in this case “it”—is difficult for current AI. That will probably improve with more programming. But bigger problems loom for Watson. It is failing as a “real business” proposition (IBM’s choice of term), not at winning trivia championships.
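To see why that one little pronoun is hard, consider a toy sketch in Python (my own illustration, with invented names; it is not how Watson or any production system resolves pronouns). A naive rule that ties a pronoun to the most recently mentioned noun lands on “head,” the misreading the joke turns on, while knowing that Fred means the bowling ball takes background knowledge that word order alone cannot supply.

```python
# Toy illustration of why pronoun resolution needs context, not just word order.
# The "nearest preceding noun" rule below is a deliberately naive baseline,
# not a component of Watson or any real system.

import re

SENTENCES = [
    "Fred glued his fingers to a bowling ball.",
    "Barney got a big hammer.",
    "Fred said, when I nod my head, hit it.",
]

# Hand-listed candidate referents, just for this example.
CANDIDATE_NOUNS = {"fingers", "ball", "hammer", "head"}

def nearest_noun_heuristic(sentences, pronoun="it"):
    """Resolve the pronoun to the most recently mentioned candidate noun."""
    last_seen = None
    for sentence in sentences:
        for word in re.findall(r"[a-z]+", sentence.lower()):
            if word in CANDIDATE_NOUNS:
                last_seen = word
            elif word == pronoun:
                return last_seen
    return last_seen

print(nearest_noun_heuristic(SENTENCES))
# Prints 'head': the rule tells Barney to hit Fred's head. Fred, of course,
# meant the bowling ball, and nothing in the word order says so.
```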

Venture capitalist Chamath Palihapitiya, CEO of investment firm Social Capital, pulls no punches:

IBM isn’t at the forefront of artificial intelligence, Social Capital CEO and founder Chamath Palihapitiya told CNBC on Monday, and he certainly isn’t a fan of IBM’s Watson.

“Watson is a joke, just to be completely honest,” he said in an interview with “Closing Bell” on the sidelines of the Sohn Investment Conference in New York…

I think what IBM is excellent at is using their sales and marketing infrastructure to convince people who have asymmetrically less knowledge to pay for something. Natalia Wojcik, “IBM’s Watson ‘is a joke,’ says Social Capital CEO Palihapitiya” at CNBC

Originally, IBM pitched its Watson supercomputer as a revolution in marshaling the flood of new medical information so as to enable better cancer care. Hundreds of medical papers are published each day. No one can read them all. So when a patient has cancer, a physician could, in theory, tell Watson the details so that Watson could dig through the masses of papers and identify the relevant ones.
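For a rough sense of what that pitch amounts to in software terms, here is a minimal sketch of literature retrieval as keyword matching, using TF-IDF and cosine similarity. It is my own illustration under stated assumptions: the abstracts and the patient summary are invented placeholders, and this is not IBM’s method or any Watson API.

```python
# Minimal sketch of "find the papers relevant to this patient" as a retrieval
# problem. Purely illustrative; the abstracts below are invented placeholders
# and this is not how Watson for Oncology works.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = {
    "Paper A": "Randomized trial of chemotherapy regimens in stage III colon cancer.",
    "Paper B": "EGFR mutations and targeted therapy response in non-small-cell lung cancer.",
    "Paper C": "Dietary factors and long-term cardiovascular outcomes in older adults.",
}

patient_summary = "68-year-old with non-small-cell lung cancer, EGFR mutation positive"

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(abstracts.values())  # one row per abstract
query_vec = vectorizer.transform([patient_summary])        # same vocabulary

scores = cosine_similarity(query_vec, doc_matrix).ravel()
for title, score in sorted(zip(abstracts, scores), key=lambda pair: -pair[1]):
    print(f"{title}: {score:.2f}")
# Paper B ranks first on simple word overlap. The hard part Watson was sold on,
# judging which of thousands of real papers actually bears on a particular
# patient's care, is far beyond this kind of keyword matching.
```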

It did not turn out to be that simple, according to STAT News:

…three years after IBM began selling Watson to recommend the best cancer treatments to doctors around the world, a STAT investigation has found that the supercomputer isn’t living up to the lofty expectations IBM created for it. It is still struggling with the basic step of learning about different forms of cancer. Only a few dozen hospitals have adopted the system, which is a long way from IBM’s goal of establishing dominance in a multibillion-dollar market. And at foreign hospitals, physicians complained its advice is biased toward American patients and methods of care.

The failure is not mainly that of AI:

The interviews suggest that IBM, in its rush to bolster flagging revenue, unleashed a product without fully assessing the challenges of deploying it in hospitals globally. While it has emphatically marketed Watson for cancer care, IBM hasn’t published any scientific papers demonstrating how the technology affects physicians and patients. As a result, its flaws are getting exposed on the front lines of care by doctors and researchers who say that the system, while promising in some respects, remains undeveloped.
Casey Ross and Ike Swetlitz, “IBM pitched its Watson supercomputer as a revolution in cancer care. It’s nowhere close” at STAT

Part of the problem is that not many data scientists know much about medicine. After one high-profile failure,

At JP Morgan, [IBM Watson Health’s general manager Deborah] DiSanzo touted the unit’s 7,000 employees, including 1,000 data scientists. But do they know medicines? No, the above drug marketer told me, and for this reason, “We’ve had quite a few experiments fail with them.”

And what Watson doesn’t know about medicine could undermine these partnerships. To do a project properly with Watson requires embedding with the Watson team for about six months, my source explained, adding, “We’re not ready to do that as an organization.”

“Watson Health is still a really young company that doesn’t understand how to work with pharma,” the executive concluded. Mark Iskowitz, “Is a Crisis Brewing for Watson Health?” at MM&M

Odd that no one at IBM considered these possibilities before making such huge investments in time, money, career energy, and PR.

One problem that has dogged Watson has nothing to do with AI or medicine. The journalism around the introduction of projects like Watson is long on the Gee Whiz! An Electronic Brain! It Won at Jeopardy! And it is short, very short, on systematic inquiry as to outcomes versus goals:

A 2015 Washington Post story entitled “Watson’s next feat? Taking on cancer. IBM’s computer brain is training alongside doctors to do what they can’t,” mentioned some limitations of machine learning but took an overall positive tone. It described Watson as “a revolutionary approach to medicine and health care that is likely to have significant social, economic and political consequences.”

The story also said Watson would enable doctors “to find personalized treatments for every cancer patient by comparing disease and treatment histories, genetic data, scans and symptoms against the vast universe of medical knowledge.”

But cancer is not a trivia question. And the problems health care teams experience when treating it were in some ways misrepresented:

“IBM spun a story about how Watson could improve cancer treatment that was superficially plausible – there are thousands of research papers published every year and no doctor can read them all,” said David Howard, a faculty member in the Department of Health Policy and Management at Emory University, via email. “However, the problem is not that there is too much information, but rather there is too little. Only a handful of published articles are high-quality, randomized trials. In many cases, oncologists have to choose between drugs that have never been directly compared in a randomized trial.” Mary Chris Jaklevic, “MD Anderson Cancer Center’s IBM Watson project fails, and so did the journalism related to it” at Health News Review

Someday, perhaps, we will have better research protocols and AI systems that quickly provide health care teams with a summary of the current best information on treating various types of cancer. But the ongoing Watson hype was definitely the #1 hype of the year. The software that won dumbed-down Jeopardy questions is just not the simple fix for the vast and complex fight against cancer.

Note: Brendan Dixon recently predicted an AI winter, citing this kind of problem, in AI Winter Is Coming. Too Big to Fail Safe? He provides a warning example of how relying on artificial intelligence alone to make medical decisions from very complex calculations can be risky at best.

2018 AI Hype Countdown 2: AI Can Write Novels and Screenplays Better than the Pros! AI help, not hype: It turns out that meaning matters. So, fiction and song writers, please do keep writing. Don’t leave us with just this stuff in 2019.

2018 AI Hype Countdown 3: With Mind-reading AI, You Will Never Have Secrets Again! AI help, not hype: Did you read about the flap they had to cut out of a volunteer’s skull? With so many new developments in AI, the real story is usually far down in the fine print. And not a close match with the headlines.

2018 AI Hype Countdown 4: Making AI Look More Human Makes It More Human-like! AI help, not hype: Technicians can do a lot these days with automated lip-syncs and smiles but what’s behind them? This summer, some were simply agog over “Sophia, the First Robot Citizen” (“unsettling as it is awe-inspiring”)…

2018 AI Hype Countdown 5: AI Can Fight Hate Speech! AI help, not hype: AI can carry out its programmers’ biases and that’s all. Putting these kinds of decisions in the hands of software programs is not likely to promote vigorous and healthy debate.

2018 AI Hype Countdown 6: AI Can Even Find Loopholes in the Code! AI help, not hype: AI adopts a solution in an allowed set, maybe not the one you expected.

2018 AI Hype Countdown 7: Computers can develop creative solutions on their own! AI help, not hype: Programmers may be surprised by which solution, from a range they built in, comes out on top. Sometimes the results are unexpected and even surprising. But they follow directly from the program doing exactly what the programmer programmed it to do. It’s all program, no creativity.

2018 AI Hype Countdown 8: AI Just Needs a Bigger Truck! AI help, not hype: Some think we could create superintelligent computers that greatly exceed human intelligence just by adding more computing power. That reminds me of an old story…

2018 AI Hype Countdown 9: Will That Army Robot Squid Ever Be “Self-Aware”? The thrill of fear invites the reader to accept a metaphorical claim as a literal fact.

2018 AI Hype Countdown 10: Is AI really becoming “human-like”? AI help, not hype: Here’s #10 of our Top Ten AI hypes, flops, and spins of 2018. A headline from the UK Telegraph reads “DeepMind’s AlphaZero now showing human-like intuition in historical ‘turning point’ for AI.” Don’t worry if you missed it.

Robert J. Marks II, Ph.D., is Distinguished Professor of Engineering in the Department of Electrical & Computer Engineering at Baylor University.  Marks is the founding Director of the Walter Bradley Center for Natural & Artificial Intelligence and hosts the podcast Mind Matters. He is the Editor-in-Chief of BIO-Complexity and the former Editor-in-Chief of the IEEE Transactions on Neural Networks. He served as the first President of the IEEE Neural Networks Council, now the IEEE Computational Intelligence Society. He is a Fellow of the IEEE and a Fellow of the Optical Society of America. His latest book is Introduction to Evolutionary Informatics coauthored with William Dembski and Winston Ewert. A Christian, Marks served for 17 years as the faculty advisor for CRU at the University of Washington and currently is a faculty advisor at Baylor University for the student groups the American Scientific Affiliation and Oso Logos, a Christian apologetics group.

