AI Can Help Spot Cancers—But It’s No Magic Wand
When I spoke last month about how AI can help with cancer diagnoses, I failed to appreciate some of the complexities of medical diagnosis

Last month, I suggested that Deep Learning-based AI could be useful in radiology. Radiologists evaluate many images daily, often looking for small anomalies indicative of disease or cancer. Our eyes and minds tire, increasing the chance that something is missed. An AI system tuned to favor false positives over false negatives could help reduce the workload and raise the odds of catching conditions early.
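To make that "favor false positives" point concrete, here is a minimal sketch of how a screening threshold might work. The scores, scan names, and threshold value are hypothetical, invented for illustration; no real system or dataset is implied.

```python
# A minimal sketch of biasing a screening tool toward false positives.
# All values below are hypothetical, for illustration only.

# Suppose a trained image classifier returns a suspicion score in [0, 1]
# for each scan (1.0 = almost certainly abnormal).
suspicion_scores = {
    "scan_001": 0.08,
    "scan_002": 0.31,
    "scan_003": 0.72,
    "scan_004": 0.19,
}

# A balanced classifier might use a 0.5 cutoff. A screening tool instead uses
# a much lower cutoff, accepting extra false positives so fewer true
# abnormalities slip through; everything flagged still goes to a radiologist.
SCREENING_THRESHOLD = 0.25  # hypothetical value, chosen from validation data

flagged_for_review = [
    scan for scan, score in suspicion_scores.items()
    if score >= SCREENING_THRESHOLD
]

print(flagged_for_review)  # ['scan_002', 'scan_003'] -- prioritized for human reading
```

The point of the sketch is only the trade-off: lowering the threshold catches more borderline cases at the cost of sending more normal scans to the radiologist, which is the workload-versus-missed-findings balance the article describes.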
I still believe that. But I failed to appreciate all that goes into a radiologist’s role. I was making the mistake that many AI boosters (whom I’ve criticized) make: when we reduce a problem to something a computer can process, we can all too easily leave out the crux of the problem.
Sai Balasubramanian, who has both legal and medical training, knows much more than I do about radiology. While he sees a place for AI-assisted diagnosis, he argues that interpreting images involves much more than pattern matching:
While AI systems may be able to detect routine medical problems based on pre-defined criteria, there is significant value provided by a trained radiologist that software simply cannot replace. This includes the clinical correlation of images with the physical state of the patient, qualitative assessments of past images with current images to determine progression of disease, and ultimately the most human aspect of medicine, working with other healthcare teams to make collaborative care decisions.
Sai Balasubramanian, “Artificial Intelligence Is Not Ready For The Intricacies Of Radiology” at Forbes
Radiology images come from a patient with a family history and other possible conditions. Further, as he points out, any one image is a snapshot in time, a brief part of the whole story. And it’s the whole story that matters, not a single image, perhaps taken out of context.
Balasubramanian goes on to note other challenges in deploying AI in radiology, including how to obtain the data needed to train the systems without violating patient privacy. Further, the training data must cover a sufficiently broad swath of the global population to avoid a frequent source of bias: the inadvertent use of samples drawn mostly from one group.
Still, even with these qualifications, Balasubramanian does see AI as augmenting a physician’s workflow, “especially given increasing radiology demands in clinical medicine.”
And that’s the key word: augmenting. AI systems can be useful tools, like every other tool we build, to assist us.
Even then, we must not dumb down the problems to the point that the solution no longer matters so long as we can say we have found one. Like driving, hiring, and so much else that we do, medical diagnosis is not an on-off, true-false affair. It entails judgment, some of which only the conscious human mind can supply.
Further reading:
How AI can help us fight cancer: Breast cancer is an excellent example of how AI can speed up early detection. (Brendan Dixon)
Why was IBM Watson a flop in medicine? Robert J. Marks and Gary S. Smith discuss how the AI couldn’t identify which information in the tsunami of medical literature actually mattered.
and
Can AI combat misleading medical research? No, because AI doesn’t address the “Texas Sharpshooter Fallacies” that produce the bad data.