An Epic Failure: Overstated AI Claims in Medicine
Independent investigations are finding that AI algorithms used in hospitals are not all they claim to be.

Epic Systems, America’s largest electronic health records company, maintains medical information for 180 million U.S. patients (56% of the population). Using the slogan, “with the patient at the heart,” it has a portfolio of 20 proprietary artificial intelligence (AI) algorithms designed to identify different illnesses and predict the length of hospital stays.
As with many proprietary algorithms in medicine and elsewhere, users have no way of knowing whether Epic’s programs are reliable or just another marketing ploy. The details inside the black boxes are secret, and independent tests are scarce.
One of the most important Epic algorithms is for predicting sepsis, the leading cause of death in hospitals. Sepsis occurs when the human body overreacts to an infection and sends chemicals into the bloodstream that can cause tissue damage and organ failure. Early detection can be life-saving, but sepsis is hard to detect early on.
Epic claims that the predictions made by its Epic Sepsis Model (ESM) are 76 percent to 83 percent accurate, but there have been no credible independent tests of any of its algorithms — until now. In a just-published article in JAMA Internal Medicine, a team examined the hospital records of 38,455 patients at Michigan Medicine (the University of Michigan health system), of whom 2,552 (6.6 percent) experienced sepsis. The results are in the table below. “Epic +” means that ESM generated sepsis alerts; “Epic –” means it did not.
|           | Epic + | Epic – | Total  |
|-----------|--------|--------|--------|
| Sepsis    | 843    | 1,709  | 2,552  |
| No Sepsis | 6,128  | 29,775 | 35,903 |
| Total     | 6,971  | 31,484 | 38,455 |
There are two big takeaways:
a. Of the 2,552 patients with sepsis, ESM generated sepsis alerts for only 843 (33 percent). It missed 67 percent of the people with sepsis.
b. Of the 6,971 ESM sepsis alerts, only 843 (12 percent) were correct; 88 percent of the ESM sepsis alerts were false alarms, creating what the authors called “a large burden of alert fatigue.”
To reiterate: ESM failed to identify 67 percent of the patients with sepsis, and 88 percent of the patients who triggered ESM sepsis alerts did not have sepsis.
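For readers who want to check the arithmetic, here is a minimal Python sketch that recomputes both figures from the 2 × 2 table above. It is my own illustration, not code from the JAMA study, and the variable names are mine:

```python
# Counts taken directly from the published 2x2 table (Michigan Medicine data).
true_positives = 843     # sepsis patients flagged by ESM ("Epic +")
false_negatives = 1709   # sepsis patients ESM missed ("Epic -")
false_positives = 6128   # non-sepsis patients flagged by ESM
true_negatives = 29775   # non-sepsis patients ESM did not flag

# Sensitivity (recall): the share of sepsis patients the model caught.
sensitivity = true_positives / (true_positives + false_negatives)

# Positive predictive value (precision): the share of alerts that were correct.
ppv = true_positives / (true_positives + false_positives)

print(f"Sensitivity: {sensitivity:.0%}")              # ~33%
print(f"Missed sepsis cases: {1 - sensitivity:.0%}")  # ~67%
print(f"Precision (PPV): {ppv:.0%}")                  # ~12%
print(f"False alarms: {1 - ppv:.0%}")                 # ~88%
```

In standard terms, the 33 percent figure is the model’s sensitivity and the 12 percent figure is its positive predictive value. A screening tool can fail on either measure independently; ESM fails on both.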
A recent investigation by STAT, a health-oriented news site affiliated with the Boston Globe, came to a similar conclusion. Its article, titled “Epic’s AI algorithms, shielded from scrutiny by a corporate firewall, are delivering inaccurate information on seriously ill patients,” pulled few punches:
Several artificial intelligence algorithms developed by Epic Systems, the nation’s largest electronic health record vendor, are delivering inaccurate or irrelevant information to hospitals about the care of seriously ill patients, contrasting sharply with the company’s published claims.
[The findings] paint the picture of a company whose business goals — and desire to preserve its market dominance — are clashing with the need for careful, independent review of algorithms before they are used in the care of millions of patients.
Casey Ross, “Epic’s AI algorithms, shielded from scrutiny by a corporate firewall, are delivering inaccurate information on seriously ill patients,” at STAT News
Why have hundreds of hospitals adopted ESM? Part of the explanation is surely that many people believe the AI hype: computers are smarter than we are and we should trust them. The struggles of IBM’s Watson Health and of AI in radiology say otherwise. The hype has been nourished here by the scarcity, until recently, of independent tests.
In addition, the STAT investigation found that Epic has been paying hospitals up to $1 million to use its algorithms. Perhaps the payments were for bragging rights? Perhaps the payments were to get a foot firmly in the hospital door, so that Epic could start charging licensing fees after hospitals commit to using Epic algorithms? What is certain is that the payments create a conflict of interest. As Glenn Cohen, Faculty Director of Harvard University’s Petrie-Flom Center for Health Law Policy, Biotechnology & Bioethics, observed, “It would be a terrible world where Epic is giving people a million dollars, and the end result is the patients’ health gets worse.”
This Epic failure is yet another example of why we shouldn’t trust AI algorithms that we don’t understand, particularly when their claims have not been tested independently.