Pat Baird, regulatory head of global software standards at Philips, recently wrote an article titled "Can Artificial Intelligence 'Rehumanize' Healthcare?" His thesis is that "By lowering administrative burden, AI can increase caregivers' time spent actually caring for patients." I will argue that this vision for AI's contribution to healthcare delivery will not come to pass, due to some readily observable forces.
A place to begin the analysis is with the funding source for AI in healthcare. AI is bought or developed by healthcare delivery organizations. These organizations follow a business plan, and if AI does not provide a business benefit, they will not pay for it. We can conclude that AI in healthcare will be designed and used to advance the business case of the organization. If AI systems give nurses and therapists more time, then administrators will recognize that they can do the same job with fewer nurses and therapists. Does anyone think that a good administrator would allow nurses to have significantly more time on their hands? The time AI frees up could be spent with patients, or the organization could simply reduce the number of nurses.
As I have had the opportunity to observe the staff in advanced research hospitals, they are already largely an extension of the computer system. They close the gap between the computer and the patient. A nurse enters a patient's room, greets the patient, and logs into the computer system. The appropriate orders are accessed and followed. The directed medications are administered. Vital signs are taken and entered. Therapies are administered and recorded. As AI becomes more sophisticated and able to perform more of these functions, the result is predictable: the nurse or therapist is no longer needed, and the labor cost savings will be realized by the organization.
A more insidious impact of AI is the way it will change the role staff play. As AI-driven systems become increasingly capable, the judgment and cognitive contribution of staff will diminish. In the extreme, the staff's brains will no longer be needed; their only useful attributes will be from the neck down.
As AI gains credibility, the credibility of staff will diminish; their opinions will count for less against the AI's directives. If a staff member thinks one thing should be done for a patient but the AI system recommends another, who wins? As AI becomes more capable, its judgment will dominate. Staff may occasionally have a different view of how best to care for a patient, but the decision on which opinion to follow will increasingly go to the AI system.
A corollary of this trend is that staff will need less training and experience. People with lower skill levels can be used because the expertise of the AI systems will compensate for their deficiencies. For the enterprise, lower skill levels mean lower labor costs, a real plus for the organization's profitability. This is also a strong incentive for management to overestimate the capability of AI: if they esteem AI-enabled systems highly, then they are perfectly justified in capturing the cost savings of using lower-skilled staff. That is a powerful motivator.
The future of healthcare will not be driven by evil people doing diabolical things. Good people in key roles will pursue valid objectives, using the tools and processes at their disposal to guide them. It is when these tools and processes are limited, and all of them are limited, that bad things happen.
The contribution to patient outcomes from factors that cannot be quantified is significant. Many lines of evidence show that individualized care and "the personal touch" of healthcare providers have real impacts on patient outcomes. However, if these factors cannot be quantified, they are unlikely to be included in decision-making, particularly in larger organizations. There is a "group think" ethic that encourages trust in established decision-making processes. No matter how much care has been put into developing those processes, they are imperfect. All of our tools and processes have limits and flaws that can lead to bad outcomes, especially when they encounter circumstances that expose those limits.
Economics will determine how AI comes to be used in healthcare. If AI-enabled systems deliver healthcare of better quality at lower cost, they will be adopted. The key to that statement is the measures used. If, by the metrics used, AI-enabled systems are as good as or better than staff-administered processes, they will be used. This depends on which metrics can be quantified, and then on which subset of those will actually be used in decision-making. The idea that AI will "rehumanize" healthcare will only become a reality if "humanization" can be quantified and then shown to have sufficient value to outcomes that its cost is justified. It seems doubtful that the degree of "personal touch" will be, or even can be, quantified.
The most probable result is that AI systems will be increasingly used and staffing levels will be reduced, not only in total number but also in level of training and experience. This will happen as organizations seek to optimize their profitability while delivering, by the measures they use, what they judge to be excellent healthcare.
You may also wish to read: What Is AI Doing To Me? AI's Manufactured World Lacks Value. The best way to defend ourselves from AI's influence is to return to the abstract ideas of virtue, value, and goodness. AI can lead us to a simple view of the world, but it will not be an accurate view. The real world is complex and contradictory. (Stephen Berger)