AI health coaching: Risk vs. benefit
As health care analyst Katie Suleta points out, familiar problems like bias and hallucination could impact the health advice the AI coach gives.

As the popularity of health coaching continues to grow (it's already a $7 billion industry), AI health coaches, courtesy of Sam Altman's OpenAI and Arianna Huffington's Thrive Global, look poised to be the Next Big Thing:
Although digital health coaches are becoming more popular, this new venture aims to enable a hyper-personalized approach by learning daily routines to better accommodate healthy lifestyle changes. For example, the authors explain that the AI coach might recommend healthy changes congruent with a user’s existing tasks for the day: “go on a 10-minute walk with your child after you pick them up from school at 3:15 p.m.” or “start your wind-down routine at 10 p.m. since you have to get up at 6 a.m. the next morning to make your flight.”
Sai Balasubramanian, “OpenAI And Thrive’s AI Health Coach Is A Bold Step Toward Hyper-personalized Healthcare,” Forbes, July 19, 2024
Sure. An omnipresent digital nag is bound to put more zing into everyone’s life.
That presenter is personality plus, eh? Health coach, meet Tales from the Crypt. On the other hand, many people are prepared to suffer to get healthy… so yes, there is a market anyway.
Meanwhile, health care analyst Katie Suleta offers some timely thoughts about the AI health coach trend at American Council on Science and Health:
Like health coaching, AI can be a fantastic tool when used properly and responsibly. This involves understanding that AI is prone to bias and errors (hallucinations). As such, it’s essential to consider what you really know about the algorithm and what it returns to you. For example, what do you know about how the algorithm works? Where does it get its information from? What are the limitations of the training dataset? Also, consider the results it returns for you: have you double and triple-checked the responses? Are you able to find those same responses from reputable sources?
Katie Suleta, “AI, Health Coaches, and AI Health Coaches … Oh My!,” ACSH, July 7, 2024
Indeed. Anyone who followed the saga of the Soviet bears in space will know that generative AI is prone to hallucination.
Generative AI hallucinates financial concepts too. That’s why business prof Gary Smith doubts, based on his research, that AI financial planning will replace the human planner in the bank’s corner office.
Suleta offers a specific example from health coaching:
Now consider a more nuanced question like “Is sunscreen more likely to cause cancer than the sun?” The answer AI provides will solely depend upon where it gets its information and the weight it gives to specific types of sources. That depends upon the programmer, the algorithm they’ve designed, what access to sources and information the algorithm has, and what sources are specifically targeted. The algorithm will reflect the reality of those programming it: their beliefs, knowledge, and ability to vet sources and account for new information.
Suleta, “… Oh My!”
Smith points out that such a system is also prone to manipulation: “As LLMs flood the internet with intentional and unintentional fabrications, LLMs will increasingly be trained on these falsehoods and be increasingly prone to regurgitate them.”
Suleta concludes that “there are some critical unanswered questions” about AI health coaching. It may help a given individual; it may also mislead. Maybe it comes down to the traditional advice about the internet in general: If it sounds unbelievable, don’t believe it, and when in doubt, doubt.
American Council on Science and Health is a public education group that works to promote evidence-based science.
You may also wish to read: LLMs can’t be trusted for financial advice. The LLM responses demonstrated that they do not have the common sense needed to recognize when their answers are obviously wrong. It takes an experienced financial planner to distinguish between good and bad advice, so clients may as well skip the LLMs and go to the knowledgeable human. (Gary Smith)