
Insurance Company Gives Sour AI Promises

Data collection and discriminatory algorithms are turning Lemonade sour

An insurance company with the quirky name Lemonade was founded in 2015 and went public in 2020. In addition to raising hundreds of millions of dollars from eager investors, Lemonade quickly attracted more than a million customers with the premise that artificial intelligence (AI) algorithms can estimate risks accurately and that buying insurance and filing claims can be fun:

Lemonade is built on a digital substrata — we use bots and machine learning to make insurance instant, seamless, and delightful.

Adding to the delight are the friendly names of their bots, like AI Maya, AI Jim, and AI Cooper.

The company doesn’t explain how its AI works, but there is this head-scratching boast:

A typical homeowners policy form has 20-40 fields (name, address, bday…), so traditional insurers collect 20-40 data points per user.

AI Maya asks just 13 Q’s but collects over 1,600 data points, producing nuanced profiles of our users and remarkably predictive insights.

This mysterious claim is, frankly, a bit creepy. How do they get 1,600 data points from 13 questions? Is their app using our phones and computers to track everywhere we go and everything we do? The company says that it collects data from every customer interaction but, unless it is collecting trivia, that hardly amounts to 1,600 data points.
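
One guess at the arithmetic: if the app logs interaction telemetry around every answer, each question can generate many data points beyond the answer itself. The sketch below is purely speculative; every field name is a hypothetical illustration, not Lemonade’s actual schema.

```python
# Purely speculative sketch: how 13 answers might become hundreds of
# "data points" if interaction telemetry is logged with each answer.
# Every field name here is hypothetical, not Lemonade's actual schema.
import time

def record_answer(question_id: str, answer: str, session: dict) -> dict:
    """Wrap one chatbot answer in the telemetry an app could capture."""
    return {
        "question_id": question_id,
        "answer": answer,
        "timestamp": time.time(),
        "seconds_to_answer": session.get("dwell_seconds"),
        "edits_before_submit": session.get("edit_count"),
        "device_model": session.get("device_model"),
        "os_version": session.get("os_version"),
        "screen_resolution": session.get("screen"),
        "ip_geolocation": session.get("geo"),
        "battery_level": session.get("battery"),
    }

# 13 questions x 10 fields each is only 130 data points; getting to 1,600
# would require logging far more (every tap, scroll, pause, and sensor
# reading), which is exactly what makes the claim unsettling.
```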

How do they know that their algorithm is “remarkably predictive” if they have only been in business for a few years? Lemonade’s CEO and co-founder Daniel Schreiber has argued that “AI crushes humans at chess, for example, because it uses algorithms that no human could create, and none fully understand,” and that, in the same way, “Algorithms we can’t understand can make insurance fairer.”

An example he gives is not reassuring.

Let’s say I am Jewish (I am), and that part of my tradition involves lighting a bunch of candles throughout the year (it does). In our home we light candles every Friday night, every holiday eve, and we’ll burn through about two hundred candles over the 8 nights of Hanukkah. It would not be surprising if I, and others like me, represented a higher risk of fire than the national average. So, if the AI charges Jews, on average, more than non-Jews for fire insurance, is that unfairly discriminatory?

His answer:

It would definitely be a problem if being Jewish, per se, resulted in higher premiums whether or not you’re the candle-lighting kind of Jew. Not all Jews are avid candle lighters, and an algorithm that treats all Jews like the ‘average Jew,’ would be despicable.

So far, so good. His solution:

[An] algorithm that identifies people’s proclivity for candle lighting, and charges them more for the risk that this penchant actually represents, is entirely fair. The fact that such a fondness for candles is unevenly distributed in the population, and more highly concentrated among Jews, means that, on average, Jews will pay more. It does not mean that people are charged more for being Jewish.

Schreiber says that this is “a future we should embrace and prepare for” because it is “largely inevitable….Those who fail to embrace the precision underwriting and pricing…will ultimately be adversely-selected out of business.”

I don’t know if this future is inevitable, but I will withhold my embrace. A bot might be able to identify a reasonably accurate proxy for having at least one Jew in a household, but if the algorithm takes this Jewishness into account, it is discriminatory. Since algorithms can’t currently identify candle-lighting proclivities, what can they use other than proxies for being Jewish? Because Lemonade is using a black-box algorithm that “we can’t understand,” it may well be discriminatory, and we have no way of knowing for certain.
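
The worry is easy to demonstrate. In the simulation below (synthetic data and invented numbers, not Lemonade’s), a pricing model is never told who belongs to a protected group, yet a correlated proxy makes that group pay far more, and nothing visible from outside the black box distinguishes pricing the behavior from rediscovering group membership.

```python
# Illustrative simulation with synthetic data: the model never sees the
# protected attribute, yet a correlated proxy makes the group pay more.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 100_000

# Protected group membership (never given to the model).
protected = rng.random(n) < 0.02

# Proxy feature correlated with membership, e.g. candle purchases
# scraped from credit-card statements (invented numbers).
candle_purchases = np.where(protected, rng.poisson(20, n), rng.poisson(2, n))

# Actual fire-claim costs driven by candle use plus noise.
claims = 10.0 * candle_purchases + rng.normal(0, 5, n)

model = LinearRegression().fit(candle_purchases.reshape(-1, 1), claims)
premium = model.predict(candle_purchases.reshape(-1, 1))

print(f"average premium, protected group: {premium[protected].mean():.0f}")
print(f"average premium, everyone else:   {premium[~protected].mean():.0f}")
# Roughly a 10x gap, with group membership nowhere in the inputs. From
# outside the black box, there is no way to tell whether the model priced
# the behavior or simply rediscovered the group.
```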

Looking ahead to this predicted inevitable future, how would an algorithm move beyond Jewish proxies to collecting data on a household’s proclivity for candle lighting? Would it use customer smartphone cameras to record what goes on inside their homes? Would it ransack customer credit card statements for evidence of candle-buying — which might lead people to pay cash for candles the same way that some people pay cash for illegal drugs?

We are mired in a despicable place where Lemonade’s black box algorithm may well be discriminatory — not just against Jews — and there is no attractive alternative beyond bulldozing our privacy. 

In May 2021, Lemonade posted a problematic (and later deleted) thread to Twitter:

When a user files a claim, they record a video on their phone and explain what happened. Our AI carefully analyzes these videos for signs of fraud. [AI Jim] can pick up non-verbal cues that traditional insurers can’t, since they don’t use a digital claims process. This ultimately helps us lower our loss ratios (aka how much we pay out in claims vs. how much we take in).

Are claims really being validated by non-verbal cues (like the color of a person’s skin) that are being processed by black-box AI algorithms that the company does not understand?

There was an understandable media uproar since AI algorithms for analyzing people’s faces and emotions are notoriously unreliable and biased. Lemonade had to backtrack. A spokesperson said that Lemonade was only using facial recognition software for identifying people who file multiple claims using multiple names. But if Lemonade is using image-processing software, there is no way of knowing what their black-box algorithm is doing with these data.

Lemonade then tried to divert attention from image-processing software by claiming that AI Jim is not really AI, but just an algorithm for recording customer information that is checked against preset rules.

It’s no secret that we automate claim handling. But the decline and approve actions are not done by AI, as stated in the blog post. [Lemonade will] never let AI, in terms of our artificial intelligence, determine whether to auto reject a claim. We will let AI Jim, the chatbot you’re speaking with, reject that based on rules.

The lemonade is smelling a little sour at this point. In a pre-IPO filing with the SEC, Lemonade stated that “in approximately a third of cases [AI Jim] can manage the entire claim through resolution without any human involvement.” Lemonade has also boasted that AI Jim uses “18 anti-fraud algorithms” to assess claims. Are these 18 algorithms not AI, but just checkboxes?
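
For contrast, here is roughly what rejecting a claim “based on rules” might look like; the specific rules below are invented for illustration and are not Lemonade’s.

```python
# Hypothetical "preset rules" check of the kind Lemonade describes;
# the specific rules are invented for illustration.
def auto_reject(claim: dict) -> bool:
    """Return True if any hard-coded rule flags the claim for rejection."""
    rules = [
        not claim["policy_active"],                 # lapsed policy
        claim["amount"] > claim["coverage_limit"],  # exceeds coverage
        claim["days_after_loss"] > 90,              # filed too late
    ]
    return any(rules)

print(auto_reject({"policy_active": True, "amount": 5_000,
                   "coverage_limit": 10_000, "days_after_loss": 12}))  # False
```

A checklist like this is at least transparent and auditable, which is precisely what cannot be said of 18 unexplained anti-fraud algorithms.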

Overall, it makes sense that an insurance company can pare costs by having fewer sales agents and office buildings. However, it seems a stretch to say that buying insurance and filing claims can be delightful. Insurance purchases should involve some thoughtful consideration of the coverage, deductibles, price, and so on, not light-headed giddiness. Nor are people likely to be delighted by a goofy app after they have been in an automobile accident, had their home burn down, or suffered any other substantial loss that warrants a claim. I would be pleased — not delighted — if the process were simple and relatively painless. Lemonade should be given credit for trying really hard to do this.

On the other hand, Lemonade seems to fit a common pattern when it comes to AI: put an AI label on a business and hope that investors and customers are impressed. Much of what the Lemonade bots do is apparently just helping customers walk through routine forms. If the bots really are collecting 1,600 data points from each customer interaction and analyzing these data with a black-box data-mining algorithm, then all of the many cautions about data mining apply. Specifically, its algorithms may well be discriminatory, and the boast that they are “remarkably predictive” is most likely based on how well they predict the past, which is a fundamentally unreliable guide to how well they will predict the future.
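
That last caution can be made concrete. In the sketch below, a model fit to pure noise looks “remarkably predictive” on the data used to build it and worthless on new data; the numbers are synthetic, but the phenomenon is generic to black-box data mining with many data points per customer.

```python
# Sketch with synthetic data: a model fit to pure noise looks
# "remarkably predictive" on the past and is worthless on the future.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_customers, n_data_points = 200, 150   # many features per customer

X_past = rng.normal(size=(n_customers, n_data_points))
y_past = rng.normal(size=n_customers)   # outcomes are pure noise

model = LinearRegression().fit(X_past, y_past)
print(f"R^2 on the past:   {model.score(X_past, y_past):.2f}")   # high

X_future = rng.normal(size=(n_customers, n_data_points))
y_future = rng.normal(size=n_customers)
print(f"R^2 on the future: {model.score(X_future, y_future):.2f}")  # ~0 or worse
```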

Lemonade might turn into lemons.


Gary N. Smith

Senior Fellow, Walter Bradley Center for Natural and Artificial Intelligence
Gary N. Smith is the Fletcher Jones Professor of Economics at Pomona College. His widely cited research on financial markets, statistical reasoning, and artificial intelligence often involves stock market anomalies, statistical fallacies, and the misuse of data. He is the author of dozens of research articles and 16 books, most recently The Power of Modern Value Investing: Beyond Indexing, Algos, and Alpha, co-authored with Margaret Smith (Palgrave Macmillan, 2023).
