However, less than a year after its public debut, the company, now valued at $5 billion, has found itself in the midst of a PR controversy over the technology underlying its services.
Confusion over the company's word choice in describing how it handles insurance claims "has led to the spread of falsehoods and incorrect assumptions, so we're writing this to clarify and unequivocally confirm that our users aren't treated differently based on their appearance, behavior, or any personal/physical characteristic," Lemonade wrote on its blog on Wednesday.
Lemonade’s initially confusing messaging, and the public reaction to it, serve as a warning to the growing number of companies marketing themselves with AI buzzwords. It also highlights the challenges associated with this technology: while AI can act as a selling point, for example by speeding up a typically tedious process like getting insured or filing a claim, it is also a black box. It is not always clear why or how it does what it does, or even when it is being used to make a decision.
In a blog post, Lemonade wrote that the phrase “non-verbal cues” in its now-deleted tweets was “a poor choice of words.” Rather, it said, it was referring to its use of facial recognition technology to flag insurance claims that one person files under more than one identity; flagged claims are passed on to human reviewers, the company noted.
Wissner-Levy told CNN Business that AI Jim is the “brand name” the company uses to talk about its claims automation, and that not everything AI Jim does relies on AI. While AI Jim uses that technology for some actions, such as detecting fraud with facial recognition software, it uses “simple automation” – essentially predefined rules – for other tasks, such as determining whether a customer has an active insurance policy or whether their claim amount is below their deductible.
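The distinction between "simple automation" and AI is essentially the difference between fixed, human-written rules and a learned model. A minimal sketch of the rule-based side might look like the following; the class, field, and function names here are hypothetical illustrations of predefined rules in general, not Lemonade's actual system.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    policy_active: bool   # does the customer have an active policy?
    amount: float         # claimed amount in dollars
    deductible: float     # the policy's deductible in dollars

def triage(claim: Claim) -> str:
    """Apply fixed, predefined rules; anything not decided by a rule
    is escalated rather than judged by a model."""
    if not claim.policy_active:
        return "reject"        # no active policy: a simple yes/no check
    if claim.amount <= claim.deductible:
        return "reject"        # claim at or below the deductible pays nothing
    return "human_review"      # everything else goes to a person

print(triage(Claim(policy_active=True, amount=50.0, deductible=250.0)))   # → reject
print(triage(Claim(policy_active=True, amount=900.0, deductible=250.0)))  # → human_review
```

No statistics or training data are involved: every decision path is a condition an engineer wrote down in advance, which is what distinguishes this kind of automation from the machine-learning sense of "AI."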
“It’s no secret that we automate claims processing. But the rejection and approval decisions are not made by AI, as the blog post states,” she said.
When asked how customers are supposed to understand the difference between AI and simple automation when both operate within a product with AI in its name, Wissner-Levy said that although the chatbot is named AI Jim, the company will “never let AI, in the sense of our artificial intelligence, determine whether to automatically reject a claim.”
“We will allow AI Jim, the chatbot you are talking to, to reject a claim based on rules,” she added.
When asked whether the AI Jim branding was confusing, Wissner-Levy replied, “In this context, I think it was.” She said this week was the first time the company had heard that the name was confusing or troubling to customers.