Lemonade: This $5 billion insurance company loves to talk about its AI. It's all a mess now


However, less than a year after its public debut, the company, now valued at $5 billion, has found itself in the midst of a PR controversy over the technology underlying its services.

In a Twitter thread and a blog post on Wednesday, Lemonade explained why it deleted what it called a "terrible thread" of tweets posted on Monday. Those now-deleted tweets said, among other things, that the company's AI analyzes videos that users submit when filing insurance claims for signs of fraud, picking up "non-verbal cues that traditional insurers can't, since they don't use a digital claims process."

The deleted tweets, which can still be viewed via the Internet Archive's Wayback Machine, sparked a storm of outrage on Twitter. Some users were alarmed by what they saw as a "dystopian" use of technology, since the company suggested its customers' insurance claims could be judged by AI based on unexplained factors gleaned from their videos. Others dismissed the company's tweets as "nonsense."

"As an educator who collects examples of AI snake oil to alert students to the harmful technologies out there, thank you for your outstanding service," Arvind Narayanan, an associate professor of computer science at Princeton University, tweeted on Tuesday in response to a Lemonade tweet about "non-verbal cues."

Confusion over how the company handles insurance claims, caused by its choice of words, "led to a spread of falsehoods and incorrect assumptions, so we're writing this to clarify and unequivocally confirm that our users aren't treated differently based on their appearance, behavior, or any personal/physical characteristic," Lemonade wrote in its blog post on Wednesday.

Lemonade's initially confusing messages, and the public reaction to them, serve as a warning to the growing number of companies marketing themselves with AI buzzwords. The episode also highlights the challenges posed by the technology: while AI can act as a selling point, for instance by speeding up a typically slow process like getting insured or filing a claim, it is also a black box. It's not always clear why or how it does what it does, or even when it is being used to make a decision.

In its blog post, Lemonade wrote that the phrase "non-verbal cues" in its now-deleted tweets was "a bad choice of words." Rather, it said, it meant to refer to the facial recognition technology it relies on to flag insurance claims that one person files under more than one identity; claims that are flagged are passed on to human reviewers, the company noted.

The explanation is similar to a process the company described in a January 2020 blog post, in which Lemonade shed some light on how its AI Jim chatbot flagged the efforts of one person who used different accounts and disguises in apparent attempts to file fraudulent claims. While the company did not disclose in that post whether it used facial recognition technology in those cases, Lemonade spokeswoman Yael Wissner-Levy confirmed to CNN Business this week that the technology was used then to detect fraud.

While facial recognition technology is increasingly widespread, it remains controversial. The technology has been shown to be less accurate when identifying people of color. At least several Black men have been wrongfully arrested after false facial-recognition matches.

Lemonade tweeted Wednesday that it does not use, and is not trying to build, AI "that uses physical or personal features to deny claims (phrenology/physiognomy)," and that it does not consider factors such as a person's background, gender, or physical characteristics when evaluating claims. Lemonade also said it never lets AI automatically reject claims.

But in Lemonade's IPO registration, filed with the Securities and Exchange Commission last June, the company wrote that AI Jim "handles the entire claim through resolution in approximately a third of cases, paying the claimant or declining the claim without human intervention."

Wissner-Levy told CNN Business that AI Jim is the "branded name" the company uses to talk about its claims automation, and that not everything AI Jim does relies on AI. While AI Jim uses the technology for some actions, such as detecting fraud with facial recognition software, it uses "simple automation" — essentially, predefined rules — for other tasks, such as determining whether a customer has an active insurance policy or whether the amount of a claim is below their deductible.
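To illustrate the distinction Wissner-Levy is drawing, "simple automation" of this kind can be sketched as a handful of fixed, human-written rules rather than a learned model. The function, field names, and thresholds below are hypothetical examples for illustration only, not Lemonade's actual system.

```python
# Hypothetical sketch of rule-based claim triage ("simple automation"):
# every decision comes from a predefined rule, not a trained model.

def triage_claim(claim: dict) -> str:
    """Route a claim using fixed rules; anything unclear goes to a human."""
    # Rule 1: the customer must hold an active policy.
    if not claim.get("policy_active", False):
        return "decline"
    # Rule 2: claims at or below the deductible pay out nothing.
    if claim.get("amount", 0) <= claim.get("deductible", 0):
        return "decline"
    # Rule 3 (illustrative threshold): small, clear-cut claims are paid automatically.
    if claim["amount"] <= 500:
        return "pay"
    # Everything else is escalated to a human reviewer.
    return "human_review"

# Example: active policy, $200 claim against a $50 deductible.
print(triage_claim({"policy_active": True, "amount": 200, "deductible": 50}))  # pay
```

Rules like these are transparent by construction — anyone can read off exactly why a claim was declined — which is the contrast with a machine-learned model, whose decision factors can be opaque even to its operators.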

"It's no secret that we automate claims processing. But the decisions to reject or approve claims are not made by AI, as stated in the blog post," she said.

When asked how customers are supposed to understand the difference between AI and simple automation when both are carried out under a product with AI in its name, Wissner-Levy said that although the chatbot is named AI Jim, the company will "never let AI, in terms of our artificial intelligence, determine whether to auto-reject a claim."

"We will allow AI Jim, the chatbot you're talking to, to reject that based on rules," she added.

When asked whether the AI Jim branding is confusing, Wissner-Levy replied, "In this context, I think it was." She said this week was the first time the company had heard that the name is confusing or troubling to customers.


