"There was a sizable discussion on Twitter around a poorly worded tweet of ours (mostly the term 'non-verbal cues'), which led to confusion as to how we use customer videos to process claims. There were also questions about whether we use approaches like emotion recognition (we don't), and whether AI is used to automatically decline claims (never!)," said Lemonade in a May 26 blog post. (Photo: Gabby Jones/Bloomberg)
(Bloomberg) — Lemonade, an internet-based insurer, said that it doesn't use artificial intelligence to deny claims or coverage based on a person's characteristics after a "poorly worded tweet" drew heated criticism online.
Clients who submit claims are asked to make videos explaining what happened. In its tweet, the New York-based company said those recordings are analyzed for fraud by artificial intelligence (AI) to "pick up non-verbal cues that traditional insurers can't." That led to an outpouring of tweets accusing the company of discrimination based on race and other traits.
AI "has been shown to have biases across different communities," Lemonade said Wednesday, May 26, in a blog post published after it deleted the initial tweet. The company added that "actions such as rejecting claims or canceling policies" are never performed by AI.
So, we deleted this awful thread which caused more confusion than anything else.
TL;DR: We do not use, and we're not trying to build AI that uses physical or personal features to deny claims (phrenology/physiognomy) (1/4)
— Lemonade (@Lemonade_Inc) May 26, 2021
The insurer continued by stating: "The term non-verbal cues was a bad choice of words to describe the facial recognition technology we're using to flag claims submitted by the same person under different identities. These flagged claims then get reviewed by our human investigators. This confusion led to a spread of falsehoods and incorrect assumptions, so we're writing this to clarify and unequivocally confirm that our users aren't treated differently based on their appearance, behavior, or any personal/physical characteristic."
Lemonade's error comes as technology companies are under scrutiny for their treatment of women and minorities. Artificial intelligence has been criticized by those who say it's subject to biases introduced by the people who program and implement it.
Copyright 2021 Bloomberg. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.