Benevolent Bots is Lemonade’s new podcast series about the overlap of a few issues that are always on our minds: Artificial intelligence, insurance, and ethics.

Co-hosted by Lemonade CEO Daniel Schreiber and our in-house AI Ethics and Fairness Advisor Tulsee Doshi, Benevolent Bots takes a deep dive into big questions about technology and responsibility. Our debut episode features a conversation with Meg Mitchell, a research scientist focused on algorithmic bias and fairness in machine learning.

“Many companies and products go through this mental walkthrough: When can I use this technology? When should I? How do I determine whether the harms outweigh the benefits, and what are the guardrails you can put in place to prevent those harms?”

Tulsee Doshi, Lemonade’s AI Ethics and Fairness Advisor

Now, AI ethics is a nuanced field—and one that’s very much not suited to quick soundbites. Topics covered in this episode include a bite-sized history of the insurance industry; facial recognition; the possibility of high-tech “lie detector” functions on your smartphone (and the resulting ethical headaches); and much more.

But if you’re looking to quickly glean some insights before fully committing to the episode, we’ve summarized a few key points below (edited and condensed for length and clarity).

There’s no one set of “ethics”

“A common misconception is that AI can be ‘ethical,’ or computer science can have ‘ethical’ as a goal,” Meg Mitchell explains. “When we’re talking about ethics and ethical AI, we’re talking about approaching things through the lens of human values and thinking through different kinds of perspectives.

“It’s important to realize ethics doesn’t give you an answer. You can’t say ‘this is an ethical thing.’ But you can say, ‘I will look at this through a bunch of different lenses and think about it from the perspective of virtue ethics—which is something like what will help people the most—or consequentialism,’ these different ethical frameworks that can help you prioritize amongst different kinds of values.

“That means that you have to have some set of values that you’re working with. And that’s where a lot of the work of ethical AI really comes in: You’re trying to figure out the values of the company you’re working for, the values that you, as a person, can bring to the table. How do those interact to inform what we do?”

AI and insurance aren’t strange bedfellows

“If you play the word association game with ‘insurance,’ AI doesn’t make the top 10, probably not the top hundred words that you associate with insurance,” chimes in Daniel Schreiber. “But as we came into this space, we realized that it really ought to. Very early on, we defined Lemonade as a company that is built upon AI and behavioral economics.

“Using data in order to make predictions is the very core of insurance. I’d even say that probability theory and insurance co-evolved. Go back to the 17th century: [think of] Jacob Bernoulli and the law of large numbers, Pascal and Fermat and their formulation of probability theory. And this is the time and the place where the modern insurance company was born—Lloyd’s of London—and not that long afterwards, Benjamin Franklin’s [foray into insurance].

“The actual product in insurance is probability theory. When you strip away everything else and take out the agents and the TV commercials and the geckos, what you’re left with is probability theory.

“Insurance is the business of predicting outcomes. That raises tremendous ethical challenges. The legal definition of what insurance companies are meant to do is use historical data in order to generate an expected loss, and to charge like risks like amounts. So we are at the epicenter of the question of how to use data and AI in order to do the right ethical thing by our customers.”
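
To make that “expected loss” idea concrete, here is a minimal, purely illustrative sketch. It is not Lemonade’s actual pricing model; the claim probability, claim severity, and loading factor below are made-up assumptions.

```python
# Illustrative only: "charge like risks like amounts."
# Expected loss = probability of a claim x average claim severity;
# policyholders with the same risk profile pay the same premium.
# All numbers are invented; this is not Lemonade's pricing model.

def expected_loss(claim_probability: float, avg_claim_severity: float) -> float:
    """Expected annual loss for a single policy."""
    return claim_probability * avg_claim_severity


def premium(claim_probability: float, avg_claim_severity: float,
            loading: float = 0.25) -> float:
    """Expected loss plus an assumed loading for expenses and margin."""
    return expected_loss(claim_probability, avg_claim_severity) * (1 + loading)


if __name__ == "__main__":
    # Same risk profile -> same price ("like risks, like amounts").
    print(premium(claim_probability=0.03, avg_claim_severity=4_000))  # 150.0
    print(premium(claim_probability=0.03, avg_claim_severity=4_000))  # 150.0
```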

Don’t assume AI is the best tool for the job

“Machine learning and AI are often used like ‘the hammer for any possible nail,’” Meg Mitchell cautions, “instead of taking a step back and thinking more about whether a human would be much better at doing some task.

“I see ML, and AI more broadly, used more and more in contexts where it doesn’t necessarily make sense. But it’s used anyway, because there’s this sense that it’s up-and-coming and perhaps more correct. There’s automation bias—a propensity to believe something that an automated system says more than you would [what] a human [says], even in the light of conflicting evidence….

“It’s really important to draw a distinction between more objective and more subjective kinds of tasks. Machine learning systems are really good at objective things based on superficial characteristics.

“Once it gets more into intuiting something about a person, or more subjective senses of a person, then that’s where humans—and a variety of humans—are really useful. You want to bring in machine learning as one signal that can be used for some specific kinds of data—alongside a bunch of other data that multiple people, ideally, would look at.”

You can’t have it all

“There are these trade-offs,” says Meg Mitchell. “If you’re very concerned about fraud, which insurance companies are, then there’s only so much information you can provide people to help them understand why you’ve come to various decisions [about coverage]… because that can be gamed, or used for further fraud.

“This gets to one of the points around ethics and not being able to have all ethics or all values. One value will always come at the expense of another. It’s always a matter of trade-offs, and trying to find some sweet spot.”


Listen and subscribe to Benevolent Bots on Spotify, Apple Podcasts, or wherever you get your podcasts. Stay tuned for new episodes in the coming weeks.

