Why “99% accurate” may not be enough—and safer paths to fair and explainable AI.
In this episode of the Get Plugged In Podcast Series: AI Insights, Dale Hall, Managing Director of the Society of Actuaries Research Institute, is joined by our principal and director of data science and analytics, Michael Niemerg, for a discussion on the urgent and evolving topic of fairness in artificial intelligence, particularly as it applies to insurance underwriting.
The SOA sought Michael out for his deep insights into the complexities of ensuring fairness in AI-driven models, the implications of generative AI for interpretability, and ideas on how actuarial professionals can better align modeling practices with ethical and regulatory standards. Listen in as Michael explains:
- Why financial products like insurance are built on trust between carriers, vendors, customers, regulators, and the public at large.
- Why insurance industry stakeholders need an effective regulatory regime.
- Why fairness must also be paired with transparency.
- Why generative AI that is hallucination-free 99% of the time may still not be good enough for underwriting.
- How it’s still possible to check model output for bias, even when an AI model isn’t fully explainable.
- How novel data sources can help ensure that consumers have the best chance of getting a policy, while also helping to weed out bad actors.
- Why testing for bias is about more than passing a statistical test. (Hint: It’s about preventing discriminatory outcomes and ensuring greater fairness across the entire industry.)
The conversation also tackles common misconceptions about AI fairness and what actuaries need to consider in designing and testing fair models.
Whether you’re an actuary, data scientist, or regulator, this is a must-listen discussion on how to navigate bias, maintain trust, and prepare for a future where AI continues to reshape the insurance industry.
Listen to the Podcast