Executive Summary

Analytical models that insurers have used for years often require actuaries and data scientists to consciously accept or reject individual data elements. In contrast, new AI/ML-powered models process massive volumes of data without human intervention. It is into this environment that insurance regulators are actively voicing their concerns, experts at FTI Consulting note. As regulations evolve, insurers will need robust governance and testing mechanisms. Here, the four consultants elaborate on compliance steps that include revisiting AI model development, combining testing and data governance, and evaluating back-end outcomes.

Insurers worldwide are increasingly embracing artificial intelligence (AI) and machine learning (ML) powered decision support tools (Artificial Intelligence Systems, or AIS), both home-grown and third-party developed, as well as external consumer data and information sources (ECDIS), in an effort to improve financial performance. These tools enable critical business activities, including enhanced risk assessment and underwriting, pricing, marketing and sales distribution strategies, and claims and fraud management.
At the same time, rapidly evolving industry regulations in the U.S. and abroad are putting increased pressure on companies both to proactively implement robust governance protocols and to test their models, algorithms and external datasets for potential bias against legally protected classes of people (e.g., based on race, ethnicity, sex, gender, gender identity, religion or age).
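As a simplified illustration of the kind of outcome testing this implies, the Python sketch below computes a disparate-impact ratio across groups in a hypothetical decision log. The column names, sample data and the 0.8 threshold (the widely cited "four-fifths" heuristic) are illustrative assumptions, not a prescribed regulatory test.

```python
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame,
                           group_col: str,
                           outcome_col: str,
                           reference_group: str) -> pd.Series:
    """Ratio of each group's favorable-outcome rate to the reference group's.

    A ratio below roughly 0.8 is a common heuristic flag (the "four-fifths
    rule") that outcomes for that group may warrant closer review.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates[reference_group]

# Hypothetical decision log: one row per applicant; 'approved' is 1 or 0.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   1,   0],
})

ratios = disparate_impact_ratio(decisions, "group", "approved",
                                reference_group="A")
flagged = ratios[ratios < 0.8]  # groups below the four-fifths heuristic
print(ratios)
print("Flagged groups:", list(flagged.index))
```

A production version would test each regulatorily relevant attribute, use proxy-inferred group labels where protected attributes are not collected, and log results as part of the model governance record.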
Now is the time to get out in front of the growing number of regulatory mandates and impending deadlines to avoid downstream challenges and costs.
Why All the Fuss Today?
Given that insurers have been using predictive models to inform risk and other key business decisions since the 1990s, one might ask why new fairness regulations are so important.