In an industry where the pace of regulation is often described with references to snails and turtles, the NAIC’s adoption of a model bulletin on carriers’ use of Artificial Intelligence Systems resembles the blur of a bullet train.
Executive Summary
Bruce Baty, a lawyer with nearly 40 years of experience assisting insurers in regulatory matters, gave an overview of a model bulletin adopted by the NAIC late last year that sets forth regulators’ expectations about how carriers will govern the development, use and acquisition of AI technologies. At a high level, regulators rushed to accomplish two things in crafting the bulletin: to remind carriers that laws on unfair trade practices, unfair discrimination, corporate governance disclosure, P/C rating and market conduct surveillance apply to the use of AI systems as well, and to convey expectations about the written documentation carriers should be prepared to produce during regular conduct exams regarding AI governance, risk management and scrutiny of third-party providers of AI systems.
Michael (Fitz) Fitzgerald, Insurance Industry Advisor for SAS Institute Inc., served as guest editor for this article and others featured in CM’s Q1-2024 magazine, “Leading the AI-Powered Insurer.”
“This was overnight, comparatively speaking,” Bruce Baty, a partner with the law firm Norton Rose Fulbright U.S. LLP, said during a January interview with Carrier Management Guest Editor Michael (Fitz) Fitzgerald, describing the development and adoption of a “Model Bulletin on the Use of Algorithms, Predictive Models, and Artificial Intelligence (AI) Systems by Insurers” by the National Association of Insurance Commissioners.
The bulletin, adopted at the Fall National Meeting of the NAIC in early December last year, outlines guiding principles for governance, risk management and internal controls for AI systems, with an entire section delineating what insurers should include in an “AIS Program”—a written program that regulators “expect” insurers to develop in order to document their “responsible use of AI Systems that make or support decisions related to regulated insurance practices.”
A final section addresses what might happen going forward as regulators carry out their oversight responsibilities during market conduct exams or other investigations to address AI usage:
“[A]n insurer can expect to be asked about its development, deployment, and use of AI Systems, or any specific Predictive Model, AI System or application and its outcomes (including Adverse Consumer Outcomes) from the use of those AI Systems, as well as any other information or documentation deemed relevant by the Department.”
The first section of the bulletin starts with a simple statement: “AI is transforming the insurance industry.” Baty, who has been advising insurance industry clients on corporate, reinsurance and regulatory matters for 38 years, agrees. He likens the use of AI in insurance to “a really fast-moving train that is speeding down the tracks.” That kind of speed “always is a great concern to state regulators, and the NAIC, in particular,” he said, highlighting the NAIC’s desire to get out in front of innovative ideas. “They like to study it and to understand what is happening, what they are regulating, and how best to regulate that to protect consumers.”
The legal adviser, who has specialized in insurance merger and acquisition deals over the years, contrasted the NAIC’s slower work on insurance business transfers (moving blocks of insurance business from one carrier to another) with its rush to get in front of insurers’ AI usage. With IBTs, the NAIC is carefully watching the IBT process go forward in Oklahoma and other states and considering how this will impact consumer rights and state guaranty fund coverage, among other things. “With AI, and the use of AI in marketing, underwriting and claims, it’s too late to start early. And that’s the NAIC right now. We can’t stop it. But what we can do is tap the brakes on the speeding train.”
Bruce Baty, Norton Rose Fulbright U.S. LLP
Baty described activity at the National Meeting in Orlando, where the bulletin was ultimately adopted, to support his observation about its “overnight” incarnation. Usually, by the time a proposed model gets to a national meeting, “all of the important stuff is already cooked. It’s already baked in by the time it gets to an airing by the committee in that public forum,” he said. But this time around, “regulators were still wrestling with concepts. There was a discussion and a debate among the regulators as to the meaning of the word ‘bias.'”
“They were still drafting on the day that this bulletin was adopted in Orlando,” he reported.
Anatomy of a Model; ‘Bias’ Undefined
The NAIC effort to craft the model bulletin started just about a year ago, with discussion at the previous Fall National Meeting in December 2022. Two drafts created by the Innovation, Cybersecurity and Technology (H) Committee were subsequently exposed for comment—one on July 17, 2023, and a second draft on Oct. 13, followed by a virtual public meeting on Nov. 16 last year. Sticking points raised by commenters during draft exposure periods included the phrasing of definitions that make up a section of the bulletin, as well as issues related to the use of third-party data and model providers and the corresponding obligations of insurers to comply with insurance regulations when they use such providers. Revisions along the way included the removal of definitions for the terms “bias,” “big data” and “model risk,” as well as the introduction of definitions for “adverse consumer outcomes” and “degree of potential harm to consumers”—both of which are considerations for how insurers tailor their AIS programs in the language of the final bulletin.
Preliminary minutes of a Dec. 1 meeting of the H Committee posted on the NAIC website describe the last-minute back-and-forth over the bulletin’s references to “bias” in some detail. The word appears in an overarching statement about regulatory encouragement of “the development and use of verification and testing methods to identify errors and bias in Predictive Models and AI Systems” and in more specific recommendations for insurers to engage in—and document—data governance practices related to “bias analysis and minimization” in their AIS programs. According to the minutes, after hearing an insurance carrier trade group representative advocate for the removal of all of the bulletin’s references to the undefined term bias, a consumer advocate advanced the idea of replacing the word “bias” with “unfair discrimination.” Then regulators weighed in.
North Dakota Commissioner Jon Godfread motioned for the “unfair discrimination” terminology, Iowa Commissioner Doug Ommen supported the idea of removing the word “bias” in the five instances where it appears, and Colorado Commissioner Michael Conway suggested a new term—”statistical bias.”
Although Godfread then agreed to endorse the “statistical bias” phrasing, other commissioners raised concerns about whether the change would necessitate a new round of comments and about whether “statistical bias” was the right term for all pages of the bulletin. The H Committee’s Chair Kathleen Birrane, Maryland’s insurance commissioner, stated that the goal was to conclude discussion at the December meeting. After more debate, Godfread retracted his motion, and another motion to approve the bulletin with existing bias references unchanged (and the term remaining undefined) was passed through the committee, with Ommen abstaining from the motion to adopt.
The NAIC Executive Committee and Plenary adopted the model bulletin a few days later.
‘Subtle Reminders’
One instance of 11th-hour wordsmithing appears in the draft minutes, which reveal that a phrase about insurers’ “performance of audits” of third-party data and AI models to confirm their compliance with regulatory requirements was amended to read “performance of contractual rights regarding audits.” Concerns about vendor rights of confidentiality, and about exactly what insurers can contractually require of third-party vendors to gauge their conformity with an insurer’s standards of practice or compliance with regulatory requirements, were raised by some of the 37 commenters who offered a total of 275 pages of reactions to the first and second drafts.
The sheer breadth of the commentary that the H Committee had to wade through supports Baty’s sense of an accelerated pace of activity from start to finish. Rushed or not, the intent of the bulletin was realized, Baty believes.
“It [is] a reminder to the carriers that with this use of artificial intelligence, you still have to comply with the unfair trade practices laws, you still have to be mindful of your claims practices. Do they adhere to the Unfair Claim Settlement Practices Acts?” Baty asked, paraphrasing the introductory paragraph of the bulletin and the first section, which lists those two laws, along with market conduct surveillance laws, P/C rating laws and the regulatory disclosure requirements of the Corporate Governance Annual Disclosure acts already on the books.
“It’s an effort to tap the brakes of this speeding train down the tracks, to tell carriers that we already have laws on the books that address so many of these issues. We adopted CGAD to understand how you’re managing yourself. [And] AI, because it has such an impact on consumers, needs to be addressed in your CGAD reports. It’s got to be an important aspect of CGAD.”
In the end, what the committee created is a “subtle reminder that we have market conduct laws on the books and that we fully intend to utilize those market conduct tools to come in and investigate how are you using AI so that you are mitigating the risk to consumers with respect to unfair discrimination,” Baty said, summarizing the last of the bulletin’s four sections, “Regulatory Oversight and Examination Considerations.” (See related online sidebar, “NAIC’s AI Model Bulletin in Brief,” for more details of the bulletin.)
Principles Not Enough
“Not so subtle,” in the view of CM’s Guest Editor Fitzgerald, an industry advisor for SAS Institute, who notes that firms like his offer tools allowing carriers to deal with recommended “data lineage” and “bias analysis” and audit activities referenced in the bulletin.
None of that is required, however. “The goal of the bulletin is not to prescribe specific practices or to prescribe specific documentation requirements,” the language of the bulletin clearly states. “Rather, the goal is to ensure that insurers…are aware of [their] Department’s expectations as to how AI Systems will be governed and managed and of the kinds of information and documents about an Insurer’s AI Systems that the department expects an Insurer to produce when requested.”
So, why put out a document that is not prescriptive? Why do carriers need a reminder of any of this given that CGAD, ORSA and Form F requirements already exist and that the NAIC already adopted “Principles on Artificial Intelligence” around fairness, accountability, compliance, transparency and security commitments of “AI actors” in 2020? (AI actors are “insurance companies and all persons or entities facilitating the business of insurance that play an active role in the AI system life cycle, including third parties such as rating, data providers and advisory organizations.”)
“It was clear that that statement alone was not going to be enough to bring accountability to the companies using AI,” Baty said, referring to the Principles. In fact, when the NAIC has directly asked carriers whether they are adhering to the Principles, most simply ignored the question, Baty reported, referring to a survey of auto insurers using advanced AI conducted by the Big Data and Artificial Intelligence (H) Working Group in 2022. “What was really remarkable and caught the attention of the regulators was the number of companies that just left that [governance] question blank,” he said.
According to the Working Group’s auto survey report, 169 out of 193 carriers reported using advanced AI or machine learning in some part of their operations, with the highest percentage using AI for claims, 70 percent. Yet, while 135 carriers surveyed reported already using AI for claims, 45 said they hadn’t documented the first principle (fairness and ethics) in their governance program and 81 left the question blank. The Working Group report pointed out that regulators expected the number of “blanks” to be less than or equal to the 58 who are not using or planning to use AI for claims. The report writers offered the same observation for all of the other functions and principles where the unexpected pattern repeated. (See related textbox, “Definitions, Definitions: Advanced AI vs AI“)
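The Working Group’s observation boils down to simple arithmetic, which can be sketched as follows (figures are the ones quoted above from the auto survey report; the variable names are ours):

```python
# Consistency check the regulators applied to the 2022 auto survey responses:
# blanks on the fairness/ethics governance question should come only from
# carriers not using AI for claims.
total_carriers = 193          # auto insurers that responded to the survey
using_ai_for_claims = 135     # reported already using AI/ML for claims
blanks_on_fairness = 81       # left the fairness/ethics governance question blank

not_using = total_carriers - using_ai_for_claims   # 58 not using AI for claims

# Since only non-users had a reason to skip the question, regulators expected
# blanks <= 58. The observed excess implies that at least this many AI-using
# carriers left the governance question unanswered:
excess_blanks = blanks_on_fairness - not_using
print(f"Expected at most {not_using} blanks; observed {blanks_on_fairness} "
      f"(at least {excess_blanks} AI-using carriers skipped the question).")
```

That gap of at least 23 AI-using carriers with no documented fairness principle is what Baty describes as the “eye opener” below.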
“In other words, maybe they have adopted those principles, but they haven’t incorporated them formally as part of their governance program. That was a real eye opener to the NAIC,” Baty said, during a video interview with Fitzgerald posted on the SAS website last year. (Video series: Insurance Analytics at Scale Video Series | SAS)
A more recent survey of home insurers tallies only yes and no answers for survey questions about whether each of the Principles is documented in carrier governance programs. Tables in the report reveal that 104 of 194 homeowners carriers say they are already using advanced AI for claims (still the top function for AI usage) and 174 said they did not document fairness considerations related to the use of their AI claims tools in their governance programs—again, with similar response patterns for other functions and principles. (Editor’s Note: 65 respondents said they documented fairness considerations “companywide,” across all functions)
Looking Ahead
Baty anticipates states with regulators who were on the H Committee in 2023—15 of them—will move forward to adopt the model bulletin in 2024. “The bulletin talks about a written AI program that carriers should adopt. That’s something that is not currently required, but I think the carriers are going to have to [recognize], whether your state of domicile adopts the bulletin or not, this is going to become a best practice.”
Already, some states are moving to craft their own unique guidance documents. On Jan. 17, New York’s Department of Financial Services released a circular letter aimed at preventing unfair discrimination in carriers’ use of AI and external data sources specifically in underwriting and pricing. (See related online sidebar, “What’s in New York’s Latest AI Circular?”)
“I don’t know that we need to have all 54 jurisdictions adopt the bulletin for this to have impact,” Baty said. “What this [NAIC bulletin] is going to do is establish that best practice. So, when the market conduct examiners come in and they say, ‘I want to see your written AI program,’ whether your state adopted it or not, they’re going to be looking for your AI program. And I think, prudently, you better be able to hand that over. You better have one,” he said.
***
This article is featured in Carrier Management’s first-quarter 2024 magazine, “Leading the AI-Powered Insurer,” conceived by Guest Editor Mike (Fitz) Fitzgerald.
Related articles include:
- MGU-Carrier Relationships and AI
- NAIC’s AI Model Bulletin in Brief
- What’s in New York’s Latest AI Circular?
All of the articles in the magazine are available on the magazine page of our website.