Noting that regulators encourage innovation and the use of AI Systems while recognizing that such systems present unique risks, such as the “potential for inaccuracy, unfair discrimination, data vulnerability, and lack of transparency and explainability,” the NAIC Model Bulletin on the Use of Artificial Intelligence Systems by Insurers organizes its guidance on AI governance and risk management into four sections:
Section 1 summarizes the bulletin’s legislative authority, citing existing laws on unfair practices and disclosures and referencing the “Principles on Artificial Intelligence” that the NAIC adopted in 2020, which address fairness, accountability, compliance, transparency and security.
Section 2 defines key terms used in the bulletin, including “AI System,” “Generative AI,” “Model Drift” and “Predictive Model”—terms that were added in the final draft.
Section 3, “Regulatory Guidance and Expectations,” sets the expectation that insurers will establish meaningful governance and risk management controls and “encourages the development and use of verification and testing methods to identify errors and bias.” The section conveys the expectation that each insurer will maintain a written AIS program, which “should be designed to mitigate the risk of Adverse Consumer Outcomes,” and states that risk management controls and governance processes “should be reflective of, and commensurate with, the insurer’s own assessment of the degree and nature of risk posed to consumers.”
Subsections offer general guidelines for AIS program design, considerations for a governance framework, items that risk management and internal controls should address, and considerations relating to the use of third-party AI systems and data.
Among the general guidelines are statements advising that AIS program development, implementation, monitoring and oversight should rest with “senior management accountable to the board or an appropriate committee of the board” and that an AIS program should include processes and procedures “providing notice to impacted consumers that AI Systems are in use and provide access to appropriate levels of information…”
When setting up a governance accountability structure, an insurer should consider “the formation of centralized, federated or otherwise constituted committees comprised of representatives from appropriate disciplines and units.” Examples listed include product specialties, actuarial, data science and analytics, underwriting, claims, compliance and legal.
Addressing risk management and internal controls, the bulletin states that the framework for identification, mitigation and management of AI risks “at each stage of the AI System life cycle” should address, among other things, oversight and approval processes of AI Systems, oversight and management of Predictive Models, validation and testing, data retention, and suitability.
Looming large in the section on third-party systems are guidelines describing suggested contract terms with vendors that would provide audit rights and require vendors to cooperate with insurers that are subject to regulatory inquiries into their use of third-party products and services.
Section 4, “Regulatory Oversight and Examination Considerations,” reminds insurers that regulatory oversight of conduct extends to AI Systems, and that they can expect to be asked about the development and use of AI Systems even in the absence of a written AIS program. That written program, however, is the first item listed among the types of information that regulators are likely to request during an exam.
Also top of mind for inquiring regulators will be questions about how an insurer’s AIS program lines up with both the insurer’s degree of reliance on AI and the risk of adverse outcomes to consumers.
“In addition to conducting a review of any of the items listed in this Bulletin, a regulator may also ask questions regarding any specific model, AI System, or its application,” the model language says, going on to list types of information and documentation that regulators might request.
Among the items is “documentation related to validation, testing, and auditing, including evaluation of Model Drift to assess the reliability of outputs that influence the decisions made based on Predictive Models.”