“AI will be life-changing for insurance companies and consumers alike, raising the question of how regulators can ensure that models and algorithms and machine learning don’t simply scale-up the bad practices of the past,” writes Andrew Mais, commissioner of the Connecticut Insurance Department, in the accompanying article.
He notes that the NAIC has crafted a playbook by adopting guiding principles on artificial intelligence (AI).
The principles outline five key tenets, summarized with the acronym FACTS:
- Fair and Ethical: AI actors should respect the rule of law and should pursue beneficial consumer outcomes aligned with the risk-based foundation of insurance, avoiding and correcting unintended discriminatory consequences.
- Accountable: AI actors should be accountable for ensuring that AI systems operate in compliance with the guiding principles.
- Compliant: AI actors must have the knowledge and resources in place to comply with all applicable insurance laws and regulations.
- Transparent: AI actors should commit to transparency and responsible disclosures regarding AI systems to relevant stakeholders. Stakeholders, including regulators and consumers, should have a way to inquire about, review and seek recourse for AI-driven insurance decisions.
- Secure/Safe/Robust: AI actors should ensure a reasonable level of traceability in relation to datasets, processes and decisions made during the AI system life cycle. AI actors should implement systematic risk management approaches to detect and correct risks associated with privacy, digital security and unfair discrimination.


