The NAIC published a Model Bulletin on the use of artificial intelligence systems in insurance, available for adoption by any state that chooses to do so. Insurers are reminded that actions or decisions affecting insureds that are made or supported by AI and AI systems must comply with applicable insurance laws and regulations, including unfair trade practices and unfair discrimination laws. The Bulletin also sets out certain expectations of insurers in their development and use of AI technologies.
As AI continues to develop, it has been adopted across the industry in many ways, including product development, marketing, sales and distribution, underwriting and pricing, policy servicing, claim management, and fraud detection. While the NAIC supports the development and use of AI systems, it is also aware that these systems can present certain risks to consumers.
The Bulletin defines a number of terms, including Adverse Consumer Outcome, AI System, Artificial Intelligence, Generative AI, and Machine Learning; the full definitions can be found in the Bulletin.
The NAIC's Principles of Artificial Intelligence provide a useful guideline for insurers developing and implementing AI systems. The principles stress the importance of fairness and ethical use of AI, accountability, compliance with state laws and regulations, and transparency.
The bulletin outlines applicable parts of a state's law that insurers must adhere to in connection with AI systems:
Unfair Trade Practices Model Act (UTPA): prohibits unfair methods of competition and unfair or deceptive acts or practices.
Unfair Claims Settlement Practices Model Act (UCSPA): sets standards for the investigation and disposition of insurance claims.
Insurers should ensure that the use of AI Systems does not violate the principles of the UTPA or the UCSPA.
Property and Casualty Model Rating Law: requires that insurance rates not be excessive, inadequate, or unfairly discriminatory.
These rating laws apply regardless of the methodology an insurer uses to develop rates. Rates, rating rules, and rating plans developed using AI technologies must not result in rates that violate the rating laws.
Insurers are expected to develop and implement a written AIS Program for the responsible use of AI Systems. The program should be designed to promote compliance with applicable insurance laws and regulations and to mitigate the risk of adverse consumer outcomes, and it must include verification and testing methods to identify errors and bias in predictive models and AI Systems. An insurer's decisions made or supported through the use of an AI System are subject to examination by the applicable Department to determine whether they comply with insurance laws and regulations. Insurers should be prepared to provide the Department with information on their development and use of AI Systems, any outcomes from the use of those systems, and any other information or documentation the Department requests.
The bulletin outlines a series of general guidelines for AIS Programs, including governance, risk management controls, audit functions, how AI use fits within the insurer's Enterprise Risk Management System, how the AIS Program will address all phases of an AI system's life cycle, and others.
A governance framework for oversight of the AI system is required. Governance should prioritize transparency and fairness while also protecting proprietary and trade secret information. The AIS Program should also include risk management and internal controls, with oversight and approval processes, data practices and accountability, and validation and testing requirements.
Insurers are also required to have a process for handling third-party data, with appropriate standards, policies, and procedures to ensure the safety of consumer data.
Lastly, the bulletin reminds insurers of the Department's oversight responsibility and advises that insurers should expect investigations or market conduct surveys to include a review of their AI protocols and standards.
The Bulletin can be found here.

