
Insurance carriers across the nation are rapidly embracing artificial intelligence (AI) to transform their operations.

From streamlining claims processing and enhancing marketing efforts to conducting sophisticated market analyses, AI has become an indispensable tool for competitive advantage. However, this technological evolution comes with a significant regulatory challenge that many carriers are just beginning to understand.

With no overriding federal data privacy or AI regulations in place, a complex patchwork of state-level AI and data privacy laws has emerged to fill the regulatory void. Currently, 20 states already have AI or data privacy regulations on the books, with additional bills expected to go to votes once state legislatures reconvene in January.

For insurance carriers operating across multiple states, navigating this maze of varying requirements demands a robust, comprehensive approach to governing both AI systems and the data that powers them.

The state-by-state complexity

In 2023, the National Association of Insurance Commissioners (NAIC) issued a model bulletin offering guidelines for the ethical and secure use of AI by insurance companies, and so far, 24 states have used the model as the basis for their AI regulation.

As the law firm Holland & Knight writes, the model bulletin "provides guidelines for developing an AIS Program and requires auditing processes, clear and transparent governance frameworks, robust risk management and internal controls, and third-party vendor management and oversight." For any insurance company employing AI, the NAIC bulletin is a good place to start.

That said, even where states have based their legislation on the NAIC's guidelines, individual implementations differ, and the regulatory landscape varies dramatically from state to state, creating compliance challenges that extend far beyond simple checkbox exercises.

On the data privacy front, California, Colorado, Connecticut, Maryland, and Minnesota require businesses to allow consumers to communicate their privacy preferences automatically through universal opt-out mechanisms like the Global Privacy Control. Tennessee, notably, does not currently place this obligation on covered businesses, illustrating how requirements can differ even among states with privacy laws.

Other states go further. New Jersey, for example, mandates obtaining affirmative consent from minors aged 13 to 17 before processing their data for targeted advertising, sale, or profiling. Maryland requires businesses to ensure that sensitive data processing is strictly necessary for the products or services provided and prohibits the sale of such data entirely, a stricter standard than Colorado's "adequate, relevant, and limited" requirement or California's "reasonably necessary and proportionate" standard.

Other states have regulations that go beyond data privacy to cover the use of AI in decisions that affect consumers. For example, Colorado's Artificial Intelligence Act establishes comprehensive requirements for "high-risk" AI systems, mandating that organizations demonstrate they're not engaging in algorithmic discrimination. Demonstrating this requires feeding AI systems data that can directly or indirectly identify individuals, which means this data, too, must satisfy the applicable data privacy requirements.

Also challenging for insurance carriers, many states require that data, models, and artifacts used to test and validate AI efficacy be archived and made available upon request by regulatory bodies. For instance, under Colorado's law, consumers have the right to understand why AI profiling resulted in specific decisions and learn what actions they can take to secure different outcomes.

They may also review personal data used in profiling, correct inaccuracies, and have decisions reevaluated based on corrected information. These regulations create significant data retention and retrieval obligations that must be built into AI governance frameworks from the ground up.

Adding to the complexity, legal thresholds for applicability vary significantly across states. Maryland's requirements kick in for companies with 35,000 customers that derive more than 50% of gross annual revenue from selling personal information. Montana sets its threshold at 25,000 customers, while Tennessee sets it at 175,000 and Minnesota at 100,000. These varying thresholds mean carriers must carefully track their customer base in each state to ensure compliance triggers are properly identified.
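To illustrate the kind of tracking involved, a carrier could maintain a per-state lookup of customer-count thresholds and flag where applicability is triggered. This is a minimal sketch: the function and data structure names are hypothetical, the thresholds simply mirror the figures cited above, and real applicability tests also hinge on factors such as revenue from selling personal information.

```python
# Minimal sketch: flag states where a carrier's customer count crosses
# the customer-count thresholds cited above. Figures are illustrative;
# each statute's full applicability test must be checked separately.
CUSTOMER_THRESHOLDS = {
    "Maryland": 35_000,
    "Montana": 25_000,
    "Tennessee": 175_000,
    "Minnesota": 100_000,
}

def triggered_states(customers_by_state: dict[str, int]) -> list[str]:
    """Return states where the customer count meets or exceeds the threshold."""
    return sorted(
        state
        for state, count in customers_by_state.items()
        if count >= CUSTOMER_THRESHOLDS.get(state, float("inf"))
    )

print(triggered_states({"Maryland": 40_000, "Montana": 12_000, "Tennessee": 200_000}))
# ['Maryland', 'Tennessee']
```

A real system would refresh these counts continuously, since a carrier can cross a threshold mid-year as its book of business grows.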

Strategic compliance framework

Establishing comprehensive data governance and guardrails is essential for sustainable success in this regulatory environment. The first step toward compliance begins with thorough data classification, but manual methods simply won't provide the flexibility and scalability needed for multi-state operations.

Insurance carriers should invest in discovery and management tools that identify and govern data. These systems should automatically tag data with appropriate sensitivity levels and track its movement through AI pipelines. Governance frameworks must be dynamic, capable of adapting to new data types and regulatory requirements while flagging when sensitive data usage and storage patterns deviate from established norms.
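As a rough sketch of what automated sensitivity tagging can look like, the pattern rules, labels, and function names below are hypothetical and not drawn from any specific product; production classifiers combine many more signals than simple regular expressions.

```python
import re

# Hypothetical pattern-based classifier: tag each field with a
# sensitivity level so downstream AI pipelines can apply the right
# per-state handling rules. Patterns and labels are illustrative.
RULES = [
    ("ssn", re.compile(r"^\d{3}-\d{2}-\d{4}$"), "restricted"),
    ("email", re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"), "sensitive"),
]

def classify_record(record: dict[str, str]) -> dict[str, str]:
    """Assign each field the first matching sensitivity, defaulting to 'internal'."""
    tags = {}
    for field, value in record.items():
        tags[field] = "internal"
        for _, pattern, level in RULES:
            if pattern.match(value):
                tags[field] = level
                break
    return tags

print(classify_record({"name": "Jane Doe", "ssn": "123-45-6789"}))
# {'name': 'internal', 'ssn': 'restricted'}
```

The point of automating this step is that the same tags can then drive access controls, retention rules, and the deviation alerts described above without per-state manual review.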

Effective frameworks must consider not just who accesses data, but how it's used, where it's processed, and what outputs it generates. This comprehensive approach requires detailed tracking systems that maintain records of data lineage, creating an audit trail that follows data as AI systems transform it. Organizations must understand exactly what data influences AI outputs and ensure accountability for all data handling decisions.
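A lineage record of this kind can be as simple as an append-only log of transformation steps. The sketch below is hypothetical (invented function and field names); real lineage tooling adds durable storage, signing, and retention policies, but the core idea of hashing inputs and outputs per step is the same.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical append-only lineage log: each entry records what data
# went into a transformation step and what came out, so an AI output
# can be traced back to its inputs on regulatory request.
lineage_log: list[dict] = []

def record_step(step: str, inputs: dict, output: dict) -> str:
    """Append a lineage entry and return the input fingerprint."""
    entry = {
        "step": step,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input_hash": hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output_hash": hashlib.sha256(json.dumps(output, sort_keys=True).encode()).hexdigest(),
    }
    lineage_log.append(entry)
    return entry["input_hash"]

record_step("risk_scoring", {"policy_id": "P-1", "zip": "80202"}, {"score": 0.42})
print(len(lineage_log))  # 1
```

Because the hashes are deterministic, a regulator's request to "show what data produced this decision" becomes a lookup rather than a reconstruction effort.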

Building compliance systems

In this complex environment, governance frameworks must be designed to accommodate multiple regulatory requirements simultaneously. This demands automated controls that can enforce different standards based on data types, user locations, and processing purposes.

Continuous monitoring systems should alert stakeholders when AI systems operate outside approved parameters and produce detailed audit trails and impact assessments for AI decision-making processes.
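One simple form of such monitoring is a batch-level check that raises an alert whenever model decisions fall outside an approved band. The thresholds, field names, and alert format below are invented for illustration; real systems would route alerts to stakeholders and persist them as part of the audit trail.

```python
# Hypothetical guardrail: flag AI decisions whose confidence or overall
# denial rate drifts outside approved parameters, keeping every alert
# for later audit and impact assessment. Thresholds are illustrative.
APPROVED = {"min_confidence": 0.7, "max_denial_rate": 0.30}

alerts: list[str] = []

def check_decision_batch(decisions: list[dict]) -> None:
    """Record an alert for each out-of-bounds condition in the batch."""
    denials = sum(1 for d in decisions if d["outcome"] == "deny")
    denial_rate = denials / len(decisions)
    if denial_rate > APPROVED["max_denial_rate"]:
        alerts.append(f"denial rate {denial_rate:.2f} exceeds approved maximum")
    for d in decisions:
        if d["confidence"] < APPROVED["min_confidence"]:
            alerts.append(f"decision {d['id']} below confidence floor")

check_decision_batch([
    {"id": "c1", "outcome": "approve", "confidence": 0.91},
    {"id": "c2", "outcome": "deny", "confidence": 0.55},
])
print(alerts)
```

Even a sketch like this shows why the audit-trail and monitoring obligations reinforce each other: the same events that trigger alerts become the evidence regulators can request later.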

Success also requires investment in comprehensive data literacy programs that help employees throughout the organization understand the implications of their data handling decisions. From underwriters using AI-powered risk assessment tools to claims adjusters leveraging automated processing systems, every employee interacting with AI-enhanced processes must understand their role in maintaining compliance.

As the regulatory landscape continues to evolve, insurance carriers need a firm governance foundation to remain compliant across all operating jurisdictions. The key lies in developing flexible frameworks that can adapt to new requirements while maintaining operational efficiency. Those that master the compliance maze today will be better positioned to leverage AI innovations tomorrow while maintaining consumer trust and regulatory approval.

George Tziahanas, VP of compliance and AGC at Archive360.

Opinions shared in this piece are the author's own.
