Directors and Officers (D&O) insurance helps protect corporate officers and board directors against personal liability arising from decisions made in their official capacities. But as Artificial Intelligence (AI) adoption accelerates in areas such as decision-making, automated hiring, and generative models[1], these activities are creating new liabilities that push the limits of traditional D&O coverage. The new risks stem from AI's "black box"[2] nature, its rapid evolution and deployment, and a patchwork of regulation (e.g., EU AI Act[3] enforcement and various U.S. state-level AI bills and laws). Insurers are responding with exclusions, higher premiums, and demands for AI governance disclosures. Unfortunately, coverage gaps persist, potentially leaving executives exposed to multi-million-dollar claims. Here is a short primer on what you need to know.
Key Current Issues and Liability Concerns
Rising Litigation and Executive Exposure: AI litigation is surging, with over 200 U.S. cases in 2025 alone targeting intellectual property (IP) infringement, privacy breaches[4] (e.g., unauthorized data use in models), and ethical harms (e.g., faulty AI advice causing economic loss). Boards face derivative suits[5] for inadequate AI risk disclosure in SEC filings. This wave of litigation accelerated in 2025, with over 50 AI-related securities filings in the first half of the year alone (a 300% increase over prior years), often triggered by "AI-washing"[6] allegations and short-seller attacks[7] that produce sharp stock drops and expose executives to personal liability.
The following are real-world examples of AI-related incidents and claims that have led to significant D&O insurance challenges and executive liability exposures:
- OpenAI coverage shortfall: D&O insurance limits were exceeded due to multiple aggregated claims involving AI-generated outputs.
- New York Times copyright claims: Lawsuits alleged copyright infringement from AI model training data, resulting in substantial legal exposure for directors.
- xAI trade secret disputes: Trade secret litigation arose from alleged misuse of proprietary data in AI development, with potential director liability.
- Wrongful death cases: Claims were filed after AI-driven decisions allegedly led to serious harm or fatalities, exceeding insurance coverage.
- Air Canada chatbot incident: A faulty AI chatbot response resulted in an enforceable claim against the airline, highlighting director exposure for inadequate vendor oversight.
The table below outlines real-world examples categorized by risk type. These cases illustrate the evolving landscape of executive liability and insurance limitations as organizations increasingly adopt AI technologies.
| Risk Category | Example Liability Concern | D&O Impact | Real-World Example(s) | Mitigation Strategies |
|---|---|---|---|---|
| Bias/Discrimination | AI hiring tools rejecting protected groups | Shareholder suits for fiduciary breach; 15% rise in claims | Mobley v. Workday; iTutorGroup; Amazon recruiting engine; mortgage algorithms | Mandate AI audits; document in board minutes |
| IP Infringement | Generative AI trained on unlicensed data | Class actions against executives for oversight failure | Getty Images v. Stability AI; Sony/Universal/Warner v. Suno & Udio; Entrepreneur Media v. Meta | Vendor contracts with indemnity clauses |
| Privacy Breaches | Unauthorized data in AI models | GDPR[4]/EU AI Act[3] fines plus derivative claims[5] | Samsung ChatGPT leak; Facebook/Cambridge Analytica; Google DeepMind & NHS | Privacy-by-design policies; regular training |
| Decision-Making Errors | Faulty AI advice leading to business loss | Negligence exclusions triggering personal exposure | Air Canada chatbot refund; Microsoft Tay chatbot; healthcare AI misdiagnoses | Document AI-reliance thresholds |
| Systemic Failures | Widespread model hallucinations | Insurer denials; premium spikes of up to 20% | Google AI Overview defamation; Arup video-call scam; insurers seeking AI exclusions | Hybrid human-AI governance frameworks |
The following factors illustrate how AI-related risks amplify new liability exposures and insurance challenges for businesses across industries:
Regulatory and Governance Failures: Directors risk claims for "failure to prevent" AI harms under new regimes such as the UK's ECCTA (effective 2025) and expectations set by U.S. NIST AI frameworks, including claims of insufficient due diligence on high-risk AI (e.g., hiring-bias tools). ESG[8]-linked suits tie AI to broader sustainability failures.
Geopolitical Tensions: AI supply chain disruptions (e.g., chip export bans) could trigger insolvency-driven D&O claims, projected to rise 6% globally in 2025. Mergers and acquisitions further complicate coverage: "claims-made" D&O policies may lapse post-deal without tail extensions[9], leaving executives vulnerable to legacy AI claims.
Systemic and Uninsurable Risks: Insurers increasingly view AI as a "systemic risk" akin to climate change: correlated failures (e.g., hallucinating models across sectors) could generate billions of dollars in claims, prompting some underwriters to retreat from offering or writing D&O coverage altogether. Such scenarios echo warnings from the World Economic Forum on IP, privacy, and liability concerns.
Coverage Exclusions and Policy Adaptations: Major insurers, including AIG and WR Berkley, are seeking regulatory approval to exclude AI-related risks from general liability and D&O policies, citing the difficulty of modeling systemic harms such as widespread algorithmic failures. This shift could leave claims involving AI-driven decisions, such as discrimination lawsuits arising from biased lending algorithms, without coverage. Firms exposed to AI risks are seeing premium increases of 15-25% or more. Failure to disclose existing AI exposures to the insurer risks rescission of coverage, and some insurers simply decline to write coverage for organizations with even limited AI exposures.
Traditional D&O policies typically cover "wrongful acts" but exclude negligence in processes; for AI, this means that oversight failures (e.g., deploying untested AI models) may not be covered, creating coverage gaps for Chief Information Security Officers (CISOs) and boards of directors. Errors and omissions (E&O) extensions, where they exist, may apply only to in-house AI, not third-party tools.
The bottom line: with limited historical loss data, most underwriters find it difficult to quantify AI risks and are seeking ways to limit their exposure to loss.
Recommendations For Risk Managers, Insurance Agents, and Brokers
As artificial intelligence becomes integrated into business operations, organizations face unprecedented challenges in managing, avoiding, and insuring against AI-related liability exposures. The rapid evolution of AI technologies has created new forms of systemic, regulatory, and strategic risk, prompting insurers to reassess their coverage approaches and risk managers to adapt their practices. The following recommendations are designed to help risk managers, insurance agents, and brokers proactively address emerging threats and safeguard their organizations in an increasingly AI-driven environment.
· Conduct AI Risk Assessments: Regularly evaluate the organization’s AI systems for exposures such as bias, privacy breaches, intellectual property (IP) infringement, and systemic failures.
· Strengthen Governance: Implement clear policies for AI oversight, including documentation of decision-making processes and escalation protocols for AI-related incidents.
· Monitor Regulatory Changes: Stay current on evolving laws (e.g., EU AI Act, GDPR, ESG requirements) and update risk frameworks accordingly.
· Stress-Test Scenarios: Run quarterly simulations of AI-related incidents to assess financial and reputational impact.
· Educate Clients on AI Exposures: Proactively inform clients about emerging AI risks, coverage gaps, and governance importance.
· Review and Tailor Coverage: Examine clients’ D&O policies for AI-related exclusions and limitations. Recommend Side-A DIC policies[10] and runoff coverage[9] for M&A transactions.
· Encourage Proactive Disclosures: Recommend that clients revise their insurance applications to include comprehensive AI inventories and clearly defined governance procedures.
· Advocate for Mitigation Strategies: Encourage privacy-by-design, staff training, and vendor indemnity clauses.
· Track Claims and Litigation Trends: Monitor lawsuits and regulatory actions to predict insurer responses.
· Collaborate Across Functions: Foster communication between risk management, legal, compliance, and IT teams.
· Stay Informed: Subscribe to industry updates and take part in professional networks focused on AI and insurance.
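The stress-testing recommendation above can be sketched quantitatively. The following is a minimal Monte Carlo illustration in Python of how a risk manager might estimate how often simulated AI-related claims would exceed a D&O aggregate limit. Every figure here (incident probability, average severity, policy limit) is a hypothetical placeholder for illustration, not market or actuarial data.

```python
import random

def simulate_ai_loss_scenarios(
    n_trials: int = 10_000,
    incident_prob: float = 0.05,       # assumed chance of an AI-related claim per quarter
    mean_severity: float = 2_000_000,  # assumed average claim size (USD) -- illustrative only
    policy_limit: float = 5_000_000,   # assumed D&O aggregate limit (USD) -- illustrative only
    seed: int = 42,
) -> dict:
    """Estimate how often simulated annual AI-related losses breach a D&O limit.

    All parameters are hypothetical; real models would use fitted
    frequency/severity distributions from claims experience.
    """
    rng = random.Random(seed)
    breaches = 0
    uncovered_total = 0.0
    for _ in range(n_trials):
        annual_loss = 0.0
        # Model claim frequency as a simple Bernoulli event per quarter.
        for _ in range(4):
            if rng.random() < incident_prob:
                # Lognormal severity gives the heavy right tail typical of
                # systemic events (e.g., a model failure across many users).
                annual_loss += rng.lognormvariate(0, 1) * mean_severity
        if annual_loss > policy_limit:
            breaches += 1
            uncovered_total += annual_loss - policy_limit
    return {
        "limit_breach_rate": breaches / n_trials,
        "avg_uncovered_when_breached": (uncovered_total / breaches) if breaches else 0.0,
    }

result = simulate_ai_loss_scenarios()
print(result)
```

Even a toy model like this gives a board something concrete to discuss: how sensitive the uncovered exposure is to the policy limit, and whether a Side-A DIC layer or higher aggregate limit would meaningfully reduce breach frequency.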
Summary
AI adoption is rapidly reshaping liability exposures for directors and officers, with insurers tightening coverage and litigation surging. Risk managers, agents, and brokers must understand emerging exclusions, rising premiums, and the importance of robust AI governance. Proactive risk assessments, tailored coverage solutions, and ongoing regulatory monitoring are essential to protect organizations and clients from evolving AI-driven risks.
[1] Generative models: Artificial intelligence systems (such as ChatGPT, DALL-E, or Stable Diffusion) that can create new content—text, images, audio, or code—based on patterns learned from large datasets.
[2] AI “Black Box”: Refers to the opaque nature of AI decision-making, making it difficult to explain or audit outcomes.
[3] EU AI Act: European Union regulation setting standards for AI systems, effective mid-2025.
[4] GDPR: General Data Protection Regulation, the EU’s privacy law governing data use.
[5] Derivative suit: A lawsuit brought by shareholders on behalf of the company, often for breaches of fiduciary duty.
[6] AI-washing: Overstating or misrepresenting a company’s AI capabilities in disclosures or marketing.
[7] A short-seller report (or "attack") is a critical investigative publication released by investors holding short positions in a company's stock—often alleging financial irregularities, overvaluation, or operational flaws—to erode investor confidence, depress the share price, and profit on the short positions. In the AI sector, such reports have targeted hyped firms like CoreWeave and Super Micro Computer amid bubble fears.
[8] ESG: Environmental, Social, and Governance—criteria for assessing a company’s ethical impact and sustainability.
[9] Tail extensions (runoff coverage): Insurance that extends the reporting period for claims made after a policy expires or following a merger/acquisition.
[10] Side-A DIC policy: A type of D&O insurance that provides direct coverage to individual directors/officers when the company cannot indemnify them.

