Major insurance providers are moving to exclude artificial intelligence (AI)-related risks from corporate policies, fearing multibillion-dollar claims stemming from the rapidly evolving technology. Companies including AIG and WR Berkley have filed exclusion language with US regulators.
While businesses race to adopt cutting-edge AI, insurers remain cautious about offering comprehensive protection. AI "hallucinations"—instances where models generate false or erroneous content—have already led to costly and embarrassing failures. WR Berkley's proposed exclusion clause would deny claims involving "any actual or alleged use" of AI, including products or services incorporating embedded AI technology.
Illinois regulators previously questioned AIG about similar exclusions. In filings, AIG acknowledged that generative AI represents an "extremely broad technology" with claims likely to "increase over time." Though the insurer has submitted exclusion provisions, it stated there are "no current plans to implement them," keeping the option open for future use.
Dennis Bertram, European cyber insurance lead at Lloyd’s specialist insurer Mosaic, noted that insurers increasingly view AI outputs as too unpredictable and opaque to underwrite: "It remains a black box." Even specialized insurers like Mosaic, which covers some AI-enhanced software, refuse to insure risks tied to large language models like ChatGPT.
Rajiv Dattani, co-founder of AI auditing firm Artificial Intelligence Underwriting Company, highlighted unresolved liability questions: "When something goes wrong, who is responsible? There’s no clarity yet."
The industry’s caution follows high-profile AI mishaps:
- Solar firm Wolf River Electric sued Google for defamation after an AI-generated search summary falsely stated the company was facing legal action from a state attorney general.
Kevin Kalinich, Aon’s global cyber risk leader, warned that while insurers could absorb $400–500 million losses from a single firm’s agentic AI failure (e.g., mispricing or misdiagnosis), systemic risks—such as an AI provider’s mistake triggering thousands of correlated claims—would be "uninsurable."
Standard cyber policies typically exclude AI hallucinations, focusing instead on security or privacy breaches. Though technology Errors & Omissions (E&O) policies may cover AI mistakes, newly added exclusions are narrowing protections. Zurich Insurance CIO Ericson Chan noted that unlike traditional tech failures, AI risks involve multiple parties (developers, model builders, end-users), potentially amplifying market impacts "exponentially."
Some insurers use endorsements (policy add-ons) to address AI’s legal uncertainties, but brokers warn these may further restrict coverage. For example, QBE’s endorsement—later adopted by peers—caps AI-related regulatory fines at 2.5% of a policy’s total limit. Chubb has agreed to cover certain AI risks but excludes "widespread" events like model-level failures affecting numerous clients.
Brokers and lawyers fear insurers may resort to litigation to deny claims as AI-related losses mount. Aaron Le Marquer of Stewarts law firm predicted: "It will probably take a major systemic event for insurers to say, ‘Wait, we never intended to cover this.’"