As businesses race to adopt artificial intelligence (AI), risks are quietly accumulating. Yet the insurance industry, traditionally a "stabilizer" against risks, is showing caution and retreat in this area.
Major global insurers are taking steps to exclude AI-related risks from standard corporate policies, fearing potential multibillion-dollar claims tied to AI technology. Industry giants such as AIG and WR Berkley are among those seeking such exclusions in regulatory filings.
This marks a significant shift in insurers' stance toward AI risks. A key concern is the opaque decision-making process of AI models, often described as a "black box," making accountability difficult when errors occur. More alarmingly, flaws in a single AI model could trigger thousands of linked claims, creating an unmanageable "systemic, aggregated risk" for the industry.
The urgency stems from real-world cases where AI "hallucinations" (false outputs) or mistakes have led to costly consequences. In one example, a Canadian airline was ordered to compensate a customer after its chatbot fabricated a discount.
**The "Black Box" Problem and Systemic Risks** The unpredictability and massive scale of AI risks are driving insurers away. Dennis Bertram, head of European cyber insurance at Lloyd’s specialist insurer Mosaic, stated that AI models are "too much of a black box" to insure. While his firm covers some AI-enhanced software, it refuses to underwrite large language models like ChatGPT.
Kevin Kalinich, EY’s global cyber risk leader, noted that insurers might handle a $400–500 million loss from a single company’s AI pricing or diagnostic error. However, they cannot absorb losses from one flawed AI model affecting thousands of clients—a "systemic, interconnected aggregated risk."
Zurich Insurance’s CIO Ericson Chan added that AI risks involve multiple parties—developers, model builders, and end-users—creating complex liability chains with "potentially exponential" market impacts.
**Insurers Push for Broad Exclusions** AIG and WR Berkley have proposed sweeping exclusions in regulatory filings. WR Berkley’s clause seeks to reject claims tied to "any actual or alleged use" of AI, while AIG warned that generative AI is a "broad-reaching technology" likely to generate more claims over time.
These moves coincide with rising AI-related incidents. Last year, UK engineering firm Arup lost $25 million to fraudsters using a deepfake of an executive in a video call. As insurers seek exemptions, risks once covered under "technology errors and omissions" policies now face a protection gap.
**Limited Workarounds Emerge** Some insurers are testing compromise solutions, but coverage remains narrow and restrictive. For instance, QBE introduced an endorsement covering fines under the EU's AI Act, but brokers say payouts are capped at 2.5% of the total policy limit.
Zurich-based Chubb agreed to cover certain AI risks but excluded "widespread" AI events where a single model flaw impacts multiple clients.
Aaron Le Marquer, head of insurance disputes at law firm Stewarts, warned that as AI-driven losses surge, insurers may contest claims in court. He predicted a "major systemic event" may be needed before insurers clarify they "never intended to cover such scenarios."