Why Insurers May Unwittingly Become AI Safety Champions
Clara Riedenstein / Dec 10, 2025
Bold Office (Jamillah Knowles & Digit / Better Images of AI)
Suppose you googled yourself and the AI-generated box at the top of the search page alleged that you were being sued by the state’s attorney general for deceptive sales practices. To your knowledge, no such lawsuit exists. Still, your customers start dropping off, and you begin losing business you spent years building up.
That was the case for Wolf River Electric, a solar-panel installation company that claimed it lost $24 million in sales in 2024 due to errant search results. In response, the company sued Google for $110 million in damages.
Upsetting as the episode was, the tech giant could conceivably have paid the damages by turning to its insurance company. But insurers might not be able to provide such cover for much longer.
According to the Financial Times, insurance companies including AIG, Great American, and WR Berkley have sought regulators’ permission to limit their exposure to claims arising from mistakes made by AI agents and chatbots. Another insurer, Mosaic, told the publication this was because the technology is perceived as too much of a “black box” to establish liability.
This leaves companies in a bind. On the one hand, they are told that if they don’t adopt AI they will fall behind. On the other hand, AI might come with unacceptable liability risks.
To date, companies’ calculation has seemed to be that the benefits outweigh the risks. Companies have rushed to incorporate AI into their business models, even though complaints about AI hallucinations date at least as far back as ChatGPT’s release in 2022.
According to a McKinsey report from January, 92% of companies planned to invest more in generative AI over the next three years. Applications have ranged from customer service to marketing and finance. Governments have even started using AI to hold public office.
But if insurance stops covering AI, that calculation might shift. Insurers fear they would drown under a tsunami of damage claims stemming from AI chatbots’ fabrications. If regulators approve broad AI exemptions, companies would be directly liable for such false claims and could face the prospect of paying billions in damages.
Cases like that of Wolf River Electric aren’t isolated incidents. OpenAI has acknowledged that roughly one in ten responses from GPT-5 contains factual errors, and other AI agents generate similar levels of “bullshit” (a technical term). If insurers drop off, companies might have to self-insure against a technology whose risks cannot be easily quantified.
A way out could be stronger AI safety standards that put both companies and insurers at ease. A widely cited piece by techUK from September argued that AI insurance is deeply linked to AI assurance: the safer a system, the lower the premiums. The theory goes that this dynamic creates market incentives to develop safer AI systems and deploy them responsibly. The piece suggests that AI developers could adopt assurance techniques that help insurers assess AI-related risks, such as greater transparency about model data and training systems.
As the possibility of insurers retreating becomes real, we are starting to see that scenario play out. Industry leaders have taken to social platforms to advise companies on how to proceed if insurers back away. For instance, Vish Nandlall, a board advisor to NRG Energies and other startups, suggests companies should maintain human control over high-impact decisions and adopt audit-trail standards.
Others, like Cat Valverde, founder of Enterprise AI Group, advise companies to maintain a live inventory of their AI systems, keep human control over high-impact decisions, institute third-party risk management, and undergo independent testing for bias in their models.
That rhetoric might sound familiar to some. Research institutes have long advocated for increased safety measures in AI deployment: third-party validation schemes, AI testing and risk assessments. They have had only marginal success in getting companies to roll out the technology responsibly. But if AI proves too risky for insurers, that provides a clear incentive for the AI industry to develop clearer safety standards — and for the market to reward those companies which adopt AI responsibly.
While this means AI adoption might slow during the current transition period, in the long term there would be clearer safety standards that put both insurers and enterprises at ease.
It wouldn’t be the first time the insurance market has played a role in making a technology safer. In the 2010s, insurers grew jittery about providing cyber insurance because they couldn’t assess the risk from cyberattacks. As a result, safety standards such as multi-factor authentication became industry mainstays. Even today, the push and pull between insurers and cybersecurity continues.
The move away from AI coverage comes as policymakers around the globe are pulling away from AI regulation. In Brussels, the European Parliament this year declined to challenge the European Commission’s decision to pull the AI Liability Directive, which was intended to ensure “that persons harmed by AI systems enjoy the same level of protection as persons harmed by other technologies.”
In an effort to boost AI adoption on the continent, Brussels is backtracking on previous policy commitments. The recent Digital Omnibus proposes a “simplification” of existing tech regulation to boost AI development.
But insurers still cannot entirely replace regulators. Kai Zenner, Chief of Staff for MEP Axel Voss and a staunch advocate of the ditched AI Liability Directive, warns that while insurers can play a role in boosting AI safety, the lack of EU-level regulation may harm European competitiveness. This is particularly the case for small European companies, which are largely downstream adopters of AI and risk losing out if insurers lead the charge on AI safety. If insurance becomes prohibitively expensive or even impossible in the short term, European companies may back away from adopting AI just as competitiveness advocates are pushing them to embrace it.
Insurance companies might yet reveal themselves, inadvertently, as frontline promoters of AI safety.