
Insurance companies are trying to avoid big payouts by making AI safer


A centuries-old industry that once made cars and buildings safer hopes it can do the same for artificial intelligence.

A handful of insurers, including startups and at least one major multinational, are starting to offer specialized coverage for the failures of AI agents, the autonomous systems increasingly taking over tasks once handled by customer support representatives, job recruiters, travel agents and the like.

As insurers chase a lucrative new market, some are betting they can bring a measure of regulation and standardization to a technology that is still in its early days but is seen by many in the business world as the natural next step.

“When we think about car insurance, for example, the broad adoption of the safety belt was really something which was driven by the demands of insurance,” said Michael von Gablenz, who heads the AI insurance division of Munich Re, a multinational insurance company. “When we’re looking at past technologies and their journey, insurance has played a major role in that, and I believe insurance can play the same role for AI.”

The failures of generative AI have made bizarre headlines, inspired courtroom battles and devastated families who say the technology harmed their loved ones. But as AI tools become more common across businesses and personal devices, users are caught between trusting companies to self-regulate and waiting for comprehensive government oversight.

Some insurance companies are eager to step into this untapped frontier. Even as many traditional insurers avoid covering AI-related risks entirely and others are still hesitant, a few are already offering specialized coverage for customers who use AI systems, promising sizable payouts if things go haywire.

Those companies say insurance coverage — which has historically propelled safety improvements in a variety of industries — could also create an effective, market-based incentive for AI developers to make their products safer.

Rajiv Dattani, a co-founder of the Artificial Intelligence Underwriting Company (AIUC), an insurance startup developing standards for AI agents, said he believes that voluntary commitments from companies aren’t enough to manage the risks AI poses and that insurance can be a “neat middle-ground solution” — a form of third-party oversight that doesn’t rely solely on government action.

“Insurers will be incentivized to track accurately: What are the losses? How are they happening? How frequent are they? How severe are they? And then, what are the practices that you could use to actually prevent those losses, or at least mitigate them?” Dattani said. “We think insurers, because they’re paying, will end up leading a lot of this research or at least funding a lot of it.”

Businesses that use AI agents are susceptible to a broad spectrum of risks, including data leaks, jailbreaks, hallucinations, legal torts and reputational harm, according to insurers that cover such tools. An AI chatbot might leak confidential customer data, make discriminatory hiring decisions or — as multiple recent lawsuits have spotlighted — encourage self-harm in vulnerable users.

More than 90% of businesses want insurance protection against generative AI risks, according to a recent survey from the Geneva Association, an international insurance industry think tank. But without auditable standards for AI agent safety, many insurers still don’t know how to offer that protection.

A recent Ernst & Young report also found that 99% of the 975 businesses surveyed have suffered financial losses from AI-related risks, with nearly two-thirds suffering losses of more than $1 million. Meanwhile, AI companies such as OpenAI, Anthropic, Character.AI and a slew of other startups and tech giants have been the targets of major lawsuits in recent years.

Creating new metrics for unprecedented risks

In July, the AIUC launched the world’s first certification for AI agents, aimed at providing an auditable benchmark to assess the vulnerabilities of an agent. The standard, called AIUC-1, covers six pillars: security, safety, reliability, data and privacy, accountability and societal risks.

Companies can voluntarily get tested against the standard to gain customer trust, Dattani said, and the certificate could also be a way for insurance companies to assess whether an AI product meets their standards to be insurable.
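
To make concrete what testing against such a standard might involve, here is a minimal sketch. The six pillar names come from AIUC-1 as described above; the pass-rate scoring, the 95% threshold and the audit numbers are assumptions for illustration, not AIUC’s actual audit methodology.

```python
# Minimal sketch of scoring an AI agent against the six AIUC-1 pillars.
# Pillar names are from the standard; the scoring scheme, threshold and
# audit numbers below are illustrative assumptions, not AIUC's methodology.

PILLARS = [
    "security",
    "safety",
    "reliability",
    "data and privacy",
    "accountability",
    "societal risks",
]

PASS_THRESHOLD = 0.95  # assumed bar: fraction of audit tests an agent must pass

# Assumed audit results: fraction of tests passed per pillar.
audit_results = {
    "security": 0.99,
    "safety": 0.97,
    "reliability": 0.96,
    "data and privacy": 0.98,
    "accountability": 0.95,
    "societal risks": 0.93,
}

def certifiable(results: dict[str, float], threshold: float = PASS_THRESHOLD) -> bool:
    """Certify only if every pillar clears the bar; one weak pillar fails the agent."""
    return all(results.get(pillar, 0.0) >= threshold for pillar in PILLARS)

for pillar in PILLARS:
    print(f"{pillar:18} {audit_results[pillar]:.0%}")
print("certifiable:", certifiable(audit_results))  # False: societal risks at 93%
```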

“We’re in an era now where the losses are really here and happening; that’s one thing. The second thing is that insurers are now actually starting to exclude AI from their existing policies,” Dattani said. “So it feels pretty certain that we’re going to need some solution here, and we need people with skin in the game who can provide third-party oversight. That’s where we see the role of insurance.”

On Monday, AIUC introduced a consortium of 50 third-party backers from tech companies (including Anthropic, Google and Meta), industry groups and academic institutions who will help develop the certification standard as it evolves.

The company is also working on creating a self-harm standard to determine an AI system’s likelihood of producing content that could encourage people to hurt themselves, Dattani said. He said such research can then inform new safety practices for AI developers and, in turn, an insurance underwriting process for covering such risks.

“Pricing the risk correctly is how insurers make profit. Once a policy is issued, they don’t want to pay out claims. I mean, of course, they will honor their contracts, but they’re going to be looking for how to help their clients avoid losses,” said Cristian Trout, an AI policy researcher at AIUC. “Insurers are also facing an uncertain and new risk, so they want to get clarity on that.”
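
Trout’s point about pricing comes down to a standard actuarial identity: expected loss equals claim frequency times average severity, and the premium is that figure grossed up by a loading for expenses, uncertainty and profit. A minimal sketch in Python, with every figure assumed purely for illustration:

```python
# Standard actuarial pricing sketch: premium = expected loss x (1 + loading).
# All figures are illustrative assumptions, not real AI-agent loss data.

def expected_annual_loss(claim_frequency: float, avg_severity: float) -> float:
    """Expected loss per policy per year: claims per year x average cost per claim."""
    return claim_frequency * avg_severity

def premium(claim_frequency: float, avg_severity: float, loading: float = 0.4) -> float:
    """Pure premium grossed up by a loading for expenses, uncertainty and profit."""
    return expected_annual_loss(claim_frequency, avg_severity) * (1 + loading)

# Assumed: one claim every 50 policy-years (0.02/yr), averaging $250,000 per claim.
print(f"annual premium: ${premium(0.02, 250_000):,.0f}")  # $7,000
```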

AIUC’s insurance policies cover up to $50 million in losses caused by AI agents, including hallucinations, intellectual property infringement and data leakage.

Dattani said he’s inspired by Benjamin Franklin’s approach to mitigating fire risks in the 1700s by co-founding the Philadelphia Contributionship — the oldest property insurance company in the United States — which became a precursor to modern building codes by requiring certain safety standards for buildings to be insurable.

Demand rises as traditional insurers back away

In April, the Toronto-based AI risk assessment company Armilla similarly began offering specialized insurance to customers who use AI agents. Armilla’s comprehensive AI liability coverage spans performance shortfalls, legal exposures and financial risks associated with an enterprise’s large-scale adoption of AI.

CEO Karthik Ramakrishnan said that his clients include enterprises of all sizes and that demand has been rising on a near-daily basis.

“AI is one of the most democratic technologies. It’s getting adopted by every type of company, every type of domain, from retail, manufacturing, banking,” he said. “And so the ask for this insurance is coming from across the board.”

Some insurance companies, he said, are excluding AI coverage entirely out of “fear of the unknown.” But Armilla, which started off working with businesses to evaluate their AI models for vulnerabilities, is eager to fill that gap using its AI experts and the data they’ve collected over the years.

Ramakrishnan predicted that in just a few years, AI insurance will become its own thriving market. In fact, Deloitte posits it could be a $4.8 billion market by 2032 — though Ramakrishnan said he believes that figure is an underestimation.

Munich Re, the global insurance company headquartered in Germany, introduced its AI insurance as early as 2018, before the generative AI boom. In recent years, demand has risen significantly.

“The rise of generative AI also created an awareness of the risk that those tools bring with them, and this is something which we have seen to be positively reflected in the demand that we are seeing,” said von Gablenz, who leads that division.

Nowadays, he said, Munich Re’s aiSure product primarily covers AI hallucinations, and the company is also looking into how it can cover IP infringement claims and other types of risks. To quantify AI-related risks, the company relies more on model testing results than on the historical loss data an insurer might typically use to price its policies.
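
As a hedged sketch of what pricing from model testing could look like: run the agent through an evaluation set, estimate its hallucination rate, and price against a conservative upper bound because the eval sample is finite. Both the approach and the numbers below are assumptions for illustration, not Munich Re’s actual method.

```python
# Sketch: pricing from model test results instead of historical loss data.
# Estimate the failure (hallucination) rate on an eval set, then price off
# a conservative upper confidence bound. Assumed numbers, for illustration.

import math

def failure_rate_upper_bound(failures: int, trials: int, z: float = 1.96) -> float:
    """Point estimate plus a normal-approximation 95% confidence margin."""
    p = failures / trials
    return p + z * math.sqrt(p * (1 - p) / trials)

# Assumed eval: 12 hallucinations observed in 10,000 test queries.
p_hi = failure_rate_upper_bound(12, 10_000)

# Assumed exposure: 1,000,000 production queries/year, $500 average cost per failure.
priced_annual_loss = p_hi * 1_000_000 * 500
print(f"failure rate (upper bound): {p_hi:.4%}")
print(f"priced annual loss: ${priced_annual_loss:,.0f}")
```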

Ramakrishnan and von Gablenz compared the nascent AI insurance industry to the trajectory of cyber insurance, which started off as a small, niche offering before ballooning into a multibillion-dollar market of its own.

A mixed approach to regulation

Economist and AI researcher Gillian Hadfield has been a longtime advocate for a mixed approach to AI regulation that combines government regulation with market forces. Governments alone, she said, lack the expertise and speed to keep up with rapid technological advancements.

“We really want to recruit the capacity of markets to solve hard problems, to attract investment, to attract talent, to attract risk capital, to attract innovation,” Hadfield said. “And that hard problem in the AI context is: How do we actually get our AI systems to do the things we want them to do, to do the good things and not the bad things?”

Hadfield is working with Fathom, a nonprofit AI safety group, to propose state legislation for a third-party AI governance framework that relies on what she called “independent verification organizations.”

Under that model, state or federal governments would set acceptable risk levels for AI systems and authorize private-sector evaluators to verify whether companies meet the standards. Companies could then voluntarily seek certification from the independent verification organizations to demonstrate their safety compliance.
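
The logic of that framework can be sketched in a few lines. Everything below, the threshold, the function and the numbers, is hypothetical; it illustrates the proposed division of labor, not any real system.

```python
# Hypothetical sketch of the proposed verification framework: government
# sets the risk ceiling, authorized private evaluators measure, and
# certification requires both. Names and numbers are invented.

REGULATOR_MAX_RISK = 0.001  # assumed government-set acceptable risk level

def certify(measured_risk: float, evaluator_authorized: bool) -> bool:
    """Certification requires an authorized evaluator and a passing measurement."""
    return evaluator_authorized and measured_risk <= REGULATOR_MAX_RISK

print(certify(measured_risk=0.0004, evaluator_authorized=True))   # True: compliant
print(certify(measured_risk=0.0025, evaluator_authorized=True))   # False: over the ceiling
print(certify(measured_risk=0.0004, evaluator_authorized=False))  # False: unaccredited evaluator
```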

In that vein, she said, insurance could play a role in encouraging companies to adhere to government or third-party safety rules by making it a requirement for coverage.

And Fathom co-founder Andrew Freedman thinks insurance will be a powerful incentive, as many businesses have remained hesitant to adopt AI agents without some sort of assurance that things won’t go wrong and leave them footing the bill.

“We’re going to have to see a marketplace of verification technology grow up alongside the capability market,” Freedman said. “And I think we’re in the infancy of that. But to be fair, I also think we’re in the infancy of AI — or at least AI hitting society.”