- Key insight: Insurers are redefining liability through generative AI policy exclusions.
- What's at stake: Unchecked AI exposures could prompt regulatory penalties, class actions and concentrated losses.
- Forward look: Startups and larger insurers will continue to offer specialized policies for gen AI.
Source: Bullets generated by AI with editorial review
As banks and other businesses scale up their use of generative AI, and related lawsuits pile up, a shift in the insurance landscape may change how they insure themselves against this risk.
Several companies already have been sued for problems stemming from their use of gen AI. Deloitte Australia this month partially refunded the Australian government for an AI-generated report it sold that included errors, made-up quotes and fabricated academic references. Wells Fargo has been sued for algorithm-driven discrimination. The opposite problem has led to regulatory action, too: The Securities and Exchange Commission and the Department of Justice have penalized companies for claiming they were using AI when they actually weren't.
Insurers are working to avoid liability for risks related to companies' generative AI usage. Verisk, a data analytics and technology vendor to hundreds of large property and casualty insurance carriers, has endorsed an optional carve-out for gen AI for its Insurance Services Office General Liability program, a set of standardized insurance forms and rules that an estimated 82% of property and casualty insurers use as a basis for their general liability policies. This change will take effect in January.
"We looked at the new technology and we introduced an exclusion that lasers out generative AI," Joseph Lam, vice president of general liability at Verisk, told American Banker.
Some insurance carriers may decide they have the in-house experts and understanding to continue to cover AI-related risks under their general liability policies, Lam said. Others may decide they don't have the expertise and adopt the exclusion for gen AI.
Several startups, including Testudo, Armilla and Vouch, see a void opening up and are offering insurance specifically to cover gen AI liability.
Meanwhile, companies are starting to express interest in this coverage. In a survey of 600 businesses conducted by the Geneva Association, a Swiss think tank, more than 90% said they would value insurance for gen AI-related risks. More than two-thirds of respondents said they would be willing to pay at least 10% more for such coverage.
What are the liability risks?
Verisk didn't have specific risk scenarios in mind when it created its exclusion, Lam said.
"It's more, hey, this is a new technology. How do we introduce additional underwriting tools to allow our customers full flexibility on how they want to approach these risks going forward?" he said.
The Geneva Association has identified a host of gen AI risks: a higher chance of algorithmic errors; the broader attack surface gen AI creates for cybersecurity attacks; job displacement that leads to worker dissatisfaction and backlash; and the generation of misleading or harmful information that can erode customer trust.
Lawsuits related to gen AI errors have begun cropping up. Deloitte Australia recently gave a partial refund for a $290,000 report it delivered to the Australian Department of Employment and Workplace Relations after an investigation revealed it contained AI-generated errors, including fake academic citations, fabricated legal references and incorrect factual claims.
Intellectual property lawsuits have been filed against companies like OpenAI, Meta and Stability AI, alleging unauthorized use of copyrighted content to train their gen AI models.
Back in 2022, Wells Fargo was hit with six separate class action lawsuits accusing the bank of algorithm-driven discrimination in its residential mortgage and refinance practices, in violation of the Fair Housing Act and the Equal Credit Opportunity Act. The plaintiffs said Wells Fargo used its "pioneering automated underwriting" system, known as CORE, without sufficient human supervision or involvement, and that CORE's algorithm and machine learning were riddled with racial bias.
Another risk is AI washing, in which companies get sued or fined for claiming they're using AI when they're not. In March, the Securities and Exchange Commission settled charges against two investment advisers, Delphia and Global Predictions, for making false and misleading statements about their purported use of artificial intelligence. The firms agreed to pay $400,000 in total civil penalties.
In April, the SEC, the U.S. Department of Justice and the U.S. Attorney's Office for the Southern District of New York filed actions against Albert Saniger, the founder and former CEO of Nate, alleging that he made false and misleading statements to investors about Nate's purported AI technology. Saniger marketed Nate as a cutting-edge mobile shopping app that worked "like magic" using AI, machine learning and neural networks. But according to the SEC and DOJ, the transactions were being processed manually by contract workers in foreign countries.
"I do see severe consequences for AI washing, especially if some of these smaller companies just jump on the bandwagon," said Jason Bishara, financial lines practice leader at NSI Insurance Group in Miami Lakes, Florida. "We're seeing claims happening now on AI washing, and I think it's only the beginning."
Startups jump in
A few startups see opportunity. One, Testudo, is offering a gen AI liability policy backed by Lloyd's of London. Testudo's coverage aligns with the Verisk exclusion, according to the company. "Companies want zero gaps, especially with a new and evolving technology like AI," a Testudo spokesman said.
Testudo can conduct an external review to determine and price an enterprise's AI risk. This includes reviewing all existing AI-related litigation to determine real-world market liability.
Testudo founder George Lewin-Smith was working for Goldman Sachs on the West Coast when OpenAI released ChatGPT to the public.
"The GPT moment happened, and the firm was very keen to adopt as much AI technology as quickly as they could," he said. "Implementing this technology is understandable from a technological perspective, and there's loads of use cases. The key issues were the compliance, the risk, the regulatory scrutiny which banks and institutions are under."
This gave him the idea of creating Testudo.
"Banks and enterprises across all the industries buy insurance for cyber risk, directors and officers risk and tech risk," Lewin-Smith said. "So our vision was that there should also be a category of product for generative AI as well."
Testudo's insurance will cover lawsuits that stem from the use of gen AI, such as discrimination suits and intellectual property claims, including copyright, patent and trademark infringement.
Testudo has been tracking generative AI-related lawsuits daily. "There's been kind of an exponential increase," Lewin-Smith said.
IP infringement is a key risk, he said.
"A lot of these models were trained on copyrighted material, you're essentially integrating a technology that is defective from your initial use because it has already got this copyright risk within it," Lewin-Smith said. This risk could creep into departments in a bank that use the models, like software development and marketing.
Lending and credit scoring are high-risk uses of gen AI, he said. Testudo will cover legal costs, damages and settlements associated with these risks, as well as personal injury, bodily injury, defamation, libel, slander and data privacy claims.
Testudo monitors generative AI-related lawsuits and feeds them into a spreadsheet of federal, state and global generative AI lawsuits.
"That gives us a very unique view of what is causing litigation risk in the market, and it updates on a daily basis," Lewin-Smith said. It built models based on that data to price the liabilities of such actions. "Once you have that map of the real-world risk, we essentially can say, well, we're going to underwrite this or not based on that."
Testudo provides the underwriting, pricing and technology for the insurance. Lloyd's of London provides the capacity and takes the risk on its balance sheet.
The initial policy will cover up to $10 million of liability risk.
Toronto-based Armilla AI, which is also backed by Lloyd's, offers AI liability insurance covering model errors, hallucinations, regulatory violations and data leakage, among other exposures. It works with banks and fintechs today, according to CEO Karthik Ramakrishnan.
"We provide a non-intrusive, external evaluation that quantifies model reliability and governance in an evidence-based way," Ramakrishnan told American Banker. "It's very similar to what's already happened in cyber insurance, where underwriters evolved from questionnaires to penetration tests and continuous monitoring to assess security posture properly. All types of insurance — from home to auto to climate and fire risk — are adopting a data-driven, active underwriting approach. We're applying that same rigor to AI by building the actuarial foundation for this next wave of risk."
Vouch, a San Francisco insurance broker, says it covers claims of algorithmic bias and IP infringement, defense costs for investigations into AI-specific regulatory violations, losses caused by AI products or algorithms, and damages from services provided by AI.
NSI Insurance Group is likely to develop gen AI liability coverage, according to Bishara.
Other, larger providers may come out with similar policies, Lam said.
"One of the interesting things about the insurance industry is that, if there's an appetite for it, if there are people who are truly experts in these niche exposures, there's always going to be other players," he said. "If a main carrier decides to exclude it, there's going to be other players that will come in and fill in those gaps."