Artificial intelligence technology has continued to evolve, and it’s affecting many areas of insurance, from claims to underwriting to customer service, according to panelists at the 2025 PLUS D&O Symposium in New York City.
But has the technology developed so much that it could replace human underwriting in the next five years?
“I think it is the biggest question out there,” said Jeffrey Chivers, CEO and co-founder of Syllo, an AI-powered litigation workspace that enables lawyers and paralegals to use language models throughout the litigation life cycle.
Another way of asking this question is whether AI can develop judgment, not just in underwriting but across all business domains in which judgment is an essential part of the job, he said.
“Is there any change here with respect to a model’s ability to exercise the kind of nuanced value judgment and other types of judgments that go into a mission-critical job?” he asked. “Thus far, the answer for me has been no. If the answer is yes at some time in the next five years, I think that’s what changes everything.”
Claire Davey, head of product innovation at Relm Insurance, said, however, that major shifts are already happening in other areas of insurance that involve more administrative tasks.
“It depends on how the organization wants to deploy [AI] and utilize it,” she said. “But I think many jobs, particularly those that are administrative, are at risk of being phenomenally changed by artificial intelligence technology. It is going to be a landmark shift in commerce that we’ve seen in a generation, and insurance is no different.”
That said, she agreed that underwriting jobs are safe, for now.
“One of the key governance controls and duties with AI technology is that it does require human oversight, so while AI could perform some underwriting stages, you would hope that there is still a human reviewing its output and sense-checking that,” she said.
AI’s Underwriting Judgment
AI technology is having a material impact on the insurance industry in other ways, panelists agreed. To start, the litigation landscape is already seeing a transformation.
Within five years, there will be a lot more adoption of generative AI across legal and compliance functions, Chivers predicted. “And I think five years from now, a couple of things will be really prominent.”
He said much debate will continue to emerge around transparency and around red flags that AI surfaces within an organization.
“Do you attribute knowledge to management if you had an AI agent in the background that surfaced these various red flags or yellow flags even if nobody reviewed it?” he said. “I think the transparency that generative AI brings within a big organization is going to be a big subject of discovery litigation.”
He added that another area to watch is the degree to which companies are handing off decision-making responsibilities to AI.
“If we are in a world where companies are handing off that decision-making responsibility, it just raises a host of issues related to coverage,” he said.
This decision-making responsibility needs to be carefully considered with a human in the loop because of generative AI’s shortcomings, he said.
“It’s not a quantitative model, and it also really lacks what I would describe as judgment,” he said. “And so when I think about how do you understand these large language models and what they bring to the table in terms of artificial intelligence, I think the best way to think about it is in terms of different cognitive skills… [L]arge language models have certain cognitive skills like summarization and classification of things, translation, transcription, [but] they completely lack other cognitive skills.”
Allowing AI to take on too much decision-making can be particularly dangerous because of one of its strongest skills so far: language and rhetoric. That fluency means AI models can excel at masking the fact that they lack the judgment to operate as intelligent agents, Chivers explained.
“If you allow the large language model to generate things like plans and plans of action, it literally generates these for itself. It has some objective in mind, and it writes out 10 steps for itself as to how to accomplish that objective. And it takes each of those steps and generates ideas about how to execute it. And then it goes about, and if you give it access to other systems, it will be able to function-call against those systems and cause real-world impacts within your organization,” he said.
“At the moment, I think it would be basically insane to allow the current iteration of large language model agents to actually run wild within systems.”
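To make concrete what “function calling against systems” involves, the sketch below illustrates the plan-and-execute loop Chivers describes: the model writes its own plan, then invokes tools step by step. It is a minimal illustration under assumed names only; the `call_llm` stub, the `TOOLS` registry, and the claims example are hypothetical, not any panelist’s implementation.

```python
from typing import Callable, Dict, List

def call_llm(prompt: str) -> str:
    """Stand-in for a real language-model call (hypothetical for this sketch)."""
    if "numbered steps" in prompt:
        return "1. Look up open claims\n2. Draft a status email"
    return "draft_email | summary of open claims"

# The "other systems" an agent can reach; stubs here, real APIs in production.
TOOLS: Dict[str, Callable[[str], str]] = {
    "search_claims": lambda q: f"(stub) search results for {q!r}",
    "draft_email":   lambda body: f"(stub) email drafted: {body!r}",
}

def run_agent(objective: str, max_steps: int = 10) -> List[str]:
    """The model writes its own plan, then executes each step via tools."""
    plan = call_llm(
        f"Objective: {objective}\nWrite up to {max_steps} numbered steps."
    ).splitlines()

    log: List[str] = []
    for step in plan[:max_steps]:
        # The model decides which tool to call for this step.
        decision = call_llm(
            f"Step: {step}\nTools: {list(TOOLS)}\nReply as: tool | argument"
        )
        name, _, arg = decision.partition("|")
        tool = TOOLS.get(name.strip())
        if tool is None:
            log.append(f"skipped, unknown tool: {decision!r}")
            continue
        # A human-review checkpoint would sit here, before any action
        # with real-world impact is taken.
        log.append(tool(arg.strip()))
    return log

if __name__ == "__main__":
    for entry in run_agent("Report on open claims"):
        print(entry)
```

The human-oversight point the panelists raise maps to the commented checkpoint inside the loop: each tool invocation is where a reviewer would approve or reject the model’s chosen action before it touches live systems.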
Underwriters’ AI Judgment
Beyond the use of generative AI within underwriting, how are insurers underwriting companies that use generative AI as part of their business model?
“I think the risk profiles of insureds who are either developing or utilizing AI are shaped by the use case of that AI,” Davey said. “So depending upon what it’s been designed to do, that will influence whether its main risk factor is bias or transparency or accountability.”
She said that when Relm Insurance is underwriting an account, it’s important to ask what the AI technology is doing and where its main exposure or risk lies if it fails or something goes wrong.
“Obviously, if it’s handling or being trained on a lot of personally identifiable data, we have an issue there in terms of accountability and privacy. But if we’re looking at an AI model which may be running diagnostics—it may be trying to run forecasts or perhaps providing recommendations— we then have the issue of bias and discrimination,” she said. Relm thinks of those buckets as shaping the risk profile of the insureds, guiding underwriters in terms of what follow-up questions they’re going to ask.
Since Relm aims to provide informed capacity for emerging sectors, Davey said getting comfortable means asking questions and starting a dialogue with clients who are pushing on the frontiers of these emerging technologies.
“It is about trying to get dedicated time with those who are developing those technologies and also managing the technologies to really understand their technical capabilities, but also the governance around them,” she said. “So, it requires an investment on the client side to share their information, share their time with us. But if we can get the right information and we can get the comfort with the technology and their management topic, then we can start to provide capacity for that sector which has historically been underserved in the traditional markets.”
Julie Reiser, partner at Cohen Milstein, thinks about AI risks in terms of both misrepresentation—or AI washing, in which a company overstates the capabilities of its AI technology—and employment discrimination.
“I think the overall premise that I’m hearing across the board is that AI is iterative, that we expect people not just to engage once and create a process, but rather it’s something that you have to check in with and you have to watch each step and then say, ‘Is this creating risk?'” she said. “It’s not like every year, you can just check in, and it’ll be fine.”
For companies that are only focused on AI, there’s even more risk, and that will require more board oversight and systems in place to manage risk, according to Nick Reider, senior vice president and deputy D&O product leader for the West region at Aon. “If they don’t have those, then they’re going to have a bad time when a good lawsuit is filed,” he said.
“It’s not to say that some mega-corporation that uses AI to simplify one of thousands of processes has no responsibilities whatsoever with respect to AI. Obviously, the directors can’t bury their heads when they learn of misconduct, for example.” However, AI-specific companies will need to have a higher level of governance in place, he said.
“But no matter what, just given the regulatory landscape that’s out there right now, there is additional governance that has to be in place at these companies,” he said. “There’s a lot that goes into it.”
Indeed, in the U.S. alone, disagreement has emerged around how to define artificial intelligence and what it could achieve in the next five years, said Boris Feldman, partner at Freshfields US.
“What I’m seeing, at least in the United States, is there are camps that are really concerned about super intelligence and the end of humanity,” he said. “And then there are other camps who are more focused on the here and now of what can we promulgate with respect to how these things are used to protect against the known risks of today.”
Davey said she believes that, in the next five years, a more colorful claims landscape will emerge in terms of litigation and regulation. “I would imagine that for the underwriters here and the brokers here, it’s going to be an interesting five years of conversations with clients about their claims history,” she said.
Proactive companies will lead the charge to set these standards, Reiser added.
“There will be a proactive group of companies and a reactive group, and the proactive group is going to set the standard for what the reactive group should have done,” she said. “That will be the benchmark. It wouldn’t surprise me.”
Davey said she believes that these emerging AI technologies, although constantly evolving, are insurable.
“It just takes work, and it takes effort, and it takes research, and that requires investment and resources,” she said. “So, if we as an insurance company, but also as an insurance sector, want to remain relevant, then we have to put in that [effort] upfront to work with clients to understand them and provide the solutions.”