Claims professionals need to adapt quickly as AI evolves, because the same technologies used to fight claims fraud are also being used to perpetrate it, according to experts.

Executive Summary

As generative AI takes the insurance industry by storm, claims experts share how it can be used as a force for both good and evil—for detecting fraudulent claims as well as creating them. They say the key to staying ahead of the criminals who are using AI is to invest in continuous employee training, keep an eye out for traditional red flags of claims fraud, and stay on top of the latest technological advancements.

“It’s truly become an arms race,” said Shane Riedman, president of anti-fraud analytics at Verisk. “It’s an arms race of good versus evil.”

He explained that while insurers are continuously working to identify and defend against the risk, criminals are working just as hard to leverage continuously changing AI technology to increase risk.

“Verisk has and we continue to invest in developing and applying technologies that can rapidly identify these frauds,” he said. “But the advancement of AI-generated content, especially in audio and video, really continues to move the goal post for us.”

Attestiv CEO Nicos Vekiarides said that while the insurance industry should be proud of how many claims are now settled on photos alone, that efficiency also raises the risk that those photos have been manipulated or altered. Attestiv provides AI-powered digital content analysis and forensics, working to identify deepfakes, fraud and cyber threats in photos, videos and documents.

“We’re reaching the point where this type of fraud detection has to exist not only on the claim side but also on the underwriting side because you have to validate that these assets or whatever is being insured is real to begin with. So, I think that’s really the next place where this is going to hit.”

Nicos Vekiarides, CEO, Attestiv

“We are seeing fraudulent images that are being used as part of insurance claims—and a lot of them,” he said. “Previously, these were being done via Photoshop and other kinds of more manual ways of editing photos. But what’s happened with generative AI is now there are tools that are accessible that make it much easier to edit photos to exaggerate damage and effectively create something that’s very difficult to detect with just the naked eye.”

Detecting AI-Related Claims Fraud

So, how do claims professionals go about detecting altered images?

To start, insurance companies must ensure they are continuously updating their deepfake detection tools, said Sam Krishnamurthy, chief technology officer at Turvi. Turvi is a new SaaS technology provider for property/casualty claims, initially developed in-house at Crawford & Company.

“Deepfake technology is evolving at such a rapid pace that the major challenge for insurance companies will be adopting the technology themselves, rather than relying on traditional, outdated methods, which will no longer be sufficient to detect anomalies indicative of falsified information,” he said. “For instance, the traditional approach for pictures is to examine the metadata, particularly the geolocation, and match it to the property owner’s location. However, those committing fraud are now using deepfake technology to alter the geolocation of images. As a result, traditional methods will likely fall short in automatically detecting these manipulations, making it crucial for insurance companies to adopt more advanced, evolving technologies.”
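The traditional metadata check Krishnamurthy describes can be sketched in a few lines. This is an illustrative example only, not any vendor's actual system: it converts a photo's EXIF-style GPS coordinates (degrees/minutes/seconds) to decimal degrees and flags the claim if the photo was taken far from the insured property. The function names and the 1 km threshold are hypothetical.

```python
import math

def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style degrees/minutes/seconds plus N/S/E/W ref to decimal degrees."""
    value = degrees + minutes / 60 + seconds / 3600
    return -value if ref in ("S", "W") else value

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def geolocation_red_flag(photo_gps, insured_gps, threshold_km=1.0):
    """True if the photo's tagged position is suspiciously far from the property."""
    return haversine_km(*photo_gps, *insured_gps) > threshold_km

# Example: photo tagged in downtown Chicago, insured property in Evanston, IL.
photo = (dms_to_decimal(41, 52, 41.0, "N"), dms_to_decimal(87, 37, 46.0, "W"))
prop = (42.0451, -87.6877)
print(geolocation_red_flag(photo, prop))  # True: photo tagged roughly 19 km away
```

As Krishnamurthy notes, this check is exactly what fraudsters now defeat by rewriting the geolocation tags themselves, which is why it can only be one signal among many.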

“It’s an arms race of good versus evil.”

Shane Riedman, President of Anti-fraud Analytics, Verisk

Indeed, Riedman said expecting an adjuster to carefully examine every photo or document for what are now only trace indicators of tampering or AI generation is not always feasible and can create an unrealistic workload for adjusters.

“We think that the, obviously, best defense is an application of a detection analytics program that is looking across all of the images and all of the documents, all digital media—a sort of dragnet approach to monitor for indicators of fraud,” he said.

However, these programs take time to implement and operationalize, so while insurers are on that journey, he said it’s important to double down on the basics.

“We find claims that have digital media fraud also have other suspect elements—the more traditional red flags,” he said. “So, focusing even more and being more hypersensitive to those traditional red flag indicators of claims fraud is more important than ever, and keeping current on fraud training is definitely an imperative.”

His advice to adjusters is to ensure that proper anti-fraud training is available within their companies and to think about traditional red flag indicators with every claim that crosses their desks. That means paying attention to things like multiple house fire claims, claims submitted under multiple insurance policies, or a listed witness to a vehicular accident who has been involved in prior accidents with the same victim. These are all traditional indicators of fraud that insurers should still be on the lookout for, he said, even with the emergence of generative AI technology.

“Those are really the best lines of defense as companies are on this journey to create that more robust perimeter defense,” he said.

Krishnamurthy said when examining a suspected deepfake video, one indicator to watch is whether the lip synchronization appears accurate. In images, unnatural skin tones or blurred areas around the edges can also be indicators.

“Another red flag could arise when historical data, analyzed through a predictive algorithm, doesn’t match the most recent image,” he said. “For example, if you filed a claim three years ago, we would already know what your property looks like. If someone attempts to create a deepfake and file a recent claim, we can use those past images to help validate whether the property is indeed yours.”
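One way a historical-image check like the one Krishnamurthy describes could work—this is an illustrative sketch, not Turvi's actual algorithm—is to compare a perceptual "difference hash" of the new claim photo against photos from a prior claim. Genuine re-photographs of the same property hash similarly, even under lighting changes, while a substituted or generated image usually does not. The grids below stand in for images that have already been decoded and downscaled to 9×8 grayscale; all names and the distance threshold are hypothetical.

```python
def dhash_bits(grid):
    """Difference hash: one bit per horizontal neighbor pair (1 if left is brighter)."""
    return [1 if left > right else 0
            for row in grid
            for left, right in zip(row, row[1:])]

def hamming(a, b):
    """Number of differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

def looks_like_same_property(old_grid, new_grid, max_distance=10):
    """Heuristic: a small hash distance suggests the same underlying scene."""
    return hamming(dhash_bits(old_grid), dhash_bits(new_grid)) <= max_distance

# A 9x8 grayscale grid from a claim three years ago, and the same scene
# with a slight uniform brightness shift (what a genuine new photo of the
# same property might look like after downscaling).
old = [[(r * 9 + c) % 17 for c in range(9)] for r in range(8)]
new = [[v + 1 for v in row] for row in old]
print(looks_like_same_property(old, new))  # True: relative brightness is unchanged
```

The design choice worth noting: the hash encodes only *relative* brightness between neighboring pixels, so uniform exposure changes do not alter it, while structural changes to the scene flip many bits at once.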

He added that he has seen more educational workshops to keep adjusters continuously trained on the latest trends in fraud. These workshops also raise awareness of the AI tools available in the market, even having adjusters create deepfakes themselves to become more familiar with these tools.

“AI plays a significant role for insurers in detecting deepfakes in claims, particularly through pattern recognition, anomaly detection and forensic analysis,” he said.

At the end of the day, however, Vekiarides said better tools are always needed to detect fraud as AI evolves. And better tools are likely coming, according to Riedman. One he sees growing over the next few years is digital media provenance.

“It’s this idea of the technology embedding a trusted watermark of sorts in digital media so that carriers and other businesses and the government can instantly and reliably check an image or digital media for tampering and for source identification to determine with accuracy what is the source of this particular item that’s in front of me,” he said.

However, one of the best lines of defense—at least for now—is still human instinct.

“We will not see a replacement for the need for human-powered decisions,” he said. “That’s just not going to happen in the next few years. The ‘A’ in AI—artificial—cannot compete with the subjective reasoning skills of a trained and experienced practitioner or a trained and experienced adjuster.”

The Responsibility Is on Insurers

Riedman does see anti-fraud technology expanding to the point where it can help move meritorious claims faster and more accurately through the claims process while also protecting carriers from compliance, legal and regulatory risk.

Speaking of regulatory risk, Darcy Rittinger, chief risk officer at Cover Genius, said insurance providers already have a significant responsibility to detect and prevent fraud in the claims process, which stems from their fiduciary duties to their policyholders and shareholders and specific regulatory requirements set by government authorities. Now, detecting deepfakes will be part of those obligations to prevent fraud.

“Greater government support and partnerships to make these technologies available to insurers on a broader scale would significantly enhance their ability to detect fraudulent claims.”

Darcy Rittinger, Chief Risk Officer, Cover Genius

“There will likely be more access to deepfake detection technologies, which will become an essential part of claims platforms,” said Rittinger, whose InsurTech provides embedded protection for digital companies. “That said, these advancements are not always accessible to the insurance industry, and insurers have historically lacked the resources to develop or acquire cutting-edge tech tools independently. Greater government support and partnerships to make these technologies available to insurers on a broader scale would significantly enhance their ability to detect fraudulent claims.”

Additionally, she said regulatory incentives for the insurance sector to invest in detection technologies would be beneficial. However, she sees legislation first addressing individuals, the public and the public interest before it takes aim at insurers and the unique challenges deepfakes pose to the industry.

“While there are general laws regarding deepfakes and AI, most are not specific to the insurance industry,” she said. “Existing legislation, such as the Digital Services Act in the EU and FTC guidelines in the U.S., focus on consumer protection and privacy, but this legislation does not specifically account for the unique challenges deepfakes pose to insurers.”

Krishnamurthy said he sees this space developing as insurance companies continuously collaborate with their legal teams to explore opportunities for strengthening the legal frameworks that address deepfake fraud.

“Ongoing efforts are also needed within the industry to develop standards and share knowledge about these emerging threats,” he said.

Another space that will likely develop regarding AI fraud is in underwriting, Vekiarides said.

“I think a lot of the focus right now is on the claim side, which is interesting because that’s really where the fraud happens, but there’s also the ability to create false assets or fake assets on the underwriting side,” he said. “That’s a little trickier because I don’t think there are as many mechanisms to try to detect things on the underwriting side. For instance, could I produce a photo of a vehicle that doesn’t exist? Going all the way to life insurance, could I create a person that doesn’t exist? I can even have that person show up on a Zoom call with generative AI, so we’re reaching the point where this type of fraud detection has to exist not only on the claim side but also on the underwriting side because you have to validate that these assets or whatever is being insured is real to begin with. So, I think that’s really the next place where this is going to hit.”

***

Claims was the focus of Carrier Management’s final print issue, including articles on social inflation, litigation funding, Generative AI, the future of auto insurance and more.

Those articles are featured in CM’s fourth-quarter magazine, “Unite and Conquer: Industry Battles Social Inflation.”