Emerging risks | Growth Opportunities | APAC Insurance

Thursday, January 8, 2026

Feature

Surge in AI-driven fraud tests APAC insurers

Rising sophistication and volume of AI-driven claim submissions puts upward pressure on costs and premiums.

(Re)in Summary

• Insurers in APAC face increased fraud due to AI tools creating fake claims, with deepfake fraud rising by 194% in the region.
• The growing use of AI in fraudulent activity, from manipulated photos to fake documents, strains insurers, raising costs and processing times and putting upward pressure on rates.
• Experts stress the need for updated detection techniques, combining AI-powered tools and human review for effective fraud prevention.
• A unified approach involving machine learning, intelligence sharing, and employee training is crucial to countering evolving AI-driven fraud.

Fraud, a story as old as time itself, is being turbocharged by AI at the very moment claims detection increasingly relies on the same technology, prompting APAC insurers to strengthen their fraud defences.

An Instagram post by the page chatgptricks in April showed how users could use ChatGPT-4o’s image generation feature to fabricate car accident photos and even generate receipts, amplifying the potential for fraudulent claims and posing a significant challenge to insurers.

“False visual evidence, such as fake accident scenes or manipulated injury footage, can often be indistinguishable from genuine content.”

Dennis Liu

Chief Technology Officer at Etiqa

In motor insurance, where fraud is particularly prevalent, generative AI (genAI) can create fake accident scenes and manipulated footage that is hard to distinguish from reality, says Dennis Liu, Chief Technology Officer at Etiqa. “False visual evidence, such as fake accident scenes or manipulated injury footage, can often be indistinguishable from genuine content,” Liu tells (Re)in Asia.

Deepfake-enabled fraud rose by 194% in APAC, making up about 7% of all fraud in 2024, according to Erik Bleekrode, Head of Insurance at KPMG China and Asia Pacific. It’s a small but rising number.

And in developed economies, deepfake fraud is surging. South Korea saw a staggering 735% increase in deepfake fraud cases, and Singapore recorded a 207% surge in identity fraud attempts, according to a 2024 Sumsub report. “Major carriers are reporting a rise in digitally manipulated claim submissions,” Bleekrode adds.

Cases of claimants using technology to fabricate evidence—fake repair invoices, fabricated engineer reports and altered photos—are increasing. And they all come at a cost. For every Singapore dollar lost to fraud, firms in the APAC region incurred an additional cost of SG$3.95, a 2023 LexisNexis Risk Solutions survey found, due to internal labor costs, legal fees, recovery expenses, and costs associated with replacing goods.

“Every fraudulent claim represents a potential payout loss, but even suspicious claims that get flagged have costs,” Bleekrode said.

Opportunistic motor and property insurance scams detected in Australia totalled around A$560m (US$362.6m) in 2023, said the Insurance Council of Australia, with approximately A$400m lost to undetected fraud. On average, insurers spend three times as long processing a claim suspected of fraud, straining claims departments, according to Bleekrode. “Such losses ultimately get passed on to policyholders via higher premiums,” he said.

“Major carriers are reporting a rise in digitally manipulated claim submissions.”

Erik Bleekrode

Head of Insurance at KPMG China and Asia Pacific

Insurers under pressure

Mounting AI sophistication is creating unprecedented pressure on insurance providers as fraud becomes ever more accessible.

“Documents can be manipulated to the point where they are virtually indistinguishable from authentic ones,” says Liu. “Techniques such as deepfakes, AI-generated text and image manipulation can alter key details like dates, logos and signatures without leaving obvious signs of tampering.”

Fraudsters also have tools that give them the inside track on how insurers work, simply by asking generative AI questions.

“You can find information about claims and policies and how insurers rate risks and how they handle claims by asking ChatGPT questions.”

Kaye Sydenham

Product Owner, Anti-Fraud at Verisk

“You could actually ask ChatGPT to draft you a medical report if you give it some information,” Kaye Sydenham, Product Owner, Anti-Fraud for Verisk Claims UK, tells (Re)in Asia.

The availability of information to scammers has also meant that they now have insight into how claims are handled, information they wouldn’t have had before, she adds.

“You can find information about claims and policies and how insurers rate risks and how they handle claims by asking ChatGPT questions,” says Sydenham. “It’s an emerging threat, one that we need to watch very closely, because it’s going to become worse.”

Scale presents another problem. “With Fraud-as-a-Service toolkits available on the dark web, even non-experts can generate realistic fake documents,” warns Chad Olsen, Head of Forensic Services for KPMG China.

For example, Olsen said a forged passport image was resubmitted more than 2,500 times with small changes to names, addresses or even hairstyles to evade detection algorithms. “Such subtle tweaks across many copies make it very hard for conventional fraud filters to catch a pattern. The more variations produced, the more ‘authentic’ each version can appear in isolation,” he noted.

Fake documents often enter systems as unstructured data and evade simple database checks, meaning they may not trigger a red flag, Olsen adds. “Traditional manual verification [such as visual inspection or cross-checking paper records] is increasingly inadequate here,” he explained.
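The resubmission pattern Olsen describes is, in principle, easier to catch when documents are compared against each other rather than inspected one at a time. The sketch below is purely illustrative rather than a description of any insurer's actual system: it uses the open-source imagehash library to compute perceptual hashes of submitted document images and flags new submissions that fall within a small Hamming distance of previously seen ones, one common way to surface near-duplicates that differ only by small edits. The file paths and threshold are hypothetical.

```python
# Illustrative only: near-duplicate screening of submitted document images
# using perceptual hashing. Paths and the threshold are hypothetical.
from pathlib import Path

import imagehash
from PIL import Image

HAMMING_THRESHOLD = 8  # assumed cut-off; tuning depends on image size and quality

def build_index(known_paths):
    """Hash previously seen document images (e.g. prior claim attachments)."""
    return {p: imagehash.phash(Image.open(p)) for p in known_paths}

def screen_submission(new_path, index, threshold=HAMMING_THRESHOLD):
    """Return previously seen images whose perceptual hash is close to the new one."""
    new_hash = imagehash.phash(Image.open(new_path))
    matches = []
    for known_path, known_hash in index.items():
        distance = new_hash - known_hash  # Hamming distance between the two hashes
        if distance <= threshold:
            matches.append((known_path, distance))
    return sorted(matches, key=lambda m: m[1])

if __name__ == "__main__":
    index = build_index(Path("previous_documents").glob("*.png"))
    hits = screen_submission("incoming_passport_scan.png", index)
    if hits:
        print("Possible near-duplicate of earlier submissions:", hits)
```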

“Such subtle tweaks across many copies make it very hard for conventional fraud filters to catch a pattern.”

Chad Olsen

Head of Forensic Services at KPMG China

Insurers must recognise the environment in which they operate, according to Stuart Lewis, Regional Head of Claims Asia at RGA. “Detection techniques need to be updated,” he said.

AI-powered claims analysis and advanced image and video forensics give insurers tools to detect manipulated evidence. Natural language processing can examine text-based documents for inconsistencies, and enhanced identity verification can support document authentication. Together, these form multilayered defences against sophisticated AI fraud.
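As a much-simplified illustration of the document-consistency idea, the hypothetical sketch below uses plain rule-based extraction (real deployments would rely on trained NLP models rather than hand-written patterns) to pull dates and amounts out of an invoice's text and check them against the claim's own metadata, so that a repair invoice dated before the reported loss, for example, gets flagged for closer review. All field names, formats and thresholds are invented for the example.

```python
# Illustrative rule-based stand-in for NLP consistency checks on claim documents.
# Field names, formats and thresholds are hypothetical.
import re
from datetime import date

DATE_PATTERN = re.compile(r"\b(\d{4})-(\d{2})-(\d{2})\b")           # e.g. 2025-11-03
AMOUNT_PATTERN = re.compile(r"\b(?:SGD|SG\$)\s?([\d,]+(?:\.\d{2})?)\b")

def extract_dates(text):
    return [date(int(y), int(m), int(d)) for y, m, d in DATE_PATTERN.findall(text)]

def extract_amounts(text):
    return [float(a.replace(",", "")) for a in AMOUNT_PATTERN.findall(text)]

def consistency_flags(invoice_text, loss_date, claimed_amount):
    """Return human-readable reasons this document deserves a closer look."""
    flags = []
    for d in extract_dates(invoice_text):
        if d < loss_date:
            flags.append(f"invoice date {d} precedes reported loss date {loss_date}")
    amounts = extract_amounts(invoice_text)
    if amounts and max(amounts) > claimed_amount * 1.1:
        flags.append("largest invoice amount exceeds the claimed amount by more than 10%")
    return flags

if __name__ == "__main__":
    sample = "Repair completed 2025-10-01. Total due SG$ 4,820.00."
    print(consistency_flags(sample, loss_date=date(2025, 10, 15), claimed_amount=3000))
```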

“The focus is on solutions that combine detecting anomalies within actual documents that indicate fraud risk, along with the use of wider analytics models,” said Lewis.

But even as insurers find a wide range of tools at their disposal, they should ensure they have partners that specialise in various forms of AI, document fraud and on-the-ground investigations.

They should also look to other industries – such as banks – that face similar threats, Lewis adds. “We need to recognise that this might require different skillsets from what we have traditionally needed from our investigation partners.”

“The focus is on solutions that combine detecting anomalies within actual documents that indicate fraud risk, along with the use of wider analytics models.”

Stuart Lewis

Regional Head of Claims Asia at RGA

What can the industry do?

The scale, sophistication and volume of AI-driven insurance fraud are forcing an evolution in claims verification, one that comes amid rising demand from policyholders for quick claims handling.

“The expectations of customers now are so high — they want their claim paid quickly, they want a decision made,” says Sydenham. “And I think if insurers have to go back to a manual process, where you’ve got to wait for someone to physically come out on every claim, customers will take their business elsewhere.”

The industry has no choice but to move towards greater automation to quickly identify fraud and handle claims, Sydenham adds. “Most customers are genuine, so you don’t want to make it really complicated, but you do need checks in place to identify the cases where there are concerns and pull them out to take a closer look.”

“I think if insurers have to go back to a manual process, where you’ve got to wait for someone to physically come out on every claim, customers will take their business elsewhere.”

Kaye Sydenham

Product Owner, Anti-Fraud at Verisk

Insurers need a “trust but verify” mindset to strike the right balance between accepting information and implementing appropriate verification, says Lewis. “This means carefully considering what information to accept, how to accept it and where to incorporate suitable checks to ensure accuracy,” he notes.

Human review will always be required as a sense check, said Leah Hewish, Property Insurance specialist and partner at Clyde & Co. “The use of technology is imperative but will always be complemented by a sense check of details – place names, timeframes and even weather details – which are used to verify whether a loss or event occurred as claimed, or whether a piece of evidence can be relied upon,” she said.

“The use of technology is imperative but will always be complemented by a sense check of details.”

Leah Hewish

Property Insurance specialist and Partner at Clyde & Co

Even then, claims professionals still face challenges. A survey of 200 claims handlers commissioned by insurtech Sprout.ai found that 65% have seen a rise in fraudulent claims since 2021, with 83% suspecting that at least 5% of these claims involve AI manipulation. The survey found that 93% of adjusters believed fraudsters focus on low-value claims to fly under the radar of manual reviewers.

Without technological assistance, human reviewers alone cannot detect patterns across numerous claims. Insurers should understand that the right tools can help reduce fraud rates as fraud becomes more sophisticated, said Roi Amir, CEO of Sprout.ai. “Tools that support both efficient fraud detection and faster processing of genuine claims can help insurers strike the right balance,” he added.

Olsen said human reviewers still carry an edge. “Manual review absolutely helps in fraud detection – especially for complex cases that require discernment or cross-checking details that an algorithm might not have in its data,” he added.

“Tools that support both efficient fraud detection and faster processing of genuine claims can help insurers strike the right balance.”

Roi Amir

CEO of Sprout.ai

A combination of AI and human review is the best defence, said Olsen. “AI can tirelessly scan for anomalies and present a shortlist of high-risk cases, and then human investigators can apply critical thinking to confirm fraud and filter out any AI false alarms,” he explained.
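A minimal sketch of that division of labour is shown below, assuming claims have already been converted into numeric features (claim amount, days since policy inception, prior-claim count and so on). It uses scikit-learn's IsolationForest purely as a stand-in anomaly model; none of the experts quoted here describe this specific approach. The model shortlists the most anomalous incoming claims, and only that shortlist goes to human investigators.

```python
# Illustrative triage loop: an anomaly model shortlists high-risk claims,
# which are then routed to human investigators. Features are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Stand-in feature matrix: [claim_amount, days_since_policy_start, prior_claims]
historical_claims = rng.normal(loc=[2000, 400, 1], scale=[800, 200, 1], size=(5000, 3))
incoming_claims = rng.normal(loc=[2000, 400, 1], scale=[800, 200, 1], size=(200, 3))
incoming_claims[:5] = [[15000, 5, 6]] * 5  # a handful of unusual submissions

model = IsolationForest(random_state=0).fit(historical_claims)
scores = model.score_samples(incoming_claims)  # lower score = more anomalous

REVIEW_BUDGET = 10  # how many claims investigators can examine per batch
review_queue = np.argsort(scores)[:REVIEW_BUDGET]

for idx in review_queue:
    print(f"claim {idx}: anomaly score {scores[idx]:.3f} -> send to human review")
# Remaining claims continue through automated straight-through processing.
```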

Insurers must remain ahead of the curve by investing in machine learning, fostering industry-wide intelligence sharing, and equipping employees to identify and counter emerging AI threats effectively, to provide a “unified defence”, according to Liu.

“Protecting against AI-driven fraud requires more than just adopting the latest technology – it involves a shift in organisational mindset,” Liu added. “Changing the status quo is not a one-man or one-machine job. It is a collective responsibility.”

“Changing the status quo is not a one-man or one-machine job. It is a collective responsibility.”

Dennis Liu

Chief Technology Officer at Etiqa
