Insurers, insurtechs must lift the lid on AI black boxes, experts warn

Industry must ensure AI models are transparent, explainable, and justifiable, say panellists.

(Re)in Summary

• Black-box AI solutions are unacceptable, as insurers and insurtechs need to be able to explain every aspect of their AI models to gain stakeholder approval, says Munich Re’s Vice President of Business Development in Asia.
• AI models can and should be customised to each insurer’s risk appetite and culture rather than being uniform.
• Human underwriters will remain essential for complex cases while AI transforms routine tasks into more strategic customer-focused work.

Insurtechs and insurers exploring AI solutions for customer journeys and underwriting face a critical challenge: ensuring their models are transparent, explainable, and justifiable, said experts at Insurtech Insights Asia Conference 2024.

“We can’t have AI solutions that are in a black box,” said Henry Wong, Vice President of Business Development in Asia for Munich Re. “You have to explain to your stakeholders in the end to get that project implemented [and] to get funded. And you probably have other stakeholders who are challenging you on that decision.”

“We need to explain every bit of the AI model in terms of the assumptions taken and what data we’re using in the training of the model. It’s a very comprehensive exercise,” added Wong.

Explainability is fundamental to risk management. “As a risk manager, we have to have [explainable AI] as a central key to our whole solution set,” said Wong. “For us to offer solutions to insurers, they need to understand how the AI model is performing.”

“As a provider, you need to explain the performance of the model and justify their decisions. In sensitive cases [like] underwriting, you better have a good answer.”

Henry Wong

Vice President of Business Development in Asia at Munich Re

This push for transparency aligns with recent regulatory developments. Hong Kong has issued a policy statement requiring financial institutions to develop AI governance strategies, while regulators in the US and Europe are examining potential model bias against minorities.

“As a provider, you need to explain the performance of the model and justify their decisions. In sensitive cases [like] underwriting, you better have a good answer,” Wong added.

Bespoke standardisation

While AI could standardise underwriting processes, a one-size-fits-all approach clashes with the diverse risk appetites of different insurers. Customised data must play a pivotal role, said Wong. “[A model] has to reflect your risk appetite, your product, your agent culture, and how strict you are in taking the underwriting guidelines,” he explained.

The value of AI lies in improving operational efficiency and decision-making while complementing underwriters’ expertise, said the panellists. This becomes particularly significant in risk stratification and early detection of risk factors.

“[AI can help in] differentiating high-risk individuals versus low-risk individuals, and the insurers may decide not to proceed with the sales process for high-risk individuals,” said Rebecca Zhang, Head of Regional Partnerships, Innovation and Product Development in fast-growth markets at reinsurer SCOR.

“With this kind of advancement, we definitely can see in the long run how AI can help with claim costs for the insurer and provide much better risk stratification from a medical basis,” added Zhang.

Privacy and security

Still, the adoption of AI in insurance brings its own set of challenges. Data privacy and security are shaping up to be key concerns in an increasingly digital and personalised industry.

Brian Lin, Senior Consulting Manager for Strategy and Risk Management at Sia Partners, emphasised the growing importance of AI risk assessments, as insurers need to understand data usage and model biases.

“Are we using high-risk models [or] low-risk models? How many volumes of data are we using? And how many people are we tracking? Once you have these risk factors, we will then have some mitigation factors,” said Lin.

The more insurers rely on AI models, the more they need to worry about keeping their data secure. “What [bad actors] will do with the AI is that they will try to mess it up with the data. They will try to tamper with the data,” said Ruiwen Wan, Cyber Risk Team Lead at Zurich Insurance. “When there are inaccuracies with training and tuning the model, we get a bad result.”

“Privacy should be a business issue, instead of a security or compliance issue.”

Ruiwen Wan

Cyber Risk Team Lead at Zurich Insurance

Wan outlined additional security concerns, including exploitation attacks and data leaks that could compromise training data, alter models or manipulate their outputs.

“How we prevent all this will be very similar to how we protect our data at the very first stage. We classify what security and protections [are] necessary, and then we go into the data protection policy,” she added. “Privacy should be a business issue, instead of a security or compliance issue.”

This is a perspective backed by statistics. In 2023, the average data breach cost organisations US$4.45 million, and 83% of organisations have experienced multiple breaches. To address these risks effectively, insurers need comprehensive data protection frameworks built on strong governance and clear policies, said Wan.

The underwriter’s expanding view

Despite the risks, AI remains important in enhancing underwriters’ business impact, said Karina Au, Chief Underwriting Officer at HSBC.

“We have so many nationalities of people who come and purchase insurance. We have high-level businesses that we are underwriting. And we have complex type histories of the client,” Au said. “We need to enhance our skill and uplift our underwriting authority in order to underwrite 50 million, 100 million cases and work closely with our reinsurer to get more capacity.”

“You’ll no longer be sitting at a desk looking at underwriting guidelines case by case, but you’ll get the opportunity to influence the customer journey.”

Henry Wong

Vice President of Business Development in Asia at Munich Re

HSBC has already embraced this transformation by implementing predictive modelling that analyses customer risk through multiple data points – including claims history, demographics, lifestyle information and financial data. This approach has proven especially successful in streamlining underwriting for simpler products like medical insurance.

“We can’t just keep hiring underwriters to review those administrative [papers],” Au said. “[AI] has helped us flag abnormal results so that our underwriter can do more complex cases and make more accurate decisions.”

It is a shift that could be transformative for the profession, according to Wong. With AI handling routine tasks, the underwriting role will no longer be as mundane. “You’ll no longer be sitting at a desk looking at underwriting guidelines case by case, but you’ll get the opportunity to influence the customer journey.”
