AI is “going to change the way we work and think about work, and it’s either accepting that fact now or accepting it later,” said Darien Acosta, chief AI officer at Cover Whale, at Carrier Management’s InsurTech Summit 2024 in mid-May.

Harnessing the power of AI in the workplace is all about human-centered design, said Behzad Salehoun, Canada Insurance lead at Capco, a management and technology consultancy for the financial services industry, during the session on AI and Talent.

“It’s about creating interfaces and interactions that are intuitive and easy for both employees and customers to use, making sure that it enhances the user experience at the end of the day,” he said.

“The immediate opportunities for AI are implementations that complement the human workforce,” Salehoun said, such as “using AI to handle routine data entry tasks.” Insurers can use an AI copilot for customer service, claims and underwriting functions, which “frees up human employees to focus on more complex and nuanced work that requires emotional intelligence, judgment and personal interaction.”

Salehoun noted that “insurance products can be somewhat complex, and in a situation like a claim where the customer may be emotionally stressed, having a human-to-human interaction that is supported with AI lends itself to an efficient and far more personal customer experience.”

Coexistence Will Occur Naturally

Acosta believes coexistence between humans and AI in the workplace will occur naturally over time. He compared AI to the Internet and smartphones, noting how both of those technologies were once the “latest new thing” and are now completely incorporated into our daily lives.

“I can’t imagine working for a company that said I can’t use the Internet or I can’t use a smartphone to be more productive at work,” he said. “We will eventually move to a time where employees are saying, ‘I can’t imagine not using an AI copilot to be more productive at work.'”

Acosta said the “biggest win” of using AI in the workplace will be “basically allowing people who aren’t trained to be programmers or data analysts or engineers to still benefit from those skill sets.” For example, using an AI copilot can allow someone in HR to “go very deep into analyzing and drawing insights from data without the previously necessary technical ability, like literally being able to ask AI to create a distribution chart of PTO in order to identify outliers when it comes to maybe who’s abusing PTO and who isn’t”—all without knowing a single line of code.
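
For illustration, the kind of request Acosta describes could translate into a few lines of generated analysis code behind the scenes. The following is a minimal Python sketch under assumed inputs: the file name "pto_records.csv" and its columns are placeholders for whatever HR system the copilot is connected to, not anything Cover Whale actually uses.

    # Minimal sketch of the PTO analysis described above (hypothetical data).
    import pandas as pd
    import matplotlib.pyplot as plt

    # Assumed columns: employee_id, pto_days_taken (placeholder file and field names).
    pto = pd.read_csv("pto_records.csv")

    # Distribution chart of PTO days taken.
    pto["pto_days_taken"].plot(kind="hist", bins=20, title="PTO days taken")
    plt.xlabel("Days")
    plt.savefig("pto_distribution.png")

    # Flag outliers: employees more than two standard deviations from the mean.
    mean, std = pto["pto_days_taken"].mean(), pto["pto_days_taken"].std()
    outliers = pto[(pto["pto_days_taken"] - mean).abs() > 2 * std]
    print(outliers[["employee_id", "pto_days_taken"]])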

Related articles: Meet Bob, Cover Whale’s AI Employee; Are Talent Leaders, Workforces Prepared to Wade Into AI Tidal Wave?; Meet Your New Employee: Advanced AI

AI for Training and Simulation

Salehoun said that AI is “highly beneficial in simulating realistic scenarios for employee training, especially in insurance, where customer interactions can be diverse and complex.”

He said that Capco has used natural language processing and machine learning “to create virtual training agents that mimic real-world interactions. Trainees can practice handling various customer temperaments. These agents can be calm and cooperative or irate and non-compliant…which can prepare a trainee for real interactions without the risk of damaging customer relationships during their learning.”

“The technology has matured to the level where the conversation is, quite frankly, quite realistic,” Salehoun said. “We can create various scenarios and diverse situations that the employee may not frequently encounter, like a complex claim or dispute or sort of rare policy questions tailored toward a trainee’s experience level. It’s almost like creating personalized learning, and that keeps the trainee engaged and continuously learning.”
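
The mechanics Salehoun describes can be approximated with an off-the-shelf large language model. The sketch below is an illustration of the general pattern, not Capco's implementation: it assumes a hypothetical call_llm() helper that wraps whichever model the insurer licenses, and it parameterizes the role-play by temperament, scenario and trainee experience level so difficulty can be tailored as he describes.

    # Illustrative role-play training agent (not Capco's system). call_llm() is a
    # hypothetical helper wrapping whatever chat-style LLM API the insurer has licensed.
    def build_customer_prompt(temperament: str, scenario: str, experience_level: str) -> str:
        return (
            "You are role-playing an insurance customer for an employee training exercise.\n"
            f"Temperament: {temperament} (e.g. calm and cooperative, or irate and non-compliant).\n"
            f"Scenario: {scenario}\n"
            f"Calibrate difficulty for a trainee with {experience_level} experience.\n"
            "Stay in character and respond only as the customer."
        )

    def training_turn(call_llm, history: list, trainee_message: str) -> str:
        # Append the trainee's latest message and ask the model for the customer's reply.
        history.append({"role": "user", "content": trainee_message})
        reply = call_llm(history)
        history.append({"role": "assistant", "content": reply})
        return reply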

Related article: Gamified Underwriting: How Hiscox Is Transforming Training With an Underwriter Simulator (2021)

The AI training agents can also provide instant feedback, pointing out where trainees may need improvement and analyzing aspects of the conversation such as tone of voice, choice of words, sentiment and even compliance with company policies, he said.

“Ultimately, implementing AI into employee training programs can support the staff to be more prepared to handle not only the technical aspects of their jobs but also managing the human elements of customer service.”
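
As a rough illustration of that feedback step, the fragment below scores a trainee's reply for word choice and a couple of simple compliance checks. It is a toy example with made-up keyword lists, not Capco's scoring; a production system would use trained sentiment and compliance models rather than hard-coded rules.

    # Toy feedback scorer for a trainee's reply (illustrative rules only, not Capco's).
    NEGATIVE_WORDS = {"unfortunately", "impossible", "refuse"}
    REQUIRED_PHRASES = {"claim number", "thank you"}  # stand-ins for policy requirements

    def score_reply(reply: str) -> dict:
        text = reply.lower()
        tone_flags = [w for w in NEGATIVE_WORDS if w in text]
        compliance_gaps = [p for p in REQUIRED_PHRASES if p not in text]
        return {
            "tone_flags": tone_flags,              # word-choice issues to point out
            "compliance_gaps": compliance_gaps,    # required phrasing the trainee skipped
            "sentiment": "negative" if tone_flags else "neutral/positive",
        }

    print(score_reply("Unfortunately we refuse claims without a claim number."))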

AI for Talent Acquisition

Using AI for talent acquisition is a “sensitive area,” said Acosta. “To be honest, I would not trust the responsibility of building an internal model that lacks bias,” he said, adding that he would outsource that risk to a reputable vendor.

“Ultimately, all of these systems have bias in some way or another,” he said, “and the ones we think don’t have bias, it’s just they have bias we haven’t found yet.”

Acosta said that he wouldn’t use AI to filter candidates or make the actual hiring selection. But he noted that AI could be useful for automating some of the more manual aspects of the talent acquisition process, such as scheduling meetings or summarizing interview transcripts.

“I personally don’t think we’re there yet to just have an AI telling you this candidate is better than another one,” he said.
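
For the narrower uses Acosta does endorse, a transcript summary might be produced with a short prompt like the one below. This is a hypothetical sketch reusing the same assumed call_llm() wrapper as the training example above; the point is that the model condenses the transcript while the hiring judgment stays with a human.

    # Hypothetical interview-transcript summarizer; call_llm() is an assumed wrapper
    # around whichever LLM the company licenses. It deliberately does no ranking or filtering.
    def summarize_interview(call_llm, transcript: str) -> str:
        prompt = (
            "Summarize this interview transcript for the hiring team. List the topics "
            "covered, the candidate's own statements about their experience, and any "
            "follow-up questions the interviewer flagged. Do not rate or rank the candidate.\n\n"
            + transcript
        )
        return call_llm([{"role": "user", "content": prompt}])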

Legal and Regulatory Considerations

It’s important to “know what AI systems you’re working with,” said Acosta. “Know what their data and retention policies are…know whether they are collecting your data and training on it. Usually, if you are paying them with some sort of company license, you are protected in that way.”

He stressed the need to “always keep the human in the loop. What regulators fear the most is this idea of an autonomous AI agent that acts without accountability and also has some sort of inherent bias. But we can control for that by having the AI be semi-autonomous and having a human in the loop.” He said that the human is there to add accountability and “to explain the actions that the AI is taking or decisions it’s making.”

However, having a human in the loop “is not a panacea for all issues,” Acosta cautioned, noting that if that human “just starts rubber stamping all the actions that an AI does, and says, ‘Well, the AI has better understanding of the situation than I do,'” and stops questioning the AI, then that counteracts those protections.
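
One common way to implement the human-in-the-loop pattern Acosta describes is a simple approval gate: the AI proposes an action, a named reviewer must explicitly approve it, and both the proposal and the decision are logged so the action can be explained later. The sketch below is a generic illustration of that pattern, not any particular carrier's workflow.

    # Generic human-in-the-loop approval gate (illustrative pattern, not a specific product).
    import datetime

    audit_log = []

    def execute_with_review(ai_action: dict, reviewer: str, approve) -> bool:
        # The AI only proposes; a named human must explicitly approve before anything runs.
        approved = approve(ai_action)  # e.g. prompts the reviewer in a UI or ticketing tool
        audit_log.append({
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "action": ai_action,
            "reviewer": reviewer,       # accountability: who signed off
            "approved": approved,
        })
        return approved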

Watch a replay of this session and all of the panels hosted during Carrier Management’s 2024 InsurTech Summit: AI for Everything.