
Up until about 20 years ago, the insurance industry evolved gradually, mostly through regulation, precedent, and relationships.

But that changed in the mid-2000s, with the rise of insurtech. In the era of “move fast and break things,” more than one tech founder decided they could remake the insurance industry from the ground up, eliminating its inefficiencies in the process.

Many big names of that era eventually flopped. Insurance, it turns out, is harder than it looks. More than that, though, it’s meant to provide stability. It’s built on trust, and its product is essentially a promise: if something goes wrong, the insurer will offer compensation.

Stability and trust are not things you can innovate by disrupting.

But that doesn’t mean the insurance industry can’t benefit from technology like AI. It just means that the AI tools that transform insurance have to be built on a framework of trust—not just speed and automation. This is especially true in knowledge-intensive areas like underwriting, claims, and coverage analysis.

So how do we get there? By building AI systems that complement human expertise rather than seeking to displace it. Let’s examine what that looks like.

AI thrives at transactional tasks, not complex judgment work

Much of the early hype around AI in insurance focused on automation: faster processing of applications, quicker first notice of loss (FNOL) intake, reduced human intervention in workflows. That’s because AI systems excel at transactional tasks with clearly defined inputs and outputs.

Where AI struggles is in the gray areas: interpreting policy language, assessing liability, and navigating the nuance of coverage positions across jurisdictions.

These scenarios—like trying to determine whether a complex claim is covered—aren’t simple matters of processing data to get an answer. They require nuanced judgment calls. Those calls can only be made by someone with the necessary context, experience, and professional skepticism—all traits that AI systems can’t replicate.

But that doesn’t mean AI can’t assist on more complex tasks. In fact, the right tools can offer significant savings of time and energy—without automating away any sensitive decision-making.

AI systems can surface relevant facts, but they shouldn’t make decisions

When AI-powered insurtech fails, it’s often because it tries to automate the wrong work in the wrong way. You’ve probably heard the term “black box” to refer to the algorithms that power AI tools.

When a black box algorithm spits out a “solution” to a complex problem, it makes everyone uneasy—and it should. Insurance professionals shouldn’t be the mouthpiece of AI tools. AI tools should function as assistants to experienced professionals.

In the case of a complex claim, for example, a useful AI tool might surface relevant policy excerpts, relevant case law, or potential logical arguments the insurer could consider to make a decision. The AI does the work of gathering the necessary materials so the people can do the work of making complex decisions.

This isn’t just a matter of keeping humans “in the loop.” It’s a matter of designing the loop around human expertise from the outset.

Design principles for trust-centered AI in insurance

When I talk with insurance professionals eager to use AI tools to streamline their work, I hear a lot about what makes them trust an AI tool.

The following principles have emerged as strategic imperatives:

  • Transparency over magic: Users need to know how an AI arrives at a suggestion—not just what it says.
  • Control over delegation: Professionals don’t want black boxes. They want tools they can guide, refine, and question.
  • Context awareness: Systems need to reflect the nuance of the insurance domain, including jurisdictional variation, policy structure, and evolving norms.
  • Respect for expertise: When the user is the expert, they’re more likely to adopt and engage with a tool—assuming it makes them better at their job.

Put differently: AI tools shouldn’t take over the core tasks of insurance professionals’ work. They should act as helpful teammates, passing the ball closer to the net (so to speak) so the insurance professional can use their skill and judgment to take the shot.

The next generation of insurance AI will scale human impact

The promise of AI in insurance is not to replace experts but to scale their impact. The right systems can preserve institutional knowledge, reduce burnout, and enable faster, more consistent decision-making in areas that were previously bottlenecked by manual review.

But that promise will only be realized if we lead with trust, through careful design, thoughtful implementation, and a deep respect for the professionals who keep this industry running.

Modernization doesn’t have to mean disruption. In insurance, the most transformative technologies may be the ones that feel the most familiar because they were built to work with us, not around us.

Dan Schuleman is the co-founder and CEO of Qumis, a lawyer-built, AI-powered insurtech transforming how insurance professionals read and interpret policies. Before founding Qumis, Dan was Associate General Counsel at Kin Insurance, where he helped scale the company and navigate complex regulatory environments. He previously practiced insurance coverage law at Am Law 200 firms, advising insurers and policyholders on high-stakes commercial claims. Dan holds a J.D. from the University of Illinois College of Law and a B.A. with honors from Northwestern University.
