Insurance has always been a judgment business.
Underwriters evaluate uncertainty. Claims professionals assess context. Actuaries translate patterns into pricing. Risk managers protect the enterprise from volatility. The entire industry exists because people have learned how to make informed decisions in the face of incomplete information.
That's exactly why AI adoption in insurance is so powerful. It's also why it's so difficult.
The barrier is rarely the technology. The barrier is more often cultural.
Insurance organizations are built to value stability, discipline, capital preservation, regulatory control, and operational consistency. Those strengths have helped the industry endure for generations. But the same cultural muscles that protect an insurer from reckless decision-making can also reject transformative change before it has a chance to prove its value.
I call this the Cultural Immune System: the silent organizational reflex that mistakes progress for a threat.
In legacy industries, new technology can be treated like a foreign pathogen. If the "transplant" is not handled carefully, the organization rejects it, not because the technology is weak, but because the people inside the organization do not yet trust what it represents.
Insurance is especially vulnerable to this reflex.
To be fair, the industry's technology gap is real. McKinsey's April 2026 report on agentic AI and core modernization notes that insurers have long understood the need to transform core technologies, but have often delayed because of complexity, cost, and risk. The report aptly describes the insurance core software system as a living, socio-technical system made up of decades of embedded business rules, batch windows, custom interfaces, and data semantics. In most insurance migrations, rewriting code is only a small part of the work. The bigger burden sits in understanding rules, converting data, reconciling quality, preparing operations, and stabilizing the business after migration.
That phrase, "socio-technical system," matters. It means core modernization is not only an IT problem. It is a people problem, a process problem, a knowledge-transfer problem, and a trust problem.
The same is true for AI.
Insurers can pilot AI tools in underwriting, claims, customer service, actuarial work, distribution, and back-office operations. But pilots do not create transformation. Transformation happens when employees change how they work, managers change how they lead, and executives change how they measure progress.
McKinsey's broader 2025 report on AI in insurance makes this point directly: Only a few insurers have extracted outsized value from AI. Doing so requires a strategic, comprehensive approach that rewires the enterprise. Change management represents roughly half the effort required to secure impact from AI, with data, modeling, and integration making up the other half.
That should stop every insurance executive in their tracks.
If half the battle is change management, then culture is not a "soft" issue. Culture is implementation infrastructure.
BCG reached a similar conclusion in its 2025 insurance AI analysis. It found that many insurance AI programs stall because of organizational and individual resistance, including limited business engagement, unclear roles and responsibilities, inconsistent support, and the probabilistic nature of AI, which clashes with insurance culture. BCG found that only 7% of insurers have successfully brought their AI systems to scale.
That last point is critical. Insurance professionals are trained to explain decisions, document rationale, comply with regulation, and defend outcomes. AI introduces probabilistic recommendations, model confidence, pattern detection, and machine-generated outputs. To a technologist, that may sound exciting. To a veteran underwriter or claims leader, it can sound like risk without accountability.
This is where many AI initiatives go wrong.
Leaders announce a tool. They share a roadmap. They highlight efficiency. They talk about automation. They point to the model's predictive lift. Then they are caught off-guard when the organization responds with hesitation, skepticism, passive resistance, or quiet workarounds.
But employees are not resisting innovation in the abstract. They are reacting to an identity threat.
An underwriter wonders: Is my judgment being replaced?
A claims adjuster questions: Will this tool reduce empathy to a script?
An actuary is concerned: Can I defend this model to regulators, auditors, or the board?
A manager is unsure: Am I still accountable for decisions if the machine makes the recommendation?
A senior operations leader thinks longer-term: Are we automating a broken workflow and calling it modernization?
Those questions are rational. Leaders should treat them as design inputs, not obstacles.
Culture is 50% of organizational readiness for AI
Add culture to your implementation planning. The Culture Audit framework I put together will help. The org chart rarely tells you how change will actually move through a company. Every department has "Silent Influencers": the veteran underwriter everyone watches in the meeting, the senior analyst who knows where the data problems live, the claims manager who has seen five transformation projects come and go. If those people are not consulted, they can derail adoption without ever formally objecting.
Make them co-authors of the implementation. It'll help surface the sticky details on the front lines of workflows, and it'll go a long way toward successful adoption.
A culture-first AI strategy in insurance starts with five moves.
First, name the identity threat. Rather than using generic phrases like "fear of change," be specific. What skill does each role fear is being commoditized? Judgment? Relationship management? Technical expertise? Regulatory accountability? Once leaders name the threat, they can reframe AI as an extension of expertise rather than a replacement for it.
Second, translate the technology into business fundamentals. In my own experience with predictive analytics in insurance, technical teams often talked about algorithms as if they were writing for a scientific journal. Techno-speak does not create trust. Underwriters and executives need to understand how a model improves risk selection, profitability, speed, consistency, customer experience, or portfolio performance. The more abstract the technology feels, the more threatening it becomes.
Third, design for Human + AI from the beginning. Build human-in-the-loop processes into each use case, defining how AI augments or redefines specific jobs. Underwriters' roles will shift from manipulating spreadsheets toward refining algorithms and evaluating broader portfolio dynamics. That is the right frame. AI should not ask insurance professionals to turn their brains off. It should help them turn their brains up.
Fourth, modernize workflows before automating them. There can be pressure to move fast and announce AI initiatives, either from a board or because competitors appear to be making gains. Leaders must resist the temptation to automate old complexity. If a process is messy, opaque, or overly customized, AI will simply make the mess move faster.
Fifth, make governance a source of confidence, not drag. Insurance employees will not trust AI if they believe no one can explain, monitor, or override it. Highlight human-in-the-loop approvals, traceability from requirements to configuration to test evidence, model-validation practices, monitoring, and auditable artifacts. Those controls do not slow adoption. They make adoption possible.
The opportunity is enormous.
McKinsey estimates that agentic AI could create productivity improvements ranging from 10% to 90% across steps in insurance core modernization, depending on the domain and degree of automation. IBM's 2025 insurance AI research also points to efficiency gains, while warning that insurers must address technical debt and skills gaps to realize AI's potential.
The winners will be the insurers that build the most trust. Culture is the operating system AI runs on.
We don't need fewer experts. We need experts who can work differently, and leaders who can explain AI without hype, deploy it without dismissing human judgment, and govern it without paralyzing progress.
The machine is not the future of insurance. The human leading the machine is.
Kirstin Marr is a columnist for PropertyCasualty360.com and the Founder and CEO of Lead The Machine, a podcast and newsletter exploring the human side of AI at work. For over 25 years, she has helped more than 100 companies navigate transformation in technology, data analytics, and now AI to modernize and deliver profitable growth. Previously, Kirstin served as Chief Analytics Officer at Insurity and President of Valen Analytics, where she guided analytics businesses from startup through acquisition and scale.
Opinions expressed here are the author's own.