For years, the insurance industry was a leader in the field of business analytics. In the early days, this referred primarily to descriptive analytics, like reports, dashboards and scorecards.


Regardless of the complexity or level of functionality introduced by new business intelligence tools, these analytical solutions had one major flaw: They were always backward-looking. They could only tell a story about what had happened in the past.


Over the last few years, interest has grown in predictive analytics. The reason is obvious: If insurers can predict what will happen in the future on particular policies and claims, then they can be better positioned to maximize profits, reduce costs and improve customer satisfaction. In fact, the whole basis for the insurance system relies on insurers' ability to predict, with reasonable confidence, what is likely to happen in the future.


The Early Years


In the claims realm, fraud detection was heralded as an obvious choice for predictive modeling. Many companies have tried various ways of predicting the likelihood of fraud occurring on a claim. Early efforts were met with mixed results and limited success. It is useful to examine why these models typically failed to live up to expectations.


In early models, data was often sourced from existing data warehouses that had originally been built for reporting. The data that feeds dashboards and scorecards was a logical place to start, but this information was incomplete. It most often consisted of dates, dollar values and categorical variables, and it almost never contained text data. Although it may have drawn on multiple operational systems, it generally included information only from core processing applications like the claim system(s). While there might have been enough data to build a predictive model, these limitations are significant and can greatly affect the performance and results of the model.


Another early challenge in attempting to predict fraud was the reliance on singular methods of detection. Upon inspection, many early "models" were really nothing more than combinations of business rules. More advanced insurers used supervised predictive modeling, in which previously known fraudulent claims were used to train a model to flag suspicious future claims. This approach, however, is also flawed in that it is specifically designed to find the same types of fraud that an insurer has historically found. Fraudsters are adept at circumventing rules and thresholds. Furthermore, they are constantly modifying their scams and inventing new ways to game the system.
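
For illustration, a supervised model of the kind just described might be built along the following lines. This is a minimal sketch, assuming scikit-learn and pandas; the file name and columns (paid_amount, days_to_report, known_fraud) are hypothetical, not drawn from any real claim system.

```python
# Minimal sketch of supervised fraud scoring, assuming scikit-learn.
# File and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

claims = pd.read_csv("historical_claims.csv")    # hypothetical extract
X = claims[["paid_amount", "days_to_report"]]
y = claims["known_fraud"]                        # 1 = confirmed fraud

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)

# High scores flag claims that resemble *historically confirmed* fraud,
# which is exactly the limitation noted above: novel schemes score low.
claims["fraud_score"] = model.predict_proba(X)[:, 1]
```

The closing comment is the point: a model trained this way can only measure resemblance to fraud the insurer has already found.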


These limitations hampered early forays into analytical fraud detection. False-positive rates were high, and the models failed to help insurers identify emerging fraud scams. So some carriers shelved their models altogether or relegated them to a safety net, merely catching claims that would otherwise fall through the cracks when adjusters failed to manually spot red flags.


The Tipping Point


The P&C insurance industry is now at a tipping point. A recent study indicated that almost half of all insurers are investing in new predictive modeling projects in 2013, driven by the increased availability of internal and external data, more sophisticated modeling tools, and a broader range of business issues that can benefit from predictive modeling.1


A recent surge in core system replacements has resulted in better data quality for many insurers. Upgrading a claim system, for example, brings better field-level validation, greater opportunities for prefill and integration with other systems, and easier data entry. All of these improvements make it more likely that adjusters and claims processors will input accurate information. The result is a greater volume of higher-quality data that can be used for modeling.


The modeling tools are improving as well. Previously, building a predictive model required extensive coding, a background in statistics and programming, and specialized training in the specific application being used. Since then, there have been tremendous improvements in several key areas. Data management tools now make it easier than ever to integrate multiple data sources and quickly address data quality issues. Model building is still a skill that requires advanced expertise, but the tools have become more user friendly, with graphical user interfaces and built-in support for the most common techniques. Finally, model management utilities have been introduced, allowing organizations to more easily maintain multiple production models with lower overhead.


Five Keys To Success


With a renewed focus on analytical methods for detecting fraud, insurers now have the opportunity to achieve better results. There are several key areas to consider when implementing an analytical fraud detection program. Here are five steps insurers need to take:

  1. Don't underestimate data management. With any modeling exercise, the results are wholly dependent upon the type and quality of source information. In many cases, more than half of the work for a fraud detection project occurs during the data preparation phase. This step may involve integrating data from multiple internal and external sources, addressing data quality issues like misspellings and missing values, and correctly matching entities from different systems (a data-preparation sketch follows this list).
  2. Incorporate unstructured text sources. Within core systems, a huge percentage of information is stored as text. Especially on long-tail injury claims, claim notes and customer service logs contain crucial information for accurate fraud detection. In some models, up to half of the variables might come from these sources. Furthermore, as insurers gain access to information sourced from social media, text analytics, content categorization and semantic analysis are essential to derive value from comments and posts (see the text-feature sketch after this list).
  3. Use multiple detection techniques to identify both known and emerging schemes. Don't rely on just one method of detection. Business rules and supervised modeling are good at identifying known fraud schemes, while anomaly detection and network analysis are better suited to detecting emerging exposures. Using a combination of these methods can help cover all the bases (a combined-scoring sketch appears after this list). Separate detection scenarios can be built for different points in the claim life cycle to yield the best results. And additional detection scenarios can be focused on identifying suspicious organized networks to produce leads for complex case teams, as opposed to single-claim models that support traditional SIU teams.
  4. Think about how users will consume the model results. Building a model is only half the battle. Consideration must be given to how users will receive and review the results of the model. Robust implementations provide an alert queue for evaluation by a triage team, along with functionality to review the details of associated claims and entities. The system should also allow users to share intelligence and document the disposition of each alert, as this information is useful for monitoring model performance (a sketch of such an alert record follows this list). More advanced implementations can integrate results directly back into the core processing system.
  5. Maintain the models. Predictive modeling is not a one-time exercise. Models decay over time. In order to maintain satisfactory results, models need to be tuned and periodically updated. Source-system modifications, access to new information, changes in underwriting risk appetite, legal and regulatory developments, and shifting fraud patterns can all influence model results. It is important to routinely monitor the performance of the fraud detection system and plan for periodic updates (a drift-monitoring sketch closes out the examples below). Most models can be updated every 12 to 18 months, but in frequently changing environments, six-month updates might be more appropriate.
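
The sketches below illustrate the five steps in order; all are hypothetical Python. First, the data-preparation work from step 1, assuming pandas, with made-up file names, columns and a deliberately naive matching rule.

```python
# Sketch of step 1: merging sources, fixing quality issues, matching
# entities. All file and column names are hypothetical.
import pandas as pd

claims = pd.read_csv("claim_system_extract.csv")
policies = pd.read_csv("policy_system_extract.csv")

# Address common data-quality issues before matching: trim whitespace,
# normalize case and fill missing values.
for df in (claims, policies):
    df["insured_name"] = df["insured_name"].str.strip().str.upper()
    df["zip_code"] = df["zip_code"].fillna("UNKNOWN")

# Naive entity matching on name + ZIP; real projects typically use fuzzy
# matching or a dedicated entity-resolution tool instead.
merged = claims.merge(policies, on=["insured_name", "zip_code"], how="left")
```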
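
Step 2's text variables can be derived in many ways; one simple option is TF-IDF, sketched here assuming scikit-learn, with invented claim notes standing in for real ones.

```python
# Sketch of step 2: deriving model variables from claim-note text.
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical claim notes; in practice these come from the claim system
# and customer service logs.
notes = [
    "claimant reports soft tissue injury, attorney retained day one",
    "minor fender bender, photos on file, no injuries reported",
]
vectorizer = TfidfVectorizer(stop_words="english")
text_features = vectorizer.fit_transform(notes)
# Each resulting column is a candidate variable to sit alongside the
# structured dates, dollar values and categorical fields.
```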
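
For step 3, combining detection methods might look like the following, assuming scikit-learn and NumPy; the rule threshold, placeholder supervised scores and blend weights are all invented for illustration.

```python
# Sketch of step 3: blending a business rule, a supervised score and an
# unsupervised anomaly score. Features and weights are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy claim features: [paid_amount, days_to_report]
X = np.array([[1200.0, 3], [98000.0, 1], [4500.0, 14]])

# 1) Business rule for a known scheme: large, fast-reported losses.
rule_flag = ((X[:, 0] > 50000) & (X[:, 1] < 2)).astype(float)

# 2) Anomaly detection for emerging schemes, rescaled to 0-1.
raw = -IsolationForest(random_state=0).fit(X).score_samples(X)
anomaly = (raw - raw.min()) / (raw.max() - raw.min())

# 3) Placeholder for a supervised score from a trained model
#    (see the earlier sketch); these values are made up.
supervised = np.array([0.10, 0.80, 0.20])

combined = 0.4 * supervised + 0.4 * anomaly + 0.2 * rule_flag
```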
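
For step 4, the alert a triage team reviews can be thought of as a small record carrying the score, the reasons it fired and, crucially, the disposition that feeds back into model monitoring. A hypothetical sketch:

```python
# Sketch of step 4: an alert record for a triage queue. Field names are
# hypothetical; the point is capturing disposition for later tuning.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class FraudAlert:
    claim_id: str
    score: float
    reasons: list[str]                 # e.g., triggered rules, top variables
    created: datetime = field(default_factory=datetime.utcnow)
    disposition: str = "open"          # open / referred_to_siu / dismissed

alert = FraudAlert("CLM-001", 0.91, ["late report", "prior-claims network"])
alert.disposition = "referred_to_siu"  # feedback useful for model monitoring
```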
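
Finally, for step 5, one common (though by no means the only) way to decide when a model needs retuning is to track input drift with a population stability index. A minimal sketch, assuming NumPy; the synthetic data and the 0.25 threshold are illustrative conventions.

```python
# Sketch of step 5: drift monitoring via a population stability index
# (PSI). A rising PSI on a key input suggests the model needs retuning.
import numpy as np

def psi(expected, actual, bins=10):
    """Compare an input's current distribution to its training-era
    baseline; larger values mean more drift."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    e_pct = np.histogram(expected, edges)[0] / len(expected) + 1e-6
    a_pct = np.histogram(actual, edges)[0] / len(actual) + 1e-6
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

baseline = np.random.default_rng(0).gamma(2.0, 1000, 5000)  # training era
current = np.random.default_rng(1).gamma(2.5, 1100, 5000)   # recent claims
if psi(baseline, current) > 0.25:   # a widely used rule of thumb
    print("significant drift: schedule a model refresh")
```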

The old days of simply relying on claims adjusters to notice red flags are over. Adjusters will always remain an integral part of the fraud defense system. However, given adjusters' increasing caseloads, declining experience levels and greater performance demands, it is unrealistic to expect them to adequately identify increasingly complex fraud exposures. Predictive analytical solutions have evolved, and the insurance industry has reached a tipping point where manual methods are no longer sufficient. Those insurers who embrace an analytical approach will be ahead of their peers. Those who lag behind risk becoming soft targets.


Footnote


1. Operationalizing Analytics: The True State of Predictive Modeling in Insurance. Strategy Meets Action. July 2013.
