It’s generally accepted that the first written insurance agreement was established in the Code of Hammurabi. It took quite a while—over 3,000 years—before humankind had built enough understanding of risk transfer and mathematics, and analyzed enough data, to develop the first actuarial tables in the late 1600s. Since then, the sophistication within every part of the risk-sharing process—from rate-making to underwriting to claims—has increased at a quickening pace, driven by greater precision and predictability and built on an ever-expanding amount of data.
But despite the industry’s developments associated with cross-company information sharing—bureau rates, claims databases, and so on—there has long been competitive advantage to be gained from the amount of data an insurer could collect in order to fine-tune rates and underwriting, support new products, and carve out new market niches. It was a model that favored companies with larger books of business generating more data, and with more staff available to collect information and build greater certainty into their predictions.
However, an explosion of new data sources is turning that model on its head. “There is more data, and more access to data, than there ever has been—and it’s growing,” says Matthew Josefowicz, director of insurance at Novarica. “Geocoding, geographic information sources, aggregated consumer information, commercial databases, credit scoring, property peril scoring; detailed dossiers on individuals and businesses—there’s a massive amount of information that can be obtained on individuals and businesses simply by searching the Web.”
In this new environment, there are fewer competitive advantages to be gained from the amount of unique information an insurer can compile itself. “We’re evolving from a time when insurers needed to be fantastic hunters with a simple digestive system to one where they’re almost being force-fed data,” says Josefowicz. “The challenge is: How can we evolve our organization to spend fewer of our resources on duplicating the effort of data entry, and instead handle that data more effectively?”
ACUITY—a $2 billion regional property-casualty insurer doing business in 20 states—sees the data explosion as a force it can leverage to gain advantage against national insurers that have built much larger internal data stores.
“Today, we have access to a volume of data that we could never have compiled on our own,” says Ben Salzmann, ACUITY president and CEO.
Ed Felchner, vice president of personal lines and marketing at ACUITY, illustrates that point with an example from property insurance underwriting. “The problem before was that data on property risks was scattered. The information might have been available, but we’d have to go to 30 different sources to get it,” he says.
“If we wanted topographic information about a tract of land a building was on, we would get it from one location. We would geo-locate using another application. We could go online to assessors’ offices to get additional information. Those multiple processes are hard to do individually on every account, and we need that information right now, so we can use it in the quoting process. Today, with the advent of different companies aggregating this information, we can get it in one shot,” Felchner adds.
The buffet of information ACUITY samples includes many names you’ll find on today’s menu of sources: LexisNexis for analytics and policy prefill; Dun & Bradstreet for commercial information and financial stress scores; credit bureau data and financial responsibility scores. ACUITY also uses a motor vehicle record “push” service to update driver records and discover new drivers in the household automatically. And the company is testing a property information solution that aggregates available data on property risks and provides a peril-scoring system based on predictive relationships.
But is the amount of information available today too much for insurers to digest? No, says William Dibble, senior vice president of the national claims operation at Infinity Property and Casualty Corporation (IPACC). “There isn’t too much data out there. You just need to understand what data you already have in house, what data your vendors in various processes might be collecting, and then have the skills and technology to use it,” he says.
“An insurer’s core asset is no longer data gathering; it’s their ability to handle data when that’s received,” says Josefowicz. “The challenge for insurers is how do you take advantage of data to price better, expand your market, and improve the business of underwriting risks and handling claims.”
Mo Masud, senior manager in Deloitte’s advanced analytics and modeling practice, says that the exponentially increasing amount of data to process, comprehend, and react to is one of the key trends behind the adoption of predictive analytics. “The drive for operational efficiency and the reduction in expense ratio has made the application of predictive analytics in day-to-day operations even more critical,” he says.
Performing detailed, multivariate analysis in this information-rich environment requires a specialized combination of skills covering data management, statistical analysis, data mining and modeling, and strategy. Those skills need to be coupled with an understanding of both business processes and technology and with a natural curiosity to keep looking for relationships that can lead to predictability.
“Insurers may have the skill sets in analytics, but those skills are often localized in actuarial or departmentalized for underwriting or claims. The greatest challenge is extending those skills at an enterprise level,” says Masud.
The next objective for insurers is turning analytics into actionable information. “You need quality data, a scoring engine to take the risk attributes and apply a score, and a business rules management system to take the score and translate that into an action, price, or claims outcome,” Masud says.
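Masud’s three-part pipeline—quality data in, a scoring engine, and a business rules system that turns the score into an action—can be sketched in a few lines. This is a minimal illustration, not any vendor’s implementation; the field names, weights, and thresholds below are all invented for the example.

```python
def score_risk(attributes: dict) -> float:
    """Scoring engine: map risk attributes to a single numeric score.

    The weights here are hypothetical; a production engine would use
    coefficients fitted by a predictive model.
    """
    weights = {"financial_stress": 0.5, "claims_history": 0.3, "industry_hazard": 0.2}
    return sum(weights[k] * attributes.get(k, 0.0) for k in weights)


def apply_rules(score: float) -> str:
    """Business rules engine: translate a score into a concrete action."""
    if score >= 0.8:
        return "refer to underwriter"
    if score >= 0.5:
        return "quote with surcharge"
    return "quote at standard rates"


# A high-risk account (all attributes scaled 0-1) gets routed for review.
attrs = {"financial_stress": 1.0, "claims_history": 0.9, "industry_hazard": 0.5}
action = apply_rules(score_risk(attrs))
print(action)  # "refer to underwriter" for this high-score account
```

Separating the scoring engine from the rules layer, as Masud describes, lets the business change thresholds and actions without retraining or redeploying the underlying model.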
“We knew our large data stores could provide meaningful insights that would help select accounts and price products to improve profitability and retain customers. The greatest challenge lies in extracting, understanding, analyzing, and presenting it in a useful way,” says Ned Wilson, vice president of treasury and planning at FCCI Insurance Group.
FCCI, which writes workers’ compensation and other commercial lines, looked to analytics to bolster the bottom line. “Like everyone else, we’re trying to improve our account selection and performance. With a massive amount of data, we believe there’s very valuable information in that data to allow us to pick up a couple points on the combined ratio,” Wilson says.
The insurer began its analytics initiative by retaining a consultant, but was unhappy with the results. “The skill sets and understanding of insurance weren’t there with the consultant we chose,” Wilson says.
Instead, based on its experience with SAS data warehousing and analytic products, the company decided to tackle the initiative itself and deployed SAS Enterprise Miner in early 2010. Wilson’s team joined forces with underwriting and in-house actuaries to first examine existing policy and experience data to create pricing and selection rules, starting with workers’ compensation. The analysis will be expanded to commercial auto and general liability in late 2011.
FCCI appended commercially available data, such as from Dun & Bradstreet, to its warehouse. “We checked our assumptions and determined which data elements generated the most predictive results,” Wilson says. “Some confirmed our expectations—the financial stress score can help predict business failure and the Paydex score can predict timeliness of premium payment. But other results have been surprising, such as correlations between different account sizes and profitability.”
FCCI then worked to turn those understandings into actionable information to use in day-to-day decision making. “Enterprise Miner is a powerful tool, but the challenge for us is to communicate that in non-statistical terms to the underwriting staff. It’s a teaching and communication challenge,” Wilson says.
The insurer uses components of Enterprise Miner to illustrate predictive factors and guide underwriting decisions. “The goal is to avoid making blanket statements of ‘don’t write this account.’ Instead, we provide underwriters information in the rating model that they can drill into that shows the unique characteristics of an account that caused a concern,” says Wilson.
For instance, underwriters might obtain detail that a particular combination of class code, account size, location, and Paydex score has been shown to be a predictor of poor account experience. “The underwriter can go back and communicate with the agent either to get the price we need or to address the concerns,” Wilson says. “It allows us to squeeze insight out of information that no individual underwriter could get their arms around because it’s the result of an aggregation of an incredible amount of data.”
Wilson says it’s too early in the deployment of Enterprise Miner to determine if the company’s combined ratio goals are being achieved. “Our underwriters do a very good job, which makes it challenging to fine-tune our underwriting results. But there is some fruit that can be picked, and ways we can improve our risk selection and pricing to attract customers while being more profitable.”
In a data-rich and ultra-competitive market, leaders will be companies that not only best digest the amount of information available, but also deploy integrated analytic solutions at the front line. This can include providing decision support, such as at FCCI, or through full automation of the decision process.
As is the case with other areas of insurance automation, the push to automatically extend analytics started in personal lines, where there is greater homogeneity of information. “A few years ago, just having an analytics solution would give companies a competitive advantage in personal lines,” says Masud. “As the market gets saturated with more companies using analytics, the advantage will be achieved by companies who can best execute and use those tools in day-to-day operations. Leading companies are those that are using analytics to establish a large number of price points to write a wide variety of risks.”
ACUITY, which does business exclusively through independent agents, has seen the benefit of analytics-based precision pricing in personal lines. Combining external sources with data culled from its own policy and claims records and loaded into its Business Objects data warehouse, ACUITY has undertaken multivariate pricing initiatives, which it dubs “sophisticated pricing,” starting in personal auto and expanding to commercial auto and personal and commercial property.
“We use predictive analytics to price for a wider array of accounts, to support offering new programs and discounts, and to increase our competitiveness in the best-performing segments of business,” says Salzmann. “Over the past few years, we have seen our ability to shift the makeup of our book of business toward more profitable accounts because we have priced it to change.”
ACUITY also credits its growth in personal lines business to sophisticated pricing. The insurer has averaged nearly nine percent annual premium growth in personal lines over the past five years while generating an underwriting profit during that time, according to Felchner.
“We may not have the amount of internal data that larger companies have. But at the same time, what a lot of companies don’t have is accessibility to data, which we do because of the way we designed our warehouse,” says Felchner. “By leveraging our analytics platform, we create our own loss costs based on a detailed analysis of the data that reflects our unique book, and we apply the unique, predictive characteristics of individual accounts to create a virtually unlimited number of pricing combinations that are available automatically at the point of sale.”
That level of automation has been tougher to come by in commercial lines, which lack the homogeneity of personal lines as well as the sheer number of policies and data points.
“We don’t have the volume of data in commercial lines. There aren’t as many accounts or clusters of similar, homogeneous policies. What that does is encourage us to keep the models as simple as we can and keep the number of variables small, so that we have greater certainty behind our predictions,” Wilson says.
However, the commercial lines market continues to evolve. Many companies have ported their personal auto pricing models, MVR alert features, and prefill capabilities to commercial auto, which is particularly useful on smaller fleets with stable driver rosters.
The information explosion has impacted the claims process as well. Any claims investigator wants more information, not less, on which to base a decision. The problem is that, without front-line analytics, claims staff can quickly run up against information overload.
“There is only so much a human being can process,” says Josefowicz. “Insurers don’t need to subject staff to more data; they need to parse it electronically and present guided results to their line staff.”
IPACC, which underwrites personal automobile insurance, deployed an analytics system from SPSS in January 2008. The platform includes a real-time claims scoring solution to help predict whether claims are legitimate and should be approved or are potentially fraudulent and should be further investigated. SPSS was acquired by IBM in 2009.
The company has taken a two-pronged approach to compiling information for the analytics platform to digest. First, IPACC completed the deployment of a new claim administration platform to obtain more data at first notice of loss and to provide greater flexibility.
“At one point, we had only captured 32 data points in our first notice of loss system. Today, we are able to capture several times that amount. New data fields can be easily added as needed, versus the difficulty of modifying the previous mainframe system,” says Dibble.
Second, the company has added several new claims data sources and brought those to the front line. “On a real-time basis when a call is made, information that’s public record—phone numbers for an individual, for instance—is fed into our system. We identify other phone numbers and other occupants of that household, which is very valuable in fraud detection. Those instances are alerted and scored. An alert doesn’t mean the claim is fraudulent, it means we need to look at it more carefully,” says Dibble.
Prior to deployment, it took IPACC an average of 40 days to determine if a claim might be fraudulent. Analytics has helped cut the average time to 10 days, and sometimes just 24 hours. IPACC has also increased its special investigative unit success rate from 60 to 90 percent, and the company has set up a new unit for opportunity fraud to complement its traditional focus on organized fraud.
In addition to fraud detection, analytics has helped IPACC improve the claims investigation process through intelligent scripting. “The system prompts the loss report taker into asking other questions that may be important in the accident by listening to key phrases that come up in description,” Dibble explains. For instance, if a claimant provides address information that doesn’t match public records, the report taker is immediately notified to focus on that particular question.
Analytics also enables IPACC to highlight potential future problems with claims—such as hidden damage that doesn’t show up on first inspection—to prevent the need to reinvestigate when that damage is discovered. New sources of information and its predictive analytics platform have also allowed the insurer to provide greater one-call claim resolution.
“We had always wanted to move toward what we would refer to as ‘right tracking’ of a claim and to reduce the touch points in the process. Claims that have no fraud indicators can now be handled directly by the first notice of loss adjuster without moving it on to a different adjuster. We are able to pay the claim and get the customer back on their feet faster,” says Dibble.
Those successes have justified the addition of staff focused on analytics. “We’ve already brought in three analysts who work with the software and with the claims and IT departments. We’re going to be expanding our analytics operation this year,” Dibble says.
The information buffet will continue to grow, and insurers’ use of analytics will continue to evolve.
“We’re going to see more use of public and semi-public consumer data. Aggregators are already building extensive profiles on consumers, and there are tremendous opportunities to include information from social networks,” Josefowicz says. “There is information out there insurers can leverage not just for underwriting and pricing, but for up-selling. Buying patterns, how often you fly, the kind of car you own, how often you eat out. Court records, how much your house is worth. There’s a tremendous dossier of information that can be aggregated and used.”
As with any evolution, this one will be defined by survival of the fittest. “The challenge for some insurers is how to adapt from being an organization that has spent the last 100 years scrounging for scarce data to one that operates in an era of data abundance,” says Josefowicz. “There’s almost no competitive advantage in what an insurer can find out versus what anyone with an Internet connection and a couple dollars can find out. Companies that continue to deploy resources on data gathering and do not put their resources into data management and analysis are wasting their resources because it’s being done more efficiently in other ways. Sooner or later, their skills won’t fit.”
“We’re not hurting for data,” says Dibble. “Our challenge is to find more ways to use analytic technology to utilize that data and to improve our performance and customer service. It’s like the explorer going out into the wilderness. You find different things every day.”