Data strategies are not universal throughout the insurance industry. The number of strategies in play sometimes seems to equal the number of lines of business in the industry, according to Karen Pauli, research director in TowerGroup's insurance practice.

"It depends on the product line," says Pauli. "If you take a look at the personal lines, I think most carriers have come up with some level of strategy to handle getting at the data."

Many personal lines insurers have replaced, or are in the process of replacing, their core administration systems, or have developed a strategy to wrap those systems with business intelligence or analytics tools so they can use their data.

"If you are not getting at your data and don't have some level of confidence that it is accurate data, you are way behind the eight ball," says Pauli. "Personal lines insurers also have been able to use integrated third-party data, so it's more accurate. Core data standards have helped there."

Pauli believes commercial lines carriers are in a more difficult position because many of them still labor under legacy systems. That could be about to change, though.

"There are a lot of good core admin systems out there," she says. "We are seeing pretty aggressive adoption in the mid-market, so once you get the core admin systems onboard, and you have your data in order, you can ramp it up to use for other purposes. Everyone started with the basic personal auto and moved it to property. You'll find the same thing in commercial lines."

Clients of Capgemini Financial Services are increasingly speaking out on data governance, according to Christina Colby, vice president of business information management for insurance with Capgemini.

The complication comes when companies don't necessarily understand how to operationalize their data and get business and IT to work together, according to Colby, who adds that data quality is a massive component of this problem.

One thing Capgemini has done is work with carriers to implement a meta data management solution to capture information about the data being stored. One such solution involved a quality measurement pulled together from a number of different rules.

Using Informatica as the ETL tool, Capgemini conducted data analysis with the platform's Data Explorer and Data Quality components, according to Colby.

"We set up a series of rules so that as the information was being transformed in the ETL engine, we could score that piece of information," she says. "We then loaded that into the meta data management solution, and for that client we used Adaptive. For every piece of information, whether a source piece of information or coming into the reporting repository, we had a score on the data quality."

Capgemini had an indicative quality score on each of the reports.

"Most people when they see a published report assume all the data is correct," says Colby. "We had a few different thresholds—a red/amber/green status. If it was less than 60 percent confidence in the data quality, it would show up as red. Lower than 60 percent is less than ideal. And most of the reports were red."
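The rule-based scoring and red/amber/green thresholds Colby describes can be sketched roughly as follows. This is a minimal illustration, not Capgemini's implementation: the rule definitions, field names, and the amber/green cutoffs are assumptions; the article only specifies that below 60 percent confidence a report shows as red.

```python
def completeness(record, field):
    """Rule: the field is present and non-empty."""
    return bool(record.get(field))

# Hypothetical quality rules; a real deployment would have many more.
RULES = [
    lambda r: completeness(r, "policy_number"),
    lambda r: completeness(r, "insured_address"),
    lambda r: r.get("premium", 0) > 0,  # premium must be positive
]

def quality_score(record):
    """Percentage of rules the record passes."""
    passed = sum(1 for rule in RULES if rule(record))
    return 100.0 * passed / len(RULES)

def status(score, red_below=60, green_at=90):
    """Map a score to a red/amber/green status; green_at is an assumption."""
    if score < red_below:
        return "red"
    return "green" if score >= green_at else "amber"

record = {"policy_number": "PN-1001", "insured_address": "", "premium": 250.0}
score = quality_score(record)   # 2 of 3 rules pass
print(round(score, 1), status(score))  # prints: 66.7 amber
```

Rolling such per-record scores up to a report-level indicator is what lets a published report carry an at-a-glance confidence status.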

This gave the insurer the impetus to implement a data governance program in earnest.

Another case study Colby shared involved a financial services firm where a team led by the CFO quantified the cost of bad data so employees understood what low-quality or incorrect data costs the organization.

"As you can imagine, the data quality started to improve significantly," says Colby. "It's a way to give heightened visibility to how poor the quality of the data is."

LEGACY SYSTEMS

Most carriers have their own definition of the problems with their data, explains Pauli. Certainly one issue is siloed information.

"The enterprise data that allows you to make your contact center so efficient and allows for outreach to customers is the hitch in that plan," says Pauli. "That's a big order."

Pauli believes carriers with multiple administration systems would love to be able to retire all but one of their core systems, but consolidation to the primary two or three systems usually is the best they can do.

Carriers find it difficult to go through the conversion process, explains Pauli.

"They need a strategy that allows them to get at the data and that's where BI or integration layers are the answer," says Pauli. "Some are quietly doing a good job with that. Farmers [Insurance] stood up at ACORD and told everybody how they've used BI and integration layers on their legacy systems to create some state-of-the-art contact center functionality. Not everyone can replace all their legacy systems—it's not possible. That's where BI and integration strategies become critical."

Craig Loughrige, senior consultant for Robert E. Nolan Co., believes insurers are turning to an expanded toolset with some fundamental building blocks as a data strategy. One of those building blocks involves a clear delineation of transactional data vs. decision-support data.

"[Insurers] are coming at it in a backwards way through BI tools," says Loughrige. "That's a great way to look at it, but if you don't have your components in place [BI is] harder to use effectively."

Among the components insurers are using are a data dictionary, an enterprise data model, and the appropriate data stores—operational data stores and data marts. Many data marts come from data warehouses, explains Loughrige, but others come from operational data stores and bypass the data warehouse altogether.

The final component is the data case management system.

"Having the right mix of tools, having meta data—structure and content—are important," says Loughrige. "BI tools and the need for information are the catalyst. I just left a client who is on the path to expand their toolset—both reporting tools and adding in some different database structure in order to build a foundation. That's when your other tools make a little more sense."

DATA GOVERNANCE

Most insurers acknowledge they are in a relatively immature position in terms of data governance.

"It's simply because the nature of their data landscape is changing so much," says Colby. "[Insurers] have gone from the path of consolidation into major data warehouses to more siloed versions of information. It's perfectly acceptable—it's the direction carriers need to move to have repositories that are virtually combined vs. physically combined. But it really starts to complicate how you govern the data."

Most insurers recognize the need to do more, but don't understand how to effectively engage their business stakeholders.

"[Business leaders] have to be the data stewards," says Colby. "IT can help support them but fundamentally the business has to understand they're the ones who steward the information on behalf of the rest of the enterprise. Organizations have varying degrees of understanding. Meta data management is something we see as critical because it creates an area where you can have a common language and understanding between the business and IT communities."

For any insurance company, its most significant asset is its data. Colby believes that in the actuarial space it is understood that data is the company's most significant asset, but when you try to extrapolate that across the organization, everyone has an opportunity to improve.

"There's such an interest about predictive analytics, but if you're putting garbage data into the model, you'll get garbage data out of it," says Colby. "There are certain things you can do to try to clean the data among the models themselves, but the reality is you should be fixing it at the source so you have confidence behind the data."

TYPES OF DATA

Data comes in two forms for insurers: transaction and decision-support. Loughrige maintains it is important for carriers to set the two apart.

"The bread and butter for insurers is transaction data," he says. "You are generally working on a row at a time, but you have to structure it in order to make the inserts and changes work for you. Then you want to keep everything else—query tools and such—differently."

Decision-support data involves multiple rows at a time within the data warehouse, explains Loughrige.

"While you start with a data model that is structured, typically you design the normal form, which is a scheme to minimize damage to data and reduce timing issues," he says. "You have to denormalize the data a bit in the processing world and you do that differently in transaction and decision support. Transaction is one row at a time; decision support is multiple rows at a time with queries that can take much longer. You don't want to bog down the transaction system with queries about something else."
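The contrast Loughrige draws can be illustrated with a small sketch: a normalized transactional schema tuned for single-row inserts alongside a denormalized table tuned for multi-row decision-support queries. The table and column names here are hypothetical, and the example uses Python's built-in SQLite merely to make the point concrete.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Transactional side: normalized, optimized for one-row-at-a-time
# inserts and updates with minimal redundancy.
cur.execute("CREATE TABLE policy (policy_id INTEGER PRIMARY KEY, holder TEXT)")
cur.execute("CREATE TABLE claim (claim_id INTEGER PRIMARY KEY, "
            "policy_id INTEGER REFERENCES policy, amount REAL)")
cur.execute("INSERT INTO policy VALUES (1, 'A. Smith')")
cur.execute("INSERT INTO claim VALUES (10, 1, 500.0), (11, 1, 250.0)")

# Decision-support side: denormalized, so multi-row analytical queries
# don't bog down the transactional tables with joins.
cur.execute("""
    CREATE TABLE claim_fact AS
    SELECT p.holder, c.claim_id, c.amount
    FROM claim c JOIN policy p ON p.policy_id = c.policy_id
""")
total, = cur.execute(
    "SELECT SUM(amount) FROM claim_fact WHERE holder = 'A. Smith'"
).fetchone()
print(total)  # prints: 750.0
```

In practice the two sides would live in separate systems, with an ETL process feeding the decision-support store.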

There are structural problems that carriers face with legacy systems and it takes time to put a better data structure in place, explains Loughrige. Another problem involves the practical use of the data.

"People look at the data in different ways," he says. "It's hard to take a global view because it means different things in the way you count things or the timing of things."

Quality of data varies from one company to another, adds Colby.

"In general, most people would agree they want to see something above 90 percent accuracy," she says. "That's not exactly ideal either. One organization looked at an 80/20 rule, but frankly I don't think that's a standard anyone should work toward."

One of the biggest challenges Indiana Farm Bureau Insurance (IFBI) has faced is the quality of data collected from policy applications, according to Jim Putka, executive director, systems development for IFBI.

"One of our problems—both on the p&c side and the life side—is getting quality and complete application data," says Putka. "It's an issue with the intake of data because we have to make underwriting decisions. The quality of data is paramount."

IFBI works with both independent and captive agents and has had to deal with issues relating to the accuracy of the data coming in on the applications and also the completeness of the data.

"If the data is not complete, there is significant productivity lost because there is a back and forth among the underwriter, the agent, and the CSR," says Putka. "If the data is inaccurate there are delays and those delays have a ripple effect in terms of billing. Inaccurate and incomplete data may turn into unexpected, billed premium amounts for the insured."

The carrier has used the Exceed billing solution from CSC for its homeowners policies and has had such success with the quality of data it brings in that the carrier is working on an implementation of Exceed for its personal auto business.

"What we found as an Exceed customer is that we are getting a much better experience on the homeowners' side," says Putka. "We heard lots of complaints on the auto side, which is serviced by our legacy application, and also our life side."

The CSRs are excited about adding Exceed for personal auto, according to Putka. One plus is that any policy changes are reflected immediately in billing. A second benefit is straight-through processing.

The system allows the carrier to work the quotes and run them through various iterations so the IFBI data is complete and accurate and can proceed smoothly to the next step.

"We anticipate we are going to have dramatic savings once we implement Exceed for personal auto in terms of the amount of time it takes for us to issue a policy," says Putka. "It is going to eliminate the back and forth among the underwriters, the CSRs and our agents."

The Exceed system handles the process from quote to issue and ensures the information is complete, even though humans are inputting the data, according to Putka.

"[The system] takes a look at the VIN and makes sure it matches," he says. "It has various sub-systems that feed data and we are able to scrub the address and check if it is the name of an existing customer. There are edits built in that enforce the business rules before an application can move onto the next stage. We believe there will be a dramatic drop in the number of applications that are inaccurate or incomplete."
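Edit rules of the kind Putka describes can be sketched as checks that must all pass before an application advances. This is only an illustrative sketch, not CSC's implementation: the field names are hypothetical, and the VIN check below is a plain format check (17 characters, excluding I, O and Q), not a full VIN decode against vehicle data.

```python
import re

def vin_is_plausible(vin):
    # Real VINs are 17 characters and never use the letters I, O or Q.
    return bool(re.fullmatch(r"[A-HJ-NPR-Z0-9]{17}", vin.upper()))

def edits(app):
    """Return the list of rule violations; an empty list means the
    application can move on to the next stage."""
    problems = []
    if not vin_is_plausible(app.get("vin", "")):
        problems.append("VIN fails format check")
    if not app.get("address"):
        problems.append("address missing")
    if not app.get("applicant_name"):
        problems.append("applicant name missing")
    return problems

app = {"vin": "1HGCM82633A004352", "address": "", "applicant_name": "J. Doe"}
print(edits(app))  # prints: ['address missing']
```

A production system would add scrubbing steps (standardizing the address, matching against existing customers) before running the gating edits.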

IFBI sets a target in terms of the number of applications that come in clean, explains Putka.

"We keep a metric in terms of the apps that are unclean or inaccurate," he says. "We have a baseline where we can compare our performance with Exceed."
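The clean-application metric Putka mentions reduces to a simple rate that can be tracked against a baseline. The baseline figure and data shape below are illustrative assumptions, not IFBI's actual numbers.

```python
def clean_rate(apps):
    """Share of applications with no rule violations (no rework needed)."""
    clean = sum(1 for a in apps if not a["problems"])
    return clean / len(apps)

baseline = 0.70  # hypothetical pre-Exceed clean rate
apps = [
    {"problems": []},
    {"problems": ["address missing"]},
    {"problems": []},
    {"problems": []},
]
rate = clean_rate(apps)
print(rate, rate > baseline)  # prints: 0.75 True
```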

DATA LEADER

A data leader should play a prominent role in the structure of an insurance carrier, certainly a more important role than most carriers assign it, points out Loughrige.

"Larger carriers have them, but smaller ones don't have as strong a role for a data leader," he says. "Companies need to have a data architect and a database administrator."

Loughrige believes most companies do a better job in the DBA world and DBAs are more commonly found among carriers. For example, if a carrier has a performance issue on a query, it usually means there is something wrong with the query and those problems tend to be addressed more readily.

Loughrige maintains the data architect sets up the logical flow of data, and cites the example of the architect vs. the carpenter.

"If you have a problem building something you call the carpenter," he says. "You need someone dealing in the physical world. Sometimes the solution lies in the design, though. Sometimes you just have a bad design."

The reason architects aren't always viewed as the go-to person is that they tend to deal more in imagination, according to Loughrige. But good CIOs direct the data architect to focus on solving a real problem today rather than on the future.

"If people have a problem they want it solved today," he says. "Sometimes people think they have programming or application problems, but I think they always need to ask: Do I have an underlying data problem that just looks like something else?"

If the data initiative is being run through the IT department, Pauli is skeptical about the possibility of success.

"[Data initiatives] have to be at the C-level and one or more business heads have to be on board," she says. "You need enterprise governance. Otherwise, in the day-to-day push to run an insurance operation, you will naturally revert to what you've been doing for the last 10 years. This can't be an IT project; it has to be a business project."

Such direction doesn't preclude an enterprise data strategy, offers Pauli. For example, she looks at a staple of all insurance information: the policyholder's address.

"The claims people may put address over here and have a different configuration," she says. "But an address is an address. Having standards doesn't preclude doing different things with it. That's where you need open data architecture and working with technology providers to develop integration layers."

Carriers also need to collaborate with the company's experts in the data area.

"I don't think you can pull the bunny out of the hat," says Pauli. "You need brilliant IT people, but if you live and die data, they are going to bring something to the party more than if you are simply an insurance IT executive."

SOCIAL MEDIA

Carriers are using a variety of monitoring tools to understand what customers or non-customers are saying online about their company, explains Colby. One such partnership between Attensity and Pegasystems uses Attensity as a monitoring engine and pushes the data to Pega, which automatically generates business rules and reacts to social media.

"In terms of reacting to [social media], it's compelling," she says. "It's not just listening, it's engaging, but ideally in a way where you are not just doing it through human intervention."

As for how social media relates to data management and data quality, Colby believes much of the interest she is seeing in data management is actually in the unstructured data space.

"Using more of the advanced search capabilities, it's been of huge interest in the past year alone to figure out the benefit of using a capability such as this. You can interrogate any source you want," says Colby. "Whether it's an inbound document you're getting in electronic format, claims notes that may be somewhat structured or if it's coming through your social media channels, you can start to identify patterns in the data and not necessarily structure it, but look for similar patterns and values. In my mind, it's using unstructured data capabilities to look at internal or external sources."

Carriers need to take a look at the social media data that is accumulating and how that relates to big data issues.

"If you have all that information that's available on the Internet through social media, you have to start thinking about how you are going to use the data," says Pauli. "It's a kick in the pants to start thinking about data in a different way and trying to get to that enterprise strategy. These are expensive and multi-year strategies."
