There is a movement within the insurance industry to consolidate databases from multiple departments in order to gain a complete policyholder view. Typically, insurers with one database can see the different channels and lines of business through which policyholders interact with the company. In addition, a single database allows insurers to better comply with regulatory reporting and prevent premium leakage.
But consolidating databases has always been a challenge, which is why so many insurers choose to maintain separate databases for different departments or business units. The question is whether the challenges of combining databases are worth the benefits of centralization.
While there are many considerations to take into account, one challenge that faces insurers in any migration is data quality. Questions often arise about data standardization, duplicates, outdated information, and missing fields. All of these challenges are surmountable if stakeholders properly consider how to address them prior to migration.
Unfortunately, there is no universally recognized standard set of fields, abbreviations, or formats for data. Data consistency is a free-for-all of sorts, with differences from department to department and insurer to insurer. In addition, human error is often cited as the top cause of data quality problems.
Due to this degree of inconsistency, migrating data into a central database can be concerning. To minimize risk, administrators should decide what fields to keep and what format to use in each field.
Once the fields are set, as much information as possible should be standardized. Contact information can easily be transposed to a set of standards through software tools. For instance, a mailing address can be converted to the format preferred by the country's national postal authority.
Standardized and clean information makes it easier to identify and remove duplicates and append information later in the migration process.
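The standardization step described above can be sketched in code. The snippet below is an illustrative example only, not the tooling the article refers to: it upper-cases an address and maps a handful of common street-suffix spellings to USPS-style abbreviations. A real postal-standardization tool would also validate the address against authoritative reference data.

```python
# Illustrative address standardization: normalize case and whitespace,
# and abbreviate common street suffixes (small sample mapping).
SUFFIXES = {"STREET": "ST", "AVENUE": "AVE", "ROAD": "RD", "BOULEVARD": "BLVD"}

def standardize_address(raw: str) -> str:
    """Return an upper-cased address with suffixes abbreviated."""
    words = [w.strip(",.") for w in raw.upper().split()]
    return " ".join(SUFFIXES.get(w, w) for w in words)

print(standardize_address("123 Main Street"))  # 123 MAIN ST
```

Running every record through one function like this before migration is what makes the later duplicate-matching step reliable: two records can only match exactly if they were normalized the same way.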
Duplicate accounts are problematic during any migration process. It is common for databases within different departments or even from different business units to have overlapping policyholders or prospects.
Given this data landscape, eliminating duplicates can be difficult. However, standardized contact data can be an ally. Since contact information is typically found in every database, it can be used to group contacts into households and identify duplicate records.
Insurers need to select matching elements to determine where duplicates can be found. Administrators need to determine the level of matching they want to accomplish, as well as the tolerance level for what is considered a duplicate record in the organization.
This process can be rigid or flexible, depending on the number of records, how many potential duplicates an insurer wants to review manually, and how many it wants merged automatically.
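The matching-element and tolerance-level decisions described above can be illustrated with a minimal sketch. The rule below (exact match on standardized address, fuzzy match on name above a configurable threshold) and the `tolerance` parameter are assumptions chosen for illustration; production matching engines use far richer rules.

```python
from difflib import SequenceMatcher

def is_duplicate(a: dict, b: dict, tolerance: float = 0.85) -> bool:
    """Flag two contact records as duplicates when their standardized
    addresses match exactly and their names are similar above `tolerance`."""
    if a["address"].upper() != b["address"].upper():
        return False
    name_similarity = SequenceMatcher(
        None, a["name"].lower(), b["name"].lower()
    ).ratio()
    return name_similarity >= tolerance

rec1 = {"name": "Jon Smith", "address": "123 MAIN ST"}
rec2 = {"name": "John Smith", "address": "123 Main St"}
print(is_duplicate(rec1, rec2))  # True
```

Raising `tolerance` toward 1.0 makes the process more rigid (fewer automatic merges, more manual review); lowering it makes it more flexible, which is exactly the trade-off administrators must set for their organization.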
Personal information is very fluid and changes on a frequent basis. Policyholders are constantly moving or experiencing other major life events. Because of this, contact records can be incomplete and become outdated quickly.
The age of data may be unknown and cause administrators to question its accuracy. However, certain contact fields can be updated with current information.
First, address information can be updated. The USPS maintains a National Change of Address file that businesses can access through third-party vendors. Insurers can identify whether an individual has moved within the last four years and obtain the updated address.
Additionally, email address information can be updated. Third-party vendors can remove syntax errors and also check whether an email address is still active. Updating this type of information is beneficial, as it aids in creating a database of consistent fields and current records.
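The syntax-error half of that email cleanup can be sketched simply. The check below is an illustrative assumption, not a vendor's method: it validates only the basic shape of an address. Confirming that a mailbox is still active requires a vendor service or mail-server lookup, which this sketch deliberately does not attempt.

```python
import re

# Deliberately simple shape check: local part, "@", domain with a dot.
# It catches obvious entry errors, not every case the RFCs allow.
EMAIL_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def has_valid_syntax(email: str) -> bool:
    """Return True when the trimmed address matches the basic pattern."""
    return bool(EMAIL_RE.match(email.strip()))

print(has_valid_syntax("jane.doe@example.com"))  # True
print(has_valid_syntax("jane.doe@example"))      # False
```

Records that fail even this coarse test can be routed for correction before migration, so only plausibly deliverable addresses are handed to an activity-verification service.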
Preparing for success
The key to succeeding in any project is proper planning and a full understanding of the task at hand. Database migration is no different. Insurers need to fully understand the information they are looking to consolidate, along with its strengths and weaknesses.
Data quality is always a concern, but this can be mitigated through the use of proper tools and techniques prior to and during implementation. Insurers should standardize as much information as possible before consolidation and then work to remove duplicates. Finally, they can look to update outdated information.
Overall, a centralized database can provide insurance organizations with valuable insight and a complete view of their policyholders. While many are looking to take the centralization plunge, insurers need to be confident that the move will improve overall data quality.
Thomas Schutz is senior vice president and general manager at Experian QAS North America. He serves as the company's top executive for all strategic business decisions in the United States and Canada. Schutz can be reached at Thomas.firstname.lastname@example.org.