New federal regulations related to the Dodd-Frank Act are currently being rolled out across the financial services industry, affecting not only banks but insurance companies as well. The goal of these new regulations is to increase consumer protection by increasing transparency. To accomplish this, new regulatory commissions have been created to monitor all activities that are financial in nature. These activities include insuring, guaranteeing or indemnifying against loss, harm, damage, illness, disability or death.
Title V of the Dodd-Frank Act establishes the Federal Insurance Office (FIO). While the office has no regulatory powers, it does have the authority to request information. To prepare for the changes ahead, IT departments are taking steps to update current data management systems in order to ease the burden anticipated when the FIO starts requesting information. Currently, most insurance companies have multiple legacy systems spread across many departments. This means that data may be duplicated, incorrect, and in a variety of formats.
With increased levels of transparency, insurers will need to consolidate information and report accurately to the new governing bodies. To improve data quality now, companies need to ensure that data is accurate, complete and standardized. Improving data in this way will allow insurers to communicate accurately with regulators and also to improve customer service.
Sting of New Regulations
While the Dodd-Frank Act is still being implemented and the exact reporting information is being determined, most insurers are confident that the level of required reporting will increase. This will require insurers to modernize their current systems.
Data management systems are currently unable to easily search through large amounts of data or handle the types of searches that will most likely be required by regulators. This is largely due to the lack of centralized data management systems. Additionally, these systems are fraught with errors as there are few tools in place to ensure accuracy.
One type of information that is commonly incorrect for insurers is customer contact information. Surprisingly, according to a recent Experian QAS study, 91 percent of financial organizations suspect their contact data might be inaccurate in some way, and on average respondents think as much as 21 percent of their total data might be incorrect.
With this level of inaccuracy commonplace, it is important that insurers recognize the need to improve contact data. Better data will translate into better communication with regulators by allowing staff to consolidate records and have confidence in a higher level of accuracy.
Data consolidation will be an important part of new regulations, as insurers will need to report on customers and their policies. Since data is currently spread across multiple systems, insurers are investing in modern data management systems that will consolidate data across the organization. While this is a step forward in improving the accessibility of data, reporting will still be inaccurate if data files are incorrect or duplicate records exist.
When organizations merge data, it is a common best practice to de-dupe information prior to consolidation. The first step in any merge is to ensure that data is accurate and in a standard format whenever possible. Contact data can often be standardized through back-end software tools, which also allows information to be corrected—if enough data is provided—and put into a standard format for comparison. Most tools allow users to configure the format based on specific business needs and preferences.
Once the contact data has been properly formatted, unique identifiers need to be chosen. These identifiers will be the determining factors in recognizing duplicates. The data points are often the name, address, telephone number, or e-mail address. Since different information probably exists in each database, insurers need to select elements that are common across all data, which is why contact information is so valuable in this process. It is often a consistent field across databases.
Without these two steps, duplicates may be difficult to spot. Software tools frequently require data to be formatted in a certain way so that information can be processed correctly, and manual review can be time-consuming and inaccurate.
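The two steps above can be sketched in a few lines. The record layout and field names below are illustrative assumptions, not drawn from any particular policy-administration system; commercial back-end tools do far more (address correction, fuzzy matching), but the standardize-then-compare logic is the same.

```python
import re

# Hypothetical records pulled from two legacy systems; field names are
# illustrative only.
records = [
    {"name": "Jane Doe", "phone": "(555) 123-4567", "email": "JANE@EXAMPLE.COM"},
    {"name": "jane doe", "phone": "555.123.4567",   "email": "jane@example.com"},
    {"name": "John Roe", "phone": "555-987-6543",   "email": "john@example.com"},
]

def standardize(rec):
    """Step 1: put contact fields into one comparable format."""
    return {
        "name": " ".join(rec["name"].lower().split()),
        "phone": re.sub(r"\D", "", rec["phone"]),   # keep digits only
        "email": rec["email"].strip().lower(),
    }

def dedupe(recs, keys=("name", "phone", "email")):
    """Step 2: treat the chosen identifiers as a match key; keep the first
    record seen for each key."""
    seen, unique = set(), []
    for rec in map(standardize, recs):
        key = tuple(rec[k] for k in keys)
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

print(len(dedupe(records)))  # the two "Jane Doe" rows collapse into one: 2
```

Note that without the standardization pass, the two "Jane Doe" rows would compare as different strings and survive the merge, which is exactly the failure mode described above.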
Ensuring Ongoing Data Quality
Once data systems are consolidated and existing data has been cleaned, it is important to guarantee the ongoing accuracy of data, especially contact data. Contact data affects not only a company’s ability to remove duplicate records, but also its ability to contact customers, segment accounts, perform geographical analysis, and issue appropriate policies with accurate risk assessment.
Contact data should be verified before it enters a centralized database. Human error is the main reason financial services organizations do not trust their data, according to a recent Experian QAS study. This means that verification should be automated, which virtually eliminates this risk.
Software-based validation tools should be put in place at each point of capture, such as agent portals, policy services and claims. That way, as information is provided by new policyholders or changed by existing policyholders, insurers can have confidence in the quality of data.
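A point-of-capture check can be as simple as rejecting obviously malformed entries before they reach the central database. The checks below are a minimal sketch with assumed field names and US phone formatting; production systems would typically call a verification service rather than rely on pattern matching alone.

```python
import re

# Loose plausibility pattern, not a full RFC-compliant email validator.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_at_capture(form):
    """Return a list of problems with a submitted contact form; an empty
    list means the data may enter the central database."""
    errors = []
    if not EMAIL_RE.match(form.get("email", "")):
        errors.append("email: not a plausible address")
    digits = re.sub(r"\D", "", form.get("phone", ""))
    if len(digits) != 10:  # assumes US phone numbers
        errors.append("phone: expected 10 digits")
    if not form.get("postal_code", "").strip():
        errors.append("postal_code: required")
    return errors

bad = {"email": "a@b", "phone": "555-1234", "postal_code": ""}
print(validate_at_capture(bad))  # three errors flagged before the data is stored
```

The same function can sit behind every capture point (agent portal, policy services, claims) so one set of rules governs all entry paths.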
In addition, the back-end scrubs described above should be run on a continual basis, and data should be checked against third-party files. That way, if a customer moves or changes a phone number, insurers have a better understanding of when that change took place and can reach out to the policyholder for updated contact information.
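A periodic scrub of this kind might look like the sketch below. The "move file" layout here is hypothetical; real change-of-address data comes from licensed third-party providers (the USPS NCOA service, for example), and matching is far more sophisticated than exact string comparison.

```python
# In-force policyholders, keyed by a hypothetical policy number.
policyholders = {
    "POL-1001": {"name": "jane doe", "address": "12 oak st"},
    "POL-1002": {"name": "john roe", "address": "9 elm ave"},
}

# Assumed third-party change file: (name, old address) -> reported move date.
move_file = {("jane doe", "12 oak st"): "2023-04-01"}

def flag_moves(holders, moves):
    """Flag policyholders whose address on file appears in the change file,
    along with when the move was reported, so staff can follow up for
    updated contact information."""
    flagged = {}
    for pol_id, rec in holders.items():
        key = (rec["name"], rec["address"])
        if key in moves:
            flagged[pol_id] = moves[key]
    return flagged

print(flag_moves(policyholders, move_file))  # {'POL-1001': '2023-04-01'}
```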
To make sure that these tools are right for each business, IT departments should spend some time analyzing data before selecting a solution. This will allow them to find common data errors and most frequently used information. This analysis allows projects to be prioritized and solutions to be chosen based on proven needs rather than assumptions.
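This kind of pre-selection analysis can start with a simple error profile of a data sample. The extract and error rules below are assumptions for illustration; the point is that counting failures per field turns "we think our phone numbers are bad" into measurable evidence for prioritizing a solution.

```python
from collections import Counter

# Hypothetical sample extracted from a legacy system.
sample = [
    {"phone": "5551234567", "email": "a@example.com", "zip": "10001"},
    {"phone": "",           "email": "bad-email",     "zip": "10001"},
    {"phone": "555",        "email": "",              "zip": ""},
]

def profile(records):
    """Count rule violations per field so remediation projects can be
    prioritized by observed error rates rather than assumptions."""
    errors = Counter()
    for rec in records:
        if len(rec["phone"]) != 10:   # crude US-number length check
            errors["phone"] += 1
        if "@" not in rec["email"]:
            errors["email"] += 1
        if not rec["zip"]:
            errors["zip"] += 1
    return errors

print(profile(sample).most_common())  # fields ranked by error count
```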
Having an accurate, consolidated database allows insurers to improve financial reporting and provides peace of mind as regulations change. Beyond regulatory compliance, accurate data also benefits many other departments across an insurance organization.
Each time contact information is entered incorrectly, that error could lead to many others, including an incorrect quote, a slowed claim or a communication being returned. When any of these events occur, staff members must spend extra time finding correct information. This slows down the processes and affects the bottom line and overall staff efficiency. Additionally, data errors hinder customer service efforts, as customers may have to wait longer for claim checks or may not receive required communications.
By improving the accuracy of contact data, insurers can streamline processes and give customers a higher level of service. In a business with such high turnover, that gives policyholders one less reason to leave and encourages continued renewals.
Accuracy Eases Regulatory Burden
By improving data accuracy now, insurers can be confident that heavy investments in new data management systems will not be wasted. These new systems will allow insurers to report to regulators efficiently, but without accurate data, the new systems will be ineffective.
While exact reporting structures are still being determined, taking steps today to consolidate data and improve accuracy will help improve efficiency and customer service now, while also helping insurers to prepare for the regulatory changes to come.