Insurance companies rely on catastrophe models to provide reliable estimates of loss, whether for managing risk over the long term or for understanding their loss potential in real time as an actual event unfolds. However, the reliability of model output is only as good as the quality of the exposure data used as input.
High-quality exposure data is essential for effective catastrophe risk management and for improved underwriting and reinsurance decisions. Furthermore, rating agencies are now requesting that companies report in detail on the quality of their exposure data.
The latest A.M. Best Supplemental Ratings Questionnaire, for example, requests that companies report whether data elements essential for estimating catastrophe risk, such as geocoded location, construction, occupancy, year built and other attributes, are specified for individual risks.
Assessing and ensuring the quality of the underlying exposure data used for catastrophe risk analyses can be challenging, but a comprehensive effort to improve exposure data quality can improve decision-making across the enterprise and can provide a competitive advantage.
Short of inspecting every property, insurers are looking to incorporate automated tools to improve data quality. A three-step exposure data assessment and enhancement process that encompasses validation, benchmarking and augmentation provides companies with a comprehensive approach to reach their data quality goals.
The Value of Validation:
A data validation analysis can help a company understand the magnitude and extent of uncertainty in its data quality. Underwriting departments can use the results of the analysis to enhance exposure data collection processes during underwriting. Portfolio managers can leverage these insights to devise ways to more effectively transfer exposure data from policy underwriting systems to portfolio management systems.
The information attained from a data validation analysis can provide a basis for external communication of data quality to reinsurers, rating agencies and regulators, which is quickly becoming a requirement for insurers exposed to catastrophe risk.
Furthermore, exposure data can have an impact on the prices insurers pay for reinsurance. As stated in a 2008 Ernst & Young exposure data quality survey, “…if cedants can eliminate some of the uncertainty from their data quality, the reinsurers will reward them for reducing the reinsurers’ risk.”
The Value of Benchmarking:
Comparing a company’s data to industry averages provides another way to assess exposure data quality: the comparison can validate whether the portfolio reflects the company’s underwriting strategy and whether its composition is sensible.
If the comparison is reasonable, decision-makers can be more confident in modeled losses. If not, the benchmarking analysis may highlight areas for further investigation and identify potential systemic problems in the integrity of the data the company is collecting.
Of course, not every portfolio should match up exactly with industry averages. In some cases, it may make sense for the two to differ, depending on the composition of the portfolio.
For example, a company may be intentionally writing higher-value homes in particular areas. If the analysis confirms that the data reflect this underwriting strategy, the results can help provide evidence of successful implementation of the strategy to stakeholders. If not, the company may be overvaluing or undervaluing its risks and should take a closer look at underwriting and exposure data collection practices.
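A benchmarking comparison of this kind can be sketched as a simple distribution check. In the sketch below, the industry shares, construction-type labels, and tolerance threshold are all hypothetical placeholders, not actual industry figures:

```python
from collections import Counter

# Hypothetical industry-average construction mix (illustrative only).
INDUSTRY_SHARES = {"wood_frame": 0.60, "masonry": 0.25, "steel": 0.10, "other": 0.05}

def benchmark(portfolio, tolerance=0.10):
    """Flag construction types whose portfolio share deviates from the
    industry share by more than `tolerance` (absolute difference)."""
    counts = Counter(risk["construction"] for risk in portfolio)
    total = sum(counts.values())
    deviations = {}
    for ctype, industry_share in INDUSTRY_SHARES.items():
        portfolio_share = counts.get(ctype, 0) / total
        if abs(portfolio_share - industry_share) > tolerance:
            deviations[ctype] = (portfolio_share, industry_share)
    return deviations

# A masonry-heavy portfolio deviates sharply from the (hypothetical) benchmark.
portfolio = [{"construction": "masonry"}] * 8 + [{"construction": "wood_frame"}] * 2
flagged = benchmark(portfolio)
```

A flagged deviation is not necessarily an error; as noted above, it may simply confirm a deliberate underwriting strategy, but it tells the analyst where to look.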
The Value of Augmentation:
When property data is missing or of questionable quality, it should ideally be enhanced with external data sources at the point of underwriting. Even when robust information has been used in underwriting, the data is not always effectively transferred from the underwriter to the portfolio-level decision-maker. This makes it essential to enhance exposure data at the portfolio level before it is used in catastrophe risk analyses and distributed outside the company.
While it is easy to detect missing data, it is much harder to identify incorrect or unreasonable data. Portfolio-level data validation can apply rules to flag unrealistic data (for example, a wood frame building with seven stories), but the challenge is determining which field is wrong (in this example, either the building height or the construction type).
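A cross-field plausibility rule of the kind just described can be sketched as follows. The field names, required-field list, and the six-story threshold are hypothetical; note that when a cross-field rule fires, every field involved is flagged, because the rule alone cannot say which value is the error:

```python
def validate_risk(risk: dict) -> list:
    """Return a list of (fields, message) tuples, one per rule violation."""
    issues = []

    # Completeness check: fields essential for catastrophe modeling.
    for field in ("latitude", "longitude", "construction", "occupancy", "year_built"):
        if risk.get(field) in (None, ""):
            issues.append(([field], f"missing value: {field}"))

    # Cross-field plausibility rule: wood frame buildings rarely exceed a
    # few stories, so a seven-story wood frame record is suspect -- but
    # either the height or the construction type could be wrong, so both
    # fields are flagged for investigation.
    if risk.get("construction") == "wood_frame" and (risk.get("stories") or 0) > 6:
        issues.append((["construction", "stories"],
                       "implausible: wood frame with more than 6 stories"))

    return issues

# The example from the text: a seven-story wood frame building.
flags = validate_risk({"latitude": 33.7, "longitude": -84.4,
                       "construction": "wood_frame", "occupancy": "residential",
                       "year_built": 1998, "stories": 7})
```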
Until now, the only solutions have been to go back to the property owner, a time-consuming and expensive effort, or to replace the incorrect data with industry averages, if available.
When data is missing or unreliable, companies need a way to efficiently augment their portfolio-level exposure data in a way that aligns with their current underwriting workflow. Fortunately, data can also be enhanced at the portfolio level.
When certain data necessary for catastrophe modeling is identified as missing or incorrect during the data validation process, it can be augmented using available databases of property-specific data. Once a portfolio has been augmented to the extent possible, catastrophe risk analyses will produce more reliable results.
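The augmentation step described above amounts to a keyed lookup against a reference database of property-specific data. In this minimal sketch, the reference database, its address key, and the field names are all hypothetical; recording which fields were filled preserves the distinction between reported and augmented values:

```python
# Hypothetical external property database, keyed by address (illustrative only).
PROPERTY_DB = {
    "101 Main St, Springfield": {"construction": "masonry",
                                 "year_built": 1987, "stories": 2},
}

FIELDS = ("construction", "year_built", "stories")

def augment(risk: dict) -> dict:
    """Return a copy of the risk with missing fields filled from the
    reference database when the address matches. Filled fields are
    recorded under 'augmented_fields' for provenance."""
    enriched = dict(risk)
    reference = PROPERTY_DB.get(risk.get("address"))
    if reference:
        for field in FIELDS:
            if enriched.get(field) in (None, ""):
                enriched[field] = reference[field]
                enriched.setdefault("augmented_fields", []).append(field)
    return enriched

# Missing year built and story count are filled; the reported
# construction type is left untouched.
result = augment({"address": "101 Main St, Springfield",
                  "construction": "wood_frame",
                  "year_built": None, "stories": None})
```

In practice the lookup would run only for fields flagged as missing or suspect during validation, so that reported data is never silently overwritten.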
The trend toward higher concentrations of properties in at-risk locations and the resulting increase in catastrophe losses have elevated the issue of data quality to executive decision-makers at many insurance companies. Options to validate and improve portfolio-level exposure data quality have historically been limited. Many companies set up manual processes to validate exposure data, but they are typically labor-intensive and limited in scope.
New approaches to address portfolio-level exposure data quality are now being employed by leading insurers. A sustained exposure data quality campaign that incorporates automated data validation, benchmarking and augmentation will increase the reliability of catastrophe risk analyses, improving the quality of a wide range of business decisions across the enterprise.