How effective are insurers in sorting through the mounds of information they possess to make insightful decisions?
Since insurers are intrinsically in an information business, it stands to reason that data collection and analysis are of paramount importance. Historically, insurers have struggled to trust decisions based on analysis of data from their legacy systems because of deep concerns about the quality of that data. And as carriers bring more modern software systems online, the data within those systems can accumulate at a mind-blowing, geometric rate.
An enormous pile of information can be intimidating, but carriers can find effective and efficient ways to take advantage of the valuable secrets lurking within. Here are two key areas where specific practices and technologies can markedly increase an insurer’s odds of data analysis success.
Systems must support data capture at appropriate levels of granularity, with tunable mechanisms, and in a suitably structured form. Granularity is a bit of an art form: systems must capture information at the right level of detail to optimize specific operational business processes as well as to support desired analyses. Too much decomposition creates an unmanageable explosion of information (lots of heat, not enough light), while overgeneralized models allow important discriminators or decision drivers to hide below the surface.
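To make the granularity trade-off concrete, here is a minimal, hypothetical sketch (the field names and amounts are illustrative, not drawn from any real carrier system). A single rolled-up premium answers one operational question, while per-coverage detail supports both that question and later analysis of which coverages drive the total.

```python
# Hypothetical sketch of two capture granularities for the same policy.

# Overgeneralized: one total, with the decision drivers hidden.
coarse = {"policy_id": "P-100", "annual_premium": 1842.00}

# Appropriately granular: per-coverage detail that still rolls up cleanly.
granular = {
    "policy_id": "P-100",
    "coverages": [
        {"code": "DWELL", "limit": 250_000, "premium": 1210.00},
        {"code": "LIAB",  "limit": 300_000, "premium": 412.00},
        {"code": "WATER", "limit": 10_000,  "premium": 220.00},
    ],
}

# The granular form can reproduce the coarse view on demand...
total = sum(c["premium"] for c in granular["coverages"])

# ...and also answer questions the coarse form cannot,
# e.g. which single coverage contributes the most premium.
top_driver = max(granular["coverages"], key=lambda c: c["premium"])["code"]
```

The reverse derivation is impossible: nothing in the coarse record can recover the per-coverage breakdown, which is why under-decomposed capture forecloses later analysis.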
Our experience is that carriers benefit from starting with models that are well normalized and can act as an industry “best-practice baseline,” but the data within each line of business, region, and channel then usually deserves some refinement. This means that a viable software system must provide a comprehensive ability to tune the models, in order to provide capture flexibility and enable an insurer to iteratively differentiate its offerings, pricing, and services.
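One way to picture a tunable baseline is a shared set of core fields that each line of business extends. This is a hypothetical sketch; the field and line-of-business names are invented for illustration, not a real product model.

```python
# Hypothetical sketch: a normalized baseline model, refined per line of business.

BASELINE_FIELDS = {"insured_name", "effective_date", "limit", "deductible"}

# Each line of business layers its own discriminators onto the baseline.
LOB_EXTENSIONS = {
    "homeowners": {"roof_age", "construction_class"},
    "commercial_auto": {"vehicle_count", "radius_of_operation"},
}

def fields_for(line_of_business: str) -> set:
    """Return the capture fields for a line: baseline plus any refinements."""
    return BASELINE_FIELDS | LOB_EXTENSIONS.get(line_of_business, set())
```

Because refinements are additive, every line still shares the normalized core, so cross-line analysis remains possible even as each line differentiates its capture.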
Finally, the data itself must be structured in a machine-interpretable way for software to do meaningful processing and to support downstream analysis. Systems with carefully elaborated models encourage insurer staff, producers, and, in today’s era of self-service, customers to enter source data in higher-quality, structured forms rather than as manual “free-form” content that is much harder to analyze later.
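The contrast between free-form and structured capture can be sketched as follows (a hypothetical example with invented field names). The structured record answers questions directly; the free-form note would first require fragile text parsing.

```python
# Hypothetical sketch: the same underwriting facts, captured two ways.

# Free-form capture: human-readable, but opaque to software.
free_form = "2-story frame house, roof replaced maybe 2015?, near the coast"

# Structured capture: machine-interpretable fields, some from
# controlled vocabularies, ready for filtering and aggregation.
structured = {
    "stories": 2,
    "construction": "FRAME",      # controlled vocabulary value
    "roof_year": 2015,
    "coastal_exposure": True,
}

# A downstream rule can be evaluated directly against structured data;
# against the free-form note it would need error-prone string mining.
needs_inspection = structured["coastal_exposure"] and structured["roof_year"] < 2018
```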
Moreover, systems that provide for more advanced forms of structural encoding, like geospatial content and synthetic data types, create opportunities for carriers to stretch their representational thinking and achieve even higher capture fidelity. This emphasis on “quality data in” provides the essential foundation for “quality decision insights out.”
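As one illustration of what geospatial encoding buys, coordinates captured as structured data support derived exposure measures, such as the straight-line distance between two insured risks, that an address stored as plain text cannot. This is a generic great-circle (haversine) sketch, not a feature of any particular core system.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# With coordinates captured per risk, exposure concentration questions
# ("how much insured value sits within 5 km of this risk?") become queries.
one_degree_lon_at_equator = haversine_km(0.0, 0.0, 0.0, 1.0)
```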
The data within core transactional systems is the lifeblood of an insurance carrier, but the laws of physics impose performance limits on the kinds of calculations and analysis that can be done in situ in operational data stores. Well-designed systems do provide a variety of in-line analyses and aggregated views into transactional data, but the most data-insightful carriers have grown entire ecosystems around their core systems, linking and feeding downstream analysis tools, frameworks, and repositories.
For example, carriers often choose to feed a traditional data warehouse, but some will go further and use a transport architecture that also feeds certain content to a NoSQL store or a faceted text search engine for real-time queries. The field is evolving so quickly that it is at least as important for an agile insurer to design for a dynamic set of data analysis components as it is to get any particular analytics path perfectly right. And for that even to be possible, of course, it is incumbent upon carrier core systems to be flexible, configurable enablers of this brave new data world.
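A transport architecture like the one described can be reduced to a simple fan-out idea: the core system publishes each change event once, and the set of downstream sinks (warehouse feed, NoSQL store, search index) can be registered, swapped, or extended without touching the core. This is a deliberately minimal, hypothetical sketch of that pattern, with lists standing in for real downstream stores.

```python
# Hypothetical sketch: core-system events fanned out to a dynamic set of sinks.

sinks = []  # downstream consumers, registered at configuration time

def register_sink(sink):
    """Add a downstream consumer without modifying the publisher."""
    sinks.append(sink)

def publish(event):
    """Core system emits each change event once; sinks each receive a copy."""
    for sink in sinks:
        sink(event)

# Stand-ins for a data-warehouse feed and a search-engine index.
warehouse_feed = []
search_index = []

register_sink(warehouse_feed.append)                      # full record
register_sink(lambda e: search_index.append(e["policy_id"]))  # searchable key only

publish({"policy_id": "P-100", "change": "endorsement"})
```

The point of the pattern is the one made above: getting any particular analytics path right matters less than being able to re-route or add paths as the analysis ecosystem evolves.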