The accumulation of data and the usage of that data in catastrophe modeling is a delicate business.
Tom Larsen, senior vice president of catastrophe-risk modeler EQECAT Inc., says the firm strives to stay on the leading edge through continuous adaptations to its models rather than disruptive, wholesale changes.
“You don’t want to release something and say, ‘Last year we told you to go left. Now we want you to go right,’” he tells NU.
WORLDCATenterprise, the firm’s current catastrophe-risk-modeling software platform, includes 181 natural-hazard models for 95 countries and territories spanning six continents.
This spring EQECAT is updating its catastrophe-modeling platform to meet the demand for more complete models, especially following events in 2011—like the earthquake and tsunami in Japan, as well as flooding in Thailand—which were not consistently modeled.
“Our clients seek more and better data for policy forms to differentiate between policies on a more granular level,” says Larsen. “It is our job to keep up to date with the latest science and prioritize with clients in order to help them get an accurate perspective of their risk.”
The new platform is meant to better support decisions throughout the insurance underwriting workflow.
Larsen says EQECAT incrementally updates its models instead of waiting for data from a single, large, claims-generating event to adjust.
Consider the destructive weather events of 2011 and how they changed the predictive-modeling landscape. From a better understanding of insurance layers in countries like Japan and New Zealand to the distinctive characteristics and scattered damage patterns of Hurricane Irene in the U.S., modelers were overloaded with additional data to fold into their imperfect products.
Irene alone presented several unique challenges, Larsen notes. With a large, major hurricane there is less uncertainty: it is easier to predict the impact and damage from a Category 5 storm than from Irene, which wasn’t particularly strong but caused damage across a large swath of the East Coast. The flooding from Irene was also a learning experience, he adds, because it actually drove wind losses higher; claims adjusters couldn’t get to the scene as fast as they would have liked.
Since the landscape is always changing, the data set is perpetually evolving, presenting modelers with endless opportunities for improvement.
Calculations for flood losses, for example, change “every time someone paves a parking lot or cuts down a forest,” says Larsen, who adds 2011 also presented modelers with more understanding of the relationship between flooding and saturated ground.
Research on climate change is also informing models, Larsen adds.
Modelers “can’t eliminate all uncertainty,” he admits, but clients and investors demand a sense of the losses following a significant event.
“After an event you can see stock prices based on whispered loss-projection numbers,” Larsen says. “Understanding these events is a necessary component of international commerce. We are compelled to come up with an estimate.”
Additionally, ratings agencies and regulators are relying on models to give them a better understanding of an insurer’s exposure, reserving and solvency. That use of modeling is growing in importance as new international solvency requirements are implemented.
“There are a lot of pressures [on insurers],” Larsen says. “This is what is driving the demand.”