Risk modeling is about identifying and quantifying risk in light of changing variables. Within an enterprise risk management (ERM) program, models may help companies evaluate potential future losses. But to get the most out of risk models, companies need to understand them at their most basic level and prepare for the potential risks that can lie within them.
So what is a model?
A model is a theoretical construct to help people better understand and analyze real world facts, events or future scenarios. Generally, all models have an information input, an information processor, and an output of expected results. Within a business context, models transform external and internal data from one or more human or technology sources into a new picture or view of information. This output can be then used by managers to assess possible future performance, and help them make more meaningful decisions.
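The input-processor-output structure described above can be sketched as a simple function. The loss-ratio "model" here is hypothetical and purely for illustration; real insurer models are far more elaborate, but the shape is the same.

```python
# A minimal sketch of the input -> processor -> output structure of a model.
# The loss-ratio model here is invented for illustration only.

def project_losses(premiums: float, expected_loss_ratio: float) -> float:
    """Processor: transform inputs (written premiums and an assumed
    loss ratio) into an output (projected losses in dollars)."""
    return premiums * expected_loss_ratio

# Inputs: $10M of premium and an assumed 65% loss ratio.
projected = project_losses(10_000_000, 0.65)
print(f"Projected losses: ${projected:,.0f}")  # Projected losses: $6,500,000
```

The output only becomes a "new picture or view of information" once a manager places it in context, for example against prior years or against plan.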
Modeling can be done quantitatively, using combinations of numerical data, statistical analysis, or economic, financial or mathematical theories. It can also be based, in whole or in part, on more subjective expert judgment. In either case, the goal is typically to arrive at a meaningful quantitative output expressed in measurable amounts, dollars or percentages.
Models can be used for many purposes, and are typically tailored to help evaluate or solve specific business needs or problems. For insurers, models are often used for practical purposes such as projecting sales targets from the addition of a new product line, or looking at buying patterns within a specific population of consumers.
When it comes to ERM, models may help:
- Simulate what claims might be generated after a natural catastrophe in a particular location.
- Estimate the risks of various investments in a financial portfolio over time.
- Quantify potential losses due to bankruptcy of partners or debtors.
- Predict ultimate production or customer loss due to failure or disruption of operational controls.
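The first bullet above, simulating what claims a natural catastrophe might generate, is often done with Monte Carlo methods. The sketch below assumes simple frequency and severity distributions chosen for illustration; they are not calibrated to any real portfolio or peril.

```python
# Illustrative Monte Carlo sketch: simulate total annual catastrophe claims.
# The frequency (0-3 events/year) and severity (lognormal) assumptions are
# invented for illustration, not calibrated to real data.
import random

def simulate_annual_claims(n_trials: int, seed: int = 42) -> list[float]:
    rng = random.Random(seed)
    totals = []
    for _ in range(n_trials):
        # Frequency: assume 0-3 catastrophe events per year.
        n_events = rng.randint(0, 3)
        # Severity: assume each event's losses are lognormally distributed.
        totals.append(sum(rng.lognormvariate(13, 1.5) for _ in range(n_events)))
    return totals

trials = sorted(simulate_annual_claims(10_000))
# The 99th-percentile simulated year approximates a 1-in-100-year loss.
print(f"Mean annual loss:   ${sum(trials) / len(trials):,.0f}")
print(f"1-in-100-year loss: ${trials[int(0.99 * len(trials))]:,.0f}")
```

The point is not the specific numbers but the mechanism: assumptions go in, a distribution of possible outcomes comes out, and every assumption is a potential source of model risk.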
The Risks of Models
However, while models are extremely useful, they can also be dangerous. The modeling process itself has risks, and can, directly or indirectly, cause significant loss. This includes potential reputational loss. Models often illustrate what “could” happen, not what “will” happen, and are not necessarily perfect reflections of reality. Over-reliance on model results, certain modeling techniques, or blind faith in the talents of the clever actuaries, mathematicians and other staff who run models can be disastrous.
Indeed, the failure of financial institutions to fully appreciate and account for weaknesses in the models underpinning their capital and solvency positions may have exacerbated the financial crisis of 2008. This has led regulators and rating agencies to focus more on model risk itself, encouraging companies to manage such risk as a specific component of their ERM efforts.
In reviewing the results of any model, the reviewer should consider the following points and potential risk areas, asking questions as needed to ensure that outputs are evaluated with appropriate caution and that any caveats or qualifications to the results are duly noted.
- How was the model designed? Is it a well-accepted, standard industry model, or was it developed in-house? Are there any potential errors in the design of the model itself? The risks inherent in models include faulty design, where the model simply does not describe or mirror the reality or problem it is supposed to represent.
- What specific purpose does the model serve? What is it supposed to show? A formula or model built for one problem is sometimes reused for a second problem that seems similar, but the linkage may not be as close as the modeler believes. Running data through the model for both scenarios can magnify the differences between them, producing less reliable results for the second issue than for the one the model was originally built for. For example, a model that extrapolates personal automobile losses from historical trends may not be a good fit for commercial auto losses if the drivers and causes (potential inputs into the model) of commercial auto claims are not exactly the same as for personal auto claims.
- What inputs or assumptions were made at the front end of the modeling process? Were they reasonable, clear, well-documented and agreed by the reviewing team? For example, who decides what a “representative sample” would be to run through a model, and what were the parameters of that decision? If the sample was not large enough, or not representative enough of the examined data set, outputs may be skewed.
- Companies must also regularly update, refine and enhance each of their models to reflect changing market conditions or shifting company goals, and evolve with the organization’s risk landscape. Have changes been made over time? How and by whom? Were the changes authorized? Were they made to correct inaccuracies or faults in prior versions?
- Has the model been quality reviewed, or the “math” double-checked? While many models in the insurance world are now highly complex and thus system-run, some modeling is still done by hand or via formulas in spreadsheets, and can be ripe for human error (such as cut-and-paste mistakes, wrong formulas, or missing or hidden data fields). Don’t rule out software or hardware bugs either, until a system has been well tested and many simulations have been run.
- Complex models may generate complex or ambiguous results which may require particular expertise to evaluate. Is the output data being properly interpreted? Are the people charged with interpreting results adequately trained or qualified to do so?
- Can results be taken out of context? Results in pure numerical form may mean nothing if not put in the appropriate context. For example, are you speeding if you are driving at a speed of 55 mph? Yes, if you are in a school zone with posted limits of 20 mph; no, if you are on the highway with a 65 mph limit. Facts alone are not “good” or “bad.” Appropriate context needs to be provided to determine the actual impact of a pure number or mathematical result, and this applies to model outputs as well.
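The “representative sample” point above can be made concrete with a toy experiment: estimating an average claim from a biased sample skews the output. All figures below are invented for illustration.

```python
# Toy illustration of sampling bias skewing a model input.
# The claim "population" is simulated (lognormal), purely for illustration.
import random

rng = random.Random(0)
# Population: 10,000 claims, mostly small, a few large.
population = [rng.lognormvariate(8, 1.2) for _ in range(10_000)]
true_mean = sum(population) / len(population)

# Biased sample: only claims below $5,000 (e.g. one region or product line),
# which systematically excludes the large claims.
biased = [c for c in population if c < 5_000][:500]
biased_mean = sum(biased) / len(biased)

# Random sample of the same size, drawn from the whole population.
fair = rng.sample(population, 500)
fair_mean = sum(fair) / len(fair)

print(f"True mean:   ${true_mean:,.0f}")
print(f"Biased mean: ${biased_mean:,.0f}")  # understates the true mean
print(f"Random mean: ${fair_mean:,.0f}")
```

The biased sample looks perfectly plausible on its own; only comparison against the full data set reveals that the input, and therefore every downstream output, is skewed.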
Minimizing Model Risk
In light of these risks, insurers have begun to better identify and strengthen their ERM efforts specifically around model risk management. As with other risks, the better that model risks are fully identified, and scored or prioritized from a potential loss or financial impact standpoint, the easier it should be for companies to identify and apply mitigating controls. Some of the specific steps companies can take to help strengthen their modeling framework, and help minimize related risks, include:
- Assigning specific roles and responsibilities for model risk, identifying specific model risk “owners” as well as ensuring that other managers and staff who are invested in the process have a role in the risk assessment process – either as risk “co-owners,” model change management advisors, or final approvers who validate and approve a model and its outputs.
- Strengthening documentation procedures, making sure that new model development rationale and methods are clearly documented, and that any assumptions, caveats and qualifications are consistently associated with the results of the model and widely disseminated to all reviewers of the data (as opposed, perhaps, to being distributed only to the core actuarial team, managers or risk committee making major decisions with the information).
- Making key models subject to more peer reviews, independent reviews and audits, as well as requiring dual-level sign-offs or a similar “four-eyes” review before publication.
- Monitoring and comparing model performance over time to actual results/historical data. Testing may also help, comparing results of one model against other models used for a similar purpose.
- Scheduling regular reviews to determine if model designs, inputs or assumptions need to be changed based on evolving market, product or economic conditions.
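The monitoring step above, comparing model performance against actual results over time, can be sketched as a simple backtest. The projected and actual figures, the years, and the 10% tolerance below are all invented for illustration.

```python
# A minimal backtesting sketch: flag periods where actuals deviate from the
# model's projections by more than a tolerance. All figures are invented.

def backtest(projected: list[float], actual: list[float],
             tolerance: float = 0.10) -> list[bool]:
    """Return True for each period where actuals deviate from projections
    by more than the tolerance (here 10%), signalling the model may need
    review."""
    return [abs(a - p) / p > tolerance for p, a in zip(projected, actual)]

projected = [4.8, 5.0, 5.2, 5.4]  # projected losses in $M, by year
actual    = [4.9, 5.3, 4.4, 5.5]  # actual losses in $M, by year
flags = backtest(projected, actual)
for year, flag in zip(range(2020, 2024), flags):
    print(year, "review" if flag else "ok")
```

A persistent pattern of flags is a signal to revisit the model's design, inputs or assumptions rather than to simply override the results.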
Finally, the overall profile of model risk can be raised by making it a high-level concern for the board of directors. Model risk should be discussed by the board or risk committee periodically even if, under the company’s pure financial risk assessment scale or risk “heat map,” model risk does not fall into a “Top 10” or “Top 20” list of risks. Since models often drive the priority assessments or scores of other risks (catastrophe loss, claim loss, business plan results), the inherent risk of models is in itself significant enough to warrant regular board attention and thought.
Recognizing and Accepting Model Risk
The more reliant insurance companies are on models to generate risk assessments in major business areas and support strategic decisions, the more important it will be for those companies to validate that models are appropriate and working as intended. Regulators and rating agencies are encouraging all financial institutions to pay closer attention to their models, recognizing both their strengths and weaknesses.
Having a robust process for managing model risks can help companies use models more confidently and effectively. The key is to help ensure that all stakeholders in the model results fully understand the objectives, definitions, design elements, ownership responsibilities, and limitations of the model at hand, so that they can more effectively judge its reliability.
Denise Tessier is senior regulatory consultant for Insurance Compliance Solutions, Enterprise Risk Management and the Consulting Practice at Wolters Kluwer Financial Services. She may be reached at email@example.com.