In late 2013, we discussed claims audits as a way to update and improve claims programs at P&C insurance organizations, beginning with the September 2013 article titled, “Why Auditing is a Springboard to Cost Reduction.”
After all, one of the most important outcomes of effective claims management is to reduce claims costs and expenses to the lowest reasonable values, while adhering to the legal and ethical requirements inherent in good faith claims handling.
We also began a discussion about some of the steps involved in auditing workers’ compensation, liability, and property claims. However, claims audits represent just one of the many components of a quality assurance (QA) program that must be implemented to not only maintain but also improve the claims-handling operations. The size, scope, and complexity of QA programs can differ significantly, depending on:
- Size of the organization
- Lines of business
- Geographic spread of the claims administrator
- Training program for the claims staff
- Experience and expertise of the claims staff
- Structure of the program (for example, whether claims are managed by insurers or TPAs versus self-administered)
- Extent to which vendors are used
- Client mix targeted
- Other market-specific and industry issues
Development of a comprehensive QA program, however, is beyond the scope of this article. Rather, in this article, we will focus on metrics and evaluation methods that can be used on an ongoing basis to augment the claims audit component of your QA program.
Activity-Based Metrics
The balance of this article will concentrate on two major items: activity-based metrics and results-based metrics. We refer frequently to activity-based metrics when we discuss claims-management best practices. It would be fair to describe best practices as the requirements and guidelines that many claims administrators believe will provide the path to optimal outcomes. Unfortunately, developing and tracking performance by compliance with best practices focuses on the activities instead of the results or outcomes, which are more difficult to measure.
Often, adjusters concentrate on reaching the milestones or “checking the boxes” rather than focusing on the best ways to manage a given claim. We should emphasize, however, that best practices are nevertheless useful in ensuring the proper foundation is set for managing the claim.
Here are a couple of examples. Claims administrators have created their own best practices, some of which differ in the timelines for the required steps. Common examples include:
- Make two- or three-point contact within one working day of claim receipt/assignment.
- Input initial reserves within one to five days of claim receipt (requirements may vary up to 30 days).
- Complete the investigation within five working days of claim receipt.
- Develop an action plan within 14 days of claim receipt.
- Create updated action plans every 45 days thereafter.
- Require the claims supervisor to review the claim at 30 days after receipt.
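To make the activity-based approach concrete, milestone tracking of this kind reduces to comparing completion dates against deadlines measured from claim receipt. The following is a minimal sketch; the field names, thresholds, and dates are illustrative assumptions, not a standard set of best practices:

```python
from datetime import date

# Illustrative best-practice deadlines, in days after claim receipt.
# These thresholds are assumptions for this sketch; real programs vary.
DEADLINES = {
    "initial_contact": 1,     # two- or three-point contact
    "initial_reserves": 5,    # input initial reserves
    "investigation": 5,       # complete the investigation
    "action_plan": 14,        # develop an action plan
    "supervisor_review": 30,  # supervisor reviews the claim
}

def milestone_compliance(received: date, completed: dict) -> dict:
    """Return True/False per milestone: completed within its deadline?"""
    results = {}
    for step, deadline_days in DEADLINES.items():
        done_on = completed.get(step)
        if done_on is None:
            results[step] = False  # milestone never completed
        else:
            results[step] = (done_on - received).days <= deadline_days
    return results

# A hypothetical claim received March 3, with one late milestone.
claim = {
    "initial_contact": date(2014, 3, 4),
    "initial_reserves": date(2014, 3, 6),
    "investigation": date(2014, 3, 12),  # 9 days after receipt: late
    "action_plan": date(2014, 3, 14),
}
print(milestone_compliance(date(2014, 3, 3), claim))
```

Note that a check like this can only confirm that boxes were checked on time; it says nothing about whether the claim was managed well, which is precisely the limitation of activity-based metrics.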
These and other best practices are important for confirming that claims adjusters and supervisors understand the steps required to begin the claims process in the right direction. It is also imperative that the adjuster complete certain steps before the next step can be performed properly and in a timely fashion.
Some of the metrics that a claims administrator should follow in the initial stages of a workers’ compensation indemnity (lost time) claim are presented on the next page.
For liability claims, best practices may be the same for items 1 through 5, with the exception that the number of contacts may differ, depending on how many insureds and claimants are involved in the claim. For property claims, items 1 through 5 again apply, although the contact might be limited to the insured. There will, however, be other best practices specific (and therefore applicable) to those lines of business. These may include, for example, conducting a site inspection within two days of contact for property claims.
Results-Based Metrics
Results-based metrics are the most difficult to develop, especially if the measurements are for long-tail claims, such as workers’ compensation indemnity (lost time) or liability bodily injury (BI) claims. In some cases, the final resolution may not occur for many months or years, making it difficult to connect early steps, activities, action plans, vendor/specialist utilization, and management procedures to the eventual outcomes.
Many companies have tried to use various outcome-based metrics. In some cases, this approach has backfired, because the solution led to practices that caused issues far worse than the original problem. For example, some companies that were appropriately concerned about significant reserve development or reserve stairstepping instituted programs in which adjusters were required to report to the client or risk manager should reserves reach a specific threshold. In such cases, management may have reprimanded the adjuster either for allowing the claim to reach that condition or for not setting the reserve accurately in the first place.
This punitive approach led to behaviors in which adjusters initially set unreasonably high reserves so they would not be forced to increase reserves later and be reprimanded for that occurrence. Or perhaps they did not increase reserves until the last minute, possibly when a settlement was payable, figuring they would be reprimanded only once rather than several times. These improperly administered, outcome-based metrics led to several additional problems, including:
- If reserves were inadequate, then there could have been an understatement of liabilities, which were identified at a later point when other financial problems may have arisen—and at the worst time to recognize the increased liabilities.
- Changes in reserving patterns may have made it more difficult for a casualty actuary to provide reasonable estimates of claims liabilities and expenses.
- If reserves were overstated, then this may have led to improper use of the company’s financial resources, which could have been used elsewhere for more beneficial purposes.
In some cases, these reserving problems, when perpetuated on a large scale, led to issues that were much more far-reaching than the original problem of stairstepped reserves or excessive reserve development. In fact, in certain circumstances, the modified behaviors that resulted from the “solution” led to the ultimate failure of the company.
So what are appropriate results-based metrics? The metrics should be based on final outcomes that are tied back to specific steps, activities, and management decisions that occurred many months or even years previously. This makes it more difficult to respond to questions from senior management about how structural or procedural changes and improvements are affecting today’s payments and liabilities, but these metrics are based on sound logic.
One example of a reserving metric based on comparing final values to earlier steps and activities is:
- Comparison of the relationship between the final closing value of a claim (at the only time when a claim’s ultimate cost is known and not subject to debate or conjecture) with the adequacy of case reserves at various points in the claim’s lifecycle.
For example, if a claim was closed after 15 months, how did the reserves at 60 days, 90 days, 180 days, 90 days before closing, and 30 days before closing compare to the ultimate total incurred cost, and what percentage of the final value did the reserve values reach at those points? The following line graph shows the comparisons for three adjusters. We will assume for purposes of this illustration that they handled the same claim with the same financial outcome. Note that they had very different reserving patterns, even though they started from the same point (possibly a table reserve mandated by senior management) and ended at the same paid value.
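The comparison just described reduces to a simple ratio: the case reserve at each checkpoint divided by the ultimate total incurred cost. A minimal sketch follows; the checkpoint labels and dollar amounts are hypothetical, chosen to resemble an adjuster whose reserves lag until just before closing:

```python
def reserve_adequacy(reserves_at_checkpoints: dict, final_incurred: float) -> dict:
    """Express each checkpoint's case reserve as a percentage of the
    claim's ultimate total incurred cost (known only at closing)."""
    return {
        checkpoint: round(100 * reserve / final_incurred, 1)
        for checkpoint, reserve in reserves_at_checkpoints.items()
    }

# Hypothetical reserving pattern: reserves stay flat for months,
# then jump toward the final value shortly before closing.
adjuster_1 = {
    "60 days": 10_000,
    "90 days": 10_000,
    "180 days": 12_000,
    "90 days before closing": 15_000,
    "30 days before closing": 48_000,
}
print(reserve_adequacy(adjuster_1, final_incurred=50_000))
# {'60 days': 20.0, '90 days': 20.0, '180 days': 24.0,
#  '90 days before closing': 30.0, '30 days before closing': 96.0}
```

Reserves sitting at 20 to 30 percent of the final value for most of the claim's life, then leaping to 96 percent at the end, is the kind of pattern this metric is designed to surface.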
Analyzing this line graph and the data behind it can lead to several hypotheses:
- Adjuster 1 either did not proactively manage the claim to gather the necessary information to make a reasonably accurate assessment of the case reserves during its life, or did not increase reserves when information was received that should have created a reserve change.
- Adjuster 2 seems to have obtained information throughout the life of the claim, and while a slight increase was necessary prior to closing the claim, the case reserves appeared to be within reasonable ranges at various points in the claim’s life, and certainly well before it closed.
- Adjuster 3 might have been subject to the reprimands that we discussed earlier, so s/he set a high reserve initially, which required no adjustments along the way, but which resulted in reserves that were overstated when compared to the final value.
Other scenarios may also explain these different reserving patterns, but gathering information such as this will help claims administrators determine the range of reasonable reserve values and reasonable patterns, allowing them to spend their time reviewing and fixing the deviations from the normal range. Certainly if adjusters consistently produce reserve patterns such as those shown for Adjuster 1 or Adjuster 3, the company should take action.
Other examples that compare outcome values to earlier steps or management activities may include comparisons of final values to the timeliness of initial reporting; timeliness of initial contact (and a sense of the thoroughness of the initial contact); timeliness of liability or compensability decisions; timeliness of the use of return-to-work programs; and use of case management vendors.
There is no question that creating metrics to measure claims excellence can be a challenging proposition, especially if a company is limited by an old claims system, or has failed or been unable to capture and retain data on the activities and management steps it has taken. However, if the organization is committed to continuous improvement, it must make opportunities to use the data it has maintained, initiate a program to capture more data in the future, and turn that information into something meaningful: metrics that not only measure the activities, but also tie the results back to those activities.