
Optimal risk model selection


Now that we have reviewed the performance of various interest rate risk models, we turn to the selection of the optimal risk model: the model that helps us most in the decision-making process. But how do we compare the different models, and what are the relevant metrics?

Ideally, we want a risk model that is both accurate and able to adapt quickly to changing conditions. In low-risk periods we want to see low risk estimates; in high-risk periods we want to see high risk estimates. In other words, our model should adapt to the current context.

To measure performance, we are therefore mainly interested in two metrics: model accuracy and model cost. Both can be evaluated in a model backtest, where we compare the model's predictions against the realisations.

Model accuracy

We define model accuracy as the difference between the expected breach frequency and the realised breach frequency. We call this metric the ‘validation distance’ (VD).

For example, if we project a 95% chance of staying within the limit (i.e. we expect 5% breaches), but the backtest shows that only 92% stayed within the limit (8% breaches), then we have a validation distance of 3%. In other words, we observed 3% more breaches than we predicted.
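To make this concrete, here is a minimal sketch of how the validation distance could be computed in a backtest. The function name and sign conventions (losses and VaR limits expressed as negative changes) are illustrative assumptions, not part of the definitions above.

```python
import numpy as np

def validation_distance(realised_changes, var_limits, confidence=0.95):
    """Validation distance: realised minus expected breach frequency.

    realised_changes: realised change per period (losses negative).
    var_limits:       predicted VaR limits per period (negative thresholds).
    confidence:       projected probability of staying within the limit,
                      so the expected breach frequency is 1 - confidence.
    """
    realised_changes = np.asarray(realised_changes)
    var_limits = np.asarray(var_limits)
    realised_breach_freq = np.mean(realised_changes < var_limits)
    expected_breach_freq = 1.0 - confidence
    return realised_breach_freq - expected_breach_freq

# Worked example from the text: 8% realised breaches vs 5% expected
# gives a validation distance of 0.08 - 0.05 = 0.03, i.e. 3%.
```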

Model cost

While model accuracy is mainly concerned with the frequency of breaches, we are also interested in the economic costs of applying a given model.

On the one hand, a breach (a realised loss exceeding the predicted VaR, an ‘overrun’) entails a cost, as it means we must recognise a financial loss that was not covered by a buffer. On the other hand, realisations that stay consistently below the predicted VaR (‘underruns’) are also costly, because we face the opportunity cost of holding an excessive risk buffer that could have been used for other purposes.

Large overruns mean that our model was too optimistic; large underruns mean that it was too conservative. For optimal risk model selection we want to minimise both the size of the overrun in case of breaches and the size of the underrun in case of non-breaches.

We measure this by the ‘average cost’ of the model: the average difference between the predicted risk and the realised change. As overruns affect us more negatively than underruns, we follow a loss-averse approach and penalise the cost of an overrun twice as much as the cost of an underrun.
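A minimal sketch of the average-cost metric, under the same assumed sign conventions as the validation-distance sketch above; the factor of two for overruns follows the loss-averse weighting described in the text.

```python
import numpy as np

def average_cost(realised_changes, var_limits, loss_aversion=2.0):
    """Average model cost: mean gap between prediction and realisation.

    Overruns (realised loss beyond the predicted VaR) are weighted by
    `loss_aversion`; underruns (unused risk buffer) are weighted by 1.
    """
    gap = np.asarray(realised_changes) - np.asarray(var_limits)
    overrun = np.where(gap < 0, -gap, 0.0)   # breach: loss exceeds the limit
    underrun = np.where(gap > 0, gap, 0.0)   # buffer left unused
    return np.mean(loss_aversion * overrun + underrun)
```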

How to select the optimal risk model

After defining the relevant metrics, we can compare the models and select the optimal risk model for decision making. We search for the lowest average cost while requiring a minimum level of accuracy. The required accuracy could depend on regulatory standards or our own quality standards. For now we choose a maximum validation distance of 4%, which is considered an upper bound under BIS/Basel accuracy standards.¹
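In code, this selection rule could look like the sketch below: filter on the accuracy requirement first, then pick the cheapest remaining model. The function name and tuple layout are illustrative assumptions.

```python
def select_optimal(candidates, max_vd=0.04):
    """Pick the lowest-cost model among those meeting the accuracy bound.

    candidates: iterable of (name, avg_vd, avg_cost) tuples.
    max_vd:     maximum acceptable validation distance (4% per the text).
    """
    accurate = [c for c in candidates if c[1] <= max_vd]
    if not accurate:
        raise ValueError("no model meets the accuracy requirement")
    return min(accurate, key=lambda c: c[2])
```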

Example: Interest rate risk model comparison

The table below summarises key performance metrics for a handful of risk models that we have backtested. We include the Solvency II Standard Formula (2016 and 2020 versions) as representative of regulatory models that require periodic recalibration. Further, we include a statistical, data-driven BASE model based only on historical data. Finally, we include a context-based REGIME-MACRO model that adapts automatically to changes in underlying macro variables. We tested the models over two distinct backtest periods: since 2005 and since 1950.

| Model | Since | VD 0.5% | VD 99.5% | Avg. VD | Accuracy | Avg. cost | Cost rating | Performance |
|---|---|---|---|---|---|---|---|---|
| Solvency II SF (2016) | 2005 | 24.5% | 7.5% | 16% | | 1.8% | + | |
| Solvency II SF (2020) | 2005 | 0.5% | 5.5% | 3% | + | 2.5% | 0 | 0 |
| Solvency II SF (2020) | 1950 | 0.5% | 1.5% | 1% | ++ | 4.0% | | |
| BASE (EU-period) | 2005 | 3.5% | 8.5% | 6% | 0 | 1.21% | ++ | 0 |
| BASE (full history) | 2005 | 0.5% | 3.5% | 2% | ++ | 1.75% | + | + |
| BASE (full history) | 1950 | 2.5% | 3.5% | 3% | + | 1.31% | ++ | ++ |
| REGIME-MACRO | 1950 | 2.5% | 4.5% | 3.5% | + | 1.25% | ++ | ++ |
Model performance comparison of various interest rate risk models. We measure accuracy by the validation distance (realised vs expected breach frequency). We measure model cost as the average cost of the underruns and overruns of the projected risk, applying loss aversion for losses due to breaches. We include two backtest periods for comparison: 2005-2023 and 1950-2023. The interest rate data is based on KEF data sources until 2004; we use DNB zero coupon rates at 12-year maturity thereafter. We review the original Solvency II Standard Formula (2016 version) and the recently updated 2020 version. The BASE (EU-period) model uses 12-year maturity interest rate data since 1994. The BASE (full history) and REGIME-MACRO models use the full history of interest rate data since 1900.
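As a sanity check, applying the `select_optimal` sketch from above to the Avg. VD and Avg. cost columns singles out the REGIME-MACRO model; the candidate list is a straight transcription of the table.

```python
# (name, avg. validation distance, avg. cost), transcribed from the table
candidates = [
    ("Solvency II SF (2016), since 2005", 0.160, 0.0180),
    ("Solvency II SF (2020), since 2005", 0.030, 0.0250),
    ("Solvency II SF (2020), since 1950", 0.010, 0.0400),
    ("BASE (EU-period), since 2005",      0.060, 0.0121),
    ("BASE (full history), since 2005",   0.020, 0.0175),
    ("BASE (full history), since 1950",   0.030, 0.0131),
    ("REGIME-MACRO, since 1950",          0.035, 0.0125),
]

best = select_optimal(candidates, max_vd=0.04)
print(best[0])  # -> "REGIME-MACRO, since 1950"
```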

Summary

Objective metrics like accuracy and cost help us select an optimal risk model. When we compare different interest rate risk models, it becomes clear that data-driven models outperform more static counterparts that need manual calibration. Furthermore, models based on a longer history improve accuracy.

In addition, we see that the context-dependent REGIME-based model improves costs while maintaining relatively high accuracy. This can also be observed for other risk factors: for land price risk, for example, we found that our context-dependent REGIME model leads to significant improvements over a BASE model.

At this time we cover multiple risk factors with up-to-date, context-aware risk profiles at yearly and quarterly frequencies. And we are constantly improving our models. Reach out for a free consultation to learn more.


Footnotes

  1. Basel Committee on Banking Supervision (2016). Minimum capital requirements for market risk.