After reviewing the performance of various interest rate risk models, we want to select the risk model that works best for optimal decision making. Let us therefore first define some relevant objectives.
Ideally, we want a risk model that is both accurate and that can adapt fast to changing conditions. In low-risk periods we want to see low risk estimates, in high-risk periods we want to see high risk estimates. In other words, our model should be able to adapt to the current context.
To achieve these objectives we measure the performance of a model with two metrics: model accuracy and model cost. Both are evaluated in a backtest that compares model predictions against the actual realisations.
Model accuracy
Model accuracy can be defined as the difference between the expected and the realised frequency of breaches. We call this metric the ‘validation distance’ (VD).
For example, suppose we project a 95% chance of staying within the limit (i.e. we allow or expect 5% breaches), but the backtest shows that only 92% of observations stayed within the limit (8% breaches). Then we have a validation distance of 3%: the model produced 3% more breaches than it predicted.
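The calculation above can be sketched in a few lines of code (a minimal illustration; the function name and inputs are ours, not from a specific library):

```python
def validation_distance(expected_breach_rate, breaches, n_observations):
    """Absolute difference between the expected and realised breach frequency."""
    realised_breach_rate = breaches / n_observations
    return abs(realised_breach_rate - expected_breach_rate)

# The example from the text: 5% expected breaches, 8 breaches in 100 observations
vd = validation_distance(0.05, 8, 100)
print(f"Validation distance: {vd:.0%}")  # prints "Validation distance: 3%"
```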
Model cost
Besides frequency of breaches, we are also interested in the economic costs of a model.
On the one hand, a breach (a realised loss larger than the predicted VaR, an “overrun”) entails a cost: we must recognise a financial loss that was not covered by a buffer. On the other hand, realised losses that stay consistently below the predicted VaR (an “underrun”) are also costly, because we face the opportunity cost of holding an excessive risk buffer that could have been used for other purposes.
Large overruns mean that our model was too optimistic; large underruns mean that it was too conservative. For optimal risk model selection we want to minimise both the size of the overrun in case of a breach and the size of the underrun in case of a non-breach.
We measure this with the ‘average cost’ of the model: the average difference between the predicted risk and the realised change. Since overruns affect us more negatively than underruns, we follow a loss-averse approach and penalise the cost of an overrun twice as much as the cost of an underrun.
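The loss-averse cost metric can be sketched as follows (illustrative code with made-up numbers; the 2x overrun penalty follows the loss-averse approach described above):

```python
def average_cost(predicted_var, realised_losses, overrun_penalty=2.0):
    """Average distance between predicted risk and realised change,
    with overruns penalised twice as heavily as underruns (loss aversion)."""
    total = 0.0
    for pred, real in zip(predicted_var, realised_losses):
        if real > pred:                      # breach: loss exceeded the buffer
            total += overrun_penalty * (real - pred)
        else:                                # non-breach: unused buffer capacity
            total += pred - real
    return total / len(predicted_var)

# Illustrative numbers (not from the article): one overrun, two underruns
print(average_cost([2.0, 2.0, 2.0], [1.5, 2.5, 1.0]))  # ≈ 0.83
```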
How to select the optimal risk model
After defining the relevant metrics, we compare the models to select the optimal risk model for decision making. We search for the lowest average cost while requiring a minimum level of accuracy. The accuracy threshold could depend on regulatory standards or on our own quality standards. For now we choose a maximum validation distance of 4%, which is considered an upper bound under BIS/Basel accuracy quality standards.¹
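This selection rule (minimise average cost subject to an accuracy constraint) can be sketched as follows. The candidate values are taken from the backtest table below; the data structure is ours, for illustration only:

```python
def select_model(candidates, max_vd=0.04):
    """Pick the model with the lowest average cost among those that meet
    the accuracy requirement (validation distance <= max_vd)."""
    admissible = [m for m in candidates if m["avg_vd"] <= max_vd]
    if not admissible:
        return None  # no model meets the accuracy standard
    return min(admissible, key=lambda m: m["avg_cost"])

# A subset of the backtested models and their metrics from the table below
models = [
    {"name": "Solvency II SF (2016)", "avg_vd": 0.160, "avg_cost": 0.0180},
    {"name": "BASE (full history)",   "avg_vd": 0.030, "avg_cost": 0.0131},
    {"name": "REGIME-MACRO",          "avg_vd": 0.035, "avg_cost": 0.0125},
]
print(select_model(models)["name"])  # prints "REGIME-MACRO"
```

The Solvency II SF (2016) model is excluded by the 4% accuracy constraint; among the remaining candidates, REGIME-MACRO has the lowest average cost.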

Example: Interest rate risk model comparison
The table below summarises key performance metrics for the risk models that we have backtested. We have included the Solvency II Standard Formula (2016 and 2020 versions) as representative of regulatory models that require periodic recalibration. Further, we have included a statistical, data-driven BASE model based only on historical data. Finally, we have included a context-based REGIME-MACRO model that adapts automatically to changes in underlying macro variables. We tested the models over two distinct backtest periods: since 2005 and since 1950.
| Model | Since | VD 0.5% | VD 99.5% | Avg. VD | Accuracy | Avg. cost | Cost rating | Performance |
|---|---|---|---|---|---|---|---|---|
| Solvency II SF (2016) | 2005 | 24.5 % | 7.5 % | 16 % | — | 1.8 % | + | – |
| Solvency II SF (2020) | 2005 | 0.5 % | 5.5 % | 3 % | + | 2.5 % | 0 | 0 |
| Solvency II SF (2020) | 1950 | 0.5 % | 1.5 % | 1 % | ++ | 4.0 % | — | — |
| BASE (EU-period) | 2005 | 3.5 % | 8.5 % | 6 % | 0 | 1.21 % | ++ | 0 |
| BASE (full history) | 2005 | 0.5 % | 3.5 % | 2 % | ++ | 1.75 % | + | + |
| BASE (full history) | 1950 | 2.5 % | 3.5 % | 3 % | + | 1.31 % | ++ | ++ |
| REGIME-MACRO | 1950 | 2.5 % | 4.5 % | 3.5 % | + | 1.25 % | ++ | ++ |
Summary
Objective metrics like accuracy and cost help us in the selection of an optimal risk model. When we compare the different interest risk models, we observe that:
- Data-driven models outperform static models that need manual calibration.
- Models based on a longer history are more accurate.
- Context-dependent (REGIME-based) models lower model costs even further while maintaining relatively high accuracy.
These observations also hold for other risk factors. For land price risk, for example, we found that our context-dependent REGIME model leads to significant improvements over the BASE model.
This article has shown how real-world validation on key objectives helps us select the best possible models.
Reach out for a free consultation to learn how our risk models can also help you in optimal decision making.
Footnotes
- Basel Committee on Banking Supervision (2016). Minimum capital requirements for market risk. ↩︎