Model Risk
This entry examines model risk: loosely speaking, the risk of error in the valuations produced by a pricing model or in the estimated risk measures produced by a risk model. It considers the nature of model risk and its diverse causes and manifestations, briefly addresses the scale of the problem and the dangers it entails, and then discusses ways in which model risk can be managed.
MODELS AND MODEL RISK
A model can be defined as “a simplified description of reality that is at least potentially useful in decision-making” (Geweke, 2005, p. 7). A model attempts to identify the key features of whatever it is meant to represent and is, by its very nature, a highly simplified structure. We should therefore not expect any model to give a perfect answer: Some degree of error is to be expected, and we can think of this risk of error as a form of model risk.
However, the term model risk is more subtle than it looks, and not all output errors are due to model inadequacy. For example, simulation methods generally produce errors due to sampling variation, so even the best simulation-based model will produce results affected by random noise. Conversely, models that are theoretically inappropriate can sometimes provide good results. The most obvious cases in point are the well-known “holes in Black-Scholes”: Simple option pricing models often work well even when some of the assumptions on which they are based are known to be invalid. They work well not because they are accurate, but because those who use them are aware of their limitations and use them discerningly.
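The point about sampling variation can be made concrete. The sketch below (with illustrative parameters, not taken from any source in this entry) prices the same European call two ways: with the Black-Scholes closed form and with a crude Monte Carlo estimator. Even though the simulation model is correctly specified, its answer carries random noise, which the standard error quantifies.

```python
import math
import random

def bs_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call (closed form)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def mc_call(S, K, r, sigma, T, n_paths, seed=0):
    """Monte Carlo price of the same call; returns (estimate, standard error)."""
    rng = random.Random(seed)
    disc = math.exp(-r * T)
    payoffs = []
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        ST = S * math.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
        payoffs.append(disc * max(ST - K, 0.0))
    mean = sum(payoffs) / n_paths
    var = sum((p - mean) ** 2 for p in payoffs) / (n_paths - 1)
    return mean, math.sqrt(var / n_paths)

exact = bs_call(100, 100, 0.05, 0.2, 1.0)
est, se = mc_call(100, 100, 0.05, 0.2, 1.0, n_paths=10_000)
print(f"closed form: {exact:.4f}, Monte Carlo: {est:.4f} +/- {se:.4f}")
```

The gap between the two prices is pure sampling error, not model inadequacy, which is exactly why not every output error signals a bad model.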
In finance, we are concerned with both pricing (or valuation) models and risk (or VaR) models. The former enable us to price financial instruments, and with these, model risk boils down to the risk of mispricing. These models are typically used on a stand-alone basis, and it is often very important that they give precise answers: Mispricing can lead to rapid and large arbitrage losses. Their exposure to this risk depends on such factors as the complexity of the position, the presence of unobserved variables (e.g., volatilities), interactions between risk factors, the use of numerical approximations, and so on.
Risk models are models that forecast financial risks or probabilities. These models are exposed to many of the same problems as pricing models, but are often also affected by the difficulties of trying to integrate risks across different positions or business units, and this raises a host of issues (e.g., aggregation problems, potential inconsistencies across constituent positions or models, etc.) that do not typically arise in stand-alone pricing models. So risk models are exposed to more sources of model risk than pricing models typically are. However, with risk models there is far less need for precision: Errors in risk estimates do not lead directly to arbitrage losses, and the old engineering principle applies that the end output is only as good as the weakest link in the system. With risk models, we therefore want to be approximately right, and efforts to achieve high levels of precision would be pointless because any reported precision would be spurious.
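The aggregation issue can be illustrated with a deliberately simple variance-covariance sketch; the positions, volatilities, and the single assumed correlation parameter below are all hypothetical. The same two positions produce very different aggregate VaR figures depending on the correlation the model assumes.

```python
import math

def portfolio_var(position_values, daily_vols, corr, z=1.645):
    """95% one-day variance-covariance VaR, assuming one common pairwise
    correlation between positions (a toy aggregation rule)."""
    n = len(position_values)
    variance = 0.0
    for i in range(n):
        for j in range(n):
            rho = 1.0 if i == j else corr
            variance += (position_values[i] * daily_vols[i]
                         * position_values[j] * daily_vols[j] * rho)
    return z * math.sqrt(variance)

positions = [1_000_000, 1_000_000]  # two hypothetical $1m positions
vols = [0.02, 0.03]                 # assumed daily return volatilities
for rho in (0.0, 0.5, 1.0):
    print(f"assumed correlation {rho:.1f}: "
          f"VaR = {portfolio_var(positions, vols, rho):,.0f}")
```

With perfect correlation the VaRs simply add; with zero correlation the aggregate is much smaller. A risk model must commit to some such assumption, and the choice is itself a source of model risk.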
We are particularly concerned in this entry with how models can go wrong, and to appreciate these problems it helps to understand how our models are constructed in the first place: We should get to know our models before relying on what they tell us.
SOURCES OF MODEL RISK
Incorrect Model Specification
One of the most important sources of model risk is incorrect model specification, which can manifest itself in many ways.
There is empirical evidence that model misspecification risk is a major problem. To give a couple of examples: Hendricks (1996) investigated differences between alternative VaR estimation procedures applied to 1,000 randomly selected simple foreign exchange portfolios, finding that these differences were sometimes substantial; more alarmingly, a famous study by Beder (1995) examined eight common VaR methodologies used by a sample of commercial institutions applied to three hypothetical portfolios, and among other worrying results found that alternative VaR estimates for the same portfolio could differ by a factor of up to 14. Further evidence is provided by Berkowitz and O’Brien (2002), who examined the VaR models used by six leading U.S. financial institutions. Their results indicated that these models can be highly inaccurate: Banks sometimes experienced losses very much larger than their models predicted, which suggests that these models are poor at dealing with heavy tails or extreme risks. Their results also suggest that banks’ structural models embody so many approximations and other implementation compromises that they lose any edge over much simpler models such as generalized autoregressive conditional heteroskedasticity (GARCH) models. The implication is that financial institutions’ risk models are very exposed to model risk—and one suspects many risk managers are not aware of the extent of the problem.
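The flavor of these findings is easy to reproduce. The sketch below (entirely synthetic data, not any bank's methodology) applies two standard VaR methodologies to the same return series; because the series has heavier tails than a normal distribution, the two estimates can disagree noticeably even though nothing in either calculation is a "bug."

```python
import math
import random

random.seed(42)
# Synthetic daily returns with occasional turbulence (a normal mixture),
# standing in for a real P&L history
returns = [random.gauss(0.0, 0.01) if random.random() < 0.95
           else random.gauss(0.0, 0.05)
           for _ in range(2000)]

# Methodology 1: parametric normal VaR at 99% confidence
mu = sum(returns) / len(returns)
sd = math.sqrt(sum((r - mu) ** 2 for r in returns) / (len(returns) - 1))
var_normal = -(mu - 2.326 * sd)

# Methodology 2: historical-simulation VaR at 99%
# (the empirical 1st-percentile loss of the same series)
var_hist = -sorted(returns)[len(returns) // 100]

print(f"parametric normal VaR:     {var_normal:.4f}")
print(f"historical simulation VaR: {var_hist:.4f}")
```

Both numbers are defensible answers to the same question; the divergence is model risk, not computational error.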
Incorrect Model Application
Model risk can also arise because a good model is incorrectly applied. To quote Emanuel Derman:
There are always implicit assumptions behind a model and its solution method. But human beings have limited foresight and great imagination, so that, inevitably, a model will be used in ways its creator never intended. This is especially true in trading environments, where not enough time can be spent on making interfaces fail-safe, but it’s also a matter of principle: you just cannot foresee everything. So, even a “correct” model, “correctly” solved, can lead to problems. The more complex the model, the greater this possibility. (Derman, 1997, p. 86)
One can give very many instances of this problem: We might use the wrong model in a particular context (e.g., we might use a Black-Scholes model for pricing options when we should have used a stochastic volatility model, etc.); we might have initially had the right model, but have fallen behind best market practice and not kept the model up to date, or not replaced it when a superior model became available; we might run Monte Carlo simulations with a poor random number generator or an insufficient number of trials, and so on. We can also get “model creep,” where a model is initially designed for one type of problem and performs well on that problem, but is then gradually applied to more diverse situations to which it is less suited or not suited at all. A perfectly good model can then end up as a major liability not because there is anything wrong with it, but because users don’t appreciate its limitations.
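The point about insufficient Monte Carlo trials is easily demonstrated. The toy estimator below targets a quantity with a known closed-form answer, E[max(Z, 0)] = 1/sqrt(2π) for standard normal Z; rerunning it many times shows how widely the estimate wanders when too few trials are used.

```python
import math
import random

def mc_mean_payoff(n_paths, rng):
    """Monte Carlo estimate of E[max(Z, 0)] for standard normal Z."""
    return sum(max(rng.gauss(0.0, 1.0), 0.0) for _ in range(n_paths)) / n_paths

true_value = 1.0 / math.sqrt(2.0 * math.pi)  # E[max(Z, 0)] = 0.3989...
rng = random.Random(7)
spreads = {}
for n in (100, 10_000):
    estimates = [mc_mean_payoff(n, rng) for _ in range(50)]
    spreads[n] = max(estimates) - min(estimates)
    print(f"n={n:>6}: 50 reruns spread over {spreads[n]:.4f} "
          f"around true value {true_value:.4f}")
```

The spread shrinks roughly with the square root of the number of trials, so a user who skimps on trials (or uses a poor random number generator) can get materially wrong answers from a perfectly sound model.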
Implementation Risk
Model risk also arises from the ways in which models are implemented. No model can provide a complete specification of model implementation in every conceivable circumstance because of the very large number of possible instruments and markets, and because of their varying institutional, statistical, and other properties. However complete the model, implementation decisions still need to be made about such factors as valuation (e.g., mark to market versus mark to model, whether to use the mean bid-ask spread, etc.), whether and how to clean data, how to map instruments, how to deal with legacy systems, and so on.
The possible extent of implementation risk is illustrated by the results of a study by Marshall and Siegel (1997). They sought to quantify implementation risk by looking at differences between how various commercial systems applied the RiskMetrics variance-covariance approach to specified positions based on a common set of assumptions (that is, a one-day holding period, a 95% VaR confidence level, delta-valuation of derivatives, RiskMetrics mapping systems, etc.). They found that any two sets of VaR estimates were always different, and that VaR estimates could vary by up to nearly 30% depending on the instrument class; they also found these variations were in general positively related to complexity: The more complex the instrument or portfolio, the greater the range of variation of reported VaRs. These results suggested that:
[A] naive view of risk assessment systems as straightforward implementations of models is incorrect. Although software is deterministic (i.e., given a complete description of all the inputs to the system, it has well-defined outputs), as software and the embedded model become more complex, from the perspective of the only partially knowledgeable user, they behave stochastically. … Perhaps the most critical insight of our work is that as models and their implementations become more complex, treating them as entirely deterministic black boxes is unwise, and leads to real implementation and model risks. (Marshall and Siegel, 1997, pp. 105–106)
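Even once a model is fixed, reported numbers remain sensitive to implementation conventions of the kind Marshall and Siegel held constant. A minimal sketch (hypothetical position, and the common but itself debatable square-root-of-time scaling) shows how far the headline VaR moves with the confidence level and holding period alone:

```python
import math

def vcov_var(position_value, daily_vol, z, horizon_days):
    """Single-position variance-covariance VaR, using the common
    square-root-of-time scaling convention."""
    return position_value * daily_vol * z * math.sqrt(horizon_days)

pv, vol = 10_000_000, 0.015  # hypothetical $10m position, 1.5% daily volatility
var_95_1d = vcov_var(pv, vol, 1.645, 1)    # 95% confidence, 1-day horizon
var_99_10d = vcov_var(pv, vol, 2.326, 10)  # 99% confidence, 10-day horizon
print(f"95%/1-day VaR:  {var_95_1d:,.0f}")
print(f"99%/10-day VaR: {var_99_10d:,.0f}")
```

The same position under two reporting conventions yields figures more than four times apart, which is why implementation choices must be specified and checked as carefully as the model itself.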
Endogenous Model Risk
There is also a particularly subtle and insidious form of model risk that arises from the ways in which traders or asset managers respond to the models themselves: Traders or asset managers will “game” against the model. Traders are likely to have a reasonable idea of the errors in the parameters—particularly volatility or correlation parameters—used to estimate VaR, and such knowledge will give the traders an idea of which positions have under- and overestimated risks. If traders face VaR limits or face risk-adjusted remuneration with risks specified in VaR terms, they will therefore have an incentive to seek out such positions and trade them. To the extent they do, they will take on more risk than suggested by VaR estimates, which will therefore be biased downward. Indeed, VaR estimates are likely to be biased even if traders do not have superior knowledge of underlying parameter values. The reason for this is that if a trader uses an estimated variance-covariance matrix to select trading positions, then he or she will tend to select positions with low estimated risks, and the resulting changes in position sizes mean that the initial variance-covariance matrix will tend to underestimate the resulting portfolio risk. As Shaw nicely puts it:
[M]any factor models fail to pick up the risks of typical trading strategies which can be the greatest risks run by an investment bank. According to naïve yield factor models, huge spread positions between on-the-run bonds and off-the-run bonds are riskless! According to naïve volatility factor models, hedging one year (or longer dated) implied volatility with three month implied volatility is riskless, provided it is done in the “right” proportions—i.e., the proportions built into the factor model! It is the rule, not the exception, for traders to put on spread trades which defeat factor models since they use factor type models to identify richness and cheapness! (Shaw, 1997, p. 215; his emphasis)
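The bias described above can be demonstrated with a deliberately simple experiment (all parameters hypothetical): every asset has identical true risk, but a trader who weights positions by inverse estimated variance systematically loads up on whatever happened to look quiet in the sample, so realized risk exceeds what the model reports.

```python
import math
import random

def reported_vs_actual_risk(rng, n_assets=10, n_obs=20, true_vol=0.02):
    """One trial: all assets are i.i.d. with the same true volatility, but the
    trader weights positions by inverse *estimated* variance.
    Returns (risk the model reports, actual portfolio risk)."""
    est_vars = []
    for _ in range(n_assets):
        rets = [rng.gauss(0.0, true_vol) for _ in range(n_obs)]
        m = sum(rets) / n_obs
        est_vars.append(sum((r - m) ** 2 for r in rets) / (n_obs - 1))
    inv = [1.0 / v for v in est_vars]
    weights = [x / sum(inv) for x in inv]   # overweight "quiet-looking" assets
    reported = math.sqrt(sum(w * w * v for w, v in zip(weights, est_vars)))
    actual = math.sqrt(sum(w * w * true_vol**2 for w in weights))
    return reported, actual

rng = random.Random(0)
ratios = [actual / reported
          for reported, actual in (reported_vs_actual_risk(rng)
                                   for _ in range(500))]
print(f"on average, actual risk is {sum(ratios) / len(ratios):.2f}x "
      f"what the model reports")
```

No trader cunning is needed: selecting positions on estimated risks is enough to make the reported figure an underestimate, exactly as the text argues.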
Other Sources of Model Risk
There are also other sources of model risk. Programs might have errors or bugs in them, simulation methods might use poor random number generators or suffer from discretization errors, approximation routines might be inaccurate or fail to converge to sensible solutions, rounding errors might add up, and so on. We can also get problems when programs are revised by people who did not originally write them, when programs are not compatible with user interfaces or other systems (e.g., datafeeds), when programs become complex or hard to read (e.g., when programs are rewritten to make them computationally more efficient but then become less easy to follow). We can also get simple blunders. Derman (1997, p. 87) reported the example of a convertible bond model that was good at pricing many of the options features embedded in convertible bonds, but sometimes miscounted the number of coupon payments left to maturity.
Finally, models can give incorrect answers because poor data are fed into them—“garbage in, garbage out,” as the saying goes. Data problems can arise from many sources: from the way data are constructed (e.g., whether we mark to market or mark to model, whether we use actual trading data or end-of-day data, how we deal with bid-ask spreads, etc.), from the way time is handled (e.g., whether we use calendar time or trading time, how we deal with holidays, etc.), from the way in which data are cleansed or standardized, from data being nonsynchronous, and so on.
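One such data problem, nonsynchronous or stale prices, is easy to exhibit. In the synthetic example below, one asset's quoted price updates only every other day; the measured correlation between the two assets then falls far below its true value, which would in turn distort any covariance-based risk estimate fed with those data.

```python
import math
import random

def corr(xs, ys):
    """Sample correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

random.seed(3)
rho, vol, n = 0.9, 0.02, 4000

# Two assets whose daily returns have true correlation 0.9
a_rets, b_rets = [], []
for _ in range(n):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    a_rets.append(vol * z1)
    b_rets.append(vol * (rho * z1 + math.sqrt(1 - rho**2) * z2))

# Asset B's quoted price updates only every other day, so its recorded
# daily return is zero on stale days and a catch-up two-day move otherwise
b_stale = []
for i in range(n):
    if i % 2 == 1:
        b_stale.append(0.0)
    else:
        b_stale.append(b_rets[i] + (b_rets[i - 1] if i > 0 else 0.0))

corr_clean = corr(a_rets, b_rets)
corr_stale = corr(a_rets, b_stale)
print(f"correlation from synchronous data: {corr_clean:.2f}")
print(f"correlation from stale data:       {corr_stale:.2f}")
```

Under these assumptions the measured correlation roughly halves: the data, not the model, are the weak link, yet the risk numbers come out wrong all the same.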
MANAGING MODEL RISK
Some Guidelines for Risk Managers
Given that risk managers can never eliminate model risk, the only option left is to learn to live with it and, hopefully, manage it. Practitioners can do so in a number of ways.
Some Institutional Guidelines
Financial institutions themselves can also combat model risk through appropriate institutional devices. One defense is a sound system to vet models before they are approved for use and then to review them periodically. A good model-vetting procedure, involving four steps, is proposed by Crouhy et al. (2001, pp. 607–608).
All these stages should be carried out free of undue pressures from the front office, and traders should not be allowed to vet their own pricing models. It is also important to keep good records, so each model should be fully documented in the middle (or risk) office. Risk managers should have full access to the model at all times, as well as access to real trading and other data that might be necessary to check models and validate results. The ideal should be to give the middle office enough information to be able to check any model or model results at any time, and do so using appropriate (that is, up to date) data sets. This information set should include a log of model performance with particular attention to any problems encountered and what (if anything) has been done about them. There should also be a periodic review (as well as occasional spot check) of the models in use, to ensure that model calibration is up to date and that models are upgraded in line with market best practice, and to ensure that obsolete models are identified as such and taken out of use. Such risk audits should also address not just the risk models, but all aspects of the firm’s risk management. And, of course, all these measures should take place in the context of a strong and independent risk oversight or middle office function.
REFERENCES
Beder, T. (1995). VaR: Seductive but dangerous. Financial Analysts Journal 51, 5: 12–24.
Berkowitz, J., and O’Brien, J. (2002). How accurate are value-at-risk models at commercial banks? Journal of Finance 57, 3: 1093–1112.
Black, F. (1992). How to use the holes in Black-Scholes. In R. Kolb (ed.), The Financial Derivatives Reader (pp. 198–204). Miami: Kolb Publishing.
Crouhy, M., Galai, D., and Mark, R. (2001). Risk Management. New York: McGraw-Hill.
Derman, E. (1997). Model risk. In S. Grayling (ed.), VaR—Understanding and Applying Value-at-Risk (pp. 83–88). London: Risk Publications.
Derman, E. (2004). My Life as a Quant: Reflections on Physics and Finance. Hoboken, NJ: John Wiley & Sons.
Dowd, K. (2005). Measuring Market Risk, 2nd ed. Chichester: John Wiley & Sons.
Geweke, J. (2005). Contemporary Bayesian Econometrics and Statistics. Hoboken, NJ: John Wiley & Sons.
Hendricks, D. (1996). Evaluation of value-at-risk models using historical data. Federal Reserve Bank of New York Economic Policy Review 2, 1: 39–69.
Ju, X., and Pearson, N. D. (1999). Using value-at-risk to control risk taking: How wrong can you be? Journal of Risk 1, 2: 5–36.
Kato, T., and Yoshiba, T. (2000). Model risk and its control. Bank of Japan Institute for Monetary and Economic Studies Discussion Paper No. 2000-E-15.
Marshall, C., and Siegel, M. (1997). Value at risk: Implementing a risk measurement standard. Journal of Derivatives 4, 1: 91–110.
Shaw, J. (1997). Beyond VaR and stress testing. In S. Grayling (ed.), VaR—Understanding and Applying Value-at-Risk (pp. 221–224). London: KPMG/Risk Publications.
Siu, T. K., Tong, H., and Yang, H. (2001). On Bayesian value at risk: From linear to non-linear portfolios. Mimeo, National University of Singapore, University of Hong Kong, and London School of Economics.