OPERATIONAL RISK MEASUREMENT AND PRICING


The measurement of operational risk (OR) attracted considerable attention in the wake of huge losses at investment banks such as Barings and Sumitomo. This is despite there being no agreed definition of operational risk. So what is it? To some institutions it is simply the risk not covered by market or credit risk. To the Basle Committee on Banking Supervision it is "the potential for unexpected losses to arise from inadequate systems, operational problems, breaches in internal controls, fraud or unforeseen catastrophes." It comprises the losses that follow from acts undertaken (or neglected) in carrying out business activities. This means that when a transaction is priced solely in terms of market and credit risks, an important risk, one that can have devastating financial consequences, is missing from the product price.

OR measurement will enable the risk capital allocated to cover OR losses to be determined from loss data. It will also highlight risky business activities and help management reduce the risk.

The development of a model for measuring OR begins with the database. Each event recorded there should carry its loss or potential loss, the business activity that gave rise to it, and other risk indicators. The creation and management of the database is key to understanding the control environment. In investment banking most losses will arise from processing a high volume of transactions, and will show up as interest payments to counterparties, fees and fines paid to exchanges, and the like. Retail banks will be exposed to fraud, legal and liability problems, and small claims arising from processing errors. The database should allow separate analysis for each business unit and for each potential loss-making activity.
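As an illustration, a loss-event record of this kind might be sketched as below. This is a minimal sketch only; the field names, types and the aggregation step are our own illustrative assumptions, not a schema prescribed by the authors.

```python
from dataclasses import dataclass, field
from datetime import date
from collections import defaultdict

@dataclass
class LossEvent:
    # Illustrative fields; a production OR database would carry many more indicators.
    event_date: date
    business_unit: str      # e.g. "settlements" or "retail branch network"
    activity: str           # the loss-making activity, e.g. "trade processing"
    loss_amount: float      # realised or potential loss, in base currency
    risk_indicators: dict = field(default_factory=dict)  # e.g. {"transaction_volume": 5000}

def losses_by_unit(events):
    """Total losses per business unit, supporting the unit-level analysis described above."""
    totals = defaultdict(float)
    for e in events:
        totals[e.business_unit] += e.loss_amount
    return dict(totals)
```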

MODELING CRITERIA

A reliable methodology would combine statistical/actuarial methods (using the loss data to estimate the joint and marginal distributions of loss severity and frequency) with econometric techniques such as multiple regression and discriminant analysis. In addition we borrowed quantitative methods developed for hydrology, where rare events (floods, storms and the like) have major financial consequences. Our modeling criteria were that the results must be easily understood, requiring no expert statistical interpretation, and computationally efficient, requiring, for example, no elaborate simulation studies. If we could mimic the value-at-risk (VaR) measures widely used in market and credit risk computations, so much the better; however, those measures failed to cope with the recent turmoil in the Asian and Russian markets.

An OR database would typically show high-impact events at low frequency among events of high frequency but low impact. A financial institution might therefore sort its losses into: 'expected loss', to be absorbed by net profit; 'unexpected loss', to be covered by risk reserves (so not totally unexpected); and 'stress loss', requiring core capital or hedging for cover. The 'expected loss' per transaction can easily be embedded in the premium: if a business unit handles 5,000 transactions in a typical week, with weekly 'expected losses' estimated at USD82,000, a loading of USD16.40 per transaction would cover them. It is the rare but extreme stress losses that the institution must be most concerned with.
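The per-transaction loading is simple division, and the three-way sort can be expressed the same way. The sketch below is illustrative only; the tier ceilings are left as arguments because in practice they would come from the fitted loss distribution, and the names used here are our own.

```python
def expected_loss_per_transaction(weekly_expected_loss, weekly_transactions):
    """Premium loading that absorbs the 'expected loss' tier."""
    return weekly_expected_loss / weekly_transactions

def classify_loss(loss, expected_ceiling, unexpected_ceiling):
    """Sort a loss into the three tiers described in the text."""
    if loss <= expected_ceiling:
        return "expected"    # absorbed by net profit
    if loss <= unexpected_ceiling:
        return "unexpected"  # covered by risk reserves
    return "stress"          # needs core capital or hedging

# The worked example from the text: USD82,000 spread over 5,000 weekly transactions.
print(expected_loss_per_transaction(82_000, 5_000))  # 16.4
```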

EXTREME VALUE METHODOLOGY

The normal distribution that forms the basis of much statistical inference must be replaced by a loss distribution with a thicker upper tail. The log-normal distribution played this role historically in econometric theory, and the Weibull in reliability modeling. Because of the paucity of extreme observations, we cannot hope to model with any precision the entire upper-tail distribution (the excess distribution), despite its importance for understanding aggregate loss. We therefore restrict ourselves to estimating only an extreme quantile. This corresponds to VaR, which uses the quantile to set a maximum limit on potential losses.
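In symbols (standard notation, not taken from the original article): if F denotes the distribution function of the loss, the extreme quantile being estimated is

```latex
% 100p% quantile of the loss distribution F, the VaR-style ceiling on losses:
x_p = F^{-1}(p) = \inf\{\, x : F(x) \ge p \,\}
```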

We might have chosen to work with any heavy-tailed model, but selected the Generalised Extreme Value (GEV) distribution, which encompasses the distributions (Weibull, Fréchet and Gumbel) that arise as limit distributions for the largest observation in a sample. The estimation procedure takes the largest loss observed in each of the preceding 12 months and obtains the parameters of the GEV best fitted by these 12 values. Estimation procedures are described in Embrechts et al. (1997); a sketch of the fitting step is given after the discussion below. The results can be updated daily, weekly or monthly on a rolling 12-month basis. The estimated 100p% quantile is called the maximum amount at risk at confidence level p (MaRp). In view of the heavy-tail characteristics, a very high quantile such as 99% can give very high figures, suggesting an economic capital allocation beyond what would be deemed appropriate, so the 95% quantile might prove more suitable. The following table shows values similar to those obtained from a typical fraud database of a clearing bank handling millions of transactions per day, with about 20 frauds in excess of GBP100,000 attempted each year, and only one massive fraud over a period of five years.

[Table not reproduced: fitted GEV quantile (MaRp) values from the fraud database.]

The single extreme case is seen to distort the shape of the fitted distribution. However, insurance premiums are similarly dependent on the previous year's claims history. Simulations (see Embrechts et al., 1997) demonstrate the difficulty of estimation for heavy-tailed distributions even when the exact model is known and data are plentiful. Tests of fit for any particular heavy-tailed distribution would lack the power to detect departures from it, as we saw in our own studies.
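As a sketch of the block-maxima fitting described above: the snippet below fits a GEV to 12 monthly maximum losses and reads off MaRp quantiles. It relies on scipy's genextreme distribution (whose shape parameter c equals minus the xi of the Embrechts et al. parameterisation), and the monthly maxima are invented purely for illustration.

```python
import numpy as np
from scipy.stats import genextreme

# Twelve monthly maximum losses (GBP), invented for illustration only;
# one massive value mimics the single extreme fraud discussed in the text.
monthly_maxima = np.array([
    120_000, 150_000, 110_000, 450_000, 130_000, 170_000,
    105_000, 140_000, 2_500_000, 125_000, 160_000, 115_000,
])

# Maximum-likelihood fit of the GEV. Note scipy's shape parameter c = -xi
# relative to the parameterisation in Embrechts et al. (1997).
c, loc, scale = genextreme.fit(monthly_maxima)

# MaRp: the estimated 100p% quantile of the monthly maximum loss.
for p in (0.95, 0.99):
    print(f"MaR at {p:.0%}: GBP {genextreme.ppf(p, c, loc=loc, scale=scale):,.0f}")
```

On a rolling basis the 12 maxima would be refreshed and the fit repeated; as noted above, a single extreme value will dominate the estimated shape.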

References:

Cruz, M., Coleman, R. & Salkin, G. (1998). Modeling and measuring operational risk. The Journal of Risk, 1(1), 63-72.

Embrechts, P., Klüppelberg, C. & Mikosch, T. (1997). Modelling Extremal Events for Insurance and Finance (Applications of Mathematics, Vol. 33). Springer.

 

This week's Learning Curve was written by Rodney Coleman, a senior lecturer in mathematics at Imperial College in London, and Marcelo Cruz, a director of operational risk at Warburg Dillon Read.

 
