Guide to parameters for challenging the Economic Capital Model


In the third blog in our series on challenging risk models, Tony and John look at the parameters relating to the capital model. Operational Risk Software can be key to supporting this discipline.

Taken from: Mastering Risk Management 

These parameters relate to the challenges that can be made to the Economic Capital Model’s parameters and to the input data used within the model.

The parameters we will now consider are: 

  • Frequency distribution
  • Severity (impact) distribution
  • Frequency percentage weight
  • Impact percentage weight
  • Business line correlation
  • Loss event type correlation
  • Use of provisioning for capital
  • Number of samples
  • Confidence level
  • Sampling seed
  • Distribution comparisons

Several of these parameters are also discussed in the qualitative data model parameters blog as they are equally relevant to the capital model. 

Frequency distribution 

Parameter description – This is the discrete distribution used for simulating the frequency of the elements that are being modelled. There are several possible distributions that can be used. 

Challenge – The Poisson is the most common distribution to use for a firm’s frequency modelling as it requires only a mean to describe the distribution. Given the particular nature of non-financial risk data and its paucity, another possible discrete distribution for frequency is the negative binomial. This distribution is appropriate when the variance is greater than the mean, i.e. the data are over-dispersed (and this is almost always the case with non-financial risk data).
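
To make the difference concrete, here is a minimal sketch (not taken from the book) that fits both distributions to the same hypothetical annual loss counts using the method of moments. The counts, the fit and the percentile reported are illustrative assumptions only.

```python
import numpy as np
from scipy import stats

# Hypothetical annual loss counts for one cell: over-dispersed (variance > mean)
counts = np.array([3, 7, 2, 11, 5, 9, 1, 14, 4, 8])
mean, var = counts.mean(), counts.var(ddof=1)

# Poisson needs only the mean
poisson_freq = stats.poisson(mu=mean)

# Negative binomial (method-of-moments fit) is only usable when variance > mean
if var > mean:
    p = mean / var
    n = mean * p / (1.0 - p)
    nbinom_freq = stats.nbinom(n=n, p=p)
    print("Poisson 99.9th percentile annual count:   ", poisson_freq.ppf(0.999))
    print("Neg. binomial 99.9th percentile count:    ", nbinom_freq.ppf(0.999))
```

The heavier tail of the negative binomial typically gives a higher extreme count than the Poisson for the same mean, which is one reason the choice of frequency distribution matters.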

How this parameter may affect capital – The frequency distribution is one of the two distributions used for modelling (the other one being the severity (impact) distribution). Different frequency distributions may give different occurrence values for the risk and may therefore affect the capital figure produced by the model. 

Severity (impact) distribution

Parameter description – This is the continuous distribution used for simulating the severity (impact) of the elements being modelled. Again, there are several possible distributions that can be used. 

Challenge – The lognormal is the most common distribution to use for a firm’s severity (impact) modelling as it requires only a mean and a standard deviation to describe the distribution. Given the nature of non-financial risk data and its paucity, there are only a few other possible continuous distributions for severity, such as the Gumbel and the Pareto. Most of the other distributions commonly used for modelling can be used for financial risk data. However, they cannot be used for non-financial risk data as they often require additional parameters which are not available due to the poor quality and quantity of non-financial risk data.
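
A minimal sketch of the parameterisation point, assuming a small hypothetical sample of loss amounts: the lognormal can be set up from just the mean and standard deviation of the log-losses, and an alternative such as the Gumbel from the same summary statistics. The figures and parameter choices below are illustrative only.

```python
import numpy as np
from scipy import stats

# Hypothetical individual loss amounts for one cell
losses = np.array([12_000, 3_500, 45_000, 8_200, 150_000, 6_700, 22_000])

# Lognormal: needs only the mean and standard deviation of the log-losses
mu, sigma = np.log(losses).mean(), np.log(losses).std(ddof=1)
lognormal = stats.lognorm(s=sigma, scale=np.exp(mu))

# Gumbel: an alternative severity distribution, parameterised here purely for illustration
gumbel = stats.gumbel_r(loc=losses.mean(), scale=losses.std(ddof=1))

for name, dist in [("lognormal", lognormal), ("Gumbel", gumbel)]:
    print(f"{name}: 99.9th percentile single-loss severity = {dist.ppf(0.999):,.0f}")
```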

How this parameter may affect capital – The severity (impact) distribution is one of the two distributions used for modelling (the other being the frequency distribution). Different severity distributions can significantly affect the capital figure produced by the model. 

Frequency percentage weight

Parameter description – This is the percentage numerical value (between 0 and 100) given to the frequency of each element of the data by cell to be used in the economic capital calculation. It reflects the importance of each element (by cell) in terms of its influence on the capital figure. The total of the weights of each cell/element must add to 100. 

For example, if there are large numbers of external losses in the data to be used, the frequency weight of external losses may be set to a small percentage number so that external loss data frequency does not swamp the capital calculation. Conversely, there will probably be a high frequency of the firm’s own transactional risk data which will be given a high percentage number as it relates to actual events occurring to the firm itself. 
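
As a simple illustration of how the weights work, the sketch below blends frequency estimates from four hypothetical data sources for a single cell; the source names, means and weights are assumptions for the example, not recommended values.

```python
# Mean annual event counts by data source for one cell (illustrative)
frequency_means = {
    "internal losses": 9.0,
    "external losses": 60.0,  # far more frequent, so down-weighted below
    "RCSA": 6.0,
    "scenarios": 2.0,
}

# Percentage frequency weights for the cell; they must sum to 100
weights = {
    "internal losses": 60,
    "external losses": 10,
    "RCSA": 20,
    "scenarios": 10,
}
assert sum(weights.values()) == 100

blended_mean = sum(frequency_means[k] * weights[k] / 100 for k in weights)
print(f"Blended frequency mean for the cell: {blended_mean:.2f} events per year")
```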

Challenge – The weights should reflect the quality and quantity of the data for each cell/element combination. It is likely that there will be a number of different weights applied to different cells/elements as the quality and quantity will vary depending on the collection of internal loss data, the size of the frequency of external loss data, the relevance of RCSA data (particularly with regard to the quality of the internal loss data) and the relevance of the scenario data. The challenge is to settle on a justifiable set of weights. These can be validated through mechanisms such as internal audit reports and key risk indicators (KRIs), as well as subject matter experts’ views and the comparison of different weightings to give different economic capital figures.

How this parameter may affect capital – The effect on capital may vary between subtle and significant. It is important to validate the weights where possible and to review these on (at least) an annual basis. Different weights should be tried to give different capital figures which can then be analysed for relevance and appropriateness. 

Impact percentage weight

Parameter description – This is the percentage numerical value (between 0 and 100) given to the severity of each element of the data by cell to be used in the economic capital calculation. It reflects the importance of each element (by cell) in terms of its influence on the capital figure. The total of the weights of each cell/element must add to 100. 

Challenge – The weights should reflect the quality and quantity of the data for each cell/element combination. It is likely that there will be a number of different weights applied to different cells/elements as the quality and quantity will vary depending on the collection of internal loss data, the size of the severity of external loss data, the relevance of RCSA data and the relevance of the scenario data. The challenge is to settle on a justifiable set of weights. These can be validated through mechanisms such as internal audit reports and KRIs, as well as subject matter experts’ views and the comparison of different weights to give different economic capital figures.

How this parameter may affect capital – Again, the effect on capital may vary between subtle and significant. It is important to validate the weights where possible and to review these on an annual basis. As noted above, different weights should be tried to give different capital figures which can then be analysed for relevance and appropriateness.

Business line correlation

Parameter description – Any two business lines can be correlated, at any value between -1.0 and +1.0.

Challenge – The default in economic capital models is often set at 0.50. It is difficult to clearly demonstrate correlations. However, there may be some pairs of business lines that can be qualitatively demonstrated to correlate, e.g. two retail business lines with different products but similar customer demographics. 

How this parameter may affect capital – If business lines are positively correlated the capital required will increase as a high value for one business line is associated with a high value for the second business line. Note that regulators in some industries expect the firm to have at least one model run with +1.0 correlations for all business lines (i.e. if one business line event happens, they all happen within that time period). This is very conservative in terms of the economic capital required to support a firm’s risk profile.  Conversely, a negative correlation between business lines will decrease the amount of capital required as a high value for one business line is associated with a zero (or even negative) value for the second business line. An economic capital model should automatically generate capital values at correlations of +1.0 and 0.0, as well as a correlated value. 
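
The sketch below illustrates the effect, assuming two hypothetical business lines with lognormal annual-loss distributions joined by a Gaussian copula; the distributions, the copula choice and the 99.9% level are assumptions for the example rather than the book’s model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims = 100_000

# Hypothetical annual-loss distributions for two business lines
line_a = stats.lognorm(s=1.0, scale=1_000_000)
line_b = stats.lognorm(s=1.2, scale=800_000)

for rho in (0.0, 0.5, 1.0):
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n_sims)
    u = stats.norm.cdf(z)                    # correlated uniforms (Gaussian copula)
    total = line_a.ppf(u[:, 0]) + line_b.ppf(u[:, 1])
    print(f"rho = {rho:+.1f}: 99.9% combined annual loss = {np.quantile(total, 0.999):,.0f}")
```

Running it shows the combined 99.9% figure rising as the correlation moves from 0.0 towards +1.0, which is the conservatism that the regulators’ +1.0 run is designed to capture.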

Loss event type correlation

Parameter description – Any two loss event types can be correlated, at any value between -1.0 and +1.0.

Challenge – The default in economic capital models is often set at 0.50. It is difficult to clearly demonstrate correlations. However, there may be some pairs of loss event types that can be qualitatively demonstrated to correlate, e.g. internal fraud and external fraud. 

How this parameter may affect capital – If loss event types are positively correlated the capital required will increase as a high value for one loss event type is associated with a high value for the second loss event type. 

Use of provisioning for capital 

Parameter description – Many firms have a long-term provisioning policy that deducts expected monthly losses from the monthly profits. It is unreasonable for economic capital to be required for losses that have already been provisioned in the P&L. Firms should therefore be able to deduct their provisioning amounts from their economic capital requirement. 

Challenge – Clearly there must be consistency in the accuracy of the provisioning and this can be challenged through back testing. In addition, if the provisions are allocated by the firm’s business lines and loss event types there will be transparency in adjusting the economic capital figure. However, any other sort of allocation or a lump sum provision will be difficult to both check and use. 

A further check could be carried out through reviewing the mean loss values generated by the capital model and reconciling them to the provision values. 

How this parameter may affect capital – If the provision amounts are deducted from the total capital figure, there will be a smaller capital requirement. However, if the provision amounts are in excess of the monthly expected losses, a larger-than-realistic amount will be deducted from the total capital figure. This will result in a lower capital figure than is actually required to operate the firm with its given risk profile. 
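
A minimal sketch of the deduction and the reconciliation check described above, using simulated annual losses and a hypothetical provision figure; the cap applied to the deduction is an assumption for the example, not a statement of the book’s approach.

```python
import numpy as np

rng = np.random.default_rng(7)
annual_losses = rng.lognormal(mean=13.0, sigma=1.0, size=100_000)  # simulated annual losses

capital_at_999 = np.quantile(annual_losses, 0.999)
mean_loss = annual_losses.mean()
annual_provisions = 500_000.0  # hypothetical provision booked through the P&L

# Reconciliation check: provisions should be broadly in line with the model's mean loss
print(f"Model mean annual loss: {mean_loss:,.0f} vs provisions: {annual_provisions:,.0f}")

# Deduct provisions, capped at the mean loss so over-provisioning does not understate capital
deduction = min(annual_provisions, mean_loss)
print(f"Capital after provision deduction: {capital_at_999 - deduction:,.0f}")
```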

Number of samples

Parameter description – This is the number of iterations in a given simulation. 

Challenge – The challenge is to find the range of iterations within which there are consistent results. This is known as the area within which convergence occurs. If the number of iterations is too small, the output derived from the simulated distribution will be unlikely to be consistent (over a number of simulations) as convergence has not been reached. If the number of iterations is too large, the output derived from the simulated distribution will again be unlikely to be consistent as outlier values will have been created.

Convergence can be observed both through the consistency of the outputs and through the building of the curves as the simulation progresses (see our blog on Monte Carlo simulations).

How this parameter may affect capital – If the number of simulations is not within the area of convergence, the capital values derived will alter materially from one simulation to the next. They may either be too big or too small, but will not be consistent. As noted in our Monte Carlo blog, modellers often aim for all the iteration results to be within, say, 1% of each other.
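
One simple way of looking for the convergence zone is sketched below: run the same toy frequency/severity simulation at increasing iteration counts and several seeds, and check whether the capital figures sit within, say, 1% of each other. The model, parameters and iteration counts are illustrative assumptions.

```python
import numpy as np

def simulate_capital(n_iter: int, seed: int) -> float:
    """Toy model: Poisson annual counts with lognormal severities; returns the 99.9% quantile."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(lam=5.0, size=n_iter)
    annual = np.array([rng.lognormal(mean=10.0, sigma=1.5, size=c).sum() for c in counts])
    return float(np.quantile(annual, 0.999))

for n_iter in (5_000, 20_000, 50_000):
    results = [simulate_capital(n_iter, seed) for seed in (1, 2, 3)]
    spread = (max(results) - min(results)) / np.mean(results)
    print(f"{n_iter:>6} iterations: spread of 99.9% figures across seeds = {spread:.1%}")
```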

Confidence level

Parameter description – This is the quantile at which we can be confident that the value derived from the simulated distribution will not be exceeded.

Challenge – Determine at which confidence level the firm wishes to set its economic risk capital. If a firm sets its economic capital level at the 90th centile, it will be more likely to fail because of lack of economic capital than if it sets its confidence level at the 99th centile. It should be noted that the regulatory confidence level for banks for non-financial risk is set at 99.9% and for insurance companies at 99.5%.

How this parameter may affect capital – A smaller confidence level (e.g. 99.5% rather than 99.9%) will result in a smaller capital figure.
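
For example, reading different quantiles from the same set of simulated annual losses shows how quickly the capital figure grows with the confidence level; the distribution and numbers below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(3)
annual_losses = rng.lognormal(mean=13.0, sigma=1.2, size=500_000)  # simulated annual losses

for level in (0.90, 0.995, 0.999):
    print(f"{level:.1%} confidence level: capital = {np.quantile(annual_losses, level):,.0f}")
```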

Sampling seed

Parameter description – This is a number that is input into the random number generator at the start of a simulation, and so is called the seed. The seed is the number from which all the random numbers are generated. For many random number generators it is often set at between 1 and 32,000, or at zero if you want the generator to choose ‘randomly’. A random number generator will produce exactly the same set of random numbers for the same seed. This is useful if you wish to duplicate the results. However, if randomly different results are required, you must remember to change the seed.
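
A minimal sketch of the reproducibility point, using an assumed toy loss distribution: the same seed returns an identical capital figure, while a different seed returns a slightly different one.

```python
import numpy as np

def run(seed: int) -> float:
    """Return the 99.9% quantile of a toy simulated loss distribution."""
    rng = np.random.default_rng(seed)
    return float(np.quantile(rng.lognormal(mean=12.0, sigma=1.0, size=100_000), 0.999))

print(run(123) == run(123))  # True: same seed, identical capital figure
print(run(123), run(456))    # different seeds give (slightly) different figures
```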

Challenge – Consider whether or not to use the same seed when generating simulations. Multiple runs with different seeds should be used to generate any economic capital results that will be relied upon. This enables the results to be averaged and therefore, hopefully, to be closer to the expected value.

How this parameter may affect capital – If the same seed is used, the same output will be obtained from the distribution, i.e. the same capital figure (within statistical bounds). Different seeds will produce small variations in capital (although these are statistically immaterial if the number of simulations is within the convergence zone).

Distribution comparisons

Parameter description – This is a comparison of the distributions used in the economic capital calculations.

Challenge – Review the data from the simulations using different distributions and determine which distribution produces the most reliable, appropriate and sound capital figure. 

How this parameter may affect capital – Different distributions have tails of different sizes. The size of a tail will influence the capital figure, with a thin-tailed distribution producing a lower capital figure. The lognormal, Gumbel and Pareto are three popular distributions with fat tails and so are more likely to be conservative in estimating the capital figure (rather than underestimating it by using thin-tailed distributions).
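
The sketch below compares the 99.9th percentile of the three distributions named above; the parameters are chosen purely for illustration (broadly comparable scales) and are not calibrated to any data.

```python
from scipy import stats

candidates = {
    "lognormal": stats.lognorm(s=1.0, scale=10_000),
    "Gumbel": stats.gumbel_r(loc=10_000, scale=8_000),
    "Pareto": stats.pareto(b=1.5, scale=10_000),
}

for name, dist in candidates.items():
    print(f"{name:>9}: 99.9th percentile = {dist.ppf(0.999):,.0f}")
```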

 

In our next blog Tony and John talk about the problems raised by the different tails, and how to deal with them.       

Mastering Risk Management by Tony Blunden and John Thirlwell is published by FT International. Order your copy here: https://www.pearson.com/en-gb/subject-catalog/p/mastering-risk-management/P200000003761/9781292331317    

For more information about how Operational Risk software can help your organisation, contact us today on sales@risklogix-solutions.com