11 Parameters for modelling qualitative risk data



In the second of our series of blogs on challenging risk models, Tony and John look at the parameters relating to the modelling of qualitative data. Operational risk software can be key to supporting this discipline.

Taken from: Mastering Risk Management 

These parameters relate to the challenges that can be made to the Qualitative Data Model's parameters and to its input data inside the model. The parameters that we will now consider are:

  • Impact granularity
  • Time frame factor
  • Categories used for modelling
  • Control assessment matrix ranges
  • Frequency distribution
  • Severity distribution
  • Control weight
  • Risk correlations
  • Number of samples
  • Confidence level
  • Sampling seed

Impact granularity

Parameter description – this enables any impact range in a heat map (risk assessment matrix) which is considered too large for modelling purposes to be subdivided into smaller sub-ranges.

Challenge – Impact ranges within a risk assessment matrix should be challenged to confirm that the range is neither too small nor too large for the risks that the range covers. If it is thought to be a large range, use of sub-ranges should be considered. 

How this parameter may affect capital – use of sub-ranges will result in capital that is more aligned with the expected impact of the risk concerned, rather than using a broad impact range that is less appropriate. It may also lead to a reduction in the economic capital required if there are more risks that are towards the lower end of the range.  Conversely, if there are more risks towards the higher end of the range the capital will be increased. 
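As a sketch of how sub-division works in practice, the snippet below splits a hypothetical 1m-10m heat-map band into three equal sub-ranges; all of the figures are illustrative assumptions, not taken from any particular model:

```python
# Illustrative only: band limits, sub-range count and assessed value are assumptions.
band = (1_000_000, 10_000_000)   # a broad impact range from a heat map
n_sub = 3                        # number of equal sub-ranges

width = (band[1] - band[0]) / n_sub
sub_ranges = [(band[0] + i * width, band[0] + (i + 1) * width) for i in range(n_sub)]

def impact_midpoint(value, ranges):
    """Return the midpoint of the sub-range containing the assessed value."""
    for lo, hi in ranges:
        if lo <= value <= hi:
            return (lo + hi) / 2
    raise ValueError("value outside band")

broad_midpoint = sum(band) / 2                             # midpoint of whole band
refined_midpoint = impact_midpoint(1_500_000, sub_ranges)  # midpoint of its sub-range
```

A risk assessed at 1.5m is then represented by 2.5m (the midpoint of the 1m-4m sub-range) rather than 5.5m (the midpoint of the whole band), pulling modelled capital towards the lower end of the range as described above.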

Time frame factor

Parameter description – This enables a RCSA that has been scored according to a non-annual time frame to be adjusted to an annual basis. For example, if risks have been assessed for a five-year impact on a five-year plan this parameter will adjust the loss values calculated to a one-year basis. 

Challenge – Check that the RCSA used in the modelling has an impact assessment which is based on a one-year time period. 

How this parameter may affect capital – if risks are assessed on a five-year basis, the impact will be considerably higher than on a one-year basis. The capital will therefore also be higher as it clearly takes more capital to support a firm through five years. 
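A minimal sketch of the adjustment, assuming a simple linear scaling of assessed frequency from the assessment horizon to an annual basis (the five-year horizon and the expected count are illustrative):

```python
# Illustrative assumptions: a five-year assessment horizon and an expected
# count of two occurrences over that horizon.
assessment_years = 5
time_frame_factor = 1 / assessment_years

freq_over_horizon = 2.0
annual_frequency = freq_over_horizon * time_frame_factor   # 0.4 events per year
```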

Categories used for modelling

Parameter description – this refers to the business lines that are used as a category for modelling (or any other category that may be required for modelling). 

Challenge – Although a firm will generally use its own business lines for modelling (along with its own loss event types), it is possible to model any other category against the loss event types. For example, a firm may be interested to see the capital required to support each risk owner, as risk owners are often different from business line heads.

How this parameter may affect capital – The economic capital calculation is generally based on a firm’s own business lines and its own loss event types. However, if for example it is based on risk owners and loss event types the capital will be different as the modelling will be based on a different set of data (risk owners instead of business lines).
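The idea of re-cutting the same data by a different category can be sketched as below; the records, names and amounts are hypothetical:

```python
# Hypothetical records: the same losses viewed under two different categories
losses = [
    {"business_line": "Retail", "risk_owner": "COO", "amount": 120_000},
    {"business_line": "Retail", "risk_owner": "CIO", "amount": 80_000},
    {"business_line": "Markets", "risk_owner": "COO", "amount": 200_000},
]

def totals_by(category, records):
    """Aggregate loss amounts under the chosen modelling category."""
    out = {}
    for r in records:
        out[r[category]] = out.get(r[category], 0) + r["amount"]
    return out

by_line = totals_by("business_line", losses)   # grouped by business line
by_owner = totals_by("risk_owner", losses)     # the same data, grouped by risk owner
```

The two groupings feed the model with different data sets, which is why the resulting capital figures differ.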

Control assessment matrix ranges

Parameter description – These are the design and performance failure probabilities for each control in the RCSA that is being modelled. 

Challenge – These parameters should be challenged through use of subject matter experts, audit reports, internal losses and events which lead to control failures, and relevant external events. All of this challenge data should be consistent with the assessment ratings given to each control. Further qualitative challenge can be made by comparing the reasonableness of the failure values with the perceived strengths and weaknesses of each control.

How this parameter may affect capital – Other than expected losses, losses only occur when controls fail, and therefore capital will only be required when controls fail. A higher level of control failure will require greater capital. Conversely, if all controls are perceived to be excellent, very little capital should be required as the firm will have very few losses. This is unusual, however: most firms have a limited amount of resources to spend on controls. This limited spending leads to controls which are often suboptimal, but the losses from the under-controlled risks are nevertheless within management's appetite.
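One simple way to combine a control's design and performance failure probabilities into an overall failure probability, assuming the two failure modes are independent (itself an assumption worth challenging), is sketched below with illustrative values:

```python
# Illustrative failure probabilities from a control assessment matrix
p_design_fail = 0.10    # probability the control's design fails
p_perform_fail = 0.05   # probability the control's performance fails

# The control only mitigates if neither failure mode occurs
# (independence of the two modes is assumed here)
p_control_fail = 1 - (1 - p_design_fail) * (1 - p_perform_fail)   # 0.145
```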

Frequency distribution 

Parameter description – This is the discrete distribution used for simulating the frequency of the risk. There are several possible distributions that can be used. 

Challenge – the Poisson is the most common distribution to use for a firm's frequency modelling as it requires only a mean to describe the distribution. Given the nature of non-financial risk data in particular, and its paucity, another possible discrete distribution for frequency is the negative binomial. This distribution is appropriate when the variance is greater than the mean (and this is almost always the case with non-financial risk data).

How this parameter may affect capital – The frequency distribution is one of the two distributions used for modelling (the other being the severity (impact) distribution). Different frequency distributions may give different occurrence values for the risk and may therefore affect the capital figure produced by the model.
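As a sketch, the snippet below samples from both distributions using only the Python standard library: a Poisson via Knuth's algorithm and a negative binomial via the standard gamma-Poisson mixture. The mean of 2 and variance of 6 are illustrative; note how the negative binomial reproduces the over-dispersion (variance greater than mean) described above:

```python
import math
import random
import statistics

rng = random.Random(42)

def poisson(lam, rng):
    """Knuth's algorithm; adequate for the small means typical of risk frequencies."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def negative_binomial(mean, var, rng):
    """Gamma-Poisson mixture; requires over-dispersion, i.e. var > mean."""
    shape = mean * mean / (var - mean)
    scale = (var - mean) / mean
    return poisson(rng.gammavariate(shape, scale), rng)

pois = [poisson(2.0, rng) for _ in range(20_000)]
nb = [negative_binomial(2.0, 6.0, rng) for _ in range(20_000)]

pois_mean = statistics.mean(pois)   # ~2, with variance also ~2
nb_mean = statistics.mean(nb)       # ~2, but with variance ~6
nb_var = statistics.variance(nb)
```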

Severity distribution

Parameter description – This is the continuous distribution used for simulating the severity (impact) of the risk. Again, there are several possible distributions that can be used. 

Challenge – the Lognormal is the most common distribution to use for a firm’s severity (impact) modelling as it requires only a mean and a standard deviation to describe the distribution. Given the nature of non-financial risk data and the paucity of the data, there are only a few other possible continuous distributions for severity such as the Gumbel and the Pareto. Most of the other distributions commonly used for modelling can be used for financial risk data. However, they cannot be used for non-financial risk data as they often require additional parameters which are not available due to poor quality and quantity of non-financial risk data.  

How this parameter may affect capital – The severity (impact) distribution is one of the two distributions used for modelling (the other being the frequency distribution). Different severity distributions can significantly affect the capital figure produced by the model. 
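A minimal lognormal sampling sketch, converting an assessed arithmetic mean and standard deviation (illustrative figures) into the distribution's underlying parameters:

```python
import math
import random
import statistics

rng = random.Random(7)

# Assessed impact statistics (illustrative assumptions)
mean, sd = 100_000.0, 150_000.0

# Convert the arithmetic mean and standard deviation into the lognormal's
# underlying (mu, sigma) parameters
sigma = math.sqrt(math.log(1 + (sd / mean) ** 2))
mu = math.log(mean) - sigma ** 2 / 2

samples = [rng.lognormvariate(mu, sigma) for _ in range(50_000)]
sample_mean = statistics.mean(samples)   # should be close to the assessed mean
```

Only the mean and standard deviation are needed, which is why the lognormal suits the limited data typically available for non-financial risk.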

Control weight

Parameter description – this parameter allows differentiation of the mitigating effect of different controls within a control cluster. Often one control is more vital in mitigating a risk than the others. The differentiation can be on a scale of, say, 1 to 10 where 10 is complete mitigation of the risk by that control and 1 is very poor, if any, mitigation. 

Challenge – All weights should be set to a default of 5. These default weights should be challenged so that the weights more closely reflect the mitigating effect of different controls. 

How this parameter may affect capital – if weights are set to high values, the capital required will be smaller as the controls mitigate the risks more completely.  However, the use of control weightings allows an economic capital to be calculated which is more aligned with the firm’s actual risk profile. 
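The 1-to-10 scale might be applied as sketched below; the mapping of weight to mitigation fraction and the multiplicative combination rule are assumptions for illustration, not a prescribed method:

```python
# Hypothetical mapping from a 1-10 control weight to a mitigation fraction,
# where 10 is complete mitigation (as in the scale described above).
def mitigation_fraction(weight):
    if not 1 <= weight <= 10:
        raise ValueError("weight must be between 1 and 10")
    return weight / 10.0

# Residual risk left by a cluster of controls, assuming their mitigating
# effects combine multiplicatively (an illustrative combination rule)
weights = [8, 5, 3]
residual = 1.0
for w in weights:
    residual *= 1 - mitigation_fraction(w)
# residual = 0.2 * 0.5 * 0.7 = 0.07, i.e. 7% of the gross impact remains
```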

Risk correlations

Parameter description – Data on risks which are correlated should be used by the model in order to get a more accurate economic capital figure. The correlation can be at any value between -1.0 and +1.0.

Challenge – the default in qualitative data models is often set at 0.00 (i.e. the risks are independent of each other). It is difficult to demonstrate correlations clearly. However, there may be some pairs of risks that can be qualitatively shown to correlate, e.g. cyber attack and loss of customer data.

How this parameter may affect capital – if risks are positively correlated, the capital required will increase as a high value for one risk is associated with a high value for the second risk. Note that regulators in some industries expect the firm to have at least one model run with +1.0 correlations for all risks (i.e. if one risk happens, they all happen within that time period). This is very conservative in terms of the economic capital required to support a firm's risk profile. Conversely, a negative correlation between risks will decrease the amount of capital required as a high value for one risk is associated with a zero (or even negative) value for the second risk.
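Correlation between two simulated risk drivers can be induced with a two-variable Cholesky step, as sketched below. The correlation of 0.6 is an illustrative assumption; in a full model the correlated normals would then drive the frequency or severity sampling, as in a Gaussian copula:

```python
import math
import random

rng = random.Random(1)
rho = 0.6   # assumed illustrative correlation between two risks

xs, ys = [], []
for _ in range(50_000):
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
    xs.append(z1)                                        # driver of risk 1
    ys.append(rho * z1 + math.sqrt(1 - rho * rho) * z2)  # driver of risk 2

# The sample correlation should recover rho, within simulation error
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
sample_rho = cov / (sx * sy)
```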

Number of samples 

Parameter description – this is the number of iterations in a given simulation. 

Challenge – the challenge is to find the range of iterations within which there are consistent results. This is known as the area within which convergence occurs.  If the number of iterations is too small, the output derived from the simulated distribution will be unlikely to be consistent (over a number of simulations) as convergence has not been reached. If the number of iterations is too large, the output derived from the simulated distribution will again be unlikely to be consistent as outlier values will have been created. 

Convergence can be observed both through the consistency of the outputs and through the building of the curves as the simulation progresses. (Monte Carlo simulations)

How this parameter may affect capital – If the number of simulations is not within the area of convergence, the capital values derived will alter materially from one simulation to the next. They may be either too big or too small, but will not be consistent. As noted in the Monte Carlo blog, modellers often aim for all the iteration results to be within, say, 1% of each other.
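Convergence can be explored by repeating the same simulation with different seeds and measuring the run-to-run spread of the output, as sketched below (the lognormal parameters, the 95th centile and the iteration count are illustrative; the achieved spread depends on all of them):

```python
import random

def simulated_quantile(n_iterations, seed, q=0.95):
    """One Monte Carlo run: the q-th centile of n simulated lognormal losses."""
    rng = random.Random(seed)
    losses = sorted(rng.lognormvariate(10, 1) for _ in range(n_iterations))
    return losses[int(q * n_iterations)]

# Repeat the same simulation with different seeds and compare the outputs
runs = [simulated_quantile(200_000, seed) for seed in range(5)]
spread = (max(runs) - min(runs)) / min(runs)   # relative run-to-run spread
converged = spread < 0.01                      # the "within 1%" rule of thumb
```

Increasing the iteration count shrinks the spread; the convergence zone is the range of iteration counts for which the spread stays within the chosen tolerance.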

Confidence level

Parameter description – this is the quantile at which we can be confident that the value derived from the simulated distribution will not be exceeded. 

Challenge – Determine the confidence level at which the firm wishes to set its economic risk capital. If a firm sets its economic capital level at the 90th centile it will be more likely to fail through lack of economic capital than if it sets its confidence level at the 99th centile. It should be noted that the regulatory confidence level for banks for non-financial risk is set at 99.9% and for insurance companies at 99.5%.

How this parameter may affect capital – A smaller confidence level (e.g. 99.5% rather than 99.9%) will result in a smaller capital figure.
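A sketch of reading capital at different confidence levels from a simulated loss distribution (the distribution parameters are illustrative):

```python
import random

rng = random.Random(99)
losses = sorted(rng.lognormvariate(10, 1.2) for _ in range(100_000))

def capital_at(confidence, sorted_losses):
    """Loss value not exceeded at the given confidence level."""
    idx = min(int(confidence * len(sorted_losses)), len(sorted_losses) - 1)
    return sorted_losses[idx]

capital_995 = capital_at(0.995, losses)   # insurance-style confidence level
capital_999 = capital_at(0.999, losses)   # banking-style confidence level
```

The 99.9% figure sits further into the tail of the same simulated distribution, so it is necessarily the larger of the two.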

Sampling seed

Parameter description – this is a number that is input into the random number generator at the start of a simulation, hence its name: the seed is the number from which all the random numbers are generated. For many random number generators it is set at between 1 and 32,000, or at zero if you want the generator to choose 'randomly'. A random number generator will produce exactly the same set of random numbers for the same seed. This is useful if you wish to duplicate results. However, if different random results are required, remember to change the seed.

Challenge – Consider whether or not to use the same seed when generating simulations. As many randomly-seeded simulations as practicable should be used to generate any economic capital results that will be relied upon. This enables results to be averaged and therefore, hopefully, to be closer to the expected value.

How this parameter may affect capital – if the same seed is used, the same output will be obtained from the distribution, i.e. the same capital figure (within statistical bounds). Different seeds will produce small variations in capital (although statistically immaterial if the number of simulations is within the convergence zone).
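The reproducibility property is easy to demonstrate: the same seed yields an identical simulated figure, while a different seed yields a different one. The toy capital calculation and its parameters are illustrative:

```python
import random

def simulated_capital(seed, n=10_000):
    """A toy capital figure: total of n simulated lognormal losses."""
    rng = random.Random(seed)
    return sum(rng.lognormvariate(8, 1) for _ in range(n))

run_a = simulated_capital(seed=123)
run_b = simulated_capital(seed=123)   # same seed: identical random stream
run_c = simulated_capital(seed=456)   # different seed: different stream
```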

In our next blog Tony and John talk about the parameters relating to the capital model.       

Mastering Risk Management by Tony Blunden and John Thirlwell is published by FT International. Order your copy here: https://www.pearson.com/en-gb/subject-catalog/p/mastering-risk-management/P200000003761/9781292331317    

For more information about how Operational Risk software can help your organisation, contact us today on sales@risklogix-solutions.com 

 
