The standard model in Solvency II is totally inadequate for operational risk. Many in the industry know this. But nothing is likely to change before 2012.
“How to treat operational risk has been a very important question from the start of the Solvency II project,” says Luca Ziewer, insurance partner at Oliver Wyman. Indeed, ask anyone in the industry and they tell you that their business takes operational risk very seriously. Yet, Ziewer adds, “the risk measures used for operational risk are still very crude.”
“Understanding operational risks is being increasingly viewed as important for better management of insurers,” continues Ziewer, “but this sentiment is not being developed into something concrete.”
Ziewer recalls that the executives responsible for risk management at the big insurance companies said years ago that one of their key priorities was to develop an operational risk management framework. “Yet nothing much has changed, not even where insurers use their own internal models to measure operational risk,” he observes.
The comments echo those of the Committee of European Insurance and Occupational Pensions Supervisors (CEIOPS), which reported in November 2007 that “the majority of undertakings that answered the QIS3 [quantitative impact study 3] questionnaire seem to recognise operational risk as an area that requires special attention.” However, CEIOPS also noted in its report, “Many participants considered the operational risk module as tested under QIS3 as being too simplistic.”
From fires to misplaced decimal points
Operational risks range from a company’s headquarters burning down to a misplaced decimal point in the data input. The Solvency II directive has adopted the Basel II definition of operational risk: the risk of loss resulting from inadequate or failed internal processes, people and systems, or from external events.
Many operational risks affecting insurers are not unique to the insurance industry: the loss of key personnel, IT failures and external fraud, for instance. But there is also “a whole set of risks at a more detailed level that occurs only in an insurance company,” explains Tony Blunden, director at operational risk specialists RiskLogix. A workshop held last year by the Association of British Insurers (ABI) identified reinsurance process failure as a key insurance operational risk; Blunden highlights poor claims handling and inaccurate or imprecise policy wording.
Despite having seen numerous examples of what can go wrong, the majority of insurers are reluctant to confront operational risk. Regulatory moves are having some effect. In the UK, the Individual Capital Adequacy Standards (ICAS) regime, instituted at the end of 2004, required allocation of capital against operational risk for the first time.
Little discussion on operational risk
Although Solvency II is now raising awareness of operational risk across Europe, Rachel Delhaise, managing director at Guy Carpenter, says that, on the whole, there is “very little discussion on operational risk” in Europe at the moment, or at least little evidence of it.
The problem, as Mike Wilkinson, who leads the risk management consulting team at EMB, notes, is that, unlike other risks, the management of operational risk is not key to driving the business: “Insurers actively encourage insurance risk because that’s how they make their money. Equally, with market risk there is money to be made by taking it on. If you look at operational risk, there’s no real reward. The only thing you can say is that there is a cost with implementing operational controls.”
Insurers therefore need incentives to implement operational controls, but the current Solvency II proposals for the solvency capital requirement (SCR) have been criticised for not providing them. For one thing, the standard formula is said to be too simplistic. Although the details have yet to be finalised, the capital to be set against operational risk is worked out separately from the other risks and is expressed as a percentage to be added on to the basic SCR (the SCR minus operational risk).
Article 106 of the directive states, “the calculation of the capital requirement for operational risk shall take account of the volume of those operations, in terms of earned premiums and technical provisions, which are held in respect of those insurance and reinsurance obligations.”
In other words, the larger the insurer, the greater the operational risk charge. In reality, however, the relationship is not linear: larger firms often have the resources to handle operational risk in a more sophisticated way. As it stands, the standard formula cannot reward this.
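The volume-based structure described in Article 106 can be sketched as follows. This is a minimal illustration of the shape of the calculation only: the factors below are hypothetical placeholders, not the actual QIS calibration, and the real specification distinguishes between lines of business.

```python
# Illustrative sketch of a volume-based standard-formula charge for
# operational risk, in the spirit of Article 106 of the directive.
# The factors are placeholders, NOT the actual Solvency II calibration.

F_PREMIUM = 0.03     # hypothetical factor applied to earned premiums
F_PROVISION = 0.003  # hypothetical factor applied to technical provisions

def op_risk_charge(earned_premiums: float, technical_provisions: float) -> float:
    """Charge grows with business volume, regardless of control quality."""
    return max(F_PREMIUM * earned_premiums,
               F_PROVISION * technical_provisions)

def scr(basic_scr: float, earned_premiums: float, technical_provisions: float) -> float:
    # The operational risk charge is simply added on top of the basic SCR.
    return basic_scr + op_risk_charge(earned_premiums, technical_provisions)
```

Because the charge depends only on volume measures, two insurers of the same size attract the same charge however good or bad their operational controls, which is the non-linearity problem the text describes.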
One of the consequences of loading operational risk on top of the basic SCR is that it does not allow for any diversification benefits between operational risks and other types of risks. It also encourages companies to treat operational risk as a separate problem, instead of one cutting right across the business.
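The cost of the add-on treatment can be made concrete with a toy calculation. The stand-alone capital figures and the 0.25 correlation below are invented for illustration; the point is only that correlation-based aggregation, as used between the other risk modules, produces a lower total than straight addition.

```python
from math import sqrt

# Hypothetical stand-alone capital figures for three risks (illustrative).
market, insurance, operational = 400.0, 300.0, 200.0

def aggregate(a: float, b: float, rho: float) -> float:
    """Correlation-based aggregation of two capital charges."""
    return sqrt(a**2 + b**2 + 2 * rho * a * b)

# Basic SCR: market and insurance risk aggregated with an assumed
# correlation of 0.25, so the combined charge is below the simple sum.
basic_scr = aggregate(market, insurance, rho=0.25)

# Operational risk is added on top in full: no diversification credit.
scr_add_on = basic_scr + operational

# Had operational risk been aggregated with the same assumed correlation,
# the total requirement would come out lower.
scr_diversified = aggregate(basic_scr, operational, rho=0.25)
```

With these numbers the add-on total exceeds the diversified total, which is the benefit the standard formula forgoes by bolting operational risk onto the basic SCR.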
Problems with the standard approach
“The problem with the whole standardised approach is that it doesn’t really reflect sufficiently the overall risks in the business. It’s not just operational risk, but operational risk is the worst offender,” Wilkinson says. “You have to do it to understand your overall capital effect, but by treating operational risk as separate it loses a lot of the value.”
Of course, a standard model by its very nature will have drawbacks. “Standard formulas are inevitably simplistic,” says Delhaise. Firms seeking greater insight into their operational risk can choose the internal model route.
Yet Solvency II does not provide the incentive to take this route. The results of the third quantitative impact study (QIS3) showed that using the standard model to calculate operational risk led to a capital allocation almost half that produced by firms’ own internal models. The QIS4 calibration was better, but it still understated the capital needed compared with firms’ own models. This anomaly worries many in the industry. As one regulator put it, “the current standard formula is nonsense for operational risk.”
Mariano Selvaggi, who heads ORIC at the ABI, believes CEIOPS is becoming aware that operational risk needs more attention: “They are not happy at all with the way in which the standard formula is working at the moment; the resulting capital charge generally is not in line with supervisors’ own practical experience across the EU.”
Oliver Wyman’s Ziewer is not optimistic that it will be addressed, however: “CEIOPS should address it, but I’m not at all sure that they will before the start of Solvency II. I think the process that they used for QIS4 will be what the industry has to start with when Solvency II gets implemented.”
This is a particular concern for the UK insurance industry. Justin Elks, associate director of risk and compliance at Just Retirement, says, “My big worry here is that firms wilfully don’t develop operational risk models because they don’t see the incentive for doing so. From my discussions with the FSA, I think the UK regulator recognises the concerns here and is worried about the danger of it moving the UK backwards from where we are with the ICAS regime.”
Even if the incentives are there, the step up to calculating operational risk through an internal model is plagued with difficulties. “As it happens, it’s something that you can’t quantify to a 99.5% confidence level,” says operational risk consultant John Thirlwell.
Internal model guidelines in the directive are not broken down into different risks, so there are no guidelines specifically for operational risk. Like all risks, therefore, operational risk calculations will need to pass the following seven tests: data, statistical quality, calibration, validation, profit and loss attribution, documentation and use.
Under Basel II, many banks aimed for the more complex advanced measurement approach; in the end, however, only four firms adopted this model. Rodney Bonnard, partner at Ernst & Young and a member of its Solvency II global task force, witnessed the challenges banks faced in the transition to Basel II. “When it actually came down to it, banks found operational risk to be challenging. Many of them reverted to the standardised approach, having struggled with data when attempting to use the advanced measurement approach.”
Many banks took a hybrid route and modelled the risks they could, but used the standard formula for operational risk. Bonnard believes many insurers may follow this method: “The hierarchical structure of the risk modules under Solvency II is designed to allow insurers to do this. There is a complete hierarchy of how the risks interact and, because operational risk is added on top, it would allow companies to just replace the whole basic SCR part with their own model and add operational risk on at the end.”
A further approach is to use technology designed for the management and capital allocation of operational risk. One such technology is aCCelerate, provided by RiskLogix. aCCelerate is Basel II compliant, but is also increasingly used by insurers.
Asked how a technology can determine a hard figure for a soft risk, Blunden says, “Most organisations find it very difficult to put actual numbers against even the impact of the risk; most people [therefore] like to do a range or say, well, that’s a medium to high impact. Having made these qualitative assessments, because they are soft, we do then put numbers around those ranges and then we apply probabilistic modelling and build a probabilistic distribution.

“Like all modelling, it’s not the exact figure, but it’s in the right ball-park.”
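The kind of probabilistic modelling Blunden describes can be sketched along the following lines. Everything here is an illustrative assumption rather than aCCelerate’s actual method: a Poisson frequency, a lognormal severity back-fitted from a qualitative “medium to high” impact range, and a 99.5% annual-loss percentile, matching the Solvency II confidence level mentioned above.

```python
import math
import random

random.seed(1)

# Illustrative assumptions: event counts are Poisson(lam) per year and
# single-loss severities are lognormal, with parameters fitted so that a
# workshop range of 50,000-500,000 covers roughly 90% of losses.
lam = 2.0                       # assumed expected loss events per year
low, high = 50_000.0, 500_000.0 # assumed qualitative impact range

mu = (math.log(low) + math.log(high)) / 2              # centre on log scale
sigma = (math.log(high) - math.log(low)) / (2 * 1.645) # range as ~90% interval

def poisson(lam: float) -> int:
    """Knuth's method for sampling a Poisson count (fine for small lam)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

def annual_loss() -> float:
    """One simulated year: sum of a random number of lognormal losses."""
    n = poisson(lam)
    return sum(random.lognormvariate(mu, sigma) for _ in range(n))

# Monte Carlo: simulate many years and read off the 99.5th percentile.
sims = sorted(annual_loss() for _ in range(20_000))
var_995 = sims[int(0.995 * len(sims))]
```

The output is exactly the kind of figure Blunden cautions about: not precise, but, given defensible distributional assumptions, in the right ball-park.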
Lack of data
One obstacle for the banks was a lack of historical loss data. This is a pressing issue for the insurance industry today.
The ABI launched its database of member companies’ operational losses in 2005. The database is run by the Operational Risk Insurance Consortium (ORIC), in conjunction with software company SAS. ORIC’s data is largely UK-based, with only 24 member companies. Most are large, however, including Swiss Re, Allianz and RSA, and the consortium has this year been expanding outside the UK, with plans to reach 50 members by 2011.
Selvaggi believes the database offers the most viable way to approach operational risk: “[Lack of data] is a big problem, especially within Solvency II and the onus placed on insurers to seek internal model approval. So as long as there is not a consistent database that some people can use to model a historical event, [modelling extreme events] becomes very difficult to undertake.”
Just Retirement is a member of ORIC and Elks believes that external databases are useful not just to provide figures to plug into a model but also to help understand the risks and gauge what operational risks are out there: “There is a danger in being too internally focused with operational risk. An external database allows the risk function to ask challenging questions of the business: this happened somewhere else — could it happen here? Why are we different? Why are our controls so much better?”
But would a much more comprehensive database necessarily make any difference to the way operational risk is handled? Some in the industry remain sceptical. Even with all-inclusive, squeaky-clean data, the reasons for operational failures still differ markedly from case to case, so generalising from one particular firm’s experiences may be of limited use.
Furthermore, while purely operational failures happen infrequently, operational faults often exacerbate or trigger other types of failure. The problem then becomes one of classification: to what extent was that an operational failure? Different firms and different people will give different answers.
Research shows, for example, that insurance company failures that on the surface look to be due to market or insurance risks often originate within the company. The 2003 paper by Simon Ashby et al, Lessons about risk: analysing the causal chain of insurance company failure, says that “the root of most insurance company failures is management, and typically poor management”, including “misperceptions, misunderstandings and miscommunications”.
Pillar I vs Pillar II
Some believe that the most reliable way of mitigating operational risk is to understand and manage it, in other words addressing it under proposed pillar II of Solvency II. In this case, operational risk cannot be looked at in isolation. “The danger is that you get an operational risk silo approach whereas operational risk affects everything in the way a business is run,” notes Wilkinson. “In my mind it is much more around implementing proper risk management.”
Why then include a pillar I requirement at all if operational risk sits most naturally under pillar II? Even if the emphasis should be on sound overall management of risk, an allocation of capital is still prudent. What’s more, for firms that use an internal model, the very process of modelling deepens their understanding of operational risk. In this sense, pillar I and pillar II complement each other.
The Own Risk and Solvency Assessment (ORSA) that is part of pillar II, for example, will force insurers to examine whether the pillar I load that they have specified for operational risk is adequate.
“What the regulators are saying is that the standardised approach is not an easy option from a regulatory perspective because the big element is the risk governance regime, which still has to be in place regardless of whether the standardised approach or the internal model approach has been adopted,” Wilkinson explains. Delhaise agrees: “Over time, pillar II will become stronger than pillar I for operational risk.”
First published by Insurance Risk & Capital, January 2009.