Loss data – Driving business value from operational risk data

In most financial services organizations, operational risk data is underused. Vast amounts of operational risk data – including operational risk loss event data – are collected but never transformed into meaningful reports for key stakeholders. As a result, the business, senior management, and the board often question just how much real business value op risk delivers. Applying new tools such as Big Data analytics, artificial intelligence (AI), or machine learning (ML) to this situation isn’t likely to change it much.

This is the third blog in a four-part series exploring how operational risk teams can deliver far more business value to their stakeholders from the data they already have. The first blog looked at how risk and control self-assessment (RCSA) data can be used to better understand the effectiveness of the control environment. The second blog discussed how different types of key risk indicators (KRIs) serve different purposes – with only some being truly predictive. This third blog looks at how elements of loss data collection can provide important information to boards and senior management teams.

Focusing on three data points

Operational risk teams can get lost in the blizzard of data points associated with loss event data collection. However, firms often don’t realize that for every loss event they already collect three important pieces of data:

• The date the loss event occurred
• The date the firm detected the event had happened
• The date the investigation into the event was closed

These three data points can reveal quite a lot about the robustness of a firm’s risk management framework if analysed correctly.

That’s because, with these three data points, operational risk executives can report on:

• Quality of detective controls – calculated as the number of days between when a risk event actually occurred and when it was discovered.
• Quality of corrective controls – calculated as the number of days between when a risk event was discovered and when the investigation into it was closed.

Detective and corrective control indicators tell organizations how robust their controls are at reducing the impact of events. Many organizations do not realize they can already produce this analysis. Operational risk teams can break the indicators down by risk type to see how well reality matches the goals they are trying to achieve.
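As a minimal sketch of what this calculation might look like in practice – the column names and figures here are illustrative assumptions, not a prescribed schema – the two indicators can be derived directly from the three dates and then broken down by risk type:

```python
import pandas as pd

# Hypothetical loss event records; column names are assumptions,
# not a prescribed schema.
events = pd.DataFrame({
    "risk_type": ["External fraud", "External fraud", "Process error"],
    "occurred": pd.to_datetime(["2024-01-02", "2024-02-10", "2024-03-01"]),
    "detected": pd.to_datetime(["2024-01-05", "2024-02-11", "2024-03-15"]),
    "closed":   pd.to_datetime(["2024-01-20", "2024-03-01", "2024-04-10"]),
})

# Quality of detective controls: days from occurrence to detection.
events["detection_lag_days"] = (events["detected"] - events["occurred"]).dt.days

# Quality of corrective controls: days from detection to closure.
events["correction_lag_days"] = (events["closed"] - events["detected"]).dt.days

# Break the indicators down by risk type.
print(events.groupby("risk_type")[["detection_lag_days", "correction_lag_days"]].mean())
```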

Connecting to risk appetite

Firms may want to consider setting appetites for both detection and correction within their risk appetite document. For example, a firm could set a detection appetite of three days, so that all risk events should be discovered and reported within three days of occurrence. If an event is not reported within a set period of time – for example, a week or ten days – then the failure of detection should become a risk event in its own right, and the firm could consider making such failures risk committee agenda items. The risk committee should discuss proposals for how detection could be improved, and ways in which investment in controls should be altered.
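Continuing the sketch above – the three-day appetite and ten-day escalation threshold are the example figures from this section, and the field names remain illustrative assumptions – flagging breaches is a simple filter:

```python
import pandas as pd

DETECTION_APPETITE_DAYS = 3     # appetite: detect events within three days
ESCALATION_THRESHOLD_DAYS = 10  # beyond this, the detection failure is escalated

# Detection lags (in days) for a handful of hypothetical events.
events = pd.DataFrame({
    "event_id": ["E1", "E2", "E3"],
    "detection_lag_days": [2, 6, 14],
})

events["within_appetite"] = events["detection_lag_days"] <= DETECTION_APPETITE_DAYS

# Detection failures beyond the threshold become risk events in their own
# right and candidates for the risk committee agenda.
escalations = events[events["detection_lag_days"] > ESCALATION_THRESHOLD_DAYS]
print(f"{events['within_appetite'].mean():.0%} of events detected within appetite")
print(escalations[["event_id", "detection_lag_days"]])
```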

In conclusion, operational risk teams using GRC software can provide useful information to stakeholders across the business by performing simple calculations based on the data they already collect. This analysis can help boards and senior management understand where more investment may be needed to improve the control environment or business processes, in order to reduce the impact of risk events if they occur. This kind of useful analysis does not require AI or ML – a good risk management software solution should be able to provide it quickly and easily.

The final blog in this four-part series will examine how firms can undertake a useful cost-benefit analysis of their control environment using data they already have. This will bring together elements of the first three blogs to show how good quality analysis can provide the business with very useful information for decision-making.

To learn more about how RiskLogix can help your firm make the most of the loss data it already collects, please contact us.

