Loss data – Driving business value from operational risk data


In most financial services organizations, operational risk data is underused. Vast amounts of operational risk data – including operational risk loss event data – are collected but never transformed into meaningful reports for key stakeholders. As a result, the business, senior management, and the board often question how much real business value op risk delivers. Applying new tools such as Big Data analytics, artificial intelligence (AI), or machine learning (ML) to this situation isn't likely to change it much.

This is the third in a four-part series of blogs that explores how operational risk teams can deliver far more business value to their stakeholders from the data they already have. The first blog looked at how risk and control self-assessment (RCSA) data can be used to better understand the effectiveness of the control environment. The second blog discussed how different types of key risk indicators (KRIs) serve different purposes – with only some being truly predictive. This third blog looks at how elements of loss data collection can provide important information to boards and senior management teams.

Focusing in on three data points

Operational risk teams can get lost in the blizzard of data points associated with loss event data collection. Yet firms often overlook the fact that, for every loss event, they collect three important pieces of data:

• The date the loss event occurred
• The date the firm detected the event had happened
• The date the investigation into the event was closed

These three data points can reveal quite a lot about the robustness of a firm’s risk management framework if analysed correctly.

That’s because, with these three data points, operational risk executives can report on:

• Quality of detective controls – measured as the number of days between when a risk event actually happened and when it was discovered.
• Quality of corrective controls – measured as the number of days between when a risk event was discovered and when its investigation was closed.

Detective and corrective control indicators tell organizations about the robustness of the controls they have in place to reduce the impact of events. Many organizations do not realize that they have the ability to create this informative analysis. Operational risk teams can break this down by risk type to see how well reality matches with the goals they are trying to achieve.
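The two indicators above amount to simple date arithmetic on fields most loss databases already hold. A minimal sketch of the calculation is shown below; the function name, field names, and example dates are illustrative assumptions, not taken from any particular system:

```python
from datetime import date

def control_quality(occurred: date, detected: date, closed: date) -> dict:
    """Derive the two control-quality indicators from the three dates
    a firm already collects for each loss event."""
    return {
        # Days from occurrence to discovery: quality of detective controls
        "detection_days": (detected - occurred).days,
        # Days from discovery to closure: quality of corrective controls
        "correction_days": (closed - detected).days,
    }

# Hypothetical loss event: occurred 1 March, detected 4 March, closed 20 March
metrics = control_quality(date(2024, 3, 1), date(2024, 3, 4), date(2024, 3, 20))
print(metrics)  # {'detection_days': 3, 'correction_days': 16}
```

Grouping these two numbers by risk type (for example, averaging `detection_days` per category) gives the breakdown the paragraph above describes.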


Connecting to risk appetite

Firms may want to consider setting appetites for both detection and correction within their risk appetite document. For example, a firm could set a detection appetite of three days – so all risk events should be discovered and reported within three days of occurrence. If an event is not reported within a set period of time – for example, a week or ten days – then the failure of detection should become a risk event in its own right. If detection fails for longer than this period, the firm could consider making such failures risk committee agenda items. The risk committee should discuss proposals for how detection could be improved, and ways in which investment in controls should be altered.
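The escalation logic described above could be sketched as a simple classification rule. The thresholds (three and ten days) follow the example in the text, while the function and label names are illustrative assumptions:

```python
from datetime import date

DETECTION_APPETITE_DAYS = 3     # example appetite: detect within three days
ESCALATION_THRESHOLD_DAYS = 10  # example: beyond this, escalate to the risk committee

def assess_detection(occurred: date, detected: date) -> str:
    """Classify a loss event's detection lag against the stated appetite."""
    lag = (detected - occurred).days
    if lag <= DETECTION_APPETITE_DAYS:
        return "within appetite"
    if lag <= ESCALATION_THRESHOLD_DAYS:
        # Appetite breached: record the failure of detection as a risk event
        return "breach - log failure of detection as a risk event"
    # Persistent failure: put it in front of the risk committee
    return "escalate - risk committee agenda item"

print(assess_detection(date(2024, 6, 1), date(2024, 6, 2)))   # within appetite
print(assess_detection(date(2024, 6, 1), date(2024, 6, 15)))  # escalate - risk committee agenda item
```

In practice these thresholds would come from the firm's own risk appetite document rather than being hard-coded.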

In conclusion, operational risk teams using GRC software can provide useful information to stakeholders across the business by performing simple calculations on the data they already collect. This analysis can help boards and senior management understand where more investment may be needed to improve the control environment or business processes, in order to reduce the impact of risk events when they occur. This kind of useful analysis does not require AI or ML – a good risk management software solution should be able to provide it quickly and easily.

The final blog in this four-part series will examine how firms can undertake a useful cost-benefit analysis of their control environment using data they already have. This will bring together elements of the first three blogs to show how good quality analysis can provide the business with very useful information for decision-making.

To learn more about how RiskLogix can help your firm make the most of the loss data it already collects, please contact us.

