
The rogue trader who cost UBS $2 billion adds to a long list of case studies that only begins with Barings, Sumitomo, Allied Irish Bank, and Kidder Peabody and never seems to end. When we review high-profile risk failures, a common theme emerges: although many involved a chain of multiple breakdowns, most required some sort of operational risk failure, such as the rogue trader. Before we even know enough to cast specific blame, we can confidently point to some sort of operational exposure. Operational risk is less mature than market or credit risk, but it rivals both in importance.

How do we define operational risk?

Operational risk defies a single, concrete consensus definition. It is tempting to define operational risk as everything that is not market or credit risk. This is a good place to start, but we generally go further and exclude business risks, including business strategy and direct environmental factors such as competition and technology. We also exclude reputational risk (FRM candidates tend to memorize that operational risk includes legal risk but excludes strategic and reputational risk).

Under this exclusion-based approach, we can reduce the definition to: operational risks are the non-business financial risks other than market (including liquidity) and credit risks.

In the Financial Risk Manager (FRM) curriculum, we rely on Basel’s definition: operational risk is the risk of loss resulting from inadequate or failed internal processes, people and systems, or from external events.

The Basel definition allows us to parse operational risk broadly into:

  • Internal risks, including people (here is our rogue trader!), processes and systems; or
  • External risks; e.g., natural disaster, theft

The Basel Accord uses a business line/event-type (BL/ET) matrix for locating losses in cells. Columns are business lines; e.g., asset management. Rows are Basel’s seven classifications of event types. Below is an example of a firm that has mapped Basel’s seven types to the firm’s more natural five types:

[Figure: example mapping of a firm's five internal loss event types to Basel's seven event types]

As illustrated above, a bank can map a reduced number of its own internal loss event types onto Basel’s seven event types; e.g., the bank's "fraud" category might be defined to encompass both internal and external fraud. Data can then be collected, internally or externally, according to the cells in the matrix: we are asking and answering questions of the form, what is the typical frequency or severity of loss for a given event type in a given business line?

The cells in the matrix, known as units of measure (UOM), are the lowest pre-aggregated units at which the bank specifies the two types of loss distributions: frequency and severity. In the most common approach (the loss distribution approach, LDA; more on this later), it is critical to characterize both distribution types.

The first is the frequency of losses during a period, typically one year. The second is the magnitude (severity) of a loss in dollar terms if a loss occurs; in statistical terms, the loss severity conditional on a loss occurring.
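To make the UOM idea concrete, here is a minimal sketch (in Python, with hypothetical business lines, event types, and parameters) of how a bank might store a frequency and severity specification for each cell:

```python
# Hypothetical units of measure (UOM): each (business line, event type) cell carries
# a frequency distribution (annual loss count) and a severity distribution
# (loss size, conditional on a loss occurring). All names and parameters are illustrative.
uom_cells = {
    ("Retail Banking", "External Fraud"): {
        "frequency": {"dist": "poisson", "lam": 12.0},               # ~12 losses expected per year
        "severity": {"dist": "lognormal", "mu": 9.0, "sigma": 1.5},  # dollar loss per event
    },
    ("Asset Management", "Clients, Products & Business Practices"): {
        "frequency": {"dist": "poisson", "lam": 3.0},
        "severity": {"dist": "lognormal", "mu": 11.0, "sigma": 2.0},
    },
}

for (business_line, event_type), spec in uom_cells.items():
    print(f"{business_line} / {event_type}: {spec['frequency']} | {spec['severity']}")
```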


A very simple illustration of convolving frequency and severity:

Imagine we expect that, for our very small bank, next year's frequency of operational losses can be described by a discrete Poisson distribution with a mean of five losses. We then generate a random number from 0 to 1.0, and 0.19 is returned, which corresponds to a simulation of three losses, below the mean; i.e., in Excel, =POISSON.DIST(2, 5, TRUE) = 12.5% and =POISSON.DIST(3, 5, TRUE) = 26.5%, so a random input of 19% into the inverse CDF corresponds to X = 3. The particulars of this inverse transform method are not currently vital; just note that, given a cumulative distribution function (CDF) which can be inverted, we can use a random number on [0, 1.0] to produce a random value of X.
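Here is a minimal sketch of that inverse-transform step in Python (scipy's poisson.ppf plays the role of the inverse CDF; the uniform draw of 0.19 and the mean of 5 are from the example above):

```python
from scipy.stats import poisson

mean_frequency = 5   # expected number of operational losses next year
u = 0.19             # uniform random draw on [0, 1]

print(poisson.cdf(2, mean_frequency))   # ~0.125
print(poisson.cdf(3, mean_frequency))   # ~0.265

# Inverse transform: ppf returns the smallest k such that CDF(k) >= u.
# Because CDF(2) < 0.19 <= CDF(3), the simulated loss count is 3.
simulated_count = int(poisson.ppf(u, mean_frequency))
print(simulated_count)                  # 3
```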

Imagine we further assume a different, continuous distribution is used to simulate the severity of each loss. We will not bother to specify which distribution. Just assume our three random trials, one for each loss event, produce the following severities: $1.2 million, $0.4 million, and $0.7 million.

The simulated operational loss for the year is, therefore, the sum of $2.3 million: we randomly simulated three losses (the frequency) and summed the simulated severity of each loss.
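Putting frequency and severity together, here is a minimal Monte Carlo sketch of the loss distribution approach for a single unit of measure (the lognormal severity and all parameters are illustrative assumptions, not the figures from the example above):

```python
import numpy as np

rng = np.random.default_rng(seed=42)

mean_frequency = 5                 # Poisson mean: expected loss events per year
sev_mu, sev_sigma = 13.0, 1.0      # hypothetical lognormal severity parameters (log-dollars)
n_years = 100_000                  # number of simulated years

annual_losses = np.empty(n_years)
for i in range(n_years):
    n_events = rng.poisson(mean_frequency)                        # frequency draw
    severities = rng.lognormal(sev_mu, sev_sigma, size=n_events)  # severity draw per event
    annual_losses[i] = severities.sum()                           # convolution: total annual loss

print("Mean annual loss:       ", round(annual_losses.mean()))
print("99.9th percentile (VaR):", round(np.quantile(annual_losses, 0.999)))
```

The 99.9th percentile is shown because Basel's advanced approaches calibrate operational risk capital to a one-year horizon at a 99.9% confidence level.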

What approaches do we use to measure and model operational risk?

In the FRM, we learn that historically there have been two broad approaches:

  • Top-down approaches, versus
  • Bottom-up approaches

Top-down approaches are convenient but insufficient for risk managers who seek to mitigate or prevent severe operational losses. They analyze operational risk at the firm level, so we might think of them as approaches that an external debt or equity analyst could employ with publicly available data; but they are too superficial precisely because they do not require visibility into the firm’s specific operations. For example, an analyst could run a linear regression of a company’s stock returns against several factors and attribute the residual (everything else) to operational exposures. But here lies an important difference between market risk and operational risk: market risk can depend heavily on external variables like interest rates, whereas operational exposures tend to manifest as firm-specific events and develop as emergent properties of the firm. Top-down approaches fall short almost by definition!
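For illustration only, here is a minimal sketch of such a top-down regression on synthetic data (the factor set and the reading of residual variance as an operational-risk proxy are simplifying assumptions):

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Synthetic monthly returns: the stock is driven by a market factor and a rates factor,
# plus firm-specific noise standing in for operational (and other idiosyncratic) exposures.
n_months = 120
market = rng.normal(0.01, 0.04, n_months)
rates = rng.normal(0.00, 0.01, n_months)
stock = 0.9 * market - 0.5 * rates + rng.normal(0.0, 0.03, n_months)

# Ordinary least squares: regress the stock's returns on the factors.
X = np.column_stack([np.ones(n_months), market, rates])
beta, *_ = np.linalg.lstsq(X, stock, rcond=None)
residuals = stock - X @ beta

# The top-down (and admittedly crude) attribution: whatever the factors cannot explain
# is treated as non-market, firm-specific exposure, of which operational risk is one part.
print("Estimated factor betas:", beta[1:])
print("Residual volatility:   ", residuals.std(ddof=X.shape[1]))
```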

Bottom-up approaches, on the other hand, are relevant to a risk manager whose job includes the control, mitigation, prevention, and cost/benefit optimization of operational risks. (As with other risks, shareholders do not expect the elimination of risk; such a thing is too expensive and, given that the firm must assume business risks in pursuit of expected returns, impossible.) Linda Allen, a long-time assignment in the FRM, parses bottom-up approaches into:

  • Process approaches, or
  • Actuarial approaches

While they are not mutually exclusive (as process approach outputs can be actuarial inputs), process approaches include scorecards and event trees. Think of on-site consultants interviewing employees and mapping processes.

This series is instead concerned with actuarial approaches, and therefore with the leading methods used to model operational risk. Actuarial approaches need people and depend on processes, but they are nowhere without quantifiable data. More specifically, we tend to be concerned with the so-called loss distribution approach (LDA), which is common in sophisticated practice and is used to qualify under Basel’s internal (advanced measurement) approaches to OpRisk.

Two starter ideas about loss distributions

As we learn to model operational risk, we quickly realize this is all about distributions. Jorion draws a distinction between valuation and risk measurement, and this distinction helps to show why operational risk modeling is more mathematically rigorous than, say, equity valuation or even CDS valuation:

  • Valuation (What is the intrinsic value of a company’s stock? What is the value of a credit default swap?) requires precision in today’s expected value; that is, high precision in the first moment of a current distribution.
  • Risk measurement (e.g., value at risk, expected shortfall), in contrast, is satisfied with an approximation of the future tail of the distribution; that is, forward-looking approximations of the third and fourth distributional moments.

This leads to our first basic idea: the tail is not the body. The extreme loss tail is harder to characterize than the mean. The tail contains the low-frequency, high-severity (LFHS) losses. We will forever be dealing with a recurring problem: we always want more data than we can get. The firm’s own data is relevant but does not contain enough once-in-a-lifetime events. Peer and external databases give us plenty more data but invariably plague us with comparability problems. Still, we will obsess over the tail, and we will look for methods that help us characterize the tail as our special concern. This leads to extreme value theory (EVT).
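As a preview, here is a minimal sketch of the peaks-over-threshold flavor of EVT, fitting a generalized Pareto distribution to synthetic losses above a high threshold (the data, threshold choice, and parameters are all illustrative assumptions):

```python
import numpy as np
from scipy.stats import genpareto, lognorm

rng = np.random.default_rng(seed=1)

# Synthetic loss history standing in for an internal or external loss database.
losses = lognorm(s=1.2, scale=50_000).rvs(size=5_000, random_state=rng)

# Peaks over threshold: keep only exceedances above a high threshold (the 95th percentile here).
threshold = np.quantile(losses, 0.95)
exceedances = losses[losses > threshold] - threshold

# Fit a generalized Pareto distribution to the exceedances (location fixed at zero).
shape, loc, scale = genpareto.fit(exceedances, floc=0)
print("Threshold:      ", round(threshold))
print("GPD shape (xi): ", round(shape, 3))
print("GPD scale:      ", round(scale))

# Tail estimate: 5% of losses exceed the threshold, so the 99.9% loss quantile corresponds
# to the 98% quantile of the exceedance distribution (0.001 / 0.05 = 0.02 left in the tail).
print("Approx. 99.9% loss:", round(threshold + genpareto.ppf(0.98, shape, loc=0, scale=scale)))
```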

 


The second basic idea, already introduced above, is that we distinguish between the frequency of losses, which is a countable (discrete) variable, and the severity of losses, which tends to be measured as a continuous variable. For example:

  • A frequency question: How many loss events do we expect to occur next year? This is a discrete random variable that could be characterized by a Poisson, binomial, or negative binomial distribution, among others.
  • A severity question: Conditional on the occurrence of a loss, what is the severity of the loss? We tend to use continuous distributions to describe this random variable; for example, lognormal, exponential, gamma, or beta (a sketch of both types follows this list).
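All of these candidate distributions are available in scipy.stats; here is a minimal sketch with illustrative parameters:

```python
from scipy.stats import poisson, nbinom, lognorm, gamma

# Frequency candidates: discrete counts of loss events per year (illustrative parameters).
print(poisson(mu=5).rvs(size=10, random_state=0))        # Poisson frequency
print(nbinom(n=4, p=0.4).rvs(size=10, random_state=0))   # negative binomial frequency

# Severity candidates: continuous loss size, conditional on a loss (illustrative parameters).
print(lognorm(s=1.5, scale=250_000).rvs(size=3, random_state=0))  # lognormal severity
print(gamma(a=2.0, scale=100_000).rvs(size=3, random_state=0))    # gamma severity
```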

Do you want to learn more about the practical modeling of operational risk, including the real-world application of Excel to OpRisk analytics? Edu Pristine and bionicturtle.com are proud to refer you to the new Operational Risk and Credit Risk Training School. Click here for more details.

To guarantee your success in passing the FRM exam in one go, follow the link to FRM Concept Checkers for many more concepts!

 
