The rogue trader who cost UBS $2 billion adds to a long list of case studies that only begins with Barings, Sumitomo, Allied Irish Bank, and Kidder Peabody and never seems to end. When we review high-profile risk failures, a common theme emerges: although many involved a chain of multiple breakdowns, most **high-profile risk failures have required some sort of operational risk failure** like the rogue trader. Before we even know enough to cast specific blame, we can confidently point to some sort of operational exposure. Operational risk is less mature than market or credit risk, but rivals both in importance.

Operational risk defies a single consensus, concrete definition. It is tempting to want operational risk to equal everything that is *not* market or credit risk. This is a good place to start, but we generally exclude business risks, including business strategy and direct environmental factors like competition and technology. We also further exclude reputational risk (FRM candidates tend to memorize that operational risk *includes* legal risk but *excludes* strategy and reputation). Under this exclusion-based approach, we can reduce the definition to:

> Operational risks are the non-business financial risks other than market (including liquidity) and credit risks.

In the Financial Risk Manager (FRM) curriculum, we rely on Basel's definition:

> Operational risk is the risk of loss resulting from inadequate or failed internal processes, people and systems, or from external events.

The Basel definition allows us to parse operational risk broadly into:

- *Internal* risks, including people (here is our rogue trader!), processes, and systems; or
- *External* risks; e.g., natural disaster, theft

The Basel Accord uses a business line/event-type (BL/ET) matrix to locate losses in cells. Columns are business lines; e.g., asset management. Rows are Basel's seven classifications of event types. Below is an example of a firm that has mapped Basel's seven types to the firm's more natural five types:

As illustrated above, a bank can map a reduced number of its internal loss event types to Basel's seven event types; e.g., fraud might be defined to encompass both internal and external fraud. Data can be collected, internally or externally, according to the cells in the matrix: we are then asking and answering questions in the form, *what is the typical frequency or severity of loss for a certain event type in a certain business line?*

The cells in the matrix, known as units of measure (UOM), are the lowest pre-aggregated units at which the bank specifies the two types of loss distributions: *frequency* and *severity*. In the most common approach (the loss distribution approach, LDA; more on this later), it is critical to characterize these two distribution types. The first is the frequency of losses during a period, typically one year. The second is the magnitude (severity) of a loss in dollar terms if a loss is experienced; in statistical terms, the loss severity conditional on a loss occurring.
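To make the UOM idea concrete, here is a minimal sketch that bins individual loss records into BL/ET cells and reports each cell's observed frequency and mean severity. The business lines, event types, and amounts are entirely hypothetical:

```python
from collections import defaultdict

# Hypothetical internal loss records: (business line, event type, loss in $)
losses = [
    ("Asset Management", "Fraud",   120_000),
    ("Asset Management", "Fraud",    45_000),
    ("Retail Banking",   "Fraud",   300_000),
    ("Retail Banking",   "Systems",  80_000),
    ("Asset Management", "Systems",  15_000),
]

# Each (business line, event type) cell of the matrix is a unit of measure (UOM)
cells = defaultdict(list)
for bl, et, amount in losses:
    cells[(bl, et)].append(amount)

for (bl, et), amounts in sorted(cells.items()):
    frequency = len(amounts)                  # observed loss count in the cell
    mean_severity = sum(amounts) / frequency  # average loss size, given a loss
    print(f"{bl} / {et}: frequency={frequency}, mean severity=${mean_severity:,.0f}")
```

In practice each cell would hold enough data to fit full frequency and severity distributions, not just point averages.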

## A very simple illustration of convoluting frequency & severity

Imagine we expect that for our very small bank, next year the *frequency* of operational losses can be described by a discrete Poisson distribution with a mean of five losses. We then generate a random number from 0 to 1.0 and 0.19 is returned, which corresponds to a simulation of three losses, below the mean; i.e., =POISSON.DIST(X = 2, mean = 5, true = CDF) = 12.5% and POISSON.DIST(X = 3, mean = 5, true = CDF) = 26.5%, so 19% as a random input into the inverse CDF corresponds to X of 3. The particulars of this *inverse transform method* are not currently vital; just please note that given a cumulative distribution function (CDF) which can be inverted, we can use a random number [0, 1.0] to produce a random (X) value.

Imagine we further assume a *different, continuous* distribution is used to simulate the *severity* of each loss. We will not bother to specify which distribution. Just assume our three random trials, one for each loss event, produce the following severities: $1.2 million, $0.4 million, and $0.7 million. The simulated operational losses for the year are, therefore, the sum of $2.3 million because we randomly simulated three losses (the frequency) and summed the simulated severity of each loss.
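The worked example above can be reproduced in a few lines of Python. The function below is a plain implementation of the inverse transform method for a Poisson distribution; the uniform draw of 0.19, the mean of 5, and the three severities are taken directly from the text:

```python
import math

def poisson_inverse_cdf(u, mean):
    """Inverse transform: smallest k whose Poisson CDF is >= u."""
    pmf = math.exp(-mean)   # P(X = 0)
    k, cdf = 0, pmf
    while cdf < u:
        k += 1
        pmf *= mean / k     # P(X = k) computed from P(X = k - 1)
        cdf += pmf
    return k

# The uniform draw of 0.19 maps to 3 losses, since
# CDF(2) ~= 12.5% < 19% <= CDF(3) ~= 26.5%
n_losses = poisson_inverse_cdf(0.19, mean=5)

# The three simulated severities from the text, in $ millions
severities = [1.2, 0.4, 0.7]
annual_loss = sum(severities[:n_losses])

print(n_losses, annual_loss)   # 3 losses totaling $2.3 million
```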

## What approaches do we use to measure and model operational risk?

In the FRM, we learn that historically there were two broad approaches:

- Top-down approaches, versus
- Bottom-up approaches

*Top-down approaches* are convenient but *insufficient for risk managers* who seek to mitigate or prevent severe operational losses. They analyze operational risk at the firm level. So we might think of them as approaches that an external debt or equity analyst could employ with publicly available data, but they are too superficial because they do not require visibility into the firm's specific operations. For example, an analyst could regress (via linear regression) a company's stock returns against several factors and attribute the residual (everything else) to operational exposures. But here is an important difference between market risk and operational risk: market risk can depend highly on external variables like interest rates, whereas operational exposures tend to manifest as *firm-specific events* and develop as emergent properties of the firm; top-down approaches fall short almost by definition!
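As a sketch of that top-down regression idea, using simulated, entirely hypothetical data and factor loadings, an analyst might regress stock returns on a few market-wide factors and label whatever the factors cannot explain as "operational":

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical monthly data: stock returns driven by two market-wide factors
n = 120
factors = rng.normal(0.0, 0.02, size=(n, 2))   # e.g., market and rate factors
noise = rng.normal(0.0, 0.01, size=n)          # everything the factors miss
returns = 0.001 + factors @ np.array([0.9, -0.4]) + noise

# Ordinary least squares regression of returns on the factors (with intercept)
X = np.column_stack([np.ones(n), factors])
beta, *_ = np.linalg.lstsq(X, returns, rcond=None)
residuals = returns - X @ beta

# A top-down analyst would attribute residual volatility to "operational"
# exposure, but the residual is really just whatever the chosen factors
# fail to explain -- hence the superficiality of the approach.
print("residual volatility:", residuals.std())
```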

*Bottom-up approaches*, on the other hand, are relevant to a risk manager whose job includes the control, mitigation, prevention, and cost/benefit optimization of operational risks. (As with other risks, shareholders do not expect the elimination of risk; that would be too expensive, and some risk must be retained in the assumption of expected business risks.) Linda Allen, a long-time assigned author in the FRM, parses bottom-up approaches into:

- Process approaches, or
- Actuarial approaches
While they are not mutually exclusive (process approach outputs can be actuarial inputs), *process approaches* include scorecards and event trees. Think of on-site consultants interviewing employees and mapping processes.

This series is concerned instead with *actuarial approaches*, and therefore the leading methods used to model operational risk. Actuarial approaches need people, and depend on processes, but they are nowhere without quantifiable data. Even more specifically, we tend to be concerned with so-called loss distribution approaches (LDA), which are common in sophisticated practice and used to qualify under Basel's internal (advanced) approaches to operational risk.

## Two starter ideas about loss distributions

As we learn to model operational risk, we quickly realize this is all about distributions. Jorion draws a distinction between valuation and risk measurement, and this distinction helps to show why operational risk modeling is more mathematically rigorous than, say, equity valuation or even CDS valuation:

- Valuation (what is the intrinsic value of a company's stock? what is the value of a credit default swap?) requires precision in today's expected value; that is, *high precision in the first moment of a current distribution*.
- Risk measurement (e.g., value at risk, expected shortfall), in contrast, is satisfied with an approximation in the future tail of the distribution; that is, *forward-looking approximations in the third and fourth distributional moments*.

This leads to our *first basic idea: the tail is not the body*. The extreme loss tail is harder to estimate than the mean. The tail contains the low-frequency, high-severity (LFHS) losses. We will forever be dealing with a recurring problem: we always want more data than we can get. The firm's own data is relevant but does not offer enough once-in-a-lifetime events. Peer and external databases give us plenty more data but invariably plague us with comparability problems. Still, we will obsess over the tail, and we will look for methods that help us characterize the tail as our special concern. This leads to extreme value theory (EVT).

The *second basic idea*, already introduced above, is that we distinguish between the *frequency* of losses, which is a countable (discrete) variable, and the *severity* of losses, which tends to want measurement as a *continuous* variable. For example:

- A *frequency* question: How many loss events do we expect to occur next year? This is a *discrete random variable* that could be described (characterized) by a Poisson, binomial, or negative binomial distribution, among others.
- A *severity* question: Conditional on the occurrence of a loss, what is the severity of the loss? We tend to use *continuous* distributions to describe this random variable; for example, lognormal, exponential, gamma, or beta.
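Putting the two ideas together, here is a Monte Carlo sketch of the loss distribution approach: a Poisson frequency convolved with a lognormal severity, with all parameter values chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed parameters, for illustration only
LAMBDA = 5             # expected loss count per year (Poisson frequency)
MU, SIGMA = 13.0, 1.2  # lognormal severity parameters (log of dollar loss)
N_YEARS = 100_000      # number of simulated years

annual_losses = np.empty(N_YEARS)
counts = rng.poisson(LAMBDA, size=N_YEARS)   # one frequency draw per year
for i, n in enumerate(counts):
    # severity draws, conditional on n losses occurring that year
    annual_losses[i] = rng.lognormal(MU, SIGMA, size=n).sum()

mean_loss = annual_losses.mean()
var_99_9 = np.quantile(annual_losses, 0.999)  # Basel-style 99.9% quantile
print(f"mean annual loss ${mean_loss:,.0f}; 99.9% quantile ${var_99_9:,.0f}")
```

Note how the heavy-tailed severity drags the 99.9% quantile far above the mean: the tail, not the body, drives the capital number.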
