Granularity

To reiterate: risk consists of two components - the probability that a negative or harmful event will occur, and the cost or amount of loss or expense that will result from the event. For example, when an insurance agent prices your car policy, one of the factors assessed is the value of your car (the impact or criticality of a loss), combined with your age, driving record, and home address (elements that predict the probability of a loss). If you drive a Rolls Royce (high criticality) and live in a zip code with a disproportionately high rate of stolen cars (high probability), you can expect insurance premiums equivalent to the national debt.
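As a minimal sketch of how the two components combine, the Python snippet below multiplies an event's probability by its cost to produce an expected loss. The figures are hypothetical and are only meant to mirror the insurance example above.

```python
# Toy illustration (not from the original article) of the two risk components:
# expected loss = probability of the event * cost if it occurs.

def expected_loss(probability: float, impact: float) -> float:
    """Annualized expected loss given an event probability and its cost."""
    return probability * impact

# Hypothetical numbers: a 5% annual chance of theft on a $300,000 car
# carries a far larger expected loss than a 1% chance on a $30,000 car.
print(expected_loss(0.05, 300_000))  # 15000.0
print(expected_loss(0.01, 30_000))   #   300.0
```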

Applying these two components to complex business processes is far more difficult than pricing an auto policy. Unlike the insurance industry, where there is extensive actuarial data on prior losses along with sophisticated algorithms for analyzing that data, operational risk offers little historical loss data and few, if any, objective methods for determining the value of the related business processes.

This leads to a crucial question: how do we compensate for the lack of historical data and complex risk algorithms when calculating operational risk? If we cannot predict the likelihood of a hacker breaching a firewall based on data from prior incidents, what data can we use as a substitute?

One approach, and the one endorsed on this site, is to break down the two elements of Criticality and Probability into successively finer levels of detail, to the point where the elements take on a reasonable level of objectivity. Granularity can compensate for a lack of hard data on losses. For example, it is nearly impossible to answer the question, "What are the odds a firewall will be breached by a hacker?" The question is too vague. But if we add more detail and break down the specific components of the risk, the problem becomes clearer. What if the firewall sits in front of a major Internet banking application that is a tempting target for thieves? What if the firewall has not been patched for several vulnerabilities that were announced six months ago and are commonly known in the hacker community? What if the computer system behind the firewall has no intrusion detection system to identify unauthorized activity? As we break the problem down into finer detail, a clearer risk profile emerges. We can begin to see significant impacts and possibilities. We can also see how specific vulnerabilities may affect the risk level.
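The sketch below illustrates the idea in Python. The factor names and the rating thresholds are illustrative assumptions, not the model described on this site; the point is only that breaking the vague question "will the firewall be breached?" into concrete, observable factors yields something that can actually be scored.

```python
# Illustrative only: the factors and thresholds are assumptions for this sketch.
# Each entry is a concrete, observable condition from the firewall example.

firewall_factors = {
    "fronts_internet_banking_app": True,      # tempting target for thieves
    "unpatched_known_vulnerabilities": True,  # flaws public for six months
    "no_intrusion_detection": True,           # unauthorized activity goes unseen
}

def likelihood_rating(factors: dict) -> str:
    """Map the count of adverse factors to a coarse likelihood rating."""
    adverse = sum(factors.values())
    if adverse >= 3:
        return "High"
    if adverse == 2:
        return "Medium"
    return "Low"

print(likelihood_rating(firewall_factors))  # High
```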

Note that this site advocates a compromise between quantitative and qualitative assessment factors. At this point in the development of the discipline of operational risk, there is not enough universally accepted objective data to identify the probability of loss events. Conversely, there is an overabundance of subjective, "gut feel" data used as the basis for risk decisions. While breaking risk down into specific, detailed components yields a better understanding of the issues, there needs to be a framework to support this process of analysis that will ensure the result is repeatable and follows a commonly agreed form of assessment. Put another way, if we are going to break risk down into ever finer components, we need a method to reassemble those components into an overall assessment.

The model to the right represents the method endorsed on this site for conducting assessments of operational risk (including information security). As we will see on the following pages, the model is based on the concept of breaking down Cost and Likelihood/Probability into their detailed elements. By placing a risk value on each of the detailed elements, we can then consolidate the values into an overall appraisal of risk.

Simple Risk Model
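A hedged sketch of that consolidation step follows. The 1-5 scales, the example elements, the multiplicative element score, and the averaging roll-up are all assumptions made for illustration; they are not the site's actual model, which is developed on the following pages.

```python
# Sketch only: score each detailed element for Cost (criticality) and
# Likelihood, multiply to get an element-level risk value, then consolidate.
# The scales, elements, and averaging rule below are illustrative assumptions.

elements = [
    # (element name, criticality 1-5, likelihood 1-5)
    ("Internet banking application", 5, 4),
    ("Internal HR intranet",         2, 2),
    ("Public marketing site",        1, 3),
]

element_risks = {name: crit * like for name, crit, like in elements}
overall = sum(element_risks.values()) / len(element_risks)

for name, score in element_risks.items():
    print(f"{name}: {score}")
print(f"Overall risk score: {overall:.1f}")
```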