
Definitions of risk

The following series of e-mail exchanges took place on the RISKANAL mailing list.

Being a civil engineer, I have now studied "risk" and "vulnerability" from several points of view. I came across the UNESCO definition of risk, that is:

risk = hazard x vulnerability x potential loss

I understand that vulnerability in this equation reflects the performance of an object under an impact with respect to damage. Reading publications, I became aware that not all authors respect this definition, but tend to call vulnerability what is, for example, the damage cost. (In my opinion this corresponds to the part "vulnerability x potential loss" in the above equation.)

Are there other recognised definitions of vulnerability besides the UNESCO definition?

Kaspar [email protected]

Interesting question from Peter Kaspar. He cites a UNESCO "definition" of risk:

risk = hazard x vulnerability x potential loss

What does this mean? What are the units of "hazard", "vulnerability" and "potential loss", and when these three disparate items are multiplied together, as this formula indicates, what dimensions does "risk" have?

I thought that "risk" was some function of "likelihood" and "consequences" but doubt that it is the product of these two.

Stuart C [email protected]

Count me as one of those who does not respect the UNESCO definition of risk.

The formula

risk = hazard x vulnerability x potential loss


seems to assign two terms to the concept of a potential loss and leaves out full consideration of the frequency or probability of a challenge to a system.

A more standard definition of risk (in USDOE and USNRC space) is

scenario risk = likelihood x consequences

where "likelihood" is a measure of the frequency or probability of occurrence of a particular scenario (sequence of events) and "consequences" is some measure of the outcome of that scenario.

"Hazard" is generally used to express some aspect of your system that is capable (under some plausible scenario) of producing unpleasant consequences, thus corresponding to the "potential loss" entry in the UNESCO formula.

Under what I consider to be the standard definitions, the overall risk will normally be taken to be the sum of scenario risks for some reasonable set of scenarios. This may be an overestimate of the risk (since simply summing scenario risks doesn't account for scenario overlap) or an underestimate (since the scenarios excluded from the "reasonable" set of scenarios may represent significant risk).
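As a rough sketch of the summation described above, here is a minimal Python illustration; the scenarios, likelihoods and consequence figures are invented purely for illustration and are not from the thread.

    # Overall risk taken as the sum of scenario risks, each scenario risk
    # being likelihood x consequences.
    scenarios = [
        # (description, likelihood per year, consequences in loss units)
        ("small fire",       1e-2, 1e4),
        ("large fire",       1e-4, 1e6),
        ("flooding of site", 1e-3, 5e5),
    ]

    scenario_risks = {name: likelihood * consequences
                      for name, likelihood, consequences in scenarios}

    # Simple summation; as noted above, this may over- or under-estimate the
    # overall risk depending on scenario overlap and scenarios left out.
    overall_risk = sum(scenario_risks.values())

    for name, r in scenario_risks.items():
        print(f"{name:18s} {r:10.1f} loss units/year")
    print(f"{'overall (sum)':18s} {overall_risk:10.1f} loss units/year")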

One context in which "vulnerability" seems appropriate is where estimates of the ultimate risk must account for attack and defense by conscious agents, for instance, in estimating the risk of diversion of weapons materials, or the risk of successful attacks on computer networks, or the risk of various types of criminal endeavours, etc.

In those contexts, it seems reasonable to consider the likelihood of a successful diversion or attack to be a function of the threat and the vulnerability. The risk then can be defined as a function of the likelihood of a successful diversion or attack and the consequences of that diversion or attack.

In the UNESCO definition, you might argue that "vulnerability" was a proxy for the likelihood that impersonal Nature, design faults, and component/system failures eventually lead to unpleasant consequences, but it seems to me that the term "vulnerability" better represents the success or failure of conscious actors in dealing with threats from other possibly conscious actors. To my mind, the UNESCO definition still would have a non-standard definition of "hazard", since in the standard definition "hazard" and "potential loss" are essentially equivalent.

It will be interesting to hear from others on this issue, since I am arguing for certain "standard" definitions of English words, which don't necessarily agree with the dictionary definitions, while risk analysis/assessment/management is a worldwide, multilingual activity.

Jim [email protected]


I have come across a French version of the UNESCO definition :

risque = Alea x Vulnerabilite

Reference: Coste, Lucien (1996). Definissons le risque. Preventique-Securite 29 (Septembre-Octobre): 22-25.

This would translate something like

Risk = Chance x Vulnerability.

I am still looking for the exact publication of UNESCO who proposed the definition for risk (anybody out there ?). The definition seems to be used mostly by Europeans working in the area of natural disasters (like Lucien Coste).

In this context, the term vulnerability does not refer to the "success or failure of conscious actors in dealing with threats from other possibly conscious actors." Jim Dukelow

It would indeed be rather "a non-standard definition of "hazard", since in the standard definition "hazard" and "potential loss" are essentially equivalent." Jim Dukelow

If we do take the French version of the UNESCO definition, and translate alea with chance, we are indeed not very far from the standard definition Jim Dukelow is referring to : "A more standard definition of risk (in USDOE and USNRC space) is

scenario risk = likelihood x consequences

where "likelihood" is a measure of the frequency or probability of occurrence of a particular scenario (sequence of events) and "consequences" is some measure of the outcome of that scenario."

My only objection to Jim's interpretation of this standard definition concerns the word likelihood. I would not limit it to a measure of the frequency or probability of a scenario. Doing so would eliminate alternative approaches like fuzzy mathematics. As Lucien Coste has outlined (above reference), the French word alea refers to a possibility (which I have translated as chance). Possibilities, which should include probabilities, may refer to alternative mathematical approaches rather than the standard theory of probabilities.

The emphasis on possibilities (which corresponds to the word likelihood) may be a European bias to the concept of risk. A Dutch publication defined risk as "The possibility, with a certain degree of probability, of damage to health, the environment or goods, together with the nature and extent of that damage." (Health Council of the Netherlands: Committee on Risk Measures and Risk Assessment, "Risk is more than just a number", The Hague, 1996)

Does anybody know more about the choice of the term likelihood ?

Reiner Banken M.D. [email protected]

In a message dated 5/25/99 8:22:50 PM Pacific Daylight Time, [email protected] writes:

<< I thought that "risk" was some function of "likelihood" and "consequences" but doubt that it is the product of these two. >>

What "risk" is depends on what properties one believes it should have. Specifying properties can lead to a definition of risk.

In the context of financial gambles, David Bell, in "Risk, return, and utility" (Management Science, 1995, 41, 1, 23-30) defines the "risk" of a single-attribute random variable X as what a rational decision maker needs to know, in addition to its expected value (E(X)) and the starting wealth (w), to calculate the expected utility of X. He therefore posits a function of the following form:

E[u(w + X)] = f[E(X), R(X), w]

The properties he specifies are as follows:

1. Monotonicity: u(w) increases with w.
2. Risk aversion: u(w) is risk-averse (i.e., concave); f decreases with R(X).
3. Decreasing risk aversion: If X and Y are indifferent at wealth w, then X is preferred to Y at wealth > w iff R(X) > R(Y). ("Risk matters less at higher initial wealth.")
4. Continuity: u is continuous.

From these properties, he deduces the following:

THEOREM 1:

Axioms 1-4 imply that u(w) = [1 - exp(-cw)]/[1 - exp(-c)]

and 

R(X) = (1/c) log E[exp(-c[X - E(X)])] for c > 0.


c is a coefficient of risk aversion.

So, R(X) is the definition of "risk" in this setting.
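As a small numerical illustration of this measure, the Python sketch below evaluates R(X) = (1/c) log E[exp(-c[X - E(X)])] for a two-outcome gamble; the gamble and the coefficient c are illustrative choices, not taken from Bell's paper.

    import math

    # A 50-50 gamble: lose 100 or win 120 (illustrative payoffs).
    outcomes = [(-100.0, 0.5), (120.0, 0.5)]   # (payoff, probability)
    c = 0.01                                    # coefficient of risk aversion, c > 0

    ex = sum(x * p for x, p in outcomes)                          # E(X)
    mgf = sum(p * math.exp(-c * (x - ex)) for x, p in outcomes)   # E[exp(-c(X - E(X)))]
    risk = math.log(mgf) / c                                      # Bell's R(X)

    print(f"E(X) = {ex:.2f}, R(X) = {risk:.2f}")
    # A wider spread of payoffs or a larger c yields a larger R(X).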

Other authors have taken a similar axiomatic approach. Different sets of axioms lead to different quantitative measures of risk. So, to define risk, it is useful to be specific about the properties one wishes to entail when one speaks of risk. 

BTW, I agree that the product of probability and consequence is not a good general definition of risk. It ignores the fact that many decision makers are risk averse.

Tony [email protected]

This definition

risk = hazard x vulnerability x potential loss

is similar to the Business Continuity / Disaster Planning formulation of

Risk = Threat x Vulnerability x Impact

which is also used in some types of security risk analysis. The use of an arithmetic equation makes all sorts of assumptions that are not generally borne out in the way the analysis is performed, with the three variables (Threat, Vulnerability and Impact) often being measured on arbitrary and essentially qualitative scales.

Leaving the mathematical rigour aside, the way the terms are used has always suggested to me that they indicate a lack of clarity in the underlying analysis. Very few analyses get to the level described by Tony Cox (In the context of financial gambles, David Bell, in "Risk, return, and utility" ...) and the simple application of Threat x Vulnerability x Impact or even Probability x Impact rarely amounts to much more than a statement that there are three (or two) independent features of a risk that we need to take into account and that the overall severity or priority is a combination of them with characteristics more like a product than a sum. Products give no weight to a combination for which one of the measures is zero while additive combination gives some weight so long as at least one of the measures is greater than zero.

I prefer to frame the analysis in terms that make few assumptions beyond those apparent to the decision makers. On the basis that the simplest model that will explain or clarify a phenomenon is the most useful, because it avoids cluttering our thoughts with unnecessary details, qualitative scales for likelihood and impact combined through a lookup matrix are often sufficient for routine planning. The next step up is to use real numbers for probabilities and impacts, but it is a mistake to think that this puts the whole exercise on an objective basis as both the source of the inputs and the interpretation of the output are anything but objective.
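A minimal sketch of such a lookup matrix, in Python; the scale labels and the ratings in the matrix are illustrative conventions, not a standard.

    LIKELIHOOD = ["rare", "unlikely", "possible", "likely", "almost certain"]
    IMPACT     = ["negligible", "minor", "moderate", "major", "severe"]

    # Rows indexed by likelihood, columns by impact.
    MATRIX = [
        ["low",    "low",    "low",    "medium",  "medium"],   # rare
        ["low",    "low",    "medium", "medium",  "high"],     # unlikely
        ["low",    "medium", "medium", "high",    "high"],     # possible
        ["medium", "medium", "high",   "high",    "extreme"],  # likely
        ["medium", "high",   "high",   "extreme", "extreme"],  # almost certain
    ]

    def rate(likelihood: str, impact: str) -> str:
        # Look up the qualitative risk rating for a likelihood/impact pair.
        return MATRIX[LIKELIHOOD.index(likelihood)][IMPACT.index(impact)]

    print(rate("possible", "major"))   # -> high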

If you need to go even further and take account of Tony's comment that "the product of probability and consequence is not a good general definition of risk. It ignores the fact that many decision makers are risk averse", life will get a lot more complicated. Not only are decision makers generally risk averse, they are very diverse, not always consistent over time, and not always capable of understanding the interaction between their own preferences and a risk assessment. The work required to take account of preferences and formalise them in utilities and rules or formulae for combining them is simply not warranted in very many instances, and even when it is, you have to ask if your audience will be willing or able to use it.

To summarise - keep it simple unless you need more and don't let the analysis get beyond the thought processes of your decision makers.

Steve [email protected]

I read with interest your comments on the UNESCO definition of risk.

With a background in environmental health, I am currently working on a framework for assessing and managing safety in health technology assessments (HTA). I am looking for generic concepts of risk and safety which could cover all of the different types of risks associated with health technologies (iatrogenic, infectious, toxic, technological, ...). The definition of technology is rather broad in HTA; it could be defined rather loosely as 'ways of doing things in the health system'.

In your comments you wrote: "On the basis that the simplest model that will explain or clarify a phenomenon is the most useful because it avoids cluttering our thoughts with unnecessary details, qualitative scales for likelihood and impact combined through a lookup matrix are often sufficient for routine planning."

Could you give me some references or examples of this approach? What do you mean by a lookup matrix? Are you aware of any examples of such an approach outside the business or investment sector?

Reiner Banken M.D. [email protected]


<< To summarise - keep it simple unless you need more and don't let the analysis get beyond the thought processes of your decision makers. >>

Agreed on all counts!

As a practical matter, I like to do a "short-cut" decision analysis which assumes (with some theoretical and empirical justification) that utility is related to outcomes measured in "natural" units in a simple way that involves a single numerical risk-aversion parameter. (In jargon, I buy off on the use of exponential utilities for measurable value functions as an awfully good approximation in many cases.) Then, that one parameter can be assessed by discussing with the client the certainty-equivalent of some different risky prospects.

If that's too much trouble, I can also present answers as functions of the risk aversion parameters, described by the certainty equivalent of a 50-50 gamble on a good or bad outcome. Based on such a sensitivity analysis, I can show what decision makers who are more or less risk-averse will do and let the real decision-maker be informed accordingly.
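As a sketch of that kind of sensitivity analysis (assuming, as above, an exponential utility with a single risk-aversion coefficient; the gamble outcomes and coefficients below are made up):

    import math

    good, bad = 1_000_000.0, -200_000.0   # 50-50 gamble outcomes, purely hypothetical

    def certainty_equivalent(c: float) -> float:
        # CE of the gamble under exponential utility u(x) = 1 - exp(-c*x), c > 0.
        expected_exp = 0.5 * math.exp(-c * good) + 0.5 * math.exp(-c * bad)
        return -math.log(expected_exp) / c

    for c in (1e-7, 1e-6, 5e-6):
        print(f"c = {c:.0e}   CE = {certainty_equivalent(c):>12,.0f}")
    # The more risk-averse the decision maker (larger c), the lower the
    # certainty equivalent of the same gamble.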

None of this cuts much ice with d.m.'s who don't want to use decision analysis. So, for them, I just identify undominated prospects and leave it at that. I agree with you that many decision-makers fall in this category. My experience has been that my business clients usually like to see how sensitive recommended actions are to risk-aversion, but have little enthusiasm for formal utility assessment.
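A minimal sketch of that fallback (identifying undominated prospects), with prospects represented as vectors of outcomes where larger is better; the data are made up.

    prospects = {
        "A": (3, 5, 2),
        "B": (2, 4, 2),   # dominated by A
        "C": (4, 1, 6),
        "D": (3, 5, 1),   # dominated by A
    }

    def dominates(x, y):
        # x dominates y if it is at least as good everywhere and strictly better somewhere.
        return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

    undominated = [name for name, x in prospects.items()
                   if not any(dominates(y, x)
                              for other, y in prospects.items() if other != name)]
    print(undominated)   # -> ['A', 'C']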

Tony [email protected]

I agree with you that it is necessary to clarify the concepts of RISK and HAZARD, as it is necessary to compare like with like and to speak the same language. Unfortunately my field of interest is not natural disasters but construction projects. Nevertheless, I understand the assumption that natural disasters do not take place if there is no risk for the assets; it was nonetheless useful to be reminded of it. It is interesting to note the work being prepared by Fritz A. Seiler on gathering the different existing definitions.

The word risk may be either of Arabic origin (risq) or of Latin origin (risicum). Its French spelling `risque' first appeared in England in the mid-17th century. It began to appear in its modern version in insurance transactions, around 1830. Over time, the original meaning of the word has changed. It has gone from one simply describing any unexpected outcome, good or bad, to one relating to undesirable outcomes. It now has a negative connotation in common English usage, as in `run the risk of...' or `at risk'.

However, I have a reservation about your definition of HAZARD (but as I wrote above, the definition of and approach to such a term may have a different meaning in your field of interest). I do not consider that a HAZARD is the probability that an event will occur, but that a HAZARD is that event. The risk is then the probability that an event occurs, and the loss of life (or money) is then the CONSEQUENCE of that HAZARD. In order not to forget that risk analysis is not only the analysis of "bad" events (or hazards) but can also lead to the analysis of potential OPPORTUNITIES, in construction project risk analysis the term EVENT is more often used. I agree then with the definitions and concepts of the terms VALUE and VULNERABILITY as you wrote them. If I follow them, value (or assets) is the total amount of elements which are analysed, vulnerability is the percentage of these elements which are likely to be lost if a hazard occurs, and the probability is the RISK that this HAZARD occurs. It is only a question of terminology in fact. You describe the risk as the amount of loss after calculation, while I take it as the probability that an event occurs. Using that definition, the formula is:

Consequences = value * vulnerability * risk

Where

CONSEQUENCES (or whatever it can be called) is the way to quantify the resultant figures and to give a characteristic to an area or a project which can be compared with another one.

VALUE is the total amount to be studied

VULNERABILITY is the percentage of that whole that is likely to be lost

RISK is the probability that an event will occur.
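A toy numerical reading of the formula, with figures invented purely for illustration:

    value         = 10_000_000   # total amount under study (e.g., project value)
    vulnerability = 0.40         # fraction of that value likely to be lost if the event occurs
    risk          = 0.05         # probability (per year) that the event occurs

    consequences = value * vulnerability * risk
    print(consequences)   # 200000.0 -- the resultant figure, in the same units as value, per year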

PS: If anyone has more details concerning these definitions of RISK, VALUE, VULNERABILITY and CONSEQUENCES, feel free to give your opinion in order to get a strong basis for further discussions.

References:

Flanagan, R., Norman, G., Risk Management and Construction, BSP, 1993

Ansell, J., Wharton, F., Risk Analysis, Assessment and Management, J. Wiley and Sons, 1992

Smith, N.J., Managing Risk in Construction Projects, Blackwell Science, 1998

Cadman, D., Byrne, P., Risk, Uncertainty and Decision Making in Property Development, E & FN Spon, 1984


Alexandre [email protected]

I am fascinated by the discussion of hazard versus risk; however, I do not feel any of the current opinions speak to my concern of whether or not one can qualitatively evaluate risk. In Comparative Risk Assessment, scales are developed to rank risks based on quantitative characteristics, but in a qualitative ranking system. While probability is inherent in the ranking, the resulting ranking does not reflect actual differences in scale, or magnitude. Are we comparing risks or ranking hazards? Is there any meaning for risk in qualitative terms?

Jo Anne [email protected]

<< Is there any meaning for risk in qualitative terms? >>

Yes, and qualitative comparisons can sometimes lead to quantitative ones.

A pretty standard framework for comparing risks (and defining quantitative measures of risk) goes like this.

1. Start with a bunch of objects to be compared. These might be probability measures for consequences (or probability distribution functions if there is only one continuous attribute for consequences, like life-years lost). They could also be belief functions, plausibility measures, etc., depending on your religious affiliation vis-a-vis uncertainty calculi. Let's just call them "prospects".

2. Posit a comparative relation called "riskier than" or "is at least as risky as" that can be used to compare prospects.

3. Write down some qualitative properties (like continuity, transitivity, respecting dominance, etc.) that you think the comparative relation should have.

If you stop here, you may already have enough structure to provide useful comparisons between prospects. You can solve some practical decision problems, like identifying actions corresponding to undominated prospects. But, the usual flow is to continue from qualitative to quantitative. This is done in two steps:

4. Show that numbers can be assigned to prospects in such a way that the sizes of the numbers reflect the comparative relation ("is at least as risky as"). Usually, this is done by exhibiting some function (or functional) that does the job. This step, if you can carry it out, constitutes a "representation theorem". If other folks like your qualitative properties, then the representation theorem gets you a publication in some journal like the Journal of Risk and Uncertainty.

5. Show that the numbers so assigned are unique up to some kind of scale transformation (e.g., choice of origin and unit.) Now, interpret those numbers as the quantitative risks associated with the prospects. Ta da! You have moved from qualitative to quantitative risk measurement.

Lots of folks have gone down this path. I think there are at least a dozen quantitative risk measures based on such qualitative properties. One of my favourites is "A standard measure of risk and risk-value models", J. Jia and J.S. Dyer, Management Science, 42, 12, December, 1996, 1691-1705.

The problem with all this cool technical development is that there are too many ways to come up with meanings for "risk" in qualitative terms, as you put it. Which one is best really comes down to which sets of properties (or "axioms") you find most compelling. The Jia and Dyer ones seem pretty good to me. (I also like Bell's work, which I mentioned previously, though Jia and Dyer give reasons why they like theirs better. The representations are quite similar.)

Tony [email protected]

One more blow to this recumbent horse.

It seems to me that much of this is semantics. Tony Cox's posting begins to address a fundamental issue comprised of (1) the decision to be made (including the purpose and context), (2) the decision-maker(s) (DMs), and (3) how the DMs think / make their decisions. We, as risk assessors, need to meet the needs of the DMs.

Risk analysis is used by many people in many contexts for many reasons. Accordingly, it is appropriate to use different approaches, methods, etc. to meet the highly variable needs of the DMs. One method, one set of words and definitions will not work in all, or even most, situations. The words, concepts, and methods used in a risk analysis must address each of the three points above to be meaningful.

Terms must be defined in the context of the three points. The term "risk" means very different things to AIDS caregivers, insurance actuaries, and the EPA carcinogen assessment group. Any risk metric used needs to be consistent with the decision to be made and DM's other needs and issues. Ditto for other terms used.

The concepts used must address the concepts being addressed by the DMs using words and concepts that make sense to the DMs. Communication is not the words that are presented, it is what is understood by the recipient of those words. A common understanding of the terms used needs to be established and used consistently for a risk analysis to be useful.

We not only need to make sense to the DMs, we need to address the decision(s) to be made. The concepts used and conclusions made in a risk analysis must be consistent with the decision(s), including: (1) why, (2) how, (3) by whom, (4) when, (5) etc. The DMs (not to mention others who might read, try to understand, and use the results of a risk analysis) often don't understand the nature of the decision well enough to understand what methods should be used. We as risk assessors have an obligation to make the effort to understand what is needed, in addition to what is requested, and address that. We need to avoid merely fitting the analysis into one or two analytical method(s) that happen to be handy or familiar to us or the DMs.

As risk assessors, I think we already understand these things. We do tend to overlook them from time to time, as in this thread. We as risk assessors do ourselves, our discipline, our clients, and the public a disservice when we fail to recognize and implement these principles. Yes, we actually have to think about what we are about to do instead of just doing it.

John [email protected]

All the recent discussion on RiskAnal about the defn of risk sounds too too simplistic for serious risk assessment. Maybe some of the "sound bites" will work on the Oprah Winfrey show, but they do not form the basis of a proper risk assessment.

=-=-=

In the first issue (Volume 1, Number 1) of the journal "Risk Analysis" circa 1980 or 1981, John Garrick, Stan Kaplan (and other co-authors?) proposed a definition of risk. I do not have the article in front of me, so I will work from memory

Risk is a 3-tuple (i.e., a vector with 3 components)

(go read the original article for a full explication)

=-=-=

Adapting the idea of risk as a vector, let me offer an update of the Garrick/Kaplan idea:

Risk is a 4-tuple (i.e., a vector with 4 components). Here are the four components needed to specify a risk:


1. A statement of the population under study and analysis

Does the study concern chipmunks? people? just females? just females who live in Montana? people with diabetes? who???

2. A statement of the outcome under study and analysis

Does the study concern prompt death such as an airplane crash? cancer of the liver? cancer of the prostate? headache? nausea? sore throat? stomach distress due to undercooked meat? a delay due to congestion on a highway during rush hour traffic?

3. A statement of the variability (probability) of the outcome in the specified population

Look in any dictionary to see that risk involves likelihood, chance, and probability. For a population, it is correct to specify the probability as a probability distribution for the population.

No, a point estimate cannot do the job, nor can any single summary statistic (such as the mean or the 75th percentile or the 99.9999999th percentile). Any single number taken from a distribution has the potential to mislead the risk manager and the public. Risk managers and the public need to see the full distribution.

4. A statement of the uncertainty/certainty in the results, i.e., a statement by the analyst on how confident that he or she may be in the results.

This may be qualitative, ranging from "I pulled this answer out of my hat" to "We had the entire faculty of CalTech study the problem for 2 years at a cost of $3B" -- but somehow the risk assessor needs to tell the risk manager and the public how much or how little confidence he/she has in the results.

The answer may also be quantitative using any of the several techniques now available, i.e., second-order random variables that capture both variability and uncertainty. Some people may want to use fuzzy methods to convey the uncertainty/certainty...

IMHO, any definition of risk -- and any discussion of risk -- needs to include these four components.

If one or more of the components is missing from the study, it is not a risk assessment.... Maybe it is science fiction? or fantasy?
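A minimal sketch of these four components as a data structure; the field names and the example content are illustrative only, not part of the original proposal.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class RiskStatement:
        population: str                          # 1. who or what is at risk
        outcome: str                             # 2. the outcome under study
        # 3. variability of the outcome across the population, carried as a
        #    (discretised) distribution rather than a single summary number
        distribution: List[Tuple[float, float]]  # (individual probability, population fraction)
        uncertainty: str                         # 4. the analyst's statement of confidence

    example = RiskStatement(
        population="adults with diabetes living in Montana",
        outcome="hospitalisation for a foot ulcer within five years",
        distribution=[(0.02, 0.25), (0.06, 0.50), (0.15, 0.25)],
        uncertainty="moderate: small regional cohort, unvalidated exposure model",
    )
    print(example)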

David E. [email protected]


Jim Dukelow's response below assigns the gravity of consequences to the risk component of my simple 2-compartment model, an entirely reasonable approach. I generally consider potential consequences as an attribute of the hazard. Both approaches address consequences as a component of risk and risk analysis using either approach should provide the same results.

John [email protected]

<< All the recent discussion on RiskAnal about the defn of risk sounds too too simplistic for serious risk assessment. Maybe some of the "sound bites" will work on the Oprah Winfrey show, but they do not form the basis of a proper risk assessment. >>

Risk assessment in a scientific sense is one thing. Risk assessment in a legal sense is another. US EPA has many regulations requiring "risk assessment" studies, specific methods that are approved for such studies, and specific limits on risks calculated by these methods. Whether or not the methods that they require reflect the best possible science, or the limits in the regulations are the most appropriate, they do tend to have the effect of reducing exposure to the toxic pollutants that the regulations and associated risk assessment methods address, and tend to serve a useful purpose as a result.

I think that more attention should be given on this list to the methods required by government agencies for regulatory "risk assessment" and developing and proposing specific improvements to them that are realistically implementable and enforceable and that do a better job of protecting human health and the environment.

Larry [email protected]

This seems to be getting very abstract - see embedded comments below.

At Wed, 02 Jun 1999 15:33:32 -0600, "Fritz A. Seiler" <[email protected]> wrote:

>Jo Anne Shatkin wrote:

>> I am fascinated by the discussion of hazard versus risk, however do not feel any of the current opinions speak to my concern of whether or not one can qualitatively evaluate risk. ........ Is there any meaning for risk in qualitative terms?

>A Comment on the Concepts of Hazard, and Risk, and on the terms Qualitative and Quantitative Risk sent to both RISKANAL and to RADSAFE by Fritz A. Seiler and Joseph L. Alvarez


>

>Dear Jo Anne,

>

> Risks and hazards are quite generally relating to the same natural phenomenon or artificial construct. For example, a cumulus cloud is a hazard to aviation, as well as a reef is a hazard to navigation, and ice on the road is a road hazard. As long as these phenomena or artefacts are left alone, so that they cannot interact with an object such as a plane, a ship, or a car, there is no risk, although the hazard exists.

Who cares about hazards, then? It seems to be a label we could do without by simply confining our attention and language to risks...

>In other words, a hazard describes the type of conditions that may occur at different locations in the cloud or across the reef, but it takes a well-defined, detailed space-time scenario to quantify the risk of a particular consequence.

It seems that any feature of the physical universe could be constructed as a hazard to something on this basis.

> By definition, therefore, risks are always quantitative, i.e., they are expressed in numbers. The question of whether a risk is given in qualitative or quantitative terms depends only on how large its errors are. If they are very large, a qualitative scale may be used, for instance, by characterizing the risk as 'high', 'medium', 'low', and 'very low'. Due to the large uncertainties, this may then be called a qualitative risk assessment. But we need to remember that, with all its uncertainty, there is still a coarse numerical judgment involved in classifying a risk as 'low' or 'very low'.

This seems like a rather narrow engineering definition that, whether it is intended to do so or not, is likely to confine attention to things you can measure with a ruler, scales, stop watch etc.

There is so much confusion around the subject of risk, why complicate it further? The concept of risk being exposure to uncertain detrimental impacts on your objectives, or words to that effect, seems perfectly adequate and well grounded in the real world. The focus on objectives avoids the analysis drifting into things no one cares about and ensures realistic priority setting in general.

I was tempted to let this one go, but I spend so much time helping clients retrieve their concept of risk from muddles they have drifted or been led into that I thought I'd chip in.


Any analysis of risk is academic and might as well not happen if no one is going to act on it, even if the action is just to decide to do nothing. If you want to help people decide how to act there will be enough complexity inherent in the situation without adding to it. So keep it simple I say.

Steve [email protected]

I agree with Dr. Grey's comments on abstraction and simplicity. The simple definitions I often use are:

1) A hazard is something bad that might happen (or something that might cause something bad to happen), and

2) Risk is the likelihood that the bad thing will happen.

 

John [email protected]

There are specific situations in which the consequences of accident scenarios might be assigned a numerical value of 1, such as 1 core damage event, which is the end point of nuclear plant Phase I probabilistic risk assessments. In those situations the risk turns out to be equal numerically to the likelihood that the bad thing will happen (assuming that we have defined risk as consequences times likelihood, a definition that some dislike).

Normally, however, a hazard will have a range of possible consequences, particularly if the measure of consequence has units of monetary cost or numbers of people ill or dead. In these situations the concept of risk should capture both the magnitude of the consequences and the likelihood of their occurrence.

Now the chain of custody gets a little complicated. In the message that John Beach was responding to, Steven Grey had a number of comments on an earlier message by Seiler and Alvarez, including:

S&A> Risks and hazards are quite generally relating to the same natural
S&A> phenomenon or artificial construct. For example, a cumulus cloud is a
S&A> hazard to aviation, as well as a reef is a hazard to navigation, and ice
S&A> on the road is a road hazard. As long as these phenomena or artifacts are
S&A> left alone, so that they cannot interact with an object such as a plane,
S&A> a ship, or a car, there is no risk, although the hazard exists.

Grey> Who cares about the hazard then. It seems to be a label we could do
Grey> without by simply confining our attention and language to risks.

My response is conditioned by accepted usage in DOE-space of hazard and risk analysis. A hazard analysis will attempt to identify all hazards (without initial consideration of likelihood). That will be followed by a step that qualitatively or semi-quantitatively estimates the likelihood(s) associated with each of the hazards. At this point, we have, effectively, a qualitative or semi-quantitative risk analysis, which can either be used stand-alone to inform risk management decisions or can be used to determine the scope of a (more expensive) quantitative risk analysis.
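A rough sketch of that two-step screening, in Python; the hazards, the 1-5 scoring scales and the threshold are invented for illustration.

    hazards = [
        # (hazard, likelihood score 1-5, consequence score 1-5)
        ("flammable solvent storage",   4, 3),
        ("seismic damage to tank farm", 2, 5),
        ("loss of off-site power",      5, 2),
        ("meteor strike on facility",   1, 5),
    ]

    THRESHOLD = 10   # carry forward anything whose screening score reaches this value

    for hazard, likelihood, consequence in hazards:
        score = likelihood * consequence
        decision = "quantitative risk analysis" if score >= THRESHOLD else "screened out"
        print(f"{hazard:30s} score {score:2d} -> {decision}")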

In response to my posting yesterday on the Price-Anderson Act (in the US), Ray Martin asked:

RM> Any estimate on the payout on all that liability?

As of last September, the Price-Anderson nuclear liability insurance pools had paid out $131 million in claims and claims expenses (since 1957). Of that amount, about $70 million was paid in connection with the Three Mile Island accident in 1979. More information is available at <www.nrc.gov/OPA/gmo/nrarcv/98-175.htm> and <tis-nt.eh.doe.gov/enforce/bginfo/info.html-ssi>.

The P-A liability pool approaches being no-fault insurance, as normal standards for establishing liability under tort law do not apply to P-A claims.

Finally, an example from my past of the distinction between hazard and risk. More than a decade ago, I was involved with safety and risk analysis for a proposed high-level nuclear waste repository to be located 1 kilometer below the surface in the Columbia Basin basalt flows. Previous analyses had extensively considered the hazard represented by a local meteor strike severe enough to excavate and disperse the contents of the repository. This struck me as wrong-headed, because a fairly simple analysis could establish that the probability of such a strike was on the order of one in a hundred million per year AND the primary consequences of the meteor strike would be much more severe than the secondary consequences of dispersal of the wastes.

On the other hand, Chris Chapman of Southwest Research Institute and a colleague, whose name escapes me at the moment, established fairly convincingly in a paper in Nature a few years ago, that the risk to humans of meteor strikes is of the same order as a number of risks we pay a lot of attention to. That risk is dominated by medium-sized meteor strikes in the world's oceans and the effects of the accompanying tsunamis.

Jim [email protected]

 

> If the phrase "risks we pay a lot of attention to" means things like
> auto related fatalities (does it?), the quote below seems to compare
> actuarial data and hypothetical data, which seems inappropriate but
> common. Some of the prior and related comments also seem to ignore
> other risk-related factors (defining risk as "something bad that might
> happen") that some non-engineers think should not be ignored when
> risks are characterized (e.g. voluntariness, intergenerational
> equity). Did any of the prior commenters read the US National
> Research Council's 1996 "Understanding Risk"?
>
> ___________
>
> "...a colleague, whose name escapes me at the moment, established
> fairly convincingly in a paper in Nature a few years ago, that the
> risk to humans of meteor strikes is of the same order as a number of
> risks we pay a lot of attention to..."

First, my apologies to Clark Chapman (not Chris Chapman). The paper in question is

Clark R. Chapman and David Morrison, "Impacts on the Earth by asteroids and comets: assessing the hazard", Nature, v. 367, pp. 33-39, January 1994.

Perhaps the best way of indicating which risks are considered would be to reproduce Table 3 from the paper.

Table 3. Chances of dying from selected causes (USA)

Cause                                     Chances
Motor vehicle accident                    1 in 100
Murder                                    1 in 300
Fire                                      1 in 800
Firearms accident                         1 in 2500
Asteroid/comet impact (low limit)         1 in 3000
Electrocution                             1 in 5000
Asteroid/comet impact                     1 in 20000
Passenger aircraft crash                  1 in 20000
Flood                                     1 in 30000
Tornado                                   1 in 60000
Venomous bite or sting                    1 in 100000
Asteroid/comet impact (high limit)        1 in 250000
Fireworks accident                        1 in 1 million
Food poisoning by botulism                1 in 3 million
Drinking water with EPA limit of TCE      1 in 10 million

The indicated chances have units [per lifetime].

With the exception of TCE in drinking water, these are all risks for which we have actuarial information. I would argue that the evidence for asteroid/comet impact is effectively actuarial, although in this case the actuarial record consists of impact craters and their associated dates, and surveys of near-Earth objects and their sizes, together with theoretical models of the consequences of land and ocean impacts for various sizes of objects. Although I consider the evidence to be actuarial, of the types of impact events considered by Chapman and Morrison, the only one I am aware of in the historical record is the 1908 Tunguska event, which affected hundreds of square kilometers of central Siberia and caused an unknown number of casualties.

None of the listed risks appears to be voluntary. None appears to involve intergenerational equity (again, with the possible exception of TCE in drinking water).

I misrepresented the results of the paper somewhat. The calculated risks are dominated by the global effects of rare, large impacts -- something similar to, but less severe than, the K/T boundary event that is considered by many to have killed off the remaining dinosaurs 65 million years ago. Chapman and Morrison's Table 3 does not reflect the consequences of tsunamis produced by somewhat smaller ocean impacts. They note that "Inclusion of tsunamis might raise mortality from several-hundred-metre bodies by as much as a factor of 10 (J. Pike, manuscript in preparation)". Overall risk would still be dominated by rare, large impacts, however.

Jim [email protected]


By way of reintroduction, and filling in some details for Jim Dukelow, Clark Chapman's home page is at http://k2.space.swri.edu/clark/clark.html. "The Risk to Civilization from Extraterrestrial Objects" at http://k2.space.swri.edu/clark/chance.html has links to everything you ever wanted to know about the subject.

"Legal Issues in Defending Against Asteroids" by Michael B. Gerrard is at . . . http://members.tripod.com/~Ray_Martin/RiskAnal/DefAgAst.html.

A book review of "Project Risk Management: Processes, Techniques and Insights" by C.B. "Chris" Chapman and Stephen Ward is at http://members.tripod.com/~Ray_Martin/RiskAnal/ProjRskM.html. Chris is Professor of Management Science and Director of the School of Management, University of Southampton.

To add one more row to Clark's table, fixed-wing combat flying risk, Southeast Asia and since, is about 3-7 chances of being shot down per 100 missions and 1-3 chances per 100 of dying from it. The remainder are ejections and safe recovery, usually by helicopter. A few are captured. Most of the losses are in air-to-ground (bombing) missions. Losses are all actuarial, all well known. US air crews are 100 percent voluntary. With few exceptions, after a few missions, combat flying is not considered risky by the crew members themselves. If asked, I suspect the majority would tell you to put the often touted 1:1 million "remedial action goal" where the sun doesn't shine--but wear your seat belt.

Ray [email protected]

or [email protected]