

6.5.6.9

MEASUREMENT OF RISK

Hans Wolfgang Brachinger, Professor, Department of Quantitative Economics, University of Fribourg, Switzerland

Keywords
Risk, risk judgement, risk perception, risk measure, decision making under risk, risk-value models, variance, volatility, value-at-risk

Contents
1. Introduction
2. Standardized Risk Measures
3. Luce's Measures of Risk
4. Sarin's Measures of Risk
5. Fishburn's Measures of Pure Risk
6. Fishburn's Measures of Speculative Risk
7. Risk Measurement under Partial Probability Information
8. Final Remarks

Glossary

Risk: Possibility of injury or loss attached to the choice of a given alternative or action. Its amount is a matter of perception or estimation.
Risk preference: Refers to the preferability of an alternative under conditions of risk and is a matter of preferences.
Risk-value models: Models of decision making under uncertainty assuming that the preference for an alternative is exclusively determined by its riskiness and its value or worth. Within such models, the decision problem is viewed as choosing among possible risk-value combinations where the riskiness of each alternative is numerically represented by a risk measure.
Alternative: Every action a decision maker may choose in a set of admissible actions characterizing a given decision problem.
Perceived risk: The amount of risk attached to a given alternative according to the perception of an individual decision maker. This perception is determined by the amount of potential losses and their probability.
Risk measure: A real-valued function numerically representing an individual decision maker's risk ordering on a given set of alternatives. It quantifies the amount of perceived risk.
Risk ordering: An ordering which can be obtained directly by asking a decision maker to judge which of any pair of alternatives he perceives as riskier. This ordering need not be related to the decision maker's preference ordering in any simple way.
Random variable: A function defined on a set of random events with real values which are themselves regarded as random. In decision situations under risk, the possible outcomes of any alternative are usually regarded as a random variable with monetary values.
Value-at-Risk: For a given time horizon and a confidence level 1 − α, the Value-at-Risk of a financial portfolio is the loss in market value over the time horizon that is exceeded by that portfolio only with probability α.
Target outcome: An outcome such that every outcome smaller than the target outcome is viewed as undesirable or risky, while outcomes as large as the target outcome are desirable or non-risky. This target outcome may be the zero outcome, the status quo, a certain aspiration level, as well as the best


result attainable in a certain situation.
Gain: Any outcome that lies above a certain target return.
Loss: Any outcome that falls below a certain target outcome.
Variance: A classical risk measure quantifying the risk of an alternative by the mean square deviation of its potential outcomes from its mean outcome.
Standard deviation: A classical risk measure quantifying the risk of an alternative by the square root of the mean square deviation of its potential outcomes from its mean outcome.

Summary

The concept of risk is essential to many problems in economics and business. Usually, risk is treated in the traditional expected utility framework where it is defined only indirectly through the shape of the utility function. The purpose of utility functions, however, is to model preferences. In this article, those approaches are reviewed which directly model risk judgements. After a review of standardized risk measures, recent theoretical developments of measures of perceived risk are presented.

1. Introduction

The term risk plays a pervasive role in many economic, political, social, and technological issues. In the literature, there are various attempts to define or to characterize the risk of an alternative for descriptive as well as for prescriptive purposes. Thereby, the main emphasis lies on the risk of the alternative itself, independently of the problem of risk preference. Risk refers to the riskiness of an alternative; it is a matter of perception or estimation. Risk preference refers to the preferability of an alternative under conditions of risk and is a matter of preferences.

Having accepted that risk is something different from risk preference, it would be interesting to know what the relation between risk and risk preference is. There are various theories of decision making under risk. Some of these theories, like risk-value models, make explicit use of a risk measure; others do not. In this article, neither risk-value models nor the relation of risk measures to other theories of decision making under risk will be discussed. The focus is on one important component of risk-value models, the factor risk, independently of any risk preferences.

There are three main reasons which necessitate a means for the direct comparison of alternatives as to their risk. First, the understanding of riskiness judgements might help to understand preference. Taking risk and value as primitives, one might explain preference by a risk-value model, i.e. by a function of these two components. Many theories in management and finance rely on such a separate consideration of risk and value. Possibly the best known example is modern portfolio selection theory. Within this context, the decision problem is viewed as choosing among possible risk-return combinations and formulated as either maximizing return for a given level of risk or minimizing risk for a given level of return. With such an approach, obviously, the decision will generally depend on the risk measure used. Second, there is growing empirical evidence that, under conditions of uncertainty, people base their decisions on qualitative aspects of choice alternatives such as risk. Finally, judgements of perceived risk may be required as such, independent of the necessity of choice, e.g., for intervention before the decision stage in a public policy setting.

In this article, the starting point is the assumption that there exists a meaningful risk ordering which can be obtained directly by asking an individual to judge which of any pair of comparable alternatives is riskier. The key concept will therefore be a binary relation ≽, with A ≽ B meaning that an alternative A is at least as risky as another alternative B. Throughout the article, the relation A ≻ B states that


alternative A is riskier than alternative B, while A ∼ B means that A and B are equally risky. The risk ordering need not be related to the individual's preference ordering in any simple way. According to the conception of standard measurement theory, functions R are searched for which numerically represent the relation ≽, i.e. functions R with the property

A ≽ B ⇐⇒ R(A) ≥ R(B) .   (1)

Every such function R will be called a risk measurement function or simply a risk measure.

Despite the importance of risk, there is little consensus on its definition. In empirical studies, typically two dimensions which appear to determine perceived risk have been identified: the amount of potential loss and the probability of occurrence of loss. The risk of an alternative increases if the probability of loss increases or if the amount of potential loss increases. Unfortunately, up to now no agreement has been reached on the relative importance of the uncertainty of outcomes versus their undesirability for determining perceived risk. Furthermore, there is empirical evidence that possible gains reduce the perceived risk of an alternative. But it is by no means clear how and to what extent risk perception depends on potential gains. Thereby, losses and gains are defined with reference to a certain target outcome. This target outcome may be the zero outcome, the status quo, a certain aspiration level, as well as the best result attainable in a certain situation. An outcome is regarded as a loss if and only if it falls below the target outcome; it is regarded as a gain if and only if it lies above it. Other empirical studies have shown that risk is not simply equal to something like negative preference; it is an important concept in its own right. When judging the riskiness of an alternative, people encode and combine probability and outcome information in qualitatively different ways than when judging its attractiveness.

Risk measurement as it is presented in this article seeks to get behind specific contextual referents of risky alternatives to consider characteristics of risk that apply to many different situations. It is the objective of this article to review the more naive standardized risk measures as well as recently developed economic or psychological theories of perceived risk which rely on the axiomatic approach of modern measurement theory.

2. Standardized Risk Measures

In this section, an overview is given of measures of risk which have been advanced to quantify risk in a standardized way that is widely acceptable and independent of individually varying perception. None of the measures reviewed admits subjective transformations of values or probabilities.

Traditionally, the risk of an alternative has primarily been associated with the dispersion of the corresponding random variable of monetary outcomes. Then, it is common to measure the riskiness of an alternative by its variance σ² or its standard deviation σ. If an alternative's future value is characterized by a continuous random variable x with density f = f_x, distribution F = F_x, and expectation

µ := E(x) := ∫_{−∞}^{+∞} x f(x) dx ,   (2)

these risk measures are defined by

σ² := Var(x) := ∫_{−∞}^{+∞} (x − µ)² f(x) dx   (3)


and

σ := [ ∫_{−∞}^{+∞} (x − µ)² f(x) dx ]^{1/2} .   (4)

In the finance context, the standard deviation of continuous growth rates is usually called volatility.
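As a hedged numerical illustration (not part of the original article), the defining integrals (2)-(4) can be evaluated for an assumed normal outcome density; the distribution parameters, the integration range, and the Simpson-rule grid are arbitrary choices:

```python
# Illustrative sketch only: evaluating the classical risk measures (2)-(4)
# numerically for an assumed normal outcome density.
import math

MU_TRUE, SIGMA_TRUE = 100.0, 15.0   # assumed distribution parameters

def density(x):
    z = (x - MU_TRUE) / SIGMA_TRUE
    return math.exp(-0.5 * z * z) / (SIGMA_TRUE * math.sqrt(2.0 * math.pi))

def integrate(g, a, b, n=20000):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += g(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3.0

# +/- 10 standard deviations approximates the range (-inf, +inf).
LO, HI = MU_TRUE - 10 * SIGMA_TRUE, MU_TRUE + 10 * SIGMA_TRUE

mean = integrate(lambda x: x * density(x), LO, HI)                    # eq. (2)
variance = integrate(lambda x: (x - mean) ** 2 * density(x), LO, HI)  # eq. (3)
std_dev = math.sqrt(variance)                                         # eq. (4)
print(mean, variance, std_dev)
```

For the assumed N(100, 15²) density this reproduces mean 100, variance 225, and standard deviation 15, as expected.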

Similar standardized risk measures are the expected absolute deviation around µ,

∫_{−∞}^{+∞} |x − µ| f(x) dx ,   (5)

and the expected absolute deviation around 0,

∫_{−∞}^{+∞} |x| f(x) dx .   (6)

In the context of technological issues, the risk of a project is sometimes simply quantified by the product

x · p(x) ,   (7)

where x are the costs of some "catastrophic" event connected with the project and p(x) its corresponding probability. In fact, this "measure" is a gross simplification of (6).

Besides, it has been conventional wisdom in economics and other fields of research that risk is the chance of something bad happening. In this vein, risk is associated with an outcome that is worse than some specific target outcome and its probability. Among the risk measures tailored to this notion of risk are the lower semivariance

∫_{−∞}^{µ} (x − µ)² f(x) dx ,   (8)

the expected value of loss

∫_{−∞}^{0} x f(x) dx ,   (9)

and the probability of loss or probability of ruin

P_x(x ≤ r) = ∫_{−∞}^{r} f(x) dx .   (10)

Thereby, r is a certain target level; outcomes lower than r are a loss or disastrous to the decision maker.
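The downside measures (8)-(10) can be sketched numerically in the same hedged way; the normal density and the ruin level r = 80 below are purely illustrative assumptions:

```python
# Hedged sketch: the downside measures (8)-(10) for an assumed normal density.
import math

MU, SIGMA, R_TARGET = 100.0, 15.0, 80.0   # all illustrative assumptions

def density(x):
    z = (x - MU) / SIGMA
    return math.exp(-0.5 * z * z) / (SIGMA * math.sqrt(2.0 * math.pi))

def integrate(g, a, b, n=20000):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += g(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3.0

LO = MU - 10 * SIGMA   # effectively -infinity for this density

lower_semivar = integrate(lambda x: (x - MU) ** 2 * density(x), LO, MU)  # (8)
expected_loss = integrate(lambda x: x * density(x), LO, 0.0)             # (9)
prob_ruin = integrate(density, LO, R_TARGET)                             # (10)
print(lower_semivar, prob_ruin)
```

For a symmetric density the lower semivariance is exactly half the variance, which the sketch confirms (112.5 against the variance of 225).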

In the same vein, in 1977 Fishburn proposed the risk measure

R_F(x) = ∫_{−∞}^{t} (t − x)^k f_x(x) dx   (k > 0) .   (11)


Thereby, t is a fixed upper bound, t ≤ E(x). The parameter k of this risk measure may be interpreted as a risk parameter characterizing a kind of risk attitude. Values k > 1 describe a certain risk-sensitive, values k ∈ (0, 1) a certain risk-insensitive behavior.

Fishburn's risk measure can be interpreted as a certain moment of the distribution of x. Like the lower semivariance, it is a 'lower moment' characterizing the part of the distribution below the expectation. It is 'partial' because this part is only partially characterized. Because this characterization is relative to the parameter t, Fishburn's risk measure simply constitutes what in the literature is now called the lower partial moment (relative to t) of the distribution of x.
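A hedged sketch of the lower partial moment (11) shows how the risk parameter k reweights shortfalls below the target; the density and the target t = 90 are invented for illustration:

```python
# Sketch of Fishburn's measure (11): the lower partial moment of order k
# relative to an assumed target t, for an assumed normal density.
import math

MU, SIGMA, T_TARGET = 100.0, 15.0, 90.0   # illustrative assumptions

def density(x):
    z = (x - MU) / SIGMA
    return math.exp(-0.5 * z * z) / (SIGMA * math.sqrt(2.0 * math.pi))

def integrate(g, a, b, n=20000):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += g(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3.0

def fishburn_risk(k):
    # eq. (11): integrate (t - x)^k over outcomes below the target t
    return integrate(lambda x: (T_TARGET - x) ** k * density(x),
                     MU - 10 * SIGMA, T_TARGET)

# k < 1: risk-insensitive weighting of shortfalls; k > 1: risk-sensitive
for k in (0.5, 1.0, 2.0):
    print(k, fishburn_risk(k))
```

With k = 1 the measure reduces to the expected shortfall below t; larger k amplifies the weight of deep shortfalls.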

To measure the market risk of a portfolio of traded assets, banks are more and more employing internal models based on a methodology called Value-at-Risk. This methodology serves for the determination of the capital requirements that banks have to fulfill in order to back their trading activities. For a given time horizon and a confidence level 1 − α, the Value-at-Risk of a portfolio is the loss in market value over the time horizon that is exceeded by the portfolio only with probability α.

Let r be the reference level with which the value of a given portfolio is compared at the end of the time horizon. If x < r, there is a loss at the amount of r − x. The portfolio's loss is thus given by the random variable

l := r − x .   (12)

As reference level, the initial value x₀ as well as the expected value E(x) may reasonably be used. The probability of a loss lower than or equal to l is given by the distribution function

F_l(l) := P(l ≤ l) = ∫_{−∞}^{l} f_l(t) dt .   (13)

Using the loss distribution F_l, for a given time horizon and a given confidence level 1 − α (0 ≤ α ≤ 1, e.g., α = 0.01), the (1 − α) · 100% Value-at-Risk of the portfolio is the loss VaR = VaR_{x;α} implicitly defined by

F_l(VaR) = P(l ≤ VaR) = 1 − α .   (14)

This equation shows that, statistically speaking, the VaR measure of a portfolio is the (1 − α) · 100%-quantile of the portfolio's loss distribution.

Applying the inverse distribution function F_l^{−1} to (14) yields the (1 − α) · 100% Value-at-Risk of the portfolio explicitly through

VaR = VaR(x) := F_l^{−1}(1 − α) .   (15)

Thereby, F_l^{−1}(1 − α) is the value of the inverse distribution function F_l^{−1} at 1 − α.
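In practice the loss distribution is often estimated by simulation, and (15) becomes an empirical quantile. The following hedged sketch uses invented numbers (portfolio size, volatility, sample size) purely for illustration:

```python
# Hedged sketch: the Value-at-Risk (15) as an empirical quantile of a
# simulated loss distribution; all parameters are invented illustrations.
import math
import random

random.seed(0)
ALPHA = 0.01                 # confidence level 1 - alpha = 99%
X0 = 1_000_000.0             # initial portfolio value, used as reference level r

# Simulated end-of-horizon portfolio values (lognormal, 2% volatility).
values = [X0 * math.exp(random.gauss(0.0, 0.02)) for _ in range(100_000)]
losses = sorted(X0 - v for v in values)            # eq. (12): l = r - x

var_99 = losses[int((1.0 - ALPHA) * len(losses))]  # eq. (15), empirical quantile
exceed_prob = sum(l > var_99 for l in losses) / len(losses)  # eq. (14): ~ alpha
print(var_99, exceed_prob)
```

By construction, the simulated losses exceed the computed VaR with a frequency close to α, which is exactly the defining property (14).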

All the risk measures reviewed above are special cases of two related three-parameter families of risk measures. The first three-parameter risk measure is defined as

R_{S1}(x) := ∫_{−∞}^{q(F_x)} |x − p(F_x)|^k dF_x(x)   (k ≥ 0) ,   (16)

where p = p(F_x) denotes a reference value level from which deviations are measured. The positive number k specifies a power to which deviations in value from the reference level are raised, and thus


k is a measure of the relative impact of large and small deviations. The parameter q = q(F_x) is a range parameter that specifies which deviations are to be included in the risk measure. The second three-parameter risk measure is defined to be the kth root of R_{S1}(x), i.e.,

R_{S2}(x) := [ ∫_{−∞}^{q(F_x)} |x − p(F_x)|^k dF_x(x) ]^{1/k}   (k > 0) .   (17)

Through appropriate choices of the parameters p = p(F_x), q = q(F_x), and k, it is easy to see that the above reviewed risk measures are special cases of one of the families (16) and (17). The variance (3) results from equation (16) and the standard deviation (4) from equation (17) by setting p = E(x), k = 2, and q = +∞. The expected absolute deviations around µ and around 0, (5) and (6) respectively, are special cases of (16) obtained by choosing p = µ = E(x) and p = 0, respectively, k = 1, and q = +∞. Equation (16) gives the lower semivariance (8) when k = 2 and p(F_x) = q(F_x) = µ; it gives the expected value of loss (9) when p = q = 0 and k = 1. Family (16) amounts to the probability of loss (10) by setting k = 0 and q = r. Finally, this family yields the lower partial moments (11) by setting p = q = t.
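Two of these parameter choices can be checked numerically; the sketch below is an illustration under an assumed normal density, not part of the article, and reuses r = 80 as a hypothetical loss level:

```python
# Sketch: checking numerically that variance (3) and probability of loss (10)
# drop out of family (16) for suitable (p, q, k); density is an assumption.
import math

MU, SIGMA, R_TARGET = 100.0, 15.0, 80.0

def density(x):
    z = (x - MU) / SIGMA
    return math.exp(-0.5 * z * z) / (SIGMA * math.sqrt(2.0 * math.pi))

def integrate(g, a, b, n=20000):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += g(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3.0

LO, HI = MU - 10 * SIGMA, MU + 10 * SIGMA   # stand-ins for -inf and +inf

def rs1(p, q, k):
    return integrate(lambda x: abs(x - p) ** k * density(x), LO, q)  # eq. (16)

variance = rs1(p=MU, q=HI, k=2)          # p = E(x), k = 2, q = +inf -> eq. (3)
prob_loss = rs1(p=0.0, q=R_TARGET, k=0)  # k = 0, q = r              -> eq. (10)
print(variance, prob_loss)
```

Both values agree with the directly computed variance and probability of ruin from the earlier definitions, as the embedding predicts.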

For any triplet (p, q, k) of parameter values, through both of these families of risk measures the risk of a given alternative is characterized by a nonnegative number R = R(p, q, k). In fact, through both of these families of risk measures, the risk of a given alternative is characterized by a quadruplet (p, q, k, R) where R = R(p, q, k). Essentially, if any three of these four quantities are fixed, the fourth quantity can be used as a (not necessarily nonnegative) real-valued indicator of the risk of a given portfolio. On the basis of that idea, additionally, it can be shown that the Value-at-Risk is also a special case of family (16).

For any of the risk measures reviewed so far, its embedding in one of the families (16) or (17) immediately discloses the features of this measure. It shows, e.g., that the variance indeed takes into account the idea of a target return, namely by choosing p = E(x), but that, by choosing q = +∞, all deviations from that target return, irrespective of being above or below it, are taken into account symmetrically. Outcomes above the target return also increase the risk. This contradicts the empirical notion of risk outlined in the introduction.

This embedding also discloses the major features of the Value-at-Risk measure. It takes into account the idea of a target return by implicitly choosing p = r. Contrary to the variance, and in the sense of the empirical notion of risk, only deviations from the target return downwards are considered. Another advantage of the Value-at-Risk measure is that, by fixing the parameter α, the risk of a portfolio is expressed in terms of value and is, therefore, easy to interpret. But, nevertheless, the Value-at-Risk is a very rudimentary risk measure. Because the parameter k is set to 0, obviously, it contains no information on the loss distribution. The Value-at-Risk user knows that a loss bigger than the Value-at-Risk will only happen with a certain (small) probability. He has no information on, e.g., how large a very big loss can be and how probable it is. Contrary to the empirical notion of risk, the Value-at-Risk measure does not increase if the amount of potential loss increases.

Other naive risk measures scattered in the literature are the Shannon entropy

− ∫_{−∞}^{∞} f(x) ln(f(x)) dx ,   (18)

which is well known from communication theory, the interquartile range F^{−1}(0.75) − F^{−1}(0.25), and the minimum outcome x_min of x. For cases where values x < 0, i.e. losses, are possible, the minimum outcome is usually called the maximum loss.


In the remainder of this article, an overview is given of economic or psychological theories of perceived risk. All the measures reviewed admit subjective transformations of values or probabilities.

3. Luce’s Measures of Risk

Modern approaches to the problem of risk measurement have consistently concentrated on the problem of how certain transformations of choice alternatives affect people's perceptions of their riskiness. These transformations include rescaling, i.e., multiplying all outcomes by a positive constant, as well as translation, i.e., adding a (positive or negative) amount to all outcomes. In 1980, Luce took up this approach by deriving risk measures from functional equations characterizing the effect of rescaling on perceived risk.

Luce suppresses the random variable notation and associates risk with densities. Let f_{αx} denote the density of the transformed random variable αx gained from x through rescaling by a scale factor α > 0, i.e., through multiplying x by a positive real constant α. Luce presumes that the risk R(f_{αx}) of a density f_{αx} is some function of the risk R(f_x) of the density f_x and of α. In a first assumption, Luce assumes that there is a strictly increasing function S with S(1) = 0 such that for all density functions f_x and all real α > 0

R(f_{αx}) = R(f_x) + S(α) .   (19)

In a second assumption concerning the structure of risk, Luce assumes that there is an increasing function S with S(1) = 1 such that for all density functions f_x and all real α > 0

R(f_{αx}) = S(α) R(f_x) .   (20)

A second class of assumptions concerns the nature of the aggregation of a density into a single number characterizing its risk. In a first assumption, Luce assumes that the density undergoes a pointwise transformation and then is integrated. More specifically, he assumes that there is a non-negative function T, with T(0) = 0, such that for all density functions f

R(f) = ∫_{−∞}^{+∞} T(f(x)) dx .   (21)

In a second assumption, Luce supposes that there is some transformation of the random variable itself and R is the expectation of the resulting variable. More specifically, he assumes that there is a function T such that for all densities f

R(f) = ∫_{−∞}^{+∞} T(x) f(x) dx = E(T(x)) .   (22)

Combining each structural assumption with each aggregational assumption, Luce gets four different functional forms of risk measures. Combining the first aggregation rule (21) with the additivity assumption (19) leads to

R_1(f) = −A ∫_{−∞}^{+∞} f(x) log f(x) dx + B ,   (23)

with A > 0 and B ≥ 0. Combining it with the multiplicativity assumption (20) leads to

R_2(f) = A ∫_{−∞}^{+∞} f(x)^{1−θ} dx ,   (24)


with A > 0 and θ > 0. With both measures, R_1 and R_2, the risk of a random variable x is expressed by an integral of a certain non-linear transform of its density. The risk measure R_1, obviously, is an affine transformation of the Shannon entropy (see Section 2). In both measures, no difference is made between potential losses and potential gains of x.

Combining the second aggregation rule (22) with the additivity assumption (19) yields

R_3(f) = B_1 ∫_{0}^{∞} f(x) dx + B_2 ∫_{−∞}^{0} f(x) dx + A E(log |x|) ,   (25)

where B_1, B_2, and A are real numbers, A > 0. With this measure, the risk of a random variable x is quantified by a linear combination of the expectation of the log-transform of x, the probability of positive outcomes, and the probability of negative outcomes.

Combining the second aggregation rule (22) with the multiplicativity assumption (20) yields

R_4(f) = A_1 ∫_{0}^{∞} x^θ f(x) dx + A_2 ∫_{−∞}^{0} |x|^θ f(x) dx ,   (26)

where θ is a real number, θ > 0, A_1 = (θ + 1) ∫_{0}^{1} T(x) dx, and A_2 = (θ + 1) ∫_{−1}^{0} T(x) dx.

With the measure R_4, the risk of a random variable x is represented by a linear combination of the conditional expectation of positive outcomes and the conditional expectation of negative outcomes, where all outcomes are raised to some power θ. An important feature of the risk measures R_3 and R_4 is that gains and losses are treated separately and in a different manner. In the measure R_4, the "chance component" of x, i.e. the possible gains, and its pure "risk component", i.e. the possible losses, combine clear-cut additively.
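The split into a chance and a risk component can be sketched numerically; everything below (density, θ, and the signs of A_1 and A_2, chosen so that gains reduce risk) is an illustrative assumption, not a calibration from the literature:

```python
# Hedged sketch of Luce's R4 (26): gain and loss components evaluated
# separately for an assumed normal density with assumed constants.
import math

MU, SIGMA, THETA = 5.0, 10.0, 1.0
A1, A2 = -0.05, 0.10   # assumed: negative weight on gains, positive on losses

def density(x):
    z = (x - MU) / SIGMA
    return math.exp(-0.5 * z * z) / (SIGMA * math.sqrt(2.0 * math.pi))

def integrate(g, a, b, n=20000):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += g(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3.0

gain_part = integrate(lambda x: x ** THETA * density(x), 0.0, MU + 10 * SIGMA)
loss_part = integrate(lambda x: abs(x) ** THETA * density(x),
                      MU - 10 * SIGMA, 0.0)
r4 = A1 * gain_part + A2 * loss_part   # eq. (26)
print(gain_part, loss_part, r4)
```

For θ = 1 the two integrals are the positive-part and negative-part expectations, so their difference must equal E(x), which serves as a sanity check on the computation.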

Luce leaves the question of the reasonableness of these forms to empirical investigation. Nevertheless, he remarks that many psychologists believe that the risk of a "gamble" that is repeated n times is less than n times the risk of the "gamble" played once. This is known to hold only for the risk measure R_3 and fails for the others. However, the risk measure R_3 suffers, as well as the risk measure R_4, from another drawback. It increases with a for positive uniformly distributed random variables with constant range b − a. For some people, this property is highly counter-intuitive. In fact, there is empirical evidence that risk decreases if a positive constant is added to all potential outcomes of an alternative.

There are two fundamental problems with Luce's structural and aggregational assumptions. A first problem concerns the additivity assumption (19). This assumption obviously implies that a random variable having zero risk is transformed by any change of scale into one with non-zero risk. Rescaling by any positive factor α < 1 leads to negative risk, whereas risk is increased by rescaling with any α > 1. This argument favors the multiplicativity assumption (20) and thus the risk measures R_2 and R_4.

A second fundamental problem concerns the first aggregation rule (21). This aggregation rule leads to risk measures which are translation invariant or location free. Thereby, a risk measure R is called translation invariant or location free if and only if

R(f_{x+β}) = R(f_x) ,   (27)


where f_{x+β} denotes the density of the transformed random variable x + β gained from x through translation by β, i.e., by adding a real constant β. Translation invariance of risk measures of type (21) is immediately shown by

R(f_{x+β}) = ∫ T(f_{x+β}(x)) dx = ∫ T(f_x(x − β)) dx = ∫ T(f_x(x)) dx = R(f_x) .   (28)

Translation invariance of risk measures of type (21) implies that the risk measures R_1 and R_2, for any shift family of distributions, depend only on the range and are independent of the location of a random variable. It follows, e.g., that random variables with a uniform distribution of the same range b − a are, in the sense of R_1 and R_2, equally risky irrespective of their location. Again, this property can be regarded as highly counter-intuitive. In fact, it can be deduced from the empirical evidence mentioned above that risk depends on the location of a random variable.
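This location independence is easy to exhibit numerically for the entropy-based measure R_1 (23); the constants A = 1, B = 0 and the two uniform densities below are arbitrary illustration choices:

```python
# Hedged sketch: R1 (23) assigns the same risk to two uniform densities with
# identical range but very different locations -- translation invariance.
import math

def integrate(g, a, b, n=1000):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += g(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3.0

def r1_uniform(a, b, A=1.0, B=0.0):
    f = 1.0 / (b - a)                  # constant density of Uniform(a, b)
    total = integrate(lambda x: f * math.log(f), a, b)
    return -A * total + B              # eq. (23)

risk_low = r1_uniform(0.0, 10.0)          # located at zero
risk_high = r1_uniform(1000.0, 1010.0)    # same range, shifted far away
print(risk_low, risk_high)                # identical despite the shift
```

For a uniform density the integral collapses to A·log(b − a) + B, so only the range b − a matters, exactly the counter-intuitive property criticized in the text.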

It should be noted that most of the standardized risk measures reviewed in Section 2 are also translation invariant. This holds in particular for the most important risk measures used in finance, namely the variance (3), the standard deviation (4), and the lower semivariance (8). The risk of a random variable is measured independently of its location.

There is empirical evidence against translation invariance of perceived risk. This implies that assumption (21) and therefore the risk measures (23) and (24) should be rejected. The risk measure (25) can be ruled out because of its unreasonable behavior in the neighborhood of zero. Obviously, this measure approaches negative infinity if, e.g., any positive random variable with uniform distribution is rescaled by a factor α > 0 converging to zero.

Revising and extending Luce's model (26), in 1986 Luce and E. U. Weber presented a new axiomatically based risk model, called conjoint expected risk (CER). They start from an arbitrary set G of real-valued random variables, interpreted as "gambles" with (arbitrary numbers of) monetary outcomes, and assume that the decision maker involved has a binary risk ordering ≽ on G. Then the CER model is derived from a certain system of axioms or assumptions on ≽.

The first four axioms of this system are purely technical and do not offer any particular insight into what is special about the CER measure. It is the final fifth axiom which shapes the risk ordering.

This axiom consists of four requirements on ≽. Let x and y be two random variables in G which both can take on only positive or only negative outcomes, and let a, b, b′, and b′′ be positive real numbers. Then, the first requirement of this axiom is the independence condition

x ≽ y ⇐⇒ ax ≽ ay ,   (29)

and

ax ≽ bx ⇐⇒ ay ≽ by .   (30)

Condition (29) states that a change in scale does not change the risk ordering between random variables which both can take on only positive or only negative values. The second part (30) of the independence condition says that if, for any random variable which can take on only positive or only negative values, one scale is perceived as at least as risky as another scale, then the same ordering holds for any other random variable which can take on only positive or only negative values, respectively.

The second condition states that the ordering induced by independence on the positive reals R⁺ is the ordinary ordering ≥, i.e.

ax ≽ bx ⇐⇒ a ≥ b .   (31)


Assuming independence, this condition says that the riskiness of random variables which can take on only positive or only negative outcomes is an increasing function of the scale value. Thus, e.g., the random variable x = ($15, .5; $0) is perceived to be less risky than the random variable 4x = ($60, .5; $0).

As a third requirement, a condition of restricted solvability is introduced which states that for any two random variables x and y which both can take on only positive or only negative values there exists a positive real number b such that

b′y ≽ ax ≽ b′′y =⇒ by ∼ ax .   (32)

This solvability condition says that perceived risk is a continuous function of scale changes. Note that this solvability condition is different from, but related to, the standard continuity assumption which is part of the axioms which imply the expectation principle.

As the fourth requirement of this axiom, an Archimedean condition is introduced which states that for any two random variables x and y which both can take on only positive or only negative values there exists a positive real number a such that

x ≻ y =⇒ ay ≽ x .   (33)

This condition says that, given two random variables x and y where x is riskier than y, then, by means of a sufficiently large scale transformation, y can be transformed into a random variable that is riskier than x.

Based on these axioms on the risk ordering ≽, Luce and E. U. Weber prove that ≽ can be numerically represented through the CER model R_CER(x) which, for any (discrete or continuous) random variable x ∈ G, is given by

R_CER(x) = B_1 ∫_{−∞}^{0} dF_x(x) + B_2 ∫_{0}^{∞} dF_x(x) + B_3 ∫_{0}^{0} dF_x(x)
         + A_1 ∫_{−∞}^{0} |x|^{θ_1} dF_x(x) + A_2 ∫_{0}^{∞} x^{θ_2} dF_x(x) ,   (34)

where B_i, A_i, and θ_i are scaling constants, θ_i > 0, and F_x denotes the distribution function corresponding to x.

As a result, according to the CER model, the perceived risk of a random variable x can be quantified by a linear combination of the probability of negative outcomes, the probability of positive outcomes, and the probability of the zero outcome, as well as the conditional expectation of negative outcomes raised to some power θ_1 and the conditional expectation of positive outcomes raised to some power θ_2. As Luce's measure R_4, the measure R_CER evaluates gains differently from losses, and the "chance component" of a random variable and its pure "risk component" combine additively. But, contrary to R_4, the probabilities to win, to lose, and to break even are additionally part of this risk measure. For random variables with only positive or only negative outcomes, the CER model is equivalent to R_4. Therefore, it suffers from the same behavioral problems as pointed out above for R_4. The high number of scaling constants poses an additional challenge for a reliable assessment of the risk measure R_CER.
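For a discrete gamble the integrals in (34) reduce to sums over the outcome probabilities. The following hedged sketch makes this concrete; the gamble and all scaling constants are invented for illustration (with signs following the empirical pattern of positive loss constants and negative gain constants):

```python
# Hedged sketch: the CER measure (34) for a discrete gamble, with invented
# scaling constants; the integrals collapse to probability-weighted sums.
def cer_risk(outcomes, probs, B1, B2, B3, A1, A2, theta1, theta2):
    risk = 0.0
    for x, p in zip(outcomes, probs):
        if x < 0:
            risk += B1 * p + A1 * p * abs(x) ** theta1  # loss terms of (34)
        elif x > 0:
            risk += B2 * p + A2 * p * x ** theta2       # gain terms of (34)
        else:
            risk += B3 * p                              # break-even term
    return risk

# Gamble: lose $10 w.p. 0.3, break even w.p. 0.2, win $20 w.p. 0.5.
risk = cer_risk([-10.0, 0.0, 20.0], [0.3, 0.2, 0.5],
                B1=1.0, B2=-0.5, B3=0.0, A1=0.1, A2=-0.05,
                theta1=1.0, theta2=1.0)
print(risk)   # negative: here the gain contributions outweigh the loss terms
```

With these assumed constants the gain terms dominate and the measure turns negative, illustrating how gains can reduce, and even outweigh, perceived risk in the CER model.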

It should be noted that, in general, the scaling constants Ai and Bi of the CER-model (34) can take on negative or positive values, depending on the decision maker’s risk ordering. Empirical estimations of the parameters of the CER-model found A1 and B1 to be positive and A2 and B2 to be negative. In such cases, the probability of positive outcomes of a random variable as well as their conditional expectation reduce its perceived risk, and the risk measure (34) can take on negative values when the positive outcome contributions outweigh the negative outcome contributions.

It is interesting to consider the system of axioms on which the CER-model is founded. All axioms make intuitive sense and do not appear to be particularly strong. Nevertheless, these axioms imply a rather restrictive family of risk measures.

4. Sarin’s Measures of Risk

In 1987, Sarin extended Luce’s risk measures to obtain risk measures that are empirically more reasonable. To this end, Sarin starts from the overwhelming empirical observation that the risk of a random variable appears to decrease when all possible outcomes are improved by a constant, i. e., when a positive constant is added to all outcomes of a random variable.

Sarin’s first assumption concerns the risk of the density fx+β belonging to the transformed random variable x + β. He assumes that R(fx+β) is a multiplicative function of R(fx) and β. More specifically, it is assumed that there is a strictly monotonic function S with S(0) = 1 such that for all density functions fx and all real β > 0

R(fx+β) = S(β)R(fx) . (35)

Thereby, it is assumed that R(fx+β) decreases as β increases. For non-negative risk measures this implies that S(·) is strictly decreasing.

As indicated in the last Section, Luce’s first aggregational assumption (21) implies risk measures which are translation invariant. Considering such risk measures empirically unreasonable, Sarin, in his second assumption, requires that the expectation principle (22) be used to aggregate densities into single numbers.

From these two assumptions Sarin derives the risk measure

R(f) = ∫_{−∞}^{+∞} K e^{cx} f(x) dx = K E(e^{cx}) ,   (36)

with real constants K > 0, c < 0, or K < 0, c > 0. Through this measure, the risk of a random variable x is essentially represented by the expectation of its exponential transform.

As, for example, implicitly stated in Luce’s assumption (21), it seems sensible to assume risk measures to be non-negative. For Sarin’s risk measure this implies c < 0 and K > 0. Evidently, this risk measure gives higher weight to a random variable’s potential losses than to its potential gains. Because of assumption (35), Sarin’s risk measure does not suffer from the last criticism of Luce’s risk measures: it is not location free and, in particular, decreases under translations with increasing β. Furthermore, it can easily be shown that it increases with the scale factor α under rescaling.
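Sarin’s measure (36) can be sketched for discrete lotteries; the constants K = 1 and c = −0.1 below are arbitrary illustrative choices. The example also shows the translation property (35) at work: adding β to all outcomes multiplies the risk by S(β) = e^{cβ} < 1.

```python
import math

def sarin_risk(outcomes, probs, K=1.0, c=-0.1):
    """Sarin's risk measure (36): K * E(exp(c*x)), with K > 0 and c < 0 so
    that the measure is non-negative and losses weigh more than gains."""
    assert K > 0 and c < 0
    return K * sum(p * math.exp(c * x) for x, p in zip(outcomes, probs))

base = sarin_risk([-10, 10], [0.5, 0.5])
shifted = sarin_risk([-5, 15], [0.5, 0.5])  # same lottery improved by beta = 5
# shifted == base * exp(c * 5): translation scales risk by S(beta) = e^{c*beta}
```

The multiplicative factor e^{cβ} is below one for c < 0, so improving all outcomes by a constant strictly reduces perceived risk, exactly the behavior Sarin required.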

In 1990, M. Weber, who, contrary to Sarin, advocates translation invariance of risk judgements, presented an extension of Sarin’s risk measure (36). To make this measure location free he suggests, first, “normalizing” random variables by subtracting their respective expected values, i. e., transforming all random variables x into the “normalized” variables

x′ = x− E(x) . (37)

The “normalized” random variables all have zero mean and reflect the risk of the original distributions with reference to their expectation. Thereby, the expectation serves as a target outcome such that every outcome whose value is smaller than the expectation is viewed as undesirable or risky, while outcomes with values as large as the expectation are desirable or nonrisky.

Weber’s location free variant of Sarin’s risk measure (36) is then given by

R′(f) = ∫_{−∞}^{+∞} K e^{c(x−E(x))} f(x) dx = K E(e^{cx′}) ,   (38)

with real constants K > 0, c < 0. Through this measure, the risk of a random variable x is essentially represented by the expectation of the exponential transform of its normalized version x′. Weber points out that for normally distributed random variables the measure (38) and the variance (3) yield the same risk ordering of lotteries.
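Weber’s observation about normal distributions can be checked in closed form: for x ∼ N(µ, σ²) the normalized variable x′ is N(0, σ²) and E(e^{cx′}) = e^{c²σ²/2}, which is increasing in σ². A small sketch, with K and c as illustrative constants:

```python
import math

def weber_risk_normal(sigma, K=1.0, c=-1.0):
    """Closed form of Weber's measure (38) for a normal random variable:
    K * E(exp(c * (x - E(x)))) = K * exp(c**2 * sigma**2 / 2)."""
    return K * math.exp(c ** 2 * sigma ** 2 / 2)

# two normal lotteries with equal means: the larger variance is riskier,
# reproducing the ordering induced by the variance
low = weber_risk_normal(1.0)
high = weber_risk_normal(2.0)
```

Since the closed form depends on σ alone, any two normal lotteries with equal variance receive equal risk under (38), regardless of their means, which is precisely the location freeness Weber was after.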

Sarin’s risk measure (36) as well as Luce’s risk measures R3 and R4 use the expectation principle for aggregating densities into single numbers. This principle implies that the resulting risk measures are linear in probabilities. It is well known that linearity in probabilities is a rather strong requirement when preferences are modelled. Sarin therefore generalized Luce’s expectation assumption (22) and, combining it with different structural assumptions, derived alternative measures of risk. Combining the generalized expectation assumption with Luce’s structural assumptions, Sarin obtains two new families of risk measures. Combining it with the additivity assumption (19) he obtains the risk measure

R(f) = B1 ∫_{0}^{∞} f(x) dx + B2 ∫_{−∞}^{0} f(x) dx + A E(log|x|) − (1/2) A² Var(log|x|)   (39)

with some constants B1, B2, and A > 0. Combining it with the multiplicativity assumption (20) he obtains the risk measure

R(f) = B1 ∫_{0}^{∞} x^{θ} f(x) dx + B2 ∫_{−∞}^{0} |x|^{θ} f(x) dx + (1/2)(θ/(θ−1)) E(|x|^{2θ}) − (1/2) Var(|x|^{θ}) .   (40)

It should be noted that in the generalized risk measures (39) and (40) of Sarin, as already in Luce’s risk measures (25) and (26), potential gains and potential losses are treated separately. This seems to be in accordance with observed differential preference attitudes towards gains and losses. Sarin stresses the fact that empirical data will be needed to test the validity of the generalized expectation principle and the corresponding generalized risk measures.

5. Fishburn’s Measures of Pure Risk

Various empirical studies have demonstrated that preference attitudes exhibit striking differences in the loss and the gain regions. Although the association of preferences between random variables with the variables’ risks is far from clear, it seems reasonable to assume that these differences also play an important role in risk assessments. Furthermore, experiments with monetary random variables have convincingly shown the salience of loss probability and loss amount for perceived risk. Based on these observations, in 1982 and 1984 Fishburn developed theories of risk in which gains and losses are treated separately. In the first part of his study, Fishburn focusses on risk as probable loss, i. e. on what is usually called pure risk. In the second part, measures of risk are proposed that include effects of gains on perceived risk.

In the first part of his study, Fishburn is guided by the conventional wisdom that risk is a chance of something bad happening, that risk arises from the possibility of undesired outcomes. Fishburn’s theory of risk is based on a binary risk relation %, “is at least as risky as”, defined on a set of probability distributions over outcome values. From the standpoint of measurement theory, this relation is to be represented by a numerical risk measure.

In the spirit of most of the standardized risk measures used in finance and of Luce’s assumption (21), Fishburn implicitly adopts the position that risk judgements are location free. Therefore he presumes that some target outcome can be identified so that every outcome whose value is smaller than the value of the target is viewed as undesirable or risky, while outcomes with values as large as the target’s are desirable or nonrisky. Without loss of generality, for convenience, this target outcome is set at zero. Fishburn regards a distribution as having zero risk if and only if it has no chance of delivering an outcome below zero.

More specifically, Fishburn starts from a (non-empty) set X of possible outcomes and separates all non-zero outcomes in X into undesirable and desirable subsets X− := {x ∈ X | x < 0} and X+ := {x ∈ X | x > 0}. The outcomes in X− (X− ≠ ∅ by assumption) are referred to as losses, and the outcomes in X+ as gains. The binary risk relation % is applied to the set

A := [0, 1] × P− := {(α, p) | 0 ≤ α ≤ 1, p ∈ P−} ,   (41)

where P− is the set of all probability measures p with p(X−) = 1. Each distribution p ∈ P− is certain to result in a loss. A pair (α, p) is to be interpreted as a two-dimensional measure that yields the probability for a subset Y of X− as αp(Y). The parameter α gives the probability of a loss and, given a loss, p(Y) is the probability that the loss will be in Y.

The risk relation % defined on A is assumed to be a weak order, i. e. strictly complete and transitive. Then, for a first basic representation theorem, Fishburn uses four axioms. Among these are a continuity axiom and an axiom that states that for every measure in P−, some worst outcome is at least as risky as the measure. Another axiom states that risk increases as the loss probability increases for fixed p; another that worse outcomes entail greater risks. The latter axioms seem reasonable in view of the common perception that risk increases as bad outcomes become more probable and as probable bad outcomes get worse. In his first basic representation theorem, Fishburn shows that there is a non-negative real-valued function ρ on A with

(α, p) % (α′, p′) ⇐⇒ ρ(α, p) ≥ ρ(α′, p′) (42)

which has ρ(α, p) = 0 if and only if α = 0, and is continuous and increasing in α.

Three additional axioms yield what Fishburn calls α-p-separability, namely the existence of real-valued functions ρ1 on [0, 1] and ρ2 on P− such that

ρ(α, p) = ρ1(α) · ρ2(p) (43)

Thereby, the function ρ1 is continuous and increasing in α with ρ1(0) = 0, and the functional ρ2 is, restricted to one-point distributions px over X−, increasing as x decreases.

Certain combinations of additional axioms then yield special forms of risk measures of the multiplicatively separable type (43). Fishburn axiomatizes risk measures of the types

ρ(α, p) = ρ1(α) ∫_{X−} ρ2(x) dp(x) ,   (44)

ρ(α, p) = α ∫_{X−} ρ2(x) dp(x) ,   (45)

and

ρ(α, p) = ρ1(α) ∫_{X−} |x|^{θ} dp(x) ,   (46)

where θ is a real parameter, θ > 0. With all of these types of risk measures, each outcome x is identified with the one-point measure px, and integration is Lebesgue-Stieltjes integration.

In a further theorem, Fishburn gives conditions for the representation of % by a risk measure of the expectation type

ρ(α, p) = ∫_{X−} ρ(α, x) dp(x) .   (47)

This corresponds to the expectation principle (22) used by Luce and Sarin as an aggregational assumption.

These families of risk measures contain some of the standardized risk measures listed in Section 2 as special cases. In the family (46) of risk measures, the choice ρ1(α) ≡ α together with θ = 1 yields the expected value of loss (9) of a random variable; the choice ρ1(α) ≡ α together with θ = 2 leads to the lower semivariance (8) of a mean-centered random variable. Furthermore, the family (46) of risk measures with ρ1(α) ≡ α is contained in family (16) if one chooses there p(F) = q(F) = 0. The risk measures (45) and (46) obviously are special cases of the more general family (44): with the choice ρ1(α) ≡ α, the family (44) yields (45); with the choice ρ2(x) = |x|^θ it yields (46). With the choice ρ1(α) ≡ α, the family (46) leads to the risk measures (11) which Fishburn considered in his earlier paper published in 1977.
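For a discrete distribution, these special cases of family (46) with ρ1(α) ≡ α reduce to simple partial expectations over the loss outcomes. A sketch; the example lottery is invented for illustration:

```python
def fishburn_risk(outcomes, probs, theta):
    """Family (46) with rho1(alpha) = alpha for a discrete distribution:
    alpha * integral of |x|^theta dp collapses to sum_{x<0} |x|^theta * P(x)."""
    return sum(abs(x) ** theta * p for x, p in zip(outcomes, probs) if x < 0)

outcomes = [-20, -10, 0, 30]
probs = [0.1, 0.2, 0.3, 0.4]
expected_loss = fishburn_risk(outcomes, probs, theta=1)  # expected value of loss
semivar_like = fishburn_risk(outcomes, probs, theta=2)   # lower-semivariance form
```

With theta=1 the measure weights each loss by its probability (here 0.1·20 + 0.2·10 = 4.0); with theta=2 losses enter quadratically, so large losses dominate.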

6. Fishburn’s Measures of Speculative Risk

In the second part of his Foundations of Risk Measurement, Fishburn considers measures of risk that explicitly include effects of gains on perceived risk. He adopts the position that increased gains reduce the risk of fixed probable losses without completely negating this risk. Thus he focusses on what is usually called speculative risk. Speculative risk measures, generally, are to incorporate the consensus that risk increases as loss probability or amount increases, and that greater gains as well as greater gain probabilities reduce perceived risk.

As in his first study, Fishburn starts from a binary risk relation %, defined on a set of probability distributions over a set of outcome values, which is to be represented by a numerical risk measure. Again, a non-empty set X of numerical outcomes is considered, containing a non-risky target outcome whose value is, without loss of generality, set at zero. As above, the set of all nonzero outcomes is partitioned into an undesirable subset X− of losses and a desirable subset X+ of gains. The risk relation is applied to the set

B = {(α, p; β, q) | α, β ≥ 0, α + β ≤ 1, p ∈ P−, q ∈ P+} , (48)

where P− is defined as above, and P+ is the set of all probability measures q with q(X+) = 1. Each distribution q ∈ P+ is certain to result in a gain. A quadruple (α, p; β, q) is to be interpreted as a four-dimensional measure that yields the probability for a subset Y of X− as αp(Y), and the probability for a subset Z of X+ as βq(Z). As above, the parameter α gives the probability of a loss and, given a loss, p(Y) is the probability that the loss will be in Y. The parameter β gives the probability of a gain and, given a gain, q(Z) is the probability that the gain will be in Z. Hence, the probability of the target outcome {0} equals 1 − α − β.

Also in this generalized case, the risk relation % defined on B is assumed to be a weak order. Then, Fishburn presents sufficient conditions for a basic representation of % by a non-negative real-valued function R on B that has

R(α, p; β, q) = 0 ⇐⇒ α = 0 (49)

and satisfies, for all (α, p; β, q), (γ, r; δ, s) ∈ B

(α, p; β, q) % (γ, r; δ, s) ⇐⇒ R(α, p; β, q) ≥ R(γ, r; δ, s) . (50)

It should be noted that, according to condition (49), there is no risk if and only if there is no chance of getting an undesirable outcome. Therefore, this condition rules out measures of risk that are additively separable in gains and losses in the sense of

R(α, p; β, q) = R1(α, p) + R2(β, q) , (51)

with a loss part R1 and a gain part R2.

Condition (49) has two interesting empirical implications. According to that condition, every sure loss (α = 1) has non-zero risk and is therefore regarded as risky. In addition, every 50–50 random variable of receiving the target outcome or some amount above the target outcome (α = 0) has zero risk and is therefore regarded as riskless. Both properties are highly controversial, as can be verified in any classroom experiment.

For a first representation theorem, Fishburn uses five axioms. Among these are a continuity condition for loss and gain probabilities and an axiom which asserts that some suitably bad outcome is at least as risky as a given p ∈ P−. The third axiom states the commitment to no risk when there is no chance of loss. The first two axioms assert monotonicity for gain and loss probabilities and for gains and losses, respectively. These axioms are in accordance with the conventional notion of speculative risk. In his representation theorem, Fishburn shows that there is a non-negative real-valued function R on B satisfying conditions (49) and (50) that is continuous and increasing in loss probability as well as continuous and decreasing in gain probability when the loss probability is positive.

In Fishburn’s study a series of further axioms is formulated. Certain groups of axioms assure special types of risk measures. In accordance with commitment (49), only such representations are considered which are multiplicatively separable into a risk part quantifying an option’s pure risk and a chance part expressing an option’s potential gain. In his approach, Fishburn essentially follows conventional conjoint measurement. The axiomatized types of risk measures include

R(α, p; β, q) = ρ(α, p)τ(β, q) , (52)

R(α, p; β, q) = [∫_{X−} ρ(α, x) dp(x)] · [∫_{X+} τ(β, y) dq(y)] ,   (53)

R(α, p; β, q) = [ρ1(α) ∫_{X−} ρ2(x) dp(x)] × [1 − τ1(β) ∫_{X+} τ2(y) dq(y)] .   (54)

These types of risk measures get more and more specialized. The family (52) has the basic multiplicatively separable form. The second is a specialization using the expectation principle for the risk part as well as for the chance part. The third family (54) goes a step further by separating out the effects of loss and gain probabilities. Assuming that the indifference relation ∼ implied by the risk relation % is preserved under uniform rescaling of outcomes in his pure risk setting, Fishburn finally arrives at the risk measure

R(α, p; β, q) = [ρ1(α) ∫_{X−} |x|^{θ} dp(x)] × [1 − τ1(β) ∫_{X+} τ2(y) dq(y)] .   (55)

The central idea of all these families of speculative risk measures is that gains reduce risk in a proportional way that is independent of the particular (α, p) involved, unless α = 0, in which case there is no risk to be reduced.
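This proportional risk reduction can be sketched for the measure (55) with the illustrative choices ρ1(α) = α, θ = 1, and τ1(β)·τ2(y) = β·k·y for a small constant k; these functional choices are assumptions made for the example, not Fishburn's axiomatized forms.

```python
def speculative_risk(losses, loss_probs, gains, gain_probs, theta=1.0, k=0.001):
    """Sketch of measure (55): a pure-risk part times a chance factor.
    losses/gains carry their joint probabilities alpha*p(x) and beta*q(y);
    k is chosen small enough that the chance factor stays positive here."""
    # risk part: alpha * integral |x|^theta dp = sum of |x|^theta * joint prob
    pure_risk = sum(abs(x) ** theta * p for x, p in zip(losses, loss_probs))
    # chance part: tau1(beta) * integral tau2(y) dq = k * sum of y * joint prob
    chance = k * sum(y * q for y, q in zip(gains, gain_probs))
    return pure_risk * (1 - chance)

# the same probable loss is perceived as less risky once a gain is possible
no_gain = speculative_risk([-100], [0.5], [], [])
with_gain = speculative_risk([-100], [0.5], [200], [0.5])
```

Because the chance factor multiplies the whole risk part, the gain shaves off the same proportion of risk whatever the loss side looks like; and when there are no losses the product is zero, so there is nothing to reduce, matching condition (49).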

7. Risk Measurement under Partial Probability Information

All of the risk measures discussed so far refer to the riskiness of alternatives that can be described by random variables, i. e., by precise probability distributions over one-dimensional outcomes. In practice, however, situations of partial probability information prevail. Such situations of “partial ignorance” or “ambiguity” are characterized by uncertainty about the probabilities of outcomes.

A well-known example of a decision situation under partial probability information is the so-called three-color problem introduced by Ellsberg in 1961. In this problem, an urn containing 90 balls is presented. 30 of these balls are known to be red. The remaining ones are known to be black or yellow, but in unknown proportion. From this urn, exactly one ball is drawn at random. The alternatives are different bets on colors or pairs of colors, respectively. In this situation, obviously, the probability of red is 1/3, and the probabilities of black and yellow are known to be between 0 and 2/3 but uncertain.
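The set of distributions compatible with this information can be written down directly; the sampling of the free parameter λ on a finite grid is an assumption made here purely for illustration.

```python
def ellsberg_set(steps=7):
    """Distributions p = (p_red, p_black, p_yellow) = (1/3, lam, 2/3 - lam)
    compatible with the three-color urn, for lam on a grid over [0, 2/3]."""
    return [(1 / 3, lam, 2 / 3 - lam)
            for lam in (i * (2 / 3) / (steps - 1) for i in range(steps))]

P = ellsberg_set()
# the probability of red is pinned down; black is only bounded by [0, 2/3]
p_black_lo = min(p[1] for p in P)
p_black_hi = max(p[1] for p in P)
```

Every element of the set is a proper probability distribution, yet the probability of drawing black ranges over the whole interval [0, 2/3], which is exactly the ambiguity a single distribution cannot express.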

There is some empirical evidence that in practical economic situations under partial probability information, e. g. project evaluation, decision makers use some sort of generalized mean-risk decision rule. Modelling that kind of decision behavior presupposes a theory of risk under conditions of partial probability information. In 1989, Brachinger developed such a theory, starting from a set S of mutually exclusive states of nature, |S| = m ≥ 2, and a set A of alternatives u : S −→ R1. Ambiguity is covered by a (non-empty) subset P, with |P| > 1, of the set PS of all probability measures p defined on a given σ-algebra on S. Thereby, P is to be interpreted as the set of all probability distributions compatible with the information available. In Ellsberg’s three-color problem, e. g., the partial probability information is covered by

P = {p | p1 = 1/3 ∧ p2 = λ ∧ p3 = 2/3 − λ ; λ ∈ [0, 2/3]} .   (56)

The most important practical cases of partial probability information are those where the decision maker is able to (not necessarily completely) rank the (finite set of) states of nature or where he disposes of interval-valued probabilities. It can be shown that in all of these cases the probability information P is linear in the sense that it allows a description

P = {p ∈ PS |Bp ≥ b} , (57)

where B is a (k × m)-dimensional matrix and b is a k-dimensional vector.

In his theory of risk under partial probability information, Brachinger presumes that u is a cardinal utility function with target utility u∗. Every utility value smaller than u∗ is viewed as risky and will be called a loss. Every utility value as large as u∗ is viewed as nonrisky and will be called a gain. Without loss of generality the target utility is set to zero, u∗ = 0.

Every utility value u smaller than u∗ = 0 is a potential loss whose amount is given by −u. Therefore, the total amount of potential losses of an alternative is given by the function which takes on the value −u if u is negative and equals 0 otherwise. Formally, this function can be described by the loss function u+ : S −→ R0+ defined by

u+(s) := χ+(−u(s)) · (−u(s)) ≥ 0 ,   (58)

where χ+ is the characteristic function of the positive reals. The degree of uncertainty over the potential losses of an alternative is covered by the partial probability information P. That is, the objects of the theory of risk under partial probability information are all possible pairs (u, P). Each pair (u, P) is called a risk vector.

Brachinger presumes that a risk measure should be a nonnegative, real-valued mapping

R : A × ℘(PS) −→ R0+   (59)

being equal to zero if and only if a risk vector’s loss component is identically zero, i. e.

R(u,P) = 0 ⇐⇒ u+ ≡ 0 . (60)

Furthermore, it is assumed that the risk of an alternative depends only on possible losses, i. e.

R(u,P) = R(u+,P) , (61)

and that “simple” risks χA, A ⊂ S, are measured through

R(χA, P) := sup_{p∈P} p(A) =: µ(A) .   (62)

It can be shown that the set function µ defined on the given σ-algebra on S is an upper envelope of P, and is monotone and subadditive, i. e. it holds that

µ(A ∪B) ≤ µ(A) + µ(B) (63)

for all A and B in the σ-algebra on S.

The first main axiom then is that the “compounded” risk of A ∪ B should not be higher than the combined “modular” risks of A and B, i. e. it is assumed that R is submodular,

R(χA∪B, P) ≤ R(χA, P) + R(χB, P) − R(χA∩B, P) .   (64)

The second main axiom is that the measure R(u, P) of a general risk vector (u, P) should be some “loss expectation”, where “expectation” should be taken with reference to µ, i. e. it is assumed that

R(u, P) = ∫ u+ dµ ,   (65)

where integration is in the sense of the Choquet integral.
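On a finite state space the Choquet integral in (65) reduces to a weighted sum over the level sets of u+. A sketch; the capacity µ is taken here as the upper envelope of the Ellsberg set (56), and the grid over λ is an illustrative discretization.

```python
def choquet_integral(values, capacity):
    """Choquet integral of a non-negative function on a finite state space:
    sum over increasing levels t of (t - previous t) * capacity({u+ >= t})."""
    total, prev = 0.0, 0.0
    for t in sorted(set(values.values())):
        if t <= 0.0:
            continue
        upper = frozenset(s for s, v in values.items() if v >= t)
        total += (t - prev) * capacity(upper)
        prev = t
    return total

def mu(A):
    """Upper envelope of the Ellsberg set (56): mu(A) = sup over p of p(A)."""
    best = 0.0
    for i in range(101):
        lam = i / 100 * (2 / 3)
        p = {"r": 1 / 3, "b": lam, "y": 2 / 3 - lam}
        best = max(best, sum(p[s] for s in A))
    return best

# risk of losing 1 util exactly when black is drawn: mu({'b'}) = 2/3
risk = choquet_integral({"r": 0.0, "b": 1.0, "y": 0.0}, mu)
```

Since the capacity, not a single probability measure, weights the level sets, the integral automatically charges each possible loss level with its worst-case probability.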

Finally, it is assumed that the risk R(αu, P) is some function of the risk R(u, P) and α. More specifically, corresponding to equation (20), it is required that, for all risk vectors (u+, P) and all α > 0, there is a real function S with S(1) = 1 such that

R(αu,P) = S(α) R(u+,P) . (66)

Thereby, S should be increasing and continuous.

From these assumptions Brachinger derives the risk measure

R(u, P) = sup_{p∈P} ∫ u+ dp ,   (67)

which reduces to

R(u, P) = sup_{p∈P} Σ_{j=1}^{m} u+_j p(u+_j)   (68)

in the finite case.
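In the finite case, (68) is simply the largest expected loss over all compatible distributions. A sketch using an Ellsberg-style probability set; the particular bet and the λ-grid are invented for illustration.

```python
def brachinger_risk(u, P):
    """Risk measure (68): sup over p in P of the expected loss sum u+_j * p_j."""
    u_plus = [max(-x, 0.0) for x in u]  # loss function (58)
    return max(sum(l * p for l, p in zip(u_plus, ps)) for ps in P)

# states (red, black, yellow); a bet gaining 1 util on red, losing 2 on black
P = [(1 / 3, i / 10 * (2 / 3), (1 - i / 10) * (2 / 3)) for i in range(11)]
risk = brachinger_risk([1.0, -2.0, 0.0], P)  # worst case puts 2/3 on black
```

Because the probability set here is linear in the sense of (57), the supremum is attained at a boundary point (λ = 2/3), which is why, in practice, such problems can be solved by linear programming rather than enumeration.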

In the second part of his study, in analogy to Fishburn’s work, Brachinger considers the measurement of speculative risk, where risk-reducing effects of gains on perceived risk are explicitly taken into account. But, based on empirical results and contrary to Fishburn’s multiplicativity assumption (49), he starts from the idea that the positive and negative components of an alternative, i. e. possible gains and losses, are combined additively to arrive at judgements of its riskiness. It is assumed that, ceteris paribus, generalized risk should increase with increasing amount and probability of loss and decrease with increasing possible gains and gain probability.

For a given alternative, every utility value u greater than 0 is a potential gain. The total amount of potential gains of an alternative u ∈ A can be described by the function u− : S −→ R0+ defined by

u−(s) := χ+(u(s)) · u(s) ≥ 0 .   (69)

The degree of uncertainty over the potential gains, as over the potential losses, is covered by the partial probability information P. That is, the objects of the theory of speculative risk under partial probability information are all possible generalized risk vectors (u+, u−, P).

Brachinger assumes separability in the sense that there is a pure risk measure R, a chance measure C, and a mapping F : R0+ × R0+ −→ R such that for every generalized risk vector (u+, u−, P) it holds that

R∗(u+, u−, P) = F[R(u+, P), C(u−, P)] ,   (70)

F being strictly increasing in the first variable and strictly decreasing in the second one. It should be linearly homogeneous and fulfill certain additional technical properties.

These assumptions then lead to the risk measure

R∗(u+, u−, P) = γ R(u+, P) − R(u−, P) .   (71)

Within this class of generalized risk measures, the parameter γ covers the decision maker’s risk attitude.

8. Final Remarks

A reading of the literature on concepts and measurements of risk shows that there is, by now, a variety of theoretical approaches. Among these approaches it is difficult to single out any one as superior by convincing a priori arguments; empirical data will be needed. The need for more empirical investigations to evaluate alternative measures of risk is obvious. It seems quite realistic that some definitions of risk may be more useful when the objective is to predict choices under uncertainty, while others may be superior predictors of introspective judgements of perceived risk.

Research on perceived risk should be better integrated with descriptive as well as prescriptive models of decisions under risk. A better understanding of risk judgements could help to develop more realistic risk-value models. In the areas of strategic planning and, especially, finance, risk-return considerations are standard. The capital asset pricing model is based on a variance-expected value model and is thus a special case of a risk-return model. For the capital asset pricing model, alternative risk-value foundations have already been attempted. However, all these economic applications rest on a deeper understanding of risk judgements.

Bibliography

Artzner P., Delbaen F., Eber J.-M., and Heath D. (1999). Coherent Measures of Risk. Mathematical Finance 9, 203–228. [In this paper, methods of measurement of both market risks and nonmarket risks are discussed.]

Brachinger H.W. and Weber M. (1997). Risk as a primitive: a survey of measures of perceived risk. OR Spektrum 19, 235–250. [This is a more comprehensive survey of naive standardized risk measures used in the economic literature as well as of recent theoretical and empirical developments.]

Fishburn P.C. (1977). Mean-Risk Analysis with Risk Associated with Below-Target Returns. The American Economic Review 67, 116–126. [This paper develops a general mean-risk dominance model in which risk is measured by a probability-weighted function of deviations below a specific target return.]

Fishburn P.C. (1982). Foundations of Risk Measurement. II. Effects of Gains on Risk. Journal of Mathematical Psychology 25, 226–242. [This paper presents an axiomatization of measures of “speculative” risk that include effects of gains on perceived risk.]

Fishburn P.C. (1984). Foundations of Risk Measurement. I. Risk as Probable Loss. Management Science 30, 396–406. [This paper follows the modern axiomatic approach and derives measures of “pure” risk from a relation “is at least as risky as” defined on pairs of random variables.]

Jia J., Dyer J.S., and Butler J.C. (1999). Measures of Perceived Risk. Management Science 45, 519–532. [This paper presents two classes of measures for perceived risk by decomposing a lottery into its mean and “standardized risk”.]

Luce R.D. (1980). Several Possible Measures of Risk. Theory and Decision 12, 217–228. [This paper develops several risk measures based on some structural and some aggregational assumptions. A correction appeared in: Luce R.D. (1981). Correction to: Several Possible Measures of Risk. Theory and Decision 13, 381.]

Luce R.D. and Weber E.U. (1986). An Axiomatic Theory of Conjoint, Expected Risk. Journal of Mathematical Psychology 30, 188–205. [This paper presents an axiomatization of subjective risk judgements that leads to a representation of risk by a risk measure with seven free parameters.]

Machina M. and Rothschild M. (1987). Risk. The New Palgrave: A Dictionary of Economics. London: Macmillan Press. [This entry discusses the risk problem from the general economic viewpoint of decision making under uncertainty.]

Sarin R.K. (1987). Some Extensions of Luce’s Measures of Risk. Theory and Decision 22, 125–141. [In this paper, based on a generalization of the expectation principle, a new exponential model of risk is developed.]