8/13/2019 Extreme Events Seoul
1/14
2002 Statistical Research Center for Complex Systems
International Statistical Workshop
19th & 20th June 2002
Seoul National University
Modelling Extremes
Rodney Coleman
Abstract
Low risk events with extreme consequences occur from flooding, terrorism,
epidemics, fraud, etc. Measuring and modelling these rare but extreme events is needed
to answer questions of how high to build a flood barrier, how much to spend on railway
safety, and how much to put into reserves to save a bank from collapse.
We look at probability models appropriate for modelling extreme losses. Model
uncertainty is shown to arise in fitting the long tail from just a small sample of
tail data. A methodology is shown which has been applied to extreme banking
losses.
Key words: Return values, quantiles, operational risk, extreme value
distributions, probability weighted moments, small sample estimation.
Table of Contents
1. Extreme risk events . . . . . . 2
2. Return values and quantile models 2
3. Statistical modelling of loss data 3
4. A case study . . . . . . . . . 9
5. Pricing the risk . . . . . . . 10
6. Alternative methodologies . . 11
7. Appendices . . . . . . . . . 11
References . . . . . . . . . 13
Dr Rodney Coleman
Department of Mathematics, Imperial College
180 Queens Gate, London SW7 2BZ, UK
1 Extreme risk events
Extreme risk events are rare. The consequences can be catastrophic. We have only to
remind ourselves of
Chernobyl,
nvCJD, the human form of mad cow disease, with under 150 cases so far, but
untreatable and identifiable only after death.
We might add: winning the lottery jackpot. The UK national lottery jackpot has odds of
about 1 in 14 million, given by choosing 6 numbers from 49.
When considering extreme loss we must also bear in mind balance of risk, such as the rare
complications in children from inoculations despite their benefit in saving lives.
Consideration will be directed to modelling the consequences of low risk events when these
consequences are expressed in monetary terms. Prior to the World Trade Center destruction
in September 2001, the largest insured loss was from Hurricane Andrew in 1992 which
caused $16 billion of insurance payouts. In 1999, supercomputers were able to successfully
predict the path of Hurricane Lenny, though its severity was overestimated. The low risk
means that we cannot demand that the residents of Florida go to live in the interior of the
continent, and insurance cover cannot be denied them if they continue to live in Florida.
But how should insurance for rare but extreme losses be priced? Worldwide insured losses
for the years 1970-1995, reported in Embrechts et al. (1997), illustrate the problem facing
the insurance industry. While the increase in the sizes of the losses throughout this time
is clearly attributable to increased economic development, the rate at which the extreme
losses occur may be a reflection of global warming.
After recent fatal railway train crashes in the UK, the question of the appropriate amount
to spend on safety devices has political and public dimensions. Railway companies were
criticised for even suggesting that safety might be measured in lives saved per million
pounds. The crashes have driven passengers onto the roads, where the number of deaths
each week exceeds the annual rate on the railways.
When Barings Bank collapsed overnight in 1995 due to fraud and mismanagement, the
financial world found that, despite having a good understanding of credit risk (non-
repayment) and market risk (price variability), its risks from business activities, such as
loss of key personnel, computer shut-down, and fraudulent transactions, were not being
measured or managed. Records were not even being collated of losses arising from
this so-called operational risk. Today stockmarket-listed companies report on operational
risk to shareholders in their annual reports.
2 Return values and quantile models
Return values are the basic risk measure for extremes.
Building regulations require building standards that ensure that catastrophic building
failure will occur in any year with less than a 1 in 50 chance.
Nuclear plants, dams, bridges and sea dykes generally require a less than 1 in 10,000
chance of catastrophic loss in any year.
These are return values. The probability of a loss larger than x in any year being less than
p gives, assuming independence of loss events, P(no loss > x over k years) ≥ (1 − p)^k.
Example 1. Reactor safety
If the probability of meltdown in any given nuclear plant in a year is 0.00005, and we
consider the 600 nuclear plants in the USA, then the probability of meltdown in one of the
plants sometime in the next 20 years is about 45%.
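The arithmetic can be checked directly; a minimal sketch (600 plants, a 0.00005 annual probability, and independence across plant-years, as in the example):

```python
# Probability of at least one meltdown among n_plants plants over a number
# of years, assuming independent plant-years each with probability p.
def prob_at_least_one(p: float, n_plants: int, years: int) -> float:
    return 1.0 - (1.0 - p) ** (n_plants * years)

risk = prob_at_least_one(p=0.00005, n_plants=600, years=20)
print(f"{risk:.1%}")  # about 45%
```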
Return values are defined via quantiles. The 100p% quantile is the loss x exceeded in
100(1 − p)% of the years. We write Q(p) for the quantile function, where Q(p) = x is the
inverse function of the loss size distribution function F(x) = P(X ≤ x) = p. We are
familiar with quantile measures such as the lower and upper quartiles, Q(0.25) and Q(0.75),
the median, Q(0.5), etc.
For any non-decreasing function T, T(Q(p)) is a quantile function. For example, if Q, Q0
and Q1 are quantile functions, so are

λ + ηQ0(p)  (η > 0),    αQ0(p) + βQ1(p)  (α > 0, β > 0),
ln Q(p)  (Q(p) > 0),    −Q(1 − p),    1/Q(1 − p).

These can be combined to create ever more elaborate models (Gilchrist 2000). Some
standard quantile models are given in Table 1. A location parameter λ and scale
parameter η give Q(p) = λ + ηQ0(p). Since the moments of heavy-tailed distributions may not
exist, location and scale are not necessarily expectation and variance.
Model         Q0(p)                    Model      Q0(p)
Uniform       p                        Frechet    (−ln p)^(−β)   (β > 0)
Exponential   −ln(1 − p)               Weibull    (−ln p)^β      (β > 0)
Power         p^β   (β > 0)            Beta       (1 − p)^...
Table 1: Some standard quantile models.
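Any quantile model in Table 1 doubles as a simulator: feeding Uniform(0,1) variates through Q0 is the inverse-transform method. A minimal sketch using the exponential model (the sample size of 100,000 is arbitrary):

```python
import math
import random

def q_exponential(p: float) -> float:
    """Exponential quantile function from Table 1: Q0(p) = -ln(1 - p)."""
    return -math.log(1.0 - p)

random.seed(1)
# Inverse-transform sampling: if U ~ Uniform(0,1), then Q0(U) has the
# distribution whose quantile function is Q0.
sample = [q_exponential(random.random()) for _ in range(100_000)]
print(sum(sample) / len(sample))  # close to 1, the unit-exponential mean
```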
3 Statistical modelling of loss data
The lognormal had this role historically in econometric theory, and the Weibull in reliability
modelling. Our experience is that the tail of the lognormal is insufficiently thick, failing to
model the extremely large losses.
Example 2. An illustrative small data set
7, 10, 15, 18, 20, 21, 22, 24, 25, 32, 36, 52, 80, 120.
The spot diagram suggests that we look for a skew distribution to model the data.
Table 2 shows the results of probability plots given by the Minitab statistics package. We
see from the table that none of the distributions offers good estimation of the tail. With
a sample of just 14 values the confidence bands are very wide, and the largest value 120 is
way beyond every fitted 95% quantile. The fitted p-value (1 − F(120)) for the largest value
120 is also given in the table, and demonstrates its status as an outlier.
Distribution   Q(0.95)   95% CI      Q(0.99)   95% CI      1 − F(120)
Normal         83        (60, 107)   104       (74, 134)   0.003
Lognormal      86        (48, 154)   141       (68, 295)   0.021
Gumbel         94        (73, 116)   111       (87, 135)   ...
Table 2: Fitted quantiles and tail probabilities for the data of Example 2.
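The normal row of Table 2 can be reproduced approximately from the sample mean and standard deviation; a sketch (Minitab's exact fitting and confidence-band conventions may differ slightly):

```python
import math
from statistics import NormalDist

data = [7, 10, 15, 18, 20, 21, 22, 24, 25, 32, 36, 52, 80, 120]
n = len(data)
mu = sum(data) / n
sigma = math.sqrt(sum((x - mu) ** 2 for x in data) / n)  # MLE (n divisor)

fit = NormalDist(mu, sigma)
print(round(fit.inv_cdf(0.95)))  # fitted Q(0.95), near 83
print(round(fit.inv_cdf(0.99)))  # fitted Q(0.99), near 104
print(1 - fit.cdf(120))          # fitted p-value for the outlier 120
```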
3.1 The small sample problem
An extreme loss appearing in a small sample over-represents its 1 in 100 or 1 in 1000
chance; equally, extreme losses are under-represented if none happen to be observed. We
must conclude that we cannot, through fitting data, model the true distribution of a
heavy-tailed distribution. This is true whatever model we try to fit.
3.2 Extreme value distributions
Two heavy-tailed models that can allow large observations arise out of Extreme Value
Theory. These are the Generalised Extreme Value distribution (GEV) and the Generalised
Pareto Distribution (GPD) (see Appendix 7.1). The GEV is the limit distribution for the
sample maxima, the largest observation in each period of observation. The GPD is the
limit distribution of losses which exceed a high enough threshold. Nevertheless, there is no
reason why the GEV should not be used for modelling threshold exceedances, or the GPD
for sample maxima, or any other heavy-tailed distribution, as we will not necessarily be
approaching the asymptotic position unless we have a very large data set. With a very
large data set the small sample problem does not arise, and tests of model fit can be
powerful enough to identify any lack of fit.
3.3 Parameter estimation
Errors in estimating the shape parameter ξ can make estimates of the loss severity
distribution and its quantiles quite unstable, especially its high quantiles. Table 3 shows how
relatively small changes in the estimated shape parameter can lead to significant errors in
high quantile estimation. In reading quantile function plots, differences are shown through
changes in y-axis values for fixed x-axis values.
Several methods have been developed for the estimation of the shape parameter. These can
lead to estimates that differ considerably more than is shown in Table 3.
100p%    ξ = 0.42   ξ = 0.46   ξ = 0.50
95%         5.9        6.3        6.8
99%        14.1       15.9       17.9
99.9%      40.9       50.0       61.2
Table 3: Quantiles Q(p) for GEV(ξ, 0, 1).
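Table 3 follows from the GEV(ξ, 0, 1) quantile function Q(p) = ((−ln p)^(−ξ) − 1)/ξ; a quick check:

```python
import math

def gev_quantile(p: float, xi: float, mu: float = 0.0, sigma: float = 1.0) -> float:
    """Quantile function of GEV(xi, mu, sigma) for xi != 0."""
    return mu + sigma * ((-math.log(p)) ** (-xi) - 1.0) / xi

for xi in (0.42, 0.46, 0.50):
    print([round(gev_quantile(p, xi), 1) for p in (0.95, 0.99, 0.999)])
# the xi = 0.42 row gives 5.9, 14.1, 40.9, as in Table 3
```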
Maximum likelihood estimation weights each value equally. Interest centred on the tail
suggests that more weight should be given to the largest observations. Hosking and Wallis
(1997) have developed a probability weighted method of moments (PWM) (see Appendix
7.2). Hill estimation for the shape parameter ξ (Hill 1975) uses just the data in the tail
(see Appendix 7.3). Other tail data methods give the Dekkers-Einmahl-de Haan and Pickands
estimators (Embrechts et al. 1997), but these would require larger data sets than the 12 or
14 values used to illustrate this article.
Figure 2: Quantile functions for GEV(ξ, 0, 1), ξ = 0.42, 0.46, 0.50.
Let the order statistics from a random sample of size n be denoted by x_{1:n} ≥ x_{2:n} ≥ … ≥
x_{n:n}. Table 4 gives the Hill estimates of ξ for the GEV fit of Example 2. The choice of how
many tail observations to use can lead to problems of reproducibility of the estimation. My
practice with a small sample is to use a trimmed mean of the Hill estimates,

ξ̂ = (1/(n − 6)) Σ_{k=4}^{n−3} ξ̂_k ,

to estimate ξ. The trimming should be adjusted so that, from inspection
of the Hill estimates, averaging is over a flattish subset of estimates. This trimmed mean is
not appropriate for large data sets, when a comparison can be made with other estimation
methods.
3.4 Fitting the data of Example 2
k         1    2    3    4    5    6    7    8    9    10   11   12   13   14
x_{k:14}  120  80   52   36   32   25   24   22   21   20   18   15   10   7
ξ̂_k           0.41 0.63 0.79 0.71 0.82 0.72 0.70 0.66 0.64 0.68 0.80 1.14 1.41
Table 4: Hill estimation applied to the data {x_{k:14}} of Example 2 (ξ̂_k is defined for k = 2, …, 14).
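Table 4 and the trimmed-mean shape estimate can be reproduced directly; a sketch of the Hill estimator and the trimming described above:

```python
import math

data = sorted([7, 10, 15, 18, 20, 21, 22, 24, 25, 32, 36, 52, 80, 120],
              reverse=True)  # x_{1:n} >= ... >= x_{n:n}
n = len(data)
logs = [math.log(x) for x in data]

# Hill estimates: xi_k = (1/(k-1)) * sum_{j=1}^{k-1} (ln x_{j:n} - ln x_{k:n})
hill = {k: sum(logs[j] - logs[k - 1] for j in range(k - 1)) / (k - 1)
        for k in range(2, n + 1)}
print(round(hill[2], 2), round(hill[3], 2))  # 0.41 0.63, as in Table 4

# Trimmed mean over k = 4, ..., n-3, with divisor n - 6
xi_hat = sum(hill[k] for k in range(4, n - 2)) / (n - 6)
print(round(xi_hat, 3))  # 0.715, the GEV2 shape value of Table 6
```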
In practice, we often use PWM+H estimates: probability weighted moments for estimating
μ and σ, with Hill estimation for ξ. A comparison can be made with using PWM for all
three parameters, and this is done in Table 6 for the same data. In the table: GEV1 and
GPD1 have all three parameters fitted by PWM; GEV2 and GPD2 use the Hill trimmed
mean estimate for ξ, with the other two parameters fitted by PWM; GEV3 gives the three
parameters fitted by maximum likelihood (using the Xtremes package of Reiss and Thomas
(2001)); GEV4, GEV5 and GEV6 use arbitrarily chosen values of ξ, with the other two
parameters fitted by PWM. The GPD and maximum likelihood fitted models are here
to demonstrate the difficulty of fitting a long-tailed model to small samples. Recall that our
objective is to model the extreme tail. Banking regulators are proposing that the 99.9%
fitted quantile be reported as a measure of risk.
In Figure 3, the three probability density functions (GEV1, GEV3 and GPD1 of Table
6) appear to have converging tails. However, the quantile function plots show significant
differences. For the GPD, the location parameter μ is the lower bound on the distribution.
x      GEV1    GEV2    GEV3    GPD1    GPD2    GEV4    GEV5    GEV6
ξ =    0.348   0.715   0.441   0.170   0.715   0.200   0.250   0.300
7      7.27    15.33   7.89    9.33    16.45   3.54    4.83    6.09
10     10.96   16.31   14.74   11.01   16.87   8.88    9.58    10.28
15     13.65   17.11   13.45   12.84   17.35   12.59   12.93   13.28
18     16.09   17.91   15.66   14.88   17.91   15.85   15.90   15.98
20     18.51   18.76   17.90   17.16   18.57   18.97   18.78   18.62
21     21.03   19.71   20.27   19.73   19.36   22.13   21.73   21.35
22     23.76   20.82   22.89   22.69   20.33   25.46   24.86   24.28
24     26.82   22.16   25.88   26.15   21.55   29.09   28.31   27.54
25     30.41   23.86   29.46   30.30   23.14   33.20   32.26   31.31
32     34.79   26.13   33.93   35.44   25.32   38.06   36.98   35.88
36     40.51   29.40   39.91   42.13   28.53   44.13   42.97   41.75
52     48.70   34.72   48.76   51.55   33.84   52.41   51.28   50.02
80     62.69   45.51   64.53   66.91   44.79   65.60   64.82   63.83
120    101.54  87.05   112.09  104.84  87.48   97.83   99.37   100.64
Table 5: Fitted values from GEV and GPD for Example 2.
          GEV1    GEV2    GEV3    GPD1    GPD2     GEV4    GEV5    GEV6
μ         20.127  19.364  19.418  8.546   16.255   21.015  20.682  20.381
σ         13.071  4.993   12.343  21.484  5.180    16.339  15.250  14.142
ξ         0.348   0.715   0.441   0.170   0.715    0.200   0.250   0.300
Q(0.5)    25.24   21.46   24.33   24.35   20.90    27.23   26.54   25.86
Q(0.9)    64.75   47.28   66.92   69.09   46.59    67.45   66.75   65.84
Q(0.95)   88.12   70.77   95.11   92.46   70.70    87.30   87.86   88.15
Q(0.99)   168.64  196.66  204.13  158.63  203.99   144.33  152.34  160.62
Q(0.999)  397.61  987.09  579.52  291.04  1020.54  264.53  302.67  347.63
Table 6: Fitted parameters and quantiles from GEV and GPD for Example 2.
Figure 3: a) The probability density functions for the fitted distributions GEV1 (black),
GEV3 (red) and GPD1 (green) of Table 6. b) and c) The quantile functions for high
quantiles.
3.5 A simulation study
A simulation study using data from 30 independent 12-samples from GEV(0.5, 0, 1) shows,
through the large estimated standard errors (Table 7), that PWM+H gives unstable
estimation of high quantiles. In this simulation the probability plotting points p_{j:n} were
{u_{j:n}}, the order statistics for a random sample of size n from Uniform(0,1), with a new
random sample taken for fitting each GEV sample (see Appendix 7.2). Sample sizes of
about 100 seem to be required. Other estimation methods fare worse.
Quantile 95% 99%
True value 6.83 17.95
Average estimate 5.81 15.85
Est. standard error 3.06 12.95
Table 7: A simulation study using data from 30 independent 12-samples from GEV(0.5, 0, 1).
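The instability is easy to illustrate. The sketch below is not the full PWM+H fit of the study; it simply draws 30 independent 12-samples from GEV(0.5, 0, 1) by inverse transform and shows how widely even the sample maximum scatters around the true Q(0.95) = 6.83:

```python
import math
import random
from statistics import mean, stdev

def gev_quantile(p: float, xi: float = 0.5, mu: float = 0.0,
                 sigma: float = 1.0) -> float:
    """Quantile function of GEV(xi, mu, sigma) for xi != 0."""
    return mu + sigma * ((-math.log(p)) ** (-xi) - 1.0) / xi

# True quantiles, matching Table 7
print(round(gev_quantile(0.95), 2), round(gev_quantile(0.99), 2))  # 6.83 17.95

random.seed(2)
# 30 independent 12-samples; record the maximum of each
maxima = [max(gev_quantile(random.random()) for _ in range(12))
          for _ in range(30)]
print(round(mean(maxima), 1), round(stdev(maxima), 1))  # widely scattered
```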
Simulations (Embrechts et al. 1997) demonstrate that problems of estimation for heavy-
tailed distributions can arise even when the exact model is known and there are lots of
data. Tests of fit for any particular heavy-tailed distribution appear to lack the power to
detect a lack of fit for any realistically sized data set, as was seen in our studies (Cruz et
al. 1998).
One advantage of PWM over maximum likelihood is its easier implementation and its
greater applicability in small samples (Landwehr et al. 1979). As Hosking et al. (1985)
noted, although PWM estimators are asymptotically inefficient compared with maximum
likelihood estimators (MLEs), no loss of efficiency is detectable in samples of 100 or less.
The biases of PWM estimators are small and decrease rapidly as the sample size
increases. The standard deviations of the PWM estimators are comparable with those of
MLEs for moderate sample sizes (n = 50, 100) and are often substantially less than those of
MLEs for small samples.
4 A case study
One mechanism for estimation is to fit a GEV distribution to the sample maxima of loss
data over each of the preceding 12 months. The estimation process can be applied daily,
weekly or monthly on a rolling 12-month basis. In view of the heavy-tail characteristics
(just think of the size of a potential catastrophic loss), a very high quantile such as 99%
can yield a figure which implies an economic capital allocation beyond that which would be
feasible. Recall the Basel Committee proposal that a 99.9% quantile value over a one-year
holding period be calculated.
Severe fraud loss events will show the fitted parameters to vary in time, since each large loss
event will distort the shape of the fitted distribution. Historic data cannot be assumed to be
like recent or future data, so long data series cannot be assumed to improve the estimation.
My view is that although no estimates will be reliable, using rolling 12-month data will
highlight the impact of each extreme loss, and allow its effect to decline in time. It also
provides a pricing mechanism which reflects the occurrence of extreme loss and which can
be compared with hedge prices, such as insurance costs.
Example 3. Retail banking fraud
The data are the order statistics for the 12 monthly maxima of the losses from fraud during
1995 at a large UK retail bank. The data come from Cruz (2002).
600,000.34 394,672.11 260,000.00 248,341.96 239,102.93 165,000.00
120,000.00 116,000.00 86,878.46 83,613.70 75,177.00 52,700.00
Calculating losses to the nearest penny cannot easily be explained. Ignoring the variety
of precisions gives the following table of fitted distributions and quantile estimates (values
given in units of $1000), with fitted values (rounded) for the five largest order statistics.
1995   μ       σ       ξ      Q(0.5)   Q(0.9)   Q(0.95)   Q(0.99)   Q(0.999)
GEV1   128.16  89.42   0.214  162.3    386.6    499.1     828.1     1541.0
GEV3   122.68  67.88   0.594  150.5    443.3    675.4     1764.4    6920.6
GPD1   42.36   168.11  0.043  157.1    410.6    514.5     743.9     1045.4

1995   GEV1   GEV3   GPD1
600    531    753    541
395    353    386    377
260    281    279    297
248    235    223    244
239    201    187    204
Table 8: Fitted parameters and quantiles (in units of $1000), with fitted values (rounded)
for the five largest order statistics.
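The quantile columns of Table 8 follow from the fitted parameters through the GEV quantile function Q(p) = μ + σ((−ln p)^(−ξ) − 1)/ξ; checking the GEV1 row (values in units of $1000):

```python
import math

def gev_quantile(p: float, xi: float, mu: float, sigma: float) -> float:
    """Quantile function of GEV(xi, mu, sigma) for xi != 0."""
    return mu + sigma * ((-math.log(p)) ** (-xi) - 1.0) / xi

mu, sigma, xi = 128.16, 89.42, 0.214  # fitted GEV1 parameters from Table 8
qs = {p: round(gev_quantile(p, xi, mu, sigma), 1)
      for p in (0.5, 0.9, 0.95, 0.99, 0.999)}
print(qs)  # close to the reported 162.3, 386.6, 499.1, 828.1, 1541.0
```

The small discrepancies in the last decimal place come from the rounding of the reported parameters.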
Wide differences in parameter values of GEV1 (PWM) and GEV3 (MLE) can be seen, with
resulting high variability in the fitted 99% and 99.9% quantiles.
Figure 4: Fitted probability density functions and those parts of the quantile function plots
in the long tail. GEV1 (black), GEV3 (red) and GPD1 (green).
5 Pricing the risk
We have seen how the lack of good data in the tail of the loss severity distribution creates
uncertainty in the model, and in the high quantile values. This carries through to any price
that would be put on the risk when transferring it to insurance or cat bonds. How this
pricing should be addressed will be the subject of another paper, but it once again illustrates
this uncertainty.
Let us assume an excess-of-loss insurance contract with an excess of u ($100,000 for
example), with losses over a calendar year forming the basis of the premium for the following
year. Then pricing requires us to estimate the shortfall, the excess of loss over the threshold
u, when a failure has resulted in such a loss. For a random variable X from the loss
distribution, this is the conditional distribution of X − u given that X > u.
Estimation procedures and tests for the appropriateness of the choice of model (mean excess
estimation and Q-Q plots) are found in Embrechts et al. (1997).
A useful consequence of using the GPD model for fitting loss data is that its mean excess
is linear in u. For ξ < 1,

E(X − u | X > u) = (σ + ξu)/(1 − ξ).

A plot of sample values of X − u against u will give approximately a straight line with
slope ξ/(1 − ξ). In practice the few sample values with large u can make the GPD fitting
unsatisfactory.
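The empirical counterpart is the sample mean excess, the average of x − u over the observations exceeding u; plotted against u it should be roughly linear where the GPD fits. A sketch on the Example 2 data (the thresholds chosen here are arbitrary):

```python
data = [7, 10, 15, 18, 20, 21, 22, 24, 25, 32, 36, 52, 80, 120]

def mean_excess(data: list, u: float) -> float:
    """Sample mean excess: average of (x - u) over observations x > u."""
    exceed = [x - u for x in data if x > u]
    return sum(exceed) / len(exceed) if exceed else float("nan")

for u in (10, 20, 30, 50):
    print(u, round(mean_excess(data, u), 2))
```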
To obtain annual premiums we need to take into account the frequency distribution, so that
we can multiply the single event price by the number expected in any year. In practice we
might choose to use the frequency over the previous calendar year and the estimated mean
shortfall for that year's losses.
6 Alternative methodologies
6.1 Resampling techniques
The jackknife and bootstrap (Efron and Tibshirani 1993) can be used to obtain sampling
properties of the estimates, such as confidence intervals. However this will not correct
for the small sample problem. Similarly for the hierarchical Bayes resampling of Medova
(2000) and Kyriacou & Medova (2000). The latter treats non-stationarity by letting the
GPD parameters be random from distributions which themselves have parameters (hyper-
parameters). The frequency process, generally modelled as a Poisson process independent
of the loss size process, can also be given hyper-parameters. They simulate a loss event
(severity plus time of occurrence) from the GPD and Poisson process having these estimated
hyper-parameters. The estimates are all updated as new simulated data are created.
6.2 Econometric modelling of loss data
The direct use of linear predictive modelling can, for example, relate the frequency and
severity of loss to underlying predictor variables (risk indicators) such as system downtime
(SD), number of employees (E), and number of transactions (T). It is essential that this
be accompanied by model checking and validation. Values of R² of 95% or more (showing
that the model has accounted for at least that amount of variability in the data) should not
be taken as evidence of the model's predictive power. Backtesting for predictive power is
important.
6.3 Dynamic financial analysis (DFA)
DFA refers to enterprise-wide integrated financial risk management (see Kaufmann et al.
(2001)). It involves (mathematical) modelling of every business line, with dynamic updating
in real time. Research for the insurance industry is centred at RiskLab, sponsored by
ETH Zurich, Credit Suisse, Swiss Re, and UBS.
6.4 Bayesian Belief Networks (BBNs)
A BBN is an acyclic graph of nodes connected by directed links of cause and effect, with a
probability table at each node. Their development arose out of concern that cause-and-effect
relationships are not incorporated into statistical/econometric modelling, and that
risk assessment needs to incorporate uncertainty, diverse information, expert judgement,
and partial information. Probability relationships via Bayes' Theorem allow scenarios to
be transmitted through the network.
7 Appendices
7.1 The GEV and GPD distributions
Generalised Extreme Value Distribution (GEV)
For a random variable X from GEV(μ, σ, ξ), where μ and σ are the location and scale
parameters and ξ is the shape parameter, let z = (x − μ)/σ. Then

P(X ≤ x) = F_{μ,σ,ξ}(x) = F_{0,1,ξ}(z)
         = exp{−exp(−z)}          for all z          (ξ = 0)
         = exp{−(1 + ξz)^{−1/ξ}}  for 1 + ξz ≥ 0     (ξ ≠ 0)

As ξ → 0 the ξ ≠ 0 case tends to the ξ = 0 case, the Gumbel distribution; for ξ > 0 we
have the Frechet distribution, and for ξ < 0 the Weibull distribution.
Generalised Pareto Distribution (GPD)
For X from GPD(μ, σ, ξ), with z = (x − μ)/σ ≥ 0,

P(X ≤ x) = 1 − exp(−z)           (ξ = 0)
         = 1 − (1 + ξz)^{−1/ξ}   for 1 + ξz ≥ 0     (ξ ≠ 0)

7.2 Probability weighted moments (PWM)
For the order statistics x_{1:n} ≥ x_{2:n} ≥ … ≥ x_{n:n} from a random sample of size n,
the sample probability weighted moments are

β̂_r = (1/n) Σ_{j=1}^{n} x_{j:n} p^r_{j:n}

where

p_{j:n} = (n − j + 1/2)/n   or   (n − j + 1)/(n + 1)  ( = E(U_{j:n}) )   or   u_{j:n}

p^r_{j:n} = (p_{j:n})^r   or   E{(U_{j:n})^r}

and {u_{j:n}} are the order statistics u_{1:n} ≥ u_{2:n} ≥ … ≥ u_{n:n} for a random sample of
size n from Uniform(0,1).
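The sample moments β̂_r are simple to compute; a sketch using the plotting positions p_{j:n} = (n − j + 1)/(n + 1) and the data of Example 2:

```python
data = sorted([7, 10, 15, 18, 20, 21, 22, 24, 25, 32, 36, 52, 80, 120],
              reverse=True)  # x_{1:n} >= ... >= x_{n:n}

def pwm(data: list, r: int) -> float:
    """Sample PWM beta_r with plotting position p_{j:n} = (n-j+1)/(n+1)."""
    n = len(data)
    # enumerate is 0-based, so (n - j)/(n + 1) here equals (n - j + 1)/(n + 1)
    # for the 1-based index used in the text.
    return sum(x * ((n - j) / (n + 1)) ** r
               for j, x in enumerate(data)) / n

b0, b1, b2 = (pwm(data, r) for r in range(3))
print(round(b0, 2), round(b1, 2))  # 34.43 23.81; b0 is the sample mean
m1, m2 = 2 * b1 - b0, 3 * b2 - b0
print(round(m1, 2))  # 13.2
```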
The PWM estimates of ξ, σ and μ are found by substituting the sample values β̂_0 = x̄,
β̂_1 and β̂_2 for their theoretical values. Let m_1 = 2β_1 − β_0 and m_2 = 3β_2 − β_0.
For the GEV,

ξ̂ = 7.8590c + 2.9554c²  approximately, where c = ln 2/ln 3 − m_1/m_2 (Hosking et al. 1985),

σ̂ = m_1 ξ̂ / ((2^ξ̂ − 1) Γ(1 − ξ̂)),    μ̂ = β̂_0 + σ̂{1 − Γ(1 − ξ̂)}/ξ̂,

where Γ(α) = ∫_0^∞ t^{α−1} e^{−t} dt is the Gamma Function.
For the GPD, the corresponding PWM estimates are given in Hosking and Wallis (1997).

7.3 Hill estimation
For the order statistics x_{1:n} ≥ x_{2:n} ≥ … ≥ x_{n:n} from a random sample of size n,
the Hill estimates of the shape parameter ξ are

ξ̂_k = (1/(k − 1)) Σ_{j=1}^{k−1} (ln x_{j:n} − ln x_{k:n})    (k = 2, …, n)
The form

ξ̂_k = (1/k) Σ_{j=1}^{k} (ln x_{j:n} − ln x_{k:n})    (k = 2, …, n)

is sometimes used, but this does not improve the fit in respect of the data sets of this
paper.
References
Basel Committee on Banking Supervision. (2001a) Working Paper on the
Regulatory Treatment of Operational Risk. Bank for International Settlements,
September 2001. (See http://www.bis.org)
Basel Committee on Banking Supervision. (2001b) Consultative Paper 2. Bank
for International Settlements
Basel Committee on Banking Supervision. (2001c) Consultative Paper 2.5. Bank
for International Settlements
British Bankers Association. (1999) BBA/ISDA/RMA Research Study on
Operational Risk. (See http://www.bba.org.uk)
Coleman R. (2000) Using modelling in OR management. Conference Operational Risk
in Retail Financial Services (London, June 2000)
(Unpublished: http://stats.ma.ic.ac.uk/rcoleman/iir.pdf)
Coleman R & Cruz M. (1999) Operational risk measurement and pricing (Learning
Curve). Derivatives Week VIII, No. 30, July 26, 5-6.
(Revised preprint: http://stats.ma.ic.ac.uk/rcoleman/opriskLCV.html)
Coles S & Powell E. (1996) Bayesian methods in extreme value modelling: A review
and new developments. International Statistical Review 64, 119-136
Cruz M G. (2002) Modeling, measuring and hedging operational risk. Wiley
Cruz M, Coleman R & Salkin G. (1998) Modeling and measuring operational risk.
Journal of Risk 1, 63-72
Danielsson J, Embrechts P, Goodhart C, Keating C, Muennich F, Renault O
& Shin H S. (2001) Submitted in response to the Basel Committee on Banking
Supervision's request for comments. May, 2001.
(http://www.riskresearch.org)
Efron B & Tibshirani R. (1993) An Introduction to the Bootstrap. Chapman & Hall
Embrechts P. (Editor) (2000) Extremes and Integrated Risk Management. Risk
Books, London
Embrechts P, Kluppelberg C & Mikosch T. (1997) Modelling Extremal Events.
Springer
Gilchrist W. (2000) Statistical modelling with quantile functions. Chapman &
Hall/CRC
Hill B M. (1975) A simple general approach to inference about the tail of a distribution.
Annals of Statistics 3, 1163-1174
Hosking J R M & Wallis J R. (1997) Regional Frequency Analysis: An Approach
Based on L-Moments. Cambridge University Press
Johnson N L, Kotz S & Balakrishnan N. (1995) Continuous Univariate
Distributions. Volume 2. (2nd Edition). Wiley
Jorion P. (1997) Value at Risk: The New Benchmark for Controlling Market Risk.
McGraw-Hill
Kaufmann R, Gadmer A & Klett R. (2001) Introduction to Dynamic Financial
Analysis. ASTIN Bulletin 31 (May), 213-249.
(Available also from http://www.risklab.ch)
Kyriacou M N & Medova E A. (2000) Extreme values and the measurement of
operational risk II. Operational Risk 1(8), 12-15
Landwehr J, Matalas N & Wallis J R. (1979) Probability weighted moments
compared to some traditional techniques in estimating Gumbel parameters and
quantiles. Water Resources Research 15, 1055-1064
Medova E A. (2000) Extreme values and the measurement of operational risk I.
Operational Risk 1(7), 13-17
Reiss R-D & Thomas M. (2001) Statistical Analysis of Extreme Values. (2nd Edition).
Birkhauser