8/14/2019 Ms Thesis: Non-Linear Risk Measurement
NON-LINEAR RISK MEASUREMENT
Jeremy O'Donnell
Thesis submitted to the University of London
for the degree of Master of Philosophy
IMPERIAL COLLEGE
April 2003
To my wife, for constant encouragement, my daughters, for constant
distractions, and my son, for waiting until I had finished.
ACKNOWLEDGEMENTS
I would like to thank Professor Nigel Meade for working with me over the many years it
has taken to complete this research on a part-time basis. He has been an unwavering
supporter of my subject area and a huge help in turning my raw ideas into a structured,
sensible presentation. He has been aided in the background by Mr. Robin Hewins, who has
been a reference for banking industry material.
I would like also to thank Dr. Richard Flavell, an early inspiration, who suggested that my
research include coverage of the regulatory environment. Professor Stewart Hodges, of the
Warwick Business School Financial Options Research Centre, has always expressed an
interest in the research and encouraged me in the early days.
The other staff of the Management School have always been ready with advice and
assistance, with special mention to Professor Sue Birley, for advice on the scope of my
research.
Both Kris Wulteputte and Jacques Longerstaey provided assistance while they were
supporting the RiskMetrics methodology, at the RiskMetrics Group and at JP Morgan.
Kris in particular gave me useful feedback on the core experimental work.
I would like to express sincere gratitude to both Gulf International Bank and Merrill Lynch
for their financial support. Don Simpson, Graham Yellowley and Jay Morreale indulged
my ambitions without asking for anything in return. Dominic Ash of Merrill Lynch
provided careful proofreading of the final thesis draft.
Numerical Technologies of Tokyo provided the Monte Carlo simulation add-in for Excel
that I use in Chapter 5. This seemed to work much better than my own efforts and saved me a lot of heartache in the final months of my results gathering.
Finally, I would like to thank my family, for making me believe that I could do this.
ABSTRACT
Several tools exist to assist the modern risk manager in monitoring investments, ranging
from institution-wide position reports, through market sensitivity analysis and credit
exposure reports, to complex money-at-risk calculations and simulations. The risk manager
must choose one or more methodologies to combine this data to provide meaningful
measures of the risk. One particularly difficult area has been the risk that arises from
positions in options, in circumstances when the risk manager does not want to
compromise on speed, accuracy or cost of implementation. This will happen when the
institution has significant option positions, requires a measure with credibility, and has
limited resources for systems implementations. In this thesis, I look at the popular
methodology available to risk managers, RiskMetrics, and assess the level of compromise
that the risk manager must make when using the fast techniques within this methodology
to measure non-linear risk from option portfolios. The thesis describes the common
shortfalls in the RiskMetrics model when applied to a typical portfolio for a financial
institution. The thesis goes on to examine the challenges of non-linear risk measurement in
more detail. In particular, I examine the measurement of non-linear risk using the
RiskMetrics Delta-Gamma-Johnson modelling method. Very few publications in the field
of non-linear Value at Risk (VaR) have included a review of the Johnson methodology. I
show that it can work effectively for non-linear risk in interest rate options, under some
circumstances. I also show an example for which it understates the risk. Organisations may
still at this time be adopting the methodology as a component of a RiskMetrics VaR
implementation. The thesis presents a framework that risk managers can apply to their own
portfolios, to assess the suitability of the Delta-Gamma-Johnson approach.
TABLE OF CONTENTS
1 INTRODUCTION .....9
1.1 The Development of Risk Management .....9
1.2 Chronology .....12
1.3 The G30 Report .....14
1.4 Market Risk and VaR .....16
1.5 Credit Risk .....18
1.6 Integrated Risk .....19
1.7 The UK Regulator .....19
1.8 Non-Linear Risk Measurement .....21
1.9 Motivations and research objectives .....22
1.10 Organisation of the thesis .....23
2 VALUE AT RISK CONCEPTS .....24
2.1 Introduction .....24
2.2 A history of derivatives trading .....26
2.3 A history of Risk Management .....27
2.4 Risk in the Financial Markets .....29
2.5 Concentration and diversity .....31
2.6 Risk Management .....32
2.7 Value at Risk .....38
2.8 Benefits of Risk Management Controls .....40
2.9 VaR Approaches .....42
2.10 VaR Inputs .....52
2.11 VaR Outputs .....53
2.12 Other Risk Measures .....55
2.13 Coherency .....56
2.14 When to use which method .....57
2.15 Back testing .....57
2.16 RiskMetrics .....58
2.17 What's wrong with VaR .....58
2.18 Conclusion .....61
3 THE RISKMETRICS METHODOLOGY .....62
3.1 Introduction .....62
3.2 RiskMetrics in Summary .....63
3.3 History of the methodology .....64
3.4 JP Morgan .....65
3.5 Reuters .....65
3.6 Context and development .....65
3.7 The Heart of RiskMetrics .....66
3.8 Time horizon .....67
3.9 RiskMetrics standard deviations .....67
3.10 RiskMetrics Covariances and Correlations .....69
3.11 Data mapping .....69
3.12 Inclusion of non-linear instruments .....72
3.13 Limitations .....73
3.14 Assumptions .....73
3.15 Features .....75
3.16 Conclusion .....79
4 RISKMETRICS IMPLEMENTATION .....80
4.1 Introduction .....80
4.2 RiskMetrics in the trading room .....81
4.3 Data .....82
4.4 Interest Rate Risk .....83
4.5 Equity Concentration Risk .....84
4.6 Corporate Bonds .....85
4.7 Option Risk .....86
4.8 Conclusion .....94
5 USE OF JOHNSON TRANSFORMATION IN RISKMETRICS .....95
5.1 Introduction .....95
5.2 The Standard RiskMetrics Approach .....95
5.3 Delta-Gamma .....95
5.4 Non-linear market risk in RiskMetrics .....96
5.5 Delta-Gamma-Johnson Method .....98
5.6 Worked Example .....102
5.7 Full test .....107
5.8 Test Statistic for Null Hypothesis .....108
5.9 Test Details .....109
5.10 Results of Experiment .....110
5.11 Analysis .....111
5.12 Conclusion .....120
6 CONCLUSIONS .....122
6.1 Further work .....123
6.2 Summary .....126
APPENDICES .....127
Deal Data .....128
Results of Experiment .....129
REFERENCES .....130
LIST OF TABLES
1. Chronology of VaR and non-linear risk publications.............................................................................12
2. Recommendations of the G30 Report into Derivatives .....14
3. A history of derivatives trading .....26
4. Significant Risk Management Events .....27
5. Taxonomy of Limits .....33
6. Features of BPV Limits .....37
7. Prevention of Risk Management Events .....40
8. Features of Variance-covariance .....45
9. Features of Historic Simulation .....49
10. Features of Monte Carlo .....51
11. Other Risk Measures .....55
12. Development of RiskMetrics .....66
13. Challenges of RiskMetrics .....81
14. Exotic option features .....89
15. Trade Data and Market Data for Cap 1 .....102
16. Delta Exposure for Cap 1 .....104
17. Gamma Exposure for Cap 1 .....104
18. Covariance matrix for USD tenors, November 1996 .....105
19. Mina & Ulmer calculated moments for Cap 1 .....105
20. Simulated Moments for Cap 1 using full revaluation and Delta-Gamma approximation .....105
21. Johnson fit parameters for Cap 1 .....106
22. Moments of pdf for Cap 1 using Johnson Transform .....107
23. Results of linear regression of Kolmogorov-Smirnov statistic vs dependent variables .....116
TABLE OF FIGURES
1. Historic Simulation results .....48
2. Risk System Structure .....52
3. Payoff for a linear instrument .....87
4. Options pay-offs at maturity .....87
5. Option pay-off curves .....88
6. Delta-Gamma-Johnson Method .....98
7. Approach for test procedure .....108
8. K-statistic vs Option Strike/Underlying Forward .....112
9. K-statistic vs Option Expiry .....113
10. K-statistic vs reference portfolio skew .....114
11. K-statistic vs calculated portfolio skew .....115
12. K-statistic vs transformation type .....116
13. Cumulative frequency of returns using delta-gamma-Johnson and simulation: Cap 7 .....117
14. Cumulative frequency of returns using delta-gamma-Johnson and simulation: Cap 16 .....118
15. Cumulative frequency of returns using delta-gamma-Johnson and simulation: Cap 10 .....119
16. Cumulative frequency of returns using delta-gamma-Johnson and simulation: portfolio .....120
1 INTRODUCTION
1.1 The Development of Risk Management

The profits and solvency of a financial institution are subject to risks arising from the
financial assets it holds and the contracts it has executed. Risk managers monitor the
risks being run by the institution, maintaining the risk within levels approved by the board
of the institution. Value at Risk (VaR) has become a popular family of tools to assist the
risk manager.
The roots of risk management lie in portfolio theory and statistics. Portfolio theory is a
particularly strong influence on the variance-covariance class of Value at Risk processes,
while the statistics literature is fundamental to all current day risk calculations. From the
statistics of sample distributions, we know how to predict future market behaviour, given observations from the past. The utility of the normal distribution is particularly important
in this respect. From portfolio theory, we know how to combine the market behaviour of
individual assets, to predict the market behaviour of a portfolio of these assets. We know
that a diversified portfolio will have less risk than the riskiest asset in the portfolio. A Value
at Risk measure must reward traders for diversifying their risks, rather than punish them,
and an easy way to do this is to build upon portfolio theory.
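The diversification result can be made concrete with a minimal sketch. The weights, volatilities, and correlation below are invented for illustration, not taken from the thesis; the point is only that combining two imperfectly correlated assets yields a portfolio volatility below that of the riskier asset.

```python
import math

# Hypothetical two-asset portfolio (illustrative figures only).
w = [0.5, 0.5]        # portfolio weights
vol = [0.20, 0.30]    # annualised volatility of each asset
rho = 0.3             # correlation between the two assets

# Portfolio variance combines individual variances with the covariance:
#   var_p = w1^2 s1^2 + w2^2 s2^2 + 2 w1 w2 rho s1 s2
var_p = (w[0] ** 2 * vol[0] ** 2
         + w[1] ** 2 * vol[1] ** 2
         + 2 * w[0] * w[1] * rho * vol[0] * vol[1])
vol_p = math.sqrt(var_p)

# Diversification: portfolio volatility sits below the riskier asset's.
assert vol_p < max(vol)
print(f"portfolio volatility: {vol_p:.2%}")
```

Any VaR measure built on this variance-covariance machinery inherits the diversification reward automatically, which is why portfolio theory is such a natural foundation.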
The G30 report in 1993 marks the beginning of current day Value at Risk literature. The
report was prompted by concern over the growth of derivatives trading in the industrial
world, which the G30 countries represent. It set out industry best practice for managing
derivatives trading. This report prompted numerous banks to develop market risk
management programmes, although larger institutions were already developing risk
management frameworks at the time of the report. Many banks were already implementing
internal risk models when JP Morgan's RiskMetrics was published the following year.
Although weak in some areas, this landmark publication set a minimum standard for all
banks to attain, when measuring market risk.
Within Europe, the Capital Adequacy Directive (CAD) has strongly influenced risk
management practice. In the UK, the Bank of England and its regulatory successor, the
FSA, have published significant prescriptive methodologies for reporting risk exposures
and allocating risk capital. These methodologies are often conservative, in order to
maintain a general applicability across all regulated bodies. The regulator offers banks the
alternative of obtaining recognition for their internal risk management processes and
models. Many banks have now completed the process to achieve model recognition for
some or all of their trading activities, which allows them to combine internal and regulatory
risk management reporting and benefit from reduced capital requirements.
There are many tools available to the market risk manager to calculate Value at Risk.
Several authors have shown that different tools are appropriate for different circumstances.
While some methodologies are undoubtedly less computer intensive, others may operate
better in unusual market conditions or where there is restricted market data available.
The variance-covariance approach is comprehensively documented through its best known
implementation, RiskMetrics. The literature for other approaches, principally historic
simulation and Monte Carlo, is more generic in nature. Besides serving as a parametric approach for VaR, Monte Carlo processing is also used for pricing certain types of financial
instruments, particularly complex options. The method is used to obtain a quasi-sample
distribution for the portfolio value, based on evaluating a large number of alternative
outcomes. The development of the method has been aided by the ongoing improvements
in computer processing capacities, as well as research aimed at reducing the number of
simulations required to obtain an acceptable standard error on the estimator. Much of the
academic research focuses on achieving an unbiased estimator from the simulation. Monte
Carlo is now an important tool within VaR, providing an approach for predicting market
behaviour when the markets are difficult to model or have poor historic data.
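The Monte Carlo approach described above can be sketched in a few lines. This is an illustration with invented parameters (a hypothetical one-factor, option-like portfolio and an assumed 1% daily factor volatility), not the experimental set-up used later in the thesis: simulate many market outcomes, revalue the portfolio under each, and read the VaR off the tail of the resulting quasi-sample distribution.

```python
import random

random.seed(42)

# Hypothetical one-factor portfolio whose value responds non-linearly
# (option-like convexity) to a market move x. Sensitivities are invented.
def portfolio_pnl(x):
    return 100 * x + 50 * x * x

daily_vol = 0.01      # assumed 1% daily volatility of the risk factor
n_sims = 100_000

# Evaluate a large number of alternative outcomes to build a
# quasi-sample distribution of portfolio P&L.
pnls = sorted(portfolio_pnl(random.gauss(0.0, daily_vol))
              for _ in range(n_sims))

# 95% one-day VaR: the loss exceeded in only 5% of simulated outcomes.
var_95 = -pnls[int(0.05 * n_sims)]
print(f"95% one-day VaR: {var_95:.2f}")
```

The cost driver is plain from the sketch: each of the 100,000 scenarios requires a full portfolio revaluation, which is trivial here but expensive for a real book of exotic options.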
Monte Carlo is an expensive investment from the perspective of available computer
resources. No matter the size of the organisation, there is never enough compute capacity
to go round and, for this reason, research continues to find a cheaper way of incorporating
option risks into the Value at Risk measure. RiskMetrics suggests fitting a generic
probability distribution function to the portfolio. Other authors have provided alternative
methods.
Value at Risk measures are critically dependent on market data and the quality of the
analysis the risk manager performs on this data. A common assumption within a Value at
Risk process is that changes in market data follow a normal distribution. Econometricians
have shown that deviations from the normal distribution, such as excess kurtosis (fat tails),
are clearly observable in the financial markets. Another common technique underlying
Value at Risk is to model the joint distribution between two market data time series by
capturing the correlation, using ordinary least squares regression. Studies have also shown
that regression analysis should be used with caution with financial data. The fundamental
assumptions of linear regression are frequently breached by financial time series data.
Observations from the financial markets display autocorrelation, for which current
observations are correlated with previous observations in the same time series. They also exhibit heteroscedasticity, that is, time-varying volatility and covariance, making ordinary least-squares correlation results meaningless or misleading. Other models relax some of the
assumptions, thereby improving the performance of the model with financial data. A
notable example with an application in Value at Risk is the GARCH (generalised autoregressive conditional heteroscedasticity) volatility model, which makes it possible to derive
steady state volatilities and correlations for time series.
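The GARCH(1,1) recursion behind that statement can be sketched directly; the parameter values below are illustrative assumptions, not fitted to any series. Tomorrow's conditional variance is a weighted combination of a long-run level, today's squared return, and today's variance, and when alpha + beta < 1 the process reverts to a steady state.

```python
# GARCH(1,1) conditional variance recursion (illustrative parameters):
#   sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]
omega, alpha, beta = 0.000002, 0.05, 0.90   # alpha + beta < 1 for stationarity

def garch_update(prev_var, prev_return):
    """One step of the GARCH(1,1) conditional variance recursion."""
    return omega + alpha * prev_return ** 2 + beta * prev_var

# Steady-state (unconditional) variance the process reverts to.
steady_var = omega / (1 - alpha - beta)

# Starting at the steady state with a return of typical magnitude,
# the recursion reproduces the steady state.
var_next = garch_update(steady_var, steady_var ** 0.5)
assert abs(var_next - steady_var) < 1e-12
print(f"steady-state daily volatility: {steady_var ** 0.5:.4%}")
```

It is this steady-state property that lets a risk manager extract stable volatility and correlation estimates from noisy financial time series.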
In this chapter, we follow the evolution of VaR literature through its modern history. Key
publications are set out chronologically and any contribution to the subject of non-linear
VaR measures noted. A special place is reserved to outline the role of the regulator, which
has been key in maintaining momentum behind VaR. At the end of the chapter, we take a
look at the future direction for research.
1.2 Chronology

The chronology of significant proprietary and regulatory Value at Risk publications is set
out below, together with the significant non-linear risk academic papers.
Year  Publication
      Themes

1988  BIS Capital Accord
      Regulatory requirements to allocate capital for exposure to default risk.

1993  G30 report on Derivatives trading
      Senior management oversight, marking to market, measuring market risk, stress testing, independent oversight, credit exposure management and measurement.

1993  European Union Capital Adequacy Directive
      Requirement to allocate capital against exposure to market risks.

1993  BIS consultative paper on market risk
      Framework for assessing capital adequacy of market risk.

1994  RiskMetrics first published (version 2)
      Standard market risk measurement methodology and data set for linear portfolios.

1995  RiskMetrics 3
      Non-linear risk using delta/gamma estimate or structured Monte Carlo.

1995  BIS paper on proposed changes to the capital accord for market risk measurement
      Internal VaR models to calculate capital charge.

1995  CAD 1 amendment to the Capital Adequacy Directive (implemented January 1996)
      UK adoption of EU CAD.

1996  RiskMetrics Monitor
      Non-linear portfolio risk using Cornish-Fisher.

1996  RiskMetrics 4
      Non-linear risk captured using Johnson or simulation.

1996  Amendment to the BIS Capital Accord to incorporate market risks
      Internal model recognition.

1998  CAD 2 amendment to the EU Capital Adequacy Directive
      Internal model recognition for market risk exposures.

1999  Britten-Jones & Schaefer; Li; Mina & Ulmer
      Non-linear risk measurement (3 papers).

2000  Mina
      Use of quadratic approximations for Monte Carlo simulation.

2001  Return to RiskMetrics: revision to Technical Document 4
      New cash flow mapping algorithm, emphasis on simulation for option portfolios.

Table 1: Chronology of VaR and non-linear risk publications
1.3 The G30 Report

The G30 report on Derivatives trading (Global Derivatives Study Group, 1993) was
fundamentally influential in the risk management industry. The primary recommendations
of interest to market risk managers [1] were:
Recommendation
      Description

Senior management oversight
      Senior managers had to understand the risks that the institution ran with its derivatives positions. This motivated a measure of risk that could be applied uniformly across different trading businesses, without requiring detailed knowledge of that business.

Mark-to-market for all trading positions
      All derivative positions should be marked to market, i.e. valued at their replacement cost. It was common practice at the time of the report to use older accounting approaches, based on accruals, to value swap positions. However, this was not regarded as adequate for risk management purposes, since it did not reflect changes in market conditions.

Market valuation
      Positions should be valued using appropriate adjustments so that the value fairly reflects the likely sale price if the position were to be closed out. This should, for example, reflect the bid-offer spread and the credit spread, if appropriate.

Revenue sources
      Traders should measure, and thereby understand, the sources of revenue in their positions, preferably broken down to risk component level.

Measure market risk
      The recommendation specifically mentions value at risk, a measure that would incorporate the following sources: price or rate change, convexity, volatility, time decay, basis or correlation, and discount rate.

Stress simulations
      Derivatives positions should be subject to regular what-if scenario analysis. This should cover not just changes in market prices but also changes in liquidity. Liquidity can affect the ability of the trader to realise the close-out price that has been used to value the position.

Independent oversight
      All derivatives trading activity should be monitored independently within the organisation. This independent function should have a clear mandate to impose the report's principles on trading management, if required, and to monitor the effectiveness of their adoption.

Table 2: Recommendations of the G30 report into Derivatives

[1] Other recommendations covered credit risk management and legal issues.
Chew reviewed the debate that the G30 committee sparked, as central banks developed
mechanisms for the regulation of market risk capital (Chew, 1994). Stress scenarios
competed with internal models and the BIS model to win the approval of the central banks
as the prescriptive method of measuring market risk. It was a bad year for bond markets,
coming on the heels of the collapse of the ERM in the previous year and, still fresh in the
regulators' minds, the stock market crash of 1987. The pressure to implement some form
of market risk capital allocation was clear, but the validity of a VaR number for capital
adequacy purposes was disputed. It was not until 1996 that the BIS would issue guidelines
for a VaR measurement that would be accepted for capital adequacy reporting (Basle
Committee on Banking Supervision, 1996).
1.4 Market Risk and VaR

The RiskMetrics Group's Risk Management: A Practical Guide (Laubsch & Ulmer, 1999)
presents an overview of the common approaches to VaR today. The key approaches,
discussed in detail in the next chapter, are variance-covariance [2], historical or scenario simulation, and Monte Carlo. Laubsch & Ulmer compare the features of the methodologies. From this comparison, we see that the variance-covariance methodology is
inadequate for non-linear portfolios, whereas historic simulation requires meticulous
collection of data, and Monte Carlo is highly computer-intensive. None of the approaches
is therefore suitable for low-cost3 measurement of option risk.
The second edition of JP Morgan's RiskMetrics technical document (JP Morgan, 1994)
presented a complete treatment of a VaR process, based upon a variance-covariance
approach. The document detailed the methodology for mapping assets into a model
portfolio, and the data that was required to support the methodology. The methodology
did not cover option price sensitivities. JP Morgan updated and improved upon this
document, and in the third edition (JP Morgan, 1995) options could be processed using the
delta-gamma (i.e. Taylor series) estimate, under which the portfolio P&L follows a chi-squared-type distribution,
or using structured Monte Carlo, which valued the position exactly, without the need for
cash flow mapping. Now in its fourth edition (JP Morgan, 1996)4, the methodology
specifies the mapping of the portfolio distribution to a transformation of a normal
distribution, or a simulation based on the Taylor series expansion. For some years, the
methodology provided something close to a complete practical handbook of risk
measurement. Many other authors have published in this area, but the RMG publication
has the advantage of being comprehensive for market risk, easily available and free.
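The delta-gamma idea referred to above can be illustrated with a short sketch. This is not the RiskMetrics implementation; the position Greeks and the distribution of price moves below are invented for illustration.

```python
import numpy as np

def delta_gamma_pnl(delta, gamma, dS):
    """Second-order Taylor (delta-gamma) approximation to an option
    position's P&L for an underlying price move dS."""
    return delta * dS + 0.5 * gamma * dS ** 2

# Hypothetical position: delta 0.6, gamma 0.05; daily price moves ~ N(0, 2).
rng = np.random.default_rng(42)
dS = rng.normal(0.0, 2.0, size=100_000)

pnl = delta_gamma_pnl(0.6, 0.05, dS)
var_95 = -np.percentile(pnl, 5)  # 95% one-day VaR, reported as a positive loss
print(round(var_95, 3))
```

The gamma term makes the P&L distribution a shifted, scaled chi-squared form rather than normal, which is why a plain variance-covariance quantile needs further adjustment for option portfolios.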
The authors delivered updates to the methodology for a number of years through regular
publications, RiskMetrics Monitor (JP Morgan, 1996 - 1999) and RiskMetrics Journal
(RiskMetrics Group, 2000-2001). The RiskMetrics Group recently brought the
methodology up to date with their publication, Return to RiskMetrics: The Evolution of a
Standard (Mina & Yi Xiao, 2001). The document describes changes to the methodology
since the most recent publication of the technical paper in 1997. The main points are the
unification of Monte Carlo and variance-covariance methodologies, through the use of
joint data sets, and a change in the cash flow mapping process. The document emphasizes
the use of simulation methods to model non-linear risk, mentioning that these can be
speeded up by modelling the price function of a complex derivative, perhaps using a
quadratic approximation. No reference is made to the technical document's computationally
less demanding treatment of non-linear risk, by the use of Johnson
transformations to model portfolio sensitivities.

2 The term parametric, which Laubsch & Ulmer use as a synonym for variance-covariance, is avoided here, as other
authors have also referred to Monte Carlo as a parametric methodology.
3 Cost drivers are both compute cycles and the effort required to collate and clean data.
4 The fourth edition of the technical document is referenced so frequently in this thesis that it will be referred to as [TD4].
Lawrence and Robinson (1995) challenge RiskMetrics' suitability as a risk measure, citing
the choice of a 95% confidence interval and the assumption of normality. Despite this, the
finance industry has accepted it as a de facto minimum benchmark for market risk
measurement. There is little work that examines the quality of the RiskMetrics outputs.
Hendricks (1996) compared moving averages, historic simulation and RiskMetrics
approaches to measuring Value at Risk for a foreign exchange portfolio. His conclusion
was that RiskMetrics gives a measure that is more responsive to the dynamics of the
market, but that simulation methods, which model the exact value of the portfolio over a
range of future price outcomes, ultimately give a better estimator of a confidence limit on
the portfolio P&L5. Alexander (1996) focussed on the comparison of volatilities estimated
using the RiskMetrics and GARCH approaches, suspecting that the RiskMetrics estimators
may have undesirable features. She found that the calculation for 28-day volatility exhibited
ghost features when significant events dropped out of the data set. This thesis will compare
the treatment of option positions in RiskMetrics using variance-covariance and simulation
approaches. It is more akin to the work of Hendricks than Alexander, in that it focuses on
methodology rather than data.
RiskMetrics has become a standard in the absence of any successful competing
methodology. Among the possible competition, Bankers Trust's RAROC 2020 is a
software solution with a methodology outside the public domain. RAROC, the Risk
Adjusted Return On Capital, is an incremental development of VaR. Actual portfolio
returns and VaR are tied together in a single measure of profitability. This has been
difficult to implement in finance, as it represents a significant change in the culture of the
trading room. The success of limits requires that they are transparent, which VaR
methodologies, by their nature, frequently are not. Nor is transparency the only issue.
Financial institutions have struggled to obtain a VaR number that may satisfactorily be
used as the basis of a limit measure, and therefore to restrict the activities of traders within
the risk appetite of the institution. For this to be effective, the traders must believe that the
VaR numbers fairly represent their risks, especially when compared to other trading
activities within the institution. Owing to inevitable intellectual, technical and budgetary
compromises made when implementing risk systems, VaR numbers will not necessarily pass this test. These same problems would inhibit any attempt to adjust P&L to take
account of VaR. RAROC 2020 is worthy of note in particular because it incorporated
treatment of non-linear risks, via Monte Carlo simulation, at an early stage in the literature
(Falloon, 1995).
A reader seeking an anecdotal appreciation of the VaR process might refer to Jorion
(1996). He gives a good introduction to the subject of VaR, and includes entertaining case
studies of headline making losses in the financial community, such as Orange County. This
book is particularly good to understand the motivations for risk management and the
regulatory framework. It also includes a brief mention of the delta-gamma estimate for
non-linear risk.
1.5 Credit Risk

JP Morgan launched CreditMetrics in 1997. As with RiskMetrics, the CreditMetrics
methodology offers a framework and data for risk calculation. The focus of CreditMetrics
is credit risk, specifically the risk that an entity to which the portfolio is exposed will suffer
a credit rating transition or default. The approach generates credit scenarios by analysing
equity price movements and assuming a relationship between equity prices and transition.
This method is preferred to using credit transition data directly, as credit data is known to
be infrequently observed (i.e. only when companies default) and richer in the US than
elsewhere. By contrast, equity price data is freely available around the world and observable
at any frequency one chooses, down to the frequency of individual transactions.
CreditRisk+, an alternative methodology available at around the same time, uses a Poisson
distribution to model default events within sectors. These methods are very different, but
neither seems to have gained the standing of RiskMetrics, which is synonymous with
market risk measurement in many people's minds. One factor preventing this has been
timing. At the time RiskMetrics was launched, many institutions were struggling to
implement in-house methodologies and systems for market risk measurement, or perhaps
had not started at all. However, for many years institutions have tracked credit exposures in
a limited way, so they have acceptable systems in place and less reason to adopt something
new.

5 The lower limit is a VaR measure.
1.6 Integrated Risk

The ultimate goal in some industry minds is to build an integrated risk management
function, measuring both credit and market risks. This has particularly been a focus for
software vendors, attempting to unify the different demands of market and credit risk
measurement. While many have succeeded in implementing market and credit risk systems
using the same data, Glass (1996) highlighted the reasons why an integrated risk measure is so
difficult, namely the different time perspectives involved in market and credit risk, the
requirement to run systems at transaction (counterparty) level for credit risk and the limited
benefits that may accrue in comparison to the implementation cost. Risk managers who
wish to compile risk-adjusted return measures across businesses that are subject to these
types of risk must develop a methodology and systems environment to overcome these
hurdles, as well as a convincing business case.
1.7 The UK Regulator

Regulatory requirements have developed in parallel with the risk literature over the last few years.
The Capital Accord of 1988 laid down a regulatory framework for reporting capital
adequacy against credit risks. This covered important concepts, such as the risk of debt
arising from third world countries versus that of the developed world, and the need for
financial institutions to maintain Tier 1 (shares & recognised reserves) and Tier 2 (other
reserves and hybrid debt) capital, to protect against insolvency in the event of large scale
counterparty default. The accord followed the Latin American debt crisis of the early
eighties, and came at a time when swap trading was a relatively young discipline. For this
reason, the accord focused on capital requirements to protect against credit risk. From
1993, around the time of the G30 report, the Basle committee developed market risk
measures for the purposes of capital adequacy. An amendment to the accord, published by
the Basle committee in 1996, includes proposals for allocating capital against market risk
(Basle committee, 1996), reflecting the developing knowledge of these types of instrument
within the regulatory framework. The proposal includes a new category of capital, Tier 3
(subordinated debt), which can be set aside purely to protect against market risks.
Rather than put forward a standard methodology, in the style of RiskMetrics, for
measuring market risk, the amendment allows the use of the institution's own internal VaR
models. The Basle committee does go as far as recommending a minimum confidence
interval (99%) and risk horizon (10 days) to use in the models. This flexibility over the
detail of the model implementation, adopted in the UK as CAD2, allows different institutions to craft VaR measures to suit their own risk profiles. This is not a loophole that
allows the institution to understate their risks. All VaR model recognition from the FSA is
dependent on feedback from backtesting exercises, in which VaR measures are compared
to actual P&Ls. The regulator sets limits for the number of times P&L can exceed the VaR
measure in a given time period without incurring additional capital charges. If a significant
risk shows up in backtesting then the penalties can be severe, and could potentially lead to
the loss of model recognition altogether. Institutions can use Tier 3 capital against market
risk exposures, provided that this does not exceed 250% of the institution's Tier 1 capital that is allocated to support market risk. Tier 2 can be substituted for Tier 3, subject to the
same restrictions.
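The backtesting regime described above reduces to a simple exception count. The sketch below uses invented P&L and VaR series; the zone boundaries are those of the Basle traffic-light framework for 250 days of 99% VaR backtesting (0-4 exceptions green, 5-9 yellow, 10 or more red).

```python
def count_exceptions(daily_pnl, daily_var):
    """Count backtesting exceptions: days on which the realised loss
    exceeded the reported VaR (VaR given as a positive number)."""
    return sum(1 for pnl, var in zip(daily_pnl, daily_var) if -pnl > var)

def basle_zone(exceptions):
    """Basle traffic-light zone for 250 days of 99% VaR backtesting."""
    if exceptions <= 4:
        return "green"
    if exceptions <= 9:
        return "yellow"
    return "red"

# Hypothetical year: 250 trading days, flat VaR of 1.0, six breaching losses.
pnl = [0.1] * 244 + [-1.5] * 6
var = [1.0] * 250
n = count_exceptions(pnl, var)
print(n, basle_zone(n))  # 6 exceptions fall in the yellow zone
```

In the yellow zone the supervisor scales up the capital multiplier; persistent red-zone results put model recognition itself at risk.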
The European Community (EC) reviews Basle committee reports and considers whether
the proposals should be incorporated into EC law. This has already led to the EC Capital
Adequacy Directive (CAD). Member countries must implement the EC directives (there is
no opt-out), but the implementation process is subject to different interpretations and the
legislative priorities of member states. The UK was the only member to implement CAD
by the deadline of 31/12/1995. Owing to the speed of adoption, the Bank of England had
to show leniency to banks that were unable to implement systems in time. Several banks
pooled funds to sponsor development of a solution by a reputable consultancy firm, but
then all found themselves unable to satisfy the deadline of the directive when the software
was delivered late.
In the UK, the FSA, and the Bank of England before them, have set down a number of
procedures for calculating market risk for regulatory reporting purposes, which certain
regulated institutions must adhere to. In 1995 the Bank of England issued the Green
Book, formally known as Draft Regulations to Implement the Investment Services and Capital
Adequacy Directives (Bank of England, 1995), which contained new requirements for capital
adequacy reporting of market risk. The bank adopted a duration ladder/delta approach for
interest rate risk, with additional capital buffers for option positions. The bank additionally
issued notes for guidance to assist with the buffer approaches and more sophisticated
alternatives. The FSA has revised the whole supervisory policy documentation (a
replacement for the Green Book). The favoured method for assessing regulatory capital is
now the Scenario Matrix, whereby the portfolio is subjected to a number of scenarios. The
scenarios are built up as a matrix of price and implied volatility values. The central point of
the matrix is the current level of price and implied volatility. Off-centre elements representrevaluation of the portfolio with a shift in price, volatility or both. The worst revaluation
outcome in the matrix is taken as the capital requirement.
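A minimal sketch of the scenario-matrix calculation follows. The revaluation function here is a toy quadratic standing in for a full option pricer, and the shift grids are invented rather than the regulator's prescribed ranges.

```python
import itertools

def scenario_matrix_capital(revalue, price_shifts, vol_shifts):
    """Revalue the portfolio at every (price shift, vol shift) node of the
    matrix; capital is the worst fall in value from the central (0, 0) node."""
    base = revalue(0.0, 0.0)
    worst = min(revalue(dp, dv)
                for dp, dv in itertools.product(price_shifts, vol_shifts))
    return base - worst

# Toy revaluation function standing in for full portfolio repricing:
# positive delta, long gamma, and a short exposure to implied volatility.
revalue = lambda dp, dv: 100.0 + 0.5 * dp + 0.02 * dp ** 2 - 3.0 * dv

price_shifts = [-8.0, -4.0, 0.0, 4.0, 8.0]   # underlying price moves
vol_shifts = [-0.25, 0.0, 0.25]              # implied volatility moves
print(round(scenario_matrix_capital(revalue, price_shifts, vol_shifts), 2))  # -> 3.47
```

Because the central node is included in the grid, the capital requirement is never negative: the worst node is at best the current valuation.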
The latest publication from the regulator is the 2001 amendment to the Capital Accord.
This recognizes and addresses parts of the 1988 accord that now seem weak, inequitable or
dated. It includes revised treatment of some types of debt in the existing standardised
approach, plus two forms of internal ratings-based approaches to credit exposure
assessment. It also makes provision for setting capital against the operational risks of a
firm. We should see research interest picking up around the subjects of operational risk,
which is dealt with only sketchily in the proposals, and internal ratings systems. Internal
modelling of credit risk itself will not receive any kind of boost from the regulator, since it
is not permitted for reporting capital adequacy on credit exposures.
1.8 Non-Linear Risk Measurement
One of the most challenging aspects of a risk management process is the way that it
captures the risks of an option portfolio. RiskMetrics first proposed the Cornish-Fisher polynomial approximation to the percentile. In this approach, the tail of the distribution is
modelled as a polynomial function. The fourth edition of the Technical Document
contained a method based on Johnson curves. This system of curves can be fitted to the
first four moments of an option portfolio's distribution, approximated by a quadratic form,
to derive a Value at Risk number. Johnson curves have proved to be unsatisfactory to
RiskMetrics users (e.g. Mina and Ulmer, 1999) and the group currently recommends a
simulation approach. Li (1999) shows how the theory of estimating functions can also
construct a VaR estimate from the first four moments of an option distribution, or indeed
any portfolio distribution with excessive skewness or kurtosis. Mina and Ulmer (1999)
used a Fourier inversion of the moment generating function to obtain the portfolio VaR,
and evaluated the accuracy and speed of execution of standard RiskMetrics, Cornish-Fisher
and two forms of Monte Carlo against this measure. Full Monte Carlo simulation is
distinguished from Partial Monte Carlo, in which the price function of complex derivatives
is modelled with a quadratic approximation. The paper concludes that the Partial Monte
Carlo and Fast Fourier Transform offered the best trade-off of speed versus accuracy.
Finally, Britten-Jones and Schaefer (1999) use the first four moments about zero of the
portfolio pdf. They express the change in portfolio value as the sum of a set of non-central
chi-square variables, developing a system of equations that can be used in conjunction withchi-square tables to derive the VaR.
1.9 Motivations and research objectives

Value at Risk has been a developing science for more than a decade. Regulators around
the world apply pressure to financial institutions, large or small, to provide Value at Risk
measures as part of their regulatory returns. The cost of developing an internal
methodology is high. Many smaller financial institutions find that the cost of compliance
with the regulatory regime is similar to that of larger institutions with deeper pockets. For this
reason, they may turn to off the shelf methodologies built in to software packages. Such
institutions must be wary of implementing an external methodology that is inappropriate
for the types of risk. These risks may include exposure to options, financial instruments
that present multiple dimensions of risk and headaches for the risk manager. We take the
most popular methodology, RiskMetrics, in its most popular form, variance-covariance,
and assess its suitability for risk measurement of interest rate options. In particular, we
examine the Delta-Gamma-Johnson extension to the variance-covariance methodology,
and ask whether this approach will provide risk measures consistent with the
computationally more costly simulation approach. The framework used to
produce this result can in fact be used more generally, for any portfolio for which a
variance-covariance matrix can be derived.
RiskMetrics has dominated the Value at Risk literature, as a central reference point for
academic interest. Much of the research published on the methodology has focussed on
the techniques used to collect the data. Very little has been published on the delta-gamma-
Johnson method within it, and no research known to this author has previously looked at
the implementation of the delta-gamma-Johnson method in such detail.
1.10 Organisation of the thesis

The structure of the rest of this thesis is as follows:
The first part of the thesis sets the context for our study of RiskMetrics. In Chapter 2, we
examine the development of risk management tools, including Value at Risk, over the last
25 years. We see how early measures, such as notional limits, were found to be inadequate
and gave way to loan equivalents and basis point values. These in turn have their own
limitations, which have led to the development of Value at Risk. We complete Chapter 2
with a detailed review of Value at Risk processing. We see how a financial institution
gathers together its portfolio data, market data and reference data. We see how market data
is used to generate scenarios and risk factors. We outline the three primary methodologies
for measuring Value at Risk: variance-covariance, historic simulation and Monte Carlo.
The second part of the thesis introduces the RiskMetrics methodology. In Chapter 3, we
see how the methodology has developed, from a simple variance-covariance method with
data gathering, to a sophisticated blend of all the principal VaR methodologies, with a rich
data set. In Chapter 4, we take a first look at the gaps between the RiskMetrics model and
typical portfolios of financial institutions.
The third part of the thesis examines the challenges of non-linear risk measurement in
more detail. In Chapter 5, we look in detail at the measurement of non-linear risk using the
RiskMetrics Johnson modelling method. We present the original work in the thesis, in
which we propose a procedure that can be used to assess the robustness of the RiskMetrics
variance-covariance calculations for non-linear Value at Risk. We demonstrate the use of
the procedure on some test transactions and a portfolio.
In Chapter 6, we determine what conclusions we can draw from the results of our
experiments, and outline further work that could be carried out to develop the research
topic.
2 VALUE AT RISK CONCEPTS
2.1 Introduction

Value at Risk (VaR) has been an important component of financial risk management for
five years. It has become commoditised, such that VaR systems can be bought
'off the shelf'. It has become enshrined within the financial regulations of the world's
banks. In this chapter, we review the evolution of risk management practice that led to
VaR. Risk management is defined as the process of monitoring the risks that a financial
institution is exposed to, and taking action to maintain this exposure within levels set by
the board's risk appetite. For this research, we define Market Risk as the uncertainty in the
close-out value of an asset or liability as a result of changes in market prices. This chapter
looks at:
- early attempts to limit exposure to market risk, which specify the maximum
  notional value that may be held in particular types of deal;
- counterparty limits, designed to limit the exposure to a financial institution or
  group of institutions;
- concentration limits, which view the country or industry as the risk, rather than the
  individual counterparty, and limit the exposure to that;
- reactive limits, such as stop loss limits, designed to limit the potential for loss on a
  position;
- market risk limits based on portfolio sensitivities, such as Basis Point Value (BPV)
  limits, which limit the exposure to specific market risk scenarios, such as a shift in a
  yield curve.
All these developments in limits are documented as the context of Value at Risk.
VaR is defined as the portfolio loss that will not be exceeded with a given level of
confidence, over a given trading horizon. This is not an absolute limit on loss and must not
be read as such. In this chapter, we will examine the inputs to VaR: trade data, static data, market data and instrument data. The chapter also considers the problems that can occur
when putting this data together. We describe different forms of processing that can be
used to generate a VaR number, using the common classifications of variance-covariance,
historic simulation and Monte Carlo. The text assesses the differences in the way they
process data, the different data requirements and the qualitative requirement for
computational power. Also, the description highlights circumstances that lead to favouring
one approach over another, the type of market (normal or non-normal), and the type of
instrument (linear or non-linear).
This chapter also looks at the outputs of VaR, when using each of the approaches
described above. Some approaches will offer a consolidated number only, but others offer
more insight into the sources of risk. We outline how financial institutions can use the
backtesting approach to validate their VaR process, which consists of data, models and
procedures.
Finally, we assess VaR implementations in the public domain, notably RiskMetrics, and
give the reasons why this is of particular interest in this research.
2.3 A history of Risk Management

A history of risk management is really a history of the financial industry. In many cases,
significant losses at an institution have been due to unauthorised or reckless activities by an
employee, rather than failures in risk management. However, some of the largest losses
have been attributable to misunderstood credit and market risks. The following table sets
out some significant events from the past thirty years that have shaped the risk
management functions we see today.
Date            Risk Management Event

Seventies       Latin American exposures.
Early Eighties  Latin American debt crisis.
Eighties        Savings & Loan industry (US building societies) loses $150bn in mismatched interest rate exposures; many go bankrupt.
1987            Stock market crash.
1988            Capital Accord.
1991            Counterparties lose $900m on Hammersmith & Fulham Council swap transactions, declared illegal by the House of Lords.
1992            $14bn of taxpayers' money is spent shoring up the pound in the ERM, but ultimately speculators force the pound to float freely.
1993            G30 report into derivatives trading.
1993            Metallgesellschaft loses $1.3bn through an American subsidiary and is bailed out by creditors.
1994            Orange County goes bankrupt after the county treasurer, Bob Citron, leverages a $7.5bn portfolio and loses $1.64bn (after accumulating additional revenues of $750m for the county in previous years).
1994            Procter & Gamble loses $157m on differential swap trading with Bankers Trust.
1994            JP Morgan's RiskMetrics.
1995            Barings loses $1.3bn through positions taken by Nick Leeson and goes bankrupt.
1995            Daiwa recognises losses totalling $1.1bn, accumulated by a trader at its New York offices, and is forced to close its US operation.
1996            Amendment to the Capital Accord for market risk.
1997            NatWest recognises losses of £90.5m on swaptions books deliberately mis-marked by two traders.
1998            Asian debt crisis.
1998            Russian debt crisis.
1998            Long Term Capital Management loses $3.5bn of investors' money and is bailed out by a consortium of banks.
1999            Introduction of the Euro eliminates currency risk between participating countries.
2002            Enron Power Trading files for bankruptcy after recognising a series of losses and debts that had previously been concealed in off-balance-sheet deals with partnerships.
2002            A former US branch manager for Lehman Brothers brokers is charged with stealing $40m from clients' funds.
2002            Allied Irish Bank recognises foreign exchange losses of $691m over five years, after a trader at American subsidiary Allfirst recorded fictitious options deals as hedges for real spot and forward FX contracts.

Table 4: Significant Risk Management Events (Source: Jorion (1997))
2.4 Risk in the Financial Markets

It is normal for a financial institution to take risks. Calculated risks make financial profits,
or returns, for its shareholders. Risk introduces uncertainty in the level of the return, for
which the institution will charge a risk premium. Two well-understood sources of risk for a
financial institution are market risk and credit risk:
- Market risk is defined as the uncertainty in the close-out value of an asset or liability
  as a result of changes in market prices.
- Credit risk is the uncertainty of financial receipts that arises from the possibility that
  a party to a contract may be unable to perform their obligations.
To illustrate the difference between these risks and their associated returns, consider two
divisions of a bank: a traditional banking business, and a proprietary trading arm. The
banking division works in just the same way as a retail bank. It holds accounts for its
clients, liaising with other banks to effect the transfer of funds between the clients'
accounts and other accounts around the world as required. This service is usually fee based.
The bank will place any excess of funds held in the account on the money markets
overnight to earn interest. A part of this income will be paid to the account holder.
Similarly, the bank will borrow from the market to cover any shortfall on the account, and
charge a margin on the interest cost to the client. In practice, the bank will manage liquidity
across all borrowers and lenders, resulting in larger profits for themselves. The bank may
also provide a long-term loan to the client, again charging the client a margin on its own
funding costs. Some banks will also deal in the market on behalf of a client, taking a margin
for themselves but leaving the market exposure with the client.
The key return for a banking division is Net Interest Income. This is the margin that the
bank makes by borrowing money from the market at interbank rates and lending to
customers at higher rates or, conversely, holding deposits for the clients and placing them
on the market. The bank is acting as an intermediary, adding its own margin on to the
transaction to secure a profit for the bank's shareholders. The division will also have fee
based income, which will have associated direct costs. The key risk that the division takes is
credit risk. Banking positions are usually held to maturity, so fluctuations in market rates do
not affect the income stream. However, if the bank has lent money to a client who then
becomes insolvent, the bank will lose a high proportion of the monies due to it. The bank
is also exposed to liquidity risk. This occurs when it makes payments from a client account
based on an expectation of a receipt later in the day. This can impact the bank's profit on
the transaction, in the extreme case that the receipt never occurs. It can also impact the
bank's ability to cover further transactions with other counterparties, owing to a shortage
of funds in the account, even if the receipt is simply delayed. This can expose the bank to
charges and interest payments. Typically, in cases where the receipt never occurs, the bank will recover some of the outstanding value, dependent on the seniority of the debt and the
other claims on the client's assets, but the level of recovery varies widely (JP Morgan, 1997).
A proprietary trading arm is a very different business. The division uses the bank's own
shareholder capital to obtain credit lines with counterparties in the markets. The bank then
uses these lines to take positions in the markets, betting on changes in market conditions
which will lead to increases in the market value of their positions. Proprietary trading is a
high risk, high return business. Without effective hedge strategies, this business will suffer
from volatile profits. This volatility is a concern for many investment banks, which may
rely on proprietary trading for 60% or more of their total profits.
The key returns of the proprietary trading arm of a bank will be Net Interest Income (NII)
and Capital Gains. Net interest income is applicable to interest bearing positions which are
held to maturity and funded by other interest bearing instruments. This type of accounting
for returns is becoming less common, as market turnover increases and positions are rarely
held to maturity. The important measure of return then becomes the capital gain, measured
by the change in the close-out value of the positions, together with any net cashflow
income or expense on the portfolio. The business is subject to market risks and credit risks.
Credit risk is present because the bank is still exposed if the counterparty or obligor6 fails
to perform. In contrast to the banking division, the amount of the exposure is not
necessarily the nominal value, particularly on derivative contracts. Market risk is the main
risk, as it is exactly this uncertainty from which the division hopes to profit. Again, the
nominal value of the contract will be an input into the exposure, but it will not be the only
consideration.
In proprietary trading, the distinction between market risk and credit risk becomes blurred.
A loan to a non-sovereign counterparty will include a credit spread, which rewards the
lender for taking a higher risk than with sovereign debt. The London interbank offer rate
6 The obligor is a more general description of a debtor that includes issuers of stock, bonds and other financial
instruments.
Chapter 2: Value at Risk Concepts
Page 31
(LIBOR) is calculated7 from the offered inter-bank deposit rates supplied by a panel of
contributor banks8. The average credit quality of the banks is taken into account in this
rate, so it is possible to calculate it as a spread over equivalent maturity treasuries. The rate
is used as the basis of valuing the cash component of positions for many trading
operations. Movement in this rate is usually attributed to market risk, but when Barings
became insolvent, credit spreads on LIBOR widened dramatically. This was an example of
a realization of an extreme market risk event as a result of a credit event.
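The fixing calculation described in footnote 7 (an arithmetic mean of the middle two quartiles of the contributed rates) lends itself to a short illustration. The sketch below assumes a panel size divisible by four; the quotes and the function name are invented for illustration, not taken from any official specification.

```python
def libor_fixing(contributed_rates):
    # Sort the panel's quotes, discard the top and bottom quartiles,
    # and take the arithmetic mean of the middle two quartiles.
    rates = sorted(contributed_rates)
    k = len(rates) // 4  # size of each discarded quartile (assumes a panel divisible by 4)
    middle = rates[k:len(rates) - k]
    return sum(middle) / len(middle)

# An invented 8-bank panel: the two highest and two lowest quotes are dropped
quotes = [4.10, 4.12, 4.15, 4.16, 4.18, 4.20, 4.25, 4.40]
print(round(libor_fixing(quotes), 4))  # 4.1725, the mean of the middle four quotes
```

Trimming the extremes in this way makes the published fixing robust to a single outlying (or strategically shaded) contribution from any one panel bank.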
This research will focus on market risk, and in particular the risk of certain types of
derivatives trades. This research is interested specifically in the sophisticated methods used
to manage the risk of derivatives, and one kind of derivative in particular: the option. The
ability of the risk taker to radically alter the risk profile of a portfolio with options trades,
and the new risks that this leads to, make it the most interesting financial instrument from
a risk management perspective.
2.5 Concentration and diversity
A risk manager is concerned about the risk profile of the institution's portfolio, as well as
individual positions. Portfolio risk measures are built up from risk sensitivities of the
individual positions, yet the positions themselves do not tell the whole story. The risk
manager must also have a way of modelling the concentrations, diversities and hedges in
the portfolio. Risks are concentrated if a number of risks are related in such a way that
all the risk factors tend to move together. This can happen when a trader buys a number of
shares in the same industry sector, such as pharmaceuticals. The value of each share is
strongly related to all the other shares, since news about the sector is likely to affect all
shares equally. News that one pharmaceutical company has got approval for a new drug
may push up prices for the whole sector, if market sentiment is that the approval signals
potential successes for the other companies in their outstanding approvals. Risks are
hedged if they are related in such a way that they tend to move in opposite directions. One
example might be a long stock position that is hedged by a short position in the index of
the market the stock trades in. For instance, a trader may buy BT shares but short FTSE
futures. These positions do not exactly hedge each other, both because the FTSE is
7 The calculation is an arithmetic mean of the middle two quartiles of the contributed rates for a given currency and
maturity
8 Members of the panel are chosen for their expertise in a particular currency, activity in the London market and credit
standing. There is a different panel for each currency quoted.
composed of more than just BT shares, and because there is a timing difference between
holding cash (stock) and shorting futures. However, the risk manager must recognize the
partial hedge when managing the risk of the portfolio. Risks are diversified if movement in
each risk factor tells us nothing about movement in other risk factors. Portfolio theory
strongly demonstrates the benefits of diversity in maximizing the return for a given level
of risk, or minimizing the risk for a given return.
The usual way for the risk manager to take account of concentration, hedging and diversity
across risk factors is to obtain correlation data for these factors. The correlations tell the
risk manager how to combine the risks calculated for individual positions, or how to build
a set of possible movements in the factors that are consistent with previous observations.
Concentrated risks will have a positive correlation. Hedged risks will have a negative
correlation. Diversified risks will have a near zero correlation, on average.
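As a sketch of how correlations combine position-level risks into a portfolio figure, the fragment below computes portfolio volatility as the square root of x'Cx, where x is the vector of exposures scaled by their volatilities and C is the correlation matrix. The exposures, volatilities and function name are illustrative assumptions, not figures from this research.

```python
import math

def portfolio_volatility(exposures, vols, corr):
    # x_i = exposure_i * volatility_i; portfolio volatility = sqrt(x' C x)
    x = [e * v for e, v in zip(exposures, vols)]
    n = len(x)
    variance = sum(x[i] * corr[i][j] * x[j] for i in range(n) for j in range(n))
    return math.sqrt(variance)

expo, vols = [1_000_000, 1_000_000], [0.01, 0.01]
concentrated = portfolio_volatility(expo, vols, [[1.0, 0.9], [0.9, 1.0]])   # correlated risks
diversified = portfolio_volatility(expo, vols, [[1.0, 0.0], [0.0, 1.0]])    # independent risks
hedged = portfolio_volatility(expo, vols, [[1.0, -0.9], [-0.9, 1.0]])       # offsetting risks
```

With these inputs the concentrated figure (about 19,500) is well above the diversified one (about 14,100), and the hedged figure falls to about 4,500, illustrating the three cases described in the text.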
2.6 Risk Management
Most banks do not want to exhibit high levels of volatility in their profits. This can be very
unsettling for shareholders, and a sustained period of significant losses can threaten the
commercial viability of the organization. For this reason, banks try to moderate the level of
risk, by making sure that a particular position will not lose an unpalatable amount in a
worst case scenario, and by diversifying their risks across different types of positions. The
next section examines different methods of limiting exposures. It also shows how risk
managers can look for concentrations in a portfolio that indicate excessive exposure to a
particular group of correlated risks.
[…]
market risk versus credit risk would offer little insight into which utilisation posed a greater
threat to the capital base of the institution. Both of the trading activities mentioned above
raise further issues with regard to netting of exposures. The bond traders will often fund
their positions through repo trades. The convertible arbitragers may also have repoed their
positions to enhance the yield, and additionally may have taken a short position in equities
to hedge the risk embedded in the equity options they hold. Some mechanism within the
limit structure has to recognise that some utilisations reduce risk, rather than increase it.
Such netting rules are difficult to implement in a nominal-based structure. The problems
inherent in the nominal exposure approach can be summarised as:
- reliance on management experience to understand the limits
- lack of systematic treatment across different trading activities
- difficulty of comparison across business units
- no systematic netting process
- cannot be used for assessing capital adequacy
2.6.2 Weighted Exposures
To address the weaknesses of nominal exposure limits when assessing capital adequacy, the
regulators have devised weights based on the instrument giving rise to the exposure. The
weights attempt to capture the comparative riskiness of the instruments given equal
nominal exposures. The 1988 BIS Capital Accord gives weights for calculating capital
requirements to cover potential credit losses. The Accord specifies a 0% weighting for
OECD (Organization for Economic Co-operation and Development) sovereign debt, a
20% weighting for claims on OECD banks and a 100% weighting for LDC (less developed
country) sovereign debt (Basle Committee on Banking Supervision, 1988). The judgement
expressed here is that OECD sovereign debt is effectively free of credit risk, whilst claims
on OECD banks carry one fifth of the risk weight applied to LDC sovereign debt. This
assessment can take into account the recovery rates in the case of default for each class of
debt. When companies go into default, bondholders often recover a percentage of the
nominal value of their bonds, depending on the funds raised by selling off the institution's
assets. These empirical recovery rates reduce the actual credit exposures inherent in the bonds.
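The arithmetic of the 1988 weighting scheme is simple enough to sketch. The three weights below are the buckets quoted above; the 8% minimum capital ratio is the Accord's headline figure, and the portfolio is invented for illustration.

```python
# 1988 Accord risk weights for the three asset classes quoted in the text
RISK_WEIGHTS = {"oecd_sovereign": 0.00, "oecd_bank": 0.20, "ldc_sovereign": 1.00}

def required_capital(exposures, capital_ratio=0.08):
    # Risk-weighted assets: each nominal exposure scaled by its bucket weight,
    # then multiplied by the minimum capital ratio
    rwa = sum(RISK_WEIGHTS[asset_class] * nominal for asset_class, nominal in exposures)
    return capital_ratio * rwa

book = [("oecd_sovereign", 100e6), ("oecd_bank", 50e6), ("ldc_sovereign", 10e6)]
print(required_capital(book))  # 8% of (0 + 10m + 10m) risk-weighted assets = 1.6m
```

Note that the $100m of OECD sovereign debt attracts no charge at all, while the $10m LDC position consumes as much capital as the $50m of interbank claims, which is exactly the judgement the weights encode.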
Although the framework is crude, one problem with the nominal exposure method is
neatly avoided with the use of weights. The market experience necessary to interpret
traditional exposure limits is being captured in the weighting framework. The framework
provides the regulator with a means for assessing capital adequacy for different trading
activities on an equal basis.
Although this approach is an improvement on nominal exposure limits, there is very little
granularity in the weighting structure. Not all OECD banks carry the same level of risk.
This is the motivation behind recent BIS discussion papers (Basle Committee on Banking
Supervision, 2001). The BIS is actively engaged in an industry consultation aimed at
revising the 1988 accord. A new 50% risk bucket will be used for certain corporate
exposures, and a 150% bucket has been introduced for particularly risky assets, for instance
low-rated sovereigns or corporates, or funds past due that have not been written off. This
bucket can also be used for assets where the volatility of recovery rates is high. The BIS has
also proposed two new internal ratings-based approaches, for which regulated entities can
take greater control of the capital charge calculation. Participants in the approaches will be
able to estimate their own probabilities of default, and then (depending on competency)
either estimate the loss given default themselves, or use a metric supplied by the regulator.
Finally, the total capital charge will be adjusted to reflect the granularity (diversity) of the
portfolio, with respect to a reference portfolio. This weighting scheme offers more
flexibility than a bucketing system, but is not expected to reduce the overall capital
adequacy requirements of the industry by a significant margin.
Since the Accord was introduced in 1988, much of the work of the committee has
focussed on allowable netting. One of the advantages of the weighting approach is that, by
converting exposures to common units of risk, some degree of netting recognition is
possible. The current proposals allow for the widest netting yet, with recognition of a wide
range of collateral, netting of exposures within a counterparty, adjustments for currency
and maturity mismatches, and recognition of credit derivatives.
The regulators initially adopted a similar weighting-based approach for market risk (Basle
Committee on Banking Supervision, 1993). This is still the standard approach for market
risk exposures, although more sophisticated treatments are available. Interest rate
exposures are mapped to a duration ladder. The duration of a set of cashflows is, roughly
speaking, the maturity point to which the cashflows are exposed to interest rate risk, on
average, assuming a flat yield curve9. Each rung on the ladder then receives a weighting to
9 For a comprehensive discussion of duration, see Fabozzi (1993).
reflect the volatility of the interest rates for that duration bucket. Within the buckets,
exposures are netted.
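The flat-curve duration used to place cashflows on the ladder can be sketched as the Macaulay measure: the present-value-weighted average maturity of the cashflows. The bond below is invented for illustration.

```python
def macaulay_duration(cashflows, flat_yield):
    # cashflows: list of (time_in_years, amount) pairs, discounted at a flat yield.
    # Duration is the present-value-weighted average of the cashflow times.
    pvs = [(t, cf / (1 + flat_yield) ** t) for t, cf in cashflows]
    total_pv = sum(pv for _, pv in pvs)
    return sum(t * pv for t, pv in pvs) / total_pv

# A 3-year bond with 5% annual coupons, priced at a 5% flat yield
bond = [(1, 5), (2, 5), (3, 105)]
print(round(macaulay_duration(bond, 0.05), 2))  # 2.86 years
```

Under the regulatory scheme described above, this bond would be slotted into the ladder rung nearest 2.86 years and weighted by that bucket's interest rate volatility.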
Weighted exposures are still very important today, because they are central to the
regulatory reporting process for many institutions. However, they do not provide
information that a trader could use to manage a portfolio. The problems inherent in the
weighted exposure approach can be summarised as:
- reliance on some management experience to understand the limits
- difficulty of comparison across business units
- limited systematic netting process
2.6.3 Basis Point Value (BPV) limits
The sophistication of internal models for market risk stems from the unsatisfactory results
of instrument exposure limits. The allocation of exposure limits is largely
dependent on what the management perceive the risks of the different trading instruments
to be. There is no direct, mathematical link between the limit structure and the losses that
the bank is trying to restrict. As the market becomes more sophisticated, new, exotic
instruments are introduced, which management do not know how to assess, or that behave
in unpredictable ways.
There are two motivations for measuring the risk on a position with more precision.
Firstly, banks need to make sure that they have adequate capital set aside to cover adverse
market movements on their positions. Secondly, given that the capital base of the bank is
limited, managers want to allocate capital to trading strategies that generate comparatively
high profits whilst taking comparatively low risks. Given two competing business units,
they cannot favour the more profitable one without also assessing the risks the unit is
taking. The introduction of more complex derivative instruments has made comparison of
profits on an equivalent basis progressively more difficult. It might be relatively
straightforward to compare the profits from trading in stocks to bond trading, but certainly not
easy to compare convertible arbitrage to exotic interest rate options.
To put this on a more mathematical basis, management introduced limits that measure
the responsiveness of the instrument to a change in market conditions. Basis Point Value
(BPV) limits express the maximum exposure to a parallel shift in the interest rate curve of
[…]
by stating an amount that losses will not exceed. There are strong financial incentives for
the bank never to exceed the amount.
2.7 Value at Risk
Banks ultimately need limit structures to restrict the size of the losses that the trading will
incur. It makes sense to set limits in terms of the maximum loss the bank wishes to incur,
as with the stop loss, but to relate the usage of the limit to the portfolio of outstanding
trades, as with the BPV limit.
None of the limits discussed so far takes into account the dynamics of the market. This
may be appropriate in markets where the volatility moves in fairly slow (economic) cycles,
such as the credit market. If the organisation withstood the losses arising from an exposure
of $1bn to B-rated corporations last year, it is likely that it will be able to do so again
this year. This does not mean that the expected losses are static over time, only that they
change so slowly that a quarterly review of limits would be adequate. It may take several
months of recession for credit spreads to widen appreciably, notwithstanding the
occasional Barings-style default.
Interest rate volatility changes on a much faster timescale, sometimes overnight. While an
exposure to $1bn of futures contracts seemed palatable last week, this week $750m might
be the most the bank should take on. The difference between last week and this week may
be the perceived volatility of the market, i.e. the magnitude of likely moves in rates and
prices. This volatility may vary along price curves, and movements in prices may only be
partially correlated. The bank must come to some opinion of the sort of market
movements that it expects. Often this is based on historic price data, but can be modified
to respond to periods of high market volatility. In RiskMetrics, this is achieved by
weighting the most recent price movements more highly than previous movements. It may
also be possible in some risk management systems for the risk manager to override the
historic volatility used for the risk measure with an estimated measure that reflects current
market sentiments.
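The RiskMetrics-style weighting of recent movements can be sketched as an exponentially weighted moving average of squared returns. The recursion below uses the decay factor lambda = 0.94 that the RiskMetrics documentation recommends for daily data; the return series is invented.

```python
def ewma_volatility(returns, lam=0.94):
    # variance_t = lam * variance_{t-1} + (1 - lam) * r_t^2
    # A smaller lam forgets old observations faster; 0.94 is the
    # RiskMetrics choice for daily returns.
    var = returns[0] ** 2
    for r in returns[1:]:
        var = lam * var + (1 - lam) * r ** 2
    return var ** 0.5

calm = [0.001] * 60        # a quiet market
shocked = calm + [0.05]    # a single large move on the most recent day
print(ewma_volatility(calm) < ewma_volatility(shocked))  # True: the recent shock dominates
```

This is what the text means by weighting the most recent price movements more highly: one large recent return lifts the volatility estimate immediately, where an equally weighted historical average would barely move.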
The bank actually wants to limit the amount of loss it can incur. The Value at Risk is
defined as the maximum likely loss which will be incurred at a given level of confidence for
a given time period. It should be noted that losses greater than the VaR can be incurred.
Actual losses would be expected to exceed a 95% confidence limit overnight loss roughly
one day in 20. The risk manager must bear this in mind when setting the confidence limit
for the Value at Risk treatment: a limit of 95% might be appropriate for setting exposure
limits, but a limit of 99.95% (exceeded one day in 8 years) may be more appropriate for
setting aside capital. Such internal models are generally more sophisticated than regulators'
frameworks, allowing the greatest flexibility for calculating and netting risks. The UK
regulator has adopted the Basle committee proposals to allow the use of internal risk
models for computing the capital requirements of market risk exposures (Basle committee
on Banking Supervision, 1996). The BIS stipulates a 99% confidence limit, 10-day VaR, with
a multiplier of 3, for capital adequacy purposes (Basle Committee on Banking Supervision,
1996).
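Under the simplest (normal, single-factor) assumptions, the regulatory numbers quoted above reduce to a one-line calculation: scale the position's daily volatility to the 10-day horizon by the square root of time, multiply by the 99% normal quantile, and apply the multiplier of 3. This is a textbook sketch, not the internal model of any particular bank; the position size and volatility are invented.

```python
from statistics import NormalDist

def parametric_var(position_value, daily_vol, confidence=0.99, horizon_days=10):
    # One-sided normal VaR: the loss not exceeded with the given confidence,
    # scaling daily volatility to the horizon by sqrt(time)
    z = NormalDist().inv_cdf(confidence)
    return position_value * daily_vol * z * horizon_days ** 0.5

var_10d = parametric_var(100e6, 0.01)  # $100m position, 1% daily volatility
capital = 3 * var_10d                  # the BIS multiplier of 3
print(round(var_10d), round(capital))
```

With these inputs the 10-day 99% VaR is roughly $7.4m, so the regulatory capital charge is roughly $22m; losses beyond the VaR figure remain possible, which is precisely what the multiplier is intended to absorb.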
To be granted permission to use these internal models for regulatory reporting,
organisations must show that their models accurately predict P&L movements within
certain tolerances. They must also show that the measures sent to the regulator are integral
to the way that the organisation controls its risks. VaR suffers from credibility issues with
traders, which makes it hard to get them to agree to have their trading restricted by the
measure. In global organisations, there is often a timeliness problem with the VaR number,
which may not be delivered back to a region until the second business day after the
position snapshot to which it applies. Faster methods may be used, but these often involve
additional compromise, making it even harder to gain acceptance. Management also fear
that traders will be able to play the imperfections in the limits system, taking on risks to
achieve returns in the risk dimensions for which the modelling is weak or assumes no risk
is present.
Despite these limitations, the majority of banking organisations monitor their Value at
Risk. Many now also use the VaR number to set limits for acceptable risk. In effect, VaR
has become a capital allocation tool for the business, since business units under their VaR
limit can expand their business activities until the unused part of their limit is consumed. A
survey of high capital value multinational corporations showed that in 2001, between 15%
and 20% of the surveyed companies used VaR as a capital allocation tool (Arthur
Andersen, 2001). The significance of this figure becomes apparent from the industrial
classifications of the companies participating in the survey. Just 16 financial institutions and
10 energy companies participated out of a total of 115 respondents, implying either that
more than 75% of companies that would be expected to have VaR capabilities are using it
for capital allocation or, even more surprisingly, that its use as a capital allocation tool has
extended beyond the core base of financial institutions.
2.8 Benefits of Risk Management Controls
Derivatives have historically taken the blame for large losses in finance although, as noted
above, there is often a control issue that is the main cause of failure. The reason why
derivatives often feature in these situations is that they give the user a means of increasing
their leverage. For an equal amount of cash investment, they can increase the amount the
user stands to gain (or lose) from market movements. This often extends the time before
unauthorised trading activity is detected and increases the losses before positions are
unwound. We can look at the key losses outlined previously, to see where an improved risk
management framework might have prevented disaster. The table below sets out the
analysis.
Risk Management Event | Type of risk | Prevention
Latin American debt crisis | Credit / country risk | Concentration risk controls
Savings & Loan industry (US building societies) loses $150bn in mismatched interest rate exposures; many go bankrupt | Interest rate derivatives | Correct pricing & maturity analysis of interest rate exposure
Stock market crash | Market risk | Stress testing scenarios
Hammersmith & Fulham Council | Interest rate derivatives | Mark to market and market risk management