
Value-at-Risk: An Overview of Analytical VaR

by Romain Berry, J.P. Morgan Investment Analytics and Consulting, [email protected]

In the last issue, we discussed the principles of a sound risk management function to efficiently manage and monitor the financial risks within an organization. To many risk managers, the heart of a robust risk management department lies in risk measurement through various complex mathematical models. But even a strong believer in quantitative risk management would have to admit that a risk management function that relies heavily on these sophisticated models cannot add value beyond the limits of the managers' own understanding of and expertise in these very models. Risk managers relying exclusively on models expose their organization to events similar to those of the sub-prime crisis, whereby some extremely complex models failed to accurately estimate the probability of default of the most senior tranches of CDOs¹. However you put it, there is some sort of human or operational risk in every team within any given organization. Models are valuable tools, but merely represent a means to manage the financial risks of an organization.

This article aims at giving an overview of one of the most widespread models in use in risk management departments across the financial industry: Value-at-Risk (or VaR)². VaR calculates the worst expected loss over a given horizon at a given confidence level under normal market conditions. VaR estimates can be calculated for various types of risk: market, credit, operational, etc. We will only focus on market risk in this article. Market risk arises from mismatched positions in a portfolio that is marked-to-market periodically (generally daily) based on uncertain movements in prices, rates, volatilities and other relevant market parameters. In such a context, VaR provides a single number summarizing the organization's exposure to market risk and the likelihood of an unfavorable move. There are mainly three designated methodologies to compute VaR: Analytical (also called Parametric), Historical Simulations, and Monte Carlo Simulations. For now, we will focus only on the Analytical form of VaR; the two other methodologies will be treated separately in upcoming issues of this newsletter. Part 1 of this article defines what VaR is and what it is not, and describes the main parameters. In Part 2, we mathematically express VaR, work through a few examples and experiment with varying the parameters. Parts 3 and 4 briefly touch upon two critical but complex steps in computing VaR: mapping positions to risk factors and selecting the volatility model of a portfolio. Finally, in Part 5, we discuss the pros and cons of Analytical VaR.

Part 1: Definition of Analytical VaR

VaR is a predictive (ex-ante) tool used to prevent portfolio managers from exceeding the risk tolerances that have been set in the portfolio policies. It can be measured at the portfolio, sector, asset-class, and security level. Multiple VaR methodologies are available, and each has its own benefits and drawbacks. To illustrate, suppose a $100 million portfolio has a monthly VaR of $8.3 million at a 99% confidence level. This simply means that there is a 1% chance of losses greater than $8.3 million in any given month of the defined holding period, under normal market conditions.

It is worth noting that VaR is an estimate, not a uniquely defined value. Moreover, the trading positions under review are fixed for the period in question. Finally, VaR does not address the distribution of potential losses on those rare occasions when the VaR estimate is exceeded. We should bear these constraints in mind when using VaR. The ease of using VaR is also its pitfall: VaR summarizes within one number the risk exposure of a portfolio, but it is valid only under a set of assumptions that should always be kept in mind when handling it.

VaR involves two arbitrarily chosen parameters: the holding period and the confidence level. The holding period corresponds to the horizon of the risk analysis. In other words, when computing a daily VaR, we are interested in estimating the worst expected loss that may occur by the end of the next trading day at a certain confidence level under normal market conditions. The usual holding periods are one day or one month. The holding period can depend on the fund's investment and/or reporting horizons, and/or on local regulatory requirements. The confidence level is intuitively a reliability measure that expresses the accuracy of the result. The higher the confidence level, the more likely we expect VaR to approach its true value or to be within a pre-specified interval. It is therefore no surprise that most regulators require a 95% or 99% confidence level to compute VaR.

Part 2: Formalization and Applications

Analytical VaR is also called Parametric VaR because one of its fundamental assumptions is that the return distribution belongs to a family of parametric distributions such as the normal or the lognormal distributions. Analytical VaR can simply be expressed as:

VaRα = xα × P (1)

where:

VaRα is the estimated VaR at the confidence level 100 × (1 − α)%
xα is the left-tail α percentile of a normal distribution N(μ, σ), described by the expression Pr(R ≤ xα) = α, where R is the return of the asset. In order for VaR to be meaningful, we generally choose a confidence level of 95% or 99%; xα is generally negative
P is the marked-to-market value of the portfolio

The Central Limit Theorem states that the sum of a large number of independent and identically distributed random variables with finite variance will be approximately normally distributed (i.e., will follow a Gaussian distribution, or bell-shaped curve). But even if we have a large enough sample of historical returns, is it realistic to assume that the returns of any given fund follow a normal distribution? Assuming for now that it is, we next need to relate the return distribution to a standard normal distribution, which has a zero mean and a standard deviation of one. Using a standard normal distribution enables us to replace xα with zα through the following transformation:

zα = (xα − μ) / σ (2)

which yields:

xα = μ + zα × σ (3)

zα is the left-tail α percentile of a standard normal distribution⁴. Consequently, we can re-write (1) as:

VaRα = (μ + zα × σ) × P (4)

Example 1 – Analytical VaR of a single asset

Suppose we want to calculate the Analytical VaR at a 95% confidence level and over a holding period of 1 day for an asset in which we have invested $1 million. We have estimated³ μ (mean) and σ (standard deviation) to be 0.3% and 3%, respectively. The Analytical VaR of that asset would be:
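VaR95% = (0.003 − 1.6449 × 0.03) × $1,000,000 = −$46,347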

This means that there is a 5% chance that this asset may lose at least $46,347 at the end of the next trading day under normal market conditions.

Example 2 – Conversion of the confidence level

Assume now that we are interested in a 99% Analytical VaR of the same asset over the same one-day holding period. The corresponding VaR would simply be:
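VaR99% = (0.003 − 2.3263 × 0.03) × $1,000,000 = −$66,789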

There is a 1% chance that this asset may experience a loss of at least $66,789 at the end of the next trading day. As you can see, the higher the confidence level, the higher the VaR as we travel downwards along the tail of the distribution (further left on the x-axis).
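For readers who prefer to verify these numbers programmatically, here is a minimal Python sketch (the use of SciPy is our assumption, not the article's); the last dollar may differ slightly from the figures above because zα is rounded to four decimals in the text:

```python
from scipy.stats import norm

p = 1_000_000              # marked-to-market value of the position
mu, sigma = 0.003, 0.03    # estimated daily mean and volatility

for conf in (0.95, 0.99):
    z = norm.ppf(1 - conf)         # left-tail percentile: -1.6449, -2.3263
    var = (mu + z * sigma) * p     # equation (4)
    print(f"{conf:.0%} one-day VaR: ${-var:,.0f}")
```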

Example 3 – Conversion of the holding period

If we want to calculate a one-month (21 trading days on average) VaR of that asset using the same inputs, we can simply apply the square-root-of-time rule⁵:

VaR(T days) = VaR(1 day) × √T (5)

Applying this rule to our examples above yields the following VaR for the two confidence levels:
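VaR95%, 1 month ≈ $46,347 × √21 ≈ $212,389

VaR99%, 1 month ≈ $66,789 × √21 ≈ $306,066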

Example 4 – Analytical VaR of a portfolio of two assets

Let us assume now that we have a portfolio worth $100 million that is equally invested in two distinct assets. One of the main reasons to invest in two different assets would be to diversify the risk of the portfolio. Therefore, the main underlying question here is how one asset would behave if the other asset were to move against us. In other words, how will the correlation between these two assets affect the VaR of the portfolio? As we move the calculation of Analytical VaR up one level of aggregation, we replace in (4) the mean of the asset by the weighted mean of the portfolio, μP, and the standard deviation (or volatility) of the asset by the volatility of the portfolio, σP. The volatility of a portfolio composed of two assets is given by:

σP = √(w1² σ1² + w2² σ2² + 2 w1 w2 ρ1,2 σ1 σ2) (6)

where

w1 is the weighting of the first asset
w2 is the weighting of the second asset
σ1 is the standard deviation or volatility of the first asset
σ2 is the standard deviation or volatility of the second asset
ρ1,2 is the correlation coefficient between the two assets

And (4) can be re-written as:

VaRα = (μP + zα × σP) × P (7)

Let us assume that we want to calculate Analytical VaR at a 95% confidence level over a one-day horizon on a portfolio composed of two assets with the following assumptions:

P = $100 million

w1 = w2 = 50%⁶

μ1 = 0.3%
σ1 = 3%
μ2 = 0.5%
σ2 = 5%
ρ1,2 = 30%

μP = 0.5 × 0.3% + 0.5 × 0.5% = 0.4%

σP = √(0.5² × 0.03² + 0.5² × 0.05² + 2 × 0.5 × 0.5 × 0.3 × 0.03 × 0.05) ≈ 3.2787%

VaR95% = (0.004 − 1.6449 × 0.032787) × $100,000,000 ≈ −$4.99 million (8)

There is thus a 5% chance that this portfolio may lose at least $4.99 million at the end of the next trading day under normal market conditions.

Example 5 – Analytical VaR of a portfolio composed of n assets

From the previous example, we can generalize these calculations to a portfolio composed of n assets. To keep the mathematical formulation tractable, we use matrix notation and re-write the volatility of the portfolio as:

σP = √(w′ Σ w) (9)

where:

w is the vector of the weights of the n assets
w′ is the transpose of w
Σ is the covariance matrix of the n assets
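To make (9) concrete, here is a minimal Python sketch (NumPy and SciPy are our own convenience assumptions, not something the article prescribes) that recasts Example 4 in matrix form and reproduces its result up to rounding:

```python
import numpy as np
from scipy.stats import norm

p = 100_000_000
w = np.array([0.5, 0.5])              # asset weights
mu = np.array([0.003, 0.005])         # expected daily returns
vols = np.array([0.03, 0.05])         # daily volatilities
corr = np.array([[1.0, 0.3],
                 [0.3, 1.0]])         # correlation matrix
cov = np.outer(vols, vols) * corr     # covariance matrix Σ

mu_p = w @ mu                         # portfolio mean
sigma_p = np.sqrt(w @ cov @ w)        # portfolio volatility, equation (9)
var_95 = (mu_p + norm.ppf(0.05) * sigma_p) * p
print(f"sigma_p = {sigma_p:.4%}, 95% one-day VaR: ${-var_95:,.0f}")
```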

Practically, we could design a spreadsheet in Excel (Exhibit 1) to calculate Analytical VaR on the portfolio in Example 4. The cells in grey are the input cells.

Exhibit 1: Excel spreadsheet to calculate Analytical VaR for a portfolio of two assets

It is easy from there to expand the calculation to a portfolio of n assets. But be aware that you will soon reach the limits of Excel, as you will have to calculate the n(n−1)/2 terms of the covariance matrix.

Part 3: Risk Mapping

In order to cope with a covariance matrix that grows each time you diversify your portfolio further, we can map each security of the portfolio to common fundamental risk factors and base our calculations of Analytical VaR on these risk factors. This process is called reverse engineering and aims at reducing the size of the covariance matrix and speeding up the computational time of transposing and multiplying matrices. We generally consider four main risk factors: Spot FX, Equity, Zero-Coupon Bonds and Futures/Forwards. The complexity of this process goes beyond the scope of this overview of Analytical VaR and will be treated separately in a future article.

Part 4: Volatility Models

We can guess from the various expressions of Analytical VaR we have used that its main driver is the expected volatility (of the asset or the portfolio), since we multiply it by a constant factor greater than 1 in absolute value (1.6449 for a 95% VaR, for instance), whereas the expected mean simply enters additively. Hence, if we have used historical data to derive the expected volatility, we should consider that today's volatility is positively correlated with yesterday's volatility. In that case, we may try to estimate the conditional volatility of the asset or the portfolio. The two most common volatility models used to compute VaR are the Exponentially Weighted Moving Average (EWMA) and the Generalized Autoregressive Conditional Heteroscedasticity (GARCH) models. In order to do justice to this very important part of computing VaR, we will discuss these models in a future article.
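In the meantime, here is a minimal sketch of the EWMA recursion, assuming the classic RiskMetrics decay factor λ = 0.94 and a 20-day seed window (both are our illustrative assumptions, not the article's):

```python
import numpy as np

def ewma_volatility(returns, lam=0.94, seed_window=20):
    """EWMA conditional variance: var_t = lam * var_{t-1} + (1 - lam) * r_{t-1}**2."""
    var = np.var(returns[:seed_window])   # seed with the variance of the first days
    for r in returns[seed_window:]:
        var = lam * var + (1 - lam) * r ** 2
    return np.sqrt(var)                   # today's conditional volatility estimate
```

The resulting conditional volatility would then stand in for σ (or σP) in (4) or (7).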

Part 5: Advantages and Disadvantages of Analytical VaR

Analytical VaR is the simplest methodology to compute VaR and is rather easy to implement for a fund. The input data is rather limited, and since there are no simulations involved, the computation time is minimal. Its simplicity is also its main drawback. First, Analytical VaR assumes not only that the historical returns follow a normal distribution, but also that the changes in price of the assets included in the portfolio follow a normal distribution. And this very rarely survives the test of reality. Second, Analytical VaR does not cope very well with securities that have a non-linear payoff distribution like options or mortgage-backed securities. Finally, if our historical series exhibits heavy tails, then computing Analytical VaR using a normal distribution will underestimate VaR at high confidence levels and overestimate VaR at low confidence levels.

Conclusion

As we have demonstrated, Analytical VaR is easy to implement as long as we follow these steps. First, we need to collect historical data on each security in the portfolio (we advise using at least one year of historical data, except if one security has experienced high volatility, which would suggest a shorter period of time). Second, if the portfolio has a large number of underlying positions, then we would need to map them against a more manageable set of risk factors. Third, we need to calculate the historical parameters (mean, standard deviation, etc.) and estimate the expected prices, volatilities and correlations. Finally, we apply (7) to find the Analytical VaR estimate of the portfolio.

As always when building a model, it is important to make sure that it has been reviewed, fully tested and approved, that a User Guide (including any potential code) has been documented and will be updated if necessary, that training has been designed and delivered to the members of the risk management team and to the recipients of the outputs of the risk management function, and finally that a capable person has been given oversight of the model, its ongoing use, and its regular refinement.

¹ CDO stands for Collateralized Debt Obligation. These instruments repackage a portfolio of average- or poor-quality debt into high-quality debt (generally rated AAA) by splitting a portfolio of corporate bonds or bank loans into four classes of securities, called tranches.
² Pronounced V'ah'R.
³ Note that these parameters have to be estimated. They are not the historical parameters derived from the series.
⁴ Note that zα is to be read in the statistical table of a standard normal distribution.
⁵ This rule stems from the fact that the sum of n consecutive one-day log returns is the n-day log return, and the standard deviation of n-day returns is √n × the standard deviation of one-day returns.

⁶ These weights correspond to the weights of the two assets at the end of the holding period. Because of market movements, there is little likelihood that they will be the same as the weights at the beginning of the holding period.

An Overview of Value-at-Risk: Part II – Historical Simulations VaR

by Romain Berry, J.P. Morgan Investment Analytics and Consulting, [email protected]

This article is the third in a series of articles exploring risk management for institutional investors.

In the previous issue, we looked at Analytical Value-at-Risk, whose cornerstone is the Variance-Covariance matrix. In this article, we continue to explore VaR as an indicator to measure the market risk of a portfolio of financial instruments, but we touch on a very different methodology.

We indicated in the previous article that the main benefits of Analytical VaR are that it requires very few parameters, is easy to implement, and is quick to compute (with an appropriate mapping of the risk factors). Its main drawbacks lie in the significant (and inconsistent across asset classes and markets) assumption that price changes in the financial markets follow a normal distribution, and in the fact that the methodology may be computer-intensive, since we need to calculate the n(n−1)/2 terms of the Variance-Covariance matrix (in the case where we do not proceed to a risk mapping of the various instruments that compose the portfolio). With the increasing power of computers, the second limitation will rarely force you to move away from spreadsheets to programming. But the first assumption, in the case of a portfolio containing a non-negligible portion of derivatives (a minimum of 10%-15%, depending on their complexity and exposure or leverage), may result in the Analytical VaR being seriously underestimated because these derivatives have non-linear payoffs.

One solution to circumvent that theoretical constraint is simply to work with the empirical distribution of the returns, which leads to Historical Simulations VaR. Indeed, is it not more logical to work with the empirical distribution that captures the actual behavior of the portfolio and encompasses all the correlations between the assets composing the portfolio? The answer to this question is not so clear-cut. Computing VaR using Historical Simulations seems more intuitive at first, but has its own pitfalls, as we will see. But first, how do we compute VaR using Historical Simulations?

Historical Simulations VaR Methodology

The fundamental assumption of the Historical Simulations methodology is that you look back at the past performance of your portfolio and make the assumption – there is no escape from making assumptions with VaR modeling – that the past is a good indicator of the near-future or, in other words, that the recent past will reproduce itself in the near-future. As you might guess, this assumption will reach its limits for instruments trading in very volatile markets or during troubled times as we have experienced this year.

The algorithm below illustrates the straightforwardness of this methodology. It is called Full Valuation because we re-price the asset or the portfolio after every run. This differs from a Local Valuation method, in which we only use information about the initial price and the exposure at the origin to deduce VaR.

Step 1 – Calculate the returns (or price changes) of all the assets in the portfolio between each time interval.

The first step lies in setting the time interval and then calculating the returns of each asset between two successive periods of time. Generally, we use a daily horizon to calculate the returns, but we could use monthly returns if we were to compute the VaR of a portfolio invested in alternative investments (Hedge Funds, Private Equity, Venture Capital and Real Estate) where the reporting period is either monthly or quarterly. Historical Simulations VaR requires a long history of returns in order to get a meaningful VaR. Indeed, computing a VaR on a portfolio of Hedge Funds with only a year of return history will not provide a good VaR estimate.

Step 2 – Apply the price changes calculated to the current mark-to-market value of the assets and re-value your portfolio.

Once we have calculated the returns of all the assets from today back to the first day of the period of time that is being considered – let us assume one year comprising 265 days – we now consider that these returns may occur tomorrow with the same likelihood. For instance, we start by looking at the returns of every asset yesterday and apply those returns to the value of these assets today. That gives us new values for all these assets and consequently a new value of the portfolio. Then, we go back in time by one more time interval, to two days ago. We take the returns that have been calculated for every asset on that day and assume that those returns may occur tomorrow with the same likelihood as the returns that occurred yesterday. We re-value every asset with these new price changes and then the portfolio itself. And we continue until we have reached the beginning of the period. In this example, we will have had 264 simulations.

Step 3 – Sort the series of the portfolio-simulated P&L from the lowest to the highest value.

After applying these price changes to the assets 264 times, we end up with 264 simulated values for the portfolio and thus P&Ls. Since VaR calculates the worst expected loss over a given horizon at a given confidence level under normal market conditions, we need to sort these 264 values from the lowest to the highest as VaR focuses on the tail of the distribution.

Step 4 – Read the simulated value that corresponds to the desired confidence level.

The last step is to determine the confidence level we are interested in – let us choose 99% for this example. One can read the corresponding value in the series of the sorted simulated P&Ls of the portfolio at the desired confidence level and then take it away from the mean of the series of simulated P&Ls. In other words, the VaR at 99% confidence level is the mean of the simulated P&Ls minus the 1% lowest value in the series of the simulated values. This can be formulated as follows:

VaR1−α = μ(R) − Rα (1)

where:

VaR1−α is the estimated VaR at the confidence level 100 × (1 − α)%
μ(R) is the mean of the series of simulated returns or P&Ls of the portfolio
Rα is the αth worst return of the series of simulated P&Ls of the portfolio or, in other words, the return of the series of simulated P&Ls that corresponds to the level of significance α

We may need to proceed to some interpolation, since there is no chance of getting a value at exactly 99% in our example. Indeed, if we use 265 days, each return calculated at every time interval will have a weight of 1/264 = 0.00379. If we look for the value that corresponds to the 99% confidence level, we will see that no value matches exactly (since we have divided the series into 264 time intervals and not a multiple of 100). Considering that there is very little chance that the tail of the empirical distribution is linear, a linear interpolation between the two successive observations that surround the 99th percentile will only yield an estimate of the actual VaR. This would be a pity considering we did all that we could to use the empirical distribution of returns, wouldn't it? Nevertheless, even a linear interpolation may give you a good estimate of your VaR. For those who are more eager to obtain the exact VaR, Extreme Value Theory (EVT) could be the right tool. It is rather mathematically demanding, so we will explain how to use EVT when computing VaR in another article.
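To make the four steps concrete, here is a minimal Python sketch under illustrative assumptions (synthetic random prices stand in for a real one-year history, and np.percentile performs the linear interpolation just discussed):

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative data: 265 daily closing prices for two assets (264 returns),
# standing in for a real one-year price history.
prices = 100 * np.cumprod(1 + rng.normal(0.0005, 0.02, size=(265, 2)), axis=0)
positions = np.array([500_000.0, 500_000.0])  # current $ value of each holding

# Step 1: returns of each asset between successive days.
returns = prices[1:] / prices[:-1] - 1        # shape (264, 2)

# Step 2: apply each day's returns to today's positions and re-value the
# portfolio; each entry of pnl is one simulated one-day portfolio P&L.
pnl = returns @ positions                     # shape (264,)

# Steps 3 and 4: sort the simulated P&Ls and read the 1% quantile;
# np.percentile interpolates linearly between surrounding observations.
worst_1pct = np.percentile(pnl, 1)

# VaR is measured as the distance from the mean of the simulated P&Ls.
var_99 = pnl.mean() - worst_1pct
print(f"99% one-day Historical Simulations VaR: ${var_99:,.0f}")
```

With real data, prices would be replaced by the actual price history and positions by the current holdings.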

Applications of Historical Simulations VaR

Let us compute VaR using historical simulations for one asset and then for a portfolio of assets to illustrate the algorithm.

Example 1 – Historical Simulations VaR for one asset

The first step is to calculate the return of the asset price between each time interval. This is done in column D in Table 1. Then we create a column of simulated prices based on the current market value of the asset (1,000,000 as shown in cell C3) and each return that this asset has experienced over the period under consideration. Thus, we have 1,000,000 × (−1.93%) = −19,313.95. In Step 3, we simply sort all the simulated values of the asset (based on the past returns). Finally, in Step 4, we read the simulated value in column G which corresponds to the 1% worst loss. As there is no value that corresponds exactly to 99%, we interpolate between the surrounding values at cumulative weights 99.24% and 98.86%. That gives us −54,711.55.

This number does not take into account the mean, which is 1,033.21. As the 99% VaR is the distance from the mean to the first percentile (the 1% worst loss), we subtract the number we just calculated from the mean to obtain the actual 99% VaR. In this example, the VaR of this asset is thus 1,033.21 − (−54,711.55) = 55,744.76. In order to express VaR as a percentage, we can divide the 99% VaR amount by the current value of the asset (1,000,000), which yields 5.57%.

Table 1 - Calculating Historical Simulations VaR for one asset

(Accompanying charts: asset price history and histogram of returns)

Example 2 - Historical Simulations VaR for one portfolio

Computing VaR on one asset is relatively easy, but how do the historical simulations account for any correlations between assets if the portfolio holds more than one asset? The answer is also simple: correlations are already embedded in the price changes of the assets. Therefore, there is no need to calculate a Variance-Covariance matrix when running historical simulations. Let us look at another example with a portfolio composed of two assets.

Table 2 - Calculating Historical Simulations VaR for a portfolio of two assets

(Accompanying charts: portfolio unit price and histogram of returns)

As you can see, we simply add a couple of columns to replicate the intermediary steps for the second asset. In this example, each asset represents 50% of the portfolio. After each run, we re-value the portfolio by simply adding up the simulated P&L of each asset. This gives us the simulated P&Ls for the portfolio (column J).

This straightforward step of simply re-composing the portfolio after every run is one of the reasons behind the popularity of this methodology. Indeed, we do not need to handle sizeable Variance-Covariance matrices. We apply the calculated returns of every asset to their current price and re-value the portfolio.

As we have noted, correlations are embedded in the price changes. In this example, the 99% VaR of the first asset is 55,744.76 (or 5.57%) and the 99% VaR of the second asset is 54,209.71 (or 5.42%). Note that VaR is not additive: adding the VaR of the two assets does not give the VaR of the portfolio. In this case, the 99% VaR of the portfolio represents only 3.67% of the current marked-to-market value of the portfolio. That difference represents the diversification effect. Holding a portfolio invested in these two assets makes the risk lower than investing solely in either one. The reason is that the gains on one asset sometimes offset the losses on the other asset (rows 10, 12, 13, 17-20, 23, 26-28, 30, 32 in Table 2). Over the 265 days, this happened 127 times with different magnitudes. In the end, this benefited the overall risk profile of the portfolio, as the 99% VaR of the portfolio is only 3.67%.

Advantages of Historical Simulations VaR

Computing VaR using the Historical Simulations methodology has several advantages. First, there is no need to formulate any assumption about the return distribution of the assets in the portfolio. Second, there is also no need to estimate the volatilities and correlations between the various assets. Indeed, as we showed with these two simple examples, they are implicitly captured by the actual daily realizations of the assets. Third, the fat tails of the distribution and other extreme events are captured as long as they are contained in the dataset. Fourth, the aggregation across markets is straightforward.

Disadvantages of Historical Simulations VaR

The Historical Simulations VaR methodology may be very intuitive and easy to understand, but it still has a few drawbacks. First, it relies completely on a particular historical dataset and its idiosyncrasies. For instance, if we run a Historical Simulations VaR in a bull market, VaR may be underestimated. Similarly, if we run a Historical Simulations VaR just after a crash, the falling returns which the portfolio has experienced recently may distort VaR. Second, it cannot accommodate changes in the market structure, such as the introduction of the Euro in January 1999. Third, this methodology may not always be computationally efficient when the portfolio contains complex securities or a very large number of instruments. Mapping the instruments to fundamental risk factors is the most efficient way to reduce the computational time of calculating VaR while keeping the behavior of the portfolio almost intact. Fourth, Historical Simulations VaR cannot handle sensitivity analyses easily.

Lastly, a minimum of history is required to use this methodology. Using a period of time that is too short (less than 3-6 months of daily returns) may lead to a biased and inaccurate estimate of VaR. As a rule of thumb, we should use at least four years of data in order to run 1,000 historical simulations. That said, round numbers like 1,000 may have no particular relevance to your exact portfolio. Security prices, like commodity prices, move through economic cycles; for example, natural gas prices are usually more volatile in the winter than in the summer. Depending on the composition of the portfolio and on the objectives you are attempting to achieve when computing VaR, you may need to think like an economist in addition to a risk manager in order to take into account the various idiosyncrasies of each instrument and market. Also, bear in mind that VaR estimates need to rely on a stable set of assumptions in order to keep a consistent and comparable meaning when they are monitored over a period of time.

In order to increase the accuracy of Historical Simulations VaR, one can also decide to weight the recent observations more heavily than the most distant ones, since the latter may not say much about where prices would go today. We will cover these more advanced VaR models in another article.
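In the meantime, a minimal sketch of one such scheme, exponential age-weighting with a decay factor λ = 0.98 (both the scheme and the decay value are illustrative assumptions, not the article's prescription), gives the idea:

```python
import numpy as np

def age_weighted_var(pnl, alpha=0.01, lam=0.98):
    """Historical VaR where recent simulated P&Ls carry more weight.

    pnl: one-day simulated P&Ls ordered oldest to newest; lam: decay factor.
    """
    n = len(pnl)
    ages = np.arange(n - 1, -1, -1)      # age 0 = most recent observation
    w = lam ** ages
    w = w / w.sum()                      # normalize the weights to sum to 1
    order = np.argsort(pnl)              # sort P&Ls from worst to best
    cum = np.cumsum(w[order])            # cumulative weight along the sorted tail
    cutoff = pnl[order][np.searchsorted(cum, alpha)]
    return np.average(pnl, weights=w) - cutoff   # distance from the weighted mean
```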

Conclusion

Despite these disadvantages, many financial institutions have chosen historical simulations as their favored methodology to compute VaR. To many, working with the actual empirical distribution is the “real deal.”

However, obtaining an accurate and reliable VaR estimate has little value without a proper back testing and stress testing program. VaR is simply a number whose value relies on a sound methodology, a set of realistic assumptions and a rigorous discipline when conducting the exercise. The real benefit of VaR lies in its essential property of capturing within one single number the risk profile of a complex or diversified portfolio. VaR remains a tool that should be validated through successive reconciliation with realized P&Ls (back testing) and used to gain insight into what would happen to the portfolio if one or more assets were to move adversely to the investment strategy (stress testing).

An Overview of Value-at-Risk: Part III – Monte Carlo Simulations VaR

by Romain Berry, J.P. Morgan Investment Analytics & Consulting, [email protected]

This article is the fourth in a series of articles exploring risk management for institutional investors.

The last (and most complex) of the three main methodologies used to compute the Value-at-Risk (VaR) of a portfolio of financial instruments employs Monte Carlo Simulations. Monte Carlo Simulations correspond to an algorithm that generates random numbers that are used to evaluate a formula that does not have a closed (analytical) form – this means that we need to proceed by trial and error, picking random numbers/events and assessing what the formula yields, in order to approximate the solution. Drawing random numbers a large number of times (a few hundred to a few million, depending on the problem at stake) will give a good indication of what the output of the formula should be. It is believed that the name of this method stems from the fact that the uncle of one of the researchers who popularized this algorithm (the Polish mathematician Stanislaw Ulam) used to gamble in the Monte Carlo casino, and/or that the randomness involved in this recurring methodology can be compared to the game of roulette.

In this article, we present the algorithm, and apply it to compute the VaR for a sample stock. We also discuss the pros and cons of the Monte Carlo Simulations methodology compared to Analytical VaR and Historical Simulations VaR.

Methodology

Computing VaR with Monte Carlo Simulations follows a similar algorithm to the one we used for Historical Simulations in our previous issue. The main difference lies in the first step of the algorithm – instead of picking up a return (or a price) in the historical series of the asset and assuming that this return (or price) can re-occur in the next time interval, we generate a random number that will be used to estimate the return (or price) of the asset at the end of the analysis horizon.

Step 1 – Determine the length T of the analysis horizon and divide it equally into a large number N of small time increments Δt (i.e. Δt = T/N).

For illustration, we will compute a monthly VaR over a horizon of twenty-two trading days. Therefore N = 22 days and Δt = 1 day. In order to calculate a daily VaR instead, one may divide each day into the number of minutes or seconds it comprises – the more, the merrier. The main guideline here is to ensure that Δt is small enough to approximate the continuous pricing we find in the financial markets. This process is called discretization, whereby we approximate a continuous phenomenon by a large number of discrete intervals.

Step 2 – Draw a random number from a random number generator and update the price of the asset at the end of the first time increment.

It is possible to generate random returns or prices. In most cases, the generator of random numbers will follow a specific theoretical distribution. This may be a weakness of the Monte Carlo Simulations compared to Historical Simulations, which uses the empirical distribution. When simulating random numbers, we generally use the normal distribution.

In this paper, we use the standard stock price model to simulate the path of a stock price from the ith day as defined by:

Ri = (Si+1 − Si) / Si = μ δt + σ φ √δt (1)

where:

Ri is the return of the stock on the ith day
Si is the stock price on the ith day
Si+1 is the stock price on the (i+1)th day
μ is the sample mean of the stock's returns
δt is the timestep
σ is the sample volatility (standard deviation) of the stock's returns
φ is a random number drawn from a standard normal distribution

At the end of this step/day (δt = 1 day), we have drawn a random number and determined Si+1 by applying (1) since all other parameters can be determined or estimated.

Step 3 – Repeat Step 2 until reaching the end of the analysis horizon T by walking along the N time intervals.

At the next step/day, we draw another random number and apply (1) to determine Si+2 from Si+1. We repeat this procedure until we reach T and can determine Si+T. In our example, Si+22 represents the estimated (terminal) stock price of the sample share in one month's time.

Step 4 – Repeat Steps 2 and 3 a large number M of times to generate M different paths for the stock over T.

So far, we have generated one path for this stock (from i to i+22). Running Monte Carlo Simulations means that we build a large number M of paths to take account of a broader universe of possible ways the stock price can evolve over a period of one month from its current value (Si) to an estimated terminal price (Si+T). Indeed, there is no unique way for the stock to go from Si to Si+T. Moreover, Si+T is only one possible terminal price for the stock among infinitely many. Indeed, for a stock price defined on the set of positive real numbers, there is an infinity of possible paths from Si to Si+T (see footnote 1).

It is an industry standard to run at least 10,000 simulations even if 1,000 simulations provide an efficient estimator of the terminal price of most assets. In this paper, we ran 1,000 simulations for illustration purposes.

Step 5 – Rank the M terminal stock prices from the smallest to the largest, read the simulated value in this series that corresponds to the desired (1-α)% confidence level (95% or 99% generally) and deduce the relevant VaR, which is the difference between Si and the αth lowest terminal stock price.

Let us assume that we want the VaR with a 99% confidence interval. To obtain it, we first need to rank the M terminal stock prices from the lowest to the highest. Then we read the 1% lowest percentile in this series. This estimated terminal price, Si+T^1%, means that there is a 1% chance that the current stock price Si could fall to Si+T^1% or less over the period in consideration and under normal market conditions. If Si+T^1% is smaller than Si (which is the case most of the time), then Si − Si+T^1% corresponds to a loss. This loss represents the VaR with a 99% confidence interval.

Applications

Let us illustrate the algorithm by computing the monthly VaR for one stock. Historical prices are charted in Exhibit 1. We will only consider the share price, and thus work with the assumption that we have only one share in our portfolio; therefore the value of the portfolio corresponds to the value of one share.

Exhibit 1: Historical prices for one stock from 01/22/08 to 01/20/09

From the series of historical prices, we calculated the sample return mean (-0.17%) and sample return standard deviation (5.51%). The current price (Si) at the end of the 20th of January 2009 was $18.09. We want to compute the monthly VaR on the 20th of January 2009. This means we will jump in the future by 22 trading days and look at the estimated prices for the stock on the 19th of February 2009.

Since we decided to use the standard stock price model to draw 1,000 paths until T (19th of February 2009), we will need to estimate the expected return (also called the drift rate) and the volatility of the share on that day.

We can estimate the drift by

μ̂ = (1 / (N × δt)) × Σi=1..N Ri (2)

The volatility of the share can be estimated by

σ̂ = √[ (1 / ((N − 1) × δt)) × Σi=1..N (Ri − R̄)² ] (3)

where R̄ is the sample mean of the returns.

Note that since we chose δt = 1 day, these two estimators will equal the sample mean and sample standard deviation.

Based on these two estimators, we generate Si+1 from Si by re-arranging (1) as

Si+1 = Si × (1 + μ δt + σ φ √δt) (4)

and simulate 1,000 paths for the share.

The last step can be summarized in Exhibit 2. We sort the 1,000 terminal stock prices from the lowest to the highest and read the price which corresponds to the desired confidence level. For instance, if we want the VaR at a 99% confidence level, we read the 1% lowest stock price, which is $15.7530. On January 20th, the stock price was $18.09. Therefore, there is a 1% likelihood that the JPMorgan Chase & Co. share falls to $15.7530 or below. If that happens, we will experience a loss of at least $18.09 − $15.7530 = $2.3370. This loss is our monthly VaR estimate at a 99% confidence level for one share, calculated on the 20th of January 2009.
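To illustrate Steps 1 to 5 end to end, here is a minimal Python sketch using the example's inputs (NumPy is our convenience assumption); since the φ draws are random, each run will produce terminal prices and a VaR that differ from the exhibit's figures:

```python
import numpy as np

rng = np.random.default_rng(1)

# Inputs from the example: closing price on 01/20/09 and the sample daily
# return mean and standard deviation estimated from one year of history.
s_i, mu, sigma = 18.09, -0.0017, 0.0551
n_days, n_paths, dt = 22, 1_000, 1.0

# Steps 2-4: simulate n_paths price paths of n_days steps each with the
# discretized model S_{i+1} = S_i * (1 + mu*dt + sigma*phi*sqrt(dt)).
phi = rng.standard_normal((n_paths, n_days))
paths = s_i * np.cumprod(1 + mu * dt + sigma * phi * np.sqrt(dt), axis=1)
terminal = paths[:, -1]

# Step 5: rank the terminal prices, read the 1% lowest one and deduce VaR.
worst_1pct = np.sort(terminal)[int(0.01 * n_paths)]
var_99 = s_i - worst_1pct
print(f"1% terminal price: ${worst_1pct:.4f}; 99% monthly VaR: ${var_99:.4f}")
```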

Exhibit 2: Reading VaR for one share

Advantages

Monte Carlo Simulations present some advantages over the Analytical and Historical Simulations methodologies to compute VaR.

The main benefit of running time-consuming Monte Carlo Simulations is that they can model instruments with non-linear and path-dependent payoff functions, especially complex derivatives. Moreover, Monte Carlo Simulations VaR is not affected as much as Historical Simulations VaR by extreme events, and can in fact provide in-depth details of the rare events that may occur beyond VaR. Finally, we may use any statistical distribution to simulate the returns, as long as we feel comfortable with the underlying assumptions that justify the use of that distribution.

Disadvantages

The main disadvantage of Monte Carlo Simulations VaR is the computer power that is required to perform all the simulations, and thus the time it takes to run them. If we have a portfolio of 1,000 assets and want to run 1,000 simulations on each asset, we will need to run 1 million simulations (not counting any nested simulations that may be required to price some of these assets, such as options and mortgages). Moreover, all these simulations increase the likelihood of model risk.

Consequently, another drawback is the cost associated with developing a VaR engine that can perform Monte Carlo Simulations. Buying a commercial solution off the shelf or outsourcing to an experienced third party are two options worth considering. The latter approach reinforces the independence of the computations, and therefore confidence in their accuracy and freedom from manipulation.

Conclusion

Estimating the VaR for a portfolio of assets using Monte Carlo Simulations has become the standard in the industry. Its strengths far outweigh its weaknesses.

Despite the time and effort required to estimate the VaR for a portfolio, this task only represents half of the time a risk manager should spend on VaR. The other half should be spent on checking that the model(s) used to calculate VaR is (are) still appropriate for the assets that compose the portfolio and still provide credible estimates of VaR (back testing), and on analyzing how the portfolio reacts to the extreme events which occur every now and then in the financial markets (stress testing).

¹ This is the reason why we used the discretized form (1) of the standard stock price model, so that Monte Carlo Simulations can be handled more easily without losing too much information. Thus, the higher N and M are, the more accurate the estimates of the terminal stock prices will be, but the longer the simulations will take to run.