Value at risk


Value at Risk (VaR) is a maximum tolerable loss that could occur with a given probability within a given period of time. VaR is a widely applied concept for measuring and managing many types of risk, although it is most commonly used for the market risk of assets. VaR says nothing about possible losses beyond the maximum tolerable loss, although other measures of risk such as volatility/standard deviation, semivariance (or downside risk) and expected shortfall may account for these.

Details of the definition

VaR has three parameters:

* The time period during which possible losses may occur. For example, financial institutions might consider the time period during which they are committed to holding a portfolio, or the time they would require to liquidate assets. A 10-day period is used to compute capital requirements under the European Capital Adequacy Directive (CAD) and the Basel II Accords for market risk, whereas a 1-year period is used for credit risk.

* The confidence level represents the probability that the loss will not exceed the maximum tolerable loss. Commonly used confidence levels are 99% and 95%. Once a loss distribution has been specified, the maximum tolerable loss at a given confidence level can be calculated from it.

* The maximum tolerable loss in monetary or other units.

Mathematical definition

According to McNeil, Frey, and Embrechts ("Quantitative Risk Management", 2005, p. 38):

Given some confidence level α ∈ (0, 1), the VaR of the portfolio at the confidence level α is given by the smallest number l such that the probability that the loss L exceeds l is not larger than 1 − α:

: \text{VaR}_\alpha = \inf\{l \in \mathbb{R} : P(L > l) \le 1 - \alpha\} = \inf\{l \in \mathbb{R} : F_L(l) \ge \alpha\}

In probabilistic terms VaR is a quantile of the loss distribution.
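As a minimal sketch, this quantile definition can be computed directly from a sample of losses; the loss figures below are invented for illustration:

```python
import numpy as np

# Hypothetical daily portfolio losses (positive numbers are losses).
losses = np.array([-1.2, 0.5, 3.1, -0.4, 2.2, 4.0, 0.1, 1.5, -2.0, 2.8])

alpha = 0.90
# VaR_alpha = inf{ l : F_L(l) >= alpha }: the smallest loss level at which
# the empirical CDF reaches alpha. The "inverted_cdf" method matches this.
var_90 = np.quantile(losses, alpha, method="inverted_cdf")
print(var_90)  # 3.1, since 9 of the 10 losses are <= 3.1
```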

Example

Consider a trading portfolio. Its market value in US dollars today is known, but its market value tomorrow is not known. The investment bank holding that portfolio might report that its portfolio has a 1-day VaR of $4 million at the 95% confidence level. This implies that under normal trading conditions the bank can be 95% confident that a change in the value of its portfolio would not result in a decrease of more than $4 million during 1 day. Equivalently, there is a 5% chance that the value of its portfolio will decrease by $4 million or more during 1 day. Note that the 95% confidence level does not mean the loss event itself has a known probability; the actual probability of any particular loss cannot be determined from the VaR figure alone.

The key point to note is that the target confidence level (95% in the above example) is the given parameter here; the output from the calculation ($4 million in the above example) is the maximum loss (the "value at risk") at that confidence level.

Common VaR calculation models

In the following, "return" means "percentage change in value".

A variety of models exist for estimating VaR. Each model has its own set of assumptions, but the most common assumption is that historical market data is our best estimator for future changes. Common models include:
# variance-covariance (VCV), assuming that risk factor returns are always (jointly) normally distributed and that the change in portfolio value is linearly dependent on all risk factor returns,
# historical simulation, assuming that asset returns in the future will have the same distribution as they had in the past (historical market data),
# Monte Carlo simulation, where future asset returns are simulated at random according to some market model; see Monte Carlo methods in finance.

The variance-covariance, or delta-normal, model was popularized by J.P. Morgan (now J.P. Morgan Chase) in the early 1990s when they published the "RiskMetrics Technical Document". In the following, we take the simple case, where the only risk factors for the portfolio are the values of the assets themselves. Two assumptions make it possible to translate the VaR estimation problem into a linear algebra problem:

# The portfolio is composed of assets whose deltas are linear; more exactly, the change in the value of the portfolio is linearly dependent on (i.e., is a linear combination of) all the changes in the values of the assets, so that the portfolio return is also linearly dependent on all the asset returns.
# The asset returns are jointly normally distributed.

The implication of (1) and (2) is that the portfolio return is normally distributed because it always holds that a linear combination of jointly normally distributed variables is itself normally distributed.

We will use the following notation:
* _i means "of the return on asset i" (for σ and μ) and "of asset i" (otherwise)
* _p means "of the return on the portfolio" (for σ and μ) and "of the portfolio" (otherwise)
* all returns are returns over the holding period
* there are N assets
* μ = expected value, i.e., mean
* σ = standard deviation
* V = initial value (in currency units)
* ω_i = V_i / V_p
* ω = the vector of all ω_i (a superscript T denotes the transpose)
* Σ = the covariance matrix, i.e., the matrix of covariances between all N asset returns (an N×N matrix)

The calculation goes as follows:

(i) \mu_p = \sum_{i=1}^N \omega_i \mu_i

(ii) \sigma_p = \sqrt{\boldsymbol{\omega}^T \boldsymbol{\Sigma} \boldsymbol{\omega}}

The normality assumption allows us to z-scale the calculated portfolio standard deviation to the appropriate confidence level. So for the 95% confidence level VaR we get:

(iii) \text{VaR} = -V_p(\mu_p - 1.645\sigma_p)
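Steps (i)-(iii) can be sketched for a hypothetical two-asset portfolio; every number below (weights, mean returns, covariances) is invented for illustration:

```python
import numpy as np

V_p = 1_000_000.0                      # initial portfolio value in dollars
w = np.array([0.6, 0.4])               # weights omega_i = V_i / V_p
mu = np.array([0.0005, 0.0003])        # expected 1-day asset returns
Sigma = np.array([[0.00010, 0.00002],  # covariance matrix of asset returns
                  [0.00002, 0.00005]])

mu_p = w @ mu                              # (i)   portfolio mean return
sigma_p = np.sqrt(w @ Sigma @ w)           # (ii)  portfolio return std. dev.
var_95 = -V_p * (mu_p - 1.645 * sigma_p)   # (iii) 95% 1-day VaR (z = 1.645)
print(round(var_95, 2))  # 11623.38
```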

The benefits of the variance-covariance model are the use of a more compact and maintainable data set which can often be bought from third parties, and the speed of calculation using optimized linear algebra libraries. Drawbacks include the assumption that the portfolio is composed of assets whose delta is linear, and the assumption of a normal distribution of asset returns (i.e., market price returns).

Historical simulation is the simplest and most transparent method of calculation. This involves running the current portfolio across a set of historical price changes to yield a distribution of changes in portfolio value, and computing a percentile (the VaR). The benefits of this method are its simplicity to implement, and the fact that it does not assume a normal distribution of asset returns. Drawbacks are the requirement for a large market database, and the computationally intensive calculation.
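A minimal sketch of historical simulation, with random numbers standing in for a real market database (the return history and dollar positions are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for a database of 500 past daily returns of 3 assets.
hist_returns = rng.normal(0.0, 0.01, size=(500, 3))
positions = np.array([400_000.0, 350_000.0, 250_000.0])  # dollar holdings

# Run today's portfolio across each historical day's price changes...
pnl = hist_returns @ positions
# ...and take a percentile of the resulting loss distribution.
var_95 = np.quantile(-pnl, 0.95)  # 95% one-day VaR
```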

Using historical volatility, a quick parametric estimate of the 10-day 99% VaR is:

\text{VaR} = M \cdot \sigma_p \cdot \sqrt{10} \cdot 2.33

* M : market value of the portfolio
* σ_p : historical daily volatility of the portfolio returns
* √10 : scaling factor from a 1-day to a 10-day horizon
* 2.33 : number of standard deviations corresponding to a 99% confidence level

Monte Carlo simulation is conceptually simple, but is generally computationally more intensive than the methods described above. The generic MC VaR calculation goes as follows:
* Decide on N, the number of iterations to perform.
* For each iteration:
** Generate a random scenario of market moves using some market model.
** Revalue the portfolio under the simulated market scenario.
** Compute the portfolio profit or loss (PnL) under the simulated scenario; i.e., subtract the current market value of the portfolio from the market value computed in the previous step.
* Sort the resulting PnLs to give us the simulated PnL distribution for the portfolio.
* VaR at a particular confidence level is calculated using the percentile function. For example, with 5000 simulations, the estimate of the 95% VaR would correspond to the 250th largest loss, i.e., (1 - 0.95) * 5000.
* Note that we can compute an error term associated with our estimate of VaR and this error will decrease as the number of iterations increases.
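The steps above can be sketched as follows; the one-factor normal-return market model and all its parameters are illustrative assumptions, not part of the generic recipe:

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims = 5000
current_value = 1_000_000.0
mu, sigma = 0.0005, 0.01        # hypothetical 1-day drift and volatility

# Generate random market scenarios and revalue the portfolio under each.
simulated_returns = rng.normal(mu, sigma, n_sims)
simulated_values = current_value * np.exp(simulated_returns)
pnl = simulated_values - current_value      # profit or loss per scenario

# Sort the losses and read off the percentile: with 5000 draws the 95% VaR
# is the 250th largest loss, i.e. (1 - 0.95) * 5000 = 250.
losses = np.sort(-pnl)[::-1]                # losses, largest first
var_95 = losses[int((1 - 0.95) * n_sims) - 1]
```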

Monte Carlo simulation is generally used to compute VaR for portfolios containing securities with non-linear returns (e.g., options), since the linearity assumption of the variance-covariance model does not hold for them. The computational effort required is non-trivial, so for portfolios without such complicated securities, such as a portfolio of stocks, the variance-covariance method is perfectly suitable and should probably be used instead. Also note that MC VaR is subject to model risk if the market model is not correct.

Caveats

Unfortunately, VaR is not the panacea of risk measurement methodologies. A subtle technical problem is that VaR is not sub-additive. That is, it is possible to construct two portfolios, A and B, in such a way that VaR(A + B) > VaR(A) + VaR(B). This is unexpected because we would hope that portfolio diversification reduces risk.

The theory of coherent risk measures outlines the properties we would want any measure of risk to possess. Artzner et al. wrote the [http://www.math.ethz.ch/~delbaen/ftp/preprints/CoherentMF.pdf canonical paper] on the subject, outlining in axiomatic fashion the properties a risk measure should possess in order to be considered coherent. An example of a coherent risk measure is Expected Tail Loss (ETL), also known as Conditional Value-at-Risk (CVaR); other names are expected shortfall and worst conditional expectation.

For an example of the kind of subadditivity violation of VaR described above, see the paper by Artzner et al. cited above.
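A stylized numerical illustration of such a violation (the default probabilities and loss sizes are invented): two independent bonds, each losing 100 with probability 4%, individually have a 95% VaR of zero, yet their sum does not:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
# Each bond independently defaults with probability 0.04, losing 100.
loss_a = np.where(rng.random(n) < 0.04, 100.0, 0.0)
loss_b = np.where(rng.random(n) < 0.04, 100.0, 0.0)

def var(losses, alpha=0.95):
    # VaR as the alpha-quantile of the loss distribution.
    return np.quantile(losses, alpha)

# Individually, P(loss > 0) = 4% <= 5%, so the 95% VaR is 0.
# Combined, P(loss >= 100) = 1 - 0.96**2 = 7.84% > 5%, so the VaR is 100.
print(var(loss_a), var(loss_b), var(loss_a + loss_b))  # 0.0 0.0 100.0
```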

Criticism

Nassim Taleb holds that Value at Risk is [http://www.fooledbyrandomness.com/jorion.html "charlatanism, a dangerously misleading tool"]. Taleb's [http://www.fooledbyrandomness.com/LSE-Taleb-Pilpel.pdf LSE article] mentions three issues with the conventional calculation and usage of VaR:
# Measuring probabilities of rare events requires study of vast amounts of data. For example, the probability of an event that occurs once a year can be studied with 4-5 years of data. But high-risk, low-probability events like natural calamities, epidemics and economic disasters (like the Crash of 1929) are once-a-century events, which require at least 2-3 centuries of data to validate hypotheses. Since such data does not exist in the first place, it is argued, calculating risk with any accuracy is not possible.
# In the derivation of VaR, normal distributions are assumed wherever the frequency of events is uncertain.
# Fat tailed distributions are much harder to calibrate and parametrize than normal distributions.

Taleb does not offer a better alternative method, except for awareness of the fragility of such measurements, "epistemology" to rank risks qualitatively, and hedging ("catastrophe insurance") where available.

Further reading

* Crouhy, M., D. Galai, and R. Mark, "Risk Management", McGraw-Hill, 2001, 752 pages. ISBN 0-07-135731-9.

* Dowd, Kevin, "Measuring Market Risk, 2nd Edition", John Wiley & Sons, 2005, 410 pages. ISBN 0-470-01303-6.
* Glasserman, Paul, "Monte Carlo Methods in Financial Engineering", Springer, 2004, 596 pages, ISBN 0-387-00451-3.
* Holton, Glyn A., "Value-at-Risk: Theory and Practice", Academic Press, 2003, 405 pages. ISBN 0-12-354010-0.
* Jorion, Philippe, "Value at Risk: The New Benchmark for Managing Financial Risk, 3rd ed.", McGraw-Hill, 2006, 600 pages. ISBN 0-071-46495-6.
* Pearson, Neil D., "Risk Budgeting", John Wiley & Sons, 2002, 336 pages. ISBN 0-471-40556-6.
* McNeil, Alexander, Rüdiger Frey, and Paul Embrechts, "Quantitative Risk Management: Concepts, Techniques and Tools", Princeton University Press, 2005, 538 pages. ISBN 0-691-12255-5.

External links

* [http://www.cba.ua.edu/~rpascala/VaR/VaRForm.php Online real-time VaR calculator], by Razvan Pascalau, Univ. of Alabama
* [http://www.wilmott.com/blogs/satyajitdas/enclosures/perfectstorms%28may2007%291.pdf “Perfect Storms” – Beautiful & True Lies In Risk Management] by Satyajit Das

* [http://www.gloriamundi.org/ “Gloria Mundi” – All About Value at Risk] by Barry Schachter
