## November 25, 2012

### MA0042 [Treasury Management] Set2 Q6

Q.6 Describe the three approaches to determine VaR.

Ans:

Approaches to Compute VaR
In most organisations, financial and non-financial alike, VaR has become an established risk-exposure measurement tool. Multiple approaches are used to compute VaR, each with numerous variations. VaR can be calculated analytically through assumptions about the return distributions of market risks and the variances and covariances across those risks. Despite the variations among approaches, the three basic methods used to calculate VaR are:
- Variance-covariance method
- Simulation approaches
- Extreme value theory

Variance-covariance method
The variance-covariance method has the advantage of simplicity, but it is limited by the difficulty of deriving the required probability distributions. Since VaR measures the probability of a loss exceeding a specific amount over a particular time period, it is relatively simple to calculate once a probability distribution of potential portfolio values has been derived.
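For linear positions, the calculation reduces to a quadratic form in the position vector and the covariance matrix of returns. The following is a minimal sketch; the position sizes, covariance figures and confidence level are illustrative assumptions, not values from the text.

```python
import numpy as np

# Variance-covariance VaR sketch for a hypothetical two-asset portfolio.
# Position values and the daily-return covariance matrix are illustrative.
positions = np.array([1_000_000.0, 500_000.0])   # currency value per asset
cov = np.array([[0.0001, 0.00002],               # daily return covariances
                [0.00002, 0.000225]])

# Standard deviation of daily P&L for linear positions: sqrt(w' C w).
port_sd = np.sqrt(positions @ cov @ positions)

# 99% one-day VaR under the normality assumption: z(0.99) ~ 2.326.
z_99 = 2.326
var_99 = z_99 * port_sd
print(round(var_99, 2))
```

The same quadratic form extends to any number of assets; only the position vector and covariance matrix grow.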

The method of mapping equity positions through beta is often used in this approach, as it is a crucial stage in computing VaR. However, it is simplistic, because it neglects the following factors when calculating VaR for nonlinear positions:

The relationship between the underlying asset price and the potential value of the component of a portfolio is nonlinear.

The price of the components is also exposed to risk factors such as time decay and the expected volatility of the underlying asset's returns.

If back-testing (a method discussed later in this unit) indicates that the VaR estimates are inaccurate, the risk manager should consider whether to change the methodology, improve the mapping process, or do both.

RiskMetrics contribution
RiskMetrics made two basic contributions: it made the variance-covariance method freely available to everyone, and it provided easy access to the estimates needed to compute VaR for a portfolio. The assumptions underlying the VaR computations are described in publications by J. P. Morgan in 1996:

Returns may not be normally distributed, and outliers are common. Instead, it is assumed that the return divided by its forecasted standard deviation is normally distributed.

The attention to standardised returns indicates that the focus should be on the size of a return relative to its standard deviation, rather than on its absolute size.

The focus on normal standardised returns allows the VaR estimate to capture outliers that occur more frequently than a plain normal distribution would imply. The RiskMetrics approach also covers standard normal and normal mixture distributions.
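The volatility forecast underlying the RiskMetrics computations is an exponentially weighted moving average (EWMA) with a decay factor of 0.94 for daily data, per the 1996 technical document. A sketch of the recursion on synthetic returns:

```python
import numpy as np

# EWMA volatility forecast in the spirit of RiskMetrics (lambda = 0.94
# for daily data). The return series here is synthetic, for illustration.
rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, size=250)   # simulated daily returns

lam = 0.94
var_t = returns[0] ** 2                     # seed the recursion
for r in returns[1:]:
    # sigma_t^2 = lambda * sigma_{t-1}^2 + (1 - lambda) * r^2
    var_t = lam * var_t + (1 - lam) * r ** 2

sigma = np.sqrt(var_t)
var_95 = 1.645 * sigma                      # 95% one-day VaR, normal quantile
print(sigma, var_95)
```

The decay factor gives recent observations more weight, so the forecast adapts quickly after a volatility shock.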

ARCH and GARCH model
To generate more accurate variance-covariance values for VaR estimation, some recommended improving the sampling methods and the data; others suggested that statistical refinements applied to existing data can yield better accuracy. Building on the work of R. F. Engle, an American economist, the following two models provide better forecasts of variance and better estimates of VaR:

Autoregressive Conditional Heteroskedasticity (ARCH) model The basic idea of ARCH is that the conditional variance of the error term at time t depends on the squared error term at time t-1. ARCH is chiefly applied in the following areas:
- The shock effects on the variance of stock market returns.
- Effects of increase in the variance of excess returns of bonds on risk premiums.

Generalised Autoregressive Conditional Heteroskedasticity (GARCH) model This model was introduced by Bollerslev (1986). It is characterised by a symmetric response of current volatility to positive and negative lagged errors: large shocks of either sign raise the variance forecast equally.
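A minimal sketch of the GARCH(1,1) variance recursion follows; setting beta to zero recovers Engle's ARCH(1). The parameter values and simulated shocks are illustrative assumptions.

```python
import numpy as np

# GARCH(1,1) simulation sketch. Parameters must satisfy alpha + beta < 1
# for the unconditional variance omega / (1 - alpha - beta) to exist.
omega, alpha, beta = 1e-6, 0.08, 0.90

rng = np.random.default_rng(1)
n = 1000
eps = np.empty(n)
sigma2 = np.empty(n)
sigma2[0] = omega / (1 - alpha - beta)      # start at unconditional variance
eps[0] = np.sqrt(sigma2[0]) * rng.standard_normal()

for t in range(1, n):
    # Conditional variance depends on the last squared shock (the ARCH
    # term) and the last conditional variance (the GARCH term).
    sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    eps[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

print(np.sqrt(sigma2[-1]))                  # latest volatility forecast
```

The square on the lagged error is what makes the response symmetric: a shock of +2% and one of -2% raise next-period variance by the same amount.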

Simulation approaches
In this approach, VaR is estimated by assuming a distribution for the underlying risk factors or target asset returns, drawing a sample from the joint distribution, and then revaluing the portfolio of assets. Each asset is revalued under each simulated set of risk-factor values; a simpler alternative recalculates the portfolio using an approximation based on partial derivatives. It is important to examine the assumptions made about the marginal distributions and the dependence structure among the various benchmark assets. The three simulation methods are as follows:

Historical simulation This is the most popular simulation approach and the simplest way to evaluate VaR for many portfolios. It estimates VaR by creating hypothetical returns for the portfolio from a historical time series: past market moves are applied to the current portfolio, and the resulting change in value is evaluated for each period.
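The procedure can be sketched in a few lines; the return series and portfolio value below are synthetic stand-ins for real historical data.

```python
import numpy as np

# Historical simulation sketch: apply past daily returns to today's
# portfolio value and read VaR off the empirical loss distribution.
rng = np.random.default_rng(2)
hist_returns = rng.normal(0.0, 0.012, size=500)  # stand-in for history

portfolio_value = 1_000_000.0
pnl = portfolio_value * hist_returns             # hypothetical one-day P&L

# 99% VaR = the loss exceeded on only 1% of the historical days.
var_99 = -np.percentile(pnl, 1)
print(round(var_99, 2))
```

No distributional assumption is imposed; the method inherits whatever fat tails and skew the historical window contains.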

Hybrid model In this method, portfolio returns are sorted, as in historical simulation, in decreasing order. The manager then accumulates the weights attached to the sorted portfolio returns; VaR is identified as the value at which the cumulative weight equals the desired confidence level. This approach combines the advantages of the RiskMetrics contribution and historical simulation.
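A sketch of this age-weighted ranking follows. The exponential decay factor of 0.98 and the synthetic return series are illustrative assumptions.

```python
import numpy as np

# Hybrid (age-weighted) historical simulation sketch: each past return
# gets a weight that decays exponentially with age, combining
# RiskMetrics-style decay with the historical simulation ranking.
rng = np.random.default_rng(3)
returns = rng.normal(0.0, 0.01, size=250)   # returns[-1] is most recent

lam = 0.98
ages = np.arange(len(returns))[::-1]        # age 0 = most recent day
weights = (1 - lam) * lam ** ages
weights /= weights.sum()                    # normalise to sum to 1

# Sort returns from worst to best and accumulate their weights.
order = np.argsort(returns)
cum_w = np.cumsum(weights[order])

# 95% VaR: the first return whose cumulative weight reaches 5%.
idx = np.searchsorted(cum_w, 0.05)
var_95 = -returns[order][idx]
print(var_95)
```

Recent extreme losses therefore dominate the VaR estimate, while old observations fade out instead of dropping abruptly from the window.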

Monte Carlo simulation: This method uses random sampling and probability to obtain an approximate solution to a problem in less time than formal analytical techniques would take. It rests on the assumption that more simulations deliver higher accuracy. Various Monte Carlo methods have been introduced to reduce the approximation error. The four methods are as follows:

- Crude Monte Carlo: The basic method, which averages plain random samples; the width of its confidence interval indicates the accuracy of the answer.
- Acceptance-rejection Monte Carlo: This method generally provides a less accurate approximation than the crude Monte Carlo method.
- Stratified sampling: This technique divides the sampling interval into subintervals and then performs crude Monte Carlo within each subinterval.
- Importance sampling: This method concentrates samples in the more important regions of the function. By sampling more heavily the regions that have the greatest impact on the overall value, it achieves a good approximation there and reduces variance.
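A crude Monte Carlo VaR estimate can be sketched as follows; the assumed return distribution and its parameters are illustrative, not taken from the text.

```python
import numpy as np

# Crude Monte Carlo VaR sketch: draw random return scenarios from an
# assumed distribution, revalue the portfolio under each, and read VaR
# from the simulated loss distribution.
rng = np.random.default_rng(4)
n_sims = 100_000
mu, sigma = 0.0005, 0.01                  # assumed daily return moments

sim_returns = rng.normal(mu, sigma, size=n_sims)
portfolio_value = 1_000_000.0
pnl = portfolio_value * sim_returns       # simulated one-day P&L

var_99 = -np.percentile(pnl, 1)           # 99% one-day VaR
print(round(var_99, 2))
```

Stratified or importance sampling would replace the plain `rng.normal` draw with a scheme that spends more samples in the loss tail, tightening the estimate for the same simulation budget.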

Extreme value theory
Extreme value theory is used for measuring extreme risks. It concentrates only on the portion of the returns data that carries information about extreme behaviour. The sample of non-overlapping returns is divided into n blocks. A series of maxima and a series of minima are generated by extracting the largest rise and the largest fall in returns from each block. A Generalised Extreme Value (GEV) or Generalised Pareto (GP) distribution is fitted to one of these series, for example by the method of moments, to estimate the tail index parameter. This parameter characterises how extreme events occur in the data. Once the tail index is available, the VaR corresponding to a given probability of an extreme event can be estimated.
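A partial sketch of this procedure follows: block maxima of losses are extracted from a synthetic heavy-tailed return series, and the tail index is estimated with the Hill estimator, used here as a simple stand-in for a full GEV or GP fit. The data, block size and threshold count are illustrative assumptions.

```python
import numpy as np

# EVT sketch: block maxima extraction plus a Hill estimate of the tail
# index. Student-t returns (df = 4) stand in for heavy-tailed data; the
# true tail index of the loss distribution is 1/df = 0.25.
rng = np.random.default_rng(5)
returns = rng.standard_t(df=4, size=2000) * 0.01

# Split into non-overlapping blocks and take the worst loss in each.
block_size = 50
losses = -returns                               # positive = loss
block_maxima = losses.reshape(-1, block_size).max(axis=1)

# Hill estimator of the tail index from the k largest losses overall.
k = 100
order_stats = np.sort(losses)[::-1]
hill = np.mean(np.log(order_stats[:k] / order_stats[k]))
print(len(block_maxima), hill)
```

In a full application, a GEV distribution would be fitted to `block_maxima` (or a GP distribution to the exceedances over a threshold), and the tail quantile implied by the fit would give the VaR for the chosen probability.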

Extreme value theory provides a significant set of techniques for quantifying the boundaries between different loss classes. It also delivers a scientific language for translating management guidelines on those boundaries into actual numbers. Extreme value theory generates methods for quantifying extreme events and their consequences in a statistically optimal way. It also helps in the modelling of default probabilities and the evaluation of diversification factors in the management of bond portfolios.

It has developed into one of the most important statistical fields for the applied sciences and is widely used in many other subjects. This modelling is applied in fields such as management strategy, the thermodynamics of earthquakes, memory cell failure and biomedical data processing.