Risk measures - value at risk and beyond


Master's Thesis, 2007

79 Pages, Grade: 1 (A)


Excerpt


Contents

PREFACE

ABSTRACT

ABBREVIATIONS

TABLES

FIGURES

1 INTRODUCTION

2 INTRODUCTION TO RISK
2.1 DEFINITION OF “RISK”
2.2 TYPES OF RISK
2.3 MARKET RISK
2.4 MEASURING RISK

3 VALUE AT RISK (VAR)
3.1 HISTORY OF VAR, RISKMETRICS
3.2 WHAT IS VAR?
3.3 COMPUTING VAR
3.3.1 Analytic, Variance-Covariance or Parametric Approach
3.3.2 Historical Simulation Approach
3.3.3 Monte Carlo Simulation Approach
3.3.4 Choosing between methods

4 CRITICISM OF VALUE AT RISK
4.1 CONCEPT OF COHERENT RISK MEASURES
4.2 ADVANTAGES OF VAR
4.3 DRAWBACKS AND FLAWS OF VAR
4.3.1 VaR and Subadditivity
4.3.2 Further weaknesses of VaR
4.3.3 The Jorion-Taleb Debate

5 BEYOND VALUE AT RISK
5.1 TAIL CONDITIONAL EXPECTATION (TCE)
5.2 WORST CONDITIONAL EXPECTATION (WCE)
5.3 EXPECTED SHORTFALL (ES)
5.4 CONDITIONAL VALUE AT RISK (CVAR)
5.5 EXPECTED TAIL LOSS (ETL)
5.6 CONFUSION OF TONGUES
5.7 CONDITIONAL DRAWDOWN (CDD) AT RISK (CDAR)
5.8 EXPECTED REGRET (ER)
5.9 SPECTRAL RISK MEASURES
5.10 DISTORTION RISK MEASURES
5.11 OTHER RISK MEASURES
5.12 MODIFICATIONS OF VALUE AT RISK
5.12.1 Conditional Autoregressive Value at Risk (CAViaR)
5.12.2 Modified VaR
5.12.3 Stable modelling of VaR
5.13 OUTLOOK ON RESEARCH IN RISK MEASUREMENT

6 CONCLUSION

BIBLIOGRAPHY

Preface

I would like to thank everybody who helped me in writing this master’s thesis.

I would especially like to thank my family, most notably my parents Ing. Franz Hoefler and Ing. Marlies Hoefler, who supported me at any time and in every sense, and who made my studies possible in the first place. Special mention must go to my girlfriend Mag. Anna Maria Koeck for her moral support, her helpfulness, and for the enriching discussions.

I greatly appreciate the opportunity, given to me by Dr. Markus Glawischnig, to write this master’s thesis at the Institute for Financial Management. I would also like to acknowledge his scientific supervision and the useful recommendations and considerations he provided.

Not least of all I am indebted to all the people who supported me in the course of my research on this topic. In particular I would like to express my gratitude to Prof. Dr. Freddy Delbaen (Department of Mathematics, ETH Zuerich) and Prof. Philippe Jorion (Paul Merage School of Business, University of California at Irvine). The proof-reading was in the hands of my friend Jonathon Turner – thank you very much.

Stainz, September 2007 Bernhard Hoefler, Bakk. rer. soc. oec.

Abstract

This thesis deals with risk measures, in particular with Value at Risk (VaR) and risk measures beyond VaR.

Corporations are exposed to different kinds of risks, and risk management has therefore become a central task for a successful company. VaR is nowadays widely adopted internationally to measure market risk and is the most frequently used risk measure amongst practitioners, since the concept offers several advantages. However, VaR also has its drawbacks, and hence there have been and still are endeavours to improve VaR and to find better risk measures.

In seeking alternative risk measures that try to overcome VaR’s disadvantages while keeping its advantages, risk measures beyond VaR were introduced. The most important alternative risk measures, such as Tail Conditional Expectation, Worst Conditional Expectation, Expected Shortfall, Conditional VaR, and Expected Tail Loss, are presented in detail in the thesis. It has been found that these risk measures are very similar concepts for overcoming the deficiencies of VaR and that there is no clear distinction between them in the literature – ‘confusion of tongues’ would be an appropriate expression. Two concepts have become widespread in the literature in recent years, Conditional VaR and Expected Shortfall; however, there are situations in which these turn out to be simply different terms for the same measure.

Additionally, other concepts are touched upon (Conditional Drawdown at Risk, Expected Regret, Spectral Risk Measures, Distortion Risk Measures, and other risk measures) and modifications of VaR (Conditional Autoregressive VaR, Modified VaR, stable modelling of VaR) are briefly introduced.

To recapitulate, the basic findings of the thesis are that numerous sophisticated alternative measures and concepts are readily available, that a ‘confusion of tongues’ prevails around the alternative risk measures in the respective literature, and that promising theories and models are on the verge of entering the mainstream financial risk management stage. At the end of the day, however, neither VaR nor any other risk measure introduced here is perfect. There are certain limitations associated with every method; no single method is the best risk measure.

Abbreviations

illustration not visible in this excerpt

Tables

Table 1: Steps in computing VaR

Table 2: Monte Carlo Simulation approach

Table 3: Example of violation of subadditivity of VaR – Final events

Table 4: Example of violation of subadditivity of VaR – Results

Table 5: Properties of CVaR and VaR compared

Figures

Figure 1: Typology of financial risks

Figure 2: Subgroups of market risk

Figure 3: Probability density function

Figure 4: HS VaR from histogram of S&P 500 profits/losses

Figure 5: Portfolio Loss Distribution, VaR and CVaR

1 Introduction

“The stock market will fluctuate.”

That was the answer of J. P. Morgan when he was asked what the stock market was going to do.

Fluctuation means change – everything changes, and that can have positive or negative outcomes for those affected. Risk is a consequence of change: in the financial world it is the expectation of gains or losses (cf. chapter 2.1). All individuals have to cope with risks in everyday life; eliminating risk from our lives is impossible. Insurance is offered to reduce risks, and people watch out before crossing the road; on the other hand, people deliberately take risks when playing the lottery or investing money on the stock market.

Not only individuals but also corporations face these challenges. They are exposed to different kinds of risks. In order to manage these risks appropriately, risk management has become a central task and challenge for a successful company since the financial disasters of the 1990s (LTCM, Barings Bank, Metallgesellschaft, Orange County). For this purpose risk managers use certain risk measures. A basic measure would be volatility (the standard deviation of unexpected outcomes); a more sophisticated one is Value at Risk. VaR is nowadays widely adopted internationally to measure market risk but is also used for the measurement of credit and even operational risk.

However, VaR also has its drawbacks and flaws, and hence there have been and still are endeavours by practitioners and academic researchers to improve VaR and to find better risk measures. Especially in the academic community the discourse on VaR has been very intense, and alternative risk measures have been put forward several times. Nevertheless VaR remains the most frequently used risk measure amongst practitioners.

Given these basic conditions it is the intention of this thesis to concentrate on the following key points:

- give a review of the basic ideas and concepts of VaR
- address the limitations of and criticism of VaR
- give a literature overview of alternative risk measures

This thesis is subdivided into three main chapters. Chapter 2 gives a short introduction to basic terms: How is risk defined, which forms of risk are relevant for a company, what exactly is market risk (the risk mainly measured by e.g. VaR), and how does the measurement of risk work? Chapter 3 is then dedicated to VaR and gives a brief overview of this popular risk measure: the history of VaR, what VaR is, how VaR can be computed, and how to choose between the different methods. The next chapter – chapter 4 – will criticise VaR, showing its advantages and its limitations. As a first step the idea of coherent risk measures is introduced. Chapter 4 will then summarize various arguments for and against VaR from different authors, as such an overview is not yet available in the literature. The negative criticism of VaR leads to the last chapter, chapter 5 ‘Beyond Value at Risk’, which will give a literature overview of the most important advances in the area of financial risk management. Several alternative risk measures which (should) address VaR’s limitations will be introduced. Some concepts which have not yet directly led to applicable risk measures are also shown in this chapter. An outlook on research in the field of financial risk management and risk measurement closes the chapter.

A conclusion summarizes the key outcomes of this thesis and tries to address, in a compact way, the key points stated above.

2 Introduction to risk

As an introduction to the whole topic, this short chapter gives a brief overview of the term risk, the different types of risk which companies face, and market risk, which is measured by VaR and by the alternative risk measures proposed in chapter 5.

2.1 Definition of “Risk”

Risk in the financial world means the expectation of gains or losses. Jorion defines risk from the viewpoint of finance theory as “the dispersion of unexpected outcomes due to movements in financial variables”. Risk consists of two components:

- Uncertainty, and
- Exposure.

Adapting an example from Holton (2003), this can be explained as follows: if somebody crosses the street, the person is exposed to injury or death from an accident with a car, and is uncertain because he or she does not know whether he or she will be run over. Being both uncertain and exposed therefore means facing risk. Applied to finance this means that if a bank does not invest in shares there is no risk, although the share prices still remain uncertain. Only by buying shares is the bank also exposed to the uncertainty and hence faces risk.

2.2 Types of risk

Unsurprisingly there are several different types of risk a company can be exposed to. Not all corporations face the same risks; e.g. if a company does not trade transnationally, there will be no exchange rate risks. The risks especially faced by a financial institution such as a bank are shown in Figure 1.

Figure 1: Typology of financial risks

illustration not visible in this excerpt

Source: Own description following Crouhy et al. (2001), p. 35

- Market risks will be discussed in detail in chapter 2.3.
- Credit risks are risks of changes in a counterparty’s creditworthiness, which could in the worst case lead to default (e.g. an issuer of a fixed income security is not willing or not able to pay the promised coupons); another case would be that the counterparty is downgraded by a rating agency.
- Liquidity risks arise from risks of a possible inability of a company to fund its illiquid assets.
- Operational risks are risks arising from the potential malfunction of a company’s internal systems (could be a copier breaking down or on a higher level the breakdown of management control).
- Legal and regulatory risks: Legal risk could for example be entering a legal agreement without knowing whether the contract may be enforced. Regulatory risks arise from a potential impact of a possible change in laws.
- Human factor risks are a rather special form of operational risk. They arise simply from human errors (e.g. unintentionally destroying a file, entering wrong values in a model, etc.).

2.3 Market risk

Market risk is discussed in this chapter in more detail, since this form of risk is the one mainly measured by risk measures such as Value at Risk, and hence an understanding of what is actually measured is crucial.

In short, market risk is the exposure to unexpected changes in financial market prices (e.g. share prices) and rates (e.g. exchange rates).

Figure 2: Subgroups of market risk

illustration not visible in this excerpt

Source: Own description

As can be seen in Figure 2 market risks can be further subdivided into:

- Interest rate risk: the risk that the value of a fixed-income security (e.g. a bond) will change due to a change in market interest rates.
- Equity price risk: has two components: systematic risk, the sensitivity of a portfolio to changes in the stock market, which cannot be eliminated through portfolio diversification; and specific or idiosyncratic risk, the part of equity price risk determined by firm-specific characteristics, which can be diversified away in a portfolio.
- Exchange rate risk: the risk that foreign exchange rates vary – one of the major risks multinational corporations face.
- Commodity price risk: the risk that the prices of commodities change; commodities generally have higher volatilities than financial securities.

Important for the measurement of market risks is the exposure (cf. chapter 2.1): it can be measured either in profits and losses or in changes of the portfolio value.

2.4 Measuring risk

Firstly, a measure itself must be defined. “A measure is an operation that assigns a value to something.” There is also a need to differentiate between a measure and a metric, the latter being an interpretation of the respective value measured. Combining chapter 2.1 with the definition above, a risk measure is therefore a measure applied to risks, and a risk metric is then an interpretation of such a measure.

Risk measurement has evolved and changed over time, from basic indicators such as the notional amount of an individual security, over more complex measures such as duration or convexity, to very sophisticated methodologies for calculating VaR. VaR is a very powerful concept for measuring risk (especially over short horizons), as it combines all the multiple components of market risk introduced in the previous chapter.

3 Value at Risk (VaR)

This chapter will give basic information about the history of VaR, answer the question “What is VaR?”, and introduce three basic approaches for computing VaR; criticism of VaR follows in the next chapter.

3.1 History of VaR, RiskMetrics

Several major financial institutions started working on internal models to measure and aggregate risks – for their own internal risk management purposes – in the late 1970s and 1980s. J. P. Morgan developed the best known system of that time; industry legend tells that the chairman of J. P. Morgan at the time, Sir Dennis Weatherstone, requested a daily one-page report (the ‘4:15 report’) showing risk and potential losses over the next 24 hours. Over a long period J. P. Morgan staff developed a single risk measure aggregating all risks across the institution, which was finally operational by around 1990. The aggregate measure used was Value at Risk, which was estimated based on standard portfolio theory. As it had a major positive effect, J. P. Morgan’s 1993 global research conference highlighted the new risk system. In the same year the term “Value-at-Risk” was used in a published document for the first time (in the so-called G-30 Report). Great interest in the product led J. P. Morgan to launch its risk system – simplified and under the name “RiskMetrics” – in October 1994, when the system and the necessary data became freely available on the internet. This, of course, attracted many academics as well as practitioners and contributed greatly to the rapid spread of VaR systems. Investment banks and securities houses were the first to adopt VaR systems; other financial institutions and also non-financial companies soon followed. Nowadays VaR is widely used by many institutions and companies and has become an industry standard, also because regulators became interested in VaR as well (the Basel Committee on Banking Supervision allows banks to use their own VaR models for the calculation of their capital requirements for market risk).

3.2 What is VaR?

After looking at the history of VaR, VaR itself should be defined and its basic characteristics given. Basically, the concept of VaR is a means of estimating risk using standard statistical techniques, answering the question ‘How much can an institution/company lose over a given time horizon with x % probability?’. The VaR concept answers this specific question by aggregating all the risks in a portfolio/company into a single number given in currency units. These properties make it very attractive and usable, for example in the boardroom or for disclosure in annual reports. There are several definitions available in the literature; here two should be provided:

“VAR measures the worst expected loss over a given horizon under normal market conditions at a given confidence level.”

“VaR is a measure of market risk. It is the maximum loss which can occur with X% confidence over a holding period of t days.”

As can be seen, there are three important characteristics of VaR:

- Confidence level: Most users choose confidence levels between 95 % and 99 %. A confidence level of 95 % implies that a potential loss will not exceed the VaR in 95 out of 100 cases. The higher the confidence level, the higher the resulting VaR value.
- Holding period: Should reflect the portfolio’s features on which the risk is being measured (e.g. the holding period of a trading portfolio might usually be one day; for an investment portfolio it could be one month or even one quarter).
- Assumption of normality: In calculating VaR, a normal distribution of daily equity returns is mostly assumed. This assumption allows the relatively easy calculation and transformation of VaR but is also heavily criticised; more on this in chapter 4.3.2 ‘Further weaknesses of VaR’.

Additional assumptions are that the portfolio remains constant over the respective time period (the institution’s risk profile remains constant) and that the portfolio is marked-to-market on the target horizon.

Formally defined, for a normal distribution with X denoting the considered factor, μ the expected return (mean) and σ the standard deviation, the VaR at a given confidence level α simplifies to:

VaR_α(X) = z_α · σ – μ,

where z_α denotes the α-quantile of the standard normal distribution.
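As a minimal sketch (not part of the thesis), the formula above can be evaluated with Python’s standard library; the parameter values in the example are invented for illustration:

```python
from statistics import NormalDist

def parametric_var(mu: float, sigma: float, confidence: float = 0.95) -> float:
    """VaR of a normally distributed factor X ~ N(mu, sigma^2).

    Implements VaR_alpha(X) = z_alpha * sigma - mu, where z_alpha is the
    alpha-quantile of the standard normal distribution.
    """
    z = NormalDist().inv_cdf(confidence)
    return z * sigma - mu

# 95 % VaR for zero mean and 2 % volatility (illustrative values)
print(f"{parametric_var(0.0, 0.02):.4f}")
```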

However, VaR does not measure political, personnel or regulatory risk – it only measures risks which can be captured using quantitative techniques.

3.3 Computing VaR

Independently of the method used to calculate the VaR estimate for market risk there are four tasks which are equal for all methods:

1. Definition of the time horizon over which the maximum potential loss should be estimated
2. Definition of the confidence level
3. Creation of a probability distribution of returns for the considered portfolio
4. Calculation of the VaR estimate

The probability distribution of the future returns is estimated starting from historical data observed over a certain time period – this is common to all techniques. The correlations are also obtained from historical data and used for the simulation of the probability distributions. Another common feature is that complex portfolios are decomposed into simpler instruments that are only exposed to one market factor – this procedure is called “mapping” (e.g. a coupon bond is decomposed into a set of zero-coupon bonds, each of which is exposed to only one risk factor – a specific interest rate). The differences between VaR methodologies mainly lie in how the probability density function is constructed; the different techniques are presented in the following subchapters. Importantly, there is no single best method of calculating VaR – with all methods there are trade-offs between theoretical accuracy and computational efficiency.
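The “mapping” step can be illustrated with a hypothetical sketch: a three-year coupon bond is decomposed into zero-coupon cash flows, each exposed to a single zero rate (the bond terms and the zero curve below are invented):

```python
# Decompose a 3-year 5 % coupon bond into zero-coupon cash flows,
# each exposed to exactly one risk factor (the zero rate at its maturity).
face = 100.0
coupon_rate = 0.05
maturities = [1, 2, 3]                        # years
zero_rates = {1: 0.03, 2: 0.035, 3: 0.04}     # hypothetical zero curve

cash_flows = {t: face * coupon_rate for t in maturities}
cash_flows[3] += face                         # principal repaid at maturity

# Present value of each mapped position -- these are the standardized
# single-factor positions a VaR engine would work with
pv = {t: cf / (1 + zero_rates[t]) ** t for t, cf in cash_flows.items()}
bond_value = sum(pv.values())
print({t: round(v, 2) for t, v in pv.items()}, round(bond_value, 2))
```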

3.3.1 Analytic, Variance-Covariance or Parametric Approach

This method, sometimes also called the correlation approach, is the simplest approach to estimating VaR: it uses published information on correlations and volatilities to construct a weighting matrix. The data can be obtained from J. P. Morgan/RiskMetrics, which offers a steadily growing correlation matrix for free that is then used for the VaR calculation. The approach is relatively easy to understand and implement and is therefore the basis of the most widely used VaR applications. A drawback is that it produces inaccurate estimates if non-linear pay-offs (such as options) are present in a portfolio. The approach assumes that returns on risk factors are normally distributed and that correlations and deltas are constant, and is hence subject to model risk in the sense that the distributional assumption might be incorrect. The volatilities are derived from historical observations – hence historical data is required. Basically there are two ways of calculating the volatilities and correlations:

- The historic data is weighted equally (historic volatility/correlation); large movements in the market can distort results.
- Unequal weighting of past observations – recent data is given more weight; such methods are GARCH (generalised autoregressive conditional heteroscedasticity) and exponentially weighted moving averages. These models assume that the future can be predicted from the past.

Other methods would be constant volatility, EGARCH (exponential GARCH), cross-market GARCH, implied volatility and subjective views.
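As an illustration of the unequal-weighting idea, the following sketch implements an exponentially weighted moving average variance; the decay factor λ = 0.94 is the value RiskMetrics uses for daily data, while the return series itself is invented:

```python
from math import sqrt

def ewma_variance(returns, lam=0.94):
    """Recursive update: var_t = lam * var_{t-1} + (1 - lam) * r_{t-1}^2.

    lam = 0.94 is the decay factor RiskMetrics uses for daily data;
    higher lam means slower decay of old observations.
    """
    var = returns[0] ** 2            # seed with the first squared return
    for r in returns[1:]:
        var = lam * var + (1 - lam) * r ** 2
    return var

# Invented daily return series
returns = [0.01, -0.02, 0.015, -0.005, 0.02]
sigma = sqrt(ewma_variance(returns))
print(f"EWMA volatility: {sigma:.4%}")
```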

The following table should provide an example of how to calculate the VaR for a single-asset portfolio, depending only on one risk factor.

Table 1: Steps in computing VaR

Step 1 – Mark position to market: € 100 million

Step 2 – Measure variability of risk factor: 10 % p.a.

Step 3 – Set time horizon: 10 working days (= 2 weeks)

Step 4 – Set confidence level: 95 % (= factor 1.64485)

Step 5 – Report potential loss (= VaR): € 100 m × 10 % × √(10/252) × 1.64485 ≈ € 3.28 m

Source: Own description following Jorion (2000), p. 108

The asset’s returns are assumed to be univariate normally distributed and hence characterised by two parameters – the standard deviation σ and the mean μ (it is assumed that the expected return – the mean – equals zero). The calculation of the VaR estimate is then reduced to finding the value for the x %-quantile (1 – y % confidence level) of the standard normal distribution (2.32635 for 99 % confidence; 1.64485 for 95 %) – see Figure 3 for an illustration.

The VaR of € 3.28 m means that only in 1 out of 20 cases (confidence level of 95 %) will the loss be greater than € 3.28 m. Recalculating the example with a confidence level of 99 % (i.e. 1 out of 100 cases) leads to a VaR of € 4.64 m. Figure 3 shows that outcomes which are less than or equal to 1.645 standard deviations below the mean occur only 5 % of the time. The currency value at the 95 % confidence level is the 95 % VaR for the specified time horizon.

Figure 3: Probability density function

illustration not visible in this excerpt

Source: Own description following Butler (1999), p. 23
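The five steps of Table 1 can be reproduced in a short script. The horizon scaling below assumes 252 trading days per year, an assumption which recovers the € 3.28 m result:

```python
from math import sqrt
from statistics import NormalDist

# Step 1: mark position to market
position = 100_000_000              # EUR 100 million

# Step 2: variability of the risk factor
vol_annual = 0.10                   # 10 % p.a.

# Step 3: time horizon -- scale annual volatility to 10 working days
# (assuming 252 trading days per year)
vol_horizon = vol_annual * sqrt(10 / 252)

# Step 4: confidence level -> standard normal quantile
z = NormalDist().inv_cdf(0.95)      # 1.64485

# Step 5: report potential loss (VaR)
var = position * vol_horizon * z
print(f"VaR = EUR {var / 1e6:.2f}m")
```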

Normally a portfolio consists of more than one asset. The calculation of VaR for portfolios with more assets involves the following steps:

- Decompose financial instruments into sets of simpler instruments exposed to only one market risk factor – cash-flow mapping, as explained in chapter 3.3 on page 10. The purpose is to standardize the intervals of the cash flows so that the RiskMetrics data set’s volatilities and correlations can be used.
- Specify distribution – in the analytic method the user has to make an assumption about the market factor’s distribution; most often (also with RiskMetrics) a normal distribution is assumed.
- Calculate portfolio variance and VaR: It can be assumed that the portfolio is also normally distributed if all the market factors are so; therefore the portfolio variance can be computed using standard statistical methods and then VaR is calculated as described in Table 1, step 5.

For more complex portfolios, matrices are used for the calculation of VaR.
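As a sketch of the variance-covariance calculation for the simplest multi-asset case, consider a hypothetical two-asset portfolio (all position values, volatilities and the correlation are invented):

```python
from math import sqrt
from statistics import NormalDist

positions = [60.0, 40.0]   # EUR millions held in assets A and B
vols = [0.02, 0.01]        # daily volatilities of A and B
corr = 0.3                 # correlation between A and B

# Standard deviation of each position's value change
sd_a = positions[0] * vols[0]
sd_b = positions[1] * vols[1]

# Portfolio variance: squared stand-alone terms plus the cross term
variance = sd_a**2 + sd_b**2 + 2 * sd_a * sd_b * corr
sigma_p = sqrt(variance)

# 95 % portfolio VaR; diversification makes it smaller than the sum
# of the two stand-alone VaRs
var_95 = NormalDist().inv_cdf(0.95) * sigma_p
print(f"95% VaR: EUR {var_95:.2f}m")
```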

3.3.2 Historical Simulation Approach

The basic concept behind the historical simulation approach is to use a portfolio asset’s historical distribution of returns to simulate the portfolio VaR under the hypothetical assumption that the portfolio was held over the time period covered by the historical data set. Sometimes the historical simulation approach is also called non-parametric approach as the distribution of the portfolio value changes is constructed from historical data without imposing distributional assumptions or estimating parameters. It is the simplest method for calculating VaR and does not need the three main assumptions behind the parametric approach (normally distributed returns, constant correlations and deltas) as they are already reflected in market-price data. As there is no need for modelling the distribution of risk factors, the historical simulation approach is not exposed to model risk.

The simpler approach (the historic method) is simply to keep historic data of profits and losses on the portfolio and to calculate e.g. the 5th percentile (95 % VaR). As this method is based on actual data, major market movements which occurred in the past are picked up accurately. Furthermore, “mapping” is not required, as opposed to the parametric approach. If the weightings in the observed portfolio change over time this method is unsuitable, which can be overcome with the more complex historical simulation approach.

With this approach the basic idea stays the same, but additionally the current portfolio composition is used and portfolio price changes are hypothetically simulated with historical data (e.g. the portfolio consists of 40 % share A and 60 % share B; historical prices for, say, 1,000 days are collected and the portfolio value for each day, keeping the 40 %–60 % weightings, is calculated/simulated). Options and more complex positions can also be included in the simulations. The time period over which daily data should be collected has to be at least one year (a regulatory requirement) but is recommended to be three to five years (in order to also estimate the probability of rare events). Once the hypothetical portfolio returns are simulated over the chosen time period, they are translated into portfolio profits and losses and the VaR can be read off from a profit-and-loss histogram. For example, if 1,000 days of historical data are used and the sought VaR is based on a 95 % confidence level (meaning that the actual loss is expected to exceed the estimated VaR on 50 days – 5 %), the VaR is the 51st highest loss (cf. Figure 4).

Figure 4: HS VaR from histogram of S&P 500 profits/losses

illustration not visible in this excerpt

Source: Own description following Alexander (1998), p. 269

For Figure 4, historical daily data of the S&P 500 Index (04/12/2002 – 22/11/2006 = 1,001 observations) was obtained from finance.yahoo.com and the 1,000 daily log returns were computed. These were then sorted and the histogram built. The 51st highest loss is –1.31 %, which is therefore the 95 % VaR.
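The read-off procedure can be sketched as follows; synthetic normally distributed returns stand in for the actual S&P 500 series used in Figure 4:

```python
import random

random.seed(42)
# 1,000 synthetic daily returns in place of the S&P 500 data
returns = [random.gauss(0.0005, 0.01) for _ in range(1000)]

def hs_var(returns, confidence=0.95):
    """Historical-simulation VaR: with 1,000 observations and 95 %
    confidence, losses should exceed the VaR on 50 days, so the VaR
    is the 51st highest loss."""
    n_exceed = int(len(returns) * (1 - confidence))   # 50 for this sample
    worst_first = sorted(returns)                     # ascending: losses first
    return -worst_first[n_exceed]                     # report as a positive loss

print(f"95% HS VaR: {hs_var(returns):.2%}")
```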

So far only the advantages of this approach have been addressed; however, there are also disadvantages. The approach is totally dependent on a particular historical data set – any events not included in the data are completely ignored. This shows the need to complement the approach with stress tests and scenario analyses to provide for plausible risks that are underrepresented in the data set. Additionally, the method is not suitable for optimization. Other problems may simply be obtaining the necessary data and choosing the length of the historical estimation period.

3.3.3 Monte Carlo Simulation Approach

Measuring the risk exposure of portfolios that include options is, due to options’ non-linear pay-off profiles, more problematic than for portfolios with linear pay-off profiles. Therefore practitioners increasingly use the Monte Carlo simulation (MCS) approach, as it is believed to provide more accurate estimates of risk exposures for non-linear pay-off instruments (including e.g. exotic options).

This method “specifies statistical models for basic risk factors and underlying assets…[and] simulates the behaviour of risk factors and asset prices by generating random price paths.”

The MCS approach basically follows the same steps in calculating a VaR estimate as the historical simulation; however, in the MCS approach a statistical distribution believed to approximate the possible changes in the market factors is chosen first (often normal or lognormal distributions). For the definition of the correlations among market factors, historical data is most often used (the Cholesky decomposition is used to manipulate the randomness of the numbers so as to preserve the correlations in the simulations). Then thousands (maybe tens of thousands) of hypothetical changes in market factors are generated by a random number generator and used to construct just as many hypothetical profits and losses for the current portfolio, and from these their distribution. The idea is that if enough of these simulations are run, the simulated distribution will converge to the unknown ‘true’ distribution. From this simulated distribution the VaR is then determined as in the historical simulation.

Rachev et al. (2005) offer a 5-step algorithm of how the MCS approach is performed – see Table 2:

Table 2: Monte Carlo Simulation approach

Step Task

Step 1 Specify stochastic processes and parameters for financial variables and correlations

Step 2 Simulation of hypothetical price changes for all selected variables randomly chosen from the specified distribution

Step 3 Computation of portfolio values with simulated asset values

Step 4 Repetition of steps 2 and 3 many times to form the distribution of the portfolio returns

Step 5 Measure VaR from the simulated distribution as the negative of the empirical (1 – α)-quantile

Source: Own description following Rachev et al. (2005), p. 224
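The five steps above can be sketched for a hypothetical two-asset portfolio with correlated, normally distributed returns (all parameter values are invented, and the 2×2 Cholesky factor reduces to a single formula):

```python
import random
from math import sqrt

random.seed(1)

# Step 1: stochastic model -- correlated normal daily returns
sigma = [0.02, 0.01]       # volatilities of the two assets
rho = 0.3                  # correlation between the assets
weights = [0.6, 0.4]       # portfolio weights

n_sims = 100_000
pnl = []
for _ in range(n_sims):
    # Step 2: correlated shocks via the 2x2 Cholesky factor:
    # z2 = rho*z1 + sqrt(1 - rho^2)*e preserves the correlation
    z1 = random.gauss(0, 1)
    z2 = rho * z1 + sqrt(1 - rho**2) * random.gauss(0, 1)
    # Step 3: portfolio return for this scenario (zero means assumed)
    pnl.append(weights[0] * sigma[0] * z1 + weights[1] * sigma[1] * z2)

# Step 4: the loop builds the simulated return distribution
pnl.sort()

# Step 5: VaR = negative of the empirical 5 % quantile
var_95 = -pnl[int(0.05 * n_sims)]
print(f"95% MC VaR: {var_95:.2%}")
```

With enough scenarios the estimate converges to the parametric value for this normal model, which is one way to sanity-check an MCS implementation.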

There are some (practical) issues with the MCS approach. It is very demanding in terms of human and computational resources and is therefore the most costly approach – there is a considerable trade-off between accuracy and speed. Furthermore, it is not suitable for optimization and is also vulnerable to model risk – statistical errors may arise from the estimation of model parameters, and even the model itself could be subject to misspecification. Additionally, the random number generator does not produce truly random numbers – they are pseudo-random, as they are generated by an algorithm using a deterministic rule which takes a starting value and then generates values. Once the starting value recurs, the same sequence of numbers is generated again. The generators in standard software packages such as Microsoft Excel, however, do not nowadays recur within billions of numbers.

MCS approaches are very powerful and the most flexible tools for the estimation of VaR. The most important advantage is that this approach offers the most reliable VaR estimates (at least for a total portfolio); consequently the management information function is fulfilled best compared to the other approaches. Furthermore, if the modelling is done correctly, this approach is able to handle even the most complex and exotic portfolios. Moreover, if a large number of scenarios is used, the convergence error can be made as small as desired, and if the correlation estimates are based on the most recent data it is possible to better reflect current market conditions.

3.3.4 Choosing between methods

When an institution has to decide which method to implement, the decision mainly depends on the composition of the portfolio – for portfolios without options the analytic approach may be the best choice, because pricing models are not required and publicly available software and data (e.g. RiskMetrics) help with implementation. For portfolios with options, the historical simulation or the MCS approach are better suited. The historical simulation is conceptually straightforward and no distributional assumptions are necessary, whereas implementing the MCS approach is complex and time-consuming. Additionally, the reliability of results (as all methods rely more or less on historical data) is a factor.

To recapitulate, there is no clear answer to the question “Which method is the best?”. No VaR approach is superior in all situations.

The following chapter will deal with criticism of Value at Risk – both positive and negative. As a starting point the concept of coherent risk measures will be introduced.

[...]

Excerpt out of 79 pages

Details

Title
Risk measures - value at risk and beyond
College
University of Graz  (Institut für Finanzwirtschaft)
Grade
1 (A)
Author
Year
2007
Pages
79
Catalog Number
V83867
ISBN (eBook)
9783638876049
ISBN (Book)
9783638882736
File size
890 KB
Language
English
Quote paper
MMag. Bernhard Höfler (Author), 2007, Risk measures - value at risk and beyond, Munich, GRIN Verlag, https://www.grin.com/document/83867
