A comparison between advanced Value at Risk models and their backtesting in different portfolios


Master's Thesis, 2012

87 pages, Grade: 1


Excerpt


Contents

1 Introduction
1.1 Relevance
1.2 Research Questions
1.3 State of the Literature
1.4 The Structure of the Thesis
1.5 Definitions and Assumptions

2 Methodology
2.1 VaR Models
2.1.1 GARCH VaR
2.1.2 Principal Component VaR
2.1.3 Filtered Historical Simulation
2.2 Model Validation
2.2.1 Unconditional Coverage Test
2.2.2 Independence Test
2.2.3 Regulatory Backtest
2.2.4 Economic Cost of VaR forecasts
2.2.5 Procedure for Model Validation

3 Data
3.1 Portfolios
3.1.1 Bond Portfolio
3.1.2 Bloomberg/EFFAS Bond Indices
3.2 Time Frames

4 Empirical Results
4.1 GARCH VaR Results
4.2 PCA VaR Results
4.2.1 Real P&L
4.2.2 Hypothetical P&L
4.3 Filtered Historical Simulation VaR Results

5 Conclusion

6 Bibliography

List of Appendices

Appendix A R Code: Unconditional coverage likelihood ratio

Appendix B R Code: Independence test

Appendix C R Code: Regulatory capital test

Appendix D R Code: Economic cost

Appendix E R Code: PV01 Maturity mapping

Appendix F R Code: PCA VaR

Appendix G R Code: GARCH VaR

Appendix H Individual degree of freedom of garchInitParameters (fGarch)

Appendix I R Code: FHS VaR

Appendix J Portfolio details

Appendix K Comparison of the Portfolios in their portfolio value

Appendix L Comparison of the Portfolios in their portfolio returns

Appendix M Time Series of Regulatory Backtest

Appendix N Backtest PCA VaR with hypothetical P&L

Appendix O Parameters for 99% VaR GARCH-N with different sample size

List of Figures

Figure 1: Rolling window in the parameter estimation process

Figure 2: Term structure of EUR zero-coupon rates (2003 to 2011)

Figure 3: Replication of the yield curve with 3 principal components

Figure 4: Replication of the yield curve with 7 principal components

Figure 5: An example of a PV01 mapping at a specific date

Figure 6: p-Value for n=1000, confidence interval=1%

Figure 7: Binomial Density Function

Figure 8: Procedure for Model Validation

Figure 9: Distribution of the bond portfolio

Figure 10: Lower tail of the bond portfolio distribution

Figure 11: Distribution of the EFFAS Bond Index Portfolio

Figure 12: Lower tail of the EFFAS Bond Index Portfolio distribution

Figure 13: Histogram of the Bond Portfolio

Figure 14: Histogram of the EFFAS Bond Index Portfolio

Figure 15: Development of the credit spreads in the Bond Portfolio

Figure 16: Comparison Real Price vs. Hypothetical Price

Figure 17: Eigenvectors of the first 3 principal components

List of Tables

Table 1: Explanation of variation with eigenvalues

Table 2: Decision Errors

Table 3: Contingency table for conditional coverage

Table 4: Basel Penalty Zones for confidence level 99%

Table 5: Cut off values for 99% and 99.9% confidence levels

Table 6: Asset allocation of the Bond Portfolio

Table 7: Results for GARCH VaR Unconditional Coverage Test

Table 8: Results for GARCH VaR Independence Test

Table 9: Results for GARCH VaR Regulatory Test

Table 10: Results for GARCH VaR Economic Costs

Table 11: Results over different timeframes for GARCH VaR Unconditional Coverage Test

Table 12: Results for PCA VaR Real P&L Unconditional Coverage Test

Table 13: Results for PCA VaR Real P&L Independence Test

Table 14: Results for PCA VaR Real P&L Regulatory Test

Table 15: Results for PCA VaR Real P&L Economic Costs

Table 16: Results over different timeframes for PCA VaR Real P&L Unconditional Coverage Test

Table 17: Results for PCA VaR hypothetical P&L Unconditional Coverage Test

Table 18: Results for PCA VaR hypothetical P&L Independence Test

Table 19: Results for PCA VaR hypothetical P&L Regulatory Test

Table 20: Results for PCA VaR hypothetical P&L Economic Costs

Table 21: Results over different timeframes for PCA VaR hypothetical P&L Unconditional Coverage Test

Table 22: Results for FHS VaR Unconditional Coverage Test

Table 23: Results for FHS VaR Independence Test

Table 24: Results for FHS VaR Regulatory Test

Table 25: Results for FHS VaR Economic Costs

Table 26: Results over different timeframes for FHS VaR Unconditional Coverage Test

List of Abbreviations

illustration not visible in this excerpt

Abstract

This thesis analyses three VaR models in detail. To begin with, there is a short description of the theoretical background of the models. Next, four different backtests are performed on two different portfolios for each of the three models. The source code used for the implementation is available in the appendix. The main part deals with the interpretation of the backtesting results. Each model is evaluated with the same backtests and dimensions, which allows the models to be compared with each other. The main outcome of this backtesting is knowledge of how a model should be calibrated and how robust it is. In a validation procedure, the author selects the calibration which yields the best results for each model.

1 Introduction

1.1 Relevance

In order to describe the risk of a portfolio, one general risk metric has chiefly been used by the financial industry since its introduction by JP Morgan in the mid-1990s, namely the Value at Risk (VaR), which describes potential future deviations from the expected return.[1]

Due to more dynamic markets and increasing correlation in downward scenarios, the internal and external demand for advanced Value at Risk models has increased. Nowadays the challenge in risk management is to find a robust model that reacts to current market movements, meets regulatory requirements, and minimizes economic costs.

"Measuring events that are unmeasurable can sometimes make things worse. A measuring process that lowers your anxiety level can mislead you into a false sense of security. [...] You have to start with knowledge of what's going on in the world and then possibly refine it with statistical methods, not the other way around.”[2]

Even though the VaR gives management a good overview of firm-wide risk, several risks go along with this measure. Due to its simplicity, professionals are sometimes tempted to use VaR as the most important risk benchmark. A deep understanding of the portfolio and possible market scenarios is crucial to avoiding unexpected losses. VaR is only one measure in the toolbox of a risk manager, and its relevance should therefore not be overestimated. Moreover, an understanding of how the model works and which data is used to calibrate it is needed to interpret and understand the results. In later chapters, the reader will see how details such as the calibration and the underlying data affect the result. The question as to whether the market environment used for model calibration is representative of the actual market should be raised constantly.[3]

The goal of this paper is to describe three different VaR methods and to compare them with respect to their backtesting results and practical usability.

It is not the purpose of this paper to present all existing VaR methods; it will focus on the following three: Principal Component VaR, GARCH VaR and FHS VaR.

Principal Component VaR was chosen because the framework has not yet been applied in a practical context. The goal of this thesis is to show under which circumstances this measurement method passes the backtests on its real Profit & Loss (P&L) and on its hypothetical Profit & Loss.

GARCH VaR is a well-known method for capturing the volatility of a risk factor. At its core GARCH VaR is a simple model, but the parameter calibration has a substantial impact on the backtesting results, which is why different combinations of observation period and confidence level will be used for VaR estimation. Filtered Historical Simulation VaR combined with GARCH volatility was chosen as an alternative to the parametric approaches. The model is very flexible in capturing the underlying risk profile and very easy to implement.

1.2 Research Questions

Do advanced VaR models result in a good estimation of the true loss in different portfolios over different timeframes?

1.3 State of the Literature

Since VaR has been a well-known topic in the financial industry for years, literature and practical material is easily accessible in many variations. Even though the VaR is only a single figure, there is a wide range of methods to arrive at it. The models cover parametric or historical approaches, or even involve simulation, as well as any mix of these. The complexity of the models varies, as does their practicability. Some models are valid only for specific risk factors, like the Principal Component Analysis for interest rate risk. Some models are more general, like a GARCH model that captures volatility clustering. Even though some simple models do not consider particular risk factors, their performance is not necessarily poor. Some models, like the GARCH model, are based on a simple idea and do not have very high requirements regarding the data or technical environment, but are very effective.

When it comes to the practical application, the heterogeneity in the literature is very high. Since the characteristics of the market change constantly, the practical use of VaR models is an evolutionary process. The recent models proposed in the literature mainly concern issues linked to credit spread and exogenous illiquidity.[4]

1.4 The Structure of the Thesis

After this introductory chapter the thesis will be structured as follows.

In chapter 2 the methodology of the thesis will be described. The main focus is on describing the VaR models and their implementation. In this chapter the model assumptions will be discussed as well as the backtesting methodology. The code of each model and each backtest can be found in the appendix.

Chapter 3 will explain which data is the basis for the empirical study. The most important part of this chapter is concerned with the construction of two bond portfolios and their statistical description. Furthermore, an outline of a type of regime switching in the credit spreads at a certain point in the data is given.

Chapter 4 will show how the models react to the above change of regime. After an introduction to the fundamental data, the results of the empirical study will be presented and analysed. The analysis will focus first on each model itself and then on the comparison between the models. Two statistical backtests will be performed as well as one regulatory and one economic backtest. Based on one statistical backtest, a time frame analysis will be performed to examine whether the performance of the VaR models depends on the time frame of the backtest.

The concluding chapter will sum up the key findings of each model as well as the differences and similarities between the models.

1.5 Definitions and Assumptions

The Value at Risk is defined as a loss in value terms that will not be exceeded with a predefined confidence level over a predefined holding period. These two parameters depend on regulatory requirements or individual internal limits. The assumptions applying to each VaR method will be described in the respective subsection.

The normal VaR has one important assumption that is not suited for practical applications: all returns are i.i.d. normally distributed.[5]

illustration not visible in this excerpt

The normal analytical VaR in value terms can be expressed in the following form:

illustration not visible in this excerpt
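Since the equation itself is not visible in this excerpt, a standard formulation consistent with the i.i.d. normal assumption above is, for portfolio value $P$, expected return $\mu$ and standard deviation $\sigma$ over the holding period, and confidence level $\alpha$,

\[ \mathrm{VaR}_{\alpha} = \left( \Phi^{-1}(\alpha)\,\sigma - \mu \right) P, \]

where $\Phi^{-1}(\alpha)$ is the standard normal quantile (e.g. approximately 2.33 for $\alpha = 99\%$).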

The VaR is only a threshold that will not be exceeded with a given confidence level. The VaR does not state how high the losses are in case the threshold is exceeded. A risk measure that gives the reader this information is called the expected tail loss or conditional VaR.[6]

illustration not visible in this excerpt

The expected tail loss is the average level of loss under the condition that the VaR is exceeded.
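In standard notation this conditional expectation reads, for the loss $L$ and confidence level $\alpha$,

\[ \mathrm{ETL}_{\alpha} = E\left[ L \mid L > \mathrm{VaR}_{\alpha} \right], \]

which is one common textbook definition; the exact notation of the original equation is not visible in this excerpt.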

2 Methodology

At the outset, a consistent set of data with good data quality was required. The first step was to select securities fulfilling the following requirements: there were to be no gaps in the time series of daily quotes; the portfolio was to be as diversified as possible with respect to issuer and industry, with as long an overlapping time range of all outstanding securities as possible in order to have a representative time range; and the specifications of each bond needed to be constantly available in order to perform sensitivity or credit spread calculations. Detailed information about the portfolio can be found in Appendix J.

The next step was to create a data warehouse in the form of an SQL database in which the historical data was saved, as well as specific data for each bond such as coupon, coupon frequency, issuer etc.

In step three the VaR model estimates for different parameters were systematically saved in the database. In a further step the time series of observed returns were compared against the estimated returns of the VaR models with the backtesting procedures described in Appendix A to Appendix D.

2.1 VaR Models

2.1.1 GARCH VaR

In the following chapter a well-known VaR method based on the idea of Robert Engle (1982) will be applied. He developed the ARCH model to estimate the dynamic behaviour of the conditional variance of an asset. Bollerslev introduced GARCH (generalized autoregressive conditional heteroskedasticity), a generalization of this model, in 1986. The simplest GARCH(p,q) model is a GARCH(1,1) model, which has only three parameters to estimate.[7]

As the behaviour in the market changes over time, a model which tries to explain the volatility in the market should react appropriately to recent market shocks. The principle of GARCH is simply to adjust the returns of a long historical sample to the actual market conditions.[8]

In the following chapter the GARCH-N and GARCH-T models will be applied to the portfolios named in chapter 3.1. The parameter estimation will be applied with the R package fGarch which is constantly enhanced and maintained by the Rmetrics core team.[9]

The basic information every volatility model is based on is the return $r_t$ from $t-1$ to $t$, denoted as $r_t = \ln(P_t / P_{t-1})$, where $P_t$ is the price at time $t$. Let us assume that the time series $r_t$ can be decomposed into a predictable and an unpredictable part.[10]

illustration not visible in this excerpt

Under the information set $I_{t-1}$ the predictable part is the conditional mean $E(r_t \mid I_{t-1})$. The deviation from the conditional mean is the unpredictable part $\varepsilon_t$, which is also called the innovation process or market shock.[11] The shock $\varepsilon_t$ is defined as an ARCH process:

In this case $x_t$ is a sequence of i.i.d. random variables distributed as $N(0,1)$. By rearranging we get:

illustration not visible in this excerpt
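The equations themselves are not visible in this excerpt; in standard notation the decomposition and the ARCH process described above read

\[ r_t = E(r_t \mid I_{t-1}) + \varepsilon_t, \qquad \varepsilon_t = \sigma_t x_t, \qquad x_t \sim \text{i.i.d. } N(0,1), \]

so that rearranging gives $x_t = \varepsilon_t / \sigma_t$.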

In general the standard GARCH(p, q) model is defined by:

illustration not visible in this excerpt
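A standard formulation, consistent with the description of the lags below (with $q$ lags of the squared shocks and $p$ lags of the conditional variance), is

\[ \sigma_t^2 = \omega + \sum_{i=1}^{q} \alpha_i\, \varepsilon_{t-i}^2 + \sum_{j=1}^{p} \beta_j\, \sigma_{t-j}^2. \]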

The parameter p indicates how many autoregressive lags appear, while q indicates how many moving average lags the volatility model uses. A higher order makes sense especially for a long time range of data. The additional lags are necessary to capture different seasonal components with unequal decay of information.[12]

The symmetric normal GARCH(1,1) model variance is given by the following equation:

illustration not visible in this excerpt

In order to keep the unconditional and conditional volatility positive and finite, some restrictions are necessary:

illustration not visible in this excerpt
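A standard form of this variance equation and the usual restrictions, consistent with the parameter discussion below, is

\[ \sigma_t^2 = \omega + \alpha\, \varepsilon_{t-1}^2 + \beta\, \sigma_{t-1}^2, \qquad \omega > 0, \quad \alpha, \beta \geq 0, \quad \alpha + \beta < 1, \]

with the long-term (unconditional) variance $\bar{\sigma}^2 = \omega / (1 - \alpha - \beta)$, which is one common way in which $\omega$ is tied to the long-term volatility.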

- Parameter $\alpha$ indicates how strongly the conditional volatility reacts to market shocks.

- Parameter $\beta$ indicates the persistence in conditional volatility. If this parameter is high, the effect of recent market events on volatility decays only slowly.

- The sum $\alpha + \beta$ describes the rate of convergence to the long-term average volatility. If $\alpha + \beta$ is high, changes in volatility are rather persistent.

- The constant parameter $\omega$ is related to the long-term volatility; it is defined via the long-term variance (see the equation above).

- In the equations above, $\varepsilon_t$ denotes the market shock or random innovation under the information set $I_{t-1}$.[13]

In general the estimation of the parameters is applied with a rolling window approach. In the following approach the function garchFit will be used to estimate the parameters $\alpha$, $\beta$ and $\omega$.

illustration not visible in this excerpt

illustration not visible in this excerpt

Figure 1: Rolling window in the parameter estimation process Source: Alexander (2008d) p.333

For the non-skewed GARCH-T model one additional parameter will be estimated. This parameter defines the degrees of freedom of the Student-t distribution. The authors of the package fGarch decided to cap the degrees of freedom at a level of 10. For this purpose a higher degree of freedom is desirable. Therefore the source code described in Appendix H was modified in this estimation process, so that a maximum degree of freedom of 100 is used. In this package MLE (maximum likelihood estimation) is used to fit the parameters.[14]
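As an illustration of this rolling-window procedure, a minimal R sketch using the fGarch package could look as follows; the variable names (returns, window) are illustrative and not those of Appendix G, and the unmodified garchFit (without the degree-of-freedom change described above) is used:

library(fGarch)

# Minimal sketch: rolling-window GARCH(1,1) fit and one-day 99% VaR forecast.
# 'returns' is assumed to be a numeric vector of daily portfolio returns.
rolling_garch_var <- function(returns, window = 900, alpha = 0.99) {
  n <- length(returns)
  var_est <- rep(NA_real_, n)
  for (t in (window + 1):n) {
    sample_r <- returns[(t - window):(t - 1)]          # rolling estimation window
    fit <- garchFit(~ garch(1, 1), data = sample_r,
                    cond.dist = "norm", trace = FALSE)  # GARCH-N fit via MLE
    fc <- predict(fit, n.ahead = 1)                     # one-step forecast
    var_est[t] <- -(fc$meanForecast +
                    qnorm(1 - alpha) * fc$standardDeviation)
  }
  var_est
}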

The maximum likelihood estimation is a common tool to estimate the parameters of a GARCH model. The density function of the normal or Student-t distribution is given by $D(x_t)$ or $D(x_t; \nu)$. The sample variable $x_t$ is standardized by $\mu$ and $\sigma$, which leads to $x_t = (r_t - \mu_t)/\sigma_t$. The parameters in the vector $\theta$ are estimated by maximizing the value of the log-likelihood function of $D$ over a sample period $R$:[15]

illustration not visible in this excerpt
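For the Gaussian case, a standard form of this log-likelihood (the exact notation of the original equation is not visible in this excerpt) is

\[ \ln L(\theta) = -\frac{1}{2} \sum_{t=1}^{R} \left( \ln(2\pi) + \ln \sigma_t^2(\theta) + \frac{\varepsilon_t^2}{\sigma_t^2(\theta)} \right), \]

with the Student-t case replacing the normal density $D$ accordingly.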

The log-likelihood function is liable to have convergence problems, which can be avoided by increasing the sample data set. The robustness of the estimation can be checked by choosing different initial values and comparing the results. It is also possible to plot the development of the parameters over time; the robustness of such estimates is indicated only by a smooth development of the parameters.[16] As an optimization routine fGarch uses the function nlminb to minimize the negative log-likelihood function. This routine uses a Newton-type algorithm for minimization.[17]

2.1.2 Principal Component VaR

The PCA (Principal Component Analysis) is a common statistical method which was introduced by Karl Pearson in 1901 and further developed by Harold Hotelling in 1933.[18] This analysis has the great advantage that a common behaviour of a huge dataset can be compressed into a few variables without losing much information. The PCA can be used for correlated datasets e.g. for the implied volatility of options or the interest rate term structure.[19] In this chapter the general concept of PCA will be explained and also an application to calculate the VaR with the help of PCA will be presented.

illustration not visible in this excerpt

Figure 2: Term structure of EUR zero-coupon rates (2003 to 2011) Source: Own construction.

The PCA can be performed with a covariance matrix or a correlation matrix based on changes of the zero-coupon yield curve. If a correlation matrix is the basis of the eigenvalues and eigenvectors, the variance of the time series will not be captured. In further chapters of this thesis, the use of a covariance matrix for V is preferred.

illustration not visible in this excerpt

The matrix $X$ has $n$ columns denoted by $x_1, x_2, \ldots, x_n$. These vectors represent a time series of $T$ daily changes of the zero-coupon yield curve for each maturity. The matrix $V$ has $n$ eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$. With orthogonalization a set of correlated variables will be transformed into a set of uncorrelated variables. The principal components are ordered by decreasing explanatory power, meaning that the first principal component explains the most variation, the second principal component explains the most of the remaining variation, etc., or mathematically $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$.[20]

The principal components can be displayed by the matrix $P$ with the dimensions $T \times n$.

illustration not visible in this excerpt

The vector $p_m$ is the $m$-th column of the matrix $P$. Each principal component $p_m$ is a time series with a dimension of $T \times 1$. The covariance matrix $T^{-1}P'P$ of the principal components is the diagonal matrix of eigenvalues $\Lambda$.

illustration not visible in this excerpt

The general definition of an eigenvector w of a square matrix V is any vector w where the following condition holds[21]:

illustration not visible in this excerpt

The same relation holds for a matrix $W$ of $m$ eigenvectors and a diagonal matrix $\Lambda$ with $n$ eigenvalues:[20] [21]

illustration not visible in this excerpt

illustration not visible in this excerpt
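In standard matrix notation, the two conditions referred to above read

\[ V w = \lambda w \qquad \text{and} \qquad V W = W \Lambda, \]

and, since the eigenvectors of the symmetric matrix $V$ can be chosen orthonormal ($W'W = I$), this yields the spectral decomposition $V = W \Lambda W'$.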

The variation explained by an eigenvalue is the proportion of the sum of all eigenvalues:

illustration not visible in this excerpt

In order to see how $k$ principal components replicate the input data, it is possible to write a linear combination of the principal component time series $P_k$ and the eigenvector matrix $W_k$. The equation $P = XW$ can be rewritten as:

illustration not visible in this excerpt

It may be written as a linear equation:

The vector $x_i$ has, for each time step $i$, $m$ columns which represent the maturities.
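A standard way to write this replication with $k$ components (consistent with $P = XW$ and the orthogonality of $W$) is

\[ X \approx P_k W_k', \qquad x_i \approx \sum_{m=1}^{k} p_{i,m}\, w_m', \]

where $P_k$ contains the first $k$ principal component time series and $W_k$ the corresponding eigenvectors.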

illustration not visible in this excerpt

Figure 4: Replication of the yield curve with 7 principal components Source: Own construction.

illustration not visible in this excerpt

Table 1: Explanation of variation with eigenvalues Source: Own construction.

Table 1 shows that the first 9 eigenvalues explain over 98% of the variation in the zero-coupon yield curve. The first 3 eigenvalues explain over 91%.
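The computations behind Table 1 can be sketched in a few lines of R; the object name yields is illustrative and assumes a $T \times n$ matrix of zero-coupon rates with one column per maturity:

# Minimal sketch: PCA of daily yield-curve changes via the covariance matrix.
X <- diff(as.matrix(yields))               # (T-1) x n matrix of daily changes
V <- cov(X)                                # covariance matrix of the changes
eig <- eigen(V)                            # eigenvalues and eigenvectors
W <- eig$vectors                           # columns are the eigenvectors w_m
P <- X %*% W                               # principal component time series
explained <- eig$values / sum(eig$values)  # variation explained per eigenvalue
round(cumsum(explained), 4)                # cumulative explanation, cf. Table 1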

illustration not visible in this excerpt

Figure 5: An example of a PV01 mapping at a specific date Source: Own construction.

In order to calculate an interest rate sensitive VaR via PCA, it is necessary to calculate a measure that captures the interest rate sensitivity. In this approach the PV01 is used to indicate the interest rate sensitivity of the whole portfolio. Each PV01 is mapped to a maturity via a function described in Appendix E.

In an attempt to simplify matters, it was decided to calculate the PV01 with an approximation where the modified duration is the input value for each bond. In general, the PV01 is the change in present value of a position after the interest rate level shifts one basis point upward.[23]

illustration not visible in this excerpt

If the Macaulay duration and the yield are known, the PV01 is defined as follows:[24]

illustration not visible in this excerpt
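A common approximation consistent with this description (the original equations are not visible in this excerpt) is

\[ \mathrm{PV01} \approx \mathrm{PV} \cdot D_{mod} \cdot 0.0001, \qquad D_{mod} = \frac{D_{Mac}}{1 + y}, \]

where $\mathrm{PV}$ is the present value of the position, $D_{mod}$ the modified duration, $D_{Mac}$ the Macaulay duration and $y$ the yield.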

The interest rate sensitivity $p$ combines the eigenvector matrix $W_k'$ with the dimension $k \times m$ and the PV01 vector $\theta$ with the dimension $m \times 1$. Based on the sensitivity vector $p$ combined with the diagonal matrix $D$ with $k$ eigenvalues $(\lambda_1, \lambda_2, \ldots, \lambda_k)$, the VaR with a confidence level of $\alpha$ can be calculated out of $k$ risk factors.[25]

illustration not visible in this excerpt
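Written out, this corresponds to a VaR of the standard parametric form

\[ p = W_k' \theta, \qquad \mathrm{VaR}_{\alpha} \approx \Phi^{-1}(\alpha) \sqrt{p' D\, p}, \]

where the scaling of the PV01 vector $\theta$ and of the eigenvalues (per basis point versus per unit yield change) has to be kept consistent; the exact formula used in the thesis is not visible in this excerpt.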

The complete R code is available in Appendix F.

2.1.3 Filtered Historical Simulation

Until now in this thesis only parametric approaches have been used. For linear portfolios, this type of model is widely used in practice. The main disadvantage of this approach is that the risk manager has to make assumptions regarding the return distribution of each asset or the portfolio as a whole. As can be seen in Figure 10, the assumption of normality cannot be made for the portfolios used in this thesis.

To overcome this problem, Barone-Adesi et al. (1998) proposed a Value at Risk model which is able to capture the empirical (non-parametric) historical distribution. These historical returns are combined with a parametric GARCH model. The historical returns are scaled to the current market conditions with the past conditional volatility.[26]

The great advantage is that with this model the risk factor distribution is observed historically without any assumptions on the behaviour of the risk factors. Based on the behaviour of these risk factors, one can perform a pricing simulation and infer a VaR estimate. This simulation can be path dependent for an arbitrary time interval. For options especially, this approach yields a more realistic risk examination. The main idea of FHS is to use the empirical distribution of returns and to scale it so that the scaled series is i.i.d. (independently and identically distributed). This condition is fulfilled if the return series has no serial correlation and no volatility clusters. The GARCH innovations can be standardized by scaling them with the estimated daily GARCH standard deviation.[27]

By considering the formulas of chapter 2.1.1, we can transform the residuals $\varepsilon_t$ to an i.i.d. series by:

illustration not visible in this excerpt
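In the notation of chapter 2.1.1 this standardization is simply

\[ x_t = \frac{\varepsilon_t}{\hat{\sigma}_t}, \]

i.e. each residual is divided by its estimated conditional GARCH standard deviation.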

The series $x_t$ has neither serial correlation nor volatility clusters. For the sake of consistency and comparability it was decided to calculate only the one-day VaR. In addition, the multi-step algorithm will be explained, even though it is not needed in the calculation.

From this series $x_t$ a random draw will be multiplied with the deterministic volatility forecast.

illustration not visible in this excerpt

The volatilities for $i = 2, 3, \ldots$ can be simulated by a random draw of $z_{t+i-1}$.

illustration not visible in this excerpt

For a VaR estimation of 10 days, this path will be completed until $i = 10$. To obtain a representative distribution of the returns, this simulation will be repeated many times. Barone-Adesi et al. (1998) suggest 5,000 simulations. In this thesis, 5,000 and 10,000 simulations will be applied. For the sake of simplicity, the simulations here are based on portfolio returns. In the literature, it is suggested that a simulation on asset basis would lead to a more realistic reflection of the variances and co-movements of the asset returns.[28]

illustration not visible in this excerpt

The 10-day return of this path for asset $a$, where $x_{a,t}$ is a random draw from the standardized return distribution, is as follows:

illustration not visible in this excerpt

The histograms of the two portfolios can be seen in Figure 13 and Figure 14.
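A minimal R sketch of the one-day FHS VaR described above, again based on fGarch and with illustrative names (fit is assumed to be a garchFit object estimated on the rolling window), could be:

# Minimal sketch: one-day FHS VaR from standardized GARCH residuals.
z <- residuals(fit, standardize = TRUE)        # historical i.i.d. shocks x_t
sigma_fc <- predict(fit, n.ahead = 1)$standardDeviation  # volatility forecast
sims <- sample(z, size = 10000, replace = TRUE) * sigma_fc  # rescaled redraws
fhs_var <- -quantile(sims, probs = 0.01)       # one-day 99% VaR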

2.2 Model Validation

The main objective of this thesis is to evaluate whether any given VaR model has a high or low predictive power compared to the other VaR models. Four validation methods were used, as described in this chapter. The first two tests are of a statistical nature and the second two are based on regulatory requirements.

In a statistical hypothesis test, a null hypothesis H0 has to be rejected in the event of a significant outcome. In each case there is a possibility that this decision to reject was incorrect. These possibilities are called type 1 and type 2 errors and depend on the significance level. A type 1 error occurs if a correct model is falsely rejected. A type 2 error occurs if an incorrect model is falsely accepted. By increasing the significance level, the probability of accepting an incorrect model (type 2 error) is reduced, but the probability of rejecting a correct model (type 1 error) is increased.[29]

illustration not visible in this excerpt

Table 2: Decision Errors Source: Jorion (2009) p.89

2.2.1 Unconditional Coverage Test

The unconditional coverage test measures the accuracy of the forecasts for a specified interval of the distribution. It measures the coverage probability by a likelihood ratio which compares the expected proportion of returns in a defined confidence interval, $n_{exp}$, with the observed proportion of returns in that interval, $n_{obs}$.[30]

illustration not visible in this excerpt
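The likelihood ratio itself is not visible in this excerpt; in Kupiec's standard formulation, with $x$ exceedances in $n$ observations, expected exceedance probability $p$ and observed proportion $\hat{\pi} = x/n$, it reads

\[ LR_{uc} = \frac{p^{x} (1-p)^{n-x}}{\hat{\pi}^{x} (1-\hat{\pi})^{n-x}}. \]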

illustration not visible in this excerpt

Figure 6: p-Value for n=1000, confidence interval=1% Source: Own construction.

The quantity $-2\ln(LR_{uc})$ is $\chi^2$ distributed with one degree of freedom.[31]

Considering the formula for the likelihood ratio one can see that the distance of the observed returns to the VaR estimates is irrelevant. The ratio of the exceeded VaR estimates to the observed returns is the only factor that is relevant for rejecting the null hypothesis. Furthermore, this test neglects the fact that the VaR estimate may be exceeded several times in a row, which indicates that the underlying volatility adjustment does not work properly.[32]

In the empirical study in chapter 4, the test will be performed at a 5% significance level, which is the probability that a type 1 error occurs. Since the likelihood ratio follows a $\chi^2$ distribution, it is also possible that models with a very low number of exceedances will be rejected. This type of rejection will be commented on in the empirical part of the thesis in order to avoid misleading conclusions.

2.2.2 Independence Test

The conditional coverage test was introduced by Christoffersen (1998) as an extension to the unconditional coverage test. The main idea of this test is the possibility to penalize models which are not able to adjust their conditional volatility to the current market conditions. If the VaR estimate is exceeded several times in a row, the model adjusts its volatility too slowly and is therefore not suitable, even if it passes the unconditional coverage test.[33] The test is performed by defining an indicator that has a binary value:[34]

illustration not visible in this excerpt

In the next step $n_{ij}$ will be defined as the number of days on which condition $j$ occurs while on the previous day condition $i$ was fulfilled. The different outcomes are illustrated in Table 3:

illustration not visible in this excerpt

Table 3: Contingency table for conditional coverage Source: Nieppola (2009) p.27

The scheme above leads to the conclusion that e.g. $n_{01}$ counts the returns $r_t$ that do not exceed the VaR estimate but are followed by an exceedance. The variable $n_{11}$ counts the returns $r_t$ that exceed the VaR estimate and are followed by an exceedance. The variable $n_{10}$ counts the returns that exceed the VaR estimate but are not followed by an exceedance. The variable $n_{00}$ counts the returns that do not exceed the VaR estimate and are not followed by a VaR violation.

According to Table 3 the proportional exceedances $\pi_{01}$ and $\pi_{11}$ can be calculated:

illustration not visible in this excerpt

The independence test statistic is given by:[35]

illustration not visible in this excerpt
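In Christoffersen's standard notation (the original equations are not visible in this excerpt), the proportions and the test statistic read

\[ \hat{\pi}_{01} = \frac{n_{01}}{n_{00}+n_{01}}, \qquad \hat{\pi}_{11} = \frac{n_{11}}{n_{10}+n_{11}}, \qquad \hat{\pi} = \frac{n_{01}+n_{11}}{n_{00}+n_{01}+n_{10}+n_{11}}, \]

\[ LR_{ind} = \frac{(1-\hat{\pi})^{n_{00}+n_{10}}\; \hat{\pi}^{\,n_{01}+n_{11}}}{(1-\hat{\pi}_{01})^{n_{00}}\; \hat{\pi}_{01}^{\,n_{01}}\; (1-\hat{\pi}_{11})^{n_{10}}\; \hat{\pi}_{11}^{\,n_{11}}}. \]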

The quantity $-2\ln(LR_{ind})$ is $\chi^2$ distributed with one degree of freedom. The null hypothesis $H_0$ states that the exceedances are independent. This test deals only with the dependence and not with the accuracy of the model; this is evident because the expected coverage probability is not part of the likelihood ratio $LR_{ind}$.

2.2.3 Regulatory Backtest

Since banks have to cover potential losses with a capital buffer based on a VaR model, the model's accuracy has to be evaluated. This backtest is based on the daily 99% VaR estimates within the last 250 trading days. A VaR model with a high accuracy will be rewarded with a low scaling factor. This scaling factor $S_t$ is needed for the market risk capital requirement $MCA_t$.

$S_t$ is defined by $x$, the number of VaR exceedances over 250 trading days. The three zones indicate the quality of a model.

illustration not visible in this excerpt

Table 4: Basel Penalty Zones for confidence level 99% Source: Jorion (2009) p.709

If a model is assigned to the green zone, the probability is very low that an inaccurate model will be accepted (type 2 error). The cumulative probability shows that the cut-off value to the yellow zone is 95.88%. This means that there is a 4.12% probability that a green model will be assigned to the yellow or red zone (type 1 error).[36]

The yellow zone does not indicate whether a model should be rejected, because the number of exceptions could be produced by accurate and inaccurate models within the same probability range. The Basel Committee on Banking Supervision defined in its Annex 10a the following reasons for exceptions:[37]

Basic integrity of the model: the bank's systems are not able to measure the risk of the positions in the portfolio, or the volatilities and/or correlations used by the model were calculated incorrectly.

Model's accuracy could be improved: the model estimates the VaR with insufficient accuracy.

Bad luck or markets moved in a fashion unanticipated by the model: this arises when the main requirements of the model were fulfilled but the VaR was nevertheless underestimated, for example if low-frequency events with high severity occur. Another reason could be an unexpected market movement of volatility or correlations, which could not be predicted by the model.

Intra-day trading: if the model uses historical data and a large intra-day loss has occurred between calculating the VaR and reporting it, it is impossible to consider this change in the portfolio for that day.

A model which is assigned to the red zone has a very small probability of 0.01% of being falsely rejected (type 1 error). Therefore the Basel Committee on Banking Supervision has defined a higher scaling factor and a subsequent investigation of the model by the supervisor. In case of a regime shift, volatilities and correlations are no longer representative and will lead to many exceptions in a row. Under such extraordinary circumstances the bank has the possibility to adapt the model to the new regime.[38]

The regulatory backtest follows a Bernoulli trial involving a binomial distribution with $p = 1 - 99\% = 0.01$ and $n = 250$. The parameter $p$ is the success probability of $n$ trials where each outcome is either $x = 1$ or $x = 0$. The expected value of the binomial variable is defined by $E[X] = pn$ and its variance by $V[X] = p(1-p)n$. With the given parameters the mean $E[X]$ is 2.5 and the variance $V[X]$ is 2.475. The standard deviation amounts to 1.57. The distribution of this experiment can be seen in Figure 7.
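These quantities can be reproduced directly with R's binomial functions; the cut-off values reported in Table 5 should correspond to binomial quantiles of this kind:

# Binomial backtesting distribution with p = 0.01 and n = 250.
pbinom(5, size = 250, prob = 0.01)      # ~0.9588: cumulative probability at
                                        # the green/yellow boundary quoted above
qbinom(0.99, size = 250, prob = 0.01)   # cut-off value at 99% confidence
qbinom(0.999, size = 250, prob = 0.01)  # cut-off value at 99.9% confidence
250 * 0.01                              # mean E[X] = 2.5
250 * 0.01 * 0.99                       # variance V[X] = 2.475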

Based on the binomial density function, the cut-off values for the 99% and 99.9% confidence levels are given as follows:

illustration not visible in this excerpt

Table 5: Cut off values for 99% and 99.9% confidence levels Source: Own construction

The market risk capital requirement based on 10-day 99% VaR estimates is defined by:

illustration not visible in this excerpt
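The Basel market risk capital requirement is commonly stated as (a standard formulation; the exact equation of the thesis is not visible in this excerpt)

\[ MCA_t = \max\left( \mathrm{VaR}_{t-1},\ S_t \cdot \frac{1}{60} \sum_{i=1}^{60} \mathrm{VaR}_{t-i} \right) + c, \]

i.e. the maximum of yesterday's 10-day 99% VaR and the scaled 60-day average of past VaR estimates, plus the capital adjustment $c$ mentioned below.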

The supervisor has the possibility to adjust the market risk capital requirement by an absolute capital adjustment $c$ which depends on qualitative criteria.[39]

2.2.4 Economic Cost of VaR forecasts

The main disadvantage of the unconditional and conditional coverage tests is that they measure only the number of exceedances. Models where the VaR is underestimated only a few times but by severe amounts will thus be accepted, while models where the VaR is underestimated many times but only by small amounts will be rejected. Financial institutions have to cover their potential market risk exposure with enough regulatory capital. For this purpose the 1-day 99% VaR estimate is used as a capital requirement. If a VaR model constantly overestimates the capital requirement, the financial institution faces higher opportunity costs from providing this additional capital. If a VaR model underestimates the capital requirement, the financial institution faces additional capital requirements and costs to reallocate its portfolio. In order to reflect these additional costs, a penalty factor of 1.2 is used. This relationship can be described in an economic cost function $EC_t$. According to this test, the cumulated interest summed up over the backtesting period represents the opportunity costs of the model.[40]

illustration not visible in this excerpt

For this purpose the EONIA rate was used as the interest rate.
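One plausible reading of this cost function, sketched in R with illustrative names (var_est as the positive 1-day 99% VaR, loss as the realized loss, eonia as the daily EONIA rate), is the following; the exact functional form of $EC_t$ is not visible in this excerpt:

# Hedged sketch: daily economic cost of holding VaR-based capital.
# Capital equal to the VaR estimate accrues interest at the EONIA rate;
# days on which the loss exceeds the VaR are penalized with factor 1.2.
ec <- ifelse(loss <= var_est,
             eonia * var_est,          # opportunity cost of the capital held
             1.2 * eonia * var_est)    # penalized cost on exceedance days
annual_ec <- sum(ec) * 250 / length(ec)  # scaled to one year, cf. chapter 4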

2.2.5 Procedure for Model Validation

In order for some conclusions to be drawn from the tests performed in chapter 4, the following procedure will be applied for each portfolio and confidence level:

illustration not visible in this excerpt

Figure 8: Procedure for Model Validation Source: Own construction

The first step is validation via the unconditional coverage test. Once a model has been rejected in one test, it is excluded from further testing. The remaining models have to pass the independence test. All successful models have to pass the regulatory test at least in the yellow zone. Models in the green zone are preferred, which means that models in the yellow zone are dropped. Eventually, the model with the lowest economic costs is selected as the favourite model.

3 Data

The following portfolios were applied to the backtests described in chapter 4. To be consistent within the portfolios, the author decided to use price indices, because performance indices have a trend component (accrued interest or dividends) which distorts the real downside risk in a time series. Both portfolios start on 2004-11-26 and end on 2011-09-23. This time range was chosen because it was not possible to find a greater intersection of issued bonds that fulfil the portfolio and data quality criteria.

3.1 Portfolios

3.1.1 Bond Portfolio

The following portfolio is an example of the Hold-to-Maturity portfolio of an Austrian bank.[41] A dominant part of the portfolio consists of financial issues and government bonds. The returns are calculated with clean prices and the weighting of each bond is equal. The assets in the portfolio were held to maturity without any rebalancing.

The reader can see in Table 6 that especially government bonds suffered great losses.[42] These losses arise mainly out of the sovereign debt crisis, which affected the high proportion of Hellenic Government bonds. Looking at the composition of this bond portfolio, it appears that the portfolio is not well diversified across issuers and sectors. This fact leads to a higher cluster risk and increases the requirement on the VaR model to react to current market circumstances. More than half of the portfolio is invested in the financial sector and about one quarter is invested in government bonds. No sector has a positive annual return. The standard deviation and the downside range around the mean also characterize the risk inherent in this time range.

illustration not visible in this excerpt

Table 6: Asset allocation of the Bond Portfolio

From the density plot in Figure 10 the reader can see that especially in the fat tails the normal distribution underestimates the density. The Student-t distribution has a better overall fit to the empirical distribution; even though it underestimates the empirical distribution in the long tails, it is still a better approximation for the VaR model. The empirical distribution has a kurtosis of 1.76 and a skewness of -0.12. The mean of this portfolio is -0.80% and the standard deviation is 3.31%. The p-value of the Jarque-Bera test is below 2.2e-16, which indicates that the null hypothesis of normality can be rejected.

illustration not visible in this excerpt

Figure 9: Distribution of the bond portfolio Source: Own construction.

illustration not visible in this excerpt

Figure 10: Lower tail of the Bond Portfolio distribution Source: Own construction.

3.1.2 Bloomberg/EFFAS Bond Indices

The portfolio described below consists of three bond indices provided by EFFAS (European Federation of Financial Analyst Societies). The three bond indices are price indices clustered by maturity groups and are rebalanced monthly. For this purpose each bond index is weighted equally. The bond index with the ticker EU15PR contains European Government bonds with an average duration of one to five years. The ticker EU50PR contains European Government bonds with an average duration of five to ten years. The ticker EG5PR contains European Government bonds with an average duration greater than ten years.

Below, the reader can see a density plot comparing the empirical distribution with the analytical normal and Student-t distributions. The mean of this portfolio is -0.52% and the standard deviation is 4.38%. The empirical distribution has a kurtosis of 2.65 and a skewness of 0.19. The reader can see that the EFFAS Bond Index Portfolio is more leptokurtic, with a higher standard deviation and thicker tails than the Bond Portfolio described in chapter 3.1.1. A reason for this difference is the unusually high volatility of European Government bonds in the years 2008 to 2011 caused by the sovereign debt crisis. The p-value of the Jarque-Bera test is below 2.2e-16, which indicates that the null hypothesis of normality can be rejected.

Details of the time series can be seen in Appendix K and Appendix L.

illustration not visible in this excerpt

Figure 11: Distribution of the EFFAS Bond Index Portfolio Source: Own construction.

illustration not visible in this excerpt

Figure 12: Lower tail of the EFFAS Bond Index Portfolio distribution Source: Own construction.

illustration not visible in this excerpt

Figure 13: Histogram of the Bond Portfolio Source: Own construction.

illustration not visible in this excerpt

Figure 14: Histogram of the EFFAS Bond Index Portfolio Source: Own construction.

3.2 Time Frames

In order to test whether the performance of a model is independent of the current market situation, the author suggests splitting the observation horizon into two time frames. The main reason for this separation is the evidence of a regime switch after the year 2008 concerning the changing behaviour of the credit spreads. In Figure 15 one can see the development of the credit spreads of the Bond Portfolio. In order to keep the analysis clear, the author decided to attach this time frame analysis to each backtesting part of chapter 4. For each model only the unconditional coverage backtest will be performed on the two time frames. These two time frames will later be called pre-2008 and post-2008.

The time frame pre-2008 starts at 2004-11-26 and ends at 2007-12-31. Post-2008 starts at 2008-01-01 and ends at 2011-09-22.

4 Empirical Results

In this chapter, four different backtests will be performed for each model. Before the reader continues to the results of the backtests, the following information is helpful for interpreting the results:

The results of the unconditional coverage test and the independence test are denoted as p-values. For the unconditional coverage test, the null hypothesis $H_0$ states that there is no significant difference between expected and realized observations. If the test is significant ($p < 0.05$), the null hypothesis will be rejected. In this case the alternative hypothesis $H_a$ concludes that there is a significant difference between expected and realized observations. A graphical representation can be found in Figure 6.

For the independence test, the null hypothesis $H_0$ states that the VaR exceedances are independent. If the test is significant ($p < 0.05$), the null hypothesis will be rejected. Then the alternative hypothesis $H_a$ concludes that the dependency of the VaR exceedances is significant.

The regulatory test is a daily representation of the VaR exceptions, looking only at the last 250 days. If a model is evaluated by this backtest, it is not enough to look at the last 250 days of the time series. An alternative is to move through the time series and perform a backtest every day by looking at the preceding 250 days. In some timeframes more VaR exceptions will occur than in others. An example of a regulatory backtest time series can be found in Appendix M. For this purpose the author decided to take the worst situation (maximum) of each time series.

The results of the economic test are the annual interest rate costs relative to the portfolio value. In order to make the results comparable, the accrued interest costs are scaled to one year.

The last test examines how well a model performs in the pre-2008 and post-2008 periods with respect to the unconditional coverage test. For the parametric GARCH model and the FHS, the observation period of 900 days will not be covered, because the pre-2008 data series is shorter than the observation period. For the same reason, the results for the observation period of 600 days in the pre-2008 sample are less reliable than those of the post-2008 sample.

4.1 GARCH VaR Results

The four backtests are performed for each portfolio with observation periods of 105, 315 and 900 days. For each period a 99% and a 99.9% VaR GARCH-N and VaR GARCH-T estimate will be backtested.

Looking at the parameter evolution (see Appendix O), it can be seen that especially the parameters for an observation period of 105 days are very unstable. In general it can be said that the longer the observation period, the more stable the parameters. The parameters for the observation period of 315 days are more stable for GARCH-N, but for GARCH-T the fluctuation of the parameters is still very high. It is possible that the additional parameter (degree of freedom) that has to be estimated leads to a more unstable estimation result.

With an observation period of 600 days the parameter estimation is robust only for the GARCH-N model. Looking at the GARCH-T model for the EFFAS Bond Index Portfolio, the parameters are rather unstable in the most volatile times (2007 to 2008) and the estimation process leads to some convergence problems in this period.

With a sample size of 900 days, the estimated results are very stable. Looking at the robustness of the estimation results, it can be seen that many convergence problems occur for shorter observation periods like 105 and 315 days. The t-test of every parameter is insufficient for an observation period of 105 days regardless of the model. For the observation period of 315 days the t-test is insufficient especially for the time span 2010-06-01 to 2010-10-30, and the estimation leads to convergence problems in this period. In the time span from 2007-02-01 to 2009-01-01 the robustness of the estimation of alpha and beta is good, but for omega it is not sufficient for GARCH-N.

For the observation period of 900 days the t-test yields far better results for both models over the whole time range. In general, one can conclude that an observation period of 900 days is enough to obtain robust parameter estimates and at the same time good backtesting results.

Unconditional Coverage Test

illustration not visible in this excerpt

Table 7: Results for GARCH VaR Unconditional Coverage Test Source: Own construction.

The only two variations that can be rejected at a 5% significance level are the 99% GARCH-T (105) models in each portfolio. These rejections could be misleading: in both cases the observed exceedances are lower than expected. Due to the fact that the unconditional coverage test follows a $\chi^2$ distribution, model variations with fewer exceedances than expected have a lower p-value. Therefore it is recommended to compare the results with the regulatory backtest. From the results above, no trend is visible regarding the sample size. Also regarding the question whether a risk manager should use GARCH-N or GARCH-T, a clear recommendation cannot be extracted from this result.

Independence Test

illustration not visible in this excerpt

Table 8: Results for GARCH VaR Independence Test Source: Own construction.

The independence test checks whether the VaR exceedances are dependent. If the p-value of the independence test is lower than the significance level of 5%, the hypothesis that the exceedances are independent can be rejected. For the GARCH VaR, only GARCH-T (99%) with a 900-day observation period in the Bond Portfolio can be rejected. In general the p-values for the Bond Portfolio at 99% are lower than for the EFFAS Bond Index Portfolio. At a 99.9% confidence level the independence is stronger than at a 99% confidence level.

illustration not visible in this excerpt

Table 9: Results for GARCH VaR Regulatory Test Source: Own construction.

From a regulatory point of view all models passed the test. As mentioned before, a trend is not visible. For the Bond Portfolio with a confidence level of 99% and sample sizes of 315 and 600 days, the VaR exceedances are higher than for sample sizes of 105 and 900 days. For the EFFAS Bond Index Portfolio this relation does not hold: here the highest exceedances occur with a GARCH-N model with a 105-day sample size or with a GARCH-T model with a sample size of 600 days.

It is surprising that the GARCH-T model with a 105-day sample size yields very good results. Due to the very small sample size, the parameters are more flexible (see Appendix O). This leads to a fast adjustment of the volatility. But again, it has to be said that the t-test of many parameter estimates is not significant with this small sample size.

Economic Costs

As in the regulatory test, a clear trend cannot be found here. At a confidence level of 99% the GARCH-T model with a sample size of 105 days yields the lowest economic costs. The reason is that this model reacts to changes in the market very fast and does not constantly overestimate the VaR. For the confidence level of 99.9% this relation is exactly reversed: with a leptokurtic Student-t distribution the volatile parameters lead to a very high overestimation at this confidence level. More stable parameters are more suitable for higher confidence levels.

illustration not visible in this excerpt

Table 10: Results for GARCH VaR Economic Costs Source: Own construction.

As already mentioned, the sample size is crucial for the estimation process. Especially for small observation periods, the model lacks robustness. Even though the model passes all backtests, it is possible that the calibration process is not stable and is therefore not valid from a statistical point of view. Therefore the author does not recommend choosing a small sample size like 105 days. For some portfolios even sample sizes around 315 days can be too small.

Timeframes

The analysis of the different time frames leads to the conclusion that for the Bond Portfolio the difference between the pre-2008 and post-2008 backtesting results is higher than for the EFFAS Bond Index Portfolio. For the EFFAS Bond Index Portfolio, one can see that the results of the backtest are considerably better in the pre-2008 period. It is therefore not valid to say that the model delivers a VaR estimation that is equally good for each timeframe. One conclusion could be that the regime switch had an unexpected effect on the assumed distribution of returns for the EFFAS Bond Index Portfolio.

illustration not visible in this excerpt

Table 11: Results over different timeframes for GARCH VaR Unconditional Coverage Test Source: Own construction.

Conclusion

The results of the backtests above can be summarized shortly. By applying the procedure described in chapter 2.2.5, the conclusion is that for the confidence interval of 99%, the best model would be a 315-GARCH-T model for the Bond Portfolio and a 900-GARCH-T model for the EFFAS Bond Index Portfolio. For a confidence interval of 99.9%, the observation period of 900 days yields the best result for both portfolios. The question whether to use a GARCH-T or a GARCH-N model depends on the portfolio. For the Bond Portfolio a GARCH-N model is not sufficient at a 99.9% confidence level. If both models are good enough, the GARCH-N model in general leads to lower economic costs.

4.2 PCA VaR Results

The four backtests are performed for each portfolio with observation periods of 105, 315 and 900 days. Additionally, the tests will be performed on the hypothetical P&L of each portfolio. For each period a 99% and a 99.9% VaR with 3 principal components (stated as 3 PC) and 7 principal components (stated as 7 PC) will be backtested.

The reason why the backtest is performed on the real P&L and the hypothetical P&L is the following: the only source of risk considered by the PCA VaR is the interest rate risk. Changes in the credit spread will not be captured. In Figure 15, the annual credit spread of the Bond Portfolio is plotted. The author wants to evaluate to what extent the two backtests differ.[43]

illustration not visible in this excerpt

Figure 15: Development of the credit spreads in the Bond Portfolio Source: Own construction.

4.2.1 Real P&L

This subsection describes the results of the VaR backtests compared with the real P&L. By comparing Figure 3 and Figure 4, one can see that the short-term interest rates in the years 2007 to 2009 could not be replicated fully with only 3 principal components. Therefore the author decided to perform the tests also with 7 principal components. The tests show that an increase in the number of principal components does not lead to a more accurate VaR estimate. In all cases, the results for 3 PC and 7 PC were identical, or the differences were minor. One reason for this is that most cash flows are mapped to higher maturities, and those higher maturities could be replicated even with 3 PC without any loss of information.

In general a smaller observation period leads to a more dynamic behaviour of the VaR estimate. The longer the observation period, the smoother the VaR estimate will be. In the case of 900 days the chance of many exceedances is very high if the interest rate landscape changes very fast.

Unconditional Coverage Test

illustration not visible in this excerpt

Table 12: Results for PCA VaR Real P&L Unconditional Coverage Test Source: Own construction.

For the Bond Portfolio, no combination passed the test at a 5% significance level. The situation is quite different for the EFFAS Bond Index Portfolio. At a confidence level of 99%, only the VaR estimate with an observation period of 315 days did not pass the test. At a confidence level of 99.9%, only the tests with observation periods of 105 and 315 days passed the test. It can be seen that the backtest result depends highly on the underlying portfolio.

Independence Test

illustration not visible in this excerpt

Table 13: Results for PCA VaR Real P&L Independence Test Source: Own construction.

The independence test indicates that for the Bond Portfolio especially longer observation periods are affected by dependent VaR exceedances. In those cases, several VaR exceedances occur consecutively. Since those models did not pass the unconditional coverage test anyway, the model choice is not further restricted. For the EFFAS Bond Index Portfolio, the null hypothesis $H_0$ cannot be rejected.

The graphical analysis in Appendix N leads to the conclusion that many clusters appear in the period where the short-term interest rate level and the credit spreads increased, especially for longer observation periods.

Regulatory Test

The results of the regulatory test are clear: the longer the observation period, the worse the regulatory backtest results. At a confidence level of 99%, only the VaR estimation with a sample size of 105 days passed the test.

At a confidence level of 99.9%, again only the VaR estimation with a sample size of 105 days passed the test for the Bond Portfolio. For the EFFAS Bond Index Portfolio, estimates with a sample size of 315 days also passed the regulatory test. This result is not fully consistent with the results of the unconditional coverage test.

The results suggest that at some point in time the maximum number of exceedances within the last 250 days is very high with a sample size of e.g. 600 days in the EFFAS Bond Index Portfolio. On average, the exceedances are still low; otherwise the model would not pass the unconditional coverage test.

illustration not visible in this excerpt

Table 14: Results for PCA VaR Real P&L Regulatory Test Source: Own construction.

Economic Costs

illustration not visible in this excerpt

Table 15: Results for PCA VaR Real P&L Economic Costs Source: Own construction.

The economic costs are directly linked to the results of the regulatory test. Since the results got worse with a longer observation period, the results here are similar. For a confidence level of 99%, the best model uses 7 principal components with a sample size of 105 days. As already mentioned, the results do not differ notably between 3 PC and 7 PC. For a confidence level of 99.9%, models with a sample size of 105 days also yield the best results. Here, 7 PC yield the best result for the Bond Portfolio. For the EFFAS Bond Index Portfolio, the lowest economic costs arise with 3 PC.

Timeframes

For the Bond Portfolio, the results in the table below are significant. The unconditional coverage test in the pre-2008 period yields far better results than in the post-2008 period. For the EFFAS Bond Index Portfolio, the results are more balanced. This leads to the conclusion that the model has problems reacting appropriately to a regime switch. In Appendix N this circumstance can be seen.

illustration not visible in this excerpt

Table 16: Results over different timeframes for PCAVaR Real P&L Unconditional Coverage Test Source: Own construction.

Conclusion

By applying the procedure described in chapter 2.2.5, the conclusion is that for the Bond Portfolio, no combination is appropriate at either confidence interval. For the EFFAS Bond Index Portfolio, one can see that the best results come from the model with 7 principal components and an observation period of 105 days. Both variants reach only the yellow zone of the regulatory test.

4.2.2 Hypothetical P&L

This subsection describes the results of the VaR backtests on the hypothetical P&L. This P&L is generated from hypothetical prices, without any credit or liquidity spread. The price of each bond with annual coupon payments C and a notional of N is calculated as follows:

illustration not visible in this excerpt

The only uncertainty in this equation is the interest rate z_t from the zero-coupon yield curve. Figure 16 shows the divergence between the real price and the present value of the portfolio. This graph, in combination with Figure 15, suggests that until mid-2008 the credit spread played a minor role. The fact that the credit spread widened does not necessarily mean that the returns at high confidence levels differ to a great extent.
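
The omitted equation is presumably the standard present value under the zero-coupon curve, P = Σ_{t=1}^{T} C/(1+z_t)^t + N/(1+z_T)^T. A minimal sketch of this presumed formula:

```r
# Hypothetical (spread-free) price of a bond with annual coupon C,
# notional N and T years to maturity, discounted with the zero-coupon
# rates z[1..T] -- a sketch of the presumed pricing formula.
hypothetical_price <- function(C, N, z) {
  mat <- length(z)               # years to maturity
  t <- seq_len(mat)
  sum(C / (1 + z)^t) + N / (1 + z[mat])^mat
}

# Example: a 5-year bond with a 4% annual coupon on a notional of 100
hypothetical_price(C = 4, N = 100, z = c(0.010, 0.013, 0.016, 0.018, 0.020))
```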

illustration not visible in this excerpt

Figure 16: Comparison Real Price vs. Hypothetical Price Source: Own construction.

Unconditional Coverage Test

illustration not visible in this excerpt

Table 17: Results for PCA VaR hypothetical P&L Unconditional Coverage Test Source: Own construction.

At a 5% significance level, the null hypothesis can be rejected for all models. This result, based on the hypothetical P&L, is worse than for the real P&L. Appendix N shows the time series of both P&Ls; the hypothetical P&L has more outliers. One reason for this is that the present value of the bond reacts instantaneously to changes in the zero-coupon yield curve, whereas the observed market prices are more sluggish and do not adjust immediately to the fair value of the bond. The hypothetical P&L is therefore sometimes more volatile than the real P&L, which shows up in all further tests for the hypothetical P&L.

Independence Test

illustration not visible in this excerpt

Table 18: Results for PCA VaR hypothetical P&L Independence Test Source: Own construction.

These results indicate that the null hypothesis H0 that the VaR exceedances are independent cannot be rejected, which means that the exceedances do not occur consecutively. Since no model passed the unconditional coverage test, these results are not used for further selection.

Regulatory Test

illustration not visible in this excerpt

Table 19: Results for PCA VaR hypothetical P&L Regulatory Test Source: Own construction.

From a regulatory point of view, only the model with a sample size of 105 days passed the regulatory test at a confidence level of 99%. For a confidence level of 99.9%, models with a sample size of 315 days passed the test as well. These results show that the regulatory backtest has a very high tolerance: models that do not pass the conditional or unconditional coverage test still pass the regulatory test. One reason for this is the very small probability of a type I error in this test.

Economic Costs

The sample size of 105 days with 7 principal components yields the lowest economic costs for each portfolio and confidence level. Again, the difference between 3 and 7 principal components is not notable.

illustration not visible in this excerpt

Table 20: Results for PCA VaR hypothetical P&L Economic Costs Source: Own construction.

Timeframes

illustration not visible in this excerpt

Table 21: Results over different timeframes for PCA VaR hypothetical P&L Unconditional Coverage Test Source: Own construction.

The time frame analysis shows that the results for the hypothetical P&L also differ between the pre-2008 and post-2008 periods, even though the individual credit spread is not considered in the hypothetical P&L. The unconditional coverage test yields better pre-2008 results for both portfolios. This leads to the conclusion that even for a replicated P&L, in which the idiosyncratic credit spread plays no role, the PCA VaR is not able to react appropriately to changes in the zero-coupon yield curve.

Conclusion

The backtests of the PCA VaR based on the hypothetical P&L suggest that the VaR estimation works only with the real P&L and short observation periods. The time frame analysis shows that the results are better in the pre-2008 sample, but not as good as in the pre-2008 sample of the real P&L.

illustration not visible in this excerpt

Figure 17: Eigenvectors of the first 3 principal components Source: Own construction.

Figure 17 shows the first 3 eigenvectors of the zero-coupon yield curve changes over 8 years. The first three principal components explain approx. 91% of the variation of the zero-coupon yield curve over this period.

These eigenvectors can have different shapes at different times. The first principal component describes a parallel shift. The second eigenvector shows that an increase in short maturities does not affect longer maturities; this is mainly driven by the strong changes in short-term interest rates between 2007 and 2009, which can be seen in Figure 2. The third eigenvector describes that a decrease in mid-term maturities coincides with an increase in long-term maturities. The second and third eigenvectors differ from the typical movements described in the literature: there, the second principal component describes a rotation, with an eigenvector that decreases monotonically with increasing maturity, and the third eigenvector is described as a quadratic function of maturity, highest at short and long maturities and lowest at middle maturities.[44]
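
The decomposition itself is straightforward; a sketch, where curve is an assumed T x k input matrix of zero-coupon rates (rows: days, columns: maturity buckets) and Appendix F contains the thesis's PCA VaR code:

```r
# PCA of daily zero-coupon yield curve changes -- a sketch, not the
# Appendix F implementation; 'curve' is an assumed input matrix.
d_curve <- diff(as.matrix(curve))       # daily changes per maturity bucket
pca <- prcomp(d_curve, center = TRUE)   # principal component decomposition
# cumulative share of variance explained by the first three components
summary(pca)$importance["Cumulative Proportion", 1:3]
# loadings: shift, tilt and curvature patterns as in Figure 17
matplot(pca$rotation[, 1:3], type = "l",
        xlab = "maturity bucket", ylab = "loading")
```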

The main assumption of this model is that changes in the interest rate level account for the greatest share of the price change of a bond. The author therefore expected the backtest based on hypothetical returns to lead to better results, which was the reason for performing this additional test. As already described, this assumption did not hold: the results are better for the real P&L, although this also depends on the portfolio. The PCA VaR works better for the EFFAS Bond Index Portfolio with a small sample size. In general, it makes no difference for these portfolios whether 3 or 7 principal components are used for the VaR estimation.

A further development could be to cluster bonds into risk categories within which the credit spreads are highly correlated. The category credit spread would then be added to the interest rate surface, and in a further step a PCA would be performed on this compound surface.[45]

4.3 Filtered Historical Simulation VaR Results

As in the previous models, sample sizes of 105, 315, 600, and 900 days were used for each portfolio. For each sample size, the two confidence levels of 99% and 99.9% were evaluated with 5,000 and 10,000 simulations. The two simulation sizes were chosen to evaluate whether the simulation size makes a difference. The tables below show that the results do not differ notably at the 99.9% confidence level. At the 99% confidence level, the difference between 5,000 and 10,000 simulations has a larger influence on the backtests of the Bond Portfolio and a smaller influence on the EFFAS Bond Index Portfolio. It is therefore advisable to increase the number of simulations to avoid this inaccuracy.
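
A minimal sketch of one FHS step for a single day, using the fGarch package referenced in the appendices; the thesis's full implementation is in the FHS code appendix, and the sketch assumes a zero conditional mean and a GARCH(1,1) filter:

```r
library(fGarch)

# Filtered historical simulation of a one-day VaR -- a sketch.
# 'returns' is the estimation sample (e.g. the last 900 daily returns).
fhs_var <- function(returns, n_sim = 10000, level = 0.99) {
  fit <- garchFit(~ garch(1, 1), data = returns, trace = FALSE)
  z <- residuals(fit, standardize = TRUE)   # filtered, approx. i.i.d. residuals
  sigma_f <- predict(fit, n.ahead = 1)$standardDeviation  # volatility forecast
  sim <- sigma_f * sample(z, n_sim, replace = TRUE)  # bootstrapped returns
  -quantile(sim, probs = 1 - level, names = FALSE)   # VaR as a positive number
}
```

Repeating this in a rolling window over the backtest horizon produces the VaR series evaluated below.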

Unconditional Coverage Test

illustration not visible in this excerpt

Table 22: Results for FHS VaR Unconditional Coverage Test Source: Own construction.

At a confidence level of 99%, no model variation can be rejected at a 5% significance level. For the Bond Portfolio, the p-values are higher at a sample size of 600 days, and for the EFFAS Bond Index Portfolio at 105 days. This result could be misleading: as already mentioned, models with fewer violations than expected also have a lower p-value, which is the case for the EFFAS Bond Index Portfolio with a sample size of 900 days. One could therefore conclude that, depending on the available historical data, the sample size should be increased as much as possible, even though the unconditional coverage test then leads to worse p-values.

Independence Test

illustration not visible in this excerpt

Table 23: Results for FHS VaR Independence Test Source: Own construction.

The conditional coverage test penalizes models that are not able to react to volatility clustering. At the 99.9% confidence level, the p-values from the independence test are rather high, which leads to the conclusion that at this confidence level one exceedance is not followed by another. At the 99% confidence level, the results differ more: the model with 315 observations and 10,000 simulations has to be rejected because it does not react fast enough to jumps in volatility, and the GARCH-N (99%) model with 5,000 simulations and 900 observations has to be rejected as well.

In general, the independence of exceedances is higher at the 99.9% confidence level than at the 99% level.

Regulatory Test

From a regulatory point of view, all model variations passed the regulatory test at the 99% confidence level. At the 99.9% confidence level, only model variations with a sample size above 105 days passed the regulatory test and were in the green zone. At the 99% confidence level, the largest sample size leads to the best results; the other model variations are all in the yellow zone.

illustration not visible in this excerpt

Table 24: Results for FHS VaR Regulatory Test Source: Own construction.

Economic Costs

illustration not visible in this excerpt

Table 25: Results for FHS VaR Economic Costs Source: Own construction.

Because every VaR exceedance is penalized with a factor of 1.2, the results show that cautious models lead to lower economic costs. One reason for this is that the penalty factor is rather high relative to the interest rate level. From the table above, the reader can see that the larger the sample size, the lower the economic costs.
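
The exact cost function is specified earlier in the thesis (section 2.2.4, after Miazhynskaia et al., 2003, with the R code in Appendix D); purely as an illustration, one plausible reading is that capital held against the VaR forecast carries the risk-free rate as an opportunity cost, while the uncovered loss on exceedance days is charged with the factor 1.2:

```r
# Economic cost of a VaR forecast series -- an ASSUMED specification
# for illustration only; the thesis's actual definition (section 2.2.4
# and Appendix D) may differ in detail.
economic_cost <- function(pnl, var, rf_daily) {
  opportunity <- rf_daily * var      # daily cost of capital reserved for VaR
  shortfall <- pmax(-pnl - var, 0)   # loss beyond the VaR forecast
  sum(opportunity + 1.2 * shortfall) # exceedances penalized at factor 1.2
}
```

Under such a reading, conservative (high) VaR forecasts trade a small, steady interest cost against rare but expensive penalties, which matches the pattern observed in the tables.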

Timeframes

illustration not visible in this excerpt

Table 26: Results over different timeframes for FHS VaR Unconditional Coverage Test Source: Own construction.

The time frame analysis does not allow any conclusion as to whether the model works better in the pre-2008 period than in the post-2008 period, or vice versa.

Conclusion

Based on the model validation procedure described in section 2.2.5, the author concludes that for the Bond Portfolio at the 99% confidence level, 10,000 simulations with an observation period of 900 days provide the best results. For the 99.9% confidence level, the models with sample sizes of 315 and 600 days yield the best results; up to the last decision criterion, these two variations are equally good. For the EFFAS Bond Index Portfolio, the models with the largest sample size of 900 days deliver the best results for both confidence levels after all backtests.

5 Conclusion

The choice of a VaR model depends first and foremost on the portfolio and its risk drivers. It seems obvious that a linear bond portfolio should not be treated in the same way as a non-linear option portfolio. In this thesis, only linear bond portfolios were covered, with the interest rate as the main risk driver. The main question is whether the VaR model is able to capture all risks adequately. Of the three proposed VaR models, only the PCA VaR directly simulates the volatility of the risk factor; the other two models capture only the volatility of the observed market prices. In addition to the market volatility, the FHS VaR also incorporates the empirical return distribution.

The parametric GARCH models are very easy to implement in practice and yield the best backtesting results of all three models. Even though the VaR estimation is not stable across all regimes, this model can be recommended for linear portfolios. When estimating the parameters, the significance of the maximum likelihood estimates should be watched closely: as seen in Appendix O, the robustness of the parameters depends strongly on the sample size. A sample size greater than 315 days is recommendable.
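
To illustrate how little code a parametric GARCH VaR needs (Appendix G contains the thesis implementation), a sketch of a one-step-ahead GARCH-N forecast:

```r
library(fGarch)

# One-day 99% VaR from a GARCH(1,1) with normal innovations -- a
# sketch following the textbook formula VaR = -(mu + sigma * z_alpha).
garch_var <- function(returns, level = 0.99) {
  fit <- garchFit(~ garch(1, 1), data = returns, trace = FALSE)
  fc <- predict(fit, n.ahead = 1)  # one-step mean and volatility forecast
  -(fc$meanForecast + fc$standardDeviation * qnorm(1 - level))
}
```

The fitted coefficients and their t-statistics can be inspected with summary(fit), which is where the significance issues mentioned above would show up.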

The FHS VaR models show a good performance overall and perform equally well over the whole sample. In this thesis, a framework was provided to simulate the prices directly; of course, it would also be possible to simulate the distribution of the risk factors and compute the effect on the price. For a linear bond portfolio we observe good results, but they are not as good as those of a parametric GARCH model. The strengths of this model type lie in option portfolios and other non-linear portfolios. With this type of model, it is recommended to increase the sample size as much as possible.

The PCA VaR is a model that works as long as the only risk factor is the variation of the zero-coupon yield curve. If this is not the case, the model has problems estimating the VaR correctly. As mentioned in chapter 3.2, this was the case in 2008, when the individual credit spreads of the bonds diverged massively from their pre-2008 averages. As a result, the VaR estimates in this period were far too low and led to poor overall backtesting results. In order to test whether the PCA VaR works better on a hypothetical P&L than on a real P&L, all backtests were performed again on a new return series without credit spreads. This test did not yield better backtest results, because some shocks in the zero-coupon yield curve led to stronger price fluctuations than in the observed market prices; in other words, interest rate shocks are somewhat dampened in the observed prices.

As an overall conclusion, it can be said that parametric GARCH models are very robust and have a wide range of applications. Before a model is used, it has to be ensured that it passes the model validation process for the portfolios to which it is applied. As the empirical analysis showed, the backtest performance can differ greatly from portfolio to portfolio. It is therefore not possible to state that a certain model fits all applications.

6 Bibliography

Alberg, D. / Shalit, H. / Yosef, R. (2008): Estimating stock market volatility using asymmetric GARCH models, Israel: Routledge

Alexander, C. (2008a): Market Risk Analysis - Quantitative Methods in Finance, West Sussex: John Wiley & Sons

Alexander, C. (2008b): Market Risk Analysis - Practical Financial Econometrics, West Sussex: John Wiley & Sons

Alexander, C. (2008c): Market Risk Analysis - Pricing, Hedging and Trading Financial Instruments, West Sussex: John Wiley & Sons

Alexander, C. (2008d): Market Risk Analysis - Value-at-Risk Models, West Sussex: John Wiley & Sons

Angelidis, T. / Benos, A. / Degiannakis, S. (2003): The Use of GARCH Models in VaR Estimation, Piraeus: University of Piraeus

Barone-Adesi, G. / Giannopoulos, K. / Vosper, L. (1998): VaR Without Correlations for Nonlinear Portfolios, London: City University Business School

Basel Committee. (1998): Performance of Models-Based Capital Charges for Market Risk, Basel: Bank for International Settlement

Basel Committee. (2005): Basel II: International Convergence of Capital Measurement and Capital Standards: A Revised Framework Part 5, Basel: Bank for International Settlement

Christoffersen, P. (1998): Evaluating Interval Forecasts, in: International Economic Review 39

Engle, R. (2001): The Use of ARCH/GARCH Models in Applied Econometrics, in: Journal of Economic Perspectives 15

Grau, W. (1999): Begutachtung eines VAR Modells, Wien: OeNB

Hamilton, J. D. (2005): Regime-Switching Models, San Diego: University of California

Hassine, B. (2005): Identifying the Statistical Factors of Credit Spread Changes on Corporate Bonds, Paris: University of Paris II

Hotelling, H. (1933): Analysis of a complex of statistical variables into principal components, in: Journal of Educational Psychology 24

Jorion, P. (2009): Financial Risk Manager Handbook, New Jersey: John Wiley & Sons

Kupiec, P. (1995): Techniques for Verifying the Accuracy of Risk Measurement Models, in: Journal of Derivatives 3

Malava, A. (2006): Principal Component Analysis on Term Structure of Interest Rates, Helsinki: Helsinki University of Technology

Miazhynskaia, T. / Dockner, E. J. / Dorffner, G. (2003): On the Economic Costs of Value at Risk Forecasts, Vienna: Vienna University of Economics

Nassim, T. (1998): Derivatives Strategy, available at: http://www.derivativesstrategy.com/magazine/archive/1998/0498fea1.asp, accessed 26 Mar 2012

Nieppola, O. (2009): Backtesting Value-at-Risk Models, Helsinki: Helsinki School of Economics

Nocera, J. (2009): Risk Mismanagement, available at: http://www.nytimes.com/2009/01/04/magazine/04risk-t.html, accessed 31 October 2011

Place, J. (2000): Basic Bond Analysis, London: Centre for Central Banking Studies

Wuertz, D. / Chalabi, Y. / Miklovic, M. (2009): Rmetrics - Autoregressive Conditional Heteroskedastic Modelling, Zurich: Rmetrics

illustration not visible in this excerpt

Appendix K Comparison of the Portfolios in their portfolio value

Source: Own construction.

Appendix L Comparison of the Portfolios in their portfolio returns

Source: Own construction.

Appendix M Time Series of Regulatory Backtest

illustration not visible in this excerpt

Appendix N Backtest PCA VaR with hypothetical P&L

illustration not visible in this excerpt

Source: Own construction.

Appendix O Parameters for 99% VaR GARCH-N with different sample size

illustration not visible in this excerpt

[...]


[1] Alexander (2008a) p.1

[2] Nassim (1998)

[3] Nocera (2009)

[4] Alexander (2008d) p.395

[5] Alexander (2008d) p.19

[6] Alexander (2008d) p.35

[7] Alberg et al. (2008) pp.1201-1208

[8] Alexander (2008b) p.137

[9] Wuertz et al. (2009)

[10] Angelidis et al. (2003) p.2

[11] Angelidis et al. (2003) p.2

[12] Engle (2001) pp.158-166

[13] Alexander (2008b) p.137

[14] Wuertz et al. (2009) p.3

[15] Angelidis et al. (2003) p.5

[16] Alexander (2008b) p.138

[17] Wuertz et al. (2009) p.35

[18] Hotelling (1933) pp.417-441

[19] Malava (2006) p.2

[20] Alexander (2008a) pp.64-66

[21] Alexander (2008a) p.61

[22] Alexander (2008a) p.50

[23] Alexander (2008c) p.42

[24] Place (2000) p.42

[25] Alexander (2008d) p.81

[26] Barone-Adesi et al. (1998) p.1

[27] Barone-Adesi et al. (1998) p.2

[28] Barone-Adesi et al. (1998) p.7

[29] Jorion (2009) p.89

[30] Kupiec (1995) pp.73-84

[31] Alexander (2008d) p.337

[32] Alexander (2008b) p.359

[33] Christoffersen (1998) p.7

[34] Nieppola (2009) p.27

[35] Alexander (2008d) p.338

[36] Basel Committee (1998) p.3

[37] Basel Committee (2005) p.317

[38] Basel Committee (2005) p.318

[39] Nieppola (2009) p.23

[40] Miazhynskaia et al. (2003) p.10

[41] The real portfolio differs in terms of an unequal weighting and different buy dates

[42] For more details, see Appendix J (Portfolio details)

[43] The strong increase at the end of the time series is caused mainly by the increase in credit spreads of Greek government bonds which have a portfolio weight of approx. 6%.

[44] Alexander (2008b) p.58

[45] Hassine (2005)

End of excerpt from 87 pages

Summary of information

Title
A comparison between advanced Value at Risk models and their backtesting in different portfolios
University
Fachhochschule des bfi Wien GmbH
Course
Riskmanagement
Grade
1
Author
Christian Steinlechner
Year
2012
Pages
87
Catalog Number
V204602
ISBN (eBook)
9783656317562
ISBN (Book)
9783656319009
File size
887 KB
Language
English
Notes
Won 2nd place in the CFA Austria Prize
Keywords
Value at Risk, GARCH, PCA, Principal Component Analysis, Filtered Historical Simulation, FHS, VaR, T-GARCH
Quote paper
Christian Steinlechner (Author), 2012, A comparison between advanced Value at Risk models and their backtesting in different portfolios, Munich, GRIN Verlag, https://www.grin.com/document/204602
