Portfolio credit risk models give a probability distribution for portfolio credit losses. Validating such a model includes testing whether observed losses are consistent with the model's predictions. The main focus when testing credit portfolio models is on the "high loss" end of the distribution, i.e. on low-probability events. Typically, Value at Risk at the one or five percent level is used, meaning that a loss exceeding the specified threshold within the given time horizon is expected with a probability of one or five percent, respectively.
“A risk manager has two jobs: make people take more risk the 99% of the time it is safe to do so, and survive
the other 1% of the time. Value at risk is the border."1
Table of Contents
1. Introduction
2. Testing distributions with the “Berkowitz Test”
3. Implementation of the Berkowitz test to Excel
4. Simulating the critical CHI-squared value
5. Berkowitz on Subportfolios
Research Objectives and Key Themes
This paper aims to provide a robust framework for the validation of credit portfolio models by employing the Berkowitz test to evaluate the consistency between predicted loss distributions and observed portfolio outcomes. The work focuses on overcoming the limitations of standard Value-at-Risk (VaR) testing by shifting the focus from simple threshold exceedances to a comprehensive analysis of the entire distribution.
- Validation methodology for credit portfolio risk models.
- Application of the Berkowitz test for distribution consistency.
- Practical implementation of statistical testing using Microsoft Excel.
- Simulation techniques for critical chi-squared values in small-sample scenarios.
- Advanced extensions for analyzing correlated subportfolios.
Excerpt from the Book
Testing distributions with the “Berkowitz Test”
For testing the distribution with the Berkowitz test, the model's forecast of the loss distribution at the beginning of the period is needed. The cumulative distribution of the probability of portfolio losses might look like the figure below.
For a given loss L, the distribution returns the cumulative probability F(L) with which this loss is not exceeded.
Furthermore, observed losses for a certain number of periods (say, five years) are required for validating the portfolio model.
The basic idea behind the Berkowitz test is to evaluate the entire distribution and test its consistency with the observed losses.
To test, one first needs to make two transformations:
1st Transformation. The loss of a period, Lt, is replaced with the predicted probability of observing this loss or a smaller one, F(Lt); in the figure above this means replacing the observed value on the x-axis with the corresponding value on the y-axis, so that:
pt = F(Lt)
This transformation produces numbers between 0 and 1 which, given the model's prediction is correct, should be uniformly distributed. For example, 50% of the observations should fall below the median loss, since F(median loss) = 0.5.
2nd Transformation. The p-values are transformed by applying the inverse cumulative standard normal distribution function, Φ⁻¹(x), so:
zt = Φ⁻¹(pt)
Given the model's predictions are correct, this transformation makes the resulting zt normally distributed with zero mean and unit variance.
To validate the model, one therefore tests the hypothesis that the zt have zero mean and unit variance.
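The two transformations can be sketched in a few lines of code (Python here, rather than the book's Excel/VBA; the lognormal loss forecast and the loss figures are purely illustrative assumptions, standing in for a real credit portfolio model's output):

```python
import numpy as np
from scipy.stats import norm, lognorm

# Hypothetical model forecast: assume portfolio losses follow a
# lognormal distribution (illustrative stand-in for a real model).
forecast = lognorm(s=1.0, scale=100.0)

# Observed losses for five periods (illustrative numbers).
losses = np.array([80.0, 150.0, 60.0, 210.0, 95.0])

# 1st transformation: p_t = F(L_t), the predicted probability of
# observing this loss or a smaller one.
p = forecast.cdf(losses)

# 2nd transformation: z_t = Phi^-1(p_t), mapping uniform p-values
# to standard normal values if the model is correct.
z = norm.ppf(p)

# Under a correct model, z should have zero mean and unit variance.
print("mean:", z.mean(), "variance:", z.var(ddof=1))
```

With only five observations, the sample mean and variance are of course noisy; the likelihood-ratio test described in the following chapters turns this check into a formal hypothesis test.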
Summary of Chapters
1. Introduction: Outlines the necessity of validating credit portfolio models beyond simple Value-at-Risk checks due to the long time horizons involved in credit risk.
2. Testing distributions with the “Berkowitz Test”: Details the transformation process required to convert loss observations into a standard normal distribution format for hypothesis testing.
3. Implementation of the Berkowitz test to Excel: Provides a practical guide on calculating Maximum Likelihood estimates and likelihood ratio statistics within a spreadsheet environment.
4. Simulating the critical CHI-squared value: Explains how to generate simulated distributions to determine threshold levels when dealing with small-sample data constraints.
5. Berkowitz on Subportfolios: Extends the validation framework to account for correlation between different asset subportfolios using specialized VBA functions.
Keywords
Credit Risk, Portfolio Models, Berkowitz Test, Model Validation, Value at Risk, Maximum Likelihood, Statistical Significance, Loss Distribution, Chi-squared Test, Subportfolios, Asset Correlation, VBA, Excel Implementation, Hypothesis Testing, Financial Risk Management
Frequently Asked Questions
What is the primary focus of this publication?
The paper focuses on validating credit portfolio models by testing whether observed losses are consistent with model predictions across the entire distribution, rather than relying solely on specific loss thresholds.
What is the core issue with standard Value-at-Risk (VaR) testing for credit portfolios?
Standard VaR testing requires a large number of observations to achieve statistical significance. Given that credit portfolios often have one-year horizons, waiting for enough events to validate a 1% VaR would take centuries, making it impractical.
How does the Berkowitz test solve this validation problem?
The Berkowitz test transforms the predicted loss distribution into a standard normal distribution. This allows for a robust statistical test of the entire predicted distribution using limited historical data.
What scientific methods are applied in the validation process?
The work utilizes Maximum Likelihood estimation, Likelihood Ratio tests, and Monte Carlo simulation techniques, primarily implemented within Microsoft Excel and VBA.
What is covered in the implementation sections of the book?
The implementation chapters provide step-by-step Excel formulas and VBA macros to perform the mathematical transformations, calculate LR statistics, and simulate critical values for hypothesis rejection.
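As a rough sketch of the likelihood-ratio calculation (in Python rather than the book's Excel formulas, and not the author's exact implementation): the restricted null is zero mean and unit variance, the unrestricted alternative a normal with fitted mean and variance, and the statistic is compared against a chi-squared distribution with two degrees of freedom.

```python
import numpy as np

def berkowitz_lr(z):
    """Likelihood-ratio statistic for H0: z ~ N(0, 1).

    Compares the log-likelihood under the null (zero mean, unit
    variance) with the maximum likelihood under a fitted normal.
    """
    z = np.asarray(z, dtype=float)
    n = len(z)
    mu = z.mean()    # ML estimate of the mean
    var = z.var()    # ML estimate of the variance (ddof=0)

    def loglik(m, v):
        return -0.5 * n * np.log(2 * np.pi * v) - np.sum((z - m) ** 2) / (2 * v)

    # Compare against a chi-squared(2) critical value, e.g. 5.99 at 95%.
    return -2.0 * (loglik(0.0, 1.0) - loglik(mu, var))

# Example: z-values far from N(0, 1) give a large statistic.
print(berkowitz_lr([2.1, 1.8, 2.5, 1.9, 2.2]))
```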
What criteria were used to select the keywords?
The keywords are selected based on the technical core of the validation framework, including the statistical tests, the specific financial application (credit risk), and the computational tools used (Excel/VBA).
Why is the "1st Transformation" in the Berkowitz test necessary?
This transformation maps the raw loss values to predicted probabilities, which, if the model is correct, should be uniformly distributed between 0 and 1.
How is the "2nd Transformation" interpreted?
By applying the inverse cumulative standard normal distribution function, the data is forced into a standard normal distribution (zero mean, unit variance), allowing for the application of standard statistical hypothesis tests.
How does the proposed approach handle correlated subportfolios?
The book introduces a "rhosearch" VBA function to estimate correlation coefficients and adjust the log-likelihood calculations, allowing the Berkowitz test to be extended to multiple, correlated sub-asset groups.
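The book's "rhosearch" VBA function is not reproduced here; the following hypothetical Python sketch only illustrates the underlying idea of choosing the correlation coefficient that maximizes a bivariate normal log-likelihood over the transformed z-values of two subportfolios (the grid search and function names are assumptions, not the author's code):

```python
import numpy as np

def bivariate_loglik(z1, z2, rho):
    """Log-likelihood of paired z-values under a standard bivariate
    normal with correlation rho (zero means, unit variances)."""
    z1, z2 = np.asarray(z1, float), np.asarray(z2, float)
    n = len(z1)
    q = (z1**2 - 2*rho*z1*z2 + z2**2) / (1 - rho**2)
    return -n*np.log(2*np.pi) - 0.5*n*np.log(1 - rho**2) - 0.5*np.sum(q)

def rho_search(z1, z2, grid=np.linspace(-0.99, 0.99, 199)):
    """Grid search for the correlation maximizing the log-likelihood;
    a hypothetical stand-in for the book's 'rhosearch' routine."""
    return max(grid, key=lambda r: bivariate_loglik(z1, z2, r))
```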
How is the significance level determined in this model?
Significance is determined by comparing computed lambda statistics against simulated distributions produced via Monte Carlo methods, specifically using the percentile of simulated test statistics.
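A minimal Monte Carlo sketch of this idea (Python rather than the book's VBA; the sample size, number of simulations, and seed are illustrative assumptions): simulate the LR statistic many times under the null hypothesis and take the desired percentile as the small-sample critical value.

```python
import numpy as np

def berkowitz_lr(z):
    # LR statistic for H0: zero mean, unit variance (see text).
    n = len(z)
    mu, var = z.mean(), z.var()
    def loglik(m, v):
        return -0.5*n*np.log(2*np.pi*v) - np.sum((z - m)**2) / (2*v)
    return -2.0 * (loglik(0.0, 1.0) - loglik(mu, var))

def simulated_critical_value(n_obs=5, n_sim=10000, level=0.95, seed=0):
    """Simulate the LR statistic under H0 (standard normal samples)
    and return the `level` percentile as the critical value."""
    rng = np.random.default_rng(seed)
    stats = [berkowitz_lr(rng.standard_normal(n_obs)) for _ in range(n_sim)]
    return np.percentile(stats, 100 * level)
```

The observed statistic is then rejected at the chosen level whenever it exceeds this simulated threshold, which avoids relying on the asymptotic chi-squared approximation when only a handful of yearly observations is available.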
- Quote paper
- Manuel Mahler-Hutter (Author), 2008, Validation of credit portfolio models, Munich, GRIN Verlag, https://www.grin.com/document/153552