Excerpt

## Table of Contents

1. Introduction

2. Introduction of the Series

3. Modelling a Univariate Model

3.1 Trend Analysis

3.2 Seasonal Analysis

3.3 Cyclical Analysis

3.4 Evaluation of the ARMA Model

3.5 The Unit-Root Test

3.5 The Unit-Root Test page

3.6 The ARIMA Model

4. Modelling a Multivariate Model

5. Conclusion

6. Bibliography

Appendices A to G

## 1. Introduction

Forecasting is one of the major issues in today’s business world. Whether it concerns the economic situation, stock prices, or production levels, a glance into the future would be very valuable. By reducing uncertainty, expenses can be saved and revenues generated. Excessive or insufficient inventories, capacity standing idle or falling short, missing raw materials, or too many employees are just some of the situations that lead to lower profits. Hence, perfect forecasts would be worth a lot of money. But, as the expression itself suggests, a “perfect forecast” is a paradox, since the future stays uncertain until the moment it becomes the present. As the American philosopher Eric Hoffer once stated: “The only way to predict the future is to have the power to shape the future”, and that shaping would take place in the present. The one chance we have of making inferences about the future is to incorporate logic, intuition, and experience into models, which will then, if we are lucky, produce more or less accurate forecasts.

Forecasting, when pursued by professionals, relies mainly on past data, since these are the most reliable source of unbiased information. Applying econometric models then leads to results that can be tested for stability and reliability, especially when compared to actual data. This procedure will be presented in the following sections with the help of an example, namely the number of cars produced in Germany every month. I chose this data set for two reasons. Firstly, this industry is one of the most important within the German economy, and secondly, my professional engagement with a car-producing company provides me with some insight into the industry.

In order to generate forecasts, the model will build upon systematic components such as trend, seasonality, and cyclicality. Furthermore, concepts like moving averages and autoregressions will be incorporated into the model. The data will then be tested for whether it returns to its mean after experiencing a shock; this will be done via a unit-root test. If a unit root is present, it will be removed within a stochastic model. Next, a second series, the collection of leading indicators for the German economy, will be introduced and incorporated. Tests of the causal relationship will be provided, and a forecast will be suggested on the basis of a multivariate model. In conclusion, the two forecasts will be compared.

## 2. Introduction of the Series

The automotive industry is one of the most important in the German economy, as can easily be seen from the number of domestic brands, such as Mercedes-Benz, Audi, Volkswagen, and BMW. But due to globalization, these formerly German companies have turned into international, if not global, enterprises. Therefore, some of the firms’ products are assembled abroad, while foreign companies (e.g. Ford) now produce within German borders. Thus the presented series comprises the cars produced in Germany.

Figure 1: German Car Production 1960:01 to 1998:12

illustration not visible in this excerpt

Figure 2 Histogram of German Car Production

illustration not visible in this excerpt

Looking at the data, one can see that the production of cars increased over the nearly four decades covered. The high variability, which clearly indicates seasonality, is also worth mentioning. A third observation is the cycles the series describes, which seem to follow the business cycle of the German economy. But these observations should be supported with some statistical evidence. As can be seen from figure 2, the distribution appears close to normal. Firstly, the mean, the median, and the mode do not differ much from each other, indicating a normal distribution. Secondly, the skewness coefficient of -0.076477 supports this finding. Thirdly, the kurtosis value of 2.261076 indicates slightly lighter tails than the normal distribution. Finally, the Jarque-Bera test examines the hypothesis of independent, normally distributed observations; the reported probability rejects this null in favor of the alternative hypothesis. This leads to the conclusion that the data provide a good basis for an analytical forecast.
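The descriptive checks above can be reproduced with standard tools. The sketch below uses a synthetic stand-in series (the real production data is not reproduced in this excerpt), so the numbers will not match the paper's; it only shows how the skewness, kurtosis, and Jarque-Bera statistics are obtained.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical stand-in for the 468 monthly observations (1960:01-1998:12);
# the real series is not reproduced in this excerpt.
series = rng.normal(loc=350_000, scale=40_000, size=468)

skew = stats.skew(series)
kurt = stats.kurtosis(series, fisher=False)   # "raw" kurtosis; normal = 3
jb_stat, jb_p = stats.jarque_bera(series)

print(f"skewness={skew:.4f}, kurtosis={kurt:.4f}")
print(f"Jarque-Bera={jb_stat:.4f}, p-value={jb_p:.4f}")
```

A small Jarque-Bera p-value rejects the null of normality, which is how the reported probability in the text is to be read.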

## 3. Modelling a Univariate Model

### 3.1 Trend Analysis

Most data series exhibit a trend. Underlying this trend is usually some kind of growth, such as inflation, population growth, or increases in wealth. But these trends do not always have to be linear. Most stock market indices have increased exponentially, while learning curves are often U-shaped (one would speak here of a quadratic trend).

As stated before, the data at hand also seem to follow a trend. This section tests the data for the presence of a linear, quadratic, exponential, or polynomial trend. Beforehand, the underlying statistics of these models will be introduced:

illustration not visible in this excerpt^{1} ^{2} ^{3} ^{4}

These four models each specify the trend differently. The linear trend regression fits a straight line through the data, where the slope of the line is the estimated growth rate. The other three methods are non-linear.

The models will be compared by looking at the mean squared error (MSE): the smaller its value, the better the fit. There are several criteria that take the MSE into account, namely the R², the Akaike information criterion (AIC), and the Schwarz criterion (SIC). They differ in how strongly they penalize additional parameters. Generally, the SIC penalizes extra parameters most strongly and will therefore lead to the most reliable indicator, favoring a more parsimonious (simple) model.
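The trend fits and the criteria comparison can be sketched as follows. The series below is synthetic (a linear trend plus noise, standing in for the unavailable production data), and the AIC/SIC are written in the T·ln(SSR/T) + penalty form; the point is only to show the mechanics of fitting the three trends and penalizing model size.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 468                      # monthly, 1960:01-1998:12
t = np.arange(1, T + 1)
# Hypothetical trending series standing in for the production data.
y = 200_000 + 400 * t + rng.normal(0, 20_000, T)

def ic(y, yhat, k):
    """AIC and SIC in the n*ln(SSR/n) + penalty form, k = no. of parameters."""
    ssr = np.sum((y - yhat) ** 2)
    n = len(y)
    return n * np.log(ssr / n) + 2 * k, n * np.log(ssr / n) + k * np.log(n)

lin = np.polyval(np.polyfit(t, y, 1), t)                   # linear trend
quad = np.polyval(np.polyfit(t, y, 2), t)                  # quadratic trend
exp_ = np.exp(np.polyval(np.polyfit(t, np.log(y), 1), t))  # exponential (fit on logs)

results = {}
for name, fit, k in [("linear", lin, 2), ("quadratic", quad, 3), ("exponential", exp_, 2)]:
    results[name] = ic(y, fit, k)
    print(f"{name:12s} AIC={results[name][0]:10.2f} SIC={results[name][1]:10.2f}")
```

As the text notes, the exponential model is estimated on logarithms, so evaluating it on levels as above is only indicative, not a strictly fair comparison.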

Table 1: Model Selection Criteria

illustration not visible in this excerpt

The comparison of the results can be seen in Table 1. The first remarkable fact is that the indicators hardly vary from one another. This can also be seen in the graphs exhibited in Appendix A. The exponential trend regression cannot be compared this way, since it is estimated on the logarithm of the data.

Figure 3: Quadratic Trend Regression

illustration not visible in this excerpt

Following the Schwarz criterion, as argued above, the quadratic trend model (see figure 3) will be chosen as the best model.

Another way of deciding which model to take is to test the predictive capabilities of the several models. The history, forecast, and realization of the models suggested above are shown in figures 5 to 8 of Appendix A. Surprisingly, the forecasts suggest that either the linear or the exponential trend model is the most accurate (see figure 4). The quadratic trend regression does not capture the continuing growth in production. This leads to the conclusion that a linear trend model will probably be the most useful for further analysis.

Figure 4:

illustration not visible in this excerpt

### 3.2 Seasonal Analysis

Car registrations are highly seasonal, since the purchase of a car is quite an investment. So it is only logical that few cars are registered in December: in January 2002, a car from 2001 will be counted as being a year old, no matter in which month of 2001 it was registered. Spring, on the other hand, is known to be the time when most cars are sold. The purchasing behavior of prospective customers is very likely to be taken into account when production targets are set, in order to prevent high inventory levels. Therefore, it is very likely that seasonality will be found in the series at hand. As expected, the series indeed shows high seasonality: all twelve monthly dummies are significant at the 5%-level (see Appendix B).

Since the seasons have an influence on car sales, it might be interesting to test for seasonality by regressing the linear trend model on three quarterly dummies, taking one season as the base. Since sales are very likely to lag one month behind production, the seasons of interest would be February to April, May to July, and August to October, taking November to January as the base. The results can be seen in Appendix B. AIC and SIC rise compared to the monthly model, and the dummies are no longer all significant at the 5%-level. Therefore, the model adjusting for monthly seasonality is superior to the quarterly model and will consequently be used in the further models.
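A monthly-dummy regression of the kind used above can be sketched with plain OLS. The data below is synthetic (a linear trend plus an assumed seasonal pattern with a December trough, plus noise); the dummy layout and the no-intercept "full dummy" specification are the point, not the numbers.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 468
t = np.arange(1, T + 1)
month = (t - 1) % 12  # 0 = January, ..., 11 = December

# Hypothetical seasonal pattern (December dip), standing in for the real data.
seasonal = np.array([5, 8, 12, 10, 9, 7, 4, 6, 9, 8, 3, -20], dtype=float) * 1_000
y = 100_000 + 300 * t + seasonal[month] + rng.normal(0, 10_000, T)

# Linear trend plus a full set of twelve monthly dummies (no intercept,
# so each dummy coefficient is that month's own level).
X = np.column_stack([t.astype(float)] + [(month == m).astype(float) for m in range(12)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("trend slope:", round(beta[0], 1))
print("monthly levels:", np.round(beta[1:], 0))
```

With the full set of dummies and no constant, each coefficient is directly that month's average level around the trend, which is what the significance tests in Appendix B examine.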

### 3.3 Cyclical Analysis

When looking at figure 5, the residuals seem to exhibit a cycle. As mentioned before, the reason could be the business cycle of the German economy. This cycle can be incorporated into the estimate and therefore into the forecast. In order to do this, the series has to be covariance stationary, that is, its mean and its covariance structure must be stable over time. This paper returns to this point later.

Cycles can be included in a model in two different ways. Firstly, a model can include lags of the variable, that is, the data will be regressed on itself (autoregression, AR). The second way is the moving average (MA), in which the series is expressed as an average of current and past shocks. The statistical equations for these two methods look as follows:

illustration not visible in this excerpt
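The placeholder above replaces the standard specifications; in the usual notation (following textbooks such as Diebold) they read:

```latex
\begin{aligned}
\text{AR}(p):\quad & y_t = c + \phi_1 y_{t-1} + \phi_2 y_{t-2} + \cdots + \phi_p y_{t-p} + \varepsilon_t \\
\text{MA}(q):\quad & y_t = \mu + \varepsilon_t + \theta_1 \varepsilon_{t-1} + \cdots + \theta_q \varepsilon_{t-q},
\qquad \varepsilon_t \sim \mathrm{WN}(0,\sigma^2)
\end{aligned}
```

Here the AR model regresses the series on its own lags, while the MA model builds the series from a weighted average of current and past white-noise shocks.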

The question of when to use which method can be answered by looking at the autocorrelation function and the partial autocorrelation function. For an autoregression, the autocorrelation function damps slowly, since each displacement is correlated with the previous and the following one, while the partial autocorrelation function cuts off sharply after the first displacement for an AR(1) process. For moving-average processes it is exactly the other way around: the autocorrelation function cuts off after the first displacement for an MA(1) process, since further displacements play no role, while the partial autocorrelation function damps slowly.
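These identification patterns are easy to verify on a simulated AR(1); the sketch below uses only numpy, with the partial autocorrelation at lag 2 computed crudely as the lag-2 coefficient of a two-lag regression.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 2000
e = rng.normal(size=T)
y = np.zeros(T)
for i in range(1, T):            # simulate an AR(1): y_t = 0.8*y_{t-1} + e_t
    y[i] = 0.8 * y[i - 1] + e[i]

def acf(x, k):
    """Sample autocorrelation at displacement (lag) k."""
    x = x - x.mean()
    return float(np.dot(x[:-k], x[k:]) / np.dot(x, x))

# The ACF of an AR(1) decays geometrically (roughly 0.8**k here) ...
print([round(acf(y, k), 2) for k in (1, 2, 3, 4)])

# ... while the partial autocorrelation cuts off after lag 1: the lag-2
# coefficient of a regression of y_t on y_{t-1} and y_{t-2} is near zero.
X = np.column_stack([y[1:-1], y[:-2]])
beta, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
pacf2 = float(beta[1])
print("partial autocorrelation at lag 2:", round(pacf2, 3))
```

The slow geometric decay of the ACF combined with the sharp PACF cutoff is exactly the signature the correlogram of the production series is read for below.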

Turning to the series, it is helpful to look at the correlogram (figures 6 and 7). The correlogram clearly suggests an autoregressive process: the autocorrelation function damps slowly, while the partial autocorrelation function cuts off. The complete statistics are presented in Appendix C, table 1. Although the correlogram only suggests the introduction of AR terms, estimating the series with the help of MA terms might also help. Appendix C, table 2 provides an overview of the AIC and SIC values of several ARMA models. The lowest SIC value is achieved by a linear-trend, seasonally adjusted ARMA(1,1) model (23.56744). This supports the earlier speculation that the linear trend model is the most appropriate.
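The order-selection table of Appendix C can be mimicked in miniature. The sketch below compares pure AR(p) models fitted by OLS on simulated data (OLS rather than full ARMA maximum likelihood, to keep it numpy-only), selecting the lag order by the Schwarz criterion over a common estimation sample.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 1000
e = rng.normal(size=T)
y = np.zeros(T)
for i in range(1, T):            # true process is an AR(1) with phi = 0.7
    y[i] = 0.7 * y[i - 1] + e[i]

PMAX = 4  # drop the same initial observations for every p, so SICs are comparable

def ar_sic(y, p):
    """Fit an AR(p) by OLS and return the Schwarz criterion."""
    target = y[PMAX:]
    n = len(target)
    X = np.column_stack([y[PMAX - j : len(y) - j] for j in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    ssr = np.sum((target - X @ beta) ** 2)
    return n * np.log(ssr / n) + p * np.log(n)

sics = {p: ar_sic(y, p) for p in range(1, PMAX + 1)}
print({p: round(v, 2) for p, v in sics.items()})
```

Because the SIC penalty grows with ln(n) per parameter, it tends to pick the low-order model here, mirroring the paper's choice of the parsimonious ARMA(1,1).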

Figure 6: Autocorrelation of Car Productions

illustration not visible in this excerpt

Figure 7: Partial Autocorrelation

illustration not visible in this excerpt

### 3.4 Evaluation of the ARMA Model

Finally, the developed model will be evaluated as follows: first, the model is put into action by providing a controlled forecast; second, the correlogram of the model will be examined to find room for improvement.

The controlled forecast is presented in figure 8. At first, the actual data stay within the 95% interval. After two and a half years the interval overstates the number of cars produced at one point, and after three and a half years it understates it at another. Nevertheless, the forecast seems quite reasonable, especially when taking the forecasting horizon into account.
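How such a 95% interval widens with the horizon can be illustrated for an AR(1); the parameter values below are illustrative, not the paper's estimates. The h-step forecast-error variance for an AR(1) is sigma^2 * (1 - phi^(2h)) / (1 - phi^2), which approaches the unconditional variance as h grows.

```python
import numpy as np

# Illustrative AR(1) parameters -- not the estimates from the paper.
phi, sigma = 0.8, 1.0
y_T = 2.5        # last observed value (as a deviation from trend/season)
z = 1.96         # 95% quantile of the standard normal

fcst = {}
for h in (1, 6, 12):
    point = phi ** h * y_T
    # h-step forecast-error std. dev. for an AR(1)
    se = sigma * np.sqrt((1 - phi ** (2 * h)) / (1 - phi ** 2))
    fcst[h] = (point, point - z * se, point + z * se)
    print(f"h={h:2d}: point={point:6.3f}, 95% interval=[{fcst[h][1]:.3f}, {fcst[h][2]:.3f}]")
```

The point forecast decays toward the mean while the interval fans out, which is why long-horizon observations escaping the band, as in figure 8, is not unusual.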

**[...]**

^{1} See Diebold, p. 72

^{2} Ibid. p. 75

^{3} Ibid. p. 78

^{4} Ibid. p. 76

Maria Kimme (Author), 2002, Elements of Forecasting - A Case Study: German Car Production, Munich, GRIN Verlag, https://www.grin.com/document/34942
