
Introduction to Econometrics 2


Autocorrelation

One of the assumptions of the simple and the multiple regression models is that the value the error term assumes in one period is uncorrelated with its value in any other period. In other words, the error terms in the regression model are assumed to be independent. The mathematical notation is

cov(ε_i, ε_j) = 0 for i ≠ j

This ensures that the average value of the dependent variable depends only on the independent variable and not on the error term.

The Durbin-Watson statistic tests for first-order autocorrelation or serial correlation that arises in time-series data. The error term ε_t at time period t is correlated with the error terms ε_{t-1}, ε_{t-2} and ε_{t+1}, ε_{t+2}, etc. First-order autocorrelation is denoted by ρ_1, and the error term ε_t is correlated with ε_{t-1}. Second-order autocorrelation is denoted by ρ_2, and the error term ε_t is correlated with ε_{t-2}. One way to correct the autocorrelation problem is to use the first differences of the variables and assume that the first differences of the error term are independent. One simple way to detect the autocorrelation problem is to plot the residuals against time and observe the patterns; for example, they can show positive or negative correlation depending on the data. If the calculated value of the Durbin-Watson statistic d is close to zero, there is evidence of positive autocorrelation. If it is closer to 4, there is evidence of negative autocorrelation. Finally, if the value of d is close to 2, there is no autocorrelation. In EViews 6, you will find the corresponding option in the regression specification box as the Newey-West correction for heteroskedasticity and autocorrelation.

The problem with autocorrelation is that the coefficients of the linear regression equation are not efficient: they do not have minimum variances. The estimated variances and standard errors are biased, and therefore the t-statistics and the p-values are unreliable. The error variance, which is calculated as the residual sum of squares divided by the degrees of freedom, is biased and does not reflect the true value of the error variance σ². The formula is as follows:

\hat{\sigma}^2 = \frac{RSS}{d.f.}

Finally, the R² and the F-test do not represent the true numerical values; they could be underestimated or overestimated.

There are cases in which we use the h-statistic instead of the Durbin-Watson statistic, for example, when we are testing for serial correlation in models with a lagged dependent variable, such as y_t = α + β_1 x_1 + β_2 y_{t-1} + ε_t. The mathematical formula is as follows:

h = \hat{\rho} \sqrt{\frac{n}{1 - n \, v(\hat{\beta}_2)}}


Where:

h is the statistic that is used instead of the Durbin-Watson one.
ρ̂ is the estimator of the first-order serial correlation coefficient obtained from the ordinary least squares residuals.
n is the sample size.
v(β̂_2) is the estimated variance of the coefficient β_2 of the lagged dependent variable.

Please consider the following linear regression equation and the lagged one:

y_t = α + βx_t + ε_t (1)

and

y_{t-1} = α + βx_{t-1} + ε_{t-1} (2)

By subtracting (2) from (1), we get the following equation:

(y_t − y_{t-1}) = β(x_t − x_{t-1}) + (ε_t − ε_{t-1}) (3)

This technique for correcting the autocorrelation problem leads to a larger Durbin-Watson statistic and a lower residual sum of squares. Please consider the following example:

The disposable income, y, is linearly related to the expenses, x, of a UK household. The linear regression equation will be as follows:

y_t = α + βx_t + ε_t,  t = 1, 2, ..., T

ŷ_t = 4.567 + 2.456 x_t
t-statistics: (1.23) (2.56)
R₁² = 0.765,  DW₁ = 0.75,  RSS₁ = 0.0356,  n₁ = 30

The first-differences linear regression of the disposable income, y, on the expenses, x, is as follows:

Δy_t = α + βΔx_t + ε_t,  t = 1, 2, ..., T

The sign Δ is used to denote first differences.

Δŷ_t = 3.489 + 1.267 Δx_t
t-statistics: (2.45) (2.12)
R₂² = 0.576,  DW₂ = 1.80,  RSS₂ = 0.0123,  n₂ = 20


Please note that the first-differences equation leads to a lower R², a larger DW value and a lower residual sum of squares than the equation in levels. Note also that R² is not comparable between different models. We prefer models with a high R², as it is an indication of a strong relationship between the variables. When we use first differences, we usually get a low R². A possible explanation of this phenomenon is that the regression equation is misspecified; for example, there may be missing independent variables that could explain a high degree of the changes in the dependent variable.

If there is autocorrelation, it leads to biased standard errors and thus to incorrect statistical tests. Autocorrelation is tested using the Durbin-Watson statistic, whose equation is based on the ordinary least squares, OLS, residuals.

The mathematical equation of the Durbin-Watson statistic is as follows:

d = \frac{\sum_{t=2}^{n} (\varepsilon_t - \varepsilon_{t-1})^2}{\sum_{t=1}^{n} \varepsilon_t^2}

where d is the Durbin-Watson statistic, ε_t is the error term or residual in period t, and ε_{t-1} is the error term or residual lagged one period from the previous observation.

I will work through a detailed regression example to help you understand how we calculate the Durbin-Watson statistic. The whole idea is based on the error term or residuals that we get from the regression equation. Please check the regression section again if you are unsure about the residuals and how we calculate them.


Let’s take as an example the revenues and expenses of a small shop. The figures are denoted in pounds.

Revenues (y), the dependent variable   Expenses (x), the independent variable
321   152
312   140
300   162
301   164
320   174
330   190
340   195
350   199
352   210
364   235
372   231
400   242
351   256
381   289
401   231
407   269
421   302
467   308
442   312
444   328

SUMMARY OUTPUT

Regression Statistics
Multiple R           0.927423
R Square             0.860113
Adjusted R Square    0.852341
Standard Error       19.22193
Observations         20

ANOVA
             df    SS         MS         F          Significance F
Regression   1     40892.51   40892.51   110.6751   4.07E-09
Residual     18    6650.686   369.4825
Total        19    47543.2

             Coefficients   Standard Error   t Stat     P-value    Lower 95%   Upper 95%
Intercept    185.2836       17.96587         10.31309   5.55E-09   147.5387    223.0286
x            0.79981        0.076026         10.52022   4.07E-09   0.640085    0.959534


The regression equation is as follows:

y = 185.28 + 0.79981x
The t-statistics are (10.31) (10.52).

The residuals from this regression are as follows:

Observation   Predicted y   Residuals ε_t   (ε_t − ε_{t-1})   (ε_t − ε_{t-1})²
1    306.8547   14.14526722
2    297.2570   14.74298455    0.597717323    0.357265998
3    314.8528   -14.85283055   -29.59581509   875.9122710
4    316.4525   -15.45245010   -0.599619554   0.359543609
5    324.4505   -4.450547869   11.00190223    121.0418527
6    337.2475   -7.247504300   -2.796956431   7.822965277
7    341.2466   -1.246553185   6.000951115    36.01141429
8    344.4458   5.554207708    6.800760892    46.25034871
9    353.2437   -1.243699839   -6.797907546   46.21154701
10   373.2389   -9.238944262   -7.995244423   63.92393339
11   370.0397   1.960294846    11.19923911    125.4229566
12   378.8376   21.16238730    19.20209245    368.7203546
13   390.0349   -39.03494958   -60.19733688   3623.719367
14   416.4287   -35.42867222   3.606277361    13.00523641
15   370.0397   30.96029485    66.38896706    4407.494948
16   400.4325   6.567523322    -24.39277152   595.0073026
17   426.8262   -5.826199317   -12.39372264   153.6043608
18   431.6251   35.37494202    41.20114134    1697.534048
19   434.8243   7.175702914    -28.19923911   795.1970863
20   447.6213   -3.621253517   -10.79695643   116.5742682

I have calculated the last two columns in Excel.

Then, we apply the equation of the Durbin-Watson statistic based on the error term or residuals:

d = \frac{\sum_{t=2}^{n} (\varepsilon_t - \varepsilon_{t-1})^2}{\sum_{t=1}^{n} \varepsilon_t^2}

where d is the Durbin-Watson statistic, ε_t is the error term in period t, and ε_{t-1} is the error term lagged one period or observation.

Residuals ε_t    ε_t²          (ε_t − ε_{t-1})   (ε_t − ε_{t-1})²
14.14526722    200.0885848
14.74298455    217.3555933   0.597717323    0.357265998
-14.85283055   220.6065752   -29.59581509   875.9122710
-15.45245010   238.7782141   -0.599619554   0.359543609
-4.450547869   19.80737633   11.00190223    121.0418527
-7.247504300   52.52631858   -2.796956431   7.822965277
-1.246553185   1.553894842   6.000951115    36.01141429
5.554207708    30.84922326   6.800760892    46.25034871
-1.243699839   1.546789289   -6.797907546   46.21154701
-9.238944262   85.35809108   -7.995244423   63.92393339
1.960294846    3.842755882   11.19923911    125.4229566
21.16238730    447.8466362   19.20209245    368.7203546
-39.03494958   1523.727289   -60.19733688   3623.719367
-35.42867222   1255.190815   3.606277361    13.00523641
30.96029485    958.5398569   66.38896706    4407.494948
6.567523322    43.13236259   -24.39277152   595.0073026
-5.826199317   33.94459848   -12.39372264   153.6043608
35.37494202    1251.386523   41.20114134    1697.534048
7.175702914    51.49071231   -28.19923911   795.1970863
-3.621253517   13.11347703   -10.79695643   116.5742682
Total          6650.685687                  13094.17107

d = 13094.17107 / 6650.685687 = 1.9688, or 1.97 (to 2 d.p.).
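If you prefer to script the calculation instead of using Excel, here is a minimal sketch in Python with NumPy; it fits the same regression on the revenues and expenses data above and should reproduce the coefficients and the Durbin-Watson value to rounding.

```python
import numpy as np

# Revenues (y) and expenses (x) from the small-shop example above
y = np.array([321, 312, 300, 301, 320, 330, 340, 350, 352, 364,
              372, 400, 351, 381, 401, 407, 421, 467, 442, 444], dtype=float)
x = np.array([152, 140, 162, 164, 174, 190, 195, 199, 210, 235,
              231, 242, 256, 289, 231, 269, 302, 308, 312, 328], dtype=float)

# OLS fit of y = a + b*x via least squares
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta                       # the residuals epsilon_t

# Durbin-Watson: squared first differences over squared residuals
d = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)
print(beta)   # should be close to (185.28, 0.79981)
print(d)      # should be close to 1.97
```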

Once you have calculated the Durbin-Watson statistic, the next step is to compare the value 1.97 with the critical values of the Durbin-Watson table at the 5% significance level in the appendix at the end of the Econometrics book. To detect positive, negative or no autocorrelation, use the upper and lower critical values according to the number of observations and the number of explanatory variables for the 1% and 5% significance levels.

In the appendix, you have a column labelled n, which is the number of observations, and K, which is the number of independent variables used in the test. Then, you have dL, the Durbin lower critical value, and dU, the Durbin upper critical value.

If d > dU, there is no evidence of positive autocorrelation: do not reject the null hypothesis.

In contrast, if d < dL, there is evidence of positive autocorrelation: reject the null hypothesis of no autocorrelation.

If dL < d < dU, the test is inconclusive.
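The decision rule can be wrapped in a small helper function. The sketch below also covers the negative-autocorrelation side using the standard 4 − d symmetry of the Durbin-Watson test, which goes slightly beyond the rule stated above.

```python
def dw_decision(d: float, dL: float, dU: float) -> str:
    """Interpret a Durbin-Watson statistic given the tabulated critical values."""
    if d < dL:
        return "evidence of positive autocorrelation: reject H0"
    if d < dU:
        return "inconclusive"
    if d <= 4 - dU:
        return "no evidence of autocorrelation: do not reject H0"
    if d <= 4 - dL:
        return "inconclusive"
    return "evidence of negative autocorrelation: reject H0"

print(dw_decision(1.97, dL=1.20, dU=1.41))
```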


In our case, by checking the values in the appendix, we reach the following conclusions:

n = 20
K = 1, as we have one independent variable
dL = 1.20
dU = 1.41
d = 1.97 (the value that we have calculated)

Since d = 1.97 > dU = 1.41, there is no evidence of positive autocorrelation, and the sample evidence suggests that the null hypothesis is not rejected at the 5% significance level.

Many thanks for your participation and attention.

I have attached the output of the regression equation and the result of the Durbin-Watson statistic in EViews 6. The Durbin-Watson statistic appears in bold at the bottom of the table.

Dependent Variable: Rvenue
Method: Least Squares
Date: 05/19/16  Time: 12:15
Sample: 1 20
Included observations: 20

Variable   Coefficient   Std. Error   t-Statistic   Prob.
C          185.2836      17.96587     10.31309      0.0000
EXPE       0.799810      0.076026     10.52022      0.0000

R-squared            0.860113    Mean dependent var      368.8000
Adjusted R-squared   0.852341    S.D. dependent var      50.02273
S.E. of regression   19.22193    Akaike info criterion   8.844620
Sum squared resid    6650.686    Schwarz criterion       8.944193
Log likelihood       -86.44620   Hannan-Quinn criter.    8.864058
F-statistic          110.6751    **Durbin-Watson stat    1.968845**
Prob(F-statistic)    0.000000


Exercise

You are given the following data on the number of persons and their daily incomes expressed in pounds. Daily income is the dependent variable and the number of persons is the independent variable.

Number of persons (x)   Daily incomes (y)
43   63.00
32   54.30
32   51.00
30   39.00
26   52.00
25   55.00
23   41.20
22   47.70
22   44.50
21   43.00
20   46.80
20   42.40
19   56.50
19   55.00
19   53.00
18   55.00
18   45.00
17   50.70
17   37.50

You are required to calculate the Durbin-Watson statistic in Excel and EViews 6.


Solution

First of all, plot your data in Excel. Then, run the regression, checking the residuals box as well. You will get the following output.

SUMMARY OUTPUT

Regression Statistics
Multiple R           0.386547
R Square             0.149419
Adjusted R Square    0.099385
Standard Error       6.465665
Observations         19

ANOVA
             df    SS         MS         F          Significance F
Regression   1     124.8433   124.8433   2.986338   0.102089
Residual     17    710.6824   41.80482
Total        18    835.5253

             Coefficients   Standard Error   t Stat     P-value    Lower 95%   Upper 95%
Intercept    39.96495       5.481549         7.290813   1.26E-06   28.39987    51.53002
x            0.39112        0.226329         1.728102   0.102089   -0.08639    0.868633


RESIDUAL OUTPUT

Observation   Predicted y   Residuals ε_t
1    56.78309   6.21691
2    52.48078   1.819225
3    52.48078   -1.48078
4    51.69854   -12.6985
5    50.13406   1.865942
6    49.74294   5.257062
7    48.96070   -7.76070
8    48.56958   -0.86958
9    48.56958   -4.06958
10   48.17846   -5.17846
11   47.78734   -0.98734
12   47.78734   -5.38734
13   47.39622   9.103779
14   47.39622   7.603779
15   47.39622   5.603779
16   47.00510   7.994899
17   47.00510   -2.00510
18   46.61398   4.086018
19   46.61398   -9.11398

Then, we apply the equation of the Durbin-Watson statistic based on the error term or residuals:

d = \frac{\sum_{t=2}^{n} (\varepsilon_t - \varepsilon_{t-1})^2}{\sum_{t=1}^{n} \varepsilon_t^2}

where d is the Durbin-Watson statistic, ε_t is the error term in period t, and ε_{t-1} is the error term lagged one period or observation.

Calculate the squared error term ε_t², the difference (ε_t − ε_{t-1}) and its square in Excel.


Observation   Residuals ε_t   ε_t²        (ε_t − ε_{t-1})   (ε_t − ε_{t-1})²
1    6.21691     38.649965
2    1.819225    3.3095789   -4.39768477   19.33963
3    -1.48078    2.1926951   -3.3          10.89
4    -12.6985    161.25282   -11.2177609   125.8382
5    1.865942    3.4817404   14.5644783    212.1246
6    5.257062    27.636699   3.39111957    11.49969
7    -7.7607     60.22845    -13.0177609   169.4621
8    -0.86958    0.7561685   6.89111957    47.48753
9    -4.06958    16.561477   -3.2          10.24
10   -5.17846    26.816447   -1.10888043   1.229616
11   -0.98734    0.974841    4.19111957    17.56548
12   -5.38734    29.023436   -4.4          19.36
13   9.103779    82.878795   14.4911196    209.9925
14   7.603779    57.817458   -1.5          2.25
15   5.603779    31.402341   -2            4
16   7.994899    63.918406   2.39111957    5.717453
17   -2.0051     4.020431    -10           100
18   4.086018    16.695546   6.09111957    37.10174
19   -9.11398    83.064662   -13.2         174.24
Total            710.68196                 1178.338

d = 1178.338 / 710.68196 = 1.658, or 1.66 (to 2 d.p.).

Once you have calculated the Durbin-Watson statistic, the next step is to compare the value 1.66 with the critical values of the Durbin-Watson table at the 5% significance level in the appendix at the end of the Econometrics book.

In the appendix, you have a column labelled n, which is the number of observations, and K, which is the number of independent variables used in the test. Then, you have dL, the Durbin lower critical value, and dU, the Durbin upper critical value. The test is inconclusive if dL < d < dU.

If d > dU, there is no evidence of positive autocorrelation: do not reject the null hypothesis.

In contrast, if d < dL, there is evidence of positive autocorrelation: reject the null hypothesis of no autocorrelation.

In our case, by checking the values in the appendix we have the following conclusions.


n = 19
K = 1, as we have one independent variable
dL = 1.18
dU = 1.40
d = 1.66 (the value that we have calculated)

Please state your conclusion on whether to reject or not reject the null hypothesis.

I have attached the output of the regression equation and the result of the Durbin-Watson statistic in EViews 6. The Durbin-Watson statistic appears in bold at the bottom of the table.

Dependent Variable: Y
Method: Least Squares
Date: 05/19/16  Time: 16:03
Sample: 1 19
Included observations: 19

Variable   Coefficient   Std. Error   t-Statistic   Prob.
C          39.96495      5.481549     7.290813      0.0000
X          0.391120      0.226329     1.728102      0.1021

R-squared            0.149419    Mean dependent var      49.08421
Adjusted R-squared   0.099385    S.D. dependent var      6.813081
S.E. of regression   6.465665    Akaike info criterion   6.670189
Sum squared resid    710.6820    Schwarz criterion       6.769604
Log likelihood       -61.36680   Hannan-Quinn criter.    6.687014
F-statistic          2.986338    **Durbin-Watson stat    1.658038**
Prob(F-statistic)    0.102089


Exercise

You are given the following data on the share prices of the car industry in relation to the German DAX index prices. The share prices are denoted in euros.

Share prices (y)   DAX prices (x)
10.45   7000
9.21    6700
8.45    6300
7.89    6200
6.45    6100
5.27    5900
4.78    5800
3.36    5750
2.87    5600
2.23    5500
2.12    5400
2.02    5300
1.87    4800
1.34    3678
1.12    3100

You are required to calculate the Durbin – Watson statistic in Excel and EViews 6.


Solution

First of all, plot your data in Excel. Then, run the regression, checking the residuals box as well. You will get the following output.

SUMMARY OUTPUT

Regression Statistics
Multiple R           0.823055087
R Square             0.677419676
Adjusted R Square    0.652605805
Standard Error       1.851547778
Observations         15

ANOVA
             df    SS         MS          F          Significance F
Regression   1     93.59079   93.59079    27.30004   0.000164
Residual     13    44.56698   3.4282292
Total        14    138.1578

             Coefficients    Standard Error   t Stat       P-value    Lower 95%   Upper 95%
Intercept    -9.170194735    2.68388          -3.4167679   0.004592   -14.9684    -3.37203
dax          0.00248993      0.000477         5.224944     0.000164   0.00146     0.003519

RESIDUAL OUTPUT

Observation   Predicted share   Residuals
1    8.259316948    2.190683
2    7.512337876    1.697662
3    6.516365780    1.933634
4    6.267372756    1.622627
5    6.018379732    0.431626
6    5.520393683    -0.250394
7    5.271400659    -0.491401
8    5.146904147    -1.786904
9    4.773414611    -1.903415
10   4.524421587    -2.294422
11   4.275428563    -2.155429
12   4.026435539    -2.006436
13   2.781470419    -0.911470
14   -0.012231311   1.352231
15   -1.451410990   2.571411

Then, we apply the equation of the Durbin-Watson statistic based on the error term or residuals:

d = \frac{\sum_{t=2}^{n} (\varepsilon_t - \varepsilon_{t-1})^2}{\sum_{t=1}^{n} \varepsilon_t^2}

where d is the Durbin-Watson statistic, ε_t is the error term in period t, and ε_{t-1} is the error term lagged one period or observation.

Calculate the squared error term ε_t², the difference (ε_t − ε_{t-1}) and its square in Excel.

RESIDUAL OUTPUT

Observation   Residuals ε_t   ε_t²         (ε_t − ε_{t-1})   (ε_t − ε_{t-1})²
1    2.190683052    4.79909223
2    1.697662124    2.88205669   -0.4930209   0.24307
3    1.93363422     3.738941     0.2359721    0.055683
4    1.622627244    2.63291917   -0.311007    0.096725
5    0.431626268    0.18629606   -1.191007    1.418498
6    -0.250393683   0.062697     -0.682014    0.465143
7    -0.491400659   0.24147461   -0.241007    0.058084
8    -1.786904147   3.19302643   -1.2955035   1.678329
9    -1.903414611   3.62298718   -0.1165105   0.013575
10   -2.294421587   5.26437042   -0.391007    0.152886
11   -2.155428563   4.64587229   0.138993     0.019319
12   -2.006435539   4.02578357   0.148993     0.022199
13   -0.911470419   0.83077832   1.0949651    1.198949
14   1.352231311    1.82852952   2.2637017    5.124346
15   2.57141099     6.61215448   1.2191797    1.486399
Total               44.5669793                12.0332

d = 12.0332 / 44.5669793 = 0.2700, or 0.27 (to 2 d.p.).

Once you have calculated the Durbin-Watson statistic, the next step is to compare the value 0.27 with the critical values of the Durbin-Watson table at the 5% significance level in the appendix at the end of the Econometrics book.

In the appendix, you have a column labelled n, which is the number of observations, and K, which is the number of independent variables used in the test. Then, you have dL, the Durbin lower critical value, and dU, the Durbin upper critical value. The test is inconclusive if dL < d < dU.

If d > dU, there is no evidence of positive autocorrelation: do not reject the null hypothesis.

In contrast, if d < dL, there is evidence of positive autocorrelation: reject the null hypothesis of no autocorrelation.

In our case, by checking the values in the appendix we have the following conclusions.

n = 15
K = 1, as we have one independent variable
dL = 1.08
dU = 1.36
d = 0.27 (the value that we have calculated)

Since d = 0.27 < dL = 1.08, there is evidence of positive autocorrelation, and the sample evidence suggests that the null hypothesis of no autocorrelation is rejected at the 5% significance level.

I have attached the output of the regression equation and the result of the Durbin-Watson statistic in EViews 6. The Durbin-Watson statistic appears in bold at the bottom of the table.

Dependent Variable: SHARE
Method: Least Squares
Date: 05/19/16  Time: 16:46
Sample: 1 15
Included observations: 15

Variable Coefficient Std. Error t-Statistic Prob.

C          -9.170195     2.683880     -3.416768     0.0046
DAX        0.002490      0.000477     5.224944      0.0002

R-squared            0.677420    Mean dependent var      4.628667
Adjusted R-squared   0.652606    S.D. dependent var      3.141403
S.E. of regression   1.851548    Akaike info criterion   4.193487
Sum squared resid    44.56698    Schwarz criterion       4.287893
Log likelihood       -29.45115   Hannan-Quinn criter.    4.192481
F-statistic          27.30004    **Durbin-Watson stat    0.270003**
Prob(F-statistic)    0.000164

Please use the first-difference values of the variables instead of their levels to correct for the autocorrelation; this produces a larger DW statistic and a lower residual sum of squares. I have attached the table with the relevant data.

Share prices (y)   DAX prices (x)   ΔShare prices (y)   ΔDAX prices (x)
10.45   7000   0       0
9.21    6700   -1.24   -300
8.45    6300   -0.76   -400
7.89    6200   -0.56   -100
6.45    6100   -1.44   -100
5.27    5900   -1.18   -200
4.78    5800   -0.49   -100
3.36    5750   -1.42   -50
2.87    5600   -0.49   -150
2.23    5500   -0.64   -100
2.12    5400   -0.11   -100
2.02    5300   -0.10   -100
1.87    4800   -0.15   -500
1.34    3678   -0.53   -1122
1.12    3100   -0.22   -578

I have attached the output in EViews 6. As you can see, the DW statistic has increased to 1.51 from 0.27, and the residual sum of squares has decreased to 2.81 from 44.57. Please note that the first-differences equation leads to a lower R², a larger DW value and a lower residual sum of squares than the equation in levels. Note that R² is not comparable between different models. We prefer models with a high R², as it is an indication of a strong relationship between the variables. When we use first differences, we usually get a low R². A possible explanation of this phenomenon is that the regression equation is misspecified; for example, there may be missing independent variables that could explain a high degree of the changes in the dependent variable.


Dependent Variable: SHARE
Method: Least Squares
Date: 05/21/16  Time: 19:54
Sample: 1 14
Included observations: 14

Variable   Coefficient   Std. Error   t-Statistic   Prob.
C          -0.767797     0.180981     -4.242428     0.0011
DAX        -0.000364     0.000455     -0.800290     0.4391

R-squared            0.050668    Mean dependent var      -0.666429
Adjusted R-squared   -0.028443   S.D. dependent var      0.476939
S.E. of regression   0.483674    Akaike info criterion   1.516754
Sum squared resid    2.807291    Schwarz criterion       1.608048
Log likelihood       -8.617277   Hannan-Quinn criter.    1.508303
F-statistic          0.640463    **Durbin-Watson stat    1.509150**
Prob(F-statistic)    0.439100
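The same correction can be sketched in Python with NumPy: difference both series, refit, and recompute the statistics. The results should move towards the EViews values above (a DW statistic of about 1.51 and a residual sum of squares of about 2.81).

```python
import numpy as np

share = np.array([10.45, 9.21, 8.45, 7.89, 6.45, 5.27, 4.78, 3.36,
                  2.87, 2.23, 2.12, 2.02, 1.87, 1.34, 1.12])
dax = np.array([7000, 6700, 6300, 6200, 6100, 5900, 5800, 5750,
                5600, 5500, 5400, 5300, 4800, 3678, 3100], dtype=float)

dy, dx = np.diff(share), np.diff(dax)         # 14 first differences

# OLS fit of the differenced equation
X = np.column_stack([np.ones_like(dx), dx])
beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
resid = dy - X @ beta

rss = np.sum(resid ** 2)                      # residual sum of squares
d = np.sum(np.diff(resid) ** 2) / rss         # Durbin-Watson statistic
print(beta, rss, d)
```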

Exercise

You are given the following data on output in units in relation to total cost denoted in pounds.

Output in units (y)   Total cost in pounds (x)
0    12
1    23
2    29
3    34
4    42
5    57
6    71
7    74
8    76
9    77
10   80
11   82
12   84
13   85
14   88
15   90

You are required to calculate the Durbin – Watson statistic in Excel and EViews 6.


Exercise

You are given the following data on revenues in relation to expenditures, denoted in pounds.

Revenues (y)   Expenditures (x)
7000    3000
8000    4000
8500    5000
10000   6000
12000   7000
13000   10000
14000   11000
18000   8000
19000   12000
20000   10000
21000   18000
22000   19000
23000   18000
25000   20000
30000   24000

You are required to calculate the Durbin – Watson statistic in Excel and EViews 6.


Exercise

I have attached another example with the following dataset. The dependent variable is share prices and the independent variables are the market and the price earnings ratio, PE.

Share       Market     PE
3.526787    8.73209    0.499819922
-4.34533    -5.19815   -4.2180742
5.222709    6.21865    0.877518989
-4.99619    -5.5393    -0.745571156
-3.04336    7.69808    -0.317399673
-2.375422   -4.99735   2.061190791
2.651303    5.42777    2.629939417
-0.68924    -1.5424    -0.880900847
0.205664    1.4639     0.253011757
2.4783      3.6528     -0.920953642
0.237407    -0.1494    -1.148712678
0.329728    0.16688    0.372533199
-0.26869    -0.1444    -1.714075924
0.064769    0.097873   4.630805383
-0.5873     -0.09911   -2.530835538
0.329225    -0.08344   -1.038471775
-0.11849    0.122767   0.530904829
0.011541    -0.45767   -1.857415164
-0.18757    -0.53046   1.227869802
-0.38752    -0.11118   0.642577585
-0.26835    -0.28947   1.839049201
0.262798    -0.17676   -0.95432989
0.355054    -1.15686   -2.503369578
-1.34302    -0.5771    0.877111656
-0.77964    0.578182   -1.55602547
-0.04649    -0.05331   0.040559692
0.098381    -0.23054   -0.522472998
-0.09585    -0.66625   2.93017109
-0.0059     -0.50071   -0.497160825
-0.05415    -0.53128   -1.885502172

You are required to calculate the Durbin – Watson statistic in Excel and EViews 6.

Solution

First of all, plot your data in Excel. Then, run the regression, checking the residuals box as well. You will get the following output.

SUMMARY OUTPUT

Regression Statistics
Multiple R           0.696339
R Square             0.484888
Adjusted R Square    0.446732
Standard Error       1.482907
Observations         30

ANOVA
             df    SS         MS         F          Significance F
Regression   2     55.88963   27.94482   12.70789   0.000129
Residual     27    59.37335   2.199013
Total        29    115.263

             Coefficients   Standard Error   t Stat     P-value    Lower 95%   Upper 95%
Intercept    -0.26077       0.273942         -0.95191   0.349589   -0.82285    0.301316
Market       0.40578        0.08738          4.643849   7.93E-05   0.226491    0.585069
PE           0.131419       0.15448          0.850722   0.402405   -0.18555    0.448385

RESIDUAL OUTPUT

Observation   Predicted Share   Residuals
1    3.348225697    0.178561
2    -2.924408515   -1.420921
3    2.377959224    2.844750
4    -2.606486136   -2.389704
5    2.821246723    -5.864607
6    -2.017710802   -0.357711
7    2.287338016    0.363965
8    -1.002409369   0.313169
9    0.366504857    -0.160841
10   1.100434692    1.377865
11   -0.472353622   0.709761
12   -0.144092364   0.473820
13   -0.544624400   0.275934
14   0.387525469    -0.322756
15   -0.633584679   0.046285
16   -0.431100589   0.760326
17   -0.141179432   0.022689
18   -0.690580635   0.702122
19   -0.314651119   0.127081
20   -0.221434460   -0.166086
21   -0.136541402   -0.131809
22   -0.457910095   0.720708
23   -1.059188817   1.414243
24   -0.379673112   -0.963347
25   -0.230644260   -0.548996
26   -0.277068794   0.230579
27   -0.422978574   0.521360
28   -0.146036594   0.050187
29   -0.529281622   0.523382
30   -0.724141271   0.669991

Then, we apply the equation of the Durbin-Watson statistic based on the error term or residuals:

d = \frac{\sum_{t=2}^{n} (\varepsilon_t - \varepsilon_{t-1})^2}{\sum_{t=1}^{n} \varepsilon_t^2}

where d is the Durbin-Watson statistic, ε_t is the error term in period t, and ε_{t-1} is the error term lagged one period or observation.

Calculate the squared error term ε_t², the difference (ε_t − ε_{t-1}) and its square in Excel.

RESIDUAL OUTPUT

Observation   Residuals ε_t   ε_t²          (ε_t − ε_{t-1})   (ε_t − ε_{t-1})²
1    0.178561303    0.031884139
2    -1.420921485   2.019017866   -1.59948279    2.558345
3    2.844749776    8.09260129    4.265671261    18.19595
4    -2.389703864   5.710684559   -5.23445364    27.39955
5    -5.864606723   34.39361201   -3.47490286    12.07495
6    -0.357711198   0.127957301   5.506895525    30.3259
7    0.363964984    0.13247051    0.721676182    0.520817
8    0.313169369    0.098075053   -0.05079562    0.002589
9    -0.160840857   0.025869781   -0.47401023    0.224686
10   1.377865308    1.898512807   1.538706165    2.367617
11   0.709760622    0.503760141   -0.66810469    0.446364
12   0.473820364    0.224505738   -0.23594026    0.055668
13   0.27593441     0.076139799   -0.19788595    0.039159
14   -0.322756469   0.104171738   -0.59869088    0.358431
15   0.046284679    0.002142272   0.369041148    0.136191
16   0.760325589    0.578095001   0.714040909    0.509854
17   0.022689432    0.00051481    -0.73763616    0.544107
18   0.702121635    0.492974791   0.679432203    0.461628
19   0.127081119    0.016149611   -0.57504052    0.330672
20   -0.16608554    0.027584407   -0.29316666    0.085947
21   -0.131808598   0.017373506   0.034276942    0.001175
22   0.720708095    0.519420159   0.852516693    0.726785
23   1.414242817    2.000082745   0.693534721    0.48099
24   -0.963346888   0.928037226   -2.3775897     5.652933
25   -0.54899574    0.301396322   0.414351148    0.171687
26   0.230578794    0.05316658    0.779574534    0.607736
27   0.521359574    0.271815806   0.29078078     0.084553
28   0.050186594    0.002518694   -0.47117298    0.222004
29   0.523381622    0.273928322   0.473195027    0.223914
30   0.669991271    0.448888304   0.14660965     0.021494
Total               59.37335129                  104.8316

d = 104.8316 / 59.37335129 = 1.7656, or 1.77 (to 2 d.p.).

Once you have calculated the Durbin-Watson statistic, the next step is to compare the value 1.77 with the critical values of the Durbin-Watson table at the 5% significance level in the appendix at the end of the Econometrics book.

In the appendix, you have a column labelled n, which is the number of observations, and K, which is the number of independent variables used in the test. Then, you have dL, the Durbin lower critical value, and dU, the Durbin upper critical value. The test is inconclusive if dL < d < dU.

If d > dU, there is no evidence of positive autocorrelation: do not reject the null hypothesis.

In contrast, if d < dL, there is evidence of positive autocorrelation: reject the null hypothesis of no autocorrelation.

In our case, by checking the values in the appendix we have the following conclusions.

Please complete the following missing data.

n =
K = , as we have two independent variables
dL =
dU =
d = (the value that we have calculated)

Please draw your conclusion on whether to reject or not reject the null hypothesis at the 5% significance level, and calculate the Durbin-Watson statistic in EViews 6.


Example of Breusch – Godfrey LM test

Another test that you can use to detect first order serial correlation is the Breusch – Godfrey LM test.

You are given the following dataset:

Observations   Share returns   Risk-free rate   Size
1    3.526787    2.755836   0.823441
2    -4.34533    2.725726   4.458782
3    5.222709    3.092531   4.232887
4    -4.99619    3.089416   2.137064
5    -3.04336    2.802173   1.658494
6    -2.375422   2.409178   0.20628
7    2.651303    2.220845   0.028879
8    -0.68924    2.522834   0.774512
9    0.205664    2.669247   0.259913
10   2.4783      2.969892   0.325758
11   0.237407    1.994892   0.008251
12   0.329728    1.861559   0.008251
13   -0.26869    2.469416   0.929169
14   0.064769    2.503892   0.633557
15   -0.5873     2.265607   0.711829
16   0.329225    2.606003   0.008251
17   -0.11849    2.639092   2.19223
18   0.011541    2.572749   1.91165
19   -0.18757    2.540823   0.008251
20   -0.38752    2.259086   0.008251

State the null and alternative hypotheses

H0: There is no autocorrelation.
H1: There is autocorrelation.

Run the regression equation with a lagged dependent variable.


The mathematical formula that you insert in EViews 6 is as follows:

y c x1 x2 y(-1)

Then, select View, then Residual Tests, then Serial Correlation LM Test.

Breusch-Godfrey Serial Correlation LM Test:

F-statistic     2.544289    Prob. F(1,15)         0.1315
Obs*R-squared   2.900418    Prob. Chi-Square(1)   0.0886

Test Equation:
Dependent Variable: RESID
Method: Least Squares
Date: 09/06/16  Time: 16:48
Sample: 1 20
Included observations: 20
Presample missing value lagged residuals set to zero.

Variable Coefficient Std. Error t-Statistic Prob.

C           -0.793607   3.157040   -0.251377   0.8049
X1          0.164347    1.137073   0.144535    0.8870
X2          0.258290    0.352155   0.733455    0.4746
X3          0.085138    0.098938   0.860517    0.4030
RESID(-1)   0.502019    0.304735   1.647394    0.1203

R-squared            0.145021    Mean dependent var      -1.83E-16
Adjusted R-squared   -0.015288   S.D. dependent var      1.463129
S.E. of regression   1.474270    Akaike info criterion   3.891060
Sum squared resid    34.77556    Schwarz criterion       4.139993
Log likelihood       -33.91060   Hannan-Quinn criter.    3.939654
F-statistic          0.678477    Durbin-Watson stat      1.697407
Prob(F-statistic)    0.616781

Is there serial correlation? Check the probability of the chi-square statistic at the 5% significance level.

Hint: If the probability value is below the 5% significance level, then the LM test is significant and the sample evidence suggests that we can reject the null hypothesis of no first-order autocorrelation. If the probability value is above the 5% significance level, we cannot reject the null hypothesis of no autocorrelation.
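A sketch of the same test in Python with statsmodels follows. Note one difference: this version drops the first observation to build the lag, whereas EViews keeps all 20 observations by setting the presample residual to zero, so the numbers will differ slightly from the output above.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

# Share returns (y), risk-free rate (x1) and size (x2) from the table above
y = np.array([3.526787, -4.34533, 5.222709, -4.99619, -3.04336, -2.375422,
              2.651303, -0.68924, 0.205664, 2.4783, 0.237407, 0.329728,
              -0.26869, 0.064769, -0.5873, 0.329225, -0.11849, 0.011541,
              -0.18757, -0.38752])
x1 = np.array([2.755836, 2.725726, 3.092531, 3.089416, 2.802173, 2.409178,
               2.220845, 2.522834, 2.669247, 2.969892, 1.994892, 1.861559,
               2.469416, 2.503892, 2.265607, 2.606003, 2.639092, 2.572749,
               2.540823, 2.259086])
x2 = np.array([0.823441, 4.458782, 4.232887, 2.137064, 1.658494, 0.20628,
               0.028879, 0.774512, 0.259913, 0.325758, 0.008251, 0.008251,
               0.929169, 0.633557, 0.711829, 0.008251, 2.19223, 1.91165,
               0.008251, 0.008251])

# Regression with a lagged dependent variable, as in the EViews spec y c x1 x2 y(-1)
X = sm.add_constant(np.column_stack([x1[1:], x2[1:], y[:-1]]))
res = sm.OLS(y[1:], X).fit()

lm_stat, lm_pval, f_stat, f_pval = acorr_breusch_godfrey(res, nlags=1)
print(lm_stat, lm_pval)   # compare the chi-square p-value with 0.05
```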


Exercise

Please consider the following regression equation:

ŷ_t = 1.4 + 0.2x_1 + 1.2x_2
t-statistics: (0.2) (0.45) (1.34)
R² = 0.98,  DW = 1.9

Based on the data given, determine whether there is serial correlation in the errors.

Exercise

Please state whether the following statements are true or false by giving an explanation.

(a) Lack of detection of serial correlation in the errors is the cause of biased standard errors.
(b) The Durbin-Watson statistic is irrelevant if the errors follow a heteroskedastic pattern.
(c) An econometrician is estimating total expenses and revenues based on their levels and not on their first differences in order to get a high R². Is this a good argument for obtaining a model without serial correlation in the errors?
(d) Autocorrelation in time series could be corrected using first-differenced variables.
(e) Is the Durbin-Watson statistic relevant if the errors follow a homoskedastic pattern?


Exercise of the h-statistic

There are cases in which we use the h-statistic instead of the Durbin-Watson statistic. It is used in autoregressive models with a lagged dependent variable; for example, when we are testing for serial correlation in models with a lagged dependent variable. Lagged dependent variables are used as explanatory variables when we have adjustment lags and expectations of an investment product.

Please consider the following regression equation:

y_t = α + β_1 x_1 + β_2 x_2 + β_3 y_{t-1} + ε_t

The mathematical formula of the h-statistic is as follows (here the coefficient of the lagged dependent variable is β_3):

h = \hat{\rho} \sqrt{\frac{n}{1 - n \, v(\hat{\beta}_3)}}

where:

h is the statistic that is used instead of the Durbin-Watson one.
ρ̂ is the estimator of the first-order serial correlation coefficient obtained from the ordinary least squares residuals. It lies between -1 and 1 and can be approximated as ρ̂ ≈ 1 − d/2. There are different methods to calculate the coefficient of autocorrelation ρ, for example, the Durbin two-step method and the Cochrane-Orcutt two-step method.
d is the Durbin-Watson statistic, calculated when running the regression equation in EViews 6.
n is the sample size.
v(β̂_3) is the estimated variance of the coefficient of the lagged dependent variable. Hint: you square the value of the standard error of the coefficient of the lagged dependent variable.


You are given the following dataset:

Observations   Share returns   Risk-free rate   Size
1    3.526787    2.755836   0.823441
2    -4.34533    2.725726   4.458782
3    5.222709    3.092531   4.232887
4    -4.99619    3.089416   2.137064
5    -3.04336    2.802173   1.658494
6    -2.375422   2.409178   0.20628
7    2.651303    2.220845   0.028879
8    -0.68924    2.522834   0.774512
9    0.205664    2.669247   0.259913
10   2.4783      2.969892   0.325758
11   0.237407    1.994892   0.008251
12   0.329728    1.861559   0.008251
13   -0.26869    2.469416   0.929169
14   0.064769    2.503892   0.633557
15   -0.5873     2.265607   0.711829
16   0.329225    2.606003   0.008251
17   -0.11849    2.639092   2.19223
18   0.011541    2.572749   1.91165
19   -0.18757    2.540823   0.008251
20   -0.38752    2.259086   0.008251

You are required to calculate the following:

(a) Run the regression equation and report the numerical values of the coefficients, the standard errors, the t-statistics, the R² and the Durbin-Watson statistic. The format of the regression equation will be as follows:

y_t = α + β_1 x_1 + β_2 x_2 + β_3 y_{t-1} + ε_t

where y_t is the share returns and is the dependent variable, x_1 is the risk-free rate (the first independent variable), x_2 is the size effect (the second independent variable),


and y_{t-1} is the dependent variable lagged one period.

Please report the following: standard errors (SE), t-statistics, DW and R².

(b) Calculate the h-statistic based on the above equation.
(c) Test the null hypothesis of no first-order autocorrelation at the 5% significance level. If the calculated value of h is within ±1.96, then we do not reject the null hypothesis.

Help: I have attached the regression output with a lagged dependent variable.

The mathematical formula that you insert in EViews 6 is as follows:

y c x1 x2 y(-1)

Dependent Variable: Y
Method: Least Squares
Date: 09/06/16  Time: 16:35
Sample: 1 20
Included observations: 20

Variable   Coefficient   Std. Error   t-Statistic   Prob.
C          -0.746361     3.374321     -0.221189     0.8277
X1         0.205429      1.224991     0.167698      0.8689
X2         0.147632      0.341023     0.432911      0.6709
Y(-1)      0.474979      0.091243     5.205664      0.0001

R-squared            0.639599    Mean dependent var      -0.097084
Adjusted R-squared   0.572023    S.D. dependent var      2.437189
S.E. of regression   1.594407    Akaike info criterion   3.947738
Sum squared resid    40.67416    Schwarz criterion       4.146884
Log likelihood       -35.47738   Hannan-Quinn criter.    3.986613
F-statistic          9.464978    Durbin-Watson stat      1.131510
Prob(F-statistic)    0.000783

Hint: The variance of the coefficient of the lagged dependent variable is calculated by squaring the numerical value of its standard error. In other words, (0.091243)² = 0.008325285.

I have done the calculations of the h-statistic. The numerical value exceeds the critical value of ±1.96 at the 95% confidence level.
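The arithmetic can be checked in a few lines of Python, using the Durbin-Watson statistic and the standard error of the Y(-1) coefficient from the output above, together with the approximation ρ̂ ≈ 1 − d/2.

```python
import math

d = 1.131510          # Durbin-Watson statistic from the EViews output
n = 20                # sample size
se_lag = 0.091243     # standard error of the Y(-1) coefficient
v = se_lag ** 2       # its estimated variance, 0.008325285

rho = 1 - d / 2                       # approximate first-order serial correlation
h = rho * math.sqrt(n / (1 - n * v))
print(rho, h)                         # h is roughly 2.13, which exceeds 1.96
```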


Example of the Newey and West standard errors

The Newey and West procedure adjusts the standard error values for the heteroskedasticity and autocorrelation problem. You will find this option in the EViews 6 software. You enter your regression equation in the specification box and select LS, least squares regression. Then, you press Options, click on the box Heteroskedasticity consistent coefficient covariance and select Newey-West.

Let’s take an example to see the differences in the numerical values of the standard errors and the t-statistics. The numerical values of the coefficients remain the same.

You are given the following dataset:

Observations   Share returns   Risk-free rate   Size
1    3.526787    2.755836   0.823441
2    -4.34533    2.725726   4.458782
3    5.222709    3.092531   4.232887
4    -4.99619    3.089416   2.137064
5    -3.04336    2.802173   1.658494
6    -2.375422   2.409178   0.20628
7    2.651303    2.220845   0.028879
8    -0.68924    2.522834   0.774512
9    0.205664    2.669247   0.259913
10   2.4783      2.969892   0.325758
11   0.237407    1.994892   0.008251
12   0.329728    1.861559   0.008251
13   -0.26869    2.469416   0.929169
14   0.064769    2.503892   0.633557
15   -0.5873     2.265607   0.711829
16   0.329225    2.606003   0.008251
17   -0.11849    2.639092   2.19223
18   0.011541    2.572749   1.91165
19   -0.18757    2.540823   0.008251
20   -0.38752    2.259086   0.008251


Run the original regression in EViews 6. The dependent variable is share returns and the independent variables are the risk-free rate and size.

y_t = α + β_1 x_1 + β_2 x_2 + ε_t

The regression equation in EViews 6 will be as follows:

y c x1 x2

Dependent Variable: Y
Method: Least Squares
Date: 09/07/16  Time: 15:05
Sample: 1 20
Included observations: 20

Variable   Coefficient   Std. Error   t-Statistic   Prob.
C          -1.842579     5.362260     -0.343620     0.7353
X1         0.461626      1.948906     0.236864      0.8156
X2         0.383890      0.538160     0.713338      0.4853

R-squared            0.029193    Mean dependent var      -0.097084
Adjusted R-squared   -0.085020   S.D. dependent var      2.437189
S.E. of regression   2.538681    Akaike info criterion   4.838647
Sum squared resid    109.5633    Schwarz criterion       4.988007
Log likelihood       -45.38647   Hannan-Quinn criter.    4.867804
F-statistic          0.255598    Durbin-Watson stat      2.714070
Prob(F-statistic)    0.777377

Press Estimate. Then, press Options, click on the box Heteroskedasticity consistent coefficient covariance and select Newey-West.

Dependent Variable: Y
Method: Least Squares
Date: 09/07/16  Time: 15:06
Sample: 1 20
Included observations: 20
Newey-West HAC Standard Errors & Covariance (lag truncation=2)

Variable   Coefficient   Std. Error   t-Statistic   Prob.
C          -1.842579     4.973573     -0.370474     0.7156
X1         0.461626      1.899490     0.243026      0.8109
X2         0.383890      0.525856     0.730029      0.4753

R-squared            0.029193    Mean dependent var      -0.097084
Adjusted R-squared   -0.085020   S.D. dependent var      2.437189
S.E. of regression   2.538681    Akaike info criterion   4.838647
Sum squared resid    109.5633    Schwarz criterion       4.988007
Log likelihood       -45.38647   Hannan-Quinn criter.    4.867804
F-statistic          0.255598    Durbin-Watson stat      2.714070
Prob(F-statistic)    0.777377

Can you see the difference between the two regressions, the original regression and the regression that incorporates the Newey-West HAC standard errors and covariance? The coefficients are the same, while the standard errors and the t-statistics are adjusted to correct for the heteroskedasticity and autocorrelation problem. If the problem of autocorrelation persists, then transform your variables, drop or change them, or increase your sample size.
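For reference, a sketch of the equivalent adjustment in Python with statsmodels: an ordinary OLS fit next to a fit with HAC (Newey-West) standard errors and a lag truncation of 2, on the same share returns, risk-free rate and size data from the table above.

```python
import numpy as np
import statsmodels.api as sm

y = np.array([3.526787, -4.34533, 5.222709, -4.99619, -3.04336, -2.375422,
              2.651303, -0.68924, 0.205664, 2.4783, 0.237407, 0.329728,
              -0.26869, 0.064769, -0.5873, 0.329225, -0.11849, 0.011541,
              -0.18757, -0.38752])
x1 = np.array([2.755836, 2.725726, 3.092531, 3.089416, 2.802173, 2.409178,
               2.220845, 2.522834, 2.669247, 2.969892, 1.994892, 1.861559,
               2.469416, 2.503892, 2.265607, 2.606003, 2.639092, 2.572749,
               2.540823, 2.259086])
x2 = np.array([0.823441, 4.458782, 4.232887, 2.137064, 1.658494, 0.20628,
               0.028879, 0.774512, 0.259913, 0.325758, 0.008251, 0.008251,
               0.929169, 0.633557, 0.711829, 0.008251, 2.19223, 1.91165,
               0.008251, 0.008251])

X = sm.add_constant(np.column_stack([x1, x2]))
ols = sm.OLS(y, X).fit()                                       # ordinary standard errors
hac = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 2})

print(ols.params)   # the coefficients: identical in both fits
print(ols.bse)      # ordinary standard errors
print(hac.bse)      # Newey-West adjusted standard errors
```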


Exercise

You are given the following dataset relating interest rates to the return on investment, both denominated in percentages.

Observations   Interest rates (%)   Return on investment (%)
1    10.45   9.56
2    10.12   8.12
3    9.67    8.34
4    9.45    7.47
5    9.10    7.11
6    8.56    6.45
7    8.24    6.12
8    7.45    6.78
9    7.23    5.34
10   6.14    5.11
11   6.11    5.23
12   5.45    4.17
13   5.33    4.18
14   5.12    3.56
15   4.24    3.12
16   4.11    2.14
17   3.45    2.56
18   3.33    1.45
19   3.12    1.23
20   2.15    1.12

(a) Check for positive or negative first-order autocorrelation in the residuals by plotting your data in a scatterplot.

(b) Run the ordinary least squares regression. If there is autocorrelation, what remedial actions are you going to take? Check the value of the Durbin – Watson statistic.

(c) Are you going to use a natural logarithmic (ln) transformation?

Exercise

Estimate the coefficient of autocorrelation using Durbin's two-step method.


Share prices (y)   DAX prices (x)
10.45   7000
9.21    6700
8.45    6300
7.89    6200
6.45    6100
5.27    5900
4.78    5800
3.36    5750
2.87    5600
2.23    5500
2.12    5400
2.02    5300
1.87    4800
1.34    3678
1.12    3100

First step: Use the following regression format:

y_t = α + β_1 x_t + β_2 x_{t-1} + β_3 y_{t-1} + ε_t

Run the regression equation in EViews 6. Use the coefficient of the lagged dependent variable as an estimate of the coefficient of autocorrelation ρ.

Second step: Incorporate the value of the coefficient of autocorrelation in the following equation and run the regression in EViews 6. Compare your results in terms of coefficients, standard errors and t-statistics with those of the OLS equation.

(y_t − ρy_{t-1}) = α(1 − ρ) + β_1(x_t − ρx_{t-1}) + ε_t
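A sketch of the two steps in Python with NumPy, using the share/DAX data from the table above:

```python
import numpy as np

y = np.array([10.45, 9.21, 8.45, 7.89, 6.45, 5.27, 4.78, 3.36,
              2.87, 2.23, 2.12, 2.02, 1.87, 1.34, 1.12])
x = np.array([7000, 6700, 6300, 6200, 6100, 5900, 5800, 5750,
              5600, 5500, 5400, 5300, 4800, 3678, 3100], dtype=float)

# Step 1: regress y_t on x_t, x_(t-1) and y_(t-1); the coefficient on
# y_(t-1) is the estimate of the autocorrelation coefficient rho
X1 = np.column_stack([np.ones(len(y) - 1), x[1:], x[:-1], y[:-1]])
b1, *_ = np.linalg.lstsq(X1, y[1:], rcond=None)
rho = b1[3]

# Step 2: OLS on the quasi-differenced variables
y_star = y[1:] - rho * y[:-1]
x_star = x[1:] - rho * x[:-1]
X2 = np.column_stack([np.ones_like(x_star), x_star])
b2, *_ = np.linalg.lstsq(X2, y_star, rcond=None)
print(rho, b2)   # the fitted intercept estimates alpha * (1 - rho)
```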

Other methods to consider in estimating the coefficient of autocorrelation ρ are as follows:

(a) The Cochrane-Orcutt two-step method

(b) The Hildreth-Lu search procedure

(c) The maximum likelihood method.

They are beyond the scope of this book. Please do your own research.


Autocorrelation function (ACF), partial autocorrelation function (PACF) and the Q statistic

The autocorrelation function, ACF, can be used to check for serial correlation in the error term or the residuals.

The mathematical formula is as follows:

ACF_s = \frac{\mathrm{cov}(y_t, y_{t-s})}{\sigma_y^2}

Please consider the following time series in different time periods. The time series represents the returns of the share prices of a hypothetical supermarket in Boscombe, located in the South-West of England.

T = trend   y_t = dependent variable
1    -2.3478
2    -1.2731
3    0.8467
4    0.7829
5    3.0372
6    4.3405
7    0.7341
8    -0.8912
9    -2.3824
10   -1.4827
11   -0.8568
12   0.0785
13   2.4867
14   3.6448
15   4.7522
16   2.8644
17   3.3565
18   5.3247
19   4.3124
20   5.2376

The first step is to find the covariances at each lag and the variance of the dependent variable. Please refer to the correlation section to refresh your knowledge: consider the covariance concept and the measures of dispersion for the variance calculation. Please use Excel to do the calculations. The covariance function in Excel is =COVAR(). I will do the first four lags for simplicity; the EViews software calculates many lags.

The columns below correspond to Excel columns A to F.

T = trend (A)   y_t (B)   y_{t-1} (C)   y_{t-2} (D)   y_{t-3} (E)   y_{t-4} (F)
1    -2.3478
2    -1.2731   -2.3478
3    0.8467    -1.2731   -2.3478
4    0.7829    0.8467    -1.2731   -2.3478
5    3.0372    0.7829    0.8467    -1.2731   -2.3478
6    4.3405    3.0372    0.7829    0.8467    -1.2731
7    0.7341    4.3405    3.0372    0.7829    0.8467
8    -0.8912   0.7341    4.3405    3.0372    0.7829
9    -2.3824   -0.8912   0.7341    4.3405    3.0372
10   -1.4827   -2.3824   -0.8912   0.7341    4.3405
11   -0.8568   -1.4827   -2.3824   -0.8912   0.7341
12   0.0785    -0.8568   -1.4827   -2.3824   -0.8912
13   2.4867    0.0785    -0.8568   -1.4827   -2.3824
14   3.6448    2.4867    0.0785    -0.8568   -1.4827
15   4.7522    3.6448    2.4867    0.0785    -0.8568
16   2.8644    4.7522    3.6448    2.4867    0.0785
17   3.3565    2.8644    4.7522    3.6448    2.4867
18   5.3247    3.3565    2.8644    4.7522    3.6448
19   4.3124    5.3247    3.3565    2.8644    4.7522
20   5.2376    4.3124    5.3247    3.3565    2.8644

Variance   6.81


The Excel formulas for the covariances are as follows:

Covar(y, y_{t-1})   =COVAR(B2:B19,C2:C19)   4.765142
Covar(y, y_{t-2})   =COVAR(B3:B18,D3:D18)   2.808584
Covar(y, y_{t-3})   =COVAR(B4:B17,E4:E17)   0.962773
Covar(y, y_{t-4})   =COVAR(B5:B16,F5:F16)   -0.62417

By applying the formula, we will have the following results:

ACF1 = 4.765142 / 6.81 = 0.6997

ACF2 = 2.808584 / 6.81 = 0.4124

ACF3 = 0.962773 / 6.81 = 0.1414

ACF4 = -0.62417 / 6.81 = -0.0917

Conclusion: The correlation is high for the first two lags and then decreases and gets close to zero.
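The same arithmetic can be reproduced in Python with NumPy. To mirror the Excel steps above, the numerator uses the population covariance (Excel's COVAR divides by n) while the denominator uses the sample variance, which is where the 6.81 comes from; small differences from the ACF values above arise from the exact cell ranges used in the sheet.

```python
import numpy as np

y = np.array([-2.3478, -1.2731, 0.8467, 0.7829, 3.0372, 4.3405, 0.7341,
              -0.8912, -2.3824, -1.4827, -0.8568, 0.0785, 2.4867, 3.6448,
              4.7522, 2.8644, 3.3565, 5.3247, 4.3124, 5.2376])

def pop_cov(a, b):
    """Population covariance, matching Excel's COVAR()."""
    return np.mean((a - a.mean()) * (b - b.mean()))

var_y = np.var(y, ddof=1)        # sample variance, about 6.81 as in the text

for s in range(1, 5):
    acf_s = pop_cov(y[s:], y[:-s]) / var_y
    print(s, round(acf_s, 4))
```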


The partial autocorrelation function, PACF, consists of the coefficients obtained from a regression of the dependent variable on its own lagged values.

The mathematical formula is as follows:

y_t = β_1 y_{t-1} + β_2 y_{t-2} + ... + β_n y_{t-n} + ε_t

So basically you run a regression of y_t on y_{t-1}, y_{t-2}, y_{t-3} and y_{t-4}. For simplicity, we take four lags and, therefore, you should get four coefficients. I have attached the previous example; a short code sketch follows the table.

T = trend (A)   y_t (B)   y_{t-1} (C)   y_{t-2} (D)   y_{t-3} (E)   y_{t-4} (F)
1    -2.3478
2    -1.2731   -2.3478
3    0.8467    -1.2731   -2.3478
4    0.7829    0.8467    -1.2731   -2.3478
5    3.0372    0.7829    0.8467    -1.2731   -2.3478
6    4.3405    3.0372    0.7829    0.8467    -1.2731
7    0.7341    4.3405    3.0372    0.7829    0.8467
8    -0.8912   0.7341    4.3405    3.0372    0.7829
9    -2.3824   -0.8912   0.7341    4.3405    3.0372
10   -1.4827   -2.3824   -0.8912   0.7341    4.3405
11   -0.8568   -1.4827   -2.3824   -0.8912   0.7341
12   0.0785    -0.8568   -1.4827   -2.3824   -0.8912
13   2.4867    0.0785    -0.8568   -1.4827   -2.3824
14   3.6448    2.4867    0.0785    -0.8568   -1.4827
15   4.7522    3.6448    2.4867    0.0785    -0.8568
16   2.8644    4.7522    3.6448    2.4867    0.0785
17   3.3565    2.8644    4.7522    3.6448    2.4867
18   5.3247    3.3565    2.8644    4.7522    3.6448
19   4.3124    5.3247    3.3565    2.8644    4.7522
20   5.2376    4.3124    5.3247    3.3565    2.8644
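A sketch of that lag regression in Python with NumPy, with no intercept as in the formula above:

```python
import numpy as np

y = np.array([-2.3478, -1.2731, 0.8467, 0.7829, 3.0372, 4.3405, 0.7341,
              -0.8912, -2.3824, -1.4827, -0.8568, 0.0785, 2.4867, 3.6448,
              4.7522, 2.8644, 3.3565, 5.3247, 4.3124, 5.2376])

# Rows are t = 5..20; the columns are y_(t-1), y_(t-2), y_(t-3), y_(t-4)
X = np.column_stack([y[3:-1], y[2:-2], y[1:-3], y[:-4]])
b, *_ = np.linalg.lstsq(X, y[4:], rcond=None)
print(b)   # the four lag coefficients
```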

A further test of correlation is the Box-Pierce statistic Q = T·Σ ACF². The null hypothesis is that there is no correlation. The Q statistic is based on the chi-square distribution.

Let's calculate the Q statistic based on the above example:

Q = T·Σ ACF² = 20 × [0.6997² + 0.4124² + 0.1414² + (−0.0917)²]
  = 20 × (0.48958009 + 0.17007376 + 0.01999396 + 0.00840889)
  = 20 × 0.6880567 = 13.76

Then, the next step is to check in the appendix the critical value of the chi-square table at the 5% significance level. The critical value is 9.49 with four degrees of freedom. The estimated Q value is 13.76 > 9.49, so the sample evidence suggests rejecting the null hypothesis: there is correlation in the share price time series.
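The Q arithmetic and the chi-square comparison can be reproduced in Python (the critical value here comes from scipy):

```python
from scipy.stats import chi2

T = 20
acf = [0.6997, 0.4124, 0.1414, -0.0917]

Q = T * sum(a ** 2 for a in acf)
critical = chi2.ppf(0.95, df=len(acf))   # 9.49 with four degrees of freedom
print(round(Q, 2), round(critical, 2))   # 13.76 > 9.49: reject H0
```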

You can also use the Breusch – Godfrey LM test for detection of serial correlation of the error term.


Multicollinearity

Multicollinearity is a violation of the fifth assumption. It arises when the independent variables are highly correlated with each other, so that it becomes difficult to separate and determine their individual effects on the dependent variable. Please consider the following regression model:

y = β_1 x_1 + β_2 x_2 + ... + β_n x_n + ε_t

The estimated coefficients may be statistically insignificant, even though R² is very high. In addition, the standard errors could be very high or the t-ratios very low, so the confidence intervals for the parameters of interest are very wide. When the explanatory variables are highly intercorrelated, it becomes difficult to isolate the individual effects of each explanatory variable on the explained variable. Multicollinearity can be overcome or reduced by collecting more data or by dropping one of the highly collinear variables.

Multicollinearity refers to the case in which two or more explanatory variables in the regression model are highly correlated, making it difficult or impossible to isolate their individual effects on the dependent variable. The R² is very high, but the t-statistics of the independent variables are not significant. To test for multicollinearity, we construct correlation matrices and compute variance-inflation factors, VIF. It is known as the variance-inflation factor because, as R² increases, the variance and the standard errors of the coefficients of the independent variables increase.

The first step is to construct and analyze the correlation structure of the explanatory variables. The second step is to check the t-ratios of the individual coefficients and the standard errors. The third step is to compute the variance-inflation factors based on the squared multiple correlation coefficient R² of the regression model. Another test is to use auxiliary regressions: we regress each independent variable on the remaining independent variables. For example, if we have three independent variables x_1, x_2 and x_3, we regress x_1 on x_2 and x_3, and then x_2 on x_3. Check the R² and F values of the individual regressions; if they are significant at the 5% or 1% significance level, then we have a collinearity problem. Once the multicollinearity problem is detected, one solution is to drop the collinear variable that shows the highest correlation figure. Another solution is to collect more data and increase the sample size.

Please check the variances, the covariances and the variance of the error term in relation to R². Multicollinearity depends on the variance of the explanatory variables and the variance of the error term. I have included the corresponding formulas:

V(\hat{\beta}_1) = \frac{\sigma^2}{S_{x_1}(1 - r_{1,2}^2)}

V(\hat{\beta}_2) = \frac{\sigma^2}{S_{x_2}(1 - r_{1,2}^2)}

\mathrm{cov}(\hat{\beta}_1, \hat{\beta}_2) = \frac{-\sigma^2 r_{1,2}}{S_{x_1 x_2}(1 - r_{1,2}^2)}

where:

σ² is the variance of the error term.
r²_{1,2} is the squared correlation coefficient between variables 1 and 2.
S_{x_1} = Σx_1² − n·x̄_1²
S_{x_2} = Σx_2² − n·x̄_2²
S_{x_1 x_2} = Σx_1x_2 − n·x̄_1·x̄_2

I have added a brief review of correlation analysis. Simple correlation analysis is concerned with measuring the degree of linear association between two variables. It indicates how well the points on a scatter diagram fit the regression line; in other words, it describes the nature of the spread of the items about the line. The measure most commonly used is Pearson's coefficient of correlation, denoted by R, which always lies between -1 and +1.

Degrees of correlation

Correlation between two variables can be shown on a scatter diagram by plotting a number of pairs of data on the graph.

R = -1     Perfect negative linear correlation
R = -0.8   Strong negative linear correlation
R = -0.2   Weak negative linear correlation
R = 0      No correlation
R = 0.2    Weak positive linear correlation
R = 0.8    Strong positive linear correlation
R = 1      Perfect positive linear correlation

I have attached detailed examples that I used in my PhD thesis to detect multicollinearity.

Tables 1 and 2 show the correlation matrices for the four independent variables used in the UK and US models respectively.

Table 1: Correlation matrix of the UK independent variables. The table shows the correlation matrix of the UK independent variables. The abbreviations used are the following: (Rm−Rf) is the excess market return for the UK, (Rs−Rb) is the size effect, (Rg−Rv) is the book-to-market effect and (Rm)² measures market timing ability. We use monthly data from January 1990 to January 2003.

           (Rm−Rf)   (Rs−Rb)   (Rg−Rv)   (Rm)²
(Rm−Rf)    1
(Rs−Rb)    0.14      1
(Rg−Rv)    -0.04     0.03      1
(Rm)²      -0.11     0.06      0.09      1

Source: calculated by the author

Table 2: Correlation matrix of the US independent variables. The table shows the correlation matrix of the US independent variables. The abbreviations used are the following: (Rm−Rf) is the excess market return for the US, (Rs−Rb) is the size effect, (Rg−Rv) is the book-to-market effect and (Rm)² measures market timing ability. We use monthly data from January 1990 to January 2003.

           (Rm−Rf)   (Rs−Rb)   (Rg−Rv)   (Rm)²
(Rm−Rf)    1
(Rs−Rb)    0.01      1
(Rg−Rv)    -0.04     -0.33     1
(Rm)²      0.18      -0.06     -0.03     1

Source: calculated by the author

According to both tables, none of the correlation coefficients is greater than 0.2 and therefore the independent variables are not strongly correlated.

Another method for detecting multicollinearity is the variance-inflation factor (VIF). It is calculated as

VIF_i = \frac{1}{1 - R_i^2}

where VIF is the variance-inflation factor and R_i² is the squared multiple correlation coefficient obtained from a regression of the i-th independent variable on the other independent variables. A high VIF suggests a collinearity problem; a VIF less than 10 indicates that there is no collinearity problem. Table 3 summarises the results for the UK and Table 4 summarises the results for the US.

Table 3: VIF results of the UK independent variables. The abbreviations used are the following: (Rm−Rf) is the excess market return for the UK, (Rs−Rb) is the size effect, (Rg−Rv) is the book-to-market effect and (Rm)² measures market timing ability. We use monthly data from January 1990 to January 2003.

       (Rm−Rf)   (Rs−Rb)   (Rg−Rv)   (Rm)²
VIF    1.64      1.18      1.13      1.93

Source: calculated by the author

Table 4: VIF results of the US independent variables. The abbreviations used are the following: (Rm,t−Rf,t) is the excess market return for the US, (Rs,t−Rb,t) is the size effect, (Rg−Rv) is the book-to-market effect and (Rm)² measures market timing ability. We use monthly data from January 1990 to January 2003.

       (Rm,t−Rf,t)   (Rs,t−Rb,t)   (Rg−Rv)   (Rm)²
VIF    1.02          1.26          1.38      1.06

Source: calculated by the author

It is clear from both tables that all the independent variables show a VIF below 2, which indicates that there is no multicollinearity.

Exercise

Please consider the following example with n = 72 observations. Please construct the correlation matrix of the following independent variables and analyze the degrees of correlation among them. Do you detect a multicollinearity problem?

Observations  Risk free (x1)  Size (x2)  Market return (x3)  Age (x4)  Hurdle rate (x5)  Mgt performance (x6)
1   2.755836  0.823441  5.397865001   40   10.34  1
2   2.725726  4.458782  6.857003219   25   5.677  1.5
3   3.092531  4.232887  7.989673776   86   10.35  0.67
4   3.089416  2.137064  3.379482658   36   2.345  1
5   2.802173  1.658494  -0.45807375   50   3.13   1.75
6   2.409178  0.20628   7.497715958   13   4.31   1.75
7   2.220845  0.028879  -0.92491506   14   10.45  1.5
8   2.522834  0.774512  -8.40360195   57   0      1.5
9   2.669247  0.259913  1.42834889    72   2.144  2.25
10  2.969892  0.325758  14.91365445   71   5.678  2.45
11  1.994892  0.008251  7.948571731   3    8.99   2.31
12  1.861559  0.008251  6.81028783    3    2.76   2.31
13  2.469416  0.929169  5.783098584   10   15     1.67
14  2.503892  0.633557  -0.67634733   2    1.23   1.23
15  2.265607  0.711829  6.876247514   10   1.34   1.6
16  2.606003  0.008251  6.928020847   6    2.34   1.3
17  2.639092  2.19223   -2.23667513   48   3.456  1.67
18  2.572749  1.91165   8.918700935   17   4.311  1.32
19  2.540823  0.008251  -1.6115003    32   2.67   0.32
20  2.259086  0.008251  2.446540443   27   3.21   0.48
21  2.608419  0.148522  -1.40503816   24   7.89   0.13
22  2.558915  0.191428  7.779562137   22   2.134  0.45
23  3.042134  0.145221  5.438933357   24   1.34   0.67
24  2.168168  0.167979  6.333734893   18   5      1.5
25  3.415726  0.609765  -0.93701863   12   10     1.23
26  2.244892  0.864844  0.084759516   24   1.33   2.34
27  3.257392  0.072066  11.41224438   100  15     2.22
28  2.592576  0.960441  -0.28480562   46   11     1.5
29  2.533267  0.008251  0.89586715    48   14     1.5
30  2.678596  0.008251  3.650721789   48   12     1.5
31  2.595078  0.243411  0.080447353   24   13     1.5
32  3.029892  0.115517  7.968449474   4    13     1.5
33  2.502834  0.406785  -3.33913367   24   10     1.5
34  2.801892  0.187146  1.174795116   12   15     1.5
35  2.985979  0.553095  -5.23778486   36   10     1.5
36  2.873892  0.452711  2.603855611   2    12     1.5
37  2.738983  0.258661  6.248583233   24   15     1
38  2.188128  0.742609  -7.40827572   10   10     1.5
39  2.984753  0.142399  -5.46280949   84   12     1.25
40  2.774178  7.447185  12.74202837   36   20     0.23
41  2.867511  0.889802  1.764284504   56   15     1
42  3.143092  0.123768  0.320108901   5    11     0.33
43  3.162392  0.201396  0.538832773   56   21     0.4
44  2.054598  0.799455  -4.55270147   15   12     0.5
45  3.108503  0.44564   -6.16328308   90   13     0.1
46  2.633226  0.272711  6.60581503    90   14     0.5
47  2.47726   0.150642  6.444594868   36   15     0.8
48  2.887051  1.918407  3.828884296   14   16     1.5
49  2.370292  0.75086   -0.03430596   14   8      2
50  2.802692  0.75086   3.07357262    14   6      2
51  2.830845  0.75086   3.52403273    14   5      2
52  1.625607  0.068069  -3.02022138   24   4      1.5
53  1.647749  0.169617  -1.99379246   24   3      1.5
54  2.855726  0.493801  -4.70154363   84   10     2
55  3.073577  0.75086   -6.45093819   36   2      1.5
56  2.516488  1.523075  5.770408932   24   3      1.5
57  2.503017  0.336658  -5.22058709   4    15     1
58  2.515186  0.885383  8.476452598   4    4      1.5
59  2.581392  0.141228  5.945917922   24   10     1.5
60  2.804629  0.082512  -3.63438734   7    17.5   1.5
61  2.831559  0.082512  -3.01410765   36   11.34  1.5
62  2.935229  0.082512  5.530374864   84   11     1.5
63  2.551392  0.082512  -0.04525887   3    12     1.5
64  2.944589  0.082512  9.559315003   72   14     1.5
65  3.145347  0.082512  10.42954088   12   15     1.5
66  4.196281  1.683867  9.71768063    60   11     1
67  4.112035  0.660097  10.7369913    24   12     1
68  4.090387  0.421637  -0.59326216   72   13     12
69  2.403577  2.988672  10.65900268   6    9      1.2
70  2.703371  0.386157  2.721150814   10   8      1.2
71  2.461197  2.880581  8.251326888   10   7      1.2
72  2.928787  1.489131  3.99353016    5    6      1

Solution

Please use Excel or EViews 6 to obtain the following correlation matrix.

                      Risk free (x1)  Size (x2)   Market return (x3)  Age (x4)  Hurdle rate (x5)  Mgt performance (x6)
Risk free (x1)        1
Size (x2)             0.09821986      1
Market return (x3)    0.1686035       0.29233455  1
Age (x4)              0.38801428      0.03076871  -0.034756211        1
Hurdle rate (x5)      0.3403579       0.0465486   0.056071459         0.192384  1
Mgt performance (x6)  0.24910849      -0.1095105  -0.039927531        0.144637  -0.007158494      1
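In Python, the same matrix can be obtained with pandas. A minimal sketch follows; only the first three observations of each column are shown, so paste the full 72 rows from the table above before drawing conclusions.

import pandas as pd

data = pd.DataFrame({
    "risk_free":     [2.755836, 2.725726, 3.092531],            # x1, first rows only
    "size":          [0.823441, 4.458782, 4.232887],            # x2
    "market_return": [5.397865001, 6.857003219, 7.989673776],   # x3
})
print(data.corr(method="pearson").round(4))  # Pearson correlation matrix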

Exercise

Please run the multiple regression equation. Check whether the standard errors and the t-ratios have high or low values. Then, compute the variance inflation factors (VIF) based on the following equation:

VIFi = 1 / (1 − Ri²)

Observations  ShareReturns (y)  Risk free (x1)  Size (x2)  Market return (x3)  Age (x4)  Hurdle rate (x5)  Mgt performance (x6)
1   3.526787   2.755836  0.823441  5.397865001   40   10.34  1
2   -4.34533   2.725726  4.458782  6.857003219   25   5.677  1.5
3   5.222709   3.092531  4.232887  7.989673776   86   10.35  0.67
4   -4.99619   3.089416  2.137064  3.379482658   36   2.345  1
5   -3.04336   2.802173  1.658494  -0.45807375   50   3.13   1.75
6   -2.375422  2.409178  0.20628   7.497715958   13   4.31   1.75
7   2.651303   2.220845  0.028879  -0.92491506   14   10.45  1.5
8   -0.68924   2.522834  0.774512  -8.40360195   57   0      1.5
9   0.205664   2.669247  0.259913  1.42834889    72   2.144  2.25
10  2.4783     2.969892  0.325758  14.91365445   71   5.678  2.45
11  0.237407   1.994892  0.008251  7.948571731   3    8.99   2.31
12  0.329728   1.861559  0.008251  6.81028783    3    2.76   2.31
13  -0.26869   2.469416  0.929169  5.783098584   10   15     1.67
14  0.064769   2.503892  0.633557  -0.67634733   2    1.23   1.23
15  -0.5873    2.265607  0.711829  6.876247514   10   1.34   1.6
16  0.329225   2.606003  0.008251  6.928020847   6    2.34   1.3
17  -0.11849   2.639092  2.19223   -2.23667513   48   3.456  1.67
18  0.011541   2.572749  1.91165   8.918700935   17   4.311  1.32
19  -0.18757   2.540823  0.008251  -1.6115003    32   2.67   0.32
20  -0.38752   2.259086  0.008251  2.446540443   27   3.21   0.48
21  -0.26835   2.608419  0.148522  -1.40503816   24   7.89   0.13
22  0.262798   2.558915  0.191428  7.779562137   22   2.134  0.45
23  0.355054   3.042134  0.145221  5.438933357   24   1.34   0.67
24  -1.34302   2.168168  0.167979  6.333734893   18   5      1.5
25  -0.77964   3.415726  0.609765  -0.93701863   12   10     1.23
26  -0.04649   2.244892  0.864844  0.084759516   24   1.33   2.34
27  0.098381   3.257392  0.072066  11.41224438   100  15     2.22
28  -0.09585   2.592576  0.960441  -0.28480562   46   11     1.5
29  -0.0059    2.533267  0.008251  0.89586715    48   14     1.5
30  -0.05415   2.678596  0.008251  3.650721789   48   12     1.5
31  -0.00384   2.595078  0.243411  0.080447353   24   13     1.5
32  -0.00799   3.029892  0.115517  7.968449474   4    13     1.5
33  -0.0538    2.502834  0.406785  -3.33913367   24   10     1.5
34  -0.00541   2.801892  0.187146  1.174795116   12   15     1.5
35  -0.0178    2.985979  0.553095  -5.23778486   36   10     1.5
36  -0.00335   2.873892  0.452711  2.603855611   2    12     1.5
37  -0.03195   2.738983  0.258661  6.248583233   24   15     1
38  -0.05187   2.188128  0.742609  -7.40827572   10   10     1.5
39  -0.00569   2.984753  0.142399  -5.46280949   84   12     1.25
40  -0.0009    2.774178  7.447185  12.74202837   36   20     0.23
41  -0.15083   2.867511  0.889802  1.764284504   56   15     1
42  -0.00748   3.143092  0.123768  0.320108901   5    11     0.33
43  -0.78198   3.162392  0.201396  0.538832773   56   21     0.4
44  -0.01982   2.054598  0.799455  -4.55270147   15   12     0.5
45  -0.04005   3.108503  0.44564   -6.16328308   90   13     0.1
46  -0.002     2.633226  0.272711  6.60581503    90   14     0.5
47  -0.00483   2.47726   0.150642  6.444594868   36   15     0.8
48  -0.05627   2.887051  1.918407  3.828884296   14   16     1.5
49  -0.01541   2.370292  0.75086   -0.03430596   14   8      2
50  -0.00233   2.802692  0.75086   3.07357262    14   6      2
51  -0.00399   2.830845  0.75086   3.52403273    14   5      2
52  -0.00422   1.625607  0.068069  -3.02022138   24   4      1.5
53  -0.00349   1.647749  0.169617  -1.99379246   24   3      1.5
54  -0.0038    2.855726  0.493801  -4.70154363   84   10     2
55  -7.53E-05  3.073577  0.75086   -6.45093819   36   2      1.5
56  -0.00448   2.516488  1.523075  5.770408932   24   3      1.5
57  -0.01442   2.503017  0.336658  -5.22058709   4    15     1
58  -0.04197   2.515186  0.885383  8.476452598   4    4      1.5
59  -0.05222   2.581392  0.141228  5.945917922   24   10     1.5
60  -0.01357   2.804629  0.082512  -3.63438734   7    17.5   1.5
61  -0.01134   2.831559  0.082512  -3.01410765   36   11.34  1.5
62  -0.00305   2.935229  0.082512  5.530374864   84   11     1.5
63  -0.0007    2.551392  0.082512  -0.04525887   3    12     1.5
64  -0.01616   2.944589  0.082512  9.559315003   72   14     1.5
65  -0.00178   3.145347  0.082512  10.42954088   12   15     1.5
66  -0.0375    4.196281  1.683867  9.71768063    60   11     1
67  -0.04155   4.112035  0.660097  10.7369913    24   12     1
68  -0.23705   4.090387  0.421637  -0.59326216   72   13     12
69  -9.67E-17  2.403577  2.988672  10.65900268   6    9      1.2
70  -0.0583    2.703371  0.386157  2.721150814   10   8      1.2
71  -0.04783   2.461197  2.880581  8.251326888   10   7      1.2
72  -0.04177   2.928787  1.489131  3.99353016    5    6      1

Exercise

Consider the following correlation matrix.

      y     x1    x2
y     1.00  0.12  0.93
x1    0.29  1.00  0.38
x2    0.30  0.91  1.00

Please analyze the correlation matrix. Do you consider any variable highly collinear? A multicollinearity problem arises when there is high correlation between two or more independent variables.


Exercise

Please consider the following regression equation with their standard errors and t-statistics in parentheses.

y = 0.57x1 + 0.32x2
SE:           (4.34)  (5.54)
t-statistics: (0.13)  (0.06)    R2 = 0.92

There is a high degree of correlation between the variables. By checking the standard errors and the t-statistics, could you detect signs of multicollinearity?


Exercise

Please consider the following data obtained from regressing y on x1, x2 and x3.

Variable   Coefficient  Standard errors  t-statistics
x1         0.023        4.156            0.00553
x2         0.312        3.245            0.096
x3         0.145        2.214            0.065
Constant   -16.00       5.567            -2.87

n = 20   R2 = 0.95   F-statistic = 200

It is required to detect whether there is a multicollinearity problem by checking the R2, the F-statistic, the standard errors and the t-statistics.


Exercise

Please consider the following dataset.

observations ShareReturns(y) Risk free (x1) Size (x2) Market return (x3)

1   3.526787   2.755836  0.823441  5.397865001
2   -4.34533   2.725726  4.458782  6.857003219
3   5.222709   3.092531  4.232887  7.989673776
4   -4.99619   3.089416  2.137064  3.379482658
5   -3.04336   2.802173  1.658494  -0.45807375
6   -2.375422  2.409178  0.20628   7.497715958
7   2.651303   2.220845  0.028879  -0.92491506
8   -0.68924   2.522834  0.774512  -8.40360195
9   0.205664   2.669247  0.259913  1.42834889
10  2.4783     2.969892  0.325758  14.91365445
11  0.237407   1.994892  0.008251  7.948571731
12  0.329728   1.861559  0.008251  6.81028783
13  -0.26869   2.469416  0.929169  5.783098584
14  0.064769   2.503892  0.633557  -0.67634733
15  -0.5873    2.265607  0.711829  6.876247514
16  0.329225   2.606003  0.008251  6.928020847
17  -0.11849   2.639092  2.19223   -2.23667513
18  0.011541   2.572749  1.91165   8.918700935
19  -0.18757   2.540823  0.008251  -1.6115003
20  -0.38752   2.259086  0.008251  2.446540443
21  -0.26835   2.608419  0.148522  -1.40503816
22  0.262798   2.558915  0.191428  7.779562137
23  0.355054   3.042134  0.145221  5.438933357
24  -1.34302   2.168168  0.167979  6.333734893
25  -0.77964   3.415726  0.609765  -0.93701863
26  -0.04649   2.244892  0.864844  0.084759516
27  0.098381   3.257392  0.072066  11.41224438
28  -0.09585   2.592576  0.960441  -0.28480562
29  -0.0059    2.533267  0.008251  0.89586715
30  -0.05415   2.678596  0.008251  3.650721789
31  -0.00384   2.595078  0.243411  0.080447353
32  -0.00799   3.029892  0.115517  7.968449474
33  -0.0538    2.502834  0.406785  -3.33913367
34  -0.00541   2.801892  0.187146  1.174795116
35  -0.0178    2.985979  0.553095  -5.23778486
36  -0.00335   2.873892  0.452711  2.603855611
37  -0.03195   2.738983  0.258661  6.248583233
38  -0.05187   2.188128  0.742609  -7.40827572
39  -0.00569   2.984753  0.142399  -5.46280949
40  -0.0009    2.774178  7.447185  12.74202837
41  -0.15083   2.867511  0.889802  1.764284504
42  -0.00748   3.143092  0.123768  0.320108901
43  -0.78198   3.162392  0.201396  0.538832773
44  -0.01982   2.054598  0.799455  -4.55270147
45  -0.04005   3.108503  0.44564   -6.16328308
46  -0.002     2.633226  0.272711  6.6058150
47  -0.00483   2.47726   0.150642  6.444594868
48  -0.05627   2.887051  1.918407  3.828884296
49  -0.01541   2.370292  0.75086   -0.03430596
50  -0.00233   2.802692  0.75086   3.07357262
51  -0.00399   2.830845  0.75086   3.52403273
52  -0.00422   1.625607  0.068069  -3.02022138
53  -0.00349   1.647749  0.169617  -1.99379246
54  -0.0038    2.855726  0.493801  -4.70154363
55  -7.53E-05  3.073577  0.75086   -6.45093819
56  -0.00448   2.516488  1.523075  5.770408932
57  -0.01442   2.503017  0.336658  -5.22058709
58  -0.04197   2.515186  0.885383  8.476452598
59  -0.05222   2.581392  0.141228  5.945917922
60  -0.01357   2.804629  0.082512  -3.63438734
61  -0.01134   2.831559  0.082512  -3.01410765
62  -0.00305   2.935229  0.082512  5.530374864
63  -0.0007    2.551392  0.082512  -0.04525887
64  -0.01616   2.944589  0.082512  9.559315003
65  -0.00178   3.145347  0.082512  10.42954088
66  -0.0375    4.196281  1.683867  9.71768063
67  -0.04155   4.112035  0.660097  10.7369913
68  -0.23705   4.090387  0.421637  -0.59326216
69  -9.67E-17  2.403577  2.988672  10.65900268
70  -0.0583    2.703371  0.386157  2.721150814
71  -0.04783   2.461197  2.880581  8.251326888
72  -0.04177   2.928787  1.489131  3.99353016

Please consider the following data obtained from regressing y on x1, x2 and x3.

Variable   Coefficient  Standard errors  t-statistics
Constant   0.06         0.92             0.065
x1         -0.05        0.34             -0.147
x2         -0.12        0.13             -0.92
x3         0.03         0.03             1


n = 72   R2 = 0.02   F-statistic = 0.53

It is required to detect whether there is a multicollinearity problem by checking the R2, the F-statistic, the standard errors and the t-statistics. In addition, check the residual plot for any unusual pattern in your data in order to remove or add data.

Exercise of dropping variables

Please consider the following dataset and the correlation matrix.

Fund returns   Size   Market returns   Book-to-market returns
8.8262274 9.960064 6.770160683 2.9656252914.3672068 5.355746 2.101215141 3.414592015

-13.744606 -0.83212 -2.256078026 -13.4427797513.6533375 0.455159 3.415008092 23.48955149-12.953695 2.587989 -0.948914314 -12.8014539-12.422064 -5.26719 -1.376494263 -12.72458829-12.739913 -2.81245 -2.109216247 -13.03323001-19.315255 0.08267 -7.732750354 -15.6827118-14.946826 1.892262 -15.8749113 -15.06487572-16.339207 -5.17364 -5.148485371 -14.25398135-22.642978 1.008631 0.24888198 -2.937226066-13.606672 -1.95412 -3.445206217 -13.22732476-10.952726 0.025582 -11.00011018 -14.52055318-9.4008243 -1.79843 0.244664806 -0.61935590311.8695715 -2.02135 3.38517113 10.24154317-3.8978891 0.929765 -0.924758847 -9.07628458

-11.29111 1.537612 -8.775426604 -13.60504611-16.661287 -1.18146 -7.103029456 -8.763327169

-14.44891 0.796062 2.350319644 -10.90464269-16.364596 0.908112 2.623642306 -10.9229353-10.670823 0.109951 -1.092979559 -0.969071134-16.306824 -1.66881 -0.566832617 -11.79717279-4.7014841 -3.97912 -5.525479768 -12.18943902-17.227822 -0.54654 -6.09870943 -18.5726555-16.819389 1.794597 -7.128801365 -10.594762749.79868093 2.22648 1.900347699 5.241326231-16.407574 2.90064 -4.765574598 -10.10091956-16.800822 4.582849 2.256769855 -11.15196914-10.780188 1.950273 -3.014224796 -10.15023338-13.281923 -1.35779 -5.225298329 -11.3487645

-15.90212 -5.19986 -4.372702903 -18.966362-12.735701 -0.28909 -3.009988946 -19.14028734-11.127754 -2.51951 -5.27213103 -11.35464814

-16.97936 1.359845 -9.017072966 -19.49957599-10.13201 -2.71113 2.694147467 -10.50217486-15.78303 -0.52075 -4.799965131 -15.89689017

-16.070046 2.410862 -7.305311366 -15.3709710923.9325585 0.709116 2.718674687 14.14992614

-15.49828 -1.54017 -6.271749703 -11.148277


-11.026655 -4.29409 2.547264128 -10.06320304-11.29506 -3.60993 3.746687032 -11.43183106

-25.210391 -7.10815 -1.127785203 -22.82488946-14.169418 -6.16405 0.629884119 -13.6499725-12.417722 4.466414 -7.612368855 -10.90432259-15.836023 -5.06333 -6.631707976 -19.32730889-14.341472 8.042842 -10.15385517 -17.55733615-16.013067 -2.31934 -6.779492991 -10.4645982-12.351678 -2.94247 5.807435413 -10.36236795-16.051034 -6.47283 -5.191925454 -15.52020525-15.507248 -0.0779 4.307946209 -14.1095313625.5680413 3.477585 2.141515594 21.43590757-24.050537 0.95435 3.458736009 -19.12759498-12.618249 7.26415 -2.328445821 -11.92522035-22.663026 -7.75868 0.405165127 -20.37703302-12.587351 -4.16073 -3.001386449 -16.46059687

-18.49934 -5.2051 -13.01239192 -11.75662186-21.0231 0.476781 -2.350246928 -20.9983765

-6.9925595 -10.2425 -17.09413351 -8.549554357-7.9491123 -0.11357 -9.429005018 -8.456761373-10.996422 -4.59335 -14.96744007 -13.64316679-22.552878 1.871977 -8.705122947 -19.86020083-18.469067 5.660192 5.107106143 -24.057125168.94789729 1.23615 11.5829977 16.2650910321.1087905 3.339655 3.200897507 14.58491199-12.478781 3.986295 -4.792938648 -15.7073619713.2626208 0.448869 12.2606341 16.14001536-8.9434078 5.521951 -7.683346632 -7.893876608-16.884126 4.200894 6.296174259 -14.77477941-19.072579 -0.40545 -14.81734321 -11.2947414214.5084942 -7.34258 7.062232477 26.6638861413.4128866 2.428614 -0.922061916 13.8337761926.5926627 3.27669 15.66395515 19.21851198-19.770788 12.44936 -19.02650795 -10.44646497-22.025804 5.658625 -9.789377382 -13.02528807-13.127026 -2.89127 17.84848758 -18.35433058

-13.94609 -2.56212 -2.867582535 -11.48090296-15.411772 -1.62255 4.237393994 -5.27760335815.3874982 5.286432 2.683161051 10.6522883-11.306411 2.040671 -4.275937147 -10.7888977615.4461602 -0.13068 6.778060323 13.8809627-19.068238 1.879668 -6.645082878 -16.81811859-24.036225 -5.39266 2.160831737 -25.36898249-16.293175 1.90084 -5.304864768 -16.4588215712.0015709 -2.21752 -1.30755068 10.91914469-24.294161 3.951971 3.005900556 -11.65017838-4.8255104 1.283857 -5.165205523 20.04675084-10.309238 -3.81126 -4.275694375 -13.43372624-10.333768 0.268692 13.31939629 -10.739021275.35216783 3.495569 2.454867193 2.780105608

-19.73225 -4.4211 -17.02418316 -15.96706557


-18.973101 -2.86788 -9.983114132 -11.67714282-11.301331 2.530577 -0.088339962 -21.63333362-14.934746 -10.2433 -7.379219909 -0.114463999-20.255129 1.787003 -16.96223063 -21.31924314-11.483275 10.58094 -10.26966727 -14.05361898

-17.98376 -1.11783 -4.146557016 -13.0952221410.0422963 -1.77422 2.555558476 20.94857359-13.732612 -2.45459 0.106627672 -13.30444263-14.836437 1.177549 1.868378125 -12.3008136910.1665141 3.920495 -2.201628006 14.16366117-15.167811 0.998025 -5.176270858 -18.21846229-10.324338 -2.3444 -9.114406255 -17.52563013-10.176053 2.571829 -10.6757764 -18.07453099-14.421339 -4.67418 0.468311538 -12.32480659-19.258996 -2.60953 -7.92730825 -13.34256959

-19.05183 -5.61631 5.552752453 -12.60051889-13.761018 4.65114 2.817569926 -12.8717217

I have added a summary of the correlation matrix.

                        Fund returns  Size      Market returns  Book-to-market returns
Fund returns            1
Size                    0.17228037    1
Market returns          0.433323068   0.011607  1
Book-to-market returns  0.884104922   0.119942  0.408601021     1

It is required to detect any sign of multicollinearity among the independent variables. If there is such a sign, then drop the collinear independent variables and run the correlation matrix again. For example, check the correlation between market returns and book-to-market returns. The dependent variable is fund returns and the independent variables are size, market returns and book-to-market returns.


Exercise

Please consider the following dataset that shows the price, denominated in pounds, in relation to monthly earnings, the UK 3-month interest rate denominated in percentage, and monthly disposable income denominated in hundreds of pounds.

Price  Earnings  Interest rate  Disposable income
40     1000      10             900
38     980       9.8            800
35     960       9.7            700
33     950       9.5            400
30     940       9.3            300
29     930       9.0            200
27     810       8.45           100
26     805       8.10           289
25     800       7.45           257
24     784       6.89           179
20     740       5.10           170
19     730       5.98           140
18     720       4.12           130
17     710       3.23           120
15     690       2.15           119
14     670       2.89           100
12     500       1.34           110
11     400       1.23           99
10     300       1.98           98

(a) Test for multicollinearity by using the correlation matrix, the variance inflation factor, and by regressing each independent variable on the remaining independent variables. Check whether the R2 and the F-value of each regression are significant. Determine whether a collinearity problem exists.

(b) Could you think of other variables that could affect price in case you drop one of the mentioned variables?

(c) Check, at the 5% significance level, the coefficients of the independent variables, their t-statistics, their standard errors and their p-values.

(d) Run separate regressions of the dependent variable price, y, on earnings, the interest rate, and disposable income.

(e) Transform your variables by using the ln function. In other words, check the natural logarithmic returns. Could you see any difference?

Thanks for your time and for your participation.


Errors in variables


This refers to measurement errors in the variables of the regression; for example, the data set is not consistent and there are gaps in the measurement history. Measurement errors in the independent variables lead to biased coefficients, while measurement errors in the dependent variable leave the coefficients unbiased but inflate the estimated variances, because the measurement error is added to the residual term. One solution to this problem is to replace the mismeasured independent variable with another one that is correlated with it but uncorrelated with the error term. Another solution is to use proxy variables that are consistent with the economic or financial theory behind the hypotheses being tested; the proxies should be uncorrelated with the error term. The residual plot will reveal heteroskedasticity and autocorrelation patterns in the data. Use the regression equation specification error test (RESET) to test for model misspecification. Check the R2 and the F-test of the initial model and the estimated one to find out whether there is any prevalent relationship between the residuals and the estimated dependent variable.

Example

The following data are given for price, quantity demanded and income of 10 UK manufactured products.

Price  Quantity demanded  Income
2.34   200                50,000
3.45   180                40,000
4.56   170                41,000
5.12   140                39,000
6.89   160                37,000
7.14   150                35,000
8.67   130                25,000
9.54   120                22,000
10.11  110                16,000

(a) Calculate the price and income elasticity of demand for a change in price from 4.56 pounds to 8.67 pounds. The mathematical formulas are as follows:

Price elasticity of demand formula.

epD = |%ΔQ / %ΔP| = [(Q1 − Q0) / ((Q0 + Q1)/2)] / [(P1 − P0) / ((P0 + P1)/2)]

Income elasticity of demand formula.


eY = |%ΔQ / %ΔY| = [(Q1 − Q0) / ((Q0 + Q1)/2)] / [(Y1 − Y0) / ((Y0 + Y1)/2)]
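A small Python sketch of the midpoint (arc) formulas above; the numbers follow the price change in part (a), with quantities and income read from the corresponding rows of the table.

def arc_elasticity(q0, q1, z0, z1):
    # |%change in Q| / |%change in Z|, with midpoint bases;
    # Z is price for the price elasticity and income for the income elasticity.
    pct_q = (q1 - q0) / ((q0 + q1) / 2)
    pct_z = (z1 - z0) / ((z0 + z1) / 2)
    return abs(pct_q / pct_z)

print(arc_elasticity(170, 130, 4.56, 8.67))      # price elasticity, about 0.43
print(arc_elasticity(170, 130, 41000, 25000))    # income elasticity, about 0.55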

(b) Is the demand elastic, inelastic or unit elastic? If the numerical value is less than 1, then it is inelastic.

(c) Comment on the income elasticity in terms of sign and product specification. For example, is it an inferior product?

(d) Do you spot any error in the measurement of the above variables? Run a regression equation using price as the dependent variable and income as the independent variable.

Vector autoregressions, cointegration and causality in terms of model selection and specification


The unit root test is very important, as it determines the type of analysis that you will conduct. It is the guideline used to decide whether you should run a regression in levels, a cointegration analysis if the variables are cointegrated, or include an error correction model.

For example, let's assume that we have the time series of two variables, both of which have a unit root. A solution to this problem is to take first differences of both the dependent and the independent variable, using the following test equations.

Δyt = a + b·yt−1

Δxt = a + b·xt−1

If we cannot reject the null hypothesis of a unit root, then we difference the time series once again. The mathematical equations will be as follows:

ΔΔyt = a + b·Δyt−1

ΔΔxt = a + b·Δxt−1

The regression of the dependent on the independent variable is as follows:

yt = a + b·xt + εt

If both the dependent and the independent variable have a unit root, in other words they are not stationary, then we difference them one more time until they become stationary. If both variables become stationary in their first difference, I(1), then we run a cointegration and an error correction model. If the error term εt (the residuals) does not have a unit root after running the regression, then the variables x and y are cointegrated. The regression of the dependent on the independent variable is as follows:

yt = a + b·xt + εt

In Excel, we tick the box of the residuals to get the time series of the error term or the residuals of the regression. The mathematical equation of the residuals of a long – run relationship in an error correction model is as follows:

εt = yt − a − b·xt

Then, we apply the unit root test on the error term εt. The mathematical formula is as follows:

Δεt = a + b·εt−1


Once both variables are cointegrated, the mathematical formula of the error correction model will be as follows:

Δyt = a + b·Δxt − c·εt−1

Finally, we test for causality to see whether x Granger-causes y or y Granger-causes x. Granger causality in an error correction model is tested by using the following equations:

Δyt = a + b·Δyt−1 + c·Δxt−1 + d·εt−1

and

Δxt = a + b·Δxt−1 + c·Δyt−1 + d·εt−1

Let’s take as an example the revenues and expenses of a small shop to start to understand the application of the above equations.

Revenues (yt), the dependent variable  Expenses (xt), the independent variable
321  152
312  140
300  162
301  164
320  174
330  190
340  195
350  199
352  210
364  235
372  231
400  242
351  256
381  289
401  231
407  269
421  302
467  308
442  312
444  328

The first step is to take first differences of both the dependent and the independent variable, using the following equations, to find out whether the time series are stationary.


Δyt = a + b·yt−1

Δy    yt-1
-9    321
-12   312
1     300
19    301
10    320
10    330
10    340
2     350
12    352
8     364
28    372
-49   400
30    351
20    381
6     401
14    407
46    421
-25   467
2     442

SUMMARY OUTPUT


Regression Statistics
Multiple R          0.117335
R Square            0.013768
Adjusted R Square   -0.04425
Standard Error      21.22406
Observations        19

ANOVA
            df  SS        MS        F         Significance F
Regression  1   106.9019  106.9019  0.237317  0.632373
Residual    17  7657.835  450.4609
Total       18  7764.737

           Coefficients  Standard Error  t Stat    P-value   Lower 95%  Upper 95%
Intercept  24.97065      38.28055        0.652306  0.522927  -55.7944   105.7357
yt-1       -0.0507       0.104071        -0.48715  0.632373  -0.27027   0.168873

Δyt = 24.97 − 0.05·yt−1
t-statistic (−0.49)

Then, we compare the t-statistic with the ADF critical values. In our case t = −0.49 > −3.33. The sample evidence suggests that we cannot reject the null hypothesis. There is a unit root: the dependent variable is not stationary.
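Outside Excel, the same test is one call in statsmodels. A minimal sketch, assuming the 20 revenue observations above are stored in a list:

from statsmodels.tsa.stattools import adfuller

revenues = [321, 312, 300, 301, 320, 330, 340, 350, 352, 364,
            372, 400, 351, 381, 401, 407, 421, 467, 442, 444]

# ADF regression with a constant and no extra lagged differences, as in the text.
stat, pvalue, usedlag, nobs, crit, _ = adfuller(revenues, maxlag=0, regression="c")
print(round(stat, 2), round(pvalue, 3))
print({k: round(v, 2) for k, v in crit.items()})
# A t-statistic above the critical value means the unit root cannot be rejected.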


Δxt = a + b·xt−1

Δx    xt-1
-12   152
22    140
2     162
10    164
16    174
5     190
4     195
11    199
25    210
-4    235
11    231
14    242
33    256
-58   289
38    231
33    269
6     302
4     308
16    312

SUMMARY OUTPUT

Regression Statistics
Multiple R          0.094537
R Square            0.008937
Adjusted R Square   -0.04936
Standard Error      21.28549
Observations        19

ANOVA
            df  SS        MS        F         Significance F
Regression  1   69.45742  69.45742  0.153303  0.700263
Residual    17  7702.227  453.0722
Total       18  7771.684

           Coefficients  Standard Error  t Stat    P-value   Lower 95%  Upper 95%
Intercept  17.32851      21.16997        0.818542  0.424374  -27.3363   61.99331
xt-1       -0.03596      0.091852        -0.39154  0.700263  -0.22976   0.157828

Δxt = 17.33 − 0.04·xt−1
t-statistic (−0.39)


Then, we compare the t-statistic with the ADF critical values. In our case t = −0.39 > −3.33. The sample evidence suggests that we cannot reject the null hypothesis. There is a unit root: the independent variable is not stationary.

We difference the time series once again. The mathematical equations will be as follows:

ΔΔyt = a + b·Δyt−1

ΔΔy   Δyt-1
-3    -9
13    -12
18    1
-9    19
0     10
0     10
-8    10
10    2
-4    12
20    8
-77   28
79    -49
-10   30
-14   20
8     6
32    14
-71   46
27    -25


SUMMARY OUTPUT

Regression Statistics
Multiple R          0.822337
R Square            0.676239
Adjusted R Square   0.656004
Standard Error      20.35731
Observations        18

ANOVA
            df  SS        MS        F         Significance F
Regression  1   13849.56  13849.56  33.41913  2.81E-05
Residual    16  6630.721  414.42
Total       17  20480.28

           Coefficients  Standard Error  t Stat    P-value   Lower 95%  Upper 95%
Intercept  9.601098      5.043977        1.903478  0.075126  -1.09165   20.29385
Δy t-1     -1.33735      0.231339        -5.78093  2.81E-05  -1.82777   -0.84694

ΔΔyt = 9.60 − 1.34·Δyt−1
t-statistic (−5.78)

Then, we compare the t-statistic with the ADF critical values. In our case t = −5.78 < −3.33. The sample evidence suggests that we can reject the null hypothesis. The dependent variable yt becomes stationary at the first difference. Please notice the change in R2 when the series is stationary compared with when it was not.


ΔΔxt = a + b·Δxt−1

ΔΔx   Δxt-1
34    -12
-20   22
8     2
6     10
-11   16
-1    5
7     4
14    11
-29   25
15    -4
3     11
19    14
-91   33
96    -58
-5    38
-27   33
-2    6
12    4

SUMMARY OUTPUT

Regression Statistics
Multiple R          0.856824
R Square            0.734147
Adjusted R Square   0.717531
Standard Error      19.0385
Observations        18

ANOVA
            df  SS        MS        F         Significance F
Regression  1   16015.01  16015.01  44.18369  5.61E-06
Residual    16  5799.43   362.4644
Total       17  21814.44

           Coefficients  Standard Error  t Stat    P-value   Lower 95%  Upper 95%
Intercept  14.35515      4.883117        2.939752  0.009613  4.003408   24.70689
Δx t-1     -1.43995      0.21663         -6.64708  5.61E-06  -1.89919   -0.98072


ΔΔxt = 14.36 − 1.44·Δxt−1
t-statistic (−6.65)

Then, we compare the t-statistic with the ADF critical values. In our case t = −6.65 < −3.33. The sample evidence suggests that we can reject the null hypothesis of a unit root: the independent variable xt becomes stationary at the first difference. Both variables are therefore I(1). Please notice the change in R2 when the series is stationary compared with when it was not.


The regression of the dependent on the independent variable is as follows:

yt = a + b·xt + εt

SUMMARY OUTPUT

Regression Statistics
Multiple R          0.927423
R Square            0.860113
Adjusted R Square   0.852341
Standard Error      19.22193
Observations        20

ANOVA
            df  SS        MS        F         Significance F
Regression  1   40892.51  40892.51  110.6751  4.07E-09
Residual    18  6650.686  369.4825
Total       19  47543.2

              Coefficients  Standard Error  t Stat    P-value   Lower 95%  Upper 95%
Intercept     185.2836      17.96587        10.31309  5.55E-09  147.5387   223.0286
Expenses (x)  0.79981       0.076026        10.52022  4.07E-09  0.640085   0.959534

RESIDUAL OUTPUT

Observation  Predicted Revenues (y)  Residuals
1            306.8547                14.14527
2            297.257                 14.74298
3            314.8528                -14.8528
4            316.4525                -15.4525
5            324.4505                -4.45055
6            337.2475                -7.2475
7            341.2466                -1.24655
8            344.4458                5.554208
9            353.2437                -1.2437
10           373.2389                -9.23894
11           370.0397                1.960295
12           378.8376                21.16239
13           390.0349                -39.0349
14           416.4287                -35.4287
15           370.0397                30.96029
16           400.4325                6.567523
17           426.8262                -5.8262
18           431.6251                35.37494
19           434.8243                7.175703
20           447.6213                -3.62125

In Excel, we tick the box of the residuals to get the time series of the error term or the residuals of the regression. The mathematical equation of the residuals of a long – run relationship in an error correction model is as follows:

εt = yt − a − b·xt

εt = yt − 185.28 − 0.7998·xt


Then, we apply the unit root test on the error term εt. The mathematical formula is as follows:

Δεt = a + b·εt−1

Δε        εt-1
0.597717  14.14527
-29.5958  14.74298
-0.59962  -14.8528
11.0019   -15.4525
-2.79696  -4.45055
6.000951  -7.2475
6.800761  -1.24655
-6.79791  5.554208
-7.99524  -1.2437
11.19924  -9.23894
19.20209  1.960295
-60.1973  21.16239
3.606277  -39.0349
66.38897  -35.4287
-24.3928  30.96029
-12.3937  6.567523
41.20114  -5.8262
-28.1992  35.37494
-10.797   7.175703

SUMMARY OUTPUT


Regression Statistics
Multiple R          0.712424
R Square            0.507548
Adjusted R Square   0.478581
Standard Error      19.46349
Observations        19

ANOVA
            df  SS        MS        F         Significance F
Regression  1   6637.492  6637.492  17.52115  0.00062
Residual    17  6440.066  378.8274
Total       18  13077.56

           Coefficients  Standard Error  t Stat    P-value   Lower 95%  Upper 95%
Intercept  -0.74448      4.465463        -0.16672  0.869558  -10.1658   8.676837
ε t-1      -1.00005      0.238912        -4.18583  0.00062   -1.50411   -0.49598

Δεt = a + b·εt−1

Δεt = −0.74 − 1.00·εt−1
t-statistic (−4.19)

We reject the null hypothesis of the existence of a unit root in the error term (the residuals) at the 95% confidence level, or 5% significance level. We compare the t-statistic with the ADF critical values. In our case t = −4.19 < −3.33. The sample evidence suggests that we can reject the null hypothesis. Therefore, the variables y and x are cointegrated.
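The whole two-step (Engle-Granger) procedure can also be scripted. A minimal sketch, reusing the revenue and expense series from the shop example:

import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

revenues = np.array([321, 312, 300, 301, 320, 330, 340, 350, 352, 364,
                     372, 400, 351, 381, 401, 407, 421, 467, 442, 444])
expenses = np.array([152, 140, 162, 164, 174, 190, 195, 199, 210, 235,
                     231, 242, 256, 289, 231, 269, 302, 308, 312, 328])

# Step 1: long-run regression in levels, y_t = a + b*x_t + e_t.
levels = sm.OLS(revenues, sm.add_constant(expenses)).fit()
residuals = levels.resid

# Step 2: unit root test on the residuals.
stat = adfuller(residuals, maxlag=0, regression="c")[0]
print(levels.params)   # a about 185.28, b about 0.7998
print(round(stat, 2))  # note: Engle-Granger critical values are stricter than the plain ADF table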


The error correction model will be as follows:

Δyt = a + b·Δxt − c·εt−1

Δy    Δx    εt-1
-9    -12   14.14527
-12   22    14.74298
1     2     -14.8528
19    10    -15.4525
10    16    -4.45055
10    5     -7.2475
10    4     -1.24655
2     11    5.554208
12    25    -1.2437
8     -4    -9.23894
28    11    1.960295
-49   14    21.16239
30    33    -39.0349
20    -58   -35.4287
6     38    30.96029
14    33    6.567523
46    6     -5.8262
-25   4     35.37494
2     16    7.175703

SUMMARY OUTPUT

Regression Statistics
Multiple R          0.67179
R Square            0.451302
Adjusted R Square   0.382715
Standard Error      16.31811
Observations        19

ANOVA
            df  SS        MS        F         Significance F
Regression  2   3504.244  1752.122  6.579978  0.008216
Residual    16  4260.493  266.2808
Total       18  7764.737

           Coefficients  Standard Error  t Stat    P-value   Lower 95%  Upper 95%
Intercept  4.472882      4.164353        1.074088  0.298715  -4.35515   13.30091
Δxt        0.232012      0.198462        1.169049  0.259507  -0.18871   0.652733
ε t-1      -0.77843      0.21476         -3.62463  0.002278  -1.2337    -0.32316

Δyt = 4.47 + 0.23·Δxt − 0.78·εt−1
t-statistics (1.17) (−3.62)    R2 = 0.45

The error correction model is used to measure the short-run disequilibrium of the revenues in relation to the expenses around their long-run relationship. We are estimating the speed at which the dependent variable, revenues, returns to equilibrium after a change in the independent variable, expenses. The negative sign of the error correction term shows that if expenses are above their long-run relationship with revenues, they will decrease to return to equilibrium.
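The same error correction regression can be reproduced with OLS. A minimal sketch, continuing from the Engle-Granger sketch above (revenues, expenses and residuals already defined):

import numpy as np
import statsmodels.api as sm

dy = np.diff(revenues)   # Δy_t, 19 observations
dx = np.diff(expenses)   # Δx_t
ec = residuals[:-1]      # ε_{t-1}, the lagged error correction term

X = sm.add_constant(np.column_stack([dx, ec]))
ecm = sm.OLS(dy, X).fit()
print(ecm.params.round(2))   # roughly [4.47, 0.23, -0.78], matching the table above
print(ecm.tvalues.round(2))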


Causality

Finally, we test for causality to see whether x Granger-causes y or y Granger-causes x. Granger causality in an error correction model is tested by using the following equation:

Δyt = a + b·Δyt−1 + c·Δxt−1 + d·εt−1

Δy    Δyt-1  Δxt-1  εt-1
-9    0      0      14.14527
-12   -9     -12    14.74298
1     -12    22     -14.8528
19    1      2      -15.4525
10    19     10     -4.45055
10    10     16     -7.2475
10    10     5      -1.24655
2     10     4      5.554208
12    2      11     -1.2437
8     12     25     -9.23894
28    8      -4     1.960295
-49   28     11     21.16239
30    -49    14     -39.0349
20    30     33     -35.4287
6     20     -58    30.96029
14    6      38     6.567523
46    14     33     -5.8262
-25   46     6      35.37494
2     -25    4      7.175703

SUMMARY OUTPUT

Regression Statistics
Multiple R          0.650982
R Square            0.423778
Adjusted R Square   0.308534
Standard Error      17.27081
Observations        19

ANOVA
            df  SS        MS        F        Significance F
Regression  3   3290.524  1096.841  3.67721  0.036328
Residual    15  4474.213  298.2809
Total       18  7764.737

           Coefficients  Standard Error  t Stat    P-value   Lower 95%  Upper 95%
Intercept  7.997014      4.507113        1.77431   0.096306  -1.60968   17.6037
Δy t-1     0.006552      0.231203        0.028338  0.977766  -0.48625   0.49935
Δx t-1     -0.16795      0.246096        -0.68246  0.50535   -0.69249   0.35659
ε t-1      -0.79085      0.299339        -2.642    0.018484  -1.42888   -0.15283

Δyt = a + b·Δyt−1 + c·Δxt−1 + d·εt−1

Δyt = 7.997 + 0.007·Δyt−1 − 0.17·Δxt−1 − 0.79·εt−1
t-statistics (0.028) (−0.68) (−2.64)

The coefficient of Δxt−1 is not significant at the 5% significance level. The independent variable x does not Granger-cause the dependent variable y.

We then test for the opposite relationship.

Δxt = a + b·Δxt−1 + c·Δyt−1 + d·εt−1

Δx    Δyt-1  Δxt-1  εt-1
-12   0      0      14.14527
22    -9     -12    14.74298
2     -12    22     -14.8528
10    1      2      -15.4525
16    19     10     -4.45055
5     10     16     -7.2475
4     10     5      -1.24655
11    10     4      5.554208
25    2      11     -1.2437
-4    12     25     -9.23894
11    8      -4     1.960295
14    28     11     21.16239
33    -49    14     -39.0349
-58   30     33     -35.4287
38    20     -58    30.96029
33    6      38     6.567523
6     14     33     -5.8262
4     46     6      35.37494
16    -25    4      7.175703

SUMMARY OUTPUT

Regression Statistics
Multiple R          0.702944
R Square            0.49413
Adjusted R Square   0.392956
Standard Error      16.18942
Observations        19

ANOVA
            df  SS        MS        F         Significance F
Regression  3   3840.222  1280.074  4.883963  0.014528
Residual    15  3931.462  262.0975
Total       18  7771.684

           Coefficients  Standard Error  t Stat    P-value   Lower 95%  Upper 95%
Intercept  14.01761      4.224907        3.317849  0.004684  5.012423   23.02279
Δy t-1     -0.6445       0.216727        -2.97378  0.009465  -1.10644   -0.18256
Δx t-1     -0.09196      0.230687        -0.39862  0.695787  -0.58365   0.39974
ε t-1      0.652528      0.280596        2.325505  0.034479  0.054451   1.250605

Δxt = a + b·Δxt−1 + c·Δyt−1 + d·εt−1

Δxt = 14.02 − 0.09·Δxt−1 − 0.64·Δyt−1 + 0.65·εt−1
t-statistics (−0.40) (−2.97) (2.33)

The coefficient of Δyt−1 is negative and statistically significant at the 5% significance level. Therefore, y does Granger-cause x.
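As a cross-check, statsmodels provides a standard Granger causality test on the differenced series (without the error correction term, so the results are only indicative). A minimal sketch, reusing dy and dx from the error correction sketch above; by convention the second column is tested as a cause of the first:

import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

df = pd.DataFrame({"d_rev": dy, "d_exp": dx})
grangercausalitytests(df[["d_rev", "d_exp"]], maxlag=1)   # does x Granger-cause y?
grangercausalitytests(df[["d_exp", "d_rev"]], maxlag=1)   # does y Granger-cause x?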

I have added two articles to help you understand cointegration, Granger causality and unrestricted vector autoregressive systems in practice. The purpose of writing the articles is to strengthen your research skills in terms of designing the research problem and applying the methodology. Good luck once again.

An application of a Johansen Cointegration test and a Vector Error Correction, (VEC) model to test the Granger causality between general government revenues and general government total expenditures in Greece.

Preface

In this article, we are investigating the effects of macroeconomic variables in terms of the natural logarithmic yearly returns of the general government revenues and the general government total expenditures of Greece. We have applied a Vector Error Correction (VEC) model, a Granger causality test and a Johansen cointegration test to check for a long-term relationship between general government revenues and


general government total expenditures. By using a VEC model, we have found that 62% of the disequilibrium (the error term, or speed of adjustment towards long-run equilibrium) is corrected each year by changes in general government revenues, while the error term accounts for 56% of the disequilibrium corrected each year by changes in general government total expenditures. The speed of adjustment is not too fast for either variable. Moreover, we have found a statistically significant long-run relationship between the general government revenues and the general government total expenditures by using the Johansen cointegration test. Finally, at the 5% significance level, we have found significant evidence that the natural logarithmic yearly returns of the general government total expenditures Granger-cause the natural logarithmic yearly returns of the general government revenues, and not vice versa. The data that we have used are natural logarithmic yearly returns starting from 01/01/1980 to 01/01/2018, which total 39 observations. The data were obtained from the Statistical Department of the International Monetary Fund.

Keywords: general government revenues, general government total expenditures, Johansen cointegration, Vector Error Correction (VEC) model, Granger causality.

Introduction

The application and modelling of long-run relationships in finance has attracted the attention of both academics and practitioners for many years, from the point of view that there is noise or speculation in the market that drives prices away from their fundamentals. This disequilibrium, created in the absence of arbitrage activities, produces non-stationary series that are bound together by a long-run relationship that affects prices and shows manifestations in different time intervals. Among the academics who have used this methodology are Johansen and Juselius (1990), Kao (1999), MacKinnon, Haug and Michelis (1999), Osterwald (1992), Pedroni (1999, 2004), White (1980), Lutkepohl (1991), and Engle and Granger (1987).


Fiscal policy is the result of changing the level of government expenditures or taxation to affect the level of aggregate demand. Greece has adopted a contractionary fiscal policy since 2008 in an attempt to cut government expenditures. It has a budget deficit, in the sense that the government's expenditure exceeds its revenue from taxation. The government is trying to eliminate the public debt and run a budget surplus, where tax revenues exceed central government expenditures. According to the International Monetary Fund (IMF) statistics, Greece's net debt has increased significantly from 127.100% in 2009 to 173.432% in 2013. GDP growth was -2.339% in 2009, and in 2011 the figure was -5.00%. In 2009, the unemployment rate was 9.375% and in 2013 it reached 18.987%. In Greece, in 2009, the general government revenues as percent of GDP were 38.342% and in 2013, 42.968%. The general government total expenditures as percent of GDP were 53.940% in 2009, and in 2013 they reached the value of 47.542%. For the period 2000 to 2009, there was an increase of the general government total expenditures of 7.252% as percent of GDP.

The volume of imports of goods and services has decreased from 1.156% in 2001 to -14.324% in 2012. The volume of exports of goods and services has decreased from 17.312% in 2004 to 2.770% in 2013. The output gap as percent of potential GDP has decreased from 10.045% in 2007 to -10.648% in 2013. The total investment as percent of GDP has decreased from 26.721% in 2007 to 13.198% in 2013. The gross national savings as percent of GDP have decreased from 17.944% in 2003 to 12.912% in 2013, a decrease of -5.032%. The general government net lending/borrowing as percent of GDP was -15.598% in 2009 and -4.575% in 2013.

In this article, we are focusing on modelling the relationship between two variables only. We are going to test if there is short, long – term relationship and Granger causality between general government revenues and general government total expenditures.

The rest of the paper is organized as follows. Section 1 describes the methodology and the data. Section 2 is an analysis of statistical and econometric tests and Section 3 summarizes and concludes.

1. Methodological issues and data explanations.

In this article, we are going to use the Johansen cointegration methodology to find out whether there is a long-run relationship between the two variables. Then, we test for Granger causality through a Vector Error Correction Model (VECM) to check whether there exists Granger causality between the general government revenues (r) and the general government total expenditures (e).

Let's assume that the natural logarithmic yearly returns of the general government revenues (r) and the general government total expenditures (e) are endogenous


variables and are jointly determined by a Vector Error Correction Model (VECM). Let a constant be the only exogenous variable. By taking one lag of the endogenous variables, as they are integrated of order I(1), the mathematical equations (1) and (2) are as follows:

ΔRlnr = α1 + α11·ΔRlnrt−1 + α12·ΔRlnet−1 − β1·ECt−1 + ε1t   (1)

ΔRlne = α2 + α21·ΔRlnrt−1 + α22·ΔRlnet−1 − β2·ECt−1 + ε2t   (2)

where ΔRlnr and ΔRlne are the rates of change of general government revenues and general government total expenditures, αij and βi are the parameters to be estimated, and α1 and α2 are the constants. ECt−1 is the error correction term, defined from the lagged value of the dependent variable in relation to the independent variable: ECt−1 = yt−1 − d·xt−1, where d represents the long-run relationship between the dependent and independent variables. εij is the error term of the VECM regression equation. According to the EViews 6 User's Guide II (p. 363), the VAR-based cointegration tests developed in Johansen (1991, 1995) of order p are as follows:

yt = A1·yt−1 + ... + Ap·yt−p + B·χt + εt   (3)

where yt is a k-vector of non-stationary I(1) variables, χt is a d-vector of deterministic variables, and εt is a vector of innovations.

According to the EViews 6 User's Guide II (p. 363), the VAR equation could be written as:

Δyt = Π·yt−1 + Σ(i=1 to p−1) Γi·Δyt−i + B·χt + εt   (4)

where the matrices Π and Γi are defined as:

Π = Σ(i=1 to p) Ai − I,   Γi = −Σ(j=i+1 to p) Aj   (5)


To find the number of co-integrating vectors, Johansen,(1991,1995) used two statistic tests. The first one is the trace test. According to E-views user’s guide II, it tests the null hypothesis of r cointegrating relations against the alternative of k cointegrating relations, where k is the number of endogenous variables, for r = 0,1,……k-1. The alternative of k cointegrating relations corresponds to the case where none of the series has a unit root and a stationary VAR may be specified in terms of the levels of all of the series. The trace statistic for the null hypothesis of r cointegrating relations is computed as:

LRtr(r|k) = −T · Σ(i=r+1 to k) log(1 − λi)   (6)

where λi is the i-th largest eigenvalue of the Π matrix in equation (5).

The second block of the output reports the maximum eigenvalue statistic, which tests the null hypothesis of r cointegrating relations against the alternative of r+1 cointegrating relations. This test statistic is computed as:

LRmax(r|r+1) = −T·log(1 − λr+1) = LRtr(r|k) − LRtr(r+1|k)   (7)

for r = 0, 1, ..., k−1. Descriptive statistics will be displayed, and to test for normality the Jarque-Bera statistic is analysed. We check for stationarity of the series by applying the Augmented Dickey-Fuller (ADF) test and comparing the statistic with the critical values.
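In Python, the Johansen test is available in statsmodels. A minimal sketch under the article's setup (two series, a constant in the cointegrating relation, one lagged difference); the randomly generated series here are placeholders for the IMF data:

import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

# lnr, lne: the two yearly series (39 observations each); placeholder data below.
rng = np.random.default_rng(1)
lnr = np.cumsum(rng.normal(size=39))
lne = lnr + rng.normal(scale=0.5, size=39)
data = np.column_stack([lnr, lne])

result = coint_johansen(data, det_order=0, k_ar_diff=1)
print(result.lr1)  # trace statistics, equation (6); critical values in result.cvt
print(result.lr2)  # maximum eigenvalue statistics, equation (7); critical values in result.cvm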

The data that we have used are yearly returns starting from 01/01/1980 to 01/01/2018, which total 39 observations. The data were derived from the International Monetary Fund (IMF). According to the IMF, general government revenues include taxes, social contributions, grants receivable, and other revenue. General government total expenditures include total expense and the net acquisition of non-financial assets.

The logarithmic formula that we have used is:

Rt = ln(Pt / Pt−1)   (8)

Where: Rt is the yearly return for year t, Pt is the closing price for year t, and Pt-1 is the closing price lagged one period for year t-1.
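For instance, equation (8) in Python, applied to three illustrative expenditure levels quoted later in the text (2000, 2008 and 2013, in billions of euro):

import numpy as np

p = np.array([63.627, 117.850, 87.226])  # illustrative levels in billions
r = np.log(p[1:] / p[:-1])               # R_t = ln(P_t / P_{t-1})
print(r.round(4))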


Figure 1, shows a trend line that illustrates the relationship between general government revenues and general total government expenditures for the period 01/01/1980 to 01/01/2018. The data was obtained from the IMF and data beyond 2013 was based on estimates and projections from the IMF staff.

[Figure 1. Trend lines of general government revenue and general government total expenditure, in billions of euro, 1980-2016.]

Source: Author’s calculation based on Excel software. Data were obtained from the Statistical Department of the International Monetary Fund, (IMF).

According to Figure 1, the general government revenues have decreased from 94.764 billions in 2008 to 78.833 billions in 2013. The total general government expenditures have increased from 63.627 billions in 2000 to 117.850 billions in 2008. Then, the expenditures have decreased from 117.850 billions in 2008 to 87.226 billions in 2013.

87

Page 88: Introduction to Econometrics 2

2. Statistical and econometric tests.

Table 1 shows descriptive statistics and normality tests of the logarithmic yearly returns of the general government revenues and total general government expenses.

Table 1 displays Jarque - Bera normality test of the logarithmic yearly returns of the general government revenues and total general government expenditures for the period 01/01/1980 to 01/01/2018.

              LNR        LNE
Mean          0.102043   0.100106
Median        0.110114   0.092448
Maximum       0.328670   0.315797
Minimum       -0.075202  -0.110734
Std. Dev.     0.095202   0.107063
Skewness      0.242968   0.014847
Kurtosis      2.582699   2.552721
Jarque-Bera   0.666697   0.326529
Probability   0.716520   0.849367
Sum           3.979682   3.904143
Sum Sq. Dev.  0.344406   0.435578
Observations  39         39

Source: Author's calculation based on EViews 6 software.
Significance is assessed at the 5% level.

We state the hypotheses as follows:

H0: The natural logarithmic yearly returns of the general government revenues and of the general government total expenditures are normally distributed.

H1: The natural logarithmic yearly returns of the general government revenues and of the general government total expenditures are not normally distributed.

According to Table 1, the Jarque-Bera χ2 statistics for both variables are not significant at the 5% significance level. For example, the logarithmic yearly returns of general government revenues show a χ2 statistic of 0.67, which is not significant, as the p-value is 0.72. The sample evidence suggests that we cannot reject H0 of normality.
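The Jarque-Bera test is also available in scipy. A minimal sketch, with randomly generated placeholder data standing in for the LNR series:

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
lnr = rng.normal(loc=0.10, scale=0.095, size=39)   # placeholder returns

jb, p = stats.jarque_bera(lnr)
print(round(jb, 2), round(p, 2))   # p > 0.05: do not reject the null of normality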


Tables 2-5 show the ADF tests of the natural logarithmic yearly returns of the general government revenues, and logarithmic yearly returns of the total general government expenditures for the period 01/01/1980 to 01/01/2018.

Table 2 shows the ADF test of the yearly log difference of the general government revenues for the period 01/01/1980 to 01/01/2018.

Null Hypothesis: LNR has a unit root
Exogenous: Constant
Lag Length: 0 (Automatic based on SIC, MAXLAG=9)

                                        t-Statistic  Prob.*
Augmented Dickey-Fuller test statistic  -2.380751    0.1537
Test critical values:  1% level         -3.615588
                       5% level         -2.941145
                       10% level        -2.609066

*MacKinnon (1996) one-sided p-values.

Augmented Dickey-Fuller Test Equation
Dependent Variable: D(LNR)
Method: Least Squares
Date: 09/11/13  Time: 15:10
Sample (adjusted): 2 39
Included observations: 38 after adjustments

Variable  Coefficient  Std. Error  t-Statistic  Prob.
LNR(-1)   -0.260525    0.109430    -2.380751    0.0227
C         0.028192     0.015356    1.835824     0.0747

R-squared 0.136027    Adjusted R-squared 0.112028    S.E. of regression 0.063926
Sum squared resid 0.147114    Log likelihood 51.60889    F-statistic 5.667973
Prob(F-statistic) 0.022691    Durbin-Watson stat 1.944586
Akaike info criterion -2.610994    Schwarz criterion -2.524806    Hannan-Quinn criter. -2.580329
Mean dependent var 0.001227    S.D. dependent var 0.067838

Source: Author’s calculation based on EViews 6 software.

For a one per cent level of significance, the critical value of the t-statistic from the Dickey-Fuller (1979) table is -3.62. According to Table 2 and the sample evidence, we cannot reject the null hypothesis, namely the existence of a unit root, at the one, five and ten per cent significance levels. The ADF test statistic is -2.38, which is greater than the critical values (-3.62, -2.94, -2.61). In other words, the yearly log difference of the general government revenues is not a stationary series.


Table 3 shows the ADF test at first difference of the yearly log difference of the general government revenues for the period 01/01/1980 to 01/01/2018.

Null Hypothesis: D(LNR) has a unit root
Exogenous: Constant
Lag Length: 0 (Automatic based on SIC, MAXLAG=9)

                                        t-Statistic  Prob.*
Augmented Dickey-Fuller test statistic  -7.350649    0.0000
Test critical values:  1% level         -3.621023
                       5% level         -2.943427
                       10% level        -2.610263

*MacKinnon (1996) one-sided p-values.

Augmented Dickey-Fuller Test Equation
Dependent Variable: D(LNR,2)
Method: Least Squares
Date: 09/11/13  Time: 15:11
Sample (adjusted): 3 39
Included observations: 37 after adjustments

Variable    Coefficient  Std. Error  t-Statistic  Prob.
D(LNR(-1))  -1.159121    0.157690    -7.350649    0.0000
C           -0.002123    0.010699    -0.198450    0.8438

R-squared 0.606883    Adjusted R-squared 0.595651    S.E. of regression 0.065069
Sum squared resid 0.148188    Log likelihood 49.62284    F-statistic 54.03204
Prob(F-statistic) 0.000000    Durbin-Watson stat 2.115913
Akaike info criterion -2.574207    Schwarz criterion -2.487131    Hannan-Quinn criter. -2.543509
Mean dependent var -0.003620    S.D. dependent var 0.102328

Source: Author’s calculation based on EViews 6 software.

For a one per cent level of significance, the critical value of the t-statistic from the Dickey-Fuller (1979) table is -3.62. According to Table 3 and the sample evidence, we can reject the null hypothesis, namely the existence of a unit root, at the one, five and ten per cent significance levels. The ADF test statistic is -7.35, which is smaller than the critical values (-3.62, -2.94, -2.61). In other words, the yearly log difference of the general government revenues becomes stationary at its first difference; the series is integrated of order one, I(1).


Table 4 shows the ADF test of the yearly log difference of the total general government expenditures for the period 01/01/1980 to 01/01/2018.

Null Hypothesis: LNE has a unit root
Exogenous: Constant
Lag Length: 0 (Automatic based on SIC, MAXLAG=9)

                                        t-Statistic  Prob.*
Augmented Dickey-Fuller test statistic  -2.485913    0.1267
Test critical values:  1% level         -3.615588
                       5% level         -2.941145
                       10% level        -2.609066

*MacKinnon (1996) one-sided p-values.

Augmented Dickey-Fuller Test Equation
Dependent Variable: D(LNE)
Method: Least Squares
Date: 09/11/13  Time: 15:12
Sample (adjusted): 2 39
Included observations: 38 after adjustments

Variable  Coefficient  Std. Error  t-Statistic  Prob.
LNE(-1)   -0.284374    0.114394    -2.485913    0.0177
C         0.030079     0.016848    1.785303     0.0826

R-squared 0.146510    Adjusted R-squared 0.122802    S.E. of regression 0.075236
Sum squared resid 0.203774    Log likelihood 45.41857    F-statistic 6.179763
Prob(F-statistic) 0.017700    Durbin-Watson stat 1.693937
Akaike info criterion -2.285188    Schwarz criterion -2.198999    Hannan-Quinn criter. -2.254523
Mean dependent var 0.001205    S.D. dependent var 0.080329

Source: Author’s calculation based on EViews 6 software.

For a one per cent level of significance, the critical value of the t-statistic from the Dickey-Fuller (1979) table is -3.62. According to Table 4 and the sample evidence, we cannot reject the null hypothesis, namely the existence of a unit root, at the one, five and ten per cent significance levels. The ADF test statistic is -2.49, which is greater than the critical values (-3.62, -2.94, -2.61). In other words, the yearly log difference of the general government expenditures is not a stationary series.


Table 5 shows the ADF test at first difference of the yearly log difference of the total general government expenditures for the period 01/01/1980 to 01/01/2018.

Null Hypothesis: D(LNE) has a unit root
Exogenous: Constant
Lag Length: 0 (Automatic based on SIC, MAXLAG=9)

                                        t-Statistic  Prob.*
Augmented Dickey-Fuller test statistic  -10.25436    0.0000
Test critical values:  1% level         -3.621023
                       5% level         -2.943427
                       10% level        -2.610263

*MacKinnon (1996) one-sided p-values.

Augmented Dickey-Fuller Test Equation
Dependent Variable: D(LNE,2)
Method: Least Squares
Date: 09/11/13  Time: 15:12
Sample (adjusted): 3 39
Included observations: 37 after adjustments

Variable    Coefficient  Std. Error  t-Statistic  Prob.
D(LNE(-1))  -1.243768    0.121292    -10.25436    0.0000
C           -0.006973    0.009744    -0.715649    0.4790

R-squared 0.750271    Adjusted R-squared 0.743136    S.E. of regression 0.059263
Sum squared resid 0.122926    Log likelihood 53.08051    F-statistic 105.1520
Prob(F-statistic) 0.000000    Durbin-Watson stat 2.208040
Akaike info criterion -2.761109    Schwarz criterion -2.674032    Hannan-Quinn criter. -2.730410
Mean dependent var -0.008625    S.D. dependent var 0.116933

Source: Author’s calculation based on EViews 6 software.

For a one per cent level of significance, the critical value of the t-statistic from the Dickey-Fuller (1979) table is -3.62. According to Table 5 and the sample evidence, we can reject the null hypothesis, namely the existence of a unit root, at the one, five and ten per cent significance levels. The ADF test statistic is -10.25, which is smaller than the critical values (-3.62, -2.94, -2.61). In other words, the log difference of the general government expenditures is stationary, so the series is integrated of order one, I(1).


Table 6 displays the Vector Error Correction (VEC) model. lnr represents the natural logarithmic yearly returns of the general government revenues, and lne represents the natural logarithmic yearly returns of the general government total expenditures, from 01/01/1980 to 01/01/2018, a total of 39 observations.

Vector Error Correction Estimates
Date: 09/13/13 Time: 19:58
Sample (adjusted): 4 39
Included observations: 36 after adjustments
Standard errors in ( ) & t-statistics in [ ]

Error Correction: D(LNR) D(LNE)

CointEq1 -0.620750  0.560305
         (0.25394)  (0.25737)
         [-2.44445] [ 2.17703]

D(LNR(-1)) -0.064104 -0.285494
           (0.17924)  (0.18166)
           [-0.35765] [-1.57162]

D(LNE(-1))  0.025254  0.067068
           (0.18338)  (0.18586)
           [ 0.13771] [ 0.36085]

C -0.002296 -0.007118
  (0.00948)  (0.00961)
  [-0.24217] [-0.74078]

R-squared  0.321147  0.224353
Adj. R-squared  0.257504  0.151636
Sum sq. resids  0.103523  0.106338
S.E. equation  0.056878  0.057646
F-statistic  5.046110  3.085287
Log likelihood  54.24478  53.76198
Akaike AIC -2.791377 -2.764554
Schwarz SC -2.615430 -2.588608
Mean dependent -0.002361 -0.007408
S.D. dependent  0.066008  0.062586

Determinant resid covariance (dof adj.)  7.14E-06
Determinant resid covariance  5.64E-06
Log likelihood  115.3637
Akaike information criterion -5.853539
Schwarz criterion -5.413673

Source: Author’s calculation based on EViews 6 software.

93

Page 94: Introduction to Econometrics 2

By applying equations (1) and (2) to the results of Table 6, we have:

\Delta \ln r_t = \alpha_1 + \alpha_{11} \Delta \ln r_{t-1} + \alpha_{12} \Delta \ln e_{t-1} - \beta_1 EC_{t-1} + \varepsilon_{1t}

\Delta \ln r_t = -0.002 - 0.064\,\Delta \ln r_{t-1} + 0.025\,\Delta \ln e_{t-1} - 0.62\,EC_{t-1} \quad (9)
standard errors: (0.009) (0.179) (0.183) (0.254); t-statistics: [-0.242] [-0.358] [0.138] [-2.44]

\Delta \ln e_t = \alpha_2 + \alpha_{21} \Delta \ln r_{t-1} + \alpha_{22} \Delta \ln e_{t-1} - \beta_2 EC_{t-1} + \varepsilon_{2t}

\Delta \ln e_t = -0.007 - 0.285\,\Delta \ln r_{t-1} + 0.067\,\Delta \ln e_{t-1} + 0.56\,EC_{t-1} \quad (10)
standard errors: (0.009) (0.182) (0.186) (0.257); t-statistics: [-0.74] [-1.57] [0.36] [2.177]

According to Table 6, the Vector Error Correction (VEC) model is used to measure the short-term disequilibrium of the general government revenues and the general total government expenditures from their long-run relationship. It estimates the speed at which the dependent variable, such as the general government revenues, returns to equilibrium after a change in an independent variable, such as the general total government expenditures. Before testing for a long-term cointegration equilibrium, we used the VEC model and found that 62% of the disequilibrium, that is, the speed of adjustment towards long-run equilibrium, is corrected each year by changes in the general government revenues. Similarly, 56% of the disequilibrium is corrected each year by changes in the general total government expenditures.

The results of regressions (9) and (10) show that the t-statistics of the coefficients of the error correction term are statistically significant at the 5% significance level. The remaining coefficients of both equations are not statistically significant. Specifically, the coefficient of the error term of the dependent variable general government revenues has a negative value of -0.62 and a t-statistic of -2.44. On the other hand, the coefficient of the error term of the dependent variable general total government expenditures has a positive value of 0.56 and a t-statistic of 2.177. The negative or positive value of the one variable will push the other variable downward or upward to adjust its value and achieve equilibrium. For example, the error term of the variable general total government expenditures has a positive value, so it is above the equilibrium level. In contrast, the error term of the variable general government revenues has a negative value, so it is below the equilibrium level. To restore equilibrium, the general total government expenditures have to decrease and the general government revenues have to increase. Actually, this is what the Greek government is currently doing to better control the contractionary fiscal policy that it has adopted.
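For readers working outside EViews, the same two-equation system can be estimated with statsmodels' VECM class. This is a minimal sketch, assuming one lagged difference and one cointegrating relation; the synthetic cointegrated pair stands in for the Greek revenue and expenditure series, so only the workflow, not the numbers, carries over.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(1)
trend = np.cumsum(rng.normal(0.0, 0.05, 39))      # one shared stochastic trend
data = pd.DataFrame({
    "lnr": trend + rng.normal(0.0, 0.02, 39),     # synthetic log revenues
    "lne": trend + rng.normal(0.0, 0.02, 39),     # synthetic log expenditures
})

# One lagged difference (k_ar_diff=1), one cointegrating relation, constant outside the relation.
res = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="co").fit()
print("alpha (speed of adjustment):\n", res.alpha)   # analogues of the -0.62 and 0.56 loadings
print("beta (cointegrating vector):\n", res.beta)
```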


Table 7 shows the coefficients and the p-values of the Vector Error Correction (VEC) model, which was displayed in Table 6. lnr represents the natural logarithmic yearly returns of the general government revenues from 01/01/1980 to 01/01/2018, a total of 39 observations.

Dependent Variable: D(LNR)
Method: Least Squares
Date: 09/13/13 Time: 21:11
Sample (adjusted): 4 39
Included observations: 36 after adjustments
D(LNR) = C(1)*( LNR(-1) - 0.892695205347*LNE(-1) - 0.0134665385566 ) + C(2)*D(LNR(-1)) + C(3)*D(LNE(-1)) + C(4)

Coefficient Std. Error t-Statistic Prob.

C(1) -0.620750 0.253943 -2.444445 0.0202
C(2) -0.064104 0.179236 -0.357652 0.7230
C(3) 0.025254 0.183382 0.137713 0.8913
C(4) -0.002296 0.009481 -0.242174 0.8102

R-squared 0.321147    Mean dependent var -0.002361
Adjusted R-squared 0.257504    S.D. dependent var 0.066008
S.E. of regression 0.056878    Akaike info criterion -2.791377
Sum squared resid 0.103523    Schwarz criterion -2.615430
Log likelihood 54.24478    Hannan-Quinn criter. -2.729967
F-statistic 5.046110    Durbin-Watson stat 2.133267
Prob(F-statistic) 0.005638

Source: Author’s calculation based on EViews 6 software.

According to Table 7, the coefficient of the cointegrating term, C(1), is the only one that is statistically significant at the 5% significance level. The remaining coefficients C(2), C(3) and the constant C(4) are not significant. The model as a whole has a significant F-statistic, with a probability of 0.005.

Table 8 shows a Wald test for the coefficients C(2) and C(3) that were estimated in Table 7 for the Vector Error Correction model for the period 01/01/1980 to 01/01/2018.

Wald Test:Equation: Untitled

Test Statistic Value df Probability

F-statistic 0.069513 (2, 32) 0.9330
Chi-square 0.139027 2 0.9328

Null Hypothesis Summary:

Normalized Restriction (= 0) Value Std. Err.

C(2) -0.064104 0.179236
C(3) 0.025254 0.183382


Restrictions are linear in coefficients.
Source: Author’s calculation based on EViews 6 software.

The hypotheses that have been formulated and tested are as follows:

H0: C(2) = C(3) = 0; the coefficients do not affect the dependent variable, which is the natural logarithmic return of the general government revenues.

H1: At least one of C(2) and C(3) differs from zero; the coefficients do affect the dependent variable, which is the natural logarithmic return of the general government revenues.

According to Table 8, the probability of the Chi-square statistic is 0.93, and the sample evidence suggests that we cannot reject the null hypothesis. In other words, both coefficients are not significant and do not jointly affect the dependent variable in the short term.
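A joint restriction such as C(2) = C(3) = 0 can be reproduced with an F-test on an OLS re-estimate of the short-run equation. The sketch below uses random placeholder data; dlnr, dlnr_1, dlne_1 and ec_1 are illustrative names for the differenced series and the lagged error-correction term, not the actual Greek data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
df = pd.DataFrame(rng.normal(0.0, 0.05, size=(36, 4)),
                  columns=["dlnr", "dlnr_1", "dlne_1", "ec_1"])  # placeholder data

# Re-estimate the short-run equation and test the joint restriction on the two lag terms.
res = smf.ols("dlnr ~ ec_1 + dlnr_1 + dlne_1", data=df).fit()
print(res.f_test("dlnr_1 = 0, dlne_1 = 0"))   # counterpart of the Wald F-statistic in Table 8
```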

Table 9 shows the coefficients and the p-values of the Vector Error Correction (VEC) model, which was displayed in Table 6. lne represents the natural logarithmic yearly returns of the general government total expenditures from 01/01/1980 to 01/01/2018, a total of 39 observations.

Dependent Variable: D(LNE)
Method: Least Squares
Date: 09/13/13 Time: 21:09
Sample (adjusted): 4 39
Included observations: 36 after adjustments
D(LNE) = C(5)*( LNR(-1) - 0.892695205347*LNE(-1) - 0.0134665385566 ) + C(6)*D(LNR(-1)) + C(7)*D(LNE(-1)) + C(8)

Coefficient Std. Error t-Statistic Prob.

C(5) 0.560305 0.257372 2.177026 0.0370
C(6) -0.285494 0.181656 -1.571621 0.1259
C(7) 0.067068 0.185858 0.360855 0.7206
C(8) -0.007118 0.009609 -0.740778 0.4642

R-squared 0.224353    Mean dependent var -0.007408
Adjusted R-squared 0.151636    S.D. dependent var 0.062586
S.E. of regression 0.057646    Akaike info criterion -2.764554
Sum squared resid 0.106338    Schwarz criterion -2.588608
Log likelihood 53.76198    Hannan-Quinn criter. -2.703144
F-statistic 3.085287    Durbin-Watson stat 2.245649
Prob(F-statistic) 0.041071

Source: Author’s calculation based on EViews 6 software.

According to Table 9, the coefficient of the cointegrating term, C(5), is the only one that is statistically significant at the 5% significance level, as its p-value of 0.037 is less than 5%. The remaining coefficients C(6), C(7) and the constant C(8) are not significant.


Table 10 shows a Wald test for the coefficients C(6) and C(7) that were estimated in Table 9 for the Vector Error Correction model for the period 01/01/1980 to 01/01/2018.

Wald Test:Equation: Untitled

Test Statistic Value df Probability

F-statistic 1.534250 (2, 32) 0.2311
Chi-square 3.068500 2 0.2156

Null Hypothesis Summary:

Normalized Restriction (= 0) Value Std. Err.

C(6) -0.285494 0.181656
C(7) 0.067068 0.185858

Restrictions are linear in coefficients.
Source: Author’s calculation based on EViews 6 software.

The hypotheses that have been formulated and tested are as follows:

H0: C(6) = C(7) = 0; the coefficients do not affect the dependent variable, which is the natural logarithmic return of the general government total expenditures.

H1: At least one of C(6) and C(7) differs from zero; the coefficients do affect the dependent variable, which is the natural logarithmic return of the general government total expenditures.

According to Table 10, the probability of the Chi-square statistic is 0.22, and the sample evidence suggests that we cannot reject the null hypothesis. In other words, both coefficients are not significant and do not jointly affect the dependent variable in the short term.

Table 11 shows the Breusch-Godfrey serial correlation LM test of the residuals of equation (1), which has as dependent variable the yearly changes of the natural logarithm of the general government revenues for the period 01/01/1980 to 01/01/2018.

Breusch-Godfrey Serial Correlation LM Test:

F-statistic 1.735211    Prob. F(4,28) 0.1703
Obs*R-squared 7.151240    Prob. Chi-Square(4) 0.1281

Source: Author’s calculation based on EViews 6 software.

The hypotheses that have been formulated and tested are as follows:

H0: The residuals of the Vector Error Correction regression model of equation (1) of the natural logarithm of the general government revenues have no serial correlation.

H1: The residuals of the Vector Error Correction regression model of equation (1) of the natural logarithm of the general government revenues have serial correlation.

According to Table 11, the probability of the Chi-square statistic is 0.13, which is greater than the 5% significance level; therefore, we cannot reject H0. In other words, there is no serial correlation in the residuals.
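Outside EViews, the same LM test is available as statsmodels' acorr_breusch_godfrey. The self-contained sketch below runs it on a simulated regression, so the printed statistic mirrors the Obs*R-squared line of Table 11 in form only, not in value.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

rng = np.random.default_rng(3)
X = sm.add_constant(rng.normal(size=(36, 3)))                    # placeholder regressors
y = X @ np.array([0.1, -0.6, -0.05, 0.03]) + rng.normal(0.0, 0.05, 36)
res = sm.OLS(y, X).fit()

lm_stat, lm_pval, f_stat, f_pval = acorr_breusch_godfrey(res, nlags=4)
print(f"Obs*R-squared = {lm_stat:.4f}, Prob. Chi-Square(4) = {lm_pval:.4f}")
```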

Table 12 shows the Breusch-Pagan-Godfrey heteroskedasticity test of the residuals of equation (1), which has as dependent variable the yearly changes of the natural logarithm of the general government revenues for the period 01/01/1980 to 01/01/2018.

Heteroskedasticity Test: Breusch-Pagan-Godfrey

F-statistic 0.943815    Prob. F(4,31) 0.4518
Obs*R-squared 3.908218    Prob. Chi-Square(4) 0.4186
Scaled explained SS 4.543415    Prob. Chi-Square(4) 0.3374

Source: Author’s calculation based on EViews 6 software.

The hypotheses that have been formulated and tested are as follows:

H0: The residuals of the Vector Error Correction regression model of equation (1) of the natural logarithm of the general government revenues are homoskedastic.

H1: The residuals of the Vector Error Correction regression model of equation (1) of the natural logarithm of the general government revenues are not homoskedastic; that is, they are heteroskedastic.

According to Table 12, the probability of the Chi-square statistic is 0.42, which is greater than the 5% significance level; therefore, we cannot reject H0. In other words, the residuals of equation (1) are homoskedastic.
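The Breusch-Pagan-Godfrey statistic of Table 12 has a direct counterpart in statsmodels' het_breuschpagan, which regresses the squared residuals on the model regressors. Again, this is a sketch on simulated data, not the actual series.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(4)
X = sm.add_constant(rng.normal(size=(36, 3)))                    # placeholder regressors
y = X @ np.array([0.1, -0.6, -0.05, 0.03]) + rng.normal(0.0, 0.05, 36)
res = sm.OLS(y, X).fit()

lm_stat, lm_pval, f_stat, f_pval = het_breuschpagan(res.resid, res.model.exog)
print(f"Obs*R-squared = {lm_stat:.4f}, Prob. Chi-Square = {lm_pval:.4f}")
```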


Figure 2 and Table 13 display the Jarque-Bera normality test of the residuals of equation (1), which has as dependent variable the yearly changes of the natural logarithm of the general government revenues for the period 01/01/1980 to 01/01/2018.

Series: Residuals
Sample: 4 39
Observations: 36

Mean -5.30e-18
Median 0.003489
Maximum 0.122062
Minimum -0.151222
Std. Dev. 0.054386
Skewness -0.158868
Kurtosis 3.942650

Jarque-Bera 1.484319
Probability 0.476085

Source: Author’s calculation based on EViews 6 software.

We state the hypotheses as follows:

H0: The residuals of the natural logarithmic differences of the yearly returns of the dependent variable, which is the general government revenues expressed in equation (1) are normally distributed.

H1: The residuals of the natural logarithmic differences of the yearly returns of the dependent variable, which is the general government revenues expressed in equation (1) are not normally distributed.

According to Table 13, the Jarque-Bera χ2 statistic of the residuals is 1.48, which is not significant at the 5% significance level. The sample evidence suggests that we cannot reject H0 of normality.
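The Jarque-Bera statistic combines skewness and excess kurtosis, and statsmodels exposes it directly. The sketch below uses simulated residuals with a standard deviation close to the one reported in Table 13; the values are illustrative only.

```python
import numpy as np
from statsmodels.stats.stattools import jarque_bera

rng = np.random.default_rng(5)
resid = rng.normal(0.0, 0.054, 36)            # stand-in for the 36 VEC residuals
jb, jb_pval, skew, kurt = jarque_bera(resid)
print(f"Jarque-Bera = {jb:.4f}, p-value = {jb_pval:.4f}, "
      f"skewness = {skew:.4f}, kurtosis = {kurt:.4f}")
```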


Table 14 displays the Breusch-Godfrey serial correlation LM test for the residuals of equation (2), which has as dependent variable the yearly changes of the natural logarithm of the general total government expenditures for the period 01/01/1980 to 01/01/2018.

Breusch-Godfrey Serial Correlation LM Test:

F-statistic 0.968919    Prob. F(4,28) 0.4400
Obs*R-squared 4.377142    Prob. Chi-Square(4) 0.3574

Source: Author’s calculation based on EViews 6 software.

The hypotheses that have been formulated and tested are as follows:

H0: The residuals of the Vector Error Correction regression model of equation (2) of the natural logarithm of the general total government expenditures have no serial correlation.

H1: The residuals of the Vector Error Correction regression model of equation (2) of the natural logarithm of the general total government expenditures have serial correlation.

According to Table 14, the probability of the Chi-square statistic is 0.36, which is greater than the 5% significance level; therefore, we cannot reject H0. In other words, there is no serial correlation in the residuals.

Table 15 displays the Breusch-Pagan-Godfrey heteroskedasticity test for the residuals of equation (2), which has as dependent variable the yearly changes of the natural logarithm of the general total government expenditures for the period 01/01/1980 to 01/01/2018.

Heteroskedasticity Test: Breusch-Pagan-Godfrey

F-statistic 1.620119    Prob. F(4,31) 0.1941
Obs*R-squared 6.224497    Prob. Chi-Square(4) 0.1830
Scaled explained SS 4.554400    Prob. Chi-Square(4) 0.3361

Source: Author’s calculation based on EViews 6 software.

The hypotheses that have been formulated and tested are as follows:

H0: The residuals of the Vector Error Correction regression model of equation (2) of the natural logarithm of the general total government expenditures are homoskedastic.


H1: The residuals of the Vector Error Correction regression model of equation (2) of the natural logarithm of the general total government expenditures are not homoskedastic; that is, they are heteroskedastic.

According to Table 15, the probability of the Chi-square statistic is 0.18, which is greater than the 5% significance level; therefore, we cannot reject H0. In other words, the residuals of equation (2) are homoskedastic.

Figure 3 and Table 16 display the Jarque-Bera normality test of the residuals of equation (2), which has as dependent variable the yearly changes of the natural logarithm of the general total government expenditures for the period 01/01/1980 to 01/01/2018.

Series: Residuals
Sample: 4 39
Observations: 36

Mean 7.71e-19
Median 0.014701
Maximum 0.134718
Minimum -0.126500
Std. Dev. 0.055120
Skewness -0.171790
Kurtosis 2.852090

Jarque-Bera 0.209886
Probability 0.900376

Source: Author’s calculation based on EViews 6 software.

We state the hypotheses as follows:

H0: The residuals of the natural logarithmic differences of the yearly returns of the dependent variable, which is the general total government expenditures expressed in equation (2) are normally distributed.

H1: The residuals of the natural logarithmic differences of the yearly returns of the dependent variable, which is the general total government expenditures expressed in equation (2) are not normally distributed.

According to Table 16, the Jarque-Bera χ2 statistic of the residuals is 0.21, which is not significant at the 5% significance level, as the p-value is 0.90. The sample evidence suggests that we cannot reject H0 of normality.


Table 17 shows the Johansen Cointegration test of the general government revenues and the general total government expenditures for the trace test and maximum eigenvalue test for the period 01/01/1980 to 01/01/2018.

Date: 09/11/13 Time: 19:39
Sample (adjusted): 3 39
Included observations: 37 after adjustments
Trend assumption: Linear deterministic trend
Series: LNE LNR
Lags interval (in first differences): 1 to 1

Unrestricted Cointegration Rank Test (Trace)

Hypothesized No. of CE(s)   Eigenvalue   Trace Statistic   0.05 Critical Value   Prob.**

None *      0.444130   24.19058   15.49471   0.0019
At most 1   0.064411   2.463426   3.841466   0.1165

Trace test indicates 1 cointegrating eqn(s) at the 0.05 level
* denotes rejection of the hypothesis at the 0.05 level
**MacKinnon-Haug-Michelis (1999) p-values

Unrestricted Cointegration Rank Test (Maximum Eigenvalue)

Hypothesized No. of CE(s)   Eigenvalue   Max-Eigen Statistic   0.05 Critical Value   Prob.**

None *      0.444130   21.72715   14.26460   0.0028
At most 1   0.064411   2.463426   3.841466   0.1165

Max-eigenvalue test indicates 1 cointegrating eqn(s) at the 0.05 level
* denotes rejection of the hypothesis at the 0.05 level
**MacKinnon-Haug-Michelis (1999) p-values
Source: Author’s calculation based on EViews 6 software.

We state the hypotheses as follows:

H0: There is no cointegration, or long-run relationship, between the natural logarithmic yearly returns of the general government revenues and those of the general total government expenditures.

H1: There is cointegration, or a long-run relationship, between the natural logarithmic yearly returns of the general government revenues and those of the general total government expenditures.


Since both series are integrated of order one, I(1), that is, they become stationary at their first differences, we are motivated to apply a cointegration test. According to Table 17, the results of the Johansen cointegration tests, in terms of the trace and maximum eigenvalue tests, show that there is a long-run relationship between government revenues and total government expenditures. The p-values of both statistics are significant at the 5% significance level. Therefore, we reject H0 and accept the alternative hypothesis H1.
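The trace and maximum-eigenvalue statistics of Table 17 can be reproduced with statsmodels' coint_johansen. The sketch below runs it on a synthetic cointegrated pair rather than the actual Greek data, so only the procedure carries over.

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(6)
trend = np.cumsum(rng.normal(0.0, 0.05, 39))
data = np.column_stack([trend + rng.normal(0.0, 0.02, 39),    # synthetic LNE
                        trend + rng.normal(0.0, 0.02, 39)])   # synthetic LNR

# det_order=0 -> linear deterministic trend in levels; k_ar_diff=1 lag in first differences.
jres = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:    ", jres.lr1)   # compare against jres.cvt (90/95/99% critical values)
print("max-eigen statistics:", jres.lr2)   # compare against jres.cvm
```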

Table 18 displays the Granger causality test. lnr represents the natural logarithmic yearly returns of the general government revenues, and lne represents the natural logarithmic yearly returns of the general government total expenditures, from 01/01/1980 to 01/01/2018, a total of 39 observations.

Pairwise Granger Causality Tests
Date: 09/12/13 Time: 07:10
Sample: 1 39
Lags: 2

 Null Hypothesis: Obs F-Statistic Prob.

LNE does not Granger Cause LNR   37   5.91195   0.0065
LNR does not Granger Cause LNE        1.85643   0.1727

Source: Author’s calculation based on EViews 6 software.

The hypotheses that have been formulated and tested for pairwise Granger causality exogeneity tests are as follows:

The null hypothesis, H0, states that the logarithmic yearly returns of the general government revenues are not a Granger cause of the general government total expenditures and vice versa.

The alternative hypothesis, H1, states that the logarithmic yearly returns of the general government revenues are a Granger cause of the general government total expenditures and vice versa.

According to Table 18, at the 5% significance level, we find that the logarithmic yearly returns of the general government expenditures Granger-cause the logarithmic yearly returns of the general government revenues, and not vice versa. The p-value is 0.0065.
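The pairwise test of Table 18 corresponds to statsmodels' grangercausalitytests, which takes a two-column array and tests whether the second column Granger-causes the first. The sketch below simulates series in which expenditures genuinely lead revenues, so the test should reject in the same direction as the table.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(7)
lne = np.cumsum(rng.normal(0.0, 0.05, 39))      # synthetic log expenditures
lnr = np.zeros(39)
for t in range(1, 39):                          # revenues respond to lagged expenditures
    lnr[t] = 0.5 * lnr[t - 1] + 0.4 * lne[t - 1] + rng.normal(0.0, 0.02)

# Tests "LNE does not Granger cause LNR" for lags 1 and 2.
grangercausalitytests(np.column_stack([lnr, lne]), maxlag=2)
```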

Section 3 summarizes and concludes.

In this article, we have attempted to model the short- and long-run effects of macroeconomic variables, namely the general government revenues and the general government total expenditures. We have applied a Vector Error Correction (VEC) model, a Granger causality test and a Johansen cointegration test to check for a long-term relationship between general government revenues and general government total expenditures in Greece.

The Jarque-Bera χ2 statistics for both dependent variables, measured as the natural logarithmic yearly returns of the general government revenues and the general government expenditures, are not significant at the 5% significance level. The sample evidence suggests that we cannot reject H0 of normality. Then, we applied Dickey-Fuller stationarity tests and found that the time series of both variables are stationary at their first differences, or I(1).

By using the VEC model, we have found that 62% of the disequilibrium, that is, the speed of adjustment towards long-run equilibrium, is corrected each year by changes in the general government revenues. Similarly, 56% of the disequilibrium is corrected each year by changes in the general total government expenditures. The speed of adjustment is not very fast for either variable. For example, the error term, or speed of adjustment, of the variable general total government expenditures has a positive value and is above the equilibrium level. In contrast, the error term of the variable general government revenues has a negative value and is below the equilibrium level. To restore equilibrium, the general total government expenditures have to decrease and the general government revenues have to increase. Actually, this is the current fiscal policy of the Greek government: to better control the expenditures in an effort to increase government revenues.

Then, by applying equation (1) of the VEC model, we have found that the coefficient of the cointegrating term, C(1), is the only one that is statistically significant at the 5% significance level for the dependent variable general government revenues. We performed further tests to check the validity of the VEC model of equation (1) in terms of serial correlation, heteroskedasticity and normality of the residuals. We have found that there is no serial correlation in the residuals, that the residuals of equation (1) are homoskedastic, and that the residuals of the natural logarithmic differences of the yearly returns of the dependent variable, the general government revenues, are normally distributed. We proceeded similarly for equation (2). The coefficient of the cointegrating term, C(5), is the only one that is statistically significant at the 5% significance level for the dependent variable general total government expenditures. Further tests on the validity of the VEC model of equation (2), in terms of serial correlation, heteroskedasticity and normality of the residuals, showed that there is no serial correlation in the residuals, that the residuals of equation (2) are homoskedastic, and that the residuals of the natural logarithmic differences of the yearly returns of the dependent variable, the general total government expenditures, are normally distributed.

We have found a statistically significant long-run relationship between the general government revenues and the general government total expenditures in Greece by using the Johansen cointegration test.


Finally, at the 5% significance level, we have found that the logarithmic yearly returns of the general government expenditures in Greece Granger-cause the logarithmic yearly returns of the general government revenues, and not vice versa.

References

Dickey, D.A. and Fuller, W.A., (1979), "Distribution of the Estimators for Autoregressive Time Series with a Unit Root". Journal of the American Statistical Association, 74, pp. 427-431.

Engle, R.F. and Granger, C.W.J., (1987), "Co-integration and Error Correction: Representation, Estimation, and Testing". Econometrica, 55, pp. 251-276.

Johansen, S., (1991), "Estimation and Hypothesis Testing of Cointegration Vectors in Gaussian Vector Autoregressive Models". Econometrica, 59, pp. 1551-1580.

Johansen, S., (1995), Likelihood-based Inference in Cointegrated Vector Autoregressive Models. Oxford: Oxford University Press.

Johansen, S. and Juselius, K., (1990), "Maximum Likelihood Estimation and Inference on Cointegration - with Applications to the Demand for Money". Oxford Bulletin of Economics and Statistics, 52, pp. 169-210.

Lütkepohl, H., (1991), Introduction to Multiple Time Series Analysis. New York: Springer-Verlag.

Kao, C., (1999), "Spurious Regression and Residual-Based Tests for Cointegration in Panel Data". Journal of Econometrics, 90, pp. 1-44.

MacKinnon, J.G., Haug, A. and Michelis, L., (1999), "Numerical Distribution Functions of Likelihood Ratio Tests for Cointegration". Journal of Applied Econometrics, 14, pp. 563-577.

Osterwald-Lenum, M., (1992), "A Note with Quantiles of the Asymptotic Distribution of the Maximum Likelihood Cointegration Rank Test Statistics". Oxford Bulletin of Economics and Statistics, 54, pp. 461-472.

Pedroni, P., (1999), "Critical Values for Cointegration Tests in Heterogeneous Panels with Multiple Regressors". Oxford Bulletin of Economics and Statistics, 61, pp. 653-670.

Pedroni, P., (2004), "Panel Cointegration: Asymptotic and Finite Sample Properties of Pooled Time Series Tests with an Application to the PPP Hypothesis". Econometric Theory, 20, pp. 597-625.

White, H., (1980), "A Heteroskedasticity-Consistent Covariance Matrix Estimator and a Direct Test for Heteroskedasticity". Econometrica, 48, pp. 817-838.

Application of an Unrestricted Vector Autoregressive system to the term structure of the US interest rates. Evidence from short, medium and long-term yields of the US interest rates.

Preface

In this article, we investigate the effects of the logarithmic returns of macroeconomic variables, namely the seasonally adjusted money supply (M1), the total index of industrial production (IP) and the seasonally adjusted total consumer credit outstanding (CCO), on the logarithmic mean monthly returns of the US term structure of interest rates. We have applied an Unrestricted Vector Autoregressive system to check for exogeneity tests, impulse responses and variance decompositions of the macro factors on the logarithmic mean returns of the 3-month, 5-year and 10-year Treasury with constant maturities. Impulse responses showed that the magnitude of a time-series shock, positive or negative, between two variables gradually decreases and then dies out slowly as the number of periods increases. Variance decompositions showed how the variance shares of the macro factors increase or decrease in percentage terms in relation to the monthly mean returns of the US interest rates. The data that we have used are monthly returns from 01/01/1990 to 01/01/2013, a total of 276 observations. The data were obtained from the Federal Reserve Statistical Release Department, and the symbols of the series are H.6, G.17, G.19, and H.15.

Keywords: Seasonally adjusted money supply, (M1), total index of industrial production, (IP), seasonally adjusted total consumer credit outstanding, (CCO), 3 month Treasury with constant maturity, 5-year and 10-year Treasury with constant maturities, Vector Autoregressive system, block exogeneity tests, impulse - responses, variance decompositions.


Introduction

This article will focus on modeling the effects of the logarithmic monthly returns of the seasonally adjusted money supply (M1), the total index of industrial production (IP) and the seasonally adjusted total consumer credit outstanding (CCO) on the logarithmic mean monthly returns of the US interest rates. By using EViews, the model will be tested to validate the hypotheses that will be formulated.

The vector autoregressive model (VAR) is used to analyze multivariate time series and was developed by Sims (1980). It is a generalization of univariate time series models and a helpful tool for macro analysis. The estimation output helps the researcher to carry out pairwise Granger causality tests, impulse responses (IRs) and variance decompositions (VDs). Specifically, the impulse responses are used to test the magnitude of the shocks between two variables: the first variable generates the innovations and is known as the impulse, and the second variable observes the responses and is known as the response. The impulses are orthogonalized through a transformation based on the inverse of the Cholesky factor of the residual covariance matrix. The variance decomposition shows the effects of a shock and the variation of an endogenous variable in relation to the other variables in the Unrestricted Vector Autoregressive system. The lags of the variables that will be analyzed have to be stationary in order to carry out the joint significance regression tests.
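A minimal statsmodels sketch of this workflow follows: fit a two-variable VAR, run a Granger exogeneity test, and extract orthogonalized impulse responses and variance decompositions. The white-noise returns and the column names rlm1 and rln3 are placeholders for the actual M1 and 3-month series.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(8)
data = pd.DataFrame(rng.normal(0.0, 0.01, size=(276, 2)),
                    columns=["rlm1", "rln3"])           # placeholders for M1 and 3-month returns

res = VAR(data).fit(2)                                  # two lags, as in the pairwise equations
print(res.test_causality("rln3", ["rlm1"], kind="f").summary())  # Granger exogeneity test
irf = res.irf(10)                                       # Cholesky-orthogonalized impulse responses
# irf.plot(orth=True) would draw the response paths over ten periods
res.fevd(10).summary()                                  # prints the variance decompositions
```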

The rest of the paper is organized as follows. Section 1 describes the methodology and the data. Section 2 is an analysis of statistical and econometric tests and Section 3 summarizes and concludes.


1. Methodology and data description

In this article, we are going to use an unrestricted vector autoregressive system (UVAR) to check for pairwise Granger causality exogeneity tests, impulse responses and variance decompositions of the macro factors on the logarithmic mean returns of the 3-month, 5-year and 10-year Treasury with constant maturities. Unrestricted vector autoregressive models have been studied by various researchers, such as Alexander (2003), Brooks (2002), Amisano and Giannini (1997), Boswijk (1995), Christiano, Eichenbaum and Evans (1999), Doornik and Hansen (1994), and Fisher (1932).

By using the lag length criteria, as shown in the econometric tests of Section 2, we have found that two out of the five criteria indicate six lags as the optimal model. Thus, our mathematical notation of the Unrestricted Vector Autoregressive system will include six lags, and then, for simplicity, we are going to use two lags for the pairwise equations. The combinations of the pair equations that have been used are illustrated in equations (1) to (30).
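Lag selection by information criteria can be sketched with statsmodels' select_order, which reports AIC, BIC, FPE and HQIC for each lag so the analyst can see which criteria agree, much like the EViews lag length criteria table. The data below are placeholders.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(9)
data = pd.DataFrame(rng.normal(0.0, 0.01, size=(276, 2)), columns=["rlm1", "rln3"])

sel = VAR(data).select_order(maxlags=8)
print(sel.summary())            # AIC, BIC, FPE, HQIC by lag
print(sel.selected_orders)      # the lag each criterion favours
```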

Let us assume that the logarithmic monthly returns of the money supply, M1, and the logarithmic monthly mean returns of the 3-month Treasury constant maturity (3MTCM) are endogenous variables and are jointly determined by a UVAR, and let a constant be the only exogenous variable. By taking 6 lags of the endogenous variables, the mathematical equations (1) and (2) are as follows:

\ln M1_t = \alpha_{11} \ln M1_{t-1} + \alpha_{12} \ln 3MTCM_{t-1} + b_{11} \ln M1_{t-2} + b_{12} \ln 3MTCM_{t-2} + c_{11} \ln M1_{t-3} + c_{12} \ln 3MTCM_{t-3} + d_{11} \ln M1_{t-4} + d_{12} \ln 3MTCM_{t-4} + e_{11} \ln M1_{t-5} + e_{12} \ln 3MTCM_{t-5} + f_{11} \ln M1_{t-6} + f_{12} \ln 3MTCM_{t-6} + c_1 + \varepsilon_{1t} \quad (1)

\ln 3MTCM_t = \alpha_{21} \ln M1_{t-1} + \alpha_{22} \ln 3MTCM_{t-1} + b_{21} \ln M1_{t-2} + b_{22} \ln 3MTCM_{t-2} + c_{21} \ln M1_{t-3} + c_{22} \ln 3MTCM_{t-3} + d_{21} \ln M1_{t-4} + d_{22} \ln 3MTCM_{t-4} + e_{21} \ln M1_{t-5} + e_{22} \ln 3MTCM_{t-5} + f_{21} \ln M1_{t-6} + f_{22} \ln 3MTCM_{t-6} + c_2 + \varepsilon_{2t} \quad (2)

where \alpha_{ij}, b_{ij}, c_{ij}, d_{ij}, e_{ij}, f_{ij} are the parameters to be estimated, c_i is the constant and \varepsilon_{it} is the error term of the UVAR regression equation.

Let us assume that the logarithmic monthly returns of the money supply, M1, and the logarithmic monthly mean returns of the 5-year Treasury constant maturity (5YTCM) are endogenous variables and are jointly determined by a UVAR, and let a constant be the only exogenous variable. By taking 2 lags of the endogenous variables for simplicity, the mathematical equations (3) and (4) are as follows:

\ln M1_t = \alpha_{11} \ln M1_{t-1} + \alpha_{12} \ln 5YTCM_{t-1} + b_{11} \ln M1_{t-2} + b_{12} \ln 5YTCM_{t-2} + c_1 + \varepsilon_{1t} \quad (3)

\ln 5YTCM_t = \alpha_{21} \ln M1_{t-1} + \alpha_{22} \ln 5YTCM_{t-1} + b_{21} \ln M1_{t-2} + b_{22} \ln 5YTCM_{t-2} + c_2 + \varepsilon_{2t} \quad (4)

where \alpha_{ij}, b_{ij} are the parameters to be estimated, c_i is the constant and \varepsilon_{it} is the error term of the UVAR regression equation.

Let us assume that the logarithmic monthly returns of the money supply, M1, and the logarithmic monthly mean returns of the 10-year Treasury constant maturity (10YTCM) are endogenous variables and are jointly determined by a UVAR, and let a constant be the only exogenous variable. By taking 2 lags of the endogenous variables for simplicity, the mathematical equations (5) and (6) are as follows:

\ln M1_t = \alpha_{11} \ln M1_{t-1} + \alpha_{12} \ln 10YTCM_{t-1} + b_{11} \ln M1_{t-2} + b_{12} \ln 10YTCM_{t-2} + c_1 + \varepsilon_{1t} \quad (5)

\ln 10YTCM_t = \alpha_{21} \ln M1_{t-1} + \alpha_{22} \ln 10YTCM_{t-1} + b_{21} \ln M1_{t-2} + b_{22} \ln 10YTCM_{t-2} + c_2 + \varepsilon_{2t} \quad (6)

where \alpha_{ij}, b_{ij} are the parameters to be estimated, c_i is the constant and \varepsilon_{it} is the error term of the UVAR regression equation.

Let us assume that the logarithmic monthly returns of the industrial production (IP) and the logarithmic monthly mean returns of the 3-month Treasury constant maturity (3MTCM) are endogenous variables and are jointly determined by a UVAR, and let a constant be the only exogenous variable. By taking 2 lags of the endogenous variables for simplicity, the mathematical equations (7) and (8) are as follows:

\ln IP_t = \alpha_{11} \ln IP_{t-1} + \alpha_{12} \ln 3MTCM_{t-1} + b_{11} \ln IP_{t-2} + b_{12} \ln 3MTCM_{t-2} + c_1 + \varepsilon_{1t} \quad (7)

\ln 3MTCM_t = \alpha_{21} \ln IP_{t-1} + \alpha_{22} \ln 3MTCM_{t-1} + b_{21} \ln IP_{t-2} + b_{22} \ln 3MTCM_{t-2} + c_2 + \varepsilon_{2t} \quad (8)

where \alpha_{ij}, b_{ij} are the parameters to be estimated, c_i is the constant and \varepsilon_{it} is the error term of the UVAR regression equation.

Let us assume that the logarithmic monthly returns of the industrial production (IP) and the logarithmic monthly mean returns of the 5-year Treasury constant maturity (5YTCM) are endogenous variables and are jointly determined by a UVAR, and let a constant be the only exogenous variable. By taking 2 lags of the endogenous variables for simplicity, the mathematical equations (9) and (10) are as follows:

\ln IP_t = \alpha_{11} \ln IP_{t-1} + \alpha_{12} \ln 5YTCM_{t-1} + b_{11} \ln IP_{t-2} + b_{12} \ln 5YTCM_{t-2} + c_1 + \varepsilon_{1t} \quad (9)

\ln 5YTCM_t = \alpha_{21} \ln IP_{t-1} + \alpha_{22} \ln 5YTCM_{t-1} + b_{21} \ln IP_{t-2} + b_{22} \ln 5YTCM_{t-2} + c_2 + \varepsilon_{2t} \quad (10)

where \alpha_{ij}, b_{ij} are the parameters to be estimated, c_i is the constant and \varepsilon_{it} is the error term of the UVAR regression equation.

Let us assume that the logarithmic monthly returns of the industrial production (IP) and the logarithmic monthly mean returns of the 10-year Treasury constant maturity (10YTCM) are endogenous variables and are jointly determined by a UVAR, and let a constant be the only exogenous variable. By taking 2 lags of the endogenous variables for simplicity, the mathematical equations (11) and (12) are as follows:

\ln IP_t = \alpha_{11} \ln IP_{t-1} + \alpha_{12} \ln 10YTCM_{t-1} + b_{11} \ln IP_{t-2} + b_{12} \ln 10YTCM_{t-2} + c_1 + \varepsilon_{1t} \quad (11)

\ln 10YTCM_t = \alpha_{21} \ln IP_{t-1} + \alpha_{22} \ln 10YTCM_{t-1} + b_{21} \ln IP_{t-2} + b_{22} \ln 10YTCM_{t-2} + c_2 + \varepsilon_{2t} \quad (12)

where \alpha_{ij}, b_{ij} are the parameters to be estimated, c_i is the constant and \varepsilon_{it} is the error term of the UVAR regression equation.

Let us assume that the logarithmic monthly returns of the seasonally adjusted total consumer credit outstanding (CCO) and the logarithmic monthly mean returns of the 3-month Treasury constant maturity (3MTCM) are endogenous variables and are jointly determined by a UVAR, and let a constant be the only exogenous variable. By taking 2 lags of the endogenous variables for simplicity, the mathematical equations (13) and (14) are as follows:

\ln CCO_t = \alpha_{11} \ln CCO_{t-1} + \alpha_{12} \ln 3MTCM_{t-1} + b_{11} \ln CCO_{t-2} + b_{12} \ln 3MTCM_{t-2} + c_1 + \varepsilon_{1t} \quad (13)

\ln 3MTCM_t = \alpha_{21} \ln CCO_{t-1} + \alpha_{22} \ln 3MTCM_{t-1} + b_{21} \ln CCO_{t-2} + b_{22} \ln 3MTCM_{t-2} + c_2 + \varepsilon_{2t} \quad (14)

where \alpha_{ij}, b_{ij} are the parameters to be estimated, c_i is the constant and \varepsilon_{it} is the error term of the UVAR regression equation.

Let us assume that the logarithmic monthly returns of the seasonally adjusted total consumer credit outstanding (CCO) and the logarithmic monthly mean returns of the 5-year Treasury constant maturity (5YTCM) are endogenous variables and are jointly determined by a UVAR, and let a constant be the only exogenous variable. By taking 2 lags of the endogenous variables for simplicity, the mathematical equations (15) and (16) are as follows:

\ln CCO_t = \alpha_{11} \ln CCO_{t-1} + \alpha_{12} \ln 5YTCM_{t-1} + b_{11} \ln CCO_{t-2} + b_{12} \ln 5YTCM_{t-2} + c_1 + \varepsilon_{1t} \quad (15)

\ln 5YTCM_t = \alpha_{21} \ln CCO_{t-1} + \alpha_{22} \ln 5YTCM_{t-1} + b_{21} \ln CCO_{t-2} + b_{22} \ln 5YTCM_{t-2} + c_2 + \varepsilon_{2t} \quad (16)

where \alpha_{ij}, b_{ij} are the parameters to be estimated, c_i is the constant and \varepsilon_{it} is the error term of the UVAR regression equation.

Let us assume that the logarithmic monthly returns of the seasonally adjusted total consumer credit outstanding (CCO) and the logarithmic monthly mean returns of the 10-year Treasury constant maturity (10YTCM) are endogenous variables and are jointly determined by a UVAR, and let a constant be the only exogenous variable. By taking 2 lags of the endogenous variables for simplicity, the mathematical equations (17) and (18) are as follows:

\ln CCO_t = \alpha_{11} \ln CCO_{t-1} + \alpha_{12} \ln 10YTCM_{t-1} + b_{11} \ln CCO_{t-2} + b_{12} \ln 10YTCM_{t-2} + c_1 + \varepsilon_{1t} \quad (17)

\ln 10YTCM_t = \alpha_{21} \ln CCO_{t-1} + \alpha_{22} \ln 10YTCM_{t-1} + b_{21} \ln CCO_{t-2} + b_{22} \ln 10YTCM_{t-2} + c_2 + \varepsilon_{2t} \quad (18)

where \alpha_{ij}, b_{ij} are the parameters to be estimated, c_i is the constant and \varepsilon_{it} is the error term of the UVAR regression equation.

Let us assume that the logarithmic monthly mean returns of the 3-month Treasury constant maturity (3MTCM) and the logarithmic monthly mean returns of the 5-year Treasury constant maturity (5YTCM) are endogenous variables and are jointly determined by a UVAR, and let a constant be the only exogenous variable. By taking 2 lags of the endogenous variables for simplicity, the mathematical equations (19) and (20) are as follows:

\ln 3MTCM_t = \alpha_{11} \ln 3MTCM_{t-1} + \alpha_{12} \ln 5YTCM_{t-1} + b_{11} \ln 3MTCM_{t-2} + b_{12} \ln 5YTCM_{t-2} + c_1 + \varepsilon_{1t} \quad (19)

\ln 5YTCM_t = \alpha_{21} \ln 3MTCM_{t-1} + \alpha_{22} \ln 5YTCM_{t-1} + b_{21} \ln 3MTCM_{t-2} + b_{22} \ln 5YTCM_{t-2} + c_2 + \varepsilon_{2t} \quad (20)

where \alpha_{ij}, b_{ij} are the parameters to be estimated, c_i is the constant and \varepsilon_{it} is the error term of the UVAR regression equation.

Let us assume that the logarithmic monthly mean returns of the 3-month Treasury constant maturity (3MTCM) and the logarithmic monthly mean returns of the 10-year Treasury constant maturity (10YTCM) are endogenous variables and are jointly determined by a UVAR, and let a constant be the only exogenous variable. By taking 2 lags of the endogenous variables for simplicity, the mathematical equations (21) and (22) are as follows:

\ln 3MTCM_t = \alpha_{11} \ln 3MTCM_{t-1} + \alpha_{12} \ln 10YTCM_{t-1} + b_{11} \ln 3MTCM_{t-2} + b_{12} \ln 10YTCM_{t-2} + c_1 + \varepsilon_{1t} \quad (21)

\ln 10YTCM_t = \alpha_{21} \ln 3MTCM_{t-1} + \alpha_{22} \ln 10YTCM_{t-1} + b_{21} \ln 3MTCM_{t-2} + b_{22} \ln 10YTCM_{t-2} + c_2 + \varepsilon_{2t} \quad (22)

where \alpha_{ij}, b_{ij} are the parameters to be estimated, c_i is the constant and \varepsilon_{it} is the error term of the UVAR regression equation.

Let us assume that the logarithmic monthly mean returns of the 5-year Treasury constant maturity (5YTCM) and the logarithmic monthly mean returns of the 10-year Treasury constant maturity (10YTCM) are endogenous variables and are jointly determined by a UVAR, and let a constant be the only exogenous variable. By taking 2 lags of the endogenous variables for simplicity, the mathematical equations (23) and (24) are as follows:

\ln 5YTCM_t = \alpha_{11} \ln 5YTCM_{t-1} + \alpha_{12} \ln 10YTCM_{t-1} + b_{11} \ln 5YTCM_{t-2} + b_{12} \ln 10YTCM_{t-2} + c_1 + \varepsilon_{1t} \quad (23)

\ln 10YTCM_t = \alpha_{21} \ln 5YTCM_{t-1} + \alpha_{22} \ln 10YTCM_{t-1} + b_{21} \ln 5YTCM_{t-2} + b_{22} \ln 10YTCM_{t-2} + c_2 + \varepsilon_{2t} \quad (24)

where \alpha_{ij}, b_{ij} are the parameters to be estimated, c_i is the constant and \varepsilon_{it} is the error term of the UVAR regression equation.

Let us assume that the logarithmic monthly returns of the seasonally adjusted total consumer credit outstanding (CCO) and the logarithmic monthly returns of the seasonally adjusted money supply (M1) are endogenous variables and are jointly determined by a UVAR, and let a constant be the only exogenous variable. By taking 2 lags of the endogenous variables for simplicity, the mathematical equations (25) and (26) are as follows:

\ln CCO_t = \alpha_{11} \ln CCO_{t-1} + \alpha_{12} \ln M1_{t-1} + b_{11} \ln CCO_{t-2} + b_{12} \ln M1_{t-2} + c_1 + \varepsilon_{1t} \quad (25)

\ln M1_t = \alpha_{21} \ln CCO_{t-1} + \alpha_{22} \ln M1_{t-1} + b_{21} \ln CCO_{t-2} + b_{22} \ln M1_{t-2} + c_2 + \varepsilon_{2t} \quad (26)

where \alpha_{ij}, b_{ij} are the parameters to be estimated, c_i is the constant and \varepsilon_{it} is the error term of the UVAR regression equation.

Let us assume that the logarithmic monthly returns of the seasonally adjusted total consumer credit outstanding (CCO) and the logarithmic monthly returns of the industrial production (IP) are endogenous variables and are jointly determined by a UVAR, and let a constant be the only exogenous variable. By taking 2 lags of the endogenous variables for simplicity, the mathematical equations (27) and (28) are as follows:

\ln CCO_t = \alpha_{11} \ln CCO_{t-1} + \alpha_{12} \ln IP_{t-1} + b_{11} \ln CCO_{t-2} + b_{12} \ln IP_{t-2} + c_1 + \varepsilon_{1t} \quad (27)

\ln IP_t = \alpha_{21} \ln CCO_{t-1} + \alpha_{22} \ln IP_{t-1} + b_{21} \ln CCO_{t-2} + b_{22} \ln IP_{t-2} + c_2 + \varepsilon_{2t} \quad (28)

where \alpha_{ij}, b_{ij} are the parameters to be estimated, c_i is the constant and \varepsilon_{it} is the error term of the UVAR regression equation.

Let us assume that the logarithmic monthly returns of the seasonally adjusted money supply (M1) and the logarithmic monthly returns of the industrial production (IP) are endogenous variables and are jointly determined by a UVAR, and let a constant be the only exogenous variable. By taking 2 lags of the endogenous variables for simplicity, the mathematical equations (29) and (30) are as follows:

\ln M1_t = \alpha_{11} \ln M1_{t-1} + \alpha_{12} \ln IP_{t-1} + b_{11} \ln M1_{t-2} + b_{12} \ln IP_{t-2} + c_1 + \varepsilon_{1t} \quad (29)

\ln IP_t = \alpha_{21} \ln M1_{t-1} + \alpha_{22} \ln IP_{t-1} + b_{21} \ln M1_{t-2} + b_{22} \ln IP_{t-2} + c_2 + \varepsilon_{2t} \quad (30)

where \alpha_{ij}, b_{ij} are the parameters to be estimated, c_i is the constant and \varepsilon_{it} is the error term of the UVAR regression equation.

The log likelihood statistic that is used in the UVAR model is computed assuming a multivariate normal (Gaussian) distribution. According to the EViews User's Guide II, the equation is:

l = -\frac{T}{2} \left\{ k \left( 1 + \log 2\pi \right) + \log \left| \Omega \right| \right\} \quad (31)

The two information criteria in the UVAR model are computed as follows:

AIC = -2l/T + 2n/T \quad (32)

SC = -2l/T + n \log T / T \quad (33)
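As a numerical check on formulas (32) and (33): with the system log likelihood l = 115.3637 reported in Table 6 of the previous article, T = 36 observations and n = 10 estimated parameters, these formulas reproduce the reported AIC of -5.853539 and SC of -5.413673.

```python
import math

def aic(loglik, T, n):
    """Akaike information criterion, equation (32)."""
    return -2 * loglik / T + 2 * n / T

def sc(loglik, T, n):
    """Schwarz criterion, equation (33)."""
    return -2 * loglik / T + n * math.log(T) / T

print(aic(115.3637, 36, 10))   # -5.85353...
print(sc(115.3637, 36, 10))    # -5.41367...
```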

The hypotheses that we are going to formulate and test for pairwise Granger causality exogeneity tests are as follows:

The null hypothesis, H0, states that the macroeconomic variables are not a Granger cause of short, medium and long-term Treasury with constant maturities and vice versa.

The alternative hypothesis, H1, states that the macroeconomic variables are a Granger cause of the short, medium and long-term Treasury with constant maturities, and vice versa.

Descriptive statistics will be displayed, and the Jarque-Bera statistic will be analysed to test for normality. We check for stationarity of the series by applying the Augmented Dickey-Fuller (ADF) test and comparing the test statistic with the critical values.


The data that we have used are monthly returns from 01/01/1990 to 01/01/2013, a total of 276 observations. The data have been derived from money stock measures, industrial production and capacity utilization, consumer credit, and selected interest rates. All the data were obtained from the Federal Reserve Statistical Release Department, and they are denoted by the symbols H.6, G.17, G.19, and H.15. According to the Federal Reserve Statistical Release, the seasonally adjusted money supply (M1) consists of currency outside the US Treasury, Federal Reserve Banks and the vaults of depository institutions; traveller's checks of nonbank issuers; demand deposits at commercial banks less cash items in the process of collection and Federal Reserve float; other checkable deposits; credit union share draft accounts; and demand deposits at thrift institutions. There is disagreement over whether the money supply should be regarded as an exogenous or an endogenous variable. Some monetary economists perceive it as exogenous and not related to interest rates. Others believe that higher interest rates lead to an increase in the money supply. In our study, we will use the money supply as an endogenous variable.

According to the Federal Reserve Statistical Release, the industrial production index (IP) measures the real output of all manufacturing, mining, and electric and gas industries. Manufacturing consists of those industries included in the North American Industry Classification System (NAICS). The index has been constructed from 312 individual series, which are market groups and industry groups. The current formula used to measure IP is the geometric mean of the change in output, calculated using the unit value estimate for the current month and the estimate for the previous month. Production indexes for a restricted number of industries are calculated by dividing estimated nominal output by a corresponding Fisher price index.

According to the Federal Reserve Statistical Release, the seasonally adjusted consumer credit outstanding covers short- and intermediate-term credit extended to individuals, excluding loans secured by real estate.

The returns of the financial series are calculated by taking the log difference of the mean monthly prices of the 3-month, 5-year and 10-year Treasury with constant maturities, and the log difference returns of the macroeconomic factors.

The logarithmic formula that we have used is:

R_t = \ln \left( P_t / P_{t-1} \right) \quad (34)

where R_t is the monthly return for month t, P_t is the closing price for month t, and P_{t-1} is the closing price lagged one period, for month t-1.
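Equation (34) in pandas, on a few illustrative closing prices (the price values are made up purely for demonstration):

```python
import numpy as np
import pandas as pd

prices = pd.Series([4.50, 4.70, 4.60, 4.80])             # illustrative monthly closes P_t
returns = np.log(prices / prices.shift(1)).dropna()       # R_t = ln(P_t / P_{t-1})
print(returns)
```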


Figure 1 shows the fluctuations of the logarithmic monthly returns of the seasonally adjusted money supply (M1) for the period 01/01/1990 to 01/01/2013.

Logarithmic monthly returns of the seasonally adjusted money supply,(M1).

Source: Author’s calculation based on Excel software. Data were obtained from the Federal Reserve Statistical Release Department.

As shown in Figure 1, there was a substantial increase in the money supply in 2011 and 2012. The Federal Open Market Committee decided to purchase $600 billion of longer-term Treasury securities by the end of the second quarter of 2011. The purpose of the asset purchase program is to maximise employment and achieve price stability. The Committee adopted an expansionary monetary policy with low interest rates to reduce the unemployment rate as a result of the recession of 2008.


Figure 2 shows the fluctuations of the logarithmic monthly returns of the seasonally adjusted total consumer credit outstanding for the period 01/01/1990 to 01/01/2013.

Logarithmic monthly returns of the seasonally adjusted total consumer credit outstanding, (CCO).

Source: Author’s calculation based on Excel software. Data were obtained from the Federal Reserve Statistical Release Department.

As shown in Figure 2, during 2008 and 2009 there was a decline, with negative figures, of the total consumer credit outstanding. Household wealth was reduced, and credit supply was tightened by the banks through the adoption of stricter lending standards. There was a sharp contraction in consumer spending. Then, the seasonally adjusted total consumer credit outstanding began to increase from December 2010.


Figure 3 shows the fluctuations of the logarithmic monthly returns of the industrial production (IP) for the period 01/01/1990 to 01/01/2013.

Logarithmic monthly returns of Industrial Production, (IP).

Source: Author’s calculation based on Excel software. Data were obtained from the Federal Reserve Statistical Release Department.

As shown in Figure 3, there was a decrease in industrial production in 2008 and 2009. The US recession of 2008 negatively affected the growth of the economy. Specifically, the index was 100.0078 in March 2008 and 89.5631 in December 2008, a decline of -10.44 percent. There was a contraction in the industrial output of consumer goods, the production of raw materials and manufacturing output. Then, in May 2010, the industrial production index started to rise. There was an obvious increase in all manufacturing sectors.


Figure 4 shows the fluctuations of the logarithmic monthly mean returns of the 3-month Treasury constant maturity for the period 01/01/1990 to 01/01/2013.

Logarithmic monthly mean returns of 3 month Treasury constant maturity.

Source: Author’s calculation based on Excel software. Data were obtained from the Federal Reserve Statistical Release Department.

According to Figure 4, from March 2007 to March 2008 the rate declined from 5.08 percent to 1.22 percent. The expansionary monetary policy that was adopted as a result of the recession of 2008 created low short-term interest rates. The purpose was to foster business growth and credit supply.


Figure 5 shows the fluctuations of the logarithmic monthly returns of the 5-year Treasury constant maturity for the period 01/01/1990 to 01/01/2013.

Logarithmic monthly mean returns of 5-year Treasury constant maturity

Source: Author’s calculation based on Excel software. Data were obtained from the Federal Reserve Statistical Release Department.

According to Figure 5, there were positive and negative fluctuations of the 5-year Treasury constant maturity rate. The rate was 5.026 percent in June 2007, and by December 2012 it had reached 0.66 percent, a drop of -86.86 percent. The expansionary monetary policy that was adopted as a result of the recession of 2008 created low interest rates for the medium term.

Figure 6 shows the fluctuations of the logarithmic monthly mean returns of the 10-year Treasury constant maturity for the period 01/01/1990 to 01/01/2013.

Logarithmic monthly mean returns of 10 - year Treasury constant maturity

Source: Author’s calculation based on Excel software. Data were obtained from the Federal Reserve Statistical Release Department.

According to Figure 6, there were continuous positive and negative fluctuations of the 10-year Treasury constant maturity rate. The rate was 6.097 percent in June 2000, and by December 2012 it had reached 1.6371 percent, a drop of -73.15 percent. The expansionary monetary policy that was adopted as a result of the recession of 2008 created low interest rates for the long-term Treasury rate.

2. Statistical and econometric tests.

Table 1 shows descriptive statistics and normality tests of the logarithmic mean monthly returns of the US interest rates and the logarithmic monthly returns of the macroeconomic factors.

Table 1 displays the Jarque-Bera normality test. RLN3 represents the logarithmic mean monthly returns of the 3-month Treasury constant maturity. RLN5 represents the logarithmic mean monthly returns of the 5-year Treasury constant maturity. RLN10 shows the logarithmic mean monthly returns of the 10-year Treasury constant maturity. RLM1 represents the logarithmic monthly returns of the money supply, M1. RLNIP shows the logarithmic monthly returns of the total index of industrial production. LNCC represents the logarithmic monthly returns of the seasonally adjusted total consumer credit outstanding.

              RLN3       RLN5       RLN10      RLM1       RLNIP      LNCC
Mean         -0.016970  -0.008748  -0.005511   0.004100   0.001695   0.004526
Median        7.03E-05  -0.005260  -0.003825   0.003411   0.002172   0.004349
Maximum       1.283792   0.410626   0.225705   0.059298   0.020992   0.048117
Minimum      -1.677346  -0.362561  -0.317181  -0.032563  -0.043029  -0.008800
Std. Dev.     0.226450   0.092960   0.070552   0.009052   0.006670   0.005267
Skewness     -0.753106   0.014283  -0.278897   1.894240  -1.706891   2.110637
Kurtosis      24.06679   5.494598   4.510561   13.55751   11.56759   19.14710
Jarque-Bera   5129.900   71.57410   29.81866   1446.856   978.1609   3203.302
Probability   0.000000   0.000000   0.000000   0.000000   0.000000   0.000000
Observations  276        276        276        276        276        276

Source: Author’s calculation based on EViews 6 software.
Significant p-value at 5% significance level.

We state the hypotheses as follows:

H0: The log differences of the monthly mean returns of the 3-month, 5-year and 10-year Treasury constant maturities and the log differences of the monthly returns of the macroeconomic factors are normally distributed.

H1: The log differences of the monthly mean returns of the 3-month, 5-year and 10-year Treasury constant maturities and the log differences of the monthly returns of the macroeconomic factors are not normally distributed.

According to Table 1, the Jarque-Bera χ2 statistics for all variables are very significant at the 5% significance level. For example, the logarithmic monthly returns of the seasonally adjusted total consumer credit outstanding show a χ2 statistic of 3203.302, which is very significant, as the p-value is 0.0000. The joint test of the null hypothesis that the sample skewness equals 0 and the sample kurtosis equals 3 is rejected. Thus, we can reject H0 of normality. The distributions of the various variables show excess kurtosis; they are leptokurtic and slightly positively or negatively skewed. For example, the kurtosis of the logarithmic monthly returns of the total index of industrial production is 11.57, which is greater than 3. The coefficient of variation for the same variable, calculated as the standard deviation divided by the mean, is 3.94 percent, compared with a coefficient of variation of 2.21 percent for the logarithmic monthly returns of the money supply, M1. We have also conducted normality tests, correlograms and autocorrelation LM tests on the residuals of the six components, and we have found that the residuals are not normally distributed. Thus, the null hypothesis, H0, concerning normality is rejected at the 5% significance level for all the variables. Correlograms and autocorrelation tests show that, as the number of lags increases, the residual serial correlation is not significant at the 5% significance level. For example, at six lags, the LM-statistic is 42.86 and the p-value is 0.2005, which is not significant, as it is above the 5%, or 0.05, significance level. The only variable whose residuals are normally distributed is RLN10, the logarithmic mean monthly returns of the 10-year Treasury constant maturity. For detailed explanations, see Appendix 1.

Tables 2-7 show the ADF tests of the log differences of the US seasonally adjusted money supply (M1), the total index of industrial production, the seasonally adjusted total consumer credit outstanding, and the 3-month, 5-year and 10-year Treasury constant maturities.

Table 2 shows the ADF test of the monthly log difference of the US seasonally adjusted money supply (M1) for the period 01/01/1990 to 01/01/2013.


ADF Test Statistic   -10.05439    1% Critical Value*   -3.4564
                                  5% Critical Value    -2.8724
                                  10% Critical Value   -2.5725

*MacKinnon critical values for rejection of hypothesis of a unit root.

Augmented Dickey-Fuller Test Equation
Dependent Variable: D(RLM1,2)
Method: Least Squares
Date: 09/03/13  Time: 20:05
Sample (adjusted): 10 277
Included observations: 268 after adjusting endpoints

Variable          Coefficient   Std. Error   t-Statistic   Prob.
D(RLM1(-1))       -4.492907     0.446860     -10.05439     0.0000
D(RLM1(-1),2)      2.562788     0.414627      6.180950     0.0000
D(RLM1(-2),2)      1.752297     0.356906      4.909692     0.0000
D(RLM1(-3),2)      1.199162     0.287056      4.177443     0.0000
D(RLM1(-4),2)      0.658168     0.213936      3.076474     0.0023
D(RLM1(-5),2)      0.237991     0.135026      1.762549     0.0792
D(RLM1(-6),2)      0.088057     0.062595      1.406765     0.1607
C                  0.000100     0.000519      0.192760     0.8473

R-squared            0.839553    Mean dependent var     -2.37E-05
Adjusted R-squared   0.835233    S.D. dependent var      0.020940
S.E. of regression   0.008500    Akaike info criterion  -6.668165
Sum squared resid    0.018784    Schwarz criterion      -6.560971
Log likelihood       901.5341    F-statistic             194.3530
Durbin-Watson stat   2.015268    Prob(F-statistic)       0.000000

Source: Author’s calculation based on EViews 6 software.

At the one per cent level of significance, the critical value of the t-statistic from the Dickey-Fuller table for the first-differenced logarithmic returns is -3.4564. According to Table 2 and the sample evidence, we can reject the null hypothesis of a unit root at the one, five and ten per cent significance levels. The ADF test statistic is -10.05, which is smaller than the critical values (-3.4564, -2.8724, -2.5725). In other words, the monthly log difference of the US seasonally adjusted money supply (M1) is a stationary series at first difference.
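As an illustration only, a hedged sketch of the same unit-root test with statsmodels follows; it is not the author's EViews workflow, and macro_series.csv and its column names are assumed placeholders.

```python
import pandas as pd
from statsmodels.tsa.stattools import adfuller

# Assumed file containing the log differences of the series.
rlm1 = pd.read_csv("macro_series.csv")["RLM1"].dropna()

# ADF regression with a constant and up to six augmenting lags, as in Table 2.
stat, pvalue, usedlag, nobs, crit, icbest = adfuller(rlm1, maxlag=6, regression="c")
print(f"ADF statistic: {stat:.4f} (p-value {pvalue:.4f})")
for level, cv in crit.items():
    print(f"{level} critical value: {cv:.4f}")
```

The unit-root null is rejected when the statistic is more negative than the critical values, as with -10.05 against -3.4564 here.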

Table 3 shows the ADF test of the monthly log difference of the total index of industrial production for the period 01/01/1990 to 01/01/2013.

ADF Test Statistic   -4.354043    1% Critical Value*   -3.4563
                                  5% Critical Value    -2.8724
                                  10% Critical Value   -2.5725

*MacKinnon critical values for rejection of hypothesis of a unit root.

Augmented Dickey-Fuller Test Equation
Dependent Variable: D(RLNIP)
Method: Least Squares
Date: 09/03/13  Time: 20:01
Sample (adjusted): 9 277
Included observations: 269 after adjusting endpoints

Variable          Coefficient   Std. Error   t-Statistic   Prob.
RLNIP(-1)         -0.385679     0.088580     -4.354043     0.0000
D(RLNIP(-1))      -0.546625     0.095198     -5.741976     0.0000
D(RLNIP(-2))      -0.370037     0.099800     -3.707793     0.0003
D(RLNIP(-3))      -0.120089     0.100846     -1.190811     0.2348
D(RLNIP(-4))       0.073570     0.096534      0.762114     0.4467
D(RLNIP(-5))       0.136871     0.084399      1.621710     0.1061
D(RLNIP(-6))       0.141045     0.061900      2.278585     0.0235
C                  0.000660     0.000396      1.666640     0.0968

R-squared            0.481856    Mean dependent var     -9.83E-06
Adjusted R-squared   0.467959    S.D. dependent var      0.008271
S.E. of regression   0.006033    Akaike info criterion  -7.353812
Sum squared resid    0.009500    Schwarz criterion      -7.246906
Log likelihood       997.0877    F-statistic             34.67440
Durbin-Watson stat   2.006337    Prob(F-statistic)       0.000000

Source: Author’s calculation based on EViews 6 software.

At the one per cent level of significance, the critical value of the t-statistic from the Dickey-Fuller table for the log differences is -3.4563. According to Table 3 and the sample evidence, we can reject the null hypothesis of a unit root at the one, five and ten per cent significance levels. The ADF test statistic is -4.35, which is smaller than the critical values (-3.46, -2.87, -2.57). In other words, the log difference of the total index of industrial production is a stationary series.

Table 4 shows the ADF test of the monthly log difference of the seasonally adjusted total consumer credit outstanding for the period 01/01/1990 to 01/01/2013.

ADF Test Statistic   -8.639621    1% Critical Value*   -3.4564
                                  5% Critical Value    -2.8724
                                  10% Critical Value   -2.5725

*MacKinnon critical values for rejection of hypothesis of a unit root.

Augmented Dickey-Fuller Test Equation
Dependent Variable: D(LNCC,2)
Method: Least Squares
Date: 09/03/13  Time: 20:03
Sample (adjusted): 10 277
Included observations: 268 after adjusting endpoints

Variable          Coefficient   Std. Error   t-Statistic   Prob.
D(LNCC(-1))       -3.513592     0.406684     -8.639621     0.0000
D(LNCC(-1),2)      1.656438     0.378032      4.381734     0.0000
D(LNCC(-2),2)      1.031427     0.329107      3.134020     0.0019
D(LNCC(-3),2)      0.576473     0.268388      2.147910     0.0326
D(LNCC(-4),2)      0.271776     0.200671      1.354335     0.1768
D(LNCC(-5),2)      0.088550     0.130420      0.678965     0.4978
D(LNCC(-6),2)      0.026548     0.061972      0.428393     0.6687
C                  3.15E-05     0.000263      0.119686     0.9048

R-squared            0.815298    Mean dependent var      2.27E-07
Adjusted R-squared   0.810325    S.D. dependent var      0.009883
S.E. of regression   0.004304    Akaike info criterion  -8.029018
Sum squared resid    0.004817    Schwarz criterion      -7.921824
Log likelihood       1083.888    F-statistic             163.9534
Durbin-Watson stat   2.005699    Prob(F-statistic)       0.000000

Source: Author's calculation based on EViews 6 software.

At the one per cent level of significance, the critical value of the t-statistic from the Dickey-Fuller table for the first-differenced logarithmic returns is -3.4564. According to Table 4 and the sample evidence, we can reject the null hypothesis of a unit root at the one, five and ten per cent significance levels. The ADF test statistic is -8.64, which is smaller than the critical values (-3.4564, -2.8724, -2.5725). In other words, the log difference of the seasonally adjusted total consumer credit outstanding is a stationary series at first difference.

Table 5 shows the ADF test of the monthly mean log difference of the 3-month Treasury constant maturity for the period 01/01/1990 to 01/01/2013.

ADF Test Statistic   -6.472774    1% Critical Value*   -3.4563
                                  5% Critical Value    -2.8724
                                  10% Critical Value   -2.5725

*MacKinnon critical values for rejection of hypothesis of a unit root.

Augmented Dickey-Fuller Test Equation
Dependent Variable: D(RLN3)
Method: Least Squares
Date: 09/03/13  Time: 20:12
Sample (adjusted): 9 277
Included observations: 269 after adjusting endpoints

Variable          Coefficient   Std. Error   t-Statistic   Prob.
RLN3(-1)          -0.919984     0.142131     -6.472774     0.0000
D(RLN3(-1))        0.299800     0.130816      2.291760     0.0227
D(RLN3(-2))       -0.001907     0.118999     -0.016023     0.9872
D(RLN3(-3))        0.080153     0.106160      0.755015     0.4509
D(RLN3(-4))        0.051031     0.092663      0.550719     0.5823
D(RLN3(-5))       -0.006495     0.072980     -0.088995     0.9292
D(RLN3(-6))        0.016635     0.062000      0.268307     0.7887
C                 -0.016036     0.013244     -1.210806     0.2271

R-squared            0.409799    Mean dependent var     -8.68E-05
Adjusted R-squared   0.393970    S.D. dependent var      0.274589
S.E. of regression   0.213762    Akaike info criterion  -0.218620
Sum squared resid    11.92617    Schwarz criterion      -0.111714
Log likelihood       37.40433    F-statistic             25.88889
Durbin-Watson stat   1.997717    Prob(F-statistic)       0.000000

Source: Author’s calculation based on EViews 6 software.


At the one per cent level of significance, the critical value of the t-statistic from the Dickey-Fuller table is -3.4563. According to Table 5 and the sample evidence, we can reject the null hypothesis of a unit root at the one, five and ten per cent significance levels. The ADF test statistic is -6.47, which is smaller than the critical values (-3.4563, -2.8724, -2.5725). In other words, the monthly log difference of the returns of the 3-month Treasury constant maturity is a stationary series.

Table 6 shows the ADF test of the monthly mean log difference of the 5-year Treasury constant maturity for the period 01/01/1990 to 01/01/2013.

ADF Test Statistic   -9.762106    1% Critical Value*   -3.4563
                                  5% Critical Value    -2.8724
                                  10% Critical Value   -2.5725

*MacKinnon critical values for rejection of hypothesis of a unit root.

Augmented Dickey-Fuller Test Equation
Dependent Variable: D(RLN5)
Method: Least Squares
Date: 09/03/13  Time: 20:12
Sample (adjusted): 9 277
Included observations: 269 after adjusting endpoints

Variable          Coefficient   Std. Error   t-Statistic   Prob.
RLN5(-1)          -1.526023     0.156321     -9.762106     0.0000
D(RLN5(-1))        0.511095     0.138134      3.699997     0.0003
D(RLN5(-2))        0.573164     0.121711      4.709204     0.0000
D(RLN5(-3))        0.526487     0.110618      4.759488     0.0000
D(RLN5(-4))        0.533275     0.097619      5.462816     0.0000
D(RLN5(-5))        0.355052     0.084187      4.217433     0.0000
D(RLN5(-6))        0.230586     0.061120      3.772712     0.0002
C                 -0.014229     0.005610     -2.536363     0.0118

R-squared            0.545295    Mean dependent var     -0.000220
Adjusted R-squared   0.533100    S.D. dependent var      0.129982
S.E. of regression   0.088817    Akaike info criterion  -1.975196
Sum squared resid    2.058872    Schwarz criterion      -1.868290
Log likelihood       273.6639    F-statistic             44.71406
Durbin-Watson stat   1.953588    Prob(F-statistic)       0.000000

Source: Author’s calculation based on EViews 6 software.


At the one per cent level of significance, the critical value of the t-statistic from the Dickey-Fuller table is -3.4563. According to Table 6 and the sample evidence, we can reject the null hypothesis of a unit root at the one, five and ten per cent significance levels. The ADF test statistic is -9.76, which is smaller than the critical values (-3.4563, -2.8724, -2.5725). In other words, the monthly log difference of the 5-year Treasury constant maturity returns is a stationary series.

Table 7 shows the ADF test of the monthly mean log difference of the 10-year Treasury constant maturity for the period 01/01/1990 to 01/01/2013.

ADF Test Statistic   -9.631798    1% Critical Value*   -3.4563
                                  5% Critical Value    -2.8724
                                  10% Critical Value   -2.5725

*MacKinnon critical values for rejection of hypothesis of a unit root.

Augmented Dickey-Fuller Test Equation
Dependent Variable: D(RLN10)
Method: Least Squares
Date: 09/03/13  Time: 20:13
Sample (adjusted): 9 277
Included observations: 269 after adjusting endpoints

Variable          Coefficient   Std. Error   t-Statistic   Prob.
RLN10(-1)         -1.612196     0.167383     -9.631798     0.0000
D(RLN10(-1))       0.560705     0.148396      3.778430     0.0002
D(RLN10(-2))       0.591852     0.130197      4.545823     0.0000
D(RLN10(-3))       0.499509     0.118148      4.227814     0.0000
D(RLN10(-4))       0.556850     0.101390      5.492162     0.0000
D(RLN10(-5))       0.335896     0.086417      3.886928     0.0001
D(RLN10(-6))       0.211280     0.060782      3.476029     0.0006
C                 -0.009894     0.004183     -2.364946     0.0188

R-squared            0.591031    Mean dependent var     -0.000294
Adjusted R-squared   0.580063    S.D. dependent var      0.102685
S.E. of regression   0.066542    Akaike info criterion  -2.552669
Sum squared resid    1.155677    Schwarz criterion      -2.445763
Log likelihood       351.3340    F-statistic             53.88434
Durbin-Watson stat   1.986598    Prob(F-statistic)       0.000000

Source: Author’s calculation based on EViews 6 software.


At the one per cent level of significance, the critical value of the t-statistic from the Dickey-Fuller table is -3.4563. According to Table 7 and the sample evidence, we can reject the null hypothesis of a unit root at the one, five and ten per cent significance levels. The ADF test statistic is -9.63, which is smaller than the critical values (-3.4563, -2.8724, -2.5725). In other words, the monthly log difference of the 10-year Treasury constant maturity is a stationary series.

Graph 1 shows the inverse roots of the AR characteristic polynomial for all the macroeconomic variables and for the 3-month, 5-year and 10-year Treasury constant maturities for the period 01/01/1990 to 01/01/2013.

[Graph 1: Inverse Roots of AR Characteristic Polynomial — all inverse roots plotted inside the unit circle, on axes running from -1.5 to 1.5.]

Source: Author’s calculation based on EViews 6 software.

According to Graph 1, we are trying to verify whether the UVAR model is stationary. All the roots of the polynomial must have an absolute value less than one and reside inside the unit circle. In our case, all the roots are less than one and lie inside the unit circle; therefore, the UVAR model is stationary. The fact that the roots have an absolute value less than one indicates that impulse shocks in the variables will decrease with time.
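A minimal sketch of this stability check with statsmodels is shown below; it assumes `results` is a fitted VARResults object such as the one produced in the estimation sketch given with Table 10 later in the text.

```python
# Stability holds when every eigenvalue of the companion matrix has
# modulus below one, i.e. all inverse roots lie inside the unit circle.
stable = results.is_stable(verbose=True)  # verbose=True prints the moduli
print("UVAR is stationary:", stable)
```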

Table 8 shows the lag length criteria, based on five indicators: LR, the sequential modified LR test statistic; FPE, the final prediction error; AIC, the Akaike information criterion; SC, the Schwarz information criterion; and HQ, the Hannan-Quinn information criterion. The optimal number of lags is selected from the row with the most asterisks (*), each of which marks the lag order selected by a criterion.

VAR Lag Order Selection Criteria
Endogenous variables: RLN3 RLN5 RLN10 RLM1 RLNIP LNCC
Exogenous variables: C
Date: 09/02/13  Time: 16:06
Sample: 2 277
Included observations: 264

Lag   LogL       LR         FPE        AIC         SC          HQ
 0    3808.086   NA         1.25e-20   -28.80368   -28.72241*  -28.77103
 1    3884.980   149.7111   9.15e-21   -29.11349   -28.54459   -28.88489
 2    3965.720   153.5265   6.52e-21   -29.45242   -28.39589   -29.02787
 3    4033.038   124.9466   5.15e-21   -29.68968   -28.14552   -29.06919*
 4    4077.089   79.75885   4.85e-21   -29.75067   -27.71888   -28.93424
 5    4122.149   79.53785   4.55e-21   -29.81931   -27.29989   -28.80693
 6    4172.989   87.43000   4.08e-21*  -29.93173*  -26.92469   -28.72341
 7    4201.400   47.56645   4.35e-21   -29.87424   -26.37956   -28.46997
 8    4237.422   58.67311   4.39e-21   -29.87441   -25.89210   -28.27420
 9    4276.933   62.55840*  4.32e-21   -29.90101   -25.43107   -28.10485
10    4309.473   50.04321   4.50e-21   -29.87480   -24.91723   -27.88270
11    4334.163   36.84772   4.98e-21   -29.78911   -24.34392   -27.60107
12    4353.727   28.30822   5.76e-21   -29.66460   -23.73177   -27.28061

* indicates lag order selected by the criterion
LR: sequential modified LR test statistic (each test at 5% level)
FPE: Final prediction error
AIC: Akaike information criterion
SC: Schwarz information criterion
HQ: Hannan-Quinn information criterion
Source: Author's calculation based on EViews 6 software.

According to Table 8, we initially included 12 lags, as we use monthly observations. Two out of the five criteria indicate 6 lags as the optimal lag order: the final prediction error and the Akaike information criterion, which have values of 4.08e-21 and -29.93173 respectively.
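A comparable lag-order search can be sketched with statsmodels, bearing in mind that it labels the Schwarz criterion BIC; the CSV file name is a hypothetical placeholder.

```python
import pandas as pd
from statsmodels.tsa.api import VAR

data = pd.read_csv("macro_series.csv")[
    ["RLN3", "RLN5", "RLN10", "RLM1", "RLNIP", "LNCC"]
].dropna()

selection = VAR(data).select_order(maxlags=12)  # search up to 12 monthly lags
print(selection.summary())          # AIC, BIC (EViews' SC), FPE and HQIC per lag
print(selection.selected_orders)    # lag chosen by each criterion, e.g. {'aic': 6, ...}
```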

Table 9 displays the Granger Causality, Block Exogeneity Wald tests. RLN3 represents logarithmic mean monthly returns of the 3-month Treasury constant maturity. RLN5 represents logarithmic mean monthly returns of the 5-year Treasury constant maturity. RLN10 shows logarithmic mean monthly returns of the 10-year Treasury constant maturity. RLM1 represents the logarithmic monthly returns of the money supply, M1. RLNIP shows the logarithmic monthly returns of total index industrial production. LNCC represents the logarithmic monthly returns of seasonally adjusted total consumer credit outstanding.

VAR Granger Causality/Block Exogeneity Wald Tests
Date: 09/02/13  Time: 17:38
Sample: 2 277
Included observations: 270

Dependent variable: RLN3
Excluded   Chi-sq      df    Prob.
RLN5       34.10164    6     0.0000
RLN10      40.55861    6     0.0000
RLM1       17.25915    6     0.0084
RLNIP      42.54614    6     0.0000
LNCC       4.007370    6     0.6757
All        147.8891    30    0.0000

Dependent variable: RLN5
Excluded   Chi-sq      df    Prob.
RLN3       6.893101    6     0.3308
RLN10      24.42549    6     0.0004
RLM1       9.717540    6     0.1371
RLNIP      7.935533    6     0.2429
LNCC       6.760889    6     0.3435
All        69.88641    30    0.0001

Dependent variable: RLN10
Excluded   Chi-sq      df    Prob.
RLN3       6.284556    6     0.3921
RLN5       21.12580    6     0.0017
RLM1       7.732456    6     0.2584
RLNIP      9.743712    6     0.1359
LNCC       3.821533    6     0.7008
All        54.06501    30    0.0045

Dependent variable: RLM1
Excluded   Chi-sq      df    Prob.
RLN3       11.11457    6     0.0849
RLN5       21.79020    6     0.0013
RLN10      18.06122    6     0.0061
RLNIP      18.69294    6     0.0047
LNCC       8.383644    6     0.2113
All        80.40004    30    0.0000

Dependent variable: RLNIP
Excluded   Chi-sq      df    Prob.
RLN3       9.367086    6     0.1540
RLN5       10.65656    6     0.0996
RLN10      8.094984    6     0.2312
RLM1       4.349791    6     0.6295
LNCC       5.290518    6     0.5071
All        35.11466    30    0.2385

Dependent variable: LNCC
Excluded   Chi-sq      df    Prob.
RLN3       11.08911    6     0.0857
RLN5       19.00939    6     0.0041
RLN10      17.85957    6     0.0066
RLM1       12.79753    6     0.0464
RLNIP      8.065686    6     0.2333
All        54.15567    30    0.0044

Source: Author's calculation based on EViews 6 software.

The hypotheses formulated and tested for the pairwise Granger causality and block exogeneity tests are as follows:

The null hypothesis, H0, states that the macroeconomic variables do not Granger-cause the short-, medium- and long-term Treasury constant maturities, and vice versa.

The alternative hypothesis, H1, states that the macroeconomic variables do Granger-cause the short-, medium- and long-term Treasury constant maturities, and vice versa.

According to Table 9, at the 5% significance level we have found significant joint causality for the variables in both directions, except for the dependent variable RLNIP, which measures the logarithmic monthly returns of the total index of industrial production, in relation to the other variables. Specifically, the χ² statistic for the joint significance of all the other lagged endogenous variables in the RLNIP equation (the "All" row) was 35.11, with a p-value of 0.24.

We reject the null hypothesis, H0, that the RLN3, RLN5, RLN10, RLM1 and LNCC variables do not Granger-cause the others. In the case of RLNIP, which shows the logarithmic monthly returns of the total index of industrial production, the sample evidence cannot reject the null hypothesis; thus, the variables do not Granger-cause RLNIP. For this reason, we are going to use the RLNIP variable of industrial production in the UVAR model as an exogenous variable, alongside the constant.
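The block-exogeneity Wald tests of Table 9 can be sketched with statsmodels as follows, assuming `results6` is a VAR fitted to all six series; the statistic is the same kind of chi-square Wald test, though small numerical differences from EViews are to be expected.

```python
# Does the block of the other five variables Granger-cause RLN3?
test = results6.test_causality(
    caused="RLN3",
    causing=["RLN5", "RLN10", "RLM1", "RLNIP", "LNCC"],
    kind="wald",
)
print(test.summary())  # chi-square statistic, degrees of freedom and p-value
```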

Table 10 shows the Unrestricted Vector Autoregression Model. RLN3 represents logarithmic mean monthly returns of the 3-month Treasury constant maturity. RLN5 represents logarithmic mean monthly returns of the 5-year Treasury constant maturity. RLN10 shows logarithmic mean monthly returns of the 10-year Treasury constant maturity. RLM1 represents the logarithmic monthly returns of the money supply, M1. RLNIP shows the logarithmic monthly returns of total index industrial production. LNCC represents the logarithmic monthly returns of seasonally adjusted total consumer credit outstanding.

Date: 09/04/13  Time: 17:43
Sample (adjusted): 8 277
Included observations: 270 after adjusting endpoints
Standard errors & t-statistics in parentheses

             RLN3        RLN5        RLN10       RLM1        LNCC
RLN3(-1)     0.463777    0.038338    0.035935   -0.003668    0.003327
            (0.07454)   (0.03360)   (0.02591)   (0.00303)   (0.00159)
            (6.22184)   (1.14110)   (1.38669)  (-1.21045)   (2.09671)

RLN3(-2)    -0.408080   -0.016785   -0.024229    0.001305   -0.002601
            (0.08044)   (0.03626)   (0.02797)   (0.00327)   (0.00171)
           (-5.07301)  (-0.46296)  (-0.86637)   (0.39896)  (-1.51876)

RLN3(-3)     0.051950   -0.038388   -0.021977    0.007711    0.003985
            (0.08405)   (0.03788)   (0.02922)   (0.00342)   (0.00179)
            (0.61810)  (-1.01335)  (-0.75211)   (2.25675)   (2.22707)

RLN3(-4)    -0.091397   -0.014120   -0.026170   -0.004249    0.001714
            (0.08541)   (0.03849)   (0.02969)   (0.00347)   (0.00182)
           (-1.07016)  (-0.36681)  (-0.88139)  (-1.22392)   (0.94252)

RLN3(-5)    -0.026617    0.009879   -0.001270    0.006095    0.000867
            (0.07995)   (0.03604)   (0.02780)   (0.00325)   (0.00170)
           (-0.33292)   (0.27414)  (-0.04569)   (1.87521)   (0.50913)

RLN3(-6)     0.072814   -0.058989   -0.034724    0.002122    0.000169
            (0.07308)   (0.03294)   (0.02541)   (0.00297)   (0.00156)
            (0.99633)  (-1.79081)  (-1.36667)   (0.71431)   (0.10836)

RLN5(-1)     2.476172    0.603355    0.505169   -0.059578    0.002360
            (0.47840)   (0.21563)   (0.16632)   (0.01945)   (0.01018)
            (5.17596)   (2.79816)   (3.03735)  (-3.06344)   (0.23174)

RLN5(-2)    -0.517021    0.283362    0.248317   -0.044438   -0.040546
            (0.50079)   (0.22572)   (0.17410)   (0.02036)   (0.01066)
           (-1.03241)   (1.25538)   (1.42626)  (-2.18279)  (-3.80311)

RLN5(-3)     0.492531    0.301716    0.299267   -0.035337    0.010475
            (0.50794)   (0.22894)   (0.17659)   (0.02065)   (0.01081)
            (0.96966)   (1.31787)   (1.69470)  (-1.71128)   (0.96873)

RLN5(-4)     0.176980   -0.611515   -0.322049    0.013737   -0.012411
            (0.51086)   (0.23025)   (0.17760)   (0.02077)   (0.01088)
            (0.34644)  (-2.65582)  (-1.81331)   (0.66147)  (-1.14118)

RLN5(-5)    -0.415149    0.505353    0.350523    0.001039   -0.002686
            (0.52908)   (0.23847)   (0.18394)   (0.02151)   (0.01126)
           (-0.78466)   (2.11914)   (1.90564)   (0.04831)  (-0.23846)

RLN5(-6)    -0.203319    0.105612    0.235353    0.032915   -0.009947
            (0.52524)   (0.23674)   (0.18260)   (0.02135)   (0.01118)
           (-0.38709)   (0.44611)   (1.28887)   (1.54152)  (-0.88961)

RLN10(-1)   -3.807717   -0.810239   -0.711568    0.078910    0.000248
            (0.61373)   (0.27662)   (0.21337)   (0.02495)   (0.01307)
           (-6.20418)  (-2.92902)  (-3.33492)   (3.16276)   (0.01895)

RLN10(-2)    0.854546   -0.209010   -0.205870    0.062070    0.045434
            (0.66229)   (0.29851)   (0.23025)   (0.02692)   (0.01410)
            (1.29029)  (-0.70018)  (-0.89412)   (2.30542)   (3.22243)

RLN10(-3)   -0.617711   -0.427695   -0.471936    0.034093   -0.019874
            (0.67523)   (0.30434)   (0.23475)   (0.02745)   (0.01437)
           (-0.91482)  (-1.40532)  (-2.01040)   (1.24201)  (-1.38260)

RLN10(-4)   -0.439899    0.711668    0.398747   -0.016380    0.011786
            (0.67825)   (0.30570)   (0.23580)   (0.02757)   (0.01444)
           (-0.64858)   (2.32798)   (1.69106)  (-0.59406)   (0.81630)

RLN10(-5)   -0.025757   -0.939208   -0.677637   -0.007972   -0.001807
            (0.70195)   (0.31639)   (0.24404)   (0.02854)   (0.01494)
           (-0.03669)  (-2.96853)  (-2.77675)  (-0.27936)  (-0.12092)

RLN10(-6)   -0.067311   -0.231810   -0.345822   -0.033191    0.016653
            (0.69719)   (0.31424)   (0.24238)   (0.02834)   (0.01484)
           (-0.09655)  (-0.73768)  (-1.42675)  (-1.17106)   (1.12196)

RLM1(-1)     0.485291   -0.185510   -0.082648   -0.058860    0.046788
            (1.64422)   (0.74109)   (0.57162)   (0.06684)   (0.03500)
            (0.29515)  (-0.25032)  (-0.14459)  (-0.88059)   (1.33669)

RLM1(-2)    -1.411003    1.054460    0.470623    0.067822    0.000566
            (1.62731)   (0.73347)   (0.56575)   (0.06615)   (0.03464)
           (-0.86708)   (1.43764)   (0.83186)   (1.02520)   (0.01634)

RLM1(-3)    -5.509387   -1.072746   -0.884936    0.246350    0.009882
            (1.65343)   (0.74524)   (0.57483)   (0.06722)   (0.03520)
           (-3.33209)  (-1.43946)  (-1.53948)   (3.66505)   (0.28074)

RLM1(-4)    -0.421862   -0.308028   -0.124654   -0.008912    0.054462
            (1.66688)   (0.75130)   (0.57950)   (0.06776)   (0.03549)
           (-0.25308)  (-0.40999)  (-0.21510)  (-0.13151)   (1.53478)

RLM1(-5)    -0.400787   -1.552201   -0.883213    0.156740   -0.081706
            (1.60371)   (0.72283)   (0.55754)   (0.06519)   (0.03414)
           (-0.24991)  (-2.14740)  (-1.58413)   (2.40419)  (-2.39322)

RLM1(-6)     3.835610    0.860345    0.744680    0.237824    0.014704
            (1.65069)   (0.74401)   (0.57387)   (0.06710)   (0.03514)
            (2.32364)   (1.15637)   (1.29764)   (3.54408)   (0.41842)

LNCC(-1)     3.326889   -0.222112    0.171855    0.003817    0.106318
            (3.11429)   (1.40368)   (1.08270)   (0.12660)   (0.06630)
            (1.06827)  (-0.15823)   (0.15873)   (0.03015)   (1.60362)

LNCC(-2)     2.178177    0.453103   -0.307083   -0.011932    0.238585
            (3.09414)   (1.39460)   (1.07570)   (0.12578)   (0.06587)
            (0.70397)   (0.32490)  (-0.28547)  (-0.09486)   (3.62205)

LNCC(-3)    -2.350621    1.260648    0.529375   -0.017843    0.094249
            (3.15817)   (1.42346)   (1.09796)   (0.12839)   (0.06723)
           (-0.74430)   (0.88562)   (0.48214)  (-0.13898)   (1.40182)

LNCC(-4)    -3.593158    1.965383    1.182607   -0.192501    0.188646
            (3.09674)   (1.39578)   (1.07660)   (0.12589)   (0.06593)
           (-1.16030)   (1.40809)   (1.09846)  (-1.52912)   (2.86151)

LNCC(-5)    -2.232457   -3.190555   -2.022610    0.175060    0.110959
            (3.06462)   (1.38130)   (1.06544)   (0.12458)   (0.06524)
           (-0.72846)  (-2.30982)  (-1.89839)   (1.40516)   (1.70073)

LNCC(-6)     1.210720   -0.455374    0.025999   -0.184496    0.126581
            (3.11062)   (1.40203)   (1.08143)   (0.12645)   (0.06622)
            (0.38922)  (-0.32480)   (0.02404)  (-1.45899)   (1.91150)

C           -0.007280   -0.006901   -0.003303    0.003007    0.000240
            (0.02500)   (0.01127)   (0.00869)   (0.00102)   (0.00053)
           (-0.29113)  (-0.61229)  (-0.37992)   (2.95773)   (0.45082)

RLNIP        3.791696    0.966689    0.811968   -0.248352    0.098049
            (1.89349)   (0.85344)   (0.65829)   (0.07698)   (0.04031)
            (2.00249)   (1.13270)   (1.23346)  (-3.22639)   (2.43238)

R-squared        0.396979    0.268849    0.242781    0.375776    0.491295
Adj. R-squared   0.318434    0.173615    0.144151    0.294469    0.425035
Sum sq. resids   8.491171    1.724999    1.026289    0.014033    0.003848
S.E. equation    0.188884    0.085135    0.065667    0.007679    0.004021
F-statistic      5.054166    2.823037    2.461544    4.621715    7.414658
Log likelihood   83.90494    299.0680    369.1705    948.6329    1123.292
Akaike AIC      -0.384481   -1.978281   -2.497559   -6.789873   -8.083643
Schwarz SC       0.041999   -1.551802   -2.071079   -6.363394   -7.657163
Mean dependent  -0.017501   -0.009202   -0.005916    0.004123    0.004584
S.D. dependent   0.228792    0.093652    0.070982    0.009142    0.005303

Determinant Residual Covariance   3.18E-17
Log Likelihood                    3212.693
Akaike Information Criteria      -22.61254
Schwarz Criteria                 -20.48015

Source: Author’s calculation based on EViews 6 software.

According to Table 10, under the column of every variable the estimated coefficient, its standard error and its t-statistic are reported. For example, the coefficient of RLM1(-6) in the RLN3 equation is 3.84, with a significant t-statistic of 2.32. The table then shows the standard OLS regression statistics for each equation. The log likelihood for the first column, the 3-month Treasury constant maturity equation, is 83.90. The two information criteria, the Akaike AIC and the Schwarz SC, are used for model selection; we prefer the model that provides the smaller values of the information criterion. In our case, we select the fifth column, which represents the logarithmic monthly returns of seasonally adjusted total consumer credit outstanding; its values for the Akaike AIC and the Schwarz SC are -8.08 and -7.66 respectively. All the regression equations showed a significant F-statistic. Let us take as an example regression equation (1), explained in the methodological section, to show how one variable responds to changes in another.

\ln M1_t = a_{11} M1_{t-1} + a_{12}\,3MTCM_{t-1} + b_{11} M1_{t-2} + b_{12}\,3MTCM_{t-2} + c_{11} M1_{t-3} + c_{12}\,3MTCM_{t-3} + d_{11} M1_{t-4} + d_{12}\,3MTCM_{t-4} + e_{11} M1_{t-5} + e_{12}\,3MTCM_{t-5} + f_{11} M1_{t-6} + f_{12}\,3MTCM_{t-6} + c_1 + \varepsilon_{1t}

where a_{ij}, b_{ij}, c_{ij}, d_{ij}, e_{ij} and f_{ij} are the parameters to be estimated, c_1 is the constant term and \varepsilon_{1t} is the error term of the UVAR regression equation. The parentheses below the coefficients report the t-statistics.

\ln M1_t = 0.49 M1_{t-1} + 0.46\,3MTCM_{t-1} - 1.41 M1_{t-2} - 0.41\,3MTCM_{t-2} - 5.51 M1_{t-3} + 0.05\,3MTCM_{t-3}
            (0.30)          (6.22)             (-0.87)         (-5.07)             (-3.33)         (0.62)
          - 0.42 M1_{t-4} - 0.09\,3MTCM_{t-4} - 0.40 M1_{t-5} - 0.03\,3MTCM_{t-5} + 3.84 M1_{t-6} + 0.07\,3MTCM_{t-6} - 0.007 + \varepsilon_{1t}
            (-0.25)         (-1.07)             (-0.25)         (-0.33)             (2.32)          (0.996)            (-0.29)

The t-statistic of the constant in equation (1) is not significant. The t-statistics of the lagged values of the money supply and of the 3-month Treasury constant maturity decline with time. We will examine this relationship, and the relationships among the other variables, in more detail by using the impulse-response and variance decomposition graphs.
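For replication outside EViews, a minimal sketch of estimating this UVAR with statsmodels follows, with RLNIP entering as an exogenous regressor as the text describes; macro_series.csv is a hypothetical placeholder, and the printed layout differs from Table 10.

```python
import pandas as pd
from statsmodels.tsa.api import VAR

df = pd.read_csv("macro_series.csv").dropna()
endog = df[["RLN3", "RLN5", "RLN10", "RLM1", "LNCC"]]
exog = df[["RLNIP"]]                      # industrial production as exogenous

results = VAR(endog, exog=exog).fit(6)    # six lags, per the selection criteria
print(results.summary())                  # coefficients, std. errors, t-statistics
```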


Graph 2 displays the impulse-response functions, and appendix 2 shows the impulse-response tables and the remaining graphs. RLN3 represents the logarithmic mean monthly returns of the 3-month Treasury constant maturity. RLN5 represents the logarithmic mean monthly returns of the 5-year Treasury constant maturity. RLN10 shows the logarithmic mean monthly returns of the 10-year Treasury constant maturity. RLM1 represents the logarithmic monthly returns of the money supply, M1. LNCC represents the logarithmic monthly returns of seasonally adjusted total consumer credit outstanding. RLNIP, which shows the logarithmic monthly returns of the total index of industrial production, was excluded from the impulse-response graphs as an exogenous variable.


[Graph 2: Response to Cholesky One S.D. Innovations — six panels over a 23-period horizon: Response of RLM1 to RLN3, RLN5 and RLN10, and Response of LNCC to RLN3, RLN5 and RLN10.]

Source: Author’s calculation based on EViews 6 software.

Graph 2 shows the innovations, or impulses, of the 3-month, 5-year and 10-year Treasury constant maturities and the responses of the macroeconomic variables RLM1 and LNCC. As can be seen from all the graphs, owing to the stationarity of the variables the magnitude of a shock, positive or negative, between two variables gradually decreases and then dies off slowly as time passes. For example, the impulse-response of RLM1 to RLN10 started with a hump-shaped negative shock and then gradually declined close to zero by the 23rd period. According to appendix 2, the impulse response was -3.05E-05 in the first period and 1.51E-05 in the 23rd period. Another example is the impulse-response of LNCC to RLN5: it started with an inverse hump-shaped positive shock and then gradually declined, from 0.000824 in the first period to -9.99E-05 by the 23rd period. The 23 periods of the impulse-response horizon correspond to the 23 years of the sample rather than to the monthly observations; the dataset covers 23 years, or 23 × 12 = 276 monthly observations.
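A sketch of the impulse-response computation behind Graph 2 is given below; it assumes the fitted `results` object from the Table 10 sketch, whose column order matches the Cholesky ordering reported in appendix 2 (RLN3 RLN5 RLN10 RLM1 LNCC).

```python
irf = results.irf(23)  # trace the responses over a 23-period horizon

# Orthogonalized (Cholesky) responses: index order is [period, response, impulse],
# so [0, 3, 2] is the impact response of RLM1 (index 3) to an RLN10 shock (index 2).
print(irf.orth_irfs[0, 3, 2])

irf.plot(orth=True, impulse="RLN10", response="RLM1")  # one panel of Graph 2
```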

Graph 3 shows the variance decompositions, and appendix 3 displays the variance decomposition tables. RLN3 represents the logarithmic mean monthly returns of the 3-month Treasury constant maturity. RLN5 represents the logarithmic mean monthly returns of the 5-year Treasury constant maturity. RLN10 shows the logarithmic mean monthly returns of the 10-year Treasury constant maturity. RLM1 represents the logarithmic monthly returns of the money supply, M1. RLNIP, which shows the logarithmic monthly returns of the total index of industrial production, was excluded from the variance decomposition (VD) graphs as an exogenous variable. LNCC represents the logarithmic monthly returns of seasonally adjusted total consumer credit outstanding.


[Graph 3: Variance Decomposition — ten panels over a 23-period horizon, each on a 0-100 percent scale: Percent RLM1 variance due to RLN3, RLN5, RLN10, RLM1 and LNCC, and Percent LNCC variance due to RLN3, RLN5, RLN10, RLM1 and LNCC.]

Source: Author’s calculation based on EViews 6 software.

According to Graph 3 and appendix 3, the variance decompositions show the proportion of the movements in the dependent variables that are due to their own shocks rather than to shocks to the other variables. For example, the own-shock share of the variance of RLN3 decreased from 100% to 74.70%. The share for RLN5 decreased from 83.96% to 69.92%. The share for RLN10 increased from 8.87% to 16.45%. The share for RLM1 decreased from 83.66% to 72.41% and, finally, the share for LNCC decreased from 90.25% to 81.02%. The proportion of the variance attributable to the macro factors increases or decreases as the number of periods increases. For example, the percent of the LNCC variance due to RLN3 increased from 0.65% in the 1st period to 6.31% in the 23rd period, an increase of 5.66 percentage points; see appendix 3. Another example is the percent of the RLM1 variance due to RLN3, which decreased from 14.93% in the 1st period to 13.00% in the 23rd period, a drop of 1.93 percentage points. A final example is the percent of the RLM1 variance due to RLN10, which increased from 0.002% in the 1st period to 5.82% in the 23rd period, an increase of roughly 5.82 percentage points.
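The decomposition itself can be sketched with statsmodels from the same fitted `results` object; the shares are computed under the Cholesky ordering noted in appendix 3.

```python
fevd = results.fevd(23)   # forecast-error variance decomposition, 23 periods
print(fevd.summary())     # per-period share of each equation's variance by shock
fevd.plot()               # decomposition plots analogous to Graph 3
```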

3. Summary and Conclusions


In this article, we have attempted to model the effects of macroeconomic variables, namely the seasonally adjusted money supply (M1), the total index of industrial production (IP) and the seasonally adjusted total consumer credit outstanding (CCO), on the logarithmic mean monthly returns of the US term structure of interest rates. We have applied an Unrestricted Vector Autoregression system to carry out exogeneity tests, impulse-responses and variance decompositions.

We have found that the Jarque-Bera χ² statistics for all variables are highly significant at the 5% significance level. We have rejected the null hypothesis, H0, in favour of the alternative, H1. In other words, the log differences of the monthly mean returns of the 3-month, 5-year and 10-year Treasury constant maturities and the log differences of the monthly returns of the macroeconomic factors are not normally distributed. All the variables proved to be stationary, and the fact that the roots have an absolute value less than one indicates that impulse shocks in the variables will decrease with time. Two out of the five criteria indicated 6 lags as the optimal order for the UVAR model to be constructed.

We rejected the null hypothesis, H0, that the RLN3, RLN5, RLN10, RLM1 and LNCC variables do not Granger-cause the others. In the case of RLNIP, which shows the logarithmic monthly returns of the total index of industrial production, the sample evidence cannot reject the null hypothesis; thus, there is no Granger causality between RLNIP and the other variables.

Then, we constructed impulse-response tables and their associated graphs based on the EViews results. The innovations, or impulses, were the 3-month, 5-year and 10-year Treasury constant maturities, and the responses were those of the macroeconomic variables RLM1 and LNCC. In all the graphs, owing to the stationarity of the variables, the magnitude of a positive or negative shock between two variables gradually decreases and then dies off slowly as the periods increase. Finally, the variance decomposition was illustrated through tables and graphs. It showed the proportion of the variance movements in the dependent variables that are due to their own shocks. For example, the variance of the logarithmic mean monthly returns of the 5-year Treasury constant maturity, RLN5, decreased from 83.96% to 69.92%.

In a future article, we will construct confidence intervals around the impulse-responses and variance decompositions to interpret the results better.


Appendix 1 shows the orthogonalized residual normality tests. Component 1 represents RLN3, the logarithmic mean monthly returns of the 3-month Treasury constant maturity. Component 2 represents RLN5, the logarithmic mean monthly returns of the 5-year Treasury constant maturity. Component 3 represents RLNIP, which shows the logarithmic monthly returns of the total index of industrial production. Component 4 represents RLN10, which shows the logarithmic mean monthly returns of the 10-year Treasury constant maturity. Component 5 represents RLM1, the logarithmic monthly returns of the money supply, M1. Component 6 represents LNCC, which shows the logarithmic monthly returns of seasonally adjusted total consumer credit outstanding.

VAR Residual Normality Tests
Orthogonalization: Cholesky (Lutkepohl)
Null Hypothesis: residuals are multivariate normal
Date: 09/05/13  Time: 08:28
Sample: 2 277
Included observations: 270

Component   Skewness    Chi-sq      df   Prob.
1            0.286335   3.689444    1    0.0548
2            0.337953   5.139559    1    0.0234
3           -0.635545   18.17629    1    0.0000
4            0.099271   0.443463    1    0.5055
5            0.402610   7.294268    1    0.0069
6            1.607438   116.2736    1    0.0000
Joint                   151.0166    6    0.0000

Component   Kurtosis    Chi-sq      df   Prob.
1            7.971573   278.0610    1    0.0000
2            4.356658   20.70585    1    0.0000
3            6.218751   116.5540    1    0.0000
4            3.134573   0.203735    1    0.6517
5            6.539411   140.9336    1    0.0000
6            13.50198   1240.780    1    0.0000
Joint                   1797.238    6    0.0000

Component   Jarque-Bera   df   Prob.
1           281.7505      2    0.0000
2           25.84541      2    0.0000
3           134.7303      2    0.0000
4           0.647198      2    0.7235
5           148.2278      2    0.0000
6           1357.053      2    0.0000
Joint       1948.254      12   0.0000

Source: Author's calculation based on EViews 6 software.

The attached graphs show the pairwise cross – correlograms for the estimated residuals in the UVAR for 6 lags.

[Graph: Autocorrelations with 2 Std. Err. Bounds — a 6 × 6 grid of pairwise cross-correlograms Cor(X, Y(-i)) for X, Y in {RLN3, RLN5, RLN10, RLM1, RLNIP, LNCC} at lags 1-6, on a -0.2 to 0.2 scale.]

Source: Author’s calculation based on EViews 6 software.


The attached table displays the multivariate LM test statistics for residual serial correlation for 6 lags.

VAR Residual Serial Correlation LM Tests
Null Hypothesis: no serial correlation at lag order h
Date: 09/05/13  Time: 09:32
Sample: 2 277
Included observations: 270

Lags   LM-Stat     Prob
1      50.03072    0.0601
2      65.90542    0.0017
3      54.28053    0.0258
4      69.04899    0.0008
5      45.01862    0.1440
6      42.86275    0.2005

Probs from chi-square with 36 df.
Source: Author's calculation based on EViews 6 software.
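statsmodels does not reproduce EViews' multivariate LM statistic directly, but a related residual whiteness (Portmanteau) test can be sketched as follows on the fitted `results` object; treat it as a substitute diagnostic, not the same statistic.

```python
# nlags must exceed the VAR order (6 here), so lag 12 is a natural choice.
white = results.test_whiteness(nlags=12, signif=0.05, adjusted=True)
print(white.summary())  # joint test of no residual autocorrelation up to lag 12
```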


Appendix 2 shows the impulse-response tables and graphs. RLN3 represents the logarithmic mean monthly returns of the 3-month Treasury constant maturity. RLN5 shows the logarithmic mean monthly returns of the 5-year Treasury constant maturity. RLN10 shows the logarithmic mean monthly returns of the 10-year Treasury constant maturity. RLM1 represents the logarithmic monthly returns of the money supply, M1. LNCC shows the logarithmic monthly returns of seasonally adjusted total consumer credit outstanding.

Response of RLM1 (standard errors in parentheses):

Period   RLN3                  RLN5                  RLN10
  1     -0.002786 (0.00042)   -0.000856 (0.00040)   -3.05E-05 (0.00040)
  2     -0.000496 (0.00045)   -7.05E-05 (0.00044)    0.001448 (0.00043)
  3     -8.68E-05 (0.00044)   -2.43E-05 (0.00044)    0.001162 (0.00044)
  4      0.000508 (0.00045)   -0.001156 (0.00044)    0.000401 (0.00043)
  5     -0.000576 (0.00045)   -0.000476 (0.00044)   -0.000466 (0.00044)
  6     -0.000300 (0.00044)   -0.000314 (0.00044)    0.000120 (0.00045)
  7      0.000251 (0.00044)    0.000263 (0.00046)   -0.000481 (0.00047)
  8     -0.000466 (0.00033)    0.000149 (0.00029)   -2.04E-05 (0.00028)
  9      1.02E-05 (0.00027)    7.70E-05 (0.00028)    0.000328 (0.00026)
 10      0.000428 (0.00025)    7.67E-05 (0.00027)   -1.26E-05 (0.00024)
 11     -0.000304 (0.00024)   -0.000314 (0.00024)    0.000262 (0.00022)
 12     -4.96E-05 (0.00022)   -0.000285 (0.00023)    1.29E-05 (0.00019)
 13     -0.000229 (0.00021)   -0.000286 (0.00022)   -0.000159 (0.00017)
 14     -0.000496 (0.00018)   -0.000214 (0.00017)    0.000209 (0.00014)
 15     -8.03E-05 (0.00016)   -0.000106 (0.00016)   -2.73E-05 (0.00014)
 16     -0.000148 (0.00016)   -4.00E-05 (0.00015)    5.82E-05 (0.00012)
 17     -0.000131 (0.00014)   -1.87E-06 (0.00014)    0.000138 (0.00011)
 18      4.08E-05 (0.00013)    8.52E-05 (0.00014)    4.39E-05 (0.00011)
 19     -0.000111 (0.00013)    3.84E-06 (0.00013)    2.80E-05 (9.9E-05)
 20     -0.000152 (0.00011)    6.36E-05 (0.00012)    3.63E-05 (8.7E-05)
 21     -9.41E-05 (0.00010)   -1.29E-05 (0.00011)   -3.23E-05 (8.1E-05)
 22     -0.000176 (9.9E-05)    9.69E-08 (0.00010)   -1.41E-05 (7.0E-05)
 23     -0.000173 (8.9E-05)   -4.30E-06 (9.5E-05)    1.51E-05 (6.4E-05)

Response of LNCC (standard errors in parentheses):

Period   RLN3                  RLN5                  RLN10
  1      0.000305 (0.00023)    0.000824 (0.00023)   -0.000703 (0.00022)
  2      0.000574 (0.00023)    0.000234 (0.00023)   -7.16E-05 (0.00022)
  3     -0.000308 (0.00024)   -0.000379 (0.00023)    0.000448 (0.00023)
  4      0.000345 (0.00024)   -0.000186 (0.00023)   -0.000191 (0.00023)
  5      0.000352 (0.00024)   -0.000407 (0.00024)    8.69E-05 (0.00024)
  6      0.000403 (0.00024)   -0.000492 (0.00024)   -0.000157 (0.00024)
  7      0.000179 (0.00024)   -5.51E-06 (0.00025)    0.000115 (0.00025)
  8      0.000131 (0.00018)   -0.000272 (0.00017)    1.03E-05 (0.00016)
  9      0.000306 (0.00015)    3.90E-05 (0.00017)   -7.74E-06 (0.00015)
 10      0.000273 (0.00014)   -0.000295 (0.00016)   -2.46E-05 (0.00013)
 11      0.000308 (0.00014)   -5.61E-05 (0.00016)    8.55E-05 (0.00013)
 12      0.000241 (0.00014)   -0.000190 (0.00015)    2.46E-05 (0.00012)
 13      0.000224 (0.00012)   -9.83E-05 (0.00014)    6.49E-05 (9.9E-05)
 14      0.000208 (0.00011)   -0.000220 (0.00012)   -0.000101 (9.3E-05)
 15      7.45E-05 (0.00010)   -0.000159 (0.00012)    5.31E-05 (9.0E-05)
 16      0.000190 (0.00010)   -0.000202 (0.00012)   -7.58E-06 (8.1E-05)
 17      0.000162 (9.8E-05)   -9.57E-05 (0.00012)    2.47E-05 (7.8E-05)
 18      0.000153 (9.2E-05)   -0.000169 (0.00011)    4.26E-05 (7.0E-05)
 19      0.000186 (8.8E-05)   -8.78E-05 (0.00010)    1.66E-05 (6.9E-05)
 20      0.000132 (8.3E-05)   -0.000144 (9.8E-05)    1.79E-05 (6.6E-05)
 21      0.000127 (7.8E-05)   -9.40E-05 (9.4E-05)    3.28E-05 (6.1E-05)
 22      0.000144 (7.6E-05)   -0.000122 (9.0E-05)   -1.32E-05 (5.8E-05)
 23      9.60E-05 (7.3E-05)   -9.99E-05 (8.6E-05)    2.15E-05 (5.4E-05)

Cholesky Ordering: RLN3 RLN5 RLN10 RLM1 LNCC

Source: Author’s calculation based on EViews 6 software.


[Graphs: Response to One S.D. Innovations ± 2 S.E. — a 5 × 5 grid of panels over the forecast horizon, showing the response of each of RLN3, RLN5, RLN10, RLM1 and LNCC to an innovation in each of the five variables.]

Source: Author’s calculation based on EViews 6 software.


Appendix 3 shows the variance decomposition tables and their graphs. RLN3 represents the logarithmic mean monthly returns of the 3-month Treasury constant maturity. RLN5 shows the logarithmic mean monthly returns of the 5-year Treasury constant maturity. RLN10 shows the logarithmic mean monthly returns of the 10-year Treasury constant maturity. RLM1 represents the logarithmic monthly returns of the money supply, M1. LNCC shows the logarithmic monthly returns of seasonally adjusted total consumer credit outstanding.

Variance Decomposition of RLN3:
Period   S.E.       RLN3       RLN5       RLN10      RLM1       LNCC
  1      0.188884   100.0000   0.000000   0.000000   0.000000   0.000000
  2      0.218559   86.21577   1.036778   12.39962   0.009723   0.338109
  3      0.222236   86.05002   1.133694   12.05206   0.193459   0.570764
  4      0.227019   82.88804   1.462946   12.18694   2.872985   0.589095
  5      0.227522   82.52764   1.456973   12.13412   3.017844   0.863423
  6      0.234621   78.25319   6.365851   11.47242   2.838062   1.070479
  7      0.237156   76.62955   7.618428   11.27195   3.368769   1.111300
  8      0.238340   75.95021   7.701051   11.84266   3.337412   1.168662
  9      0.239058   75.49572   7.744882   12.04522   3.548271   1.165909
 10      0.239498   75.22393   7.735826   12.04264   3.729866   1.267733
 11      0.239706   75.09333   7.816822   12.02830   3.740877   1.320674
 12      0.239859   75.04736   7.868141   12.02589   3.736437   1.322174
 13      0.240157   74.89201   7.953534   12.00024   3.800148   1.354060
 14      0.240272   74.84630   7.971255   12.00594   3.796825   1.379681
 15      0.240350   74.79800   7.983874   11.99823   3.839462   1.380438
 16      0.240414   74.76806   8.009390   11.99233   3.850383   1.379842
 17      0.240462   74.75110   8.028543   11.99019   3.849191   1.380976
 18      0.240520   74.72503   8.059063   11.98545   3.849601   1.380855
 19      0.240534   74.71634   8.063058   11.98442   3.855322   1.380855
 20      0.240535   74.71578   8.063133   11.98484   3.855304   1.380942
 21      0.240551   74.70586   8.065824   11.99012   3.856623   1.381580
 22      0.240566   74.70460   8.068680   11.98877   3.856208   1.381741
 23      0.240573   74.70332   8.070570   11.98817   3.856254   1.381693

Variance Decomposition of RLN5:
Period   S.E.       RLN3       RLN5       RLN10      RLM1       LNCC
  1      0.085135   16.03689   83.96311   0.000000   0.000000   0.000000
  2      0.086909   16.14380   80.57563   3.251359   0.019680   0.009531
  3      0.087966   15.76259   79.93669   3.602403   0.673910   0.024413
  4      0.089102   15.73795   77.99075   4.347889   1.363416   0.559997
  5      0.091483   15.17507   74.00765   7.974948   1.322630   1.519704
  6      0.095377   13.99306   72.39490   9.794052   1.918132   1.899860
  7      0.097276   16.43440   70.33244   9.415898   1.921306   1.895953
  8      0.097942   16.35744   70.32970   9.530793   1.911185   1.870883
  9      0.098225   16.33695   70.30628   9.558314   1.905074   1.893375
 10      0.098283   16.31796   70.30032   9.571765   1.917484   1.892473
 11      0.098885   16.42182   70.31841   9.456705   1.928539   1.874522
 12      0.099020   16.55079   70.16740   9.432900   1.979436   1.869472
 13      0.099140   16.51827   70.19905   9.425477   1.975250   1.881957
 14      0.099266   16.49360   70.02356   9.591979   2.013501   1.877360
 15      0.099292   16.52817   69.98899   9.587448   2.012695   1.882697
 16      0.099348   16.51089   69.96334   9.617384   2.027440   1.880940
 17      0.099376   16.54920   69.92686   9.612556   2.031262   1.880125
 18      0.099406   16.54254   69.93020   9.608989   2.033558   1.884716
 19      0.099412   16.54578   69.92698   9.609224   2.033505   1.884514
 20      0.099422   16.54245   69.92530   9.612488   2.035054   1.884703
 21      0.099430   16.54322   69.92453   9.612371   2.035013   1.884865
 22      0.099434   16.54529   69.91912   9.611876   2.038345   1.885366
 23      0.099438   16.54475   69.91922   9.611357   2.038231   1.886439

Variance Decomposition of RLN10:
Period   S.E.       RLN3       RLN5       RLN10      RLM1       LNCC
  1      0.065667   15.19186   75.94124   8.866899   0.000000   0.000000
  2      0.067442   15.22030   72.02121   12.73986   0.009152   0.009475
  3      0.068096   14.93670   71.77935   12.96126   0.262016   0.060671
  4      0.069139   15.11924   69.67642   13.84710   1.047985   0.309244
  5      0.070493   14.96706   67.35286   15.86591   1.008886   0.805286
  6      0.073135   14.01026   66.84137   16.69644   1.344534   1.107402
  7      0.073855   15.34640   65.62953   16.42158   1.509181   1.093308
  8      0.074236   15.20333   65.76202   16.44786   1.498225   1.088558
  9      0.074503   15.15688   65.68395   16.56947   1.505804   1.083896
 10      0.074552   15.14521   65.65553   16.55148   1.565161   1.082623
 11      0.074903   15.17259   65.76055   16.41297   1.573616   1.080282
 12      0.075009   15.28807   65.57739   16.37969   1.670258   1.084587
 13      0.075069   15.27153   65.60075   16.36193   1.678861   1.086928
 14      0.075129   15.24919   65.49769   16.47101   1.696230   1.085878
 15      0.075146   15.26189   65.48881   16.46466   1.695468   1.089173
 16      0.075183   15.24726   65.48803   16.46855   1.707667   1.088497
 17      0.075195   15.26641   65.46718   16.46457   1.713680   1.088152
 18      0.075215   15.26011   65.47060   16.45776   1.720568   1.090969
 19      0.075220   15.26196   65.46889   16.45799   1.720349   1.090815
 20      0.075225   15.26004   65.47059   16.45574   1.722955   1.090672
 21      0.075230   15.25844   65.47224   16.45574   1.722798   1.090772
 22      0.075232   15.26034   65.46811   16.45542   1.725103   1.091040
 23      0.075235   15.25948   65.46934   16.45463   1.725137   1.091415

Variance Decomposition of RLM1:
Period   S.E.       RLN3       RLN5       RLN10      RLM1       LNCC
  1      0.007679   14.93201   1.408790   0.001793   83.65741   0.000000
  2      0.007861   14.69884   1.353274   3.849742   80.09780   0.000344
  3      0.007975   14.29375   1.315793   6.147471   78.24247   0.000519
  4      0.008253   13.77743   3.455566   6.009316   76.69831   0.059381
  5      0.008350   13.99751   3.743413   6.223776   74.94617   1.089135
  6      0.008433   13.86839   3.828040   6.125477   74.93354   1.244554
  7      0.008762   12.93900   3.648154   6.016128   74.96689   2.429825
  8      0.008782   13.20249   3.664885   5.990500   74.64585   2.496283
  9      0.008842   13.02165   3.623240   6.064688   74.82685   2.463578
 10      0.008911   13.08389   3.576200   5.972101   74.78541   2.582403
 11      0.008934   13.14819   3.698044   6.039478   74.42828   2.686012
 12      0.008972   13.03811   3.780202   5.987556   74.45766   2.736475
 13      0.009030   12.94587   3.845841   5.946986   73.93658   3.324723
 14      0.009058   13.20594   3.885378   5.970688   73.48079   3.457195
 15      0.009090   13.12297   3.873900   5.930189   73.43309   3.639854
 16      0.009116   13.07628   3.853471   5.900208   73.26277   3.907268
 17      0.009124   13.07749   3.846909   5.916065   73.14340   4.016133
 18      0.009146   13.01773   3.838518   5.890622   73.08959   4.163536
 19      0.009160   12.99413   3.826681   5.873484   72.91110   4.394608
 20      0.009168   13.00325   3.825628   5.865272   72.78787   4.517976
 21      0.009182   12.97395   3.813728   5.848091   72.68372   4.680509
 22      0.009194   12.98196   3.803873   5.833245   72.51926   4.861665
 23      0.009202   13.00056   3.797639   5.823951   72.40657   4.971280

Variance Decomposition of LNCC:
Period   S.E.       RLN3       RLN5       RLN10      RLM1       LNCC
  1      0.004021   0.653365   4.758565   3.464337   0.877526   90.24621
  2      0.004106   2.840979   4.931486   3.357184   1.335548   87.53480
  3      0.004284   3.197107   5.419148   4.325109   1.253813   85.80482
  4      0.004357   3.800864   5.445736   4.399050   1.287759   85.06659
  5      0.004505   4.246718   6.022674   4.157454   1.333652   84.23950
  6      0.004653   4.832902   6.913284   4.025814   2.307631   81.92037
  7      0.004757   4.784219   6.614600   3.918434   2.247738   82.43501
  8      0.004816   4.751350   6.813469   3.822861   2.290651   82.32167
  9      0.004870   5.094298   6.670209   3.738691   2.287851   82.20895
 10      0.004936   5.307554   6.898374   3.642827   2.337586   81.81366
 11      0.004995   5.612988   6.748992   3.589659   2.372267   81.67609
 12      0.005053   5.743379   6.755517   3.510741   2.426843   81.56352
 13      0.005101   5.855018   6.672008   3.463837   2.456900   81.55224
 14      0.005132   5.971197   6.801086   3.465943   2.427677   81.33410
 15      0.005161   5.926920   6.831699   3.438531   2.401480   81.40137
 16      0.005194   6.003626   6.917563   3.395274   2.433160   81.25038
 17      0.005218   6.059145   6.893116   3.367064   2.431557   81.24912
 18      0.005244   6.095877   6.942210   3.340979   2.407476   81.21346
 19      0.005265   6.189533   6.918309   3.315419   2.445803   81.13094
 20      0.005282   6.220746   6.958205   3.295645   2.433496   81.09191
 21      0.005298   6.247999   6.951528   3.279897   2.427499   81.09308
 22      0.005313   6.297396   6.972491   3.262487   2.442194   81.02543
 23      0.005324   6.306734   6.981937   3.250054   2.442243   81.01903

Cholesky Ordering: RLN3 RLN5 RLN10 RLM1 LNCC

Source: Author’s calculation based on EViews 6 software.


[Graphs: Variance Decomposition — a 6 × 6 grid of panels over 10 periods, each on a 0-100 percent scale, showing the percentage of the variance of each of RLN3, RLN5, RLN10, RLM1, RLNIP and LNCC due to each of the six variables.]

Source: Author’s calculation based on EViews 6 software.




Time-series analysis

Definition of a time-series

A time-series is a statistical series that shows how a given set of data has changed over time.

Components of a time-series

Time-series are often composed of four distinct types of movement:

(a) Trend (T): the general movement in the data, representing the overall direction in which the figures are moving.

(b) Seasonal Variation (S): regular fluctuations which take place within one complete period. If the data are quarterly, they are fluctuations specifically associated with each quarter; if the data are daily, they are fluctuations associated with each day.

(c) Cyclical Variation (C): a longer-term regular fluctuation which may take several years to complete. To identify this factor we would need annual data.

(d) Random Variation (R): all those factors that may make a difference at a particular point in time. From time to time they have a significant, but unpredictable, effect on the data; weather, for example, is not yet predictable.

In the following example, we ignore cyclical variation (C) and consider T, S and R only.

In this session, we will assume that the total variation in the time-series (denoted by y) is the sum of the trend (T), the seasonal variation (S) and the random variation (R).

This is called the additive model:

y = T + S + R

Since the random element is unpredictable, we shall make an assumption that its overall value, or average value, is 0. Thus, the equation becomes as follows:

y = T + S or S = y – T

In the alternative multiplicative model, it is assumed that:


y = T x S x R

The random element is still assumed to be unpredictable, but in the multiplicative model the assumption is that its average value is 1. Thus, the equation becomes as follows:

y = T x S or S = y / T

The additive model is appropriate when the variations about the trend are of similar magnitude in the same period of each year.

The multiplicative model is preferred when the variations about the trend tend to increase or decrease proportionately with the trend.

Example of an additive model of a time-series

Consider the following example in which y represents a company’s quarterly sales (£000).

Year  Quarter     y      4-quarter MA   Centred MA (T)   Seasonal effect (y − T)
 1       1       87.5
         2       73.2       78.5
         3       64.8       79.2           78.85              -14.05
         4       88.5       79.9           79.55
 2       1       90.3
         2       76.0
         3       69.2
         4       94.7
 3       1       93.9
         2       78.4
         3       72.0
         4      100.3


Complete the table

The four-quarter moving average for the first observation is displayed between quarters 2 and 3. The 78.5 figure was obtained by adding 87.5 + 73.2 + 64.8 + 88.5 and dividing by four. The 78.85 figure was obtained by adding 78.5 + 79.2 and dividing by two. The seasonal effect was obtained by subtracting the trend from sales (y − T).
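If you prefer to verify the Excel steps programmatically, the following minimal Python sketch (an illustration only; it assumes the pandas library is installed) reproduces the centred moving average and the adjusted seasonal factors:

import pandas as pd

# Quarterly sales (£000) for years 1-3, as in the table above
y = pd.Series([87.5, 73.2, 64.8, 88.5, 90.3, 76.0,
               69.2, 94.7, 93.9, 78.4, 72.0, 100.3])

ma4 = y.rolling(window=4).mean()              # 4-quarter moving average
trend = (ma4.shift(-1) + ma4.shift(-2)) / 2   # centred moving average (T)
seasonal = y - trend                          # seasonal effect, y - T

quarter = pd.Series([1, 2, 3, 4] * 3)
avg_s = seasonal.groupby(quarter).mean()      # unadjusted seasonal factors
adj_s = avg_s - avg_s.sum() / 4               # adjust so the factors sum to zero
print(trend.round(3))                         # 78.85, 79.55, 80.45, ...
print(adj_s)                                  # 9.853125, -6.234375, -13.746875, 10.128125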

Plot the y and T values on the vertical axis against time in years and quarters on the horizontal axis.


Estimating the seasonal variation

The seasonal variation can be estimated by averaging the values of y − T for each quarter.

Quarters      1      2      3      4
Year 1
Year 2
Year 3
Total
Average

Strictly, these seasonal factors should sum to zero. If the sum differs significantly from zero, the seasonal factors should be adjusted to ensure a zero sum. The adjustment is the net value of the unadjusted averages with its sign changed (for example, − to +), divided by 4 and added to each quarterly average.

Quarters                   1      2      3      4
Average (unadjusted S)
Adjusted S

Sum of all adjusted S = 0


Forecasting

To forecast the company’s sales in the first quarter of year 4:

(1) Calculate the average increase in the trend from the formula:

(Tn − T1) / (n − 1)

Where Tn is the last trend estimate (85.5 in the example), T1 is the first trend estimate (78.9) and n is the number of trend estimates calculated.

In the example, the average increase in the trend is:

(85.5 – 78.90) / 7 = 0.94

(2) Forecast the trend for the first quarter of year 4 by taking the last trend estimate and adding on three average increases in the trend. This gives:

85.5 + (3 x 0.94) =

(3) Now adjust for the seasonal variation by adding on the appropriate seasonal factor for the first quarter.

Forecast = 88.32 + =

Complete the calculation

Now repeat the above for the second, third and fourth quarters of year 4


Forecasting the company's sales for year 4

Year   Quarter   Sales   Trend
 4        1
          2
          3
          4


Solution of the additive model of the time-series problem

Consider the following example in which y represents a company’s quarterly sales (£000).

Year  Quarter     y      4-quarter MA   Centred MA (T)    y − T
 1       1       87.5
         2       73.2       78.5
         3       64.8       79.2           78.85          -14.05
         4       88.5       79.9           79.55            8.95
 2       1       90.3       81.0           80.45            9.85
         2       76.0       82.55          81.775          -5.775
         3       69.2       83.45          83              -13.8
         4       94.7       84.05          83.75           10.95
 3       1       93.9       84.75          84.4             9.5
         2       78.4       86.15          85.45           -7.05
         3       72.0
         4      100.3


Estimating the seasonal variation

The seasonal variation can be estimated by averaging the values of y − T for each quarter.

Quarters       1          2          3          4
Year 1                              -14.05      8.95
Year 2        9.85      -5.775     -13.8       10.95
Year 3        9.5       -7.05
Total        19.35     -12.825    -27.85       19.9
Average       9.675     -6.4125   -13.925       9.95

These results imply that quarters 1 and 4 are high sales quarters whereas quarter 2 and quarter 3 are low sales quarters.

Strictly, these seasonal factors should sum to zero. If the sum differs significantly from zero the seasonal factors should be adjusted to ensure a zero sum.

In this case, the net value of the unadjusted seasonal factors is −0.7125. The adjustment added to each quarterly average is +0.7125 / 4 = 0.178125.

Quarters                   1            2            3             4
Average (unadjusted S)    9.675       -6.4125      -13.925        9.95
Adjusted S                9.853125    -6.234375    -13.746875    10.128125

Sum of all adjusted S = 0

I have included the layout of the table with the data that you will input in Excel to get the line chart. Sales and trend are plotted on the vertical axis and the quarters on the horizontal axis. To calculate a four-quarter moving average, press Tools in Excel, then Data Analysis, then select Moving Average. In the input range, select all the sales figures. In the interval box, write 4, as we use quarterly data. In the output range, select any cell and press OK. Adjust the data to start from the second quarter of year 1. Then calculate the centred moving average by adding, for example, the first two figures of the four-quarter moving average and dividing by two. Then calculate the seasonal effect: it is the sales minus the centred moving average for each quarter.

Quarter   Sales   Trend
   1      87.5
   2      73.2
   3      64.8    78.85
   4      88.5    79.55
   1      90.3    80.45
   2      76      81.775
   3      69.2    83
   4      94.7    83.75
   1      93.9    84.4
   2      78.4    85.45
   3      72
   4     100.3


[Figure: Additive model of time series. Line chart of Sales and Trend (£000, vertical axis) against the quarters of years 1-3 (horizontal axis).]

I have also added the layout of the table and the graph in Excel that shows sales in different quarters.

Quarter   Sales
   1      87.5
   2      73.2
   3      64.8
   4      88.5
   1      90.3
   2      76
   3      69.2
   4      94.7
   1      93.9
   2      78.4
   3      72
   4     100.3


[Figure: Sales in different quarters. Line chart of Sales (£000, vertical axis) against the quarters of years 1-3 (horizontal axis).]

Forecasting

To forecast the company’s sales in the first quarter of year 4:

(1) Calculate the average increase in the trend from the formula:

(Tn − T1) / (n − 1)

Where Tn is the last trend estimate (85.45 in the example), T1 is the first trend estimate (78.85) and n is the number of trend estimates calculated. In our case, we have eight.

In the example, the average increase in the trend is:

(85.45 – 78.85) / 7 = 0.94


(2) Forecast the trend for the first quarter of year 4 by taking the last trend estimate and adding on three average increases in the trend.

This gives:

85.45 + (3 x 0.94) = 88.27

(3) Now adjust for the seasonal variation by adding on the appropriate seasonal factor for the first quarter.

Forecast = 88.27 + 9.853125 = 98.12

Now repeat the above for the second, third and fourth quarters of year 4


Forecasting the company's sales for year 4

Year   Quarter   Trend (y)   Seasonal effect   Forecast (trend + or − seasonal effect)
 4        1        88.27         9.853125        98.12 (to 2 d.p.)
          2        89.21        -6.234375        82.98 (to 2 d.p.)
          3        90.15       -13.746875        76.40 (to 2 d.p.)
          4        91.09        10.128125       101.22 (to 2 d.p.)
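The same year-4 forecasts can be checked with a short Python sketch (an illustration only; the average trend increase is rounded to 0.94 as in the text):

# Additive model: forecast = trend forecast + adjusted seasonal factor
t_first, t_last, n_est = 78.85, 85.45, 8
avg_inc = round((t_last - t_first) / (n_est - 1), 2)    # 0.94
adj_s = [9.853125, -6.234375, -13.746875, 10.128125]    # quarters 1-4
for k, s in enumerate(adj_s, start=3):                  # 3, 4, 5, 6 increases
    print(round(t_last + k * avg_inc + s, 2))           # 98.12, 82.98, 76.4, 101.22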

Example of a multiplicative model of a time-series


Consider the level of economic activity over three years expressed in millions of pounds.

Year   Quarter   Economic activity
 1        1            102
          2            110
          3            112
          4            115
 2        1            101
          2            113
          3            114
          4            118
 3        1            120
          2            121
          3            122
          4            123

It is required to calculate the following:

1) Calculate a moving average trend.

2) Calculate the seasonal factors for each quarter using the multiplicative model.

3) Forecast the economic activity for the four quarters of year 4.


Consider the following example in which y represents the economic activity expressed in millions of pounds.

Year  Quarter   Economic activity (y)   4-quarter MA   Centred MA (T)   Seasonal effect S = y / T
 1       1             102
         2             110                109.75
         3             112                109.5          109.625             1.022
         4             115                110.25         109.875             1.047
 2       1             101                110.75         110.5               0.914
         2             113                111.5          111.125             1.017
         3             114                116.25         113.875             1.001
         4             118                118.25         117.25              1.006
 3       1             120                120.25         119.25              1.006
         2             121                121.5          120.875             1.001
         3             122
         4             123

Estimating the seasonal variation

The seasonal variation can be estimated by averaging the values of y/T for each quarter.

Year        Quarter 1   Quarter 2   Quarter 3   Quarter 4
Year 1                               1.022       1.047
Year 2       0.914       1.017       1.001       1.006
Year 3       1.006       1.001
Total        1.92        2.018       2.023       2.053
Average      0.96        1.009       1.0115      1.0265

These results imply that quarter 1 recorded low economic activity, whereas quarters 2, 3 and 4 recorded high economic activity. The sum of the averages should equal 4. In our case, we have 0.96 + 1.009 + 1.0115 + 1.0265 = 4.007. The adjustments are made by multiplying each average by 4 / 4.007 = 0.998. The following table shows the adjustments.

Quarter   Average (unadjusted S)   Adjustment   Adjusted average   Seasonal effect (rounded)
   1            0.96                 0.998          0.958                0.96
   2            1.009                0.998          1.007                1.01
   3            1.0115               0.998          1.009                1.01
   4            1.0265               0.998          1.024                1.02

Total of adjusted averages ≈ 4


Forecasting

We use the same steps as in the additive model, except that we multiply by the seasonal factor instead of adding it.

Calculate the average increase in the trend from the formula:

(Tn − T1) / (n − 1)

Where Tn is the last trend estimate (120.875 in the example), T1 is the first trend estimate (109.625) and n is the number of trend estimates calculated. In our case, we have eight.

In the example, the average increase in the trend is:

(120.875 – 109.625) / 7 = 1.61

Forecast the trend for each quarter of year 4 by taking the last trend estimate and adding on three, four, five and six average increases in the trend.

This gives:

120.875 + (3 x 1.61) = 125.705
120.875 + (4 x 1.61) = 127.315
120.875 + (5 x 1.61) = 128.925
120.875 + (6 x 1.61) = 130.535

Now adjust for the seasonal variation by multiplying by the appropriate seasonal factor for the first quarter.

Forecast = 125.705 x 0.96 = 120.6768 or 120.68 ( to 2.d.p.).


Forecasting the economic activity for year 4

Year   Quarter   Trend (y)   Seasonal effect   Forecast (trend × seasonal effect)
 4        1       125.705        0.96            120.68 (to 2 d.p.)
          2       127.315        1.01            128.59 (to 2 d.p.)
          3       128.925        1.01            130.21 (to 2 d.p.)
          4       130.535        1.02            133.15 (to 2 d.p.)
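A corresponding Python sketch for the multiplicative model (again an illustration only, using the rounded seasonal factors from the table above):

# Multiplicative model: forecast = trend forecast x seasonal factor
t_last, avg_inc = 120.875, 1.61             # (120.875 - 109.625) / 7, rounded
factors = [0.96, 1.01, 1.01, 1.02]          # rounded adjusted seasonal factors
for k, s in enumerate(factors, start=3):    # 3, 4, 5, 6 increases in the trend
    print(round((t_last + k * avg_inc) * s, 2))   # 120.68, 128.59, 130.21, 133.15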

I have included the layout of the table with the data that you will input in Excel to get the line chart. Economic activity and trend are plotted on the vertical axis and the quarters on the horizontal axis. To calculate a four-quarter moving average, press Tools in Excel, then Data Analysis, then select Moving Average. In the input range, select all the economic activity figures. In the interval box, write 4, as we use quarterly data. In the output range, select any cell and press OK. Adjust the data to start from the second quarter of year 1. Then calculate the centred moving average by adding, for example, the first two figures of the four-quarter moving average and dividing by two. Then calculate the seasonal effect: it is the economic activity divided by the centred moving average for each quarter.

Quarter   Economic activity   Trend
   1            102
   2            110
   3            112           109.625
   4            115           109.875
   1            101           110.5
   2            113           111.125
   3            114           113.875
   4            118           117.25
   1            120           119.25
   2            121           120.875
   3            122
   4            123


[Figure: Multiplicative time series model. Line chart of Economic activity and Trend (vertical axis) against the quarters of years 1-3 (horizontal axis).]


I have also added the layout of the table and the graph in Excel that shows economic activity in different quarters.

Quarter   Economic activity
   1            102
   2            110
   3            112
   4            115
   1            101
   2            113
   3            114
   4            118
   1            120
   2            121
   3            122
   4            123

[Figure: Sales over quarters. Line chart of Economic activity (vertical axis) against the quarters of years 1-3 (horizontal axis).]


The exponential smoothing analysis tool predicts a value based on the forecast for the prior period. We specify a damping factor that determines how strongly forecasts respond to errors in the prior forecast.

Where a = 1 − damping factor. For example, if the damping factor is 0.3, then a = 1 − 0.3 = 0.7.

The alpha value a varies from 0 to 1: 0 ≤ a ≤ 1. Excel states that values of 0.2 to 0.3 are reasonable smoothing constants: the current forecast is adjusted by 20 to 30 per cent for the error in the prior forecast.

As an example, please calculate the exponential smoothing of the price level for the first ten observations. In Excel, go to Tools, then select Data Analysis and then Exponential Smoothing. In the input range, select all prices. In the damping factor box insert, for example, 0.3; the alpha will then be 0.7. The formulas for calculating the exponential smoothing will be as follows:

C3 = 0.7 * B2 + 0.3* C2

C4 = 0.7 * B3 + 0.3 * C3

C5 = 0.7 * B4 + 0.3 * C4

C6 = 0.7 * B5 + 0.3 * C5

C7 = 0.7 * B6 + 0.3 * C6

C8 = 0.7 * B7 + 0.3 * C7

C9 = 0.7 * B8 + 0.3 * C8

C10 = 0.7 * B9 + 0.3 * C9

Observations (A)   Prices (B)   Exponential smoothing calculations in Excel (C)
       1              150            N/A
       2              110            150
       3              105            122
       4              102            110.10
       5               90            104.43
       6               80             94.33
       7               70             84.30
       8               60             74.29
       9               50             64.29
      10               30             54.29
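The recursion F(t+1) = a*y(t) + (1 − a)*F(t), with F(2) = y(1), can also be reproduced outside Excel with a few lines of Python (an illustration only):

# Exponential smoothing with alpha a = 0.7 (damping factor 0.3)
a = 0.7
prices = [150, 110, 105, 102, 90, 80, 70, 60, 50, 30]
forecast = [None, prices[0]]          # no forecast for the first observation
for t in range(1, len(prices) - 1):
    forecast.append(round(a * prices[t] + (1 - a) * forecast[t], 2))
print(forecast)  # [None, 150, 122.0, 110.1, 104.43, 94.33, 84.3, 74.29, 64.29, 54.29]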

The exponential smoothing chart will be as follows:


[Figure: Exponential Smoothing. Line chart of the actual prices and the smoothed forecast (vertical axis) against data points 1-10 (horizontal axis).]

Financial time-series model


Let’s take as an example a financial model related to the returns of a share price in relation to market returns.

Number of observations   Share returns   Market returns
         1                 3.526787        8.73209
         2                -4.34533        -5.19815
         3                 5.222709        6.21865
         4                -4.99619        -5.5393
         5                -3.04336         7.69808
         6                -2.375422       -4.99735
         7                 2.651303        5.42777
         8                -0.68924        -1.5424
         9                 0.205664        1.4639
        10                 2.4783          3.6528
        11                 0.237407       -0.1494
        12                 0.329728        0.16688
        13                -0.26869        -0.1444
        14                 0.064769        0.097873
        15                -0.5873         -0.09911
        16                 0.329225       -0.08344
        17                -0.11849         0.122767
        18                 0.011541       -0.45767
        19                -0.18757        -0.53046
        20                -0.38752        -0.11118
        21                -0.26835        -0.28947
        22                 0.262798       -0.17676
        23                 0.355054       -1.15686
        24                -1.34302        -0.5771
        25                -0.77964         0.578182
        26                -0.04649        -0.05331
        27                 0.098381       -0.23054
        28                -0.09585        -0.66625
        29                -0.0059         -0.50071
        30                -0.05415        -0.53128

The dependent variable is share price returns expressed in percentage and the independent variable is market price returns expressed in percentage.

The mathematical equation will be as follows:

yt = α + β xt + εt

Where yt is the share price returns. It is the dependent variable.


α is the intercept. β is the coefficient of the independent or explanatory variable. xt is the market price returns or the independent variable. εt is the error term.

We are investigating the changes of market price returns on share price returns. We examine if there are significant effects of the explanatory variable on the dependent variable. Finally, we could forecast the value of the dependent variable y given different values of the independent variable x.

The stochastic equation in Econometrics will incorporate the error term and it will be as follows:

yt = α + β xt + εt

Our focus will be on the additional tests of the error term in terms of normality, heteroskedasticity and autocorrelation; they will be covered in more detail in the relevant sections. The role of the error term is to capture the element of randomness that is not observed. It is also used to include the effects of independent variables that are omitted. Finally, it is used to reflect the measurement error that is incorporated in the dependent variable y.

If you still face difficulties concerning how to run and interpret the regression equation, then, please refer to the following document,

“Introduction to Statistics, Probability and Econometrics”. There is a detailed section related to regression.

The first step in EViews 6 is to check for normality or descriptive statistics, stationarity and the correlogram of both the dependent and independent variable.

Before loading and transferring your data into EViews 6, you have to insert the numerical values, or the data, in Excel. Do not use long titles for each time series; use abbreviations, for example share for the first time series and market for the second. Name the sheet of the Excel file and delete the other sheets; for example, the name of the sheet is reg. Once the Excel file is ready, close it and open the statistical package EViews 6. Press File and then select New Workfile. Then, in workfile structure type, select unstructured / undated and insert the number of observations; in our case, the number is 30. You will get an untitled workfile. Then press File, then Import, then Read Text-Lotus-Excel. Select the Excel file and press OK. The Excel spreadsheet import screen will open. In the box Upper-left data cell, select A2; this is the cell where your first observation starts. In Excel 5+ sheet name, write reg; this is the name of your sheet. In the box for the names of the series, please write share, press the space bar and then write market. These are the names of the variables.

Once you have done these steps, you will be able to see the time series transferred from Excel to EViews 6. Press File, then Save As; write the filename reg and save it as an EViews workfile.

176

Page 177: Introduction to Econometrics 2

You click on a time series, for example share, to do the following tests. You press View and then select Descriptive Statistics & Tests, then Histogram and Stats, or Correlogram, or Unit Root Test. You do the same for the series market: you open it and then press View.

To run a regression, you press quick, and then, estimate equation. In the box, you write

share c market and you press OK.

Then, your output will be displayed. From the top menu you have options to do forecast and check the residuals or error term for further tests.
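If you work in Python rather than EViews, a minimal sketch of the same regression with the statsmodels package is given below (an illustration only; the file name reg.xls is hypothetical, standing for the Excel file described above):

import pandas as pd
import statsmodels.api as sm

data = pd.read_excel("reg.xls", sheet_name="reg")   # columns: share, market
X = sm.add_constant(data["market"])                 # intercept plus market returns
model = sm.OLS(data["share"], X).fit()
print(model.summary())  # coefficients, t-statistics, R-squared, Durbin-Watson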

Good luck !


We start with the share price returns histogram and statistics for normality in EViews 6.

[Histogram of the share price returns, with the following summary statistics.]

Series: SHARE
Sample: 1 30
Observations: 30

Mean        -0.127295
Median      -0.050320
Maximum      5.222709
Minimum     -4.996190
Std. Dev.    1.993636
Skewness     0.055478
Kurtosis     4.702093

Jarque-Bera  3.636790
Probability  0.162286

The null and alternative hypotheses are as follows:

H0: The dependent variable is normally distributed

H1: The dependent variable is not normally distributed

From the above table, the χ2 (Jarque-Bera) statistic is 3.64 and its probability value, 0.16, is above the 5% significance level (95% confidence level), so we cannot reject H0: the joint null hypothesis that the sample skewness equals 0 and the sample kurtosis equals 3 is not rejected. Nevertheless, the distribution is slightly positively skewed and shows excess kurtosis: it is leptokurtic, as the kurtosis is 4.70, which is greater than 3.

Please make sure that you are familiar with the measures of location and dispersion in addition to the graph of the normal distribution. Please compare the mean with the standard deviation. Is the distribution leptokurtic, platykurtic or mesokurtic? Compare the probability at the 5% significance level with the Jarque – Bera statistic. If you have difficulties, then, please review the sections related to skewness and kurtosis.


Jarque-Bera normality test

This section focuses on tests of normality for the dependent or independent variable. The result of the Jarque-Bera test is used to test whether the series is normal or non-normal. This test uses the chi-squared distribution and is specifically a goodness-of-fit test. We state the hypotheses as follows:

H0: The dependent or independent variable is normally distributed

H1: The dependent or independent variable is not normally distributed

The Jarque-Bera test checks for normality by comparing the sample skewness and kurtosis with those of the normal distribution.

The mathematical formula is as follows:

JB = (n / 6) * [S^2 + (K − 3)^2 / 4]   (1)

Where n is the sample size, S is the skewness and K is the kurtosis.

The resulting χ2 statistic is then compared with the critical value at the 5% significance level, or equivalently its p-value is compared with 5%. If the p-value is below 5%, the test is significant; if it is above 5%, it is insignificant, and accordingly we reject or do not reject H0.

For example, we will use the results of the skewness and kurtosis that we have calculated in the section related to measures of dispersion.

Sample size n = 5, Skewness = −0.77, Kurtosis = −0.58

By substituting the results in equation (1), we have the following results:

JB=56∗[ (−0 .77 )2+ 1

4(−0 . 58−3 )2]=5

6∗(0 . 5929+0 .25∗12 .8164 )=0 .833∗3 .797=3 .16

By checking the chi-squared distribution, you will find that the value 3.16 is below the critical value at the 5% significance level. The Jarque-Bera statistic is asymptotically chi-squared with 2 degrees of freedom, so the critical value is 5.99. In this case, we cannot reject H0: the dependent or independent variable is normally distributed.
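The formula in equation (1) is easy to verify with a small Python function (an illustration only):

def jarque_bera(n, skew, kurt):
    # JB = (n / 6) * (S^2 + (K - 3)^2 / 4)
    return (n / 6) * (skew ** 2 + (kurt - 3) ** 2 / 4)

print(round(jarque_bera(5, -0.77, -0.58), 2))   # 3.16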


As an example, I have attached a screenshot of the net asset value (NAV) of a UK investment trust. The table and the graph show measures of location and dispersion in addition to the Jarque-Bera statistic. It is a very common normality test used in the EViews econometrics software.

NAV of a closed-end fund or investment trust, usually expressed on a per share basis, is the value of all its assets, less its liabilities, divided by the number of shares.

NAV = (Total Assets − Liabilities) / Number of shares

When the share price is below the net asset value it is trading at a discount. Share prices above the net asset value are at a premium. If the closed-end fund is trading at £9.50 and its net asset value is £10, then it is trading at a 5% discount.

[Histogram of the NAV returns, with the following summary statistics.]

Series: NAV
Sample: 2 158
Observations: 156

Mean         1.160473
Median       0.957070
Maximum     18.62115
Minimum    -17.81061
Std. Dev.    6.583173
Skewness     0.199221
Kurtosis     3.481567

Jarque-Bera  2.539308
Probability  0.280929

From the above table, the χ2 statistic, namely 2.54, is below the critical value at the 5% significance level, so we accept H0, even though the distribution is slightly positively skewed and has positive excess kurtosis.

Please make sure that you are familiar with the measures of location and dispersion in addition to the graph of the normal distribution. Please compare the mean with the standard deviation. Is the distribution leptokurtic, platykurtic or mesokurtic? Compare the probability at the 5% significance level with the Jarque – Bera statistic. If you have difficulties, then, please review the sections related to skewness and kurtosis.


We, then, check the correlogram of the share price returns in EViews 6.

Date: 02/04/15   Time: 14:47
Sample: 1 30
Included observations: 30

Lag     AC       PAC      Q-Stat    Prob
 1    -0.428   -0.428     6.0562    0.014
 2     0.247    0.078     8.1397    0.017
 3    -0.209   -0.097     9.6963    0.021
 4     0.098   -0.043    10.048     0.040
 5    -0.281   -0.279    13.075     0.023
 6     0.001   -0.292    13.075     0.042
 7     0.075    0.030    13.310     0.065
 8    -0.105   -0.136    13.788     0.087
 9     0.112   -0.071    14.358     0.110
10    -0.022   -0.072    14.382     0.156
11     0.040   -0.136    14.464     0.208
12    -0.050   -0.068    14.596     0.264
13     0.042   -0.073    14.696     0.327
14    -0.064   -0.133    14.940     0.382
15     0.022   -0.074    14.971     0.453
16     0.018   -0.055    14.993     0.525
17    -0.039   -0.128    15.106     0.588
18    -0.018   -0.173    15.132     0.653
19     0.042   -0.115    15.286     0.704
20     0.077    0.035    15.858     0.725

The hypotheses that have been formulated and tested are as follows:

H0: The time series of the share price returns have no serial correlation.

H1: The time series of the share price returns have serial correlation.

According to the above table, the Q-statistic and its associated p-values are statistically significant at the first few lags but insignificant at longer lags, which is a sign that there is no persistent serial correlation.


Finally, we check the ADF unit root test of the share price returns in EViews 6.

Null Hypothesis: SHARE has a unit root
Exogenous: Constant
Lag Length: 0 (Automatic based on SIC, MAXLAG=7)

                                            t-Statistic   Prob.*
Augmented Dickey-Fuller test statistic       -8.885614    0.0000
Test critical values:    1% level            -3.679322
                         5% level            -2.967767
                        10% level            -2.622989

*MacKinnon (1996) one-sided p-values.

Augmented Dickey-Fuller Test Equation
Dependent Variable: D(SHARE)
Method: Least Squares
Date: 02/04/15   Time: 14:48
Sample (adjusted): 2 30
Included observations: 29 after adjustments

Variable    Coefficient   Std. Error   t-Statistic   Prob.
SHARE(-1)   -1.427826     0.160690     -8.885614     0.0000
C           -0.308837     0.321027     -0.962026     0.3446

R-squared 0.745173   Adjusted R-squared 0.735735   S.E. of regression 1.725132
Sum squared resid 80.35413   Log likelihood -55.92686   F-statistic 78.95413
Prob(F-statistic) 0.000000   Mean dependent var -0.123481   S.D. dependent var 3.355847
Akaike info criterion 3.994956   Schwarz criterion 4.089252   Hannan-Quinn criter. 4.024488
Durbin-Watson stat 1.604154

The ADF test statistic is −8.89, which is smaller than the critical values of the t-statistic from Dickey-Fuller's tables (−3.68, −2.97, −2.62). Based on the sample evidence, we can reject the null hypothesis, namely the existence of a unit root, at the one, five and ten per cent significance levels. In other words, the share price returns are a stationary series.


Stationarity

A non-stationary series tends to show a statistically significant spurious correlation when variables are regressed; thus, we get a significant R2. We test whether the NAV of UK investment trusts follows a random walk, a random walk with drift and trend, or is stationary. In this section, I will illustrate the EViews output.

For non-stationary series, the mathematical formulas for a random walk and a random walk with drift are as follows:

Random walk: yt = yt-1 + εt

Random walk with drift: yt = μ + yt-1 + εt

Where μ is the drift, yt-1 is the dependent variable lagged one period and εt is the error term or the residuals.
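A minimal simulation sketch of the two processes (an illustration only, using the Python standard library; the drift value is arbitrary):

import random

random.seed(42)
mu = 0.5                                     # drift (assumed for illustration)
rw, rw_drift = [0.0], [0.0]
for _ in range(100):
    e = random.gauss(0, 1)                   # white noise error term
    rw.append(rw[-1] + e)                    # yt = yt-1 + et
    rw_drift.append(mu + rw_drift[-1] + e)   # yt = mu + yt-1 + et
print(rw[-1], rw_drift[-1])                  # the drift series trends upwards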

The unit root test

A popular test of stationarity is the unit root test. The specifications of the test are the following:

Δyt = γ yt-1 + εt

Where the null hypothesis to be tested is γ = 0. εt is the stochastic error term that is assumed to be non-autocorrelated with a zero mean and a constant variance. Such an error term is also known as a white noise error term.

The main problem when performing the ADF test is to decide whether to include a constant term and a linear trend, or neither, in the test regression. The general principle is to choose a specification that is a plausible description of the data under both the null and alternative hypotheses (Hamilton 1994, p.501). If the series seems to contain a trend, we should include both a constant and a trend in the test regression. If the series seems not to contain a trend, we should include neither a constant nor a trend. We start by testing whether the NAV in the UK follows a simple random walk (with no constant and no time trend) or is stationary. We state the hypotheses as follows:

H0: γ = 0    H1: γ < 0


The ADF test for NAV of UK investment trusts sector defined by AITC is as follows:

ADF test of the NAV return by excluding a constant and a trend.

Table 1 shows the ADF test of the NAV return for all AITC sectors for the period January 1990 to January 2003 with two different critical values, one per cent and five per cent. We test whether the NAV return follows a random walk by excluding a constant and a linear time trend.

ADF Test Statistic   -4.189743     1% Critical Value*   -2.5798
                                   5% Critical Value    -1.9420

*MacKinnon critical values for rejection of the hypothesis of a unit root.
Source: calculated by the author

For a level of significance of 1 per cent and a sample size larger than 100 observations, the critical value of the t-statistic from Dickey-Fuller’s tables for no intercept and no trend is -2.58. According to Table 1, we can reject the null hypothesis namely the existence of a unit root with one per cent significance level. The ADF test statistic is -4.19. In other words, the NAV return is stationary.

The following tables summarise the unit root test with constant and time trend for NAV for UK investment trusts. The specifications and hypothesis of the test are the following:

Δyt = μ + γ yt-1 + Σ aλ Δyt-λ + βt + εt   (λ = 1, ..., 4)

Where μ is the drift, the terms Σ aλ Δyt-λ are lags included so that εt contains no autocorrelation, γ is the measure of stationarity and βt is a measure of the time trend.

We state the hypotheses as follows:

H0: β, γ = 0 (existence of a unit root)
H1: β, γ < 0 (stationarity)

The existence of a unit root is measured using an ADF test. For a 1 per cent significance level and a sample size larger than 100 observations, the critical value of the t-statistic from Dickey-Fuller’s tables is - 4.02. Table 2 summarises the unit root test of NAV return for UK investment trusts sector by AITC.


Table 2 ADF test of UK NAV return by including a constant and a trend.

Table 2 shows the ADF test for the period January 1990 to January 2003 with two different critical values, one per cent and five per cent. We test whether the NAV return follows a random walk by including a constant and a linear time trend.

ADF Test Statistic   -4.531134     1% Critical Value*   -4.0237
                                   5% Critical Value    -3.4413

*MacKinnon critical values for rejection of the hypothesis of a unit root.
F-statistic 17.97964

Source: calculated by the author

According to Table 2, the sample evidence suggests that we can reject the null hypothesis, namely the existence of a unit root, at the one per cent significance level. The t-statistic for all UK sectors, −4.53, is smaller (more negative) than the critical value of −4.02 at the one per cent significance level. Thus the NAV return is stationary. To check whether there is a time trend, we compare the F-statistic of the model with the one given in the ADF tables. From our model, the F-statistic is 17.98 > 6.34, so we reject the null hypothesis.

Please review the F-statistic concept stated in the regression section.


Let’s solve a detailed numerical example to understand the ADF unit root test with and without a trend.

Please consider the following time series in different time periods. The time series represent the return of the share prices of a hypothetical supermarket in Boscombe. It is located in the South - West of England.

T = trend   yt = dependent variable
  1           -2.3478
  2           -1.2731
  3            0.8467
  4            0.7829
  5            3.0372
  6            4.3405
  7            0.7341
  8           -0.8912
  9           -2.3824
 10           -1.4827
 11           -0.8568
 12            0.0785
 13            2.4867
 14            3.6448
 15            4.7522
 16            2.8644
 17            3.3565
 18            5.3247
 19            4.3124
 20            5.2376
 21            4.7863
 22            3.2897
 23            6.3729
 24            7.1256
 25            2.6389
 26            3.1567

Our purpose is to test whether there is a unit root or not. By differencing the time series yt, we will get a white noise series. We are testing the null hypothesis of the existence of a unit root. Unit root tests should be carried out on all the variables, dependent and independent, before the statistician / econometrician decides whether to run a regression or a cointegration analysis integrated with an error correction model.

The null hypothesis of a unit root is H0: β = γ = 0. By including a trend, we check the F-statistic against the ADF critical values in the appendix at the end of the econometrics book.


If there is no trend, then H0: γ = 0, and we check the t-statistic from the regression equation against the ADF critical values in the appendix at the end of the econometrics book.

The ADF critical values table is organised by the sample size n, which counts the individual observations. The next three columns carry three titles: no intercept and no trend; intercept and no trend; and intercept and trend.

We want to make sure that our time series is stationary before running the regression with the independent variable. In EViews you will have three options. The first option is to check for unit root for both the dependent and independent variables by using their level. If the time series is stationary, then, it can be used to run a regression without differencing. If it is not stationary, then, you click on 1st difference. If the problem continues, then, click on 2nd difference.

The first step is to difference the time series yt to obtain Δy and then run a regression on yt-1, which is the dependent variable lagged one period. By differencing, we lose the first observation; by using the lagged expression yt-1, we lose the last observation. Let me explain and illustrate this in more detail. I have included an example to show the calculation.

T     yt        Δy                              yt-1
      -2.3478
 1    -1.2731   -1.2731 − (−2.3478) = 1.0747   -2.3478
 2     0.8467    0.8467 − (−1.2731) = 2.1198   -1.2731
 3     0.7829   -0.0638                         0.8467
 4     3.0372    2.2543                         0.7829
 5     4.3405    1.3033                         3.0372
 6     0.7341   -3.6064                         4.3405
 7    -0.8912   -1.6253                         0.7341
 8    -2.3824   -1.4912                        -0.8912
 9    -1.4827    0.8997                        -2.3824
10    -0.8568    0.6259                        -1.4827
11     0.0785    0.9353                        -0.8568
12     2.4867    2.4082                         0.0785
13     3.6448    1.1581                         2.4867
14     4.7522    1.1074                         3.6448
15     2.8644   -1.8878                         4.7522
16     3.3565    0.4921                         2.8644
17     5.3247    1.9682                         3.3565
18     4.3124   -1.0123                         5.3247
19     5.2376    0.9252                         4.3124
20     4.7863   -0.4513                         5.2376
21     3.2897   -1.4966                         4.7863
22     6.3729    3.0832                         3.2897
23     7.1256    0.7527                         6.3729
24     2.6389   -4.4867                         7.1256
25     3.1567    0.5178                         2.6389

Then, we run the regression of Δy on y t-1 to check for unit root without a trend.

Δyt        yt-1
 1.0747   -2.3478
 2.1198   -1.2731
-0.0638    0.8467
 2.2543    0.7829
 1.3033    3.0372
-3.6064    4.3405
-1.6253    0.7341
-1.4912   -0.8912
 0.8997   -2.3824
 0.6259   -1.4827
 0.9353   -0.8568
 2.4082    0.0785
 1.1581    2.4867
 1.1074    3.6448
-1.8878    4.7522
 0.4921    2.8644
 1.9682    3.3565
-1.0123    5.3247
 0.9252    4.3124
-0.4513    5.2376
-1.4966    4.7863
 3.0832    3.2897
 0.7527    6.3729
-4.4867    7.1256
 0.5178    2.6389


SUMMARY OUTPUT

Regression Statistics
Multiple R            0.420665
R Square              0.176959
Adjusted R Square     0.141175
Standard Error        1.713097
Observations          25

ANOVA
             df        SS          MS          F          Significance F
Regression    1     14.51255    14.51255     4.945154    0.036267
Residual     23     67.49815     2.934702
Total        24     82.0107

             Coefficients   Standard Error   t Stat      P-value    Lower 95%   Upper 95%
Intercept      0.855414        0.44608       1.917623    0.06766    -0.06737     1.7782
yt-1          -0.2797          0.125776     -2.22377     0.036267   -0.53989    -0.01951

The regression is Δyt = 0.86 − 0.28 yt-1. The t-statistic on yt-1 is −2.22.

Then, we compare the t-statistic with the ADF critical values. In our case, t = −2.22 > −3.33. The value −3.33 comes from the table for an intercept but no trend with a sample size n = 25. The sample evidence suggests that we cannot reject the null hypothesis: there is a unit root.

The time series is not stationary, so we difference the values once again. In other words, we subtract the value at period t−1 from the value at period t.


ΔΔyt       Δyt-1
 1.0451    1.0747
-2.1836    2.1198
 2.3181   -0.0638
-0.951     2.2543
-4.9097    1.3033
 1.9811   -3.6064
 0.1341   -1.6253
 2.3909   -1.4912
-0.2738    0.8997
 0.3094    0.6259
 1.4729    0.9353
-1.2501    2.4082
-0.0507    1.1581
-2.9952    1.1074
 2.3799   -1.8878
 1.4761    0.4921
-2.9805    1.9682
 1.9375   -1.0123
-1.3765    0.9252
-1.0453   -0.4513
 4.5798   -1.4966
-2.3305    3.0832
-5.2394    0.7527
 5.0045   -4.4867


SUMMARY OUTPUT

Regression Statistics
Multiple R            0.704163
R Square              0.495845
Adjusted R Square     0.472929
Standard Error        1.921617
Observations          24

ANOVA
             df        SS          MS          F         Significance F
Regression    1     79.89853    79.89853    21.6374     0.000123
Residual     22     81.23746     3.692612
Total        23    161.136

             Coefficients   Standard Error   t Stat      P-value    Lower 95%   Upper 95%
Intercept      0.181997        0.394721      0.461078    0.649268   -0.63661     1.0006
Δyt-1         -0.98759         0.212313     -4.6516      0.000123   -1.4279     -0.54728

The regression is ΔΔyt = 0.18 − 0.99 Δyt-1. The t-statistic on Δyt-1 is −4.65.

Then, we compare the t-statistic with the ADF critical values. In our case, t = −4.65 < −3.33, where −3.33 again comes from the table for an intercept but no trend. The sample evidence suggests that we can reject the null hypothesis.

There is no unit root. The time series is stationary and can be used for regression analysis.


We run the same regression by including the trend.

T = trend   Δyt       yt-1
  1         1.0747   -2.3478
  2         2.1198   -1.2731
  3        -0.0638    0.8467
  4         2.2543    0.7829
  5         1.3033    3.0372
  6        -3.6064    4.3405
  7        -1.6253    0.7341
  8        -1.4912   -0.8912
  9         0.8997   -2.3824
 10         0.6259   -1.4827
 11         0.9353   -0.8568
 12         2.4082    0.0785
 13         1.1581    2.4867
 14         1.1074    3.6448
 15        -1.8878    4.7522
 16         0.4921    2.8644
 17         1.9682    3.3565
 18        -1.0123    5.3247
 19         0.9252    4.3124
 20        -0.4513    5.2376
 21        -1.4966    4.7863
 22         3.0832    3.2897
 23         0.7527    6.3729
 24        -4.4867    7.1256
 25         0.5178    2.6389


SUMMARY OUTPUT

Regression Statistics
Multiple R            0.474019
R Square              0.224694
Adjusted R Square     0.154212
Standard Error        1.700045
Observations          25

ANOVA
             df        SS          MS          F         Significance F
Regression    2     18.42732     9.213658    3.187947   0.060842
Residual     22     63.58338     2.890154
Total        24     82.0107

             Coefficients   Standard Error   t Stat      P-value    Lower 95%   Upper 95%
Intercept      0.169658        0.736985      0.230206    0.820059   -1.35876     1.698073
yt-1          -0.43048         0.1799       -2.39288     0.025686   -0.80357    -0.05739
T = trend      0.079092        0.067958      1.163837    0.256957   -0.06184     0.220029

The regression is Δyt = 0.17 + 0.08 T − 0.43 yt-1. The t-statistics are 1.16 on T and −2.39 on yt-1.

Then, we compare the F-statistic with the ADF critical values. In our case, F = 3.19 < 7.24. Again, when the trend is included, the time series is not stationary.

Please repeat the steps above by differencing the series once again and including the trend. Comment on your result.
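The whole manual procedure above can also be checked in Python with the adfuller function from statsmodels (a sketch only; maxlag=0 reproduces the simple regressions used here, so the reported statistics should be close to the manual values of −2.22 and −2.39):

from statsmodels.tsa.stattools import adfuller

y = [-2.3478, -1.2731, 0.8467, 0.7829, 3.0372, 4.3405, 0.7341, -0.8912,
     -2.3824, -1.4827, -0.8568, 0.0785, 2.4867, 3.6448, 4.7522, 2.8644,
     3.3565, 5.3247, 4.3124, 5.2376, 4.7863, 3.2897, 6.3729, 7.1256,
     2.6389, 3.1567]

# regression='c' includes an intercept; 'ct' adds a linear time trend
for spec in ("c", "ct"):
    stat, pvalue = adfuller(y, maxlag=0, regression=spec)[:2]
    print(spec, round(stat, 2), round(pvalue, 4))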


We repeat the same procedure for the independent variable market price returns in EViews 6.

We start with the market price returns histogram and statistics for normality in EViews 6.

[Histogram of the market price returns, with the following summary statistics.]

Series: MARKET
Sample: 1 30
Observations: 30

Mean         0.370795
Median      -0.146900
Maximum      8.732090
Minimum     -5.539300
Std. Dev.    3.236802
Skewness     0.820624
Kurtosis     4.142626

Jarque-Bera  4.999109
Probability  0.082122

The null and alternative hypotheses are as follows:

H0: The independent variable is normally distributed

H1: The independent variable is not normally distributed

From the above table, the χ2 statistic is 4.999 and its probability value, 0.082, is above the 5% significance level, so we cannot reject H0.


We, then, check the correlogram of the market price returns in EViews 6.

Date: 02/04/15   Time: 14:52
Sample: 1 30
Included observations: 30

Lag     AC       PAC      Q-Stat    Prob
 1    -0.744   -0.744    18.303     0.000
 2     0.684    0.293    34.337     0.000
 3    -0.494    0.204    43.008     0.000
 4     0.410   -0.001    49.229     0.000
 5    -0.195    0.263    50.691     0.000
 6     0.128    0.030    51.347     0.000
 7     0.016    0.057    51.358     0.000
 8    -0.043    0.035    51.440     0.000
 9     0.102   -0.056    51.916     0.000
10    -0.013    0.206    51.924     0.000
11    -0.005   -0.072    51.926     0.000
12    -0.012   -0.283    51.933     0.000
13    -0.023   -0.065    51.963     0.000
14    -0.025   -0.231    52.001     0.000
15    -0.002   -0.137    52.001     0.000
16    -0.025    0.063    52.045     0.000
17    -0.010   -0.081    52.052     0.000
18    -0.043    0.002    52.202     0.000
19    -0.023   -0.093    52.250     0.000
20    -0.023   -0.114    52.300     0.000


Finally, we check the ADF unit root test of the market price returns in EViews 6.

Null Hypothesis: MARKET has a unit root
Exogenous: Constant
Lag Length: 0 (Automatic based on SIC, MAXLAG=7)

                                            t-Statistic   Prob.*
Augmented Dickey-Fuller test statistic      -19.81362     0.0001
Test critical values:    1% level            -3.679322
                         5% level            -2.967767
                        10% level            -2.622989

*MacKinnon (1996) one-sided p-values.

Augmented Dickey-Fuller Test Equation
Dependent Variable: D(MARKET)
Method: Least Squares
Date: 02/04/15   Time: 14:52
Sample (adjusted): 2 30
Included observations: 29 after adjustments

Variable     Coefficient   Std. Error   t-Statistic   Prob.
MARKET(-1)   -1.744788     0.088060     -19.81362     0.0000
C             0.381806     0.286830       1.331124    0.1943

R-squared 0.935650   Adjusted R-squared 0.933267   S.E. of regression 1.532821
Sum squared resid 63.43760   Log likelihood -52.49925   F-statistic 392.5795
Prob(F-statistic) 0.000000   Mean dependent var -0.319427   S.D. dependent var 5.933619
Akaike info criterion 3.758569   Schwarz criterion 3.852865   Hannan-Quinn criter. 3.788101
Durbin-Watson stat 1.286718

Please comment on the ADF unit root test result.


Simultaneous equations models

I have started this section with basic mathematical concepts. Then, I have attached an article to show you how simultaneous equations models are used in econometrics. The variables that we study are defined as endogenous and exogenous. The endogenous variables are determined within the financial or economic theory; the exogenous variables are external factors that are used to substantiate and increase the validity of the financial or economic theory. The endogenous variables are also known as jointly determined and the exogenous variables as predetermined. There is a bilateral relationship between the dependent and independent variables: the causality is a two-way relationship, where x affects y and y influences x. Thus, if we have more than one equation linking the variables, we speak of simultaneous regression models. You will find a detailed example in the research paper that I have included in this chapter. The research paper uses a different methodology by including jointly determined equations; it does not include reduced form equations or reduced form parameters. My purpose was to show you how structural equations are defined in terms of endogenous and exogenous variables with lagged periods.

Example

Let’s assume that logarithmic monthly returns of the money supply, M1, and logarithmic monthly mean returns of the 3-month Treasury constant maturity, (3MTCM), are endogenous variables and are jointly determined.

We state first a system of two linear structural regression equations. The money supply depends on the 3-month interest rate and vice versa. We are going to use the indirect least squares regression method: substitute equation (2) into equation (1).

ln M1,t = α1 + β1 ln 3MTCMt + ε1,t   (1)

ln 3MTCMt = α2 + β2 ln M1,t + ε2,t   (2)

Thus, we have equation (3):

ln M1,t = α1 + β1 (α2 + β2 ln M1,t + ε2,t) + ε1,t   (3)

By solving equation (3) we have the following:


ln M1,t = α1 + β1 α2 + β1 β2 ln M1,t + β1 ε2,t + ε1,t   (4)

Transfer β1 β2 ln M1,t to the left-hand side and simplify equation (4):

ln M1,t − β1 β2 ln M1,t = α1 + β1 α2 + β1 ε2,t + ε1,t

ln M1,t (1 − β1 β2) = α1 + β1 α2 + β1 ε2,t + ε1,t

ln M1,t = α1 / (1 − β1 β2) + β1 α2 / (1 − β1 β2) + β1 ε2,t / (1 − β1 β2) + ε1,t / (1 − β1 β2)   (5)

The coefficients or reduced form parameters of the reduced form regression, in addition to the error terms ε1 and ε2, are as follows:

α = α1 / (1 − β1 β2)

β = β1 α2 / (1 − β1 β2)

Before proceeding, you should draw a conclusion concerning the condition of identification. An equation can be exactly identified, overidentified or underidentified. Specifically, if k = m − 1, the equation is exactly identified; if k > m − 1, it is overidentified; and if k < m − 1, it is underidentified, where k is the number of predetermined (exogenous) variables excluded from the equation and m is the number of endogenous variables included in it. Only if an equation is exactly identified can we use the method of indirect least squares to estimate the coefficients. If an equation is overidentified or underidentified, the indirect least squares method is not going to work properly. To solve this problem, use in EViews 6 the method of two-stage least squares (2SLS) to estimate the coefficients. Open the workfile in EViews 6, enter your regression equation into the equation specification box and set the instrument list in terms of exogenous variables. An instrumental variable is defined as a variable that is independent of the error term of a system of equations. Check the covariance of the dependent and independent variables with the error term. In addition, compare the R2 of the two-stage least squares (2SLS) method, which includes the instrumental variable, with that of the ordinary least squares (OLS) method. If you get a negative R2, then check the variables in the equation: you have a specification problem. Please note that the limited-information maximum likelihood (LIML) method was used before the two-stage least squares (2SLS) method. Its purpose was to minimise the variance ratio RSS1 / RSS2; in contrast, 2SLS minimises the difference RSS1 − RSS2.

Application of the 2SLS is as follows:


At the first stage, regress the dependent variable on the endogenous or exogenous independent variables, estimate the reduced form equations and obtain the coefficient estimates, the standard errors, the t-statistics and the R2. Test for significance at the 5% significance level. At the second stage, regress the dependent variable of the second, third, etc. regression on the estimated values, and not on the original values, of the endogenous or dependent variable of the first equation. Then, run the ordinary least squares (OLS) regression equation and compare your results with the 2SLS.

Estimate the indirect least squares (ILS) reduced form regression equation (5) and compare the results of the coefficients, the standard errors and the t-statistics with the ordinary least squares regression (OLS). Use the coefficients of the reduced form regression to estimate the money supply function.
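As a sketch of how the two stages could be run outside EViews (an illustration only with statsmodels; the file name money.xls and the column names lnm1, ln3mtcm and lncc are hypothetical, with ln CC used as the instrument):

import pandas as pd
import statsmodels.api as sm

df = pd.read_excel("money.xls")   # hypothetical workfile with the columns below

# Stage 1: regress the endogenous regressor on the instrument
stage1 = sm.OLS(df["ln3mtcm"], sm.add_constant(df["lncc"])).fit()
df["ln3mtcm_hat"] = stage1.fittedvalues

# Stage 2: regress ln M1 on the fitted, not the original, values
stage2 = sm.OLS(df["lnm1"], sm.add_constant(df["ln3mtcm_hat"])).fit()
print(stage2.params)
# Note: the stage-2 OLS standard errors need the usual 2SLS correction;
# a dedicated 2SLS routine (such as the one in EViews) reports corrected ones.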

Time Period   M1   ln M1   Total Consumer Credit Outstanding (seasonally adjusted)   ln CC   ln 3MTCM

1990-01 795.4 6.678845133 797714.86 13.58950649 1.9756498431990-02 798.1 6.682233903 798773.19 13.59083232 2.0284113711990-03 801.5 6.686484972 798748.09 13.59080089 2.1004689091990-04 806.1 6.692207804 798747.01 13.59079954 2.0357011061990-05 804.2 6.689847994 799751.03 13.59205575 2.0358416891990-06 808.8 6.695551668 802892.74 13.59597641 2.0777734851990-07 810.1 6.697157697 806887.5 13.60093953 2.0171429341990-08 815.7 6.70404664 808758.57 13.60325572 2.0404860111990-09 820.2 6.709548213 810436.23 13.60532794 1.9765779061990-10 819.9 6.709182382 812655.11 13.60806208 1.9570282381990-11 822.1 6.711862042 813662.66 13.60930114 1.8912619511990-12 824.7 6.715019684 808230.57 13.60260266 1.889879551991-01 827.2 6.718046504 806600.5 13.60058378 1.7669617791991-02 832.6 6.724553335 807030.43 13.60111665 1.7596666261991-03 838.7 6.731853074 808351.83 13.60275268 1.7584324651991-04 843.1 6.737085575 807818.95 13.60209324 1.762237031991-05 848.8 6.743823587 807832.16 13.60210959 1.6843033621991-06 856.7 6.753087799 805994.66 13.5998324 1.7493737531991-07 861.6 6.758791126 804027.56 13.59738883 1.7049061831991-08 866.8 6.76480827 802021.92 13.59489122 1.7043347841991-09 869.7 6.768148325 800889.34 13.59347806 1.6326893011991-10 878 6.777646594 798616.98 13.59063674 1.5933968971991-11 887.6 6.788521191 798274.98 13.5902084 1.4452368971991-12 897 6.799055862 798028.97 13.58990018 1.3848159961992-01 910.4 6.813884063 798982.71 13.59109458 1.2715908181992-02 925.2 6.83000993 799640.38 13.59191738 1.3224222851992-03 936.7 6.84236306 799442.11 13.5916694 1.4203663531992-04 943.8 6.849914279 797658.14 13.58943539 1.2987043041992-05 950.6 6.857093364 797667.9 13.58944762 1.2646646511992-06 954.3 6.860978087 797276.31 13.58895659 1.3204216171992-07 963.3 6.87036489 798068.16 13.58994929 1.1432530691992-08 973.7 6.881103248 799823.8 13.59214673 1.162853146


1992-09 988 6.895682698 800198.9 13.5926156 1.0415608211992-10 1003.7 6.911448451 799555.99 13.59181184 1.0294570661992-11 1015.7 6.923333309 802697.98 13.59573381 1.0648749271992-12 1024.9 6.932350326 806118.69 13.59998627 1.1468501951993-01 1030.4 6.937702355 809325.43 13.60395638 1.0219369211993-02 1033.5 6.940706379 814058.92 13.60978803 1.0448598331993-03 1038.6 6.945628931 813656.29 13.60929331 1.1033834971993-04 1047.7 6.954352564 819756.94 13.61676316 1.0286449171993-05 1065.9 6.971574792 819759.05 13.61676573 1.0581209271993-06 1075 6.980075941 823811.81 13.6216974 1.145669351993-07 1084.5 6.98887433 829518.73 13.62860097 1.0884088951993-08 1094.2 6.997778782 834235.53 13.63427105 1.1283181821993-09 1104.1 7.006785802 840602.98 13.64187475 1.0549453421993-10 1112.9 7.0147245 847616.63 13.65018373 1.0800279671993-11 1124.2 7.024826951 856523.42 13.66063694 1.0614137721993-12 1129.8 7.029795905 865650.58 13.67123662 1.0958548671994-01 1131.6 7.031387839 872021.82 13.67856973 1.064546521994-02 1136.3 7.035532649 880352.86 13.68807808 1.151837051994-03 1140.3 7.039046665 891333.16 13.70047355 1.2783943921994-04 1141.1 7.039747988 900989.53 13.71124892 1.2297797781994-05 1143.3 7.041674097 912863.7 13.72434186 1.4045360561994-06 1145.1 7.043247248 924216.02 13.73670111 1.4467050561994-07 1150.6 7.048038824 931134.74 13.74415927 1.445573541994-08 1150.5 7.047951909 944803.17 13.7587319 1.5286050381994-09 1151.8 7.049081215 957375.6 13.77195107 1.5119253091994-10 1150.1 7.047604174 968468.71 13.78347145 1.5808424561994-11 1151 7.048386409 981426.11 13.79676201 1.6001219261994-12 1150.8 7.048212632 997301.74 13.81280865 1.7052438371995-01 1151.5 7.048820719 1010395.04 13.82585194 1.6798963761995-02 1147.5 7.045340942 1018559.11 13.83389955 1.730327231995-03 1146.8 7.044730734 1033632.09 13.84858946 1.7767193961995-04 1149.2 7.046821327 1043695.17 13.85827802 1.7130769471995-05 1145.3 7.04342189 1057470.83 13.87139061 1.7216013231995-06 1144 7.042286172 1069377.91 13.88258764 1.7297228661995-07 1145.4 7.0435092 1077737 13.89037403 1.6727256521995-08 1145.5 7.043596502 1089602.05 13.9013231 1.716614171995-09 1142 7.04053639 1106612.68 13.91681427 1.6438853471995-10 1137.3 7.036412311 1113528.51 13.92304437 1.648046551995-11 1134.1 7.033594664 1132160.17 13.93963802 1.661512721995-12 1127.5 7.027758071 1140744.36 13.94719155 1.617783941996-01 1123.5 7.024204092 1153270.3 13.9581122 1.5481173961996-02 1118.5 7.019743781 1163093.78 13.96659406 1.5535224231996-03 1122.6 7.023402703 1171684.26 13.97395281 1.6287735771996-04 1124.8 7.025360521 1180903.1 13.98179004 1.6268312231996-05 1116.6 7.018043633 1191113.69 13.9903993 1.5952507931996-06 1115.1 7.016699366 1200950.02 13.99862348 1.6546980441996-07 1112.4 7.014275122 1211897.36 14.00769776 1.6229119461996-08 1101.5 7.004428166 1220503 14.01477363 1.646558521996-09 1096.2 6.999604933 1225271.76 14.01867322 1.6070541211996-10 1086.3 6.990532705 1231336.1 14.0236104 1.5893239321996-11 1083.4 6.987859523 1242421.91 14.03257319 1.5426874231996-12 1081.3 6.9859193 1253437.09 14.04140001 1.5709805441997-01 1081.3 6.9859193 1258365.28 14.04532404 1.5510716081997-02 1078.9 6.983697283 1261850.81 14.0480901 1.5861692851997-03 1072.1 6.977374621 1264146.34 14.04990762 1.6159878431997-04 1064 6.96979067 1274087.38 14.0577407 1.6685640861997-05 1064.1 6.96988465 1278678.42 14.06133762 1.6014973791997-06 1065.6 6.9712933 1282858.52 14.06460136 1.623528646


1997-07 1066.3 6.971949991 1288782.03 14.06920817 1.6031573611997-08 1074.2 6.979331477 1294219.24 14.07341817 1.6638359061997-09 1067.6 6.973168418 1301634.97 14.0791317 1.579728191997-10 1065.6 6.9712933 1309389.39 14.08507147 1.5872812121997-11 1070.1 6.975507381 1312457.18 14.08741165 1.5587759981997-12 1072.8 6.978027332 1324757.33 14.09673985 1.6241123231998-01 1074.2 6.979331477 1320044.02 14.09317564 1.5493018081998-02 1078 6.982862751 1324342.72 14.09642683 1.6031179841998-03 1077.1 6.982027523 1332903.44 14.10287016 1.6412888781998-04 1076.7 6.981656087 1360456.29 14.12333071 1.5793535181998-05 1078.3 6.983141006 1363080.86 14.12525803 1.5882629151998-06 1076.9 6.981841822 1373942.33 14.13319478 1.6339531261998-07 1075 6.980075941 1380720.85 14.13811628 1.5837186861998-08 1075.2 6.98026197 1386666.45 14.14241319 1.6175950291998-09 1080.2 6.984901488 1397289.73 14.15004501 1.5093161761998-10 1086.1 6.990348577 1405396.66 14.15583014 1.3572399771998-11 1094.3 6.997870168 1408442.69 14.15799518 1.4108708231998-12 1096.1 6.999513704 1420996.44 14.1668689 1.4589183131999-01 1097.4 7.000699025 1431236.93 14.17404961 1.3919923821999-02 1097.1 7.000425614 1441314.59 14.18106616 1.4649900081999-03 1097.2 7.000516759 1450878.82 14.18768001 1.5187518081999-04 1102 7.00488199 1457670.57 14.19235022 1.4836685251999-05 1102.8 7.005607679 1468146.61 14.19951135 1.4839826631999-06 1099.7 7.002792694 1479315.32 14.20708992 1.5508453161999-07 1098.5 7.001700892 1492377.6 14.21588111 1.4979983511999-08 1099.2 7.002337921 1503226.61 14.22312443 1.5838403451999-09 1097 7.00033446 1511059.26 14.22832146 1.5267477641999-10 1102.2 7.005063461 1517281.83 14.23243102 1.5653367381999-11 1111.4 7.013375761 1525334.42 14.23772424 1.5586229731999-12 1122.9 7.023669904 1531105.96 14.24150088 1.6339184132000-01 1121.6 7.022511516 1538520.18 14.24633159 1.6557760932000-02 1109.1 7.011304155 1549185.29 14.25323973 1.6964016692000-03 1107.6 7.009950791 1561450.69 14.26112588 1.7688171362000-04 1114.6 7.016250875 1570527.74 14.26692226 1.7102782272000-05 1105.6 7.008143453 1583507.92 14.27515315 1.7463982022000-06 1102.9 7.005698353 1609548.04 14.29146398 1.7684598252000-07 1102.8 7.005607679 1629751.44 14.30393807 1.7664416612000-08 1100.9 7.003883306 1651615.4 14.3172644 1.8369544972000-09 1099.5 7.00261081 1669000.81 14.32773569 1.7717187182000-10 1098.7 7.001882942 1684599.32 14.3370383 1.7931978282000-11 1092.7 6.996406977 1700944.54 14.34669427 1.8029092012000-12 1087.9 6.992004511 1716969.72 14.3560715 1.7324137912001-01 1097.1 7.000425614 1729851.68 14.36354623 1.5739458912001-02 1101.2 7.004155773 1748702.02 14.37438437 1.5591966962001-03 1109.1 7.011304155 1760541.84 14.38113218 1.5119253092001-04 1116.1 7.017595745 1773098.33 14.38823904 1.3289678662001-05 1119.2 7.020369423 1781177.46 14.3927852 1.2647406382001-06 1126.1 7.026515615 1792368.09 14.39904826 1.2712308392001-07 1139 7.037905963 1797350.57 14.40182423 1.2325602612001-08 1149.9 7.047430261 1806359.04 14.4068238 1.234839322001-09 1204.8 7.094068857 1814252.38 14.41118403 0.8278965812001-10 1166.2 7.061505879 1829840.26 14.41973923 0.7429720062001-11 1171.3 7.065869522 1857842.79 14.43492658 0.5499589212001-12 1182.9 7.07572433 1867852.87 14.44030013 0.4935341272002-01 1191.3 7.082800427 1867418.69 14.44006766 0.4306524732002-02 1190.5 7.082128666 1881905.37 14.44779532 0.5122246452002-03 1192.4 7.083723362 1894690.33 14.45456597 0.5527898232002-04 1187.7 7.079773943 1903957.99 14.45944543 0.556754556

201

Page 202: Introduction to Econometrics 2

2002-05 1189.3 7.081120178 1914613.99 14.46502659 0.5211202782002-06 1192.4 7.083723362 1927180.98 14.47156886 0.549276812002-07 1199.8 7.089910155 1939211.4 14.47779195 0.4941658792002-08 1186.7 7.078931625 1947500.09 14.4820571 0.4991210282002-09 1195.7 7.086487067 1950651.39 14.48367392 0.4593583262002-10 1204.7 7.093985852 1957151.3 14.48700056 0.4317824162002-11 1209.3 7.097796959 1965179.13 14.49109396 0.1268424042002-12 1220.4 7.106933953 1972112.21 14.49461571 0.1448871262003-01 1227.1 7.112408941 1982561.31 14.49990016 0.0821808882003-02 1238 7.121252453 1993405.69 14.50535514 0.118671532003-03 1238.6 7.121736989 2001208.15 14.50926163 0.1405897572003-04 1250.3 7.131138802 2014099.78 14.51568289 0.095310182003-05 1268.7 7.145748033 2027502.51 14.5223153 0.037908664

2003-06 1280 7.154615357 2033268.41 14.52515511-

0.065936323

2003-07 1288.3 7.161078799 2040064.39 14.52849193-

0.127339422

2003-08 1294.5 7.165879799 2049124.34 14.53292311-

0.035832886

2003-09 1297.6 7.168271683 2057090.79 14.5368033-

0.089791451

2003-10 1297.3 7.168040461 2065495.1 14.54088051-

0.104394799

2003-11 1297.8 7.168425802 2066400.09 14.54131856-

0.141720381

2003-12 1306.6 7.175183622 2077360.69 14.54660875-

0.133779871

2004-01 1306 7.17472431 2088243.04 14.55183362-

0.1971377712004-02 1320.7 7.185917178 2095340.36 14.55522656 -0.10869566

2004-03 1329.3 7.192407767 2109720.32 14.56206595-

0.047638653

2004-04 1333.2 7.195337346 2112716.15 14.56348495-

0.087478921

2004-05 1332.6 7.1948872 2120839.15 14.56732239-

0.0137357232004-06 1342.2 7.202065338 2127989.41 14.57068815 0.2066445522004-07 1340.6 7.200872554 2137092.98 14.57495704 0.259212452004-08 1353.2 7.210227437 2143435.12 14.5779203 0.4075840742004-09 1362.1 7.216782905 2160148.3 14.58568743 0.4731237572004-10 1360.7 7.215754552 2176719.34 14.59332941 0.5333100742004-11 1374.9 7.22613628 2183746.35 14.59655247 0.6453846072004-12 1375.9 7.226863341 2192246.17 14.60043722 0.7542831812005-01 1366.4 7.219934823 2196777.63 14.60250213 0.8112410272005-02 1371.6 7.223733221 2211558.71 14.60920812 0.8939686452005-03 1371.9 7.223951919 2220199.19 14.61310748 0.9837055472005-04 1358.2 7.213915573 2232535.38 14.61864844 1.0394213422005-05 1366 7.21964204 2232671.52 14.61870942 1.01975072005-06 1379.2 7.2292589 2247833.43 14.62547739 1.1106606272005-07 1366.9 7.220300681 2257230.94 14.62964937 1.1413372332005-08 1377.2 7.227807731 2267044.88 14.63398773 1.2579667962005-09 1377.5 7.228025541 2271633.28 14.63600964 1.2046089662005-10 1375.6 7.226645279 2276843.14 14.63830045 1.2842352682005-11 1376.1 7.22700869 2283661.27 14.64129053 1.2845887742005-12 1374.9 7.22613628 2290928.13 14.64446759 1.3324859452006-01 1379.9 7.229766312 2280586.5 14.6399432 1.3716420842006-02 1379.3 7.229331403 2286233.24 14.64241615 1.4615177822006-03 1383.7 7.23251635 2295501.73 14.646462 1.53208723

202

Page 203: Introduction to Econometrics 2

2006-04 1381 7.230563153 2307064.78 14.65148662 1.5008499722006-05 1387.5 7.235258846 2317243.66 14.65588896 1.5317113612006-06 1373.8 7.225335902 2305204.7 14.65068004 1.5928464872006-07 1370 7.222566019 2313707.73 14.65436187 1.5758318842006-08 1371.8 7.223879025 2327885.82 14.66047104 1.6273632462006-09 1362.7 7.217223305 2337322.47 14.66451659 1.5465488242006-10 1370.3 7.222784973 2335770.83 14.66385251 1.5721135822006-11 1370.5 7.222930916 2347784.33 14.6689826 1.5774780482006-12 1366.5 7.220008005 2361804.71 14.67493659 1.5552331162007-01 1372.8 7.224607729 2365897.79 14.67666812 1.5392953122007-02 1364.2 7.218323455 2376638.03 14.68119745 1.5902550932007-03 1366.9 7.220300681 2388460.49 14.68615957 1.6253112622007-04 1378.2 7.228533579 2396709.56 14.68960734 1.6107703582007-05 1381.6 7.230997527 2413374.04 14.69653634 1.5381753862007-06 1365.4 7.219202705 2423664.02 14.70079101 1.5564389032007-07 1368.9 7.221762777 2439288.64 14.70721701 1.5550777192007-08 1375.2 7.226354454 2457917.13 14.71482485 1.4625506452007-09 1373.4 7.225044696 2471141.78 14.72019086 1.333552652007-10 1379.9 7.229766312 2483040.24 14.72499427 1.3425241842007-11 1371 7.22329568 2497934.15 14.73097461 1.1149925482007-12 1374.3 7.22569979 2506291.94 14.7343149 1.0717466822008-01 1379.5 7.229476394 2520082.07 14.73980203 0.9457651072008-02 1381.5 7.230925144 2533699.77 14.74519115 0.7277786252008-03 1388.3 7.235835256 2541204.83 14.74814887 0.2008005572008-04 1392.6 7.238927783 2549107.35 14.7512538 0.27141412008-05 1393.9 7.239860853 2551909.56 14.75235248 0.5212258982008-06 1401.2 7.245084291 2556403.56 14.75411197 0.636828752008-07 1418.5 7.257355254 2559326.24 14.75525459 0.4593492462008-08 1405 7.247792582 2550873.02 14.75194622 0.561247112008-09 1461.8 7.287423832 2542203.4 14.74854174 0.090339167

2008-10 1474.2 7.295870749 2542980.63 14.74884743-

0.421461943

2008-11 1512.5 7.32151919 2534856.42 14.74564755-

1.745830537

2008-12 1604.9 7.380816728 2525913.47 14.74211333-

3.423176288

2009-01 1584.4 7.367961066 2532744.29 14.74481397-

2.139384578

2009-02 1568.7 7.35800253 2522458.19 14.74074446-

1.271181554

2009-03 1579.2 7.364673669 2504257.06 14.73350267-

1.5328978352009-04 1612.7 7.385665072 2496761.33 14.73050498 -1.89107767

2009-05 1616.8 7.388204166 2491629.59 14.72844751-

1.780395711

2009-06 1656.6 7.412522588 2480976.51 14.72416279-

1.722403027

2009-07 1658.6 7.413729152 2475799.39 14.72207389-

1.739249524

2009-08 1657.6 7.413126052 2463980.44 14.71728867-

1.760814665

2009-09 1663.3 7.41655886 2456877.58 14.71440183-

2.139384578

2009-10 1678.7 7.425774963 2449197.95 14.71127116-

2.646356632

2009-11 1680.3 7.426727628 2427738.44 14.7024707-

3.0545727742009-12 1695.8 7.435909885 2420165.69 14.69934656 -

203

Page 204: Introduction to Econometrics 2

2.953172659

2010-01 1675.4 7.423807222 2416604.15 14.69787387-

2.887518689

2010-02 1701.9 7.439500553 2406944.59 14.6938687-

2.268183666

2010-03 1712.3 7.445592775 2410095.11 14.69517677-

1.894225627

2010-04 1700.4 7.438618796 2404013.77 14.6926503-

1.818476858

2010-05 1710.6 7.444599465 2393945.2 14.68845327-

1.881371628

2010-06 1728.4 7.454951404 2390881.7 14.68717277-

2.090410573

2010-07 1721.4 7.450893192 2388914.27 14.68634954-

1.894094264

2010-08 1747.1 7.465712549 2389541.46 14.68661205-

1.864330162

2010-09 1764.1 7.475395924 2393501.7 14.688268-

1.931021537

2010-10 1780.5 7.484649503 2401948.97 14.69179104-

2.054981244

2010-11 1826 7.509883061 2403738.89 14.69253595-

2.040220829

2010-12 1836.7 7.515725762 2522228.3 14.74065331-

2.004092104

2011-01 1854.8 7.525532153 2519472.94 14.73956029-

1.9326649222011-02 1876.4 7.537110326 2531785.14 14.7444352 -2.07544952

2011-03 1890.4 7.544543726 2538999.25 14.74728057-

2.298246691

2011-04 1902.5 7.55092409 2543467.28 14.74903878-

2.887518689

2011-05 1940.6 7.570752483 2551868.18 14.75233627-

3.241865343

2011-06 1952.4 7.576814664 2562461.1 14.75647872-

3.2894933922011-07 1998.8 7.600302279 2578301.68 14.76264148 -3.33220451

2011-08 2112.5 7.655627359 2570828.59 14.75973881-

3.715312711

2011-09 2123.6 7.660868041 2580823.49 14.76361909-

4.328916809

2011-10 2142.2 7.669588617 2586741.9 14.76590969-

4.012106464

2011-11 2159.9 7.677817203 2602705.75 14.77206213-

4.364008129

2011-12 2160.9 7.678280081 2615654.77 14.77702501-

4.518158809

2012-01 2201.9 7.697075903 2628351.6 14.78186744-

3.462106135

2012-02 2216.8 7.703819994 2640274.09 14.78639329-

2.434756866

2012-03 2223.5 7.70683781 2650873.36 14.79039971-

2.475856814

2012-04 2252.6 7.719840384 2663259.94 14.79506147-

2.479208629

2012-05 2262.6 7.724269873 2683143.12 14.80249947-

2.457460673

2012-06 2267.4 7.72638908 2690996.72 14.80542221-

2.392197252

204

Page 205: Introduction to Econometrics 2

2012-07 2312.6 7.746127712 2695220.29 14.8069905-

2.378092646

2012-08 2340 7.757906208 2713842.11 14.81387594-

2.276832597

2012-09 2374.3 7.772457936 2723677.77 14.81749365-

2.302585093

2012-10 2420.9 7.791894651 2739920.72 14.82343954-

2.347036856

2012-11 2406.5 7.785928689 2753917.31 14.82853493-

2.465104022

2012-12 2445.6 7.802045771 2768112.81 14.83367635-

2.7080502012013-01 2466.2 7.810433783 2781871.77 14.83863456 0

In EViews 6, use the method of two-stage least squares, (2SLS), to estimate the coefficients. Open the workfile in EViews 6, enter your regression equation in the equation specification box, and set the instrument list using the exogenous (predetermined) variables.
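If you want to replicate the two-stage procedure outside EViews, the following minimal sketch implements 2SLS directly with numpy. The variable names (y, X_endog, X_exog, Z) are placeholders for whichever columns of the workfile you treat as dependent, endogenous, exogenous and instrumental; this is an illustration of the mechanics, not the EViews routine itself.

import numpy as np

def two_stage_least_squares(y, X_endog, X_exog, Z):
    """2SLS: instrument the endogenous regressors, then run OLS.

    y       -- (n,) dependent variable
    X_endog -- (n, k1) endogenous regressors
    X_exog  -- (n, k2) included exogenous regressors
    Z       -- (n, m) excluded instruments, m >= k1
    """
    n = len(y)
    const = np.ones((n, 1))
    # First stage: regress each endogenous regressor on all instruments
    # (excluded instruments plus the included exogenous variables).
    W = np.hstack([const, Z, X_exog])
    beta1, *_ = np.linalg.lstsq(W, X_endog, rcond=None)
    X_hat = W @ beta1                       # fitted endogenous regressors
    # Second stage: OLS of y on the fitted values and the exogenous regressors.
    X2 = np.hstack([const, X_hat, X_exog])
    beta2, *_ = np.linalg.lstsq(X2, y, rcond=None)
    return beta2                            # [const, endog, exog] coefficients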

Exercise

Repeat the same exercise by using a system of three linear regression equations. Identify the endogenous and exogenous variables. Then determine whether each equation is exactly identified, overidentified or underidentified. Use macroeconomic theory from an economics textbook to justify your reasoning.

Let’s assume that logarithmic monthly returns of the money supply, M1, logarithmic monthly returns of the seasonally adjusted total consumer credit outstanding, (CCO) and logarithmic monthly mean returns of the 3 month Treasury constant maturity, (3MTCM), are endogenous variables and are jointly determined.

\ln M1_t = \alpha_1 + \beta_1 \ln 3MTCM_t + \beta_2 \ln CCO_t + \varepsilon_{1,t}   (1)

\ln 3MTCM_t = \alpha_2 + \beta_3 \ln M1_t + \beta_4 \ln CCO_t + \varepsilon_{2,t}   (2)

\ln CCO_t = \alpha_3 + \beta_5 \ln M1_t + \beta_6 \ln 3MTCM_t + \varepsilon_{3,t}   (3)
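For the identification part, the order condition is a quick first check: in each equation, the number of excluded exogenous variables must be at least the number of included endogenous variables minus one. A small helper, sketched here with hypothetical counts (you supply the counts from your own specification):

def order_condition(total_exog, included_exog, included_endog):
    """Classify one equation of a simultaneous system by the order condition."""
    excluded = total_exog - included_exog   # instruments available
    needed = included_endog - 1             # instruments required
    if excluded < needed:
        return "underidentified"
    if excluded == needed:
        return "exactly identified"
    return "overidentified"

# Hypothetical example: a system with 3 exogenous variables, of which this
# equation includes 1, and 2 endogenous variables appearing in the equation.
print(order_condition(total_exog=3, included_exog=1, included_endog=2))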

Good luck!


To help you, I have attached the layout of a two-stage least squares regression with different dependent and independent variables. The endogenous variable is the discount, and the exogenous variables are the book-to-market ratio, bm, expenses, ex, momentum, m, the market, makt, size, s, and sentiment, se.

Dependent Variable: DIS
Method: Two-Stage Least Squares
Date: 10/05/16   Time: 17:05
Sample (adjusted): 3 158
Included observations: 156 after adjustments
Instrument list: S SE EX BM M MAKT

Variable      Coefficient   Std. Error   t-Statistic   Prob.
C             -4.087345     4.274785     -0.956152     0.3405
BM            -0.127326     0.160254     -0.794526     0.4282
EX             0.550371     0.443459      1.241086     0.2165
M             -0.629953     0.383585     -1.642279     0.1026
MAKT           0.288494     0.044681      6.456687     0.0000
S              0.105861     0.037678      2.809623     0.0056
SE             0.811759     0.308287      2.633125     0.0094

R-squared            0.320213    Mean dependent var   0.101304
Adjusted R-squared   0.292839    S.D. dependent var   5.033194
S.E. of regression   4.232556    Sum squared resid    2669.265
F-statistic          11.69773    Durbin-Watson stat   2.081605
Prob(F-statistic)    0.000000    Second-Stage SSR     2669.265


Exercise

Consider the following two-equation model:

y_1 = \alpha_1 + \beta_2 z_1 + \beta_3 x_1 + \varepsilon_{1,t}   (1)

y_2 = \alpha_2 + \beta_3 z_2 + \beta_4 x_1 + \varepsilon_{2,t}   (2)

where y_1 and y_2 are endogenous variables, the independent variables are exogenous, and \varepsilon_{1,t} and \varepsilon_{2,t} are the error terms.

You are required to do the following:

(a) Transform the structural regression equations into reduced-form regressions. State the reduced-form parameters.

(b) Identify each equation.
(c) Would you use two-stage least squares regression?

Simultaneous regressions and economic variables

I have included several economic variables and their equations. Please formulate systems of simultaneous equations and apply econometric tests such as the two-stage least squares regression.

The main variables used in macroeconomic analysis are Yd = disposable income, Y = national income, C = consumer expenditure, I = investment, G = government expenditure, X = exports, M = imports, S = saving, AD = aggregate demand, AS = aggregate supply, W = withdrawals, and J = injections. Based on the above variables, economists construct economic equations to show the relationship and the degree of association between the variables. In this direction, the marginal propensities to save and to consume are used to show the change that takes place in the variables through multipliers. For example, the marginal propensity to consume, MPC, is the ratio of the change in consumption to the change in national or disposable income.

As an example, we mention consumption as a linear function of disposable income. Thus, the function is represented as follows:

C = f(Yd)

The same function could apply for national income and the mathematical notation will be as follows:

C = f(Y)


Another example is the saving function, defined as disposable income minus taxes minus consumption. Thus, the function is represented as follows:

S = Yd – T - C

The same function could apply for national income and the mathematical notation will be as follows:

S = Y – T - C

Exercise

The marginal propensity to consume is 0.85, and the disposable income of the household increases from 700 to 1000 Euro. Calculate the change in consumption.

The equation of the marginal propensity to consume, MPC, is as follows:

MPC = ΔC / ΔYd

0.85 = ΔC / (1000 − 700) = ΔC / 300

ΔC = 0.85 × 300 = 255

The change in consumption is 255 Euro.

Exercise

The marginal propensity to save is 0.50, and the disposable income of the household increases from 400 to 800 Euro. Calculate the change in saving.

The equation of the marginal propensity to save, MPS, is as follows:

MPS = ΔS / ΔYd

The formula of aggregate demand for goods and services is as follows:

AD = C+ I + G +X – M

where:
C is consumption
I is investment
G is government expenditure
X is exports
M is imports


Investment, government expenditure and exports are injections and are represented by the letter J.

J = I + G + X

Saving, taxes and imports are withdrawals and are represented by the letter W.

W = S + T + M

The circular flow of income creates surpluses or deficits. For example, if exports are greater than imports, there is a trade surplus. If the government receives less in income taxes than it spends, it records a deficit. If saving is less than investment, the companies are recording a deficit.

Exercise

In an open economy, consumption is represented by the function C = 20 + 0.5Y and exports are represented by the function X = 10 + 0.20Y. Y = 200, I = 20, T = 10, G = 100 and M = 45.

Calculate the values of consumption, C, saving, S, and exports, X. Is the open economy recording a surplus or a deficit? Is the economy recording an equilibrium national income? In other words, does the open economy satisfy the condition that withdrawals equal injections, W = J?

Solution

Consumption is calculated through the given function C = 20 + 0.5Y with Y = 200:

C = 20 + 0.5 * 200 = 20 + 100 = 120.

Saving is calculated as the difference of income minus taxation minus consumption.

S = Y – T – C
S = 200 – 10 – 120 = 70

Exports are calculated through the given function X = 10 + 0.20Y with Y = 200:

X = 10 + 0.20*200 = 10 + 40 = 50

Exports exceed imports, as 50 > 45. Thus, the open economy is recording a surplus.


We test whether the open economy satisfies the condition that withdrawals equal injections, W = J.

W = S + T + M
W = 70 + 10 + 45 = 125

J = I + G + X
J = 20 + 100 + 50 = 170

Injections are greater than withdrawals, and therefore the open economy is not recording an equilibrium national income. The fact that injections are greater than withdrawals means that we have a higher level of national income and employment.
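The whole calculation can be checked in a few lines. This sketch simply re-evaluates the given functions, so you can reuse it for the similar exercise below by swapping in the new parameters:

# Given values from the worked example above.
Y, I, T, G, M = 200, 20, 10, 100, 45

C = 20 + 0.5 * Y          # consumption function
S = Y - T - C             # saving
X = 10 + 0.20 * Y         # export function

W = S + T + M             # withdrawals
J = I + G + X             # injections

print(f"C={C}, S={S}, X={X}")                  # C=120.0, S=70.0, X=50.0
print(f"W={W}, J={J}, equilibrium: {W == J}")  # W=125.0, J=170.0, False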

Exercise

In an open economy, consumption is represented by the function C = 10 + 0.35Y and imports are represented by the function M = 20 + 0.70Y. Y = 700, I = 40, T = 20, G = 200 and X = 55.

Calculate the values of consumption, C, saving, S, and imports, M. Is the open economy recording a surplus or a deficit? Is the economy recording an equilibrium national income? In other words, does the open economy satisfy the condition that withdrawals equal injections, W = J?

Solution

This problem is similar to the case above. Good luck!

Exercise

An open economy has the consumption function C = 70 + 0.85Y.

The disposable income has increased from 700 Euro to 1000 Euro. Using the marginal propensity to consume, calculate the change in consumption.

Solution

MPC = ΔC / ΔYd

Please complete the calculation.

Exercise

Calculate the national income from the following equation:


Y = C + I + X – M
If C = 100, I = 50, X = 40, M = 30

Y =…………………..

Please complete the equation.

Exercise

Money demand is given by the function, MD = 0.55Y – 100r. The money supply is MS = 400 and r = 0.5.

Find the LM equation in terms of Y

Solution

Money market equilibrium is obtained when MD =MS.

0.55Y – 100 r = 400

0.55Y – 100 * 0.5 = 400

0.55Y – 50 = 400

0.55Y = 400 +50

Y = 450 / 0.55 = 818.18 (to 2 d.p.)

Exercise

A closed economy has the following functions:

Consumption: C = 90 + 0.33Y
Investment: I = 100 – 300r
Saving: S = 50 + 20r

Find the IS equation in terms of Y.

Solution

For a closed economy, goods market equilibrium requires Y = C + I (equivalently, S = I).

Substitute the given functions and solve for Y.
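One way to carry out the substitution is symbolically. The sketch below assumes the goods-market condition Y = C + I discussed above, and also reproduces the LM calculation from the previous exercise:

from sympy import Eq, solve, symbols

Y, r = symbols("Y r")

# IS: goods-market equilibrium Y = C + I with the given functions.
C = 90 + 0.33 * Y
I = 100 - 300 * r
print(solve(Eq(Y, C + I), Y))    # [283.58... - 447.76...*r]

# LM: money-market equilibrium MD = MS from the previous exercise.
print(solve(Eq(0.55 * Y - 100 * r, 400), Y))    # [181.81...*r + 727.27...]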


Systems of linear equations

There are two methods to solve a system of linear equations: elimination and substitution.

Consider the following two equations and the elimination method:

2x – 4y = 2 (1)
3x + 4y = 8 (2)

The first observation is that the –4y and +4y terms cancel when the equations are added. We add the equations and we get:

2x – 4y = 2
3x + 4y = 8

5x – 4y + 4y = 10
5x = 10
x = 10/5 = 2. We substitute this value in the first equation and we get the y value.

2x – 4y = 2
4 – 4y = 2
–4y = 2 – 4
4y = 4 – 2
y = 2/4 = 0.5

Let’s try the substitution method:

2x – 4y = 2 (1)
3x + 4y = 8 (2)

We use equation (1) and we solve for the x value.

2x – 4y = 2
x = (2 + 4y) / 2

We substitute the x value into the second equation.


3 * ((2 + 4y) / 2) + 4y = 8

(6 + 12y + 8y) / 2 = 8

20y = 16 – 6

y = 10/20 = 0.5

We then substitute the y value into equation (1) and we get the following result:

2x – 4y = 2

2x – 4*0.5 = 2

2x – 2 = 2
2x = 4
x = 4/2 = 2

Let’s try another example:

4x + y = 5 (1)
5x – 4y = 8 (2)

If you want to try the elimination method, then you have to eliminate the y value. Multiply equation (1) by 4.

16x + 4y = 20
5x – 4y = 8

Then, add both equations to eliminate the y value.
16x + 5x + 4y – 4y = 20 + 8
21x = 28
x = 28/21 = 1.3333 (to 4 d.p.).

Substitute the x value into equation (1) to find the y value.
4x + y = 5
4 * 1.3333 + y = 5
5.3332 + y = 5
y = 5 – 5.3332 = –0.3332

Let’s try the substitution method:
4x + y = 5
x = (5 – y) / 4

We then input the x value into equation (2) and we have the following result.

5x – 4y = 8


5 * ((5 – y) / 4) – 4y = 8

(25 – 5y – 16y) / 4 = 8

–21y = 32 – 25

y = –7/21 = –0.33 (to 2 d.p.).

We then substitute the y value into 4x + y = 5:
4x – 0.33 = 5
x = (5 + 0.33) / 4 = 1.33 (to 2 d.p.).

Please try the following equations:

4x + 2y = 4
6x + 4y = 6

5x + 8y = 10
4x + 8y = 12

2x + 4y = 2
3x + y = 8

Let’s try to solve a system of three equations by substitution:

x + y = 4 (1)
y + z = 6 (2)
x + z = 2 (3)

Let’s solve for x from equation (1): x = 4 – y. Then, we substitute the x value into equation (3):

4 – y + z = 2
–y = 2 – 4 – z
y = 4 + z – 2
y = 2 + z

We then substitute the y value into equation (2).

2 + z + z = 6
2 + 2z = 6
2z = 6 – 2


z = 4/2 = 2. Thus, z = 2. Then, equation (2) will become as follows:

y + 2 = 6
y = 6 – 2 = 4. Thus, y = 4. Then, equation (1) will become as follows:

x + 4 = 4
x = 4 – 4 = 0. Thus, x = 0.
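Hand elimination and substitution can be verified numerically. As a check, this sketch solves the three-equation system with numpy, and the same pattern works for the two-variable systems above:

import numpy as np

# Coefficient matrix and right-hand side of the system
# x + y = 4, y + z = 6, x + z = 2.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
b = np.array([4.0, 6.0, 2.0])

print(np.linalg.solve(A, b))   # [0. 4. 2.] -> x = 0, y = 4, z = 2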

Exercises

2x + y = 4
y + 4z = 8
x + 2z = 6

4x + 2y = 6
2y + 2z = 4
x + 4z = 10

8x + 2y = 3
4y + 6z = 8
2x + 3z = 12


Application of an Unrestricted Vector Autoregressive system to the term structure of US interest rates. Evidence from short-, medium- and long-term yields of US interest rates.

Preface

In this article, we investigate the effects of the logarithmic returns of macroeconomic variables, namely the seasonally adjusted money supply, (M1), the total index of industrial production, (IP), and the seasonally adjusted total consumer credit outstanding, (CCO), on the logarithmic mean monthly returns of the US term structure of interest rates. We have applied an Unrestricted Vector Autoregressive system to check for exogeneity tests, impulse responses and variance decompositions of the macro factors on the logarithmic mean returns of the 3 month, 5 year and 10 year Treasury with constant maturities. Impulse responses showed that the magnitude of a shock, positive or negative, between two variables gradually decreases and then dies off slowly as the number of periods increases. Variance decompositions showed that the variance of the macro factors increases or decreases in percentage terms in relation to the monthly mean returns of the US interest rates. The data that we have used are monthly returns from 01/01/1990 to 01/01/2013, a total of 276 observations. The data were obtained from the Federal Reserve Statistical Release Department, and the symbols of the series are H.6, G.17, G.19, and H.15.

Keywords: seasonally adjusted money supply, (M1), total index of industrial production, (IP), seasonally adjusted total consumer credit outstanding, (CCO), 3 month Treasury with constant maturity, 5-year and 10-year Treasury with constant maturities, Vector Autoregressive system, block exogeneity tests, impulse responses, variance decompositions.


Introduction

This article will focus on modeling the logarithmic monthly returns of the seasonally adjusted money supply, (M1), the total index of industrial production, (IP), and the seasonally adjusted total consumer credit outstanding, (CCO), on the logarithmic mean monthly returns of the US interest rates. By using EViews, the model will be tested to validate the hypotheses that will be formulated.

The vector autoregressive (VAR) model is used to analyze multivariate time series and was developed by Sims (1980). It is a generalization of univariate time series models and a helpful tool for macro analysis. The estimation output helps the researcher to carry out pairwise Granger causality tests, impulse responses, (IRs), and variance decompositions, (VD). Specifically, the impulse responses are used to test the magnitude of the shocks between two variables: the first variable generates the innovations and is known as the impulse, and the second variable registers the reaction and is known as the response. The impulses are orthogonalized through the inverse of the Cholesky factor of the residual covariance matrix. The variance decomposition shows the effects of a shock and the variation of an endogenous variable in relation to the other variables in the Unrestricted Vector Autoregressive system. The lagged variables that will be analyzed have to be stationary in order to carry out the joint significance regression tests.

The rest of the paper is organized as follows. Section 1 describes the methodology and the data. Section 2 is an analysis of statistical and econometric tests and Section 3 summarizes and concludes.


1. Methodology and data description

In this article, we are going to use an unrestricted vector autoregressive system, (UVAR), to check for pairwise Granger causality exogeneity tests, impulse responses and variance decompositions of the macro factors on the logarithmic mean returns of the 3 month, 5 year and 10 year Treasury with constant maturities. Unrestricted vector autoregressive models have been studied by various researchers, such as Alexander (2003), Brooks (2002), Amisano and Giannini (1997), Boswijk (1995), Christiano, Eichenbaum and Evans (1999), Doornik and Hansen (1994), and Fisher (1932).

By using the lag length criteria reported in Section 2, we found that two of the five criteria indicate six lags as the optimal model. Thus, the mathematical notation of the Unrestricted Vector Autoregressive system will include six lags; for simplicity, we then use two lags for the pairwise equations. The combinations of pairwise equations that have been used are illustrated in equations (1) to (30).

Let’s assume that the logarithmic monthly returns of the money supply, M1, and the logarithmic monthly mean returns of the 3 month Treasury constant maturity, (3MTCM), are endogenous variables and are jointly determined by a UVAR. Let a constant be the only exogenous variable. By taking 6 lags of the endogenous variables, the mathematical equations, (1) and (2), are as follows:

\ln M1_t = \alpha_{11} \ln M1_{t-1} + \alpha_{12} \ln 3MTCM_{t-1} + b_{11} \ln M1_{t-2} + b_{12} \ln 3MTCM_{t-2} + c_{11} \ln M1_{t-3} + c_{12} \ln 3MTCM_{t-3} + d_{11} \ln M1_{t-4} + d_{12} \ln 3MTCM_{t-4} + e_{11} \ln M1_{t-5} + e_{12} \ln 3MTCM_{t-5} + f_{11} \ln M1_{t-6} + f_{12} \ln 3MTCM_{t-6} + c_1 + \varepsilon_{1t}   (1)

\ln 3MTCM_t = \alpha_{21} \ln M1_{t-1} + \alpha_{22} \ln 3MTCM_{t-1} + b_{21} \ln M1_{t-2} + b_{22} \ln 3MTCM_{t-2} + c_{21} \ln M1_{t-3} + c_{22} \ln 3MTCM_{t-3} + d_{21} \ln M1_{t-4} + d_{22} \ln 3MTCM_{t-4} + e_{21} \ln M1_{t-5} + e_{22} \ln 3MTCM_{t-5} + f_{21} \ln M1_{t-6} + f_{22} \ln 3MTCM_{t-6} + c_2 + \varepsilon_{2t}   (2)

where \alpha_{ij}, b_{ij}, c_{ij}, d_{ij}, e_{ij} and f_{ij} are the parameters to be estimated, c_i is the constant and \varepsilon_{it} is the error term of the UVAR regression equation.
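As an illustration of how such a pair of UVAR equations can be estimated outside EViews, the sketch below uses statsmodels; the file and column names (returns.csv, rlm1, rln3) are assumptions for the example, not part of the study:

import pandas as pd
from statsmodels.tsa.api import VAR

# df is assumed to hold the two stationary return series as columns,
# e.g. the log returns of M1 and of the 3 month Treasury constant maturity.
df = pd.read_csv("returns.csv", index_col=0, parse_dates=True)  # hypothetical file

model = VAR(df[["rlm1", "rln3"]])
res = model.fit(6)                       # UVAR with 6 lags, constant included

# Pairwise Granger causality (block exogeneity) test.
print(res.test_causality("rln3", ["rlm1"], kind="f").summary())

# Impulse responses and variance decompositions over 10 periods.
res.irf(10).plot(orth=True)              # orthogonalised via the Cholesky factor
print(res.fevd(10).summary())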

Let’s assume that the logarithmic monthly returns of the money supply, M1, and the logarithmic monthly mean returns of the 5 year Treasury constant maturity, (5YTCM), are endogenous variables and are jointly determined by a UVAR. Let a constant be the only exogenous variable. By taking two lags of the endogenous variables for simplicity, the mathematical equations, (3) and (4), are as follows:

\ln M1_t = \alpha_{11} \ln M1_{t-1} + \alpha_{12} \ln 5YTCM_{t-1} + b_{11} \ln M1_{t-2} + b_{12} \ln 5YTCM_{t-2} + c_1 + \varepsilon_{1t}   (3)

\ln 5YTCM_t = \alpha_{21} \ln M1_{t-1} + \alpha_{22} \ln 5YTCM_{t-1} + b_{21} \ln M1_{t-2} + b_{22} \ln 5YTCM_{t-2} + c_2 + \varepsilon_{2t}   (4)

where \alpha_{ij} and b_{ij} are the parameters to be estimated, c_i is the constant and \varepsilon_{it} is the error term of the UVAR regression equation.

Let’s assume that the logarithmic monthly returns of the money supply, M1, and the logarithmic monthly mean returns of the 10 year Treasury constant maturity, (10YTCM), are endogenous variables and are jointly determined by a UVAR. Let a constant be the only exogenous variable. By taking two lags of the endogenous variables for simplicity, the mathematical equations, (5) and (6), are as follows:

\ln M1_t = \alpha_{11} \ln M1_{t-1} + \alpha_{12} \ln 10YTCM_{t-1} + b_{11} \ln M1_{t-2} + b_{12} \ln 10YTCM_{t-2} + c_1 + \varepsilon_{1t}   (5)

\ln 10YTCM_t = \alpha_{21} \ln M1_{t-1} + \alpha_{22} \ln 10YTCM_{t-1} + b_{21} \ln M1_{t-2} + b_{22} \ln 10YTCM_{t-2} + c_2 + \varepsilon_{2t}   (6)

where \alpha_{ij} and b_{ij} are the parameters to be estimated, c_i is the constant and \varepsilon_{it} is the error term of the UVAR regression equation.

Let’s assume that the logarithmic monthly returns of industrial production, (IP), and the logarithmic monthly mean returns of the 3 month Treasury constant maturity, (3MTCM), are endogenous variables and are jointly determined by a UVAR. Let a constant be the only exogenous variable. By taking two lags of the endogenous variables for simplicity, the mathematical equations, (7) and (8), are as follows:

\ln IP_t = \alpha_{11} \ln IP_{t-1} + \alpha_{12} \ln 3MTCM_{t-1} + b_{11} \ln IP_{t-2} + b_{12} \ln 3MTCM_{t-2} + c_1 + \varepsilon_{1t}   (7)

\ln 3MTCM_t = \alpha_{21} \ln IP_{t-1} + \alpha_{22} \ln 3MTCM_{t-1} + b_{21} \ln IP_{t-2} + b_{22} \ln 3MTCM_{t-2} + c_2 + \varepsilon_{2t}   (8)

where \alpha_{ij} and b_{ij} are the parameters to be estimated, c_i is the constant and \varepsilon_{it} is the error term of the UVAR regression equation.

Let’s assume that the logarithmic monthly returns of industrial production, (IP), and the logarithmic monthly mean returns of the 5 year Treasury constant maturity, (5YTCM), are endogenous variables and are jointly determined by a UVAR. Let a constant be the only exogenous variable. By taking two lags of the endogenous variables for simplicity, the mathematical equations, (9) and (10), are as follows:

\ln IP_t = \alpha_{11} \ln IP_{t-1} + \alpha_{12} \ln 5YTCM_{t-1} + b_{11} \ln IP_{t-2} + b_{12} \ln 5YTCM_{t-2} + c_1 + \varepsilon_{1t}   (9)

\ln 5YTCM_t = \alpha_{21} \ln IP_{t-1} + \alpha_{22} \ln 5YTCM_{t-1} + b_{21} \ln IP_{t-2} + b_{22} \ln 5YTCM_{t-2} + c_2 + \varepsilon_{2t}   (10)

where \alpha_{ij} and b_{ij} are the parameters to be estimated, c_i is the constant and \varepsilon_{it} is the error term of the UVAR regression equation.

Let’s assume that the logarithmic monthly returns of industrial production, (IP), and the logarithmic monthly mean returns of the 10 year Treasury constant maturity, (10YTCM), are endogenous variables and are jointly determined by a UVAR. Let a constant be the only exogenous variable. By taking two lags of the endogenous variables for simplicity, the mathematical equations, (11) and (12), are as follows:

\ln IP_t = \alpha_{11} \ln IP_{t-1} + \alpha_{12} \ln 10YTCM_{t-1} + b_{11} \ln IP_{t-2} + b_{12} \ln 10YTCM_{t-2} + c_1 + \varepsilon_{1t}   (11)

\ln 10YTCM_t = \alpha_{21} \ln IP_{t-1} + \alpha_{22} \ln 10YTCM_{t-1} + b_{21} \ln IP_{t-2} + b_{22} \ln 10YTCM_{t-2} + c_2 + \varepsilon_{2t}   (12)

where \alpha_{ij} and b_{ij} are the parameters to be estimated, c_i is the constant and \varepsilon_{it} is the error term of the UVAR regression equation.

Let’s assume that the logarithmic monthly returns of the seasonally adjusted total consumer credit outstanding, (CCO), and the logarithmic monthly mean returns of the 3 month Treasury constant maturity, (3MTCM), are endogenous variables and are jointly determined by a UVAR. Let a constant be the only exogenous variable. By taking two lags of the endogenous variables for simplicity, the mathematical equations, (13) and (14), are as follows:

\ln CCO_t = \alpha_{11} \ln CCO_{t-1} + \alpha_{12} \ln 3MTCM_{t-1} + b_{11} \ln CCO_{t-2} + b_{12} \ln 3MTCM_{t-2} + c_1 + \varepsilon_{1t}   (13)

\ln 3MTCM_t = \alpha_{21} \ln CCO_{t-1} + \alpha_{22} \ln 3MTCM_{t-1} + b_{21} \ln CCO_{t-2} + b_{22} \ln 3MTCM_{t-2} + c_2 + \varepsilon_{2t}   (14)

where \alpha_{ij} and b_{ij} are the parameters to be estimated, c_i is the constant term and \varepsilon_{it} is the error term of the UVAR regression equation.

Let’s assume that the logarithmic monthly returns of the seasonally adjusted total consumer credit outstanding, (CCO), and the logarithmic monthly mean returns of the 5 year Treasury constant maturity, (5YTCM), are endogenous variables and are jointly determined by a UVAR. Let a constant be the only exogenous variable. By taking two lags of the endogenous variables for simplicity, the mathematical equations, (15) and (16), are as follows:

\ln CCO_t = \alpha_{11} \ln CCO_{t-1} + \alpha_{12} \ln 5YTCM_{t-1} + b_{11} \ln CCO_{t-2} + b_{12} \ln 5YTCM_{t-2} + c_1 + \varepsilon_{1t}   (15)

\ln 5YTCM_t = \alpha_{21} \ln CCO_{t-1} + \alpha_{22} \ln 5YTCM_{t-1} + b_{21} \ln CCO_{t-2} + b_{22} \ln 5YTCM_{t-2} + c_2 + \varepsilon_{2t}   (16)

where \alpha_{ij} and b_{ij} are the parameters to be estimated, c_i is the constant term and \varepsilon_{it} is the error term of the UVAR regression equation.

Let’s assume that the logarithmic monthly returns of the seasonally adjusted total consumer credit outstanding, (CCO), and the logarithmic monthly mean returns of the 10 year Treasury constant maturity, (10YTCM), are endogenous variables and are jointly determined by a UVAR. Let a constant be the only exogenous variable. By taking two lags of the endogenous variables for simplicity, the mathematical equations, (17) and (18), are as follows:

\ln CCO_t = \alpha_{11} \ln CCO_{t-1} + \alpha_{12} \ln 10YTCM_{t-1} + b_{11} \ln CCO_{t-2} + b_{12} \ln 10YTCM_{t-2} + c_1 + \varepsilon_{1t}   (17)

\ln 10YTCM_t = \alpha_{21} \ln CCO_{t-1} + \alpha_{22} \ln 10YTCM_{t-1} + b_{21} \ln CCO_{t-2} + b_{22} \ln 10YTCM_{t-2} + c_2 + \varepsilon_{2t}   (18)

where \alpha_{ij} and b_{ij} are the parameters to be estimated, c_i is the constant term and \varepsilon_{it} is the error term of the UVAR regression equation.

Let’s assume that the logarithmic monthly mean returns of the 3 month Treasury with constant maturity, (3MTCM), and the logarithmic monthly mean returns of the 5 year Treasury constant maturity, (5YTCM), are endogenous variables and are jointly determined by a UVAR. Let a constant be the only exogenous variable. By taking two lags of the endogenous variables for simplicity, the mathematical equations, (19) and (20), are as follows:

\ln 3MTCM_t = \alpha_{11} \ln 3MTCM_{t-1} + \alpha_{12} \ln 5YTCM_{t-1} + b_{11} \ln 3MTCM_{t-2} + b_{12} \ln 5YTCM_{t-2} + c_1 + \varepsilon_{1t}   (19)

\ln 5YTCM_t = \alpha_{21} \ln 3MTCM_{t-1} + \alpha_{22} \ln 5YTCM_{t-1} + b_{21} \ln 3MTCM_{t-2} + b_{22} \ln 5YTCM_{t-2} + c_2 + \varepsilon_{2t}   (20)

where \alpha_{ij} and b_{ij} are the parameters to be estimated, c_i is the constant term and \varepsilon_{it} is the error term of the UVAR regression equation.

Let’s assume that the logarithmic monthly mean returns of the 3 month Treasury with constant maturity, (3MTCM), and the logarithmic monthly mean returns of the 10 year Treasury constant maturity, (10YTCM), are endogenous variables and are jointly determined by a UVAR. Let a constant be the only exogenous variable. By taking two lags of the endogenous variables for simplicity, the mathematical equations, (21) and (22), are as follows:

\ln 3MTCM_t = \alpha_{11} \ln 3MTCM_{t-1} + \alpha_{12} \ln 10YTCM_{t-1} + b_{11} \ln 3MTCM_{t-2} + b_{12} \ln 10YTCM_{t-2} + c_1 + \varepsilon_{1t}   (21)

\ln 10YTCM_t = \alpha_{21} \ln 3MTCM_{t-1} + \alpha_{22} \ln 10YTCM_{t-1} + b_{21} \ln 3MTCM_{t-2} + b_{22} \ln 10YTCM_{t-2} + c_2 + \varepsilon_{2t}   (22)

where \alpha_{ij} and b_{ij} are the parameters to be estimated, c_i is the constant term and \varepsilon_{it} is the error term of the UVAR regression equation.

Let’s assume that the logarithmic monthly mean returns of the 5 year Treasury with constant maturity, (5YTCM), and the logarithmic monthly mean returns of the 10 year Treasury constant maturity, (10YTCM), are endogenous variables and are jointly determined by a UVAR. Let a constant be the only exogenous variable. By taking two lags of the endogenous variables for simplicity, the mathematical equations, (23) and (24), are as follows:

\ln 5YTCM_t = \alpha_{11} \ln 5YTCM_{t-1} + \alpha_{12} \ln 10YTCM_{t-1} + b_{11} \ln 5YTCM_{t-2} + b_{12} \ln 10YTCM_{t-2} + c_1 + \varepsilon_{1t}   (23)

\ln 10YTCM_t = \alpha_{21} \ln 5YTCM_{t-1} + \alpha_{22} \ln 10YTCM_{t-1} + b_{21} \ln 5YTCM_{t-2} + b_{22} \ln 10YTCM_{t-2} + c_2 + \varepsilon_{2t}   (24)

where \alpha_{ij} and b_{ij} are the parameters to be estimated, c_i is the constant term and \varepsilon_{it} is the error term of the UVAR regression equation.

Let’s assume that the logarithmic monthly returns of the seasonally adjusted total consumer credit outstanding, (CCO), and the logarithmic monthly returns of the seasonally adjusted money supply, (M1), are endogenous variables and are jointly determined by a UVAR. Let a constant be the only exogenous variable. By taking two lags of the endogenous variables for simplicity, the mathematical equations, (25) and (26), are as follows:

\ln CCO_t = \alpha_{11} \ln CCO_{t-1} + \alpha_{12} \ln M1_{t-1} + b_{11} \ln CCO_{t-2} + b_{12} \ln M1_{t-2} + c_1 + \varepsilon_{1t}   (25)

\ln M1_t = \alpha_{21} \ln CCO_{t-1} + \alpha_{22} \ln M1_{t-1} + b_{21} \ln CCO_{t-2} + b_{22} \ln M1_{t-2} + c_2 + \varepsilon_{2t}   (26)

where \alpha_{ij} and b_{ij} are the parameters to be estimated, c_i is the constant term and \varepsilon_{it} is the error term of the UVAR regression equation.

Let’s assume that the logarithmic monthly returns of the seasonally adjusted total consumer credit outstanding, (CCO), and the logarithmic monthly returns of industrial production, (IP), are endogenous variables and are jointly determined by a UVAR. Let a constant be the only exogenous variable. By taking two lags of the endogenous variables for simplicity, the mathematical equations, (27) and (28), are as follows:

\ln CCO_t = \alpha_{11} \ln CCO_{t-1} + \alpha_{12} \ln IP_{t-1} + b_{11} \ln CCO_{t-2} + b_{12} \ln IP_{t-2} + c_1 + \varepsilon_{1t}   (27)

\ln IP_t = \alpha_{21} \ln CCO_{t-1} + \alpha_{22} \ln IP_{t-1} + b_{21} \ln CCO_{t-2} + b_{22} \ln IP_{t-2} + c_2 + \varepsilon_{2t}   (28)

where \alpha_{ij} and b_{ij} are the parameters to be estimated, c_i is the constant term and \varepsilon_{it} is the error term of the UVAR regression equation.

Let’s assume that the logarithmic monthly returns of the seasonally adjusted money supply, (M1), and the logarithmic monthly returns of industrial production, (IP), are endogenous variables and are jointly determined by a UVAR. Let a constant be the only exogenous variable. By taking two lags of the endogenous variables for simplicity, the mathematical equations, (29) and (30), are as follows:

\ln M1_t = \alpha_{11} \ln M1_{t-1} + \alpha_{12} \ln IP_{t-1} + b_{11} \ln M1_{t-2} + b_{12} \ln IP_{t-2} + c_1 + \varepsilon_{1t}   (29)

\ln IP_t = \alpha_{21} \ln M1_{t-1} + \alpha_{22} \ln IP_{t-1} + b_{21} \ln M1_{t-2} + b_{22} \ln IP_{t-2} + c_2 + \varepsilon_{2t}   (30)

where \alpha_{ij} and b_{ij} are the parameters to be estimated, c_i is the constant term and \varepsilon_{it} is the error term of the UVAR regression equation.

The log likelihood statistic used in the UVAR model is computed assuming a multivariate normal (Gaussian) distribution. According to the EViews User's Guide II, the equation is:

l = -\frac{T}{2}\left\{ k\left(1 + \log 2\pi\right) + \log\lvert\Omega\rvert \right\}   (31)

The two information criteria in the UVAR model are computed as follows:

AIC = -2l/T + 2n/T   (32)
SC = -2l/T + n log(T)/T   (33)
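In code, equations (32) and (33) are direct to evaluate once the log likelihood l, the number of observations T, and the total number of estimated parameters n are known; a minimal sketch:

from math import log

def var_info_criteria(loglik, T, n):
    """AIC and SC as in equations (32)-(33): loglik is l, T the number of
    observations, n the total number of estimated parameters."""
    aic = -2 * loglik / T + 2 * n / T
    sc = -2 * loglik / T + n * log(T) / T
    return aic, sc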

The hypotheses that we are going to formulate and test for the pairwise Granger causality exogeneity tests are as follows:

The null hypothesis, H0, states that the macroeconomic variables do not Granger-cause the short, medium and long-term Treasury with constant maturities, and vice versa.

The alternative hypothesis, H1, states that the macroeconomic variables do Granger-cause the short, medium and long-term Treasury with constant maturities, and vice versa.

Descriptive statistics will be displayed, and the Jarque-Bera statistic is analysed to test for normality. We check for stationarity of the series by applying the Augmented Dickey-Fuller, (ADF), test and comparing the test statistic with the critical values.

The data that we have used are monthly returns from 01/01/1990 to 01/01/2013, a total of 276 observations. The data have been derived from money stock measures, industrial production and capacity utilization, consumer credit, and selected interest rates. All the data were obtained from the Federal Reserve Statistical Release Department and are denoted by the symbols H.6, G.17, G.19, and H.15. According to the Federal Reserve Statistical Release, the seasonally adjusted money supply, (M1), consists of currency outside the US Treasury, Federal Reserve Banks and the vaults of depository institutions; traveller's checks of nonbank issuers; demand deposits at commercial banks less cash items in the process of collection and Federal Reserve float; and other checkable deposits, including credit union share draft accounts and demand deposits at thrift institutions. There is disagreement over whether the money supply should be regarded as an exogenous or an endogenous variable. Some monetary economists perceive it as exogenous and not related to interest rates; others believe that higher interest rates lead to an increase in the money supply. In our study, we will use the money supply as an endogenous variable.

According to the Federal Reserve Statistical Release, the industrial production index, (IP), measures the real output of all manufacturing, mining, and electric and gas industries. Manufacturing consists of the industries included in the North American Industry Classification System, (NAICS). The index has been constructed from 312 individual series, which are market groups and industry groups. The current formula used to measure IP is the geometric mean of the change in output, calculated using the unit value estimate for the current month and the estimate for the previous month. Production indexes for a restricted number of industries are calculated by dividing estimated nominal output by a corresponding Fisher price index.

According to the Federal Reserve Statistical Release, the seasonally adjusted consumer credit outstanding covers short- and intermediate-term credit extended to individuals, excluding loans secured by real estate.

The returns of the financial series are calculated by taking the log difference mean returns of the monthly price changes of the 3-month, 5-year and 10-year Treasury with constant maturities, and the log difference returns of the macroeconomic factors.

The logarithmic formula that we have used is:

R_t = \ln(P_t / P_{t-1})   (34)

where R_t is the monthly return for month t, P_t is the closing price for month t, and P_{t-1} is the closing price lagged one period, for month t-1.
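Equation (34) is a one-liner in pandas; the price values below are illustrative only:

import numpy as np
import pandas as pd

# prices is assumed to be a pandas Series of monthly closing prices P_t.
prices = pd.Series([1066.3, 1074.2, 1067.6])   # illustrative values only

returns = np.log(prices / prices.shift(1))     # R_t = ln(P_t / P_{t-1})
print(returns.dropna())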

Figure 1 shows the fluctuations of the logarithmic monthly returns of the seasonally adjusted money supply, (M1), for the period 01/01/1990 to 01/01/2013.

Logarithmic monthly returns of the seasonally adjusted money supply, (M1).

[Figure 1: time-series chart; axis labels not reproduced.]

Source: Author’s calculation based on Excel software. Data were obtained from the Federal Reserve Statistical Release Department.

As shown in Figure 1, there was a substantial increase in the money supply in 2011 and 2012. The Federal Open Market Committee decided to purchase $600 billion of longer-term Treasury securities by the end of the second quarter of 2011. The purpose of the asset purchase program was to maximise employment and achieve price stability. The committee adopted an expansionary monetary policy with low interest rates to reduce the unemployment rate that resulted from the recession of 2008.


Figure 2 shows the fluctuations of the logarithmic monthly returns of the seasonally adjusted total consumer credit outstanding for the period 01/01/1990 to 01/01/2013.

Logarithmic monthly returns of the seasonally adjusted total consumer credit outstanding, (CCO).

[Figure 2: time-series chart; axis labels not reproduced.]

Source: Author’s calculation based on Excel software. Data were obtained from the Federal Reserve Statistical Release Department.

As shown in Figure 2, total consumer credit outstanding declined and recorded negative figures during 2008 and 2009. Household wealth was reduced, and banks tightened the credit supply by adopting stricter lending standards. There was a sharp contraction in consumer spending. Seasonally adjusted total consumer credit outstanding then increased again from December 2010.


Figure 3 shows the fluctuations of the logarithmic monthly returns of Industrial Production, (IP), for the period 01/01/1990 to 01/01/2013.

Logarithmic monthly returns of Industrial Production, (IP).

[Figure 3: time-series chart; axis labels not reproduced.]

Source: Author’s calculation based on Excel software. Data were obtained from the Federal Reserve Statistical Release Department.

As shown in Figure 3, there was a decrease in industrial production in 2008 and 2009. The US recession of 2008 affected the growth of the economy negatively. Specifically, the index was 100.0078 in March 2008 and 89.5631 in December 2008, a decline of 10.44 percent. There was contraction in the industrial output of consumer goods, the production of raw materials, and manufacturing output. Then, in May 2010, the industrial production index started to rise, with an obvious increase in all manufacturing sectors.

Figure 4 shows the fluctuations of the logarithmic monthly mean returns of the 3-month Treasury constant maturity for the period 01/01/1990 to 01/01/2013.

[Figure 4: time-series chart; axis labels not reproduced.]

Logarithmic monthly mean returns of 3 month Treasury constant maturity.

Source: Author’s calculation based on Excel software. Data were obtained from the Federal Reserve Statistical Release Department.

According to Figure 4, from March 2007 to March 2008 the rate declined from 5.08 percent to 1.22 percent. The expansionary monetary policy adopted as a result of the 2008 recession created low short-term interest rates; the purpose was to foster business growth and the supply of credit.


Figure 5 shows the fluctuations of the logarithmic monthly returns of the 5-year Treasury constant maturity for the period 01/01/1990 to 01/01/2013.

[Figure 5: time-series chart; axis labels not reproduced.]

Logarithmic monthly mean returns of 5-year Treasury constant maturity.

Source: Author’s calculation based on Excel software. Data were obtained from the Federal Reserve Statistical Release Department.

According to Figure 5, the 5-year Treasury constant maturity rate fluctuated in both directions. The rate was 5.026 percent in June 2007 and had reached 0.66 percent by December 2012, a drop of 86.86 percent. The expansionary monetary policy adopted as a result of the 2008 recession created low interest rates in the medium term.

Figure 6 shows the fluctuations of the logarithmic monthly mean returns of the 10-year Treasury constant maturity for the period 01/01/1990 to 01/01/2013.

[Figure 6: time-series chart; axis labels not reproduced.]

Logarithmic monthly mean returns of 10-year Treasury constant maturity.


Source: Author’s calculation based on Excel software. Data were obtained from the Federal Reserve Statistical Release Department.

According to Figure 6, there was continuous fluctuation, positive and negative, in the 10-year Treasury constant maturity rate. The rate was 6.097 percent in June 2000 and had reached 1.6371 percent by December 2012, a drop of 73.15 percent. The expansionary monetary policy adopted as a result of the 2008 recession created low interest rates at the long end of the Treasury curve.

2. Statistical and econometric tests.

Table 1 shows descriptive statistics and normality tests of the logarithmic mean monthly returns of the US interest rates and the logarithmic monthly returns of the macroeconomic factors.

Table 1 displays the Jarque-Bera normality test. RLN3 represents the logarithmic mean monthly returns of the 3 month Treasury constant maturity. RLN5 represents the logarithmic mean monthly returns of the 5 year Treasury constant maturity. RLN10 shows the logarithmic mean monthly returns of the 10 year Treasury constant maturity. RLM1 represents the logarithmic monthly returns of the money supply, M1. RLNIP shows the logarithmic monthly returns of the total index of industrial production. LNCC represents the logarithmic monthly returns of the seasonally adjusted total consumer credit outstanding.

             RLN3        RLN5        RLN10       RLM1        RLNIP       LNCC
Mean        -0.016970   -0.008748   -0.005511    0.004100    0.001695    0.004526
Median       7.03E-05   -0.005260   -0.003825    0.003411    0.002172    0.004349
Maximum      1.283792    0.410626    0.225705    0.059298    0.020992    0.048117
Minimum     -1.677346   -0.362561   -0.317181   -0.032563   -0.043029   -0.008800
Std. Dev.    0.226450    0.092960    0.070552    0.009052    0.006670    0.005267
Skewness    -0.753106    0.014283   -0.278897    1.894240   -1.706891    2.110637
Kurtosis    24.06679     5.494598    4.510561   13.55751    11.56759    19.14710

Jarque-Bera  5129.900   71.57410    29.81866    1446.856    978.1609    3203.302
Probability  0.000000    0.000000    0.000000    0.000000    0.000000    0.000000

Observations 276         276         276         276         276         276

Source: Author's calculation based on EViews 6 software.
Significant p-value at 5% significance level.

We state the hypotheses as follows:

H0: The log differences of the monthly mean returns of the 3-month, 5-year and 10-year Treasury constant maturities and the log differences of the monthly returns of the macroeconomic factors are normally distributed.

H1: The log differences of the monthly mean returns of the 3-month, 5-year and 10-year Treasury constant maturities and the log differences of the monthly returns of the macroeconomic factors are not normally distributed.


According to Table 1, the Jarque-Bera χ² statistics for all variables are very significant at the 5% significance level. For example, the logarithmic monthly returns of the seasonally adjusted total consumer credit outstanding show a χ² statistic of 3203.302, which is very significant, as the p-value is 0.0000. The joint null hypothesis that the sample skewness equals 0 and the sample kurtosis equals 3 is rejected. Thus, we can reject H0 of normality. The distributions of the various variables show excess kurtosis: they are leptokurtic and slightly positively or negatively skewed. For example, the kurtosis of the logarithmic monthly returns of the total index of industrial production is 11.57, which is greater than 3. The coefficient of variation of the same variable, calculated as the standard deviation divided by the mean, is 3.94, compared with 2.21 for the logarithmic monthly returns of the money supply, M1. We have also conducted normality tests, correlograms and autocorrelation LM tests for the residuals of the six components, and we have found that the residuals are not normally distributed. Thus, the null hypothesis, H0, concerning normality is rejected at the 5% significance level for all the variables. Correlograms and autocorrelation tests show that, as the number of lags increases, the residual serial correlation is not significant at the 5% significance level. For example, at six lags the LM-statistic is 42.86 and the p-value is 0.2005, which is not significant, as it is above the 5% (0.05) significance level. The only variable whose residuals are normally distributed is RLN10, the logarithmic mean monthly returns of the 10-year Treasury constant maturity. For detailed explanations, see Appendix 1.

Tables 2-7 show the ADF tests of the log differences of the US seasonally adjusted money supply, (M1), the total index of industrial production, the seasonally adjusted total consumer credit outstanding, the 3 month Treasury constant maturity, the 5 year Treasury constant maturity, and the 10 year Treasury constant maturity.

Table 2 shows the ADF test of the monthly log difference of the US seasonally adjusted money supply, (M1), for the period 01/01/1990 to 01/01/2013.

ADF Test Statistic: -10.05439
1% Critical Value*: -3.4564
5% Critical Value: -2.8724
10% Critical Value: -2.5725

*MacKinnon critical values for rejection of hypothesis of a unit root.

Augmented Dickey-Fuller Test Equation
Dependent Variable: D(RLM1,2)
Method: Least Squares
Date: 09/03/13   Time: 20:05
Sample (adjusted): 10 277
Included observations: 268 after adjusting endpoints

Variable        Coefficient   Std. Error   t-Statistic   Prob.
D(RLM1(-1))     -4.492907     0.446860     -10.05439     0.0000
D(RLM1(-1),2)    2.562788     0.414627       6.180950    0.0000
D(RLM1(-2),2)    1.752297     0.356906       4.909692    0.0000
D(RLM1(-3),2)    1.199162     0.287056       4.177443    0.0000
D(RLM1(-4),2)    0.658168     0.213936       3.076474    0.0023
D(RLM1(-5),2)    0.237991     0.135026       1.762549    0.0792
D(RLM1(-6),2)    0.088057     0.062595       1.406765    0.1607
C                0.000100     0.000519       0.192760    0.8473

R-squared            0.839553    Mean dependent var     -2.37E-05
Adjusted R-squared   0.835233    S.D. dependent var      0.020940
S.E. of regression   0.008500    Akaike info criterion  -6.668165
Sum squared resid    0.018784    Schwarz criterion      -6.560971
Log likelihood       901.5341    F-statistic             194.3530
Durbin-Watson stat   2.015268    Prob(F-statistic)       0.000000

Source: Author’s calculation based on EViews 6 software.

For a one per cent level of significance, the critical value of the t-statistic from the Dickey-Fuller table for first-difference logarithmic returns is -3.4564. According to Table 2 and the sample evidence, we can reject the null hypothesis of a unit root at the one, five and ten per cent significance levels. The ADF test statistic is -10.05, which is smaller than the critical values (-3.4564, -2.8724, -2.5725). In other words, the monthly log difference of the US seasonally adjusted money supply, (M1), is a stationary series at first difference.
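The same ADF test can be run in statsmodels for each of the six series; the generated series below is only a placeholder for the actual log-return data:

import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
rlm1 = rng.normal(size=276)            # placeholder for the actual RLM1 series

adf_stat, p_value, used_lags, nobs, crit_values, _ = adfuller(
    rlm1, maxlag=6, regression="c")    # constant, no trend, up to 6 lags

print(adf_stat, crit_values)           # reject a unit root when the statistic
                                       # lies below the tabulated critical values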

Table 3 shows the ADF test of the monthly log difference of the total index of industrial production for the period 01/01/1990 to 01/01/2013.

ADF Test Statistic: -4.354043
1% Critical Value*: -3.4563
5% Critical Value: -2.8724
10% Critical Value: -2.5725

*MacKinnon critical values for rejection of hypothesis of a unit root.

Augmented Dickey-Fuller Test Equation
Dependent Variable: D(RLNIP)
Method: Least Squares
Date: 09/03/13   Time: 20:01
Sample (adjusted): 9 277
Included observations: 269 after adjusting endpoints

Variable        Coefficient   Std. Error   t-Statistic   Prob.
RLNIP(-1)       -0.385679     0.088580     -4.354043     0.0000
D(RLNIP(-1))    -0.546625     0.095198     -5.741976     0.0000
D(RLNIP(-2))    -0.370037     0.099800     -3.707793     0.0003
D(RLNIP(-3))    -0.120089     0.100846     -1.190811     0.2348
D(RLNIP(-4))     0.073570     0.096534      0.762114     0.4467
D(RLNIP(-5))     0.136871     0.084399      1.621710     0.1061
D(RLNIP(-6))     0.141045     0.061900      2.278585     0.0235
C                0.000660     0.000396      1.666640     0.0968

R-squared            0.481856    Mean dependent var     -9.83E-06
Adjusted R-squared   0.467959    S.D. dependent var      0.008271
S.E. of regression   0.006033    Akaike info criterion  -7.353812
Sum squared resid    0.009500    Schwarz criterion      -7.246906
Log likelihood       997.0877    F-statistic             34.67440
Durbin-Watson stat   2.006337    Prob(F-statistic)       0.000000

Source: Author’s calculation based on EViews 6 software.

For a one per cent level of significance, the critical value of the t-statistic from the Dickey-Fuller table for log differences is -3.4563. According to Table 3 and the sample evidence, we can reject the null hypothesis of a unit root at the one, five and ten per cent significance levels. The ADF test statistic is -4.35, which is smaller than the critical values (-3.46, -2.87, -2.57). In other words, the log difference of the total index of industrial production is a stationary series.

Table 4 shows the ADF test of the monthly log difference of the seasonally adjusted total consumer credit outstanding for the period 01/01/1990 to 01/01/2013.

ADF Test Statistic: -8.639621
1% Critical Value*: -3.4564
5% Critical Value: -2.8724
10% Critical Value: -2.5725

*MacKinnon critical values for rejection of hypothesis of a unit root.

Augmented Dickey-Fuller Test Equation
Dependent Variable: D(LNCC,2)
Method: Least Squares
Date: 09/03/13   Time: 20:03
Sample (adjusted): 10 277
Included observations: 268 after adjusting endpoints

Variable        Coefficient   Std. Error   t-Statistic   Prob.
D(LNCC(-1))     -3.513592     0.406684     -8.639621     0.0000
D(LNCC(-1),2)    1.656438     0.378032      4.381734     0.0000
D(LNCC(-2),2)    1.031427     0.329107      3.134020     0.0019
D(LNCC(-3),2)    0.576473     0.268388      2.147910     0.0326
D(LNCC(-4),2)    0.271776     0.200671      1.354335     0.1768
D(LNCC(-5),2)    0.088550     0.130420      0.678965     0.4978
D(LNCC(-6),2)    0.026548     0.061972      0.428393     0.6687
C                3.15E-05     0.000263      0.119686     0.9048

R-squared            0.815298    Mean dependent var      2.27E-07
Adjusted R-squared   0.810325    S.D. dependent var      0.009883
S.E. of regression   0.004304    Akaike info criterion  -8.029018
Sum squared resid    0.004817    Schwarz criterion      -7.921824
Log likelihood       1083.888    F-statistic             163.9534
Durbin-Watson stat   2.005699    Prob(F-statistic)       0.000000

Source: Author’s calculation based on EViews 6 software.

For a one per cent level of significance, the critical value of the t-statistic from the Dickey-Fuller table for first-difference logarithmic returns is -3.4564. According to Table 4 and the sample evidence, we can reject the null hypothesis of a unit root at the one, five and ten per cent significance levels. The ADF test statistic is -8.64, which is smaller than the critical values (-3.4564, -2.8724, -2.5725). In other words, the log difference of the seasonally adjusted total consumer credit outstanding is a stationary series at first difference.


Table 5 shows the ADF test of the monthly mean log difference of the 3 month Treasury constant maturity for the period 01/01/1990 to 01/01/2013.

ADF Test Statistic: -6.472774
1% Critical Value*: -3.4563
5% Critical Value: -2.8724
10% Critical Value: -2.5725

*MacKinnon critical values for rejection of hypothesis of a unit root.

Augmented Dickey-Fuller Test EquationDependent Variable: D(RLN3)Method: Least SquaresDate: 09/03/13 Time: 20:12Sample(adjusted): 9 277Included observations: 269 after adjusting endpoints

Variable Coefficient Std. Error t-Statistic Prob.RLN3(-1) -0.919984 0.142131 -6.472774 0.0000

D(RLN3(-1)) 0.299800 0.130816 2.291760 0.0227D(RLN3(-2)) -0.001907 0.118999 -0.016023 0.9872D(RLN3(-3)) 0.080153 0.106160 0.755015 0.4509D(RLN3(-4)) 0.051031 0.092663 0.550719 0.5823D(RLN3(-5)) -0.006495 0.072980 -0.088995 0.9292D(RLN3(-6)) 0.016635 0.062000 0.268307 0.7887

C                -0.016036     0.013244     -1.210806     0.2271

R-squared            0.409799    Mean dependent var    -8.68E-05
Adjusted R-squared   0.393970    S.D. dependent var     0.274589
S.E. of regression   0.213762    Akaike info criterion -0.218620
Sum squared resid    11.92617    Schwarz criterion     -0.111714
Log likelihood       37.40433    F-statistic            25.88889
Durbin-Watson stat   1.997717    Prob(F-statistic)      0.000000

Source: Author’s calculation based on EViews 6 software.

For a one per cent level of significance, the critical value of the t-statistic from the Dickey-Fuller table is -3.4563. According to Table 5 and to the sample evidence, we can reject the null hypothesis of a unit root at the one, five and ten per cent significance levels. The ADF test statistic is -6.47, which is smaller than the critical values (-3.4563, -2.8724, -2.5725). In other words, the monthly log difference of the returns of the 3-month Treasury constant maturity is a stationary series.


Table 6 shows the ADF test of the monthly mean log difference of the 5-year Treasury constant maturity for the period 01/01/1990 to 01/01/2013.

ADF Test Statistic -9.762106 1% Critical Value* -3.4563 5% Critical Value -2.8724 10% Critical Value -2.5725

*MacKinnon critical values for rejection of hypothesis of a unit root.

Augmented Dickey-Fuller Test Equation
Dependent Variable: D(RLN5)
Method: Least Squares
Date: 09/03/13  Time: 20:12
Sample (adjusted): 9 277
Included observations: 269 after adjusting endpoints

Variable         Coefficient   Std. Error   t-Statistic   Prob.

RLN5(-1)         -1.526023     0.156321     -9.762106     0.0000

D(RLN5(-1))       0.511095     0.138134      3.699997     0.0003
D(RLN5(-2))       0.573164     0.121711      4.709204     0.0000
D(RLN5(-3))       0.526487     0.110618      4.759488     0.0000
D(RLN5(-4))       0.533275     0.097619      5.462816     0.0000
D(RLN5(-5))       0.355052     0.084187      4.217433     0.0000
D(RLN5(-6))       0.230586     0.061120      3.772712     0.0002

C                -0.014229     0.005610     -2.536363     0.0118

R-squared            0.545295    Mean dependent var    -0.000220
Adjusted R-squared   0.533100    S.D. dependent var     0.129982
S.E. of regression   0.088817    Akaike info criterion -1.975196
Sum squared resid    2.058872    Schwarz criterion     -1.868290
Log likelihood       273.6639    F-statistic            44.71406
Durbin-Watson stat   1.953588    Prob(F-statistic)      0.000000

Source: Author’s calculation based on EViews 6 software.

For a one per cent level of significance, the critical value of the t-statistic from the Dickey-Fuller table is -3.4563. According to Table 6 and to the sample evidence, we can reject the null hypothesis of a unit root at the one, five and ten per cent significance levels. The ADF test statistic is -9.76, which is smaller than the critical values (-3.4563, -2.8724, -2.5725). In other words, the monthly log difference of the 5-year Treasury constant maturity returns is a stationary series.


Table 7 shows the ADF test of the monthly mean log difference of the 10-year Treasury constant maturity for the period 01/01/1990 to 01/01/2013.

ADF Test Statistic -9.631798 1% Critical Value* -3.4563 5% Critical Value -2.8724 10% Critical Value -2.5725

*MacKinnon critical values for rejection of hypothesis of a unit root.

Augmented Dickey-Fuller Test Equation
Dependent Variable: D(RLN10)
Method: Least Squares
Date: 09/03/13  Time: 20:13
Sample (adjusted): 9 277
Included observations: 269 after adjusting endpoints

Variable         Coefficient   Std. Error   t-Statistic   Prob.

RLN10(-1)        -1.612196     0.167383     -9.631798     0.0000

D(RLN10(-1))      0.560705     0.148396      3.778430     0.0002
D(RLN10(-2))      0.591852     0.130197      4.545823     0.0000
D(RLN10(-3))      0.499509     0.118148      4.227814     0.0000
D(RLN10(-4))      0.556850     0.101390      5.492162     0.0000
D(RLN10(-5))      0.335896     0.086417      3.886928     0.0001
D(RLN10(-6))      0.211280     0.060782      3.476029     0.0006

C                -0.009894     0.004183     -2.364946     0.0188

R-squared            0.591031    Mean dependent var    -0.000294
Adjusted R-squared   0.580063    S.D. dependent var     0.102685
S.E. of regression   0.066542    Akaike info criterion -2.552669
Sum squared resid    1.155677    Schwarz criterion     -2.445763
Log likelihood       351.3340    F-statistic            53.88434
Durbin-Watson stat   1.986598    Prob(F-statistic)      0.000000

Source: Author’s calculation based on EViews 6 software.

For a one per cent level of significance, the critical value of the t-statistic from the Dickey-Fuller table is -3.4563. According to Table 7 and to the sample evidence, we can reject the null hypothesis of a unit root at the one, five and ten per cent significance levels. The ADF test statistic is -9.63, which is smaller than the critical values (-3.4563, -2.8724, -2.5725). In other words, the log difference of the monthly 10-year Treasury constant maturity is a stationary series.


Graph 1 shows the inverse roots of the AR characteristic polynomial for the system of all the macroeconomic variables and the 3-month, 5-year and 10-year Treasury constant maturities for the period 01/01/1990 to 01/01/2013.

[Graph 1: Inverse Roots of AR Characteristic Polynomial. All roots lie inside the unit circle.]

Source: Author’s calculation based on EViews 6 software.

According to Graph 1, we verify whether the UVAR model is stationary. All the roots of the polynomial must have an absolute value less than one and reside inside the unit circle. In our case, all the roots are less than one in absolute value and lie inside the unit circle. Therefore, the UVAR model is stationary. The fact that the roots have an absolute value less than one indicates that an impulse shock to the variables will decrease with time.
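The stability condition can also be checked programmatically. Below is a minimal sketch with Python's statsmodels, assuming the six series are columns of a pandas DataFrame df named as in this article (the DataFrame itself is an assumption):

# Sketch: check VAR stability via the companion-matrix eigenvalues, which
# correspond to the inverse roots that EViews plots in Graph 1.
from statsmodels.tsa.api import VAR

model = VAR(df[["RLN3", "RLN5", "RLN10", "RLM1", "RLNIP", "LNCC"]])
res = model.fit(6)                   # 6 lags, the order selected in Table 8

print(res.is_stable(verbose=True))   # True when all eigenvalue moduli < 1
# Note: res.roots reports the characteristic-polynomial roots themselves,
# which lie outside the unit circle for a stable system; their inverses
# are the points drawn inside the circle in Graph 1.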


Table 8 shows the lag length criteria based on five indicators: LR, the sequential modified LR test statistic; FPE, the final prediction error; AIC, the Akaike information criterion; SC, the Schwarz information criterion; and HQ, the Hannan-Quinn information criterion. The optimal number of lags is selected from the row with the most asterisks, which mark the lag order selected by each criterion.

VAR Lag Order Selection Criteria
Endogenous variables: RLN3 RLN5 RLN10 RLM1 RLNIP LNCC
Exogenous variables: C
Date: 09/02/13  Time: 16:06
Sample: 2 277
Included observations: 264

Lag   LogL       LR         FPE        AIC         SC          HQ

0     3808.086   NA         1.25e-20   -28.80368   -28.72241*  -28.77103
1     3884.980   149.7111   9.15e-21   -29.11349   -28.54459   -28.88489
2     3965.720   153.5265   6.52e-21   -29.45242   -28.39589   -29.02787
3     4033.038   124.9466   5.15e-21   -29.68968   -28.14552   -29.06919*
4     4077.089   79.75885   4.85e-21   -29.75067   -27.71888   -28.93424
5     4122.149   79.53785   4.55e-21   -29.81931   -27.29989   -28.80693
6     4172.989   87.43000   4.08e-21*  -29.93173*  -26.92469   -28.72341
7     4201.400   47.56645   4.35e-21   -29.87424   -26.37956   -28.46997
8     4237.422   58.67311   4.39e-21   -29.87441   -25.89210   -28.27420
9     4276.933   62.55840*  4.32e-21   -29.90101   -25.43107   -28.10485
10    4309.473   50.04321   4.50e-21   -29.87480   -24.91723   -27.88270
11    4334.163   36.84772   4.98e-21   -29.78911   -24.34392   -27.60107
12    4353.727   28.30822   5.76e-21   -29.66460   -23.73177   -27.28061

* indicates lag order selected by the criterion
LR: sequential modified LR test statistic (each test at 5% level)
FPE: Final prediction error
AIC: Akaike information criterion
SC: Schwarz information criterion
HQ: Hannan-Quinn information criterion
Source: Author's calculation based on EViews 6 software.

According to Table 8, we initially included 12 lags, as we have monthly observations. Two out of the five criteria indicate 6 lags as the optimal number of lags. These criteria are the final prediction error and the Akaike information criterion, which have values of 4.08e-21* and -29.93173* respectively.
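The lag-length search of Table 8 can be reproduced with statsmodels; a sketch, again assuming the DataFrame df from before:

# Sketch: compare information criteria over lag orders 1 to 12,
# as in Table 8 (monthly data, so we start from 12 lags).
from statsmodels.tsa.api import VAR

model = VAR(df[["RLN3", "RLN5", "RLN10", "RLM1", "RLNIP", "LNCC"]])
order = model.select_order(maxlags=12)

print(order.summary())           # AIC, BIC (SC), FPE and HQIC for each lag
print(order.selected_orders)     # e.g. {'aic': 6, 'fpe': 6, ...}

The exact criterion values will differ slightly from EViews because statsmodels scales them differently, but the comparison across lag orders is the same.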


Table 9 displays the Granger Causality, Block Exogeneity Wald tests. RLN3 represents logarithmic mean monthly returns of the 3-month Treasury constant maturity. RLN5 represents logarithmic mean monthly returns of the 5-year Treasury constant maturity. RLN10 shows logarithmic mean monthly returns of the 10-year Treasury constant maturity. RLM1 represents the logarithmic monthly returns of the money supply, M1. RLNIP shows the logarithmic monthly returns of total index industrial production. LNCC represents the logarithmic monthly returns of seasonally adjusted total consumer credit outstanding.

VAR Granger Causality/Block Exogeneity Wald Tests
Date: 09/02/13  Time: 17:38
Sample: 2 277
Included observations: 270

Dependent variable: RLN3

Excluded Chi-sq df Prob.

RLN5    34.10164   6   0.0000
RLN10   40.55861   6   0.0000
RLM1    17.25915   6   0.0084
RLNIP   42.54614   6   0.0000
LNCC    4.007370   6   0.6757

All  147.8891 30  0.0000

Dependent variable: RLN5

Excluded Chi-sq df Prob.

RLN3    6.893101   6   0.3308
RLN10   24.42549   6   0.0004
RLM1    9.717540   6   0.1371
RLNIP   7.935533   6   0.2429
LNCC    6.760889   6   0.3435

All  69.88641 30  0.0001

Dependent variable: RLN10

Excluded Chi-sq df Prob.

RLN3    6.284556   6   0.3921
RLN5    21.12580   6   0.0017
RLM1    7.732456   6   0.2584
RLNIP   9.743712   6   0.1359
LNCC    3.821533   6   0.7008

All  54.06501 30  0.0045

Dependent variable: RLM1


Excluded Chi-sq df Prob.

RLN3    11.11457   6   0.0849
RLN5    21.79020   6   0.0013
RLN10   18.06122   6   0.0061
RLNIP   18.69294   6   0.0047
LNCC    8.383644   6   0.2113

All  80.40004 30  0.0000

Dependent variable: RLNIP

Excluded Chi-sq df Prob.

RLN3    9.367086   6   0.1540
RLN5    10.65656   6   0.0996
RLN10   8.094984   6   0.2312
RLM1    4.349791   6   0.6295
LNCC    5.290518   6   0.5071

All  35.11466 30  0.2385

Dependent variable: LNCC

Excluded Chi-sq df Prob.

RLN3    11.08911   6   0.0857
RLN5    19.00939   6   0.0041
RLN10   17.85957   6   0.0066
RLM1    12.79753   6   0.0464
RLNIP   8.065686   6   0.2333

All     54.15567   30  0.0044

Source: Author's calculation based on EViews 6 software.

The hypotheses formulated and tested for the pairwise Granger causality and exogeneity tests are as follows:

The null hypothesis, H0, states that the macroeconomic variables do not Granger-cause the short-, medium- and long-term Treasury constant maturities, and vice versa.

The alternative hypothesis, H1, states that the macroeconomic variables do Granger-cause the short-, medium- and long-term Treasury constant maturities, and vice versa.

According to Table 9, at the 5% significance level, we have found significant causality for all pairs of variables in both directions except for the dependent variable RLNIP, which measures the logarithmic monthly returns of the total index of industrial production, in relation to the other variables. Specifically, the joint χ2 statistic for the significance of all other lagged endogenous variables in the RLNIP equation was 35.11, with a p-value of 0.24.

We reject the null hypothesis, H0, that the RLN3, RLN5, RLN10, RLM1 and LNCC variables are not Granger causes of the others. In the case of RLNIP, which shows the logarithmic monthly returns of the total index of industrial production, the sample evidence cannot reject the null hypothesis. Thus, in this case, there is no Granger causality between RLNIP and the other variables. For this reason, we are going to use the RLNIP variable of industrial production in the UVAR model as an exogenous variable alongside the constant.
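The same Wald tests can be run in statsmodels from a fitted VAR; a minimal sketch, reusing the six-variable model and DataFrame assumed earlier:

# Sketch: Granger causality / block exogeneity Wald tests as in Table 9.
from statsmodels.tsa.api import VAR

model = VAR(df[["RLN3", "RLN5", "RLN10", "RLM1", "RLNIP", "LNCC"]])
res = model.fit(6)

# Does LNCC fail to Granger-cause RLN3? (chi-square form of the test)
single = res.test_causality(caused="RLN3", causing=["LNCC"], kind="wald")
print(single.summary())

# Joint block test: all other variables against RLNIP, as discussed above.
joint = res.test_causality(
    caused="RLNIP",
    causing=["RLN3", "RLN5", "RLN10", "RLM1", "LNCC"],
    kind="wald",
)
print(joint.summary())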

Table 10 shows the Unrestricted Vector Autoregression Model. RLN3 represents logarithmic mean monthly returns of the 3-month Treasury constant maturity. RLN5 represents logarithmic mean monthly returns of the 5-year Treasury constant maturity. RLN10 shows logarithmic mean monthly returns of the 10-year Treasury constant maturity. RLM1 represents the logarithmic monthly returns of the money supply, M1. RLNIP shows the logarithmic monthly returns of total index industrial production. LNCC represents the logarithmic monthly returns of seasonally adjusted total consumer credit outstanding.

Date: 09/04/13  Time: 17:43
Sample (adjusted): 8 277
Included observations: 270 after adjusting endpoints
Standard errors & t-statistics in parentheses

             RLN3        RLN5        RLN10       RLM1        LNCC

RLN3(-1)    0.463777    0.038338    0.035935   -0.003668    0.003327
           (0.07454)   (0.03360)   (0.02591)   (0.00303)   (0.00159)
           (6.22184)   (1.14110)   (1.38669)  (-1.21045)   (2.09671)

RLN3(-2)   -0.408080   -0.016785   -0.024229    0.001305   -0.002601
           (0.08044)   (0.03626)   (0.02797)   (0.00327)   (0.00171)
          (-5.07301)  (-0.46296)  (-0.86637)   (0.39896)  (-1.51876)

RLN3(-3)    0.051950   -0.038388   -0.021977    0.007711    0.003985
           (0.08405)   (0.03788)   (0.02922)   (0.00342)   (0.00179)
           (0.61810)  (-1.01335)  (-0.75211)   (2.25675)   (2.22707)

RLN3(-4)   -0.091397   -0.014120   -0.026170   -0.004249    0.001714
           (0.08541)   (0.03849)   (0.02969)   (0.00347)   (0.00182)
          (-1.07016)  (-0.36681)  (-0.88139)  (-1.22392)   (0.94252)

RLN3(-5)   -0.026617    0.009879   -0.001270    0.006095    0.000867
           (0.07995)   (0.03604)   (0.02780)   (0.00325)   (0.00170)
          (-0.33292)   (0.27414)  (-0.04569)   (1.87521)   (0.50913)

RLN3(-6)    0.072814   -0.058989   -0.034724    0.002122    0.000169
           (0.07308)   (0.03294)   (0.02541)   (0.00297)   (0.00156)
           (0.99633)  (-1.79081)  (-1.36667)   (0.71431)   (0.10836)

RLN5(-1)    2.476172    0.603355    0.505169   -0.059578    0.002360
           (0.47840)   (0.21563)   (0.16632)   (0.01945)   (0.01018)
           (5.17596)   (2.79816)   (3.03735)  (-3.06344)   (0.23174)

RLN5(-2)   -0.517021    0.283362    0.248317   -0.044438   -0.040546
           (0.50079)   (0.22572)   (0.17410)   (0.02036)   (0.01066)
          (-1.03241)   (1.25538)   (1.42626)  (-2.18279)  (-3.80311)

RLN5(-3)    0.492531    0.301716    0.299267   -0.035337    0.010475
           (0.50794)   (0.22894)   (0.17659)   (0.02065)   (0.01081)
           (0.96966)   (1.31787)   (1.69470)  (-1.71128)   (0.96873)


RLN5(-4)    0.176980   -0.611515   -0.322049    0.013737   -0.012411
           (0.51086)   (0.23025)   (0.17760)   (0.02077)   (0.01088)
           (0.34644)  (-2.65582)  (-1.81331)   (0.66147)  (-1.14118)

RLN5(-5)   -0.415149    0.505353    0.350523    0.001039   -0.002686
           (0.52908)   (0.23847)   (0.18394)   (0.02151)   (0.01126)
          (-0.78466)   (2.11914)   (1.90564)   (0.04831)  (-0.23846)

RLN5(-6)   -0.203319    0.105612    0.235353    0.032915   -0.009947
           (0.52524)   (0.23674)   (0.18260)   (0.02135)   (0.01118)
          (-0.38709)   (0.44611)   (1.28887)   (1.54152)  (-0.88961)

RLN10(-1)  -3.807717   -0.810239   -0.711568    0.078910    0.000248
           (0.61373)   (0.27662)   (0.21337)   (0.02495)   (0.01307)
          (-6.20418)  (-2.92902)  (-3.33492)   (3.16276)   (0.01895)

RLN10(-2)   0.854546   -0.209010   -0.205870    0.062070    0.045434
           (0.66229)   (0.29851)   (0.23025)   (0.02692)   (0.01410)
           (1.29029)  (-0.70018)  (-0.89412)   (2.30542)   (3.22243)

RLN10(-3)  -0.617711   -0.427695   -0.471936    0.034093   -0.019874
           (0.67523)   (0.30434)   (0.23475)   (0.02745)   (0.01437)
          (-0.91482)  (-1.40532)  (-2.01040)   (1.24201)  (-1.38260)

RLN10(-4)  -0.439899    0.711668    0.398747   -0.016380    0.011786
           (0.67825)   (0.30570)   (0.23580)   (0.02757)   (0.01444)
          (-0.64858)   (2.32798)   (1.69106)  (-0.59406)   (0.81630)

RLN10(-5)  -0.025757   -0.939208   -0.677637   -0.007972   -0.001807
           (0.70195)   (0.31639)   (0.24404)   (0.02854)   (0.01494)
          (-0.03669)  (-2.96853)  (-2.77675)  (-0.27936)  (-0.12092)

RLN10(-6)  -0.067311   -0.231810   -0.345822   -0.033191    0.016653
           (0.69719)   (0.31424)   (0.24238)   (0.02834)   (0.01484)
          (-0.09655)  (-0.73768)  (-1.42675)  (-1.17106)   (1.12196)

RLM1(-1)    0.485291   -0.185510   -0.082648   -0.058860    0.046788
           (1.64422)   (0.74109)   (0.57162)   (0.06684)   (0.03500)
           (0.29515)  (-0.25032)  (-0.14459)  (-0.88059)   (1.33669)

RLM1(-2)   -1.411003    1.054460    0.470623    0.067822    0.000566
           (1.62731)   (0.73347)   (0.56575)   (0.06615)   (0.03464)
          (-0.86708)   (1.43764)   (0.83186)   (1.02520)   (0.01634)

RLM1(-3)   -5.509387   -1.072746   -0.884936    0.246350    0.009882
           (1.65343)   (0.74524)   (0.57483)   (0.06722)   (0.03520)
          (-3.33209)  (-1.43946)  (-1.53948)   (3.66505)   (0.28074)

RLM1(-4)   -0.421862   -0.308028   -0.124654   -0.008912    0.054462
           (1.66688)   (0.75130)   (0.57950)   (0.06776)   (0.03549)
          (-0.25308)  (-0.40999)  (-0.21510)  (-0.13151)   (1.53478)

RLM1(-5)   -0.400787   -1.552201   -0.883213    0.156740   -0.081706
           (1.60371)   (0.72283)   (0.55754)   (0.06519)   (0.03414)
          (-0.24991)  (-2.14740)  (-1.58413)   (2.40419)  (-2.39322)

RLM1(-6)    3.835610    0.860345    0.744680    0.237824    0.014704

           (1.65069)   (0.74401)   (0.57387)   (0.06710)   (0.03514)
           (2.32364)   (1.15637)   (1.29764)   (3.54408)   (0.41842)

LNCC(-1)    3.326889   -0.222112    0.171855    0.003817    0.106318
           (3.11429)   (1.40368)   (1.08270)   (0.12660)   (0.06630)
           (1.06827)  (-0.15823)   (0.15873)   (0.03015)   (1.60362)

LNCC(-2)    2.178177    0.453103   -0.307083   -0.011932    0.238585
           (3.09414)   (1.39460)   (1.07570)   (0.12578)   (0.06587)
           (0.70397)   (0.32490)  (-0.28547)  (-0.09486)   (3.62205)

LNCC(-3)   -2.350621    1.260648    0.529375   -0.017843    0.094249
           (3.15817)   (1.42346)   (1.09796)   (0.12839)   (0.06723)
          (-0.74430)   (0.88562)   (0.48214)  (-0.13898)   (1.40182)

LNCC(-4)   -3.593158    1.965383    1.182607   -0.192501    0.188646
           (3.09674)   (1.39578)   (1.07660)   (0.12589)   (0.06593)
          (-1.16030)   (1.40809)   (1.09846)  (-1.52912)   (2.86151)

LNCC(-5)   -2.232457   -3.190555   -2.022610    0.175060    0.110959
           (3.06462)   (1.38130)   (1.06544)   (0.12458)   (0.06524)
          (-0.72846)  (-2.30982)  (-1.89839)   (1.40516)   (1.70073)

LNCC(-6)    1.210720   -0.455374    0.025999   -0.184496    0.126581
           (3.11062)   (1.40203)   (1.08143)   (0.12645)   (0.06622)
           (0.38922)  (-0.32480)   (0.02404)  (-1.45899)   (1.91150)

C          -0.007280   -0.006901   -0.003303    0.003007    0.000240
           (0.02500)   (0.01127)   (0.00869)   (0.00102)   (0.00053)
          (-0.29113)  (-0.61229)  (-0.37992)   (2.95773)   (0.45082)

RLNIP       3.791696    0.966689    0.811968   -0.248352    0.098049
           (1.89349)   (0.85344)   (0.65829)   (0.07698)   (0.04031)
           (2.00249)   (1.13270)   (1.23346)  (-3.22639)   (2.43238)

R-squared        0.396979    0.268849    0.242781    0.375776    0.491295
Adj. R-squared   0.318434    0.173615    0.144151    0.294469    0.425035
Sum sq. resids   8.491171    1.724999    1.026289    0.014033    0.003848
S.E. equation    0.188884    0.085135    0.065667    0.007679    0.004021
F-statistic      5.054166    2.823037    2.461544    4.621715    7.414658
Log likelihood   83.90494    299.0680    369.1705    948.6329    1123.292
Akaike AIC      -0.384481   -1.978281   -2.497559   -6.789873   -8.083643
Schwarz SC       0.041999   -1.551802   -2.071079   -6.363394   -7.657163
Mean dependent  -0.017501   -0.009202   -0.005916    0.004123    0.004584
S.D. dependent   0.228792    0.093652    0.070982    0.009142    0.005303

Determinant Residual Covariance   3.18E-17
Log Likelihood                    3212.693
Akaike Information Criteria      -22.61254
Schwarz Criteria                 -20.48015

Source: Author’s calculation based on EViews 6 software.

According to Table 10, under the column of every variable, the estimated coefficient, its standard error and its t-statistic are reported. For example, the coefficient for RLM1(-6) in the RLN3 equation is 3.84, with a significant t-statistic of 2.32. The table then shows standard OLS regression statistics for each equation. The log likelihood statistic for the first column, the 3-month Treasury constant maturity regression equation, is 83.90. The two information criteria, Akaike AIC and Schwarz SC, are used for model selection. We prefer the model that provides the smaller values of the information criterion. In our case, we select the fifth column, which represents the logarithmic monthly returns of seasonally adjusted total consumer credit outstanding. The values of the Akaike AIC and Schwarz SC are -8.08 and -7.66 respectively. All the regression equations showed a significant F-statistic. Let's take as an example the regression equation (1) that was explained in the methodological section, to show that one variable is the result of the change of the other.

ln M1 = α11 M1(t-1) + α12 3MTCM(t-1) + b11 M1(t-2) + b12 3MTCM(t-2) + c11 M1(t-3) + c12 3MTCM(t-3) + d11 M1(t-4) + d12 3MTCM(t-4) + e11 M1(t-5) + e12 3MTCM(t-5) + f11 M1(t-6) + f12 3MTCM(t-6) + c1 + ε1t

where αij, bij, cij, dij, eij and fij are the parameters to be estimated, c1 is the constant term and ε1t is the error term of the UVAR regression equation. The parentheses below the coefficients report their t-statistics.

ln M1 = 0.49 M1(t-1) + 0.46 3MTCM(t-1) - 1.41 M1(t-2) - 0.41 3MTCM(t-2) - 5.51 M1(t-3) + 0.05 3MTCM(t-3)
        (0.30)         (6.22)           (-0.87)        (-5.07)            (-3.33)        (0.62)
      - 0.42 M1(t-4) - 0.09 3MTCM(t-4) - 0.40 M1(t-5) - 0.03 3MTCM(t-5) + 3.84 M1(t-6) + 0.07 3MTCM(t-6) - 0.007 + ε1t
        (-0.25)        (-1.07)           (-0.25)        (-0.33)           (2.32)         (0.996)           (-0.29)

The constant's t-statistic in equation 1 is not significant. The t-statistics of the lagged values of the money supply and the 3-month Treasury constant maturity decrease with time. We will check this relationship in more detail, and for the other variables, by using the impulse-response and variance decomposition graphs.
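The UVAR of Table 10 can be estimated in the same way, this time treating RLNIP as exogenous alongside the constant; a sketch under the same DataFrame assumption:

# Sketch: estimate the UVAR of Table 10 (five endogenous variables, six
# lags, RLNIP as an exogenous regressor) and read off one equation.
from statsmodels.tsa.api import VAR

model = VAR(df[["RLN3", "RLN5", "RLN10", "RLM1", "LNCC"]], exog=df[["RLNIP"]])
res = model.fit(6)

print(res.summary())          # coefficients, standard errors, t-statistics
print(res.params["RLN3"])     # the RLN3 equation, one column of Table 10
print(res.tvalues["RLN3"])    # its t-statistics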


Graph 2 displays the impulse-response functions, and appendix 2 shows the impulse-response tables and the remaining graphs. RLN3 represents the logarithmic mean monthly returns of the 3-month Treasury constant maturity. RLN5 represents the logarithmic mean monthly returns of the 5-year Treasury constant maturity. RLN10 shows the logarithmic mean monthly returns of the 10-year Treasury constant maturity. RLM1 represents the logarithmic monthly returns of the money supply, M1. LNCC represents the logarithmic monthly returns of seasonally adjusted total consumer credit outstanding. RLNIP, which shows the logarithmic monthly returns of the total index of industrial production, was excluded as an exogenous variable from the impulse-response graphs.

[Graph 2: Response to Cholesky One S.D. Innovations over 22 periods. Panels: Response of RLM1 to RLN3, Response of RLM1 to RLN5, Response of RLM1 to RLN10, Response of LNCC to RLN3, Response of LNCC to RLN5, Response of LNCC to RLN10.]


Source: Author’s calculation based on EViews 6 software.

Graph 2 shows the innovations, or impulses, of the 3-month, 5-year and 10-year Treasury constant maturities and the responses of the macroeconomic variables RLM1 and LNCC. As can be seen from all the graphs, owing to the stationarity of the variables, the magnitude of a shock, positive or negative, between two variables gradually decreases and then dies off slowly as time passes. For example, the impulse-response of RLM1 to RLN10 started with a hump-shaped negative shock and then gradually declined close to zero by the 23rd period. According to appendix 2, in the first period the impulse response was -3.05E-05 and in the 23rd period it was 1.51E-05. Another example is the impulse-response of LNCC to RLN5. It started with an inverse hump-shaped positive shock and then gradually declined: in the first period the shock was 0.000824 and by the 23rd period it reached the value of -9.99E-05. The periods represent the number of years and not the monthly observations; the dataset of this analysis covers 23 years, or 23 x 12 = 276 monthly observations.
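Impulse-response functions such as those in Graph 2 can be generated from the fitted model; a sketch continuing from the res object of the previous sketch:

# Sketch: Cholesky-orthogonalized impulse responses over 23 periods.
irf = res.irf(periods=23)

# Response of RLM1 to a one-S.D. innovation in RLN10, with 2 S.E. bands.
irf.plot(orth=True, impulse="RLN10", response="RLM1")

# Numeric responses: orth_irfs[t, i, j] is the response of variable i at
# horizon t to a shock in variable j, so RLM1 (index 3) to RLN10 (index 2):
print(irf.orth_irfs[:, 3, 2])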


Graph 3 shows the variance decompositions and appendix 3 displays the variance decomposition tables. RLN3 represents the logarithmic mean monthly returns of the 3-month Treasury constant maturity. RLN5 represents the logarithmic mean monthly returns of the 5-year Treasury constant maturity. RLN10 shows the logarithmic mean monthly returns of the 10-year Treasury constant maturity. RLM1 represents the logarithmic monthly returns of the money supply, M1. RLNIP, which shows the logarithmic monthly returns of the total index of industrial production, was excluded as an exogenous variable from the variance decomposition (VD) graphs. LNCC represents the logarithmic monthly returns of seasonally adjusted total consumer credit outstanding.


[Graph 3: Variance decomposition over 22 periods. Panels: Percent RLM1 variance due to RLN3, RLN5, RLN10, RLM1 and LNCC; Percent LNCC variance due to RLN3, RLN5, RLN10, RLM1 and LNCC.]

Source: Author’s calculation based on EViews 6 software.

According to Graph 3 and appendix 3, the variance decompositions show the proportion of the movements in the dependent variables that is due to their own shocks rather than to shocks to the other variables. For example, the own-shock share of the variance of the dependent variable RLN3 has decreased from 100% to 74.70%. The share of RLN5 has decreased from 83.96% to 69.92%. The share of RLN10 has increased from 8.87% to 16.45%. The share of RLM1 has decreased from 83.66% to 72.41% and, finally, the share of LNCC has decreased from 90.25% to 81.02%. The proportion of the variance that comes from the macro factors increases or decreases as the number of periods increases. For example, the percent of LNCC variance due to RLN3 has increased from 0.65% in the 1st period to 6.31% in the 23rd period, an increase of 5.66 percentage points (see appendix 3). Another example is the percent of RLM1 variance due to RLN3: there was a decrease from 14.93% in the 1st period to 13.00% in the 23rd period, a drop of 1.93 percentage points. A final example is the percent of RLM1 variance due to RLN10, which increased from 0.002% in the 1st period to 5.82% in the 23rd period, an increase of about 5.82 percentage points.
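Variance decompositions such as those in Graph 3 and appendix 3 follow from the same fitted model; a sketch:

# Sketch: forecast-error variance decomposition over 23 periods,
# using the fitted UVAR res from the earlier sketch.
fevd = res.fevd(23)

print(fevd.summary())   # share of each variable's forecast-error variance
fevd.plot()             # one panel per equation, as in Graph 3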

3. Summary and Conclusions

In this article, we have attempted to model the effects of macroeconomic variables, namely the seasonally adjusted money supply (M1), the total index of industrial production (IP) and seasonally adjusted total consumer credit outstanding (CCO), on the logarithmic mean monthly returns of the US term structure of interest rates. We have applied an Unrestricted Vector Autoregression system to carry out exogeneity tests, impulse-responses and variance decompositions.

We have found that the Jarque-Bera χ2 statistics for all variables are significant at the 5% significance level. We have rejected the null hypothesis, H0, in favour of the alternative, H1. In other words, the log differences of the monthly mean returns of the 3-month, 5-year and 10-year Treasury constant maturities and the log differences of the monthly returns of the macroeconomic factors are not normally distributed. All the variables proved to be stationary, and the fact that the roots have an absolute value less than one indicates that an impulse shock to the variables will decrease with time. Two out of the five criteria indicate 6 lags as the optimal UVAR model to be constructed.

We rejected the null hypothesis, H0, that the RLN3, RLN5, RLN10, RLM1 and LNCC variables are not Granger causes of the others. In the case of RLNIP, which shows the logarithmic monthly returns of the total index of industrial production, the sample evidence cannot reject the null hypothesis. Thus, in this case, there is no Granger causality between RLNIP and the other variables.

Then, we constructed the impulse-response tables and their associated graphs based on the EViews results. The innovations, or impulses, were the 3-month, 5-year and 10-year Treasury constant maturities, and the responses were those of the macroeconomic variables RLM1 and LNCC. In all graphs, owing to the stationarity of the variables, the magnitude of a shock, positive or negative, between two variables gradually decreases and then dies off slowly as the periods increase. Finally, the variance decomposition was illustrated through tables and graphs. It showed the proportion of the movements in the dependent variables that is due to their own shocks. For example, the own-shock share of the variance of the logarithmic mean monthly returns of the 5-year Treasury constant maturity, RLN5, has decreased from 83.96% to 69.92%.

In a future article, we will construct confidence intervals around the impulse-responses and variance decompositions to interpret the results better.

Appendix 1 shows the orthogonalized residual normality tests. Component 1 represents RLN3, the logarithmic mean monthly returns of the 3-month Treasury constant maturity. Component 2 represents RLN5, the logarithmic mean monthly returns of the 5-year Treasury constant maturity. Component 3 represents RLNIP, the logarithmic monthly returns of the total index of industrial production. Component 4 represents RLN10, the logarithmic mean monthly returns of the 10-year Treasury constant maturity. Component 5 represents RLM1, the logarithmic monthly returns of the money supply, M1. Component 6 represents LNCC, the logarithmic monthly returns of seasonally adjusted total consumer credit outstanding.

VAR Residual Normality Tests
Orthogonalization: Cholesky (Lutkepohl)
Null Hypothesis: residuals are multivariate normal
Date: 09/05/13  Time: 08:28
Sample: 2 277
Included observations: 270

Component Skewness Chi-sq df Prob.

1    0.286335    3.689444   1   0.0548
2    0.337953    5.139559   1   0.0234
3   -0.635545    18.17629   1   0.0000
4    0.099271    0.443463   1   0.5055
5    0.402610    7.294268   1   0.0069
6    1.607438    116.2736   1   0.0000

Joint  151.0166 6  0.0000

Component Kurtosis Chi-sq df Prob.

1    7.971573    278.0610   1   0.0000
2    4.356658    20.70585   1   0.0000
3    6.218751    116.5540   1   0.0000
4    3.134573    0.203735   1   0.6517
5    6.539411    140.9336   1   0.0000
6    13.50198    1240.780   1   0.0000

Joint  1797.238 6  0.0000

Component Jarque-Bera df Prob.

1    281.7505   2   0.0000
2    25.84541   2   0.0000
3    134.7303   2   0.0000
4    0.647198   2   0.7235
5    148.2278   2   0.0000
6    1357.053   2   0.0000

Joint    1948.254   12   0.0000

Source: Author's calculation based on EViews 6 software.
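statsmodels offers a comparable residual normality check; note that it applies a multivariate Jarque-Bera type test rather than the exact Cholesky (Lutkepohl) component breakdown shown above, so the statistics will differ. A sketch, using the fitted res object from earlier:

# Sketch: multivariate normality test on the VAR residuals.
norm = res.test_normality()
print(norm.summary())   # rejects multivariate normality when p is small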

The attached graphs show the pairwise cross – correlograms for the estimated residuals in the UVAR for 6 lags.


[Graphs: pairwise cross-correlograms Cor(X, Y(-i)), i = 1, ..., 6, for all pairs of the RLN3, RLN5, RLN10, RLM1, RLNIP and LNCC residuals, shown with 2 standard error bounds.]

Source: Author’s calculation based on EViews 6 software.

The attached table displays the multivariate LM test statistics for residual serial correlation for 6 lags.


VAR Residual Serial Correlation LM Tests
Null Hypothesis: no serial correlation at lag order h
Date: 09/05/13  Time: 09:32
Sample: 2 277
Included observations: 270

Lags LM-Stat Prob

1    50.03072   0.0601
2    65.90542   0.0017
3    54.28053   0.0258
4    69.04899   0.0008
5    45.01862   0.1440
6    42.86275   0.2005

Probs from chi-square with 36 df.
Source: Author's calculation based on EViews 6 software.
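statsmodels does not expose this LM test directly; its test_whiteness method applies an adjusted Portmanteau (Ljung-Box type) test of the same null hypothesis of no residual serial correlation. A sketch, using the fitted res object:

# Sketch: Portmanteau whiteness test of the VAR residuals up to lag 12
# (nlags must exceed the VAR order, which is 6 here).
white = res.test_whiteness(nlags=12, adjusted=True)
print(white.summary())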

Appendix 2 shows the impulse-response tables and graphs. RLN3 represents the logarithmic mean monthly returns of the 3-month Treasury constant maturity. RLN5 shows the logarithmic mean monthly returns of the 5-year Treasury constant maturity. RLN10 shows the logarithmic mean monthly returns of the 10-year Treasury constant maturity. RLM1 represents the logarithmic monthly returns of the money supply, M1. LNCC shows the logarithmic monthly returns of seasonally adjusted total consumer credit outstanding.

Response of RLM1 (standard errors in parentheses):

Period   RLN3                  RLN5                  RLN10
 1      -0.002786 (0.00042)   -0.000856 (0.00040)   -3.05E-05 (0.00040)
 2      -0.000496 (0.00045)   -7.05E-05 (0.00044)    0.001448 (0.00043)
 3      -8.68E-05 (0.00044)   -2.43E-05 (0.00044)    0.001162 (0.00044)
 4       0.000508 (0.00045)   -0.001156 (0.00044)    0.000401 (0.00043)
 5      -0.000576 (0.00045)   -0.000476 (0.00044)   -0.000466 (0.00044)
 6      -0.000300 (0.00044)   -0.000314 (0.00044)    0.000120 (0.00045)
 7       0.000251 (0.00044)    0.000263 (0.00046)   -0.000481 (0.00047)
 8      -0.000466 (0.00033)    0.000149 (0.00029)   -2.04E-05 (0.00028)
 9       1.02E-05 (0.00027)    7.70E-05 (0.00028)    0.000328 (0.00026)
10       0.000428 (0.00025)    7.67E-05 (0.00027)   -1.26E-05 (0.00024)
11      -0.000304 (0.00024)   -0.000314 (0.00024)    0.000262 (0.00022)
12      -4.96E-05 (0.00022)   -0.000285 (0.00023)    1.29E-05 (0.00019)
13      -0.000229 (0.00021)   -0.000286 (0.00022)   -0.000159 (0.00017)
14      -0.000496 (0.00018)   -0.000214 (0.00017)    0.000209 (0.00014)
15      -8.03E-05 (0.00016)   -0.000106 (0.00016)   -2.73E-05 (0.00014)
16      -0.000148 (0.00016)   -4.00E-05 (0.00015)    5.82E-05 (0.00012)
17      -0.000131 (0.00014)   -1.87E-06 (0.00014)    0.000138 (0.00011)
18       4.08E-05 (0.00013)    8.52E-05 (0.00014)    4.39E-05 (0.00011)
19      -0.000111 (0.00013)    3.84E-06 (0.00013)    2.80E-05 (9.9E-05)
20      -0.000152 (0.00011)    6.36E-05 (0.00012)    3.63E-05 (8.7E-05)
21      -9.41E-05 (0.00010)   -1.29E-05 (0.00011)   -3.23E-05 (8.1E-05)
22      -0.000176 (9.9E-05)    9.69E-08 (0.00010)   -1.41E-05 (7.0E-05)
23      -0.000173 (8.9E-05)   -4.30E-06 (9.5E-05)    1.51E-05 (6.4E-05)

Response of LNCC (standard errors in parentheses):

Period   RLN3                  RLN5                  RLN10


 1       0.000305 (0.00023)    0.000824 (0.00023)   -0.000703 (0.00022)
 2       0.000574 (0.00023)    0.000234 (0.00023)   -7.16E-05 (0.00022)
 3      -0.000308 (0.00024)   -0.000379 (0.00023)    0.000448 (0.00023)
 4       0.000345 (0.00024)   -0.000186 (0.00023)   -0.000191 (0.00023)
 5       0.000352 (0.00024)   -0.000407 (0.00024)    8.69E-05 (0.00024)
 6       0.000403 (0.00024)   -0.000492 (0.00024)   -0.000157 (0.00024)
 7       0.000179 (0.00024)   -5.51E-06 (0.00025)    0.000115 (0.00025)
 8       0.000131 (0.00018)   -0.000272 (0.00017)    1.03E-05 (0.00016)
 9       0.000306 (0.00015)    3.90E-05 (0.00017)   -7.74E-06 (0.00015)
10       0.000273 (0.00014)   -0.000295 (0.00016)   -2.46E-05 (0.00013)
11       0.000308 (0.00014)   -5.61E-05 (0.00016)    8.55E-05 (0.00013)
12       0.000241 (0.00014)   -0.000190 (0.00015)    2.46E-05 (0.00012)
13       0.000224 (0.00012)   -9.83E-05 (0.00014)    6.49E-05 (9.9E-05)
14       0.000208 (0.00011)   -0.000220 (0.00012)   -0.000101 (9.3E-05)
15       7.45E-05 (0.00010)   -0.000159 (0.00012)    5.31E-05 (9.0E-05)
16       0.000190 (0.00010)   -0.000202 (0.00012)   -7.58E-06 (8.1E-05)
17       0.000162 (9.8E-05)   -9.57E-05 (0.00012)    2.47E-05 (7.8E-05)
18       0.000153 (9.2E-05)   -0.000169 (0.00011)    4.26E-05 (7.0E-05)
19       0.000186 (8.8E-05)   -8.78E-05 (0.00010)    1.66E-05 (6.9E-05)
20       0.000132 (8.3E-05)   -0.000144 (9.8E-05)    1.79E-05 (6.6E-05)
21       0.000127 (7.8E-05)   -9.40E-05 (9.4E-05)    3.28E-05 (6.1E-05)
22       0.000144 (7.6E-05)   -0.000122 (9.0E-05)   -1.32E-05 (5.8E-05)
23       9.60E-05 (7.3E-05)   -9.99E-05 (8.6E-05)    2.15E-05 (5.4E-05)

Ordering: RLN3 RLN5 RLN10 RLM1 LNCC

Source: Author’s calculation based on EViews 6 software.


[Impulse-response graphs: responses of RLN3, RLN5, RLN10, RLM1 and LNCC to one S.D. innovations in each of the five variables, with ± 2 S.E. bands, over 20 periods.]

Source: Author’s calculation based on EViews 6 software.


Appendix 3 shows the variance decomposition tables and their graphs. RLN3 represents the logarithmic mean monthly returns of the 3-month Treasury constant maturity. RLN5 shows the logarithmic mean monthly returns of the 5-year Treasury constant maturity. RLN10 shows the logarithmic mean monthly returns of the 10-year Treasury constant maturity. RLM1 represents the logarithmic monthly returns of the money supply, M1. LNCC shows the logarithmic monthly returns of seasonally adjusted total consumer credit outstanding.

Variance Decomposition of RLN3:

Period   S.E.       RLN3       RLN5       RLN10      RLM1       LNCC
 1      0.188884   100.0000   0.000000   0.000000   0.000000   0.000000
 2      0.218559   86.21577   1.036778   12.39962   0.009723   0.338109
 3      0.222236   86.05002   1.133694   12.05206   0.193459   0.570764
 4      0.227019   82.88804   1.462946   12.18694   2.872985   0.589095
 5      0.227522   82.52764   1.456973   12.13412   3.017844   0.863423
 6      0.234621   78.25319   6.365851   11.47242   2.838062   1.070479
 7      0.237156   76.62955   7.618428   11.27195   3.368769   1.111300
 8      0.238340   75.95021   7.701051   11.84266   3.337412   1.168662
 9      0.239058   75.49572   7.744882   12.04522   3.548271   1.165909
10      0.239498   75.22393   7.735826   12.04264   3.729866   1.267733
11      0.239706   75.09333   7.816822   12.02830   3.740877   1.320674
12      0.239859   75.04736   7.868141   12.02589   3.736437   1.322174
13      0.240157   74.89201   7.953534   12.00024   3.800148   1.354060
14      0.240272   74.84630   7.971255   12.00594   3.796825   1.379681
15      0.240350   74.79800   7.983874   11.99823   3.839462   1.380438
16      0.240414   74.76806   8.009390   11.99233   3.850383   1.379842
17      0.240462   74.75110   8.028543   11.99019   3.849191   1.380976
18      0.240520   74.72503   8.059063   11.98545   3.849601   1.380855
19      0.240534   74.71634   8.063058   11.98442   3.855322   1.380855
20      0.240535   74.71578   8.063133   11.98484   3.855304   1.380942
21      0.240551   74.70586   8.065824   11.99012   3.856623   1.381580
22      0.240566   74.70460   8.068680   11.98877   3.856208   1.381741
23      0.240573   74.70332   8.070570   11.98817   3.856254   1.381693

Variance Decomposition of RLN5:

Period   S.E.       RLN3       RLN5       RLN10      RLM1       LNCC
 1      0.085135   16.03689   83.96311   0.000000   0.000000   0.000000
 2      0.086909   16.14380   80.57563   3.251359   0.019680   0.009531
 3      0.087966   15.76259   79.93669   3.602403   0.673910   0.024413
 4      0.089102   15.73795   77.99075   4.347889   1.363416   0.559997
 5      0.091483   15.17507   74.00765   7.974948   1.322630   1.519704
 6      0.095377   13.99306   72.39490   9.794052   1.918132   1.899860
 7      0.097276   16.43440   70.33244   9.415898   1.921306   1.895953
 8      0.097942   16.35744   70.32970   9.530793   1.911185   1.870883
 9      0.098225   16.33695   70.30628   9.558314   1.905074   1.893375
10      0.098283   16.31796   70.30032   9.571765   1.917484   1.892473
11      0.098885   16.42182   70.31841   9.456705   1.928539   1.874522
12      0.099020   16.55079   70.16740   9.432900   1.979436   1.869472
13      0.099140   16.51827   70.19905   9.425477   1.975250   1.881957
14      0.099266   16.49360   70.02356   9.591979   2.013501   1.877360
15      0.099292   16.52817   69.98899   9.587448   2.012695   1.882697
16      0.099348   16.51089   69.96334   9.617384   2.027440   1.880940


17      0.099376   16.54920   69.92686   9.612556   2.031262   1.880125
18      0.099406   16.54254   69.93020   9.608989   2.033558   1.884716
19      0.099412   16.54578   69.92698   9.609224   2.033505   1.884514
20      0.099422   16.54245   69.92530   9.612488   2.035054   1.884703
21      0.099430   16.54322   69.92453   9.612371   2.035013   1.884865
22      0.099434   16.54529   69.91912   9.611876   2.038345   1.885366
23      0.099438   16.54475   69.91922   9.611357   2.038231   1.886439

Variance Decomposition of RLN10:

Period   S.E.       RLN3       RLN5       RLN10      RLM1       LNCC
 1      0.065667   15.19186   75.94124   8.866899   0.000000   0.000000
 2      0.067442   15.22030   72.02121   12.73986   0.009152   0.009475
 3      0.068096   14.93670   71.77935   12.96126   0.262016   0.060671
 4      0.069139   15.11924   69.67642   13.84710   1.047985   0.309244
 5      0.070493   14.96706   67.35286   15.86591   1.008886   0.805286
 6      0.073135   14.01026   66.84137   16.69644   1.344534   1.107402
 7      0.073855   15.34640   65.62953   16.42158   1.509181   1.093308
 8      0.074236   15.20333   65.76202   16.44786   1.498225   1.088558
 9      0.074503   15.15688   65.68395   16.56947   1.505804   1.083896
10      0.074552   15.14521   65.65553   16.55148   1.565161   1.082623
11      0.074903   15.17259   65.76055   16.41297   1.573616   1.080282
12      0.075009   15.28807   65.57739   16.37969   1.670258   1.084587
13      0.075069   15.27153   65.60075   16.36193   1.678861   1.086928
14      0.075129   15.24919   65.49769   16.47101   1.696230   1.085878
15      0.075146   15.26189   65.48881   16.46466   1.695468   1.089173
16      0.075183   15.24726   65.48803   16.46855   1.707667   1.088497
17      0.075195   15.26641   65.46718   16.46457   1.713680   1.088152
18      0.075215   15.26011   65.47060   16.45776   1.720568   1.090969
19      0.075220   15.26196   65.46889   16.45799   1.720349   1.090815
20      0.075225   15.26004   65.47059   16.45574   1.722955   1.090672
21      0.075230   15.25844   65.47224   16.45574   1.722798   1.090772
22      0.075232   15.26034   65.46811   16.45542   1.725103   1.091040
23      0.075235   15.25948   65.46934   16.45463   1.725137   1.091415

Variance Decomposition of RLM1:

Period   S.E.       RLN3       RLN5       RLN10      RLM1       LNCC
 1      0.007679   14.93201   1.408790   0.001793   83.65741   0.000000
 2      0.007861   14.69884   1.353274   3.849742   80.09780   0.000344
 3      0.007975   14.29375   1.315793   6.147471   78.24247   0.000519
 4      0.008253   13.77743   3.455566   6.009316   76.69831   0.059381
 5      0.008350   13.99751   3.743413   6.223776   74.94617   1.089135
 6      0.008433   13.86839   3.828040   6.125477   74.93354   1.244554
 7      0.008762   12.93900   3.648154   6.016128   74.96689   2.429825
 8      0.008782   13.20249   3.664885   5.990500   74.64585   2.496283
 9      0.008842   13.02165   3.623240   6.064688   74.82685   2.463578
10      0.008911   13.08389   3.576200   5.972101   74.78541   2.582403
11      0.008934   13.14819   3.698044   6.039478   74.42828   2.686012
12      0.008972   13.03811   3.780202   5.987556   74.45766   2.736475
13      0.009030   12.94587   3.845841   5.946986   73.93658   3.324723
14      0.009058   13.20594   3.885378   5.970688   73.48079   3.457195
15      0.009090   13.12297   3.873900   5.930189   73.43309   3.639854
16      0.009116   13.07628   3.853471   5.900208   73.26277   3.907268
17      0.009124   13.07749   3.846909   5.916065   73.14340   4.016133
18      0.009146   13.01773   3.838518   5.890622   73.08959   4.163536
19      0.009160   12.99413   3.826681   5.873484   72.91110   4.394608


20      0.009168   13.00325   3.825628   5.865272   72.78787   4.517976
21      0.009182   12.97395   3.813728   5.848091   72.68372   4.680509
22      0.009194   12.98196   3.803873   5.833245   72.51926   4.861665
23      0.009202   13.00056   3.797639   5.823951   72.40657   4.971280

Variance Decomposition of LNCC:

Period   S.E.       RLN3       RLN5       RLN10      RLM1       LNCC
 1      0.004021   0.653365   4.758565   3.464337   0.877526   90.24621
 2      0.004106   2.840979   4.931486   3.357184   1.335548   87.53480
 3      0.004284   3.197107   5.419148   4.325109   1.253813   85.80482
 4      0.004357   3.800864   5.445736   4.399050   1.287759   85.06659
 5      0.004505   4.246718   6.022674   4.157454   1.333652   84.23950
 6      0.004653   4.832902   6.913284   4.025814   2.307631   81.92037
 7      0.004757   4.784219   6.614600   3.918434   2.247738   82.43501
 8      0.004816   4.751350   6.813469   3.822861   2.290651   82.32167
 9      0.004870   5.094298   6.670209   3.738691   2.287851   82.20895
10      0.004936   5.307554   6.898374   3.642827   2.337586   81.81366
11      0.004995   5.612988   6.748992   3.589659   2.372267   81.67609
12      0.005053   5.743379   6.755517   3.510741   2.426843   81.56352
13      0.005101   5.855018   6.672008   3.463837   2.456900   81.55224
14      0.005132   5.971197   6.801086   3.465943   2.427677   81.33410
15      0.005161   5.926920   6.831699   3.438531   2.401480   81.40137
16      0.005194   6.003626   6.917563   3.395274   2.433160   81.25038
17      0.005218   6.059145   6.893116   3.367064   2.431557   81.24912
18      0.005244   6.095877   6.942210   3.340979   2.407476   81.21346
19      0.005265   6.189533   6.918309   3.315419   2.445803   81.13094
20      0.005282   6.220746   6.958205   3.295645   2.433496   81.09191
21      0.005298   6.247999   6.951528   3.279897   2.427499   81.09308
22      0.005313   6.297396   6.972491   3.262487   2.442194   81.02543
23      0.005324   6.306734   6.981937   3.250054   2.442243   81.01903

Cholesky Ordering: RLN3 RLN5 RLN10 RLM1 LNCC

Source: Author’s calculation based on EViews 6 software.


[Variance decomposition graphs over 10 periods: percent of RLN3, RLN5, RLN10, RLM1, RLNIP and LNCC variance due to each of RLN3, RLN5, RLN10, RLM1, RLNIP and LNCC.]

Source: Author’s calculation based on EViews 6 software.


References

Alexander, C. (2003), Market Models: A Guide to Financial Data Analysis, John Wiley and Sons Ltd., ISMA Centre, The Business School for Financial Markets.

Amisano, G. and Giannini, C. (1997), Topics in Structural VAR Econometrics, 2nd ed., Berlin: Springer-Verlag.

Boswijk, P.H. (1995), "Identifiability of Cointegrated Systems", Technical report, Tinbergen Institute.

Brooks, C. (2002), Introductory Econometrics for Finance, Cambridge University Press.

Christiano, L.J., Eichenbaum, M. and Evans, C.L. (1999), "Monetary Policy Shocks: What Have We Learned and to What End?", Chapter 2 in J.B. Taylor and M. Woodford (eds.), Handbook of Macroeconomics, Volume 1A, Amsterdam: Elsevier Science Publishers B.V.

Doornik, J.A. and Hansen, H. (1994), "An Omnibus Test for Univariate and Multivariate Normality", Manuscript.

Fisher, R.A. (1932), Statistical Methods for Research Workers, 4th Edition, Edinburgh: Oliver & Boyd.

Sims, C.A. (1980), "Macroeconomics and Reality", Econometrica, 48, 1-48.


Confirmatory data analysis or inferential statistics

Confirmatory data analysis, or inferential statistics, is a group of statistical techniques used to go beyond the data. It involves the analysis and interpretation of data to make generalizations; in other words, to draw conclusions about a population from quantitative data collected from a sample. Statistical inference is one of the most important and crucial aspects of the decision-making process in economics, business and science.

Statistical inference refers to Estimation and Hypothesis Testing. Estimation is the process of inferring or estimating a population parameter (such as its mean or standard deviation) from the corresponding mean or standard deviation of a sample.

To be valid, estimation must be based on a representative sample. This can be obtained by random sampling, whereby each member of the population has an equal chance of being included in the sample. So the theory we will be looking at only holds if:

(a) the samples are taken at random;
(b) the samples are fairly large (over 30).

In testing a hypothesis, we start by making an assumption with regard to an unknown population parameter. We then take a random sample from the population, and on the basis of the corresponding sample statistics, we either accept or reject the hypothesis with a particular degree of confidence.

Populations and samples

Population: A body of people or any collection of items under consideration.

Sample: A subset of a population or a portion chosen from the population.

It is helpful to distinguish between the mean and standard deviation of a sample, and the mean and standard deviation of the population which the sample comes from. We shall use the following symbols.

x̄ is the mean of a sample

μ (mu) is the mean of the population

s is the standard deviation of a sample

σ (sigma) is the standard deviation of the population


Define what is meant by the sampling distribution of the mean

We take a sample in order to estimate something about the population as a whole, such as average earnings. For example, we might ask 1,000 people in Britain how much they earn and work out the average for those 1,000 people. This would give us an idea of the average earnings of everyone in Britain. The sample average is likely to be close to the population average but not exactly the same. These means, or averages, can be plotted as a frequency distribution, which is called a sampling distribution of the mean.

Thus a sampling distribution of the mean is a frequency distribution of the mean of a large number of samples.


The sampling distribution of the mean has some important properties:

1. The mean of the sampling distribution of the mean (denoted by μx̄) will equal μ, the mean of the sampled population:

μx̄ = μ

2. The standard deviation of the sampling distribution of the mean (usually called the standard error of the mean and denoted by σx̄) will equal the standard deviation of the population divided by the square root of the sample size:

σx̄ = σ/√n

3. As the sample size is increased, the sampling distribution of the mean approaches the normal distribution regardless of the shape of the frequency distribution of the population. The approximation is sufficiently good for n ≥ 30. This is the central-limit theorem.
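A short simulation makes the central-limit theorem concrete. The following sketch draws repeated samples of size 30 from a clearly non-normal (exponential) population and shows that the sample means behave as the theorem predicts:

# Sketch: sample means from a skewed population are approximately normal,
# with mean close to mu and spread close to sigma/sqrt(n).
import numpy as np

rng = np.random.default_rng(0)
population = rng.exponential(scale=2.0, size=100_000)   # skewed population

n = 30
sample_means = [rng.choice(population, size=n).mean() for _ in range(5_000)]

print(f"population mean:      {population.mean():.3f}")
print(f"mean of sample means: {np.mean(sample_means):.3f}")   # close to mu
print(f"std of sample means:  {np.std(sample_means):.3f}")
print(f"sigma / sqrt(n):      {population.std() / np.sqrt(n):.3f}")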

Why is estimation so important?

Managers use estimates because they must make rational decisions without complete information and with a great deal of uncertainty about what the future will bring.

Definition of an interval estimate

• It is a range of values used to estimate a population parameter such as mean.

• For example, the enrollment in accounting is between 100 and 150 students. It is very likely that the true population mean will fall within this interval.

Interval estimates and confidence intervals

• In statistics, the probability that we associate with an interval estimate is called the confidence interval or level. This probability then indicates how confident we are that the interval estimate will include the population mean.

• The most commonly used confidence levels are 90%, 95% and 99%


• The confidence level is used to find the area under the standard normal distribution by using the z value (1.645 for a 90% confidence level, 1.96 for a 95% confidence level and 2.576 for a 99% confidence level). Then find the upper and lower confidence limit and make your conclusion.

Example of Estimation of population means using confidence intervals.

Find the interval of the mean of the population of machines with 95% confidence interval. The sample size is 100 machines, the sample mean is 21 months and standard deviation is s = 6 months.

Steps to solve the problem:

1. It is required to estimate the population mean by calculating the interval estimate.
2. Calculate the sample mean.
3. Determine the z value for the 95% confidence level.
4. Calculate the standard error of the mean.
5. Find the upper and lower confidence limits and make your conclusion.

1) Formula of the interval estimate of the population mean

μ = x̄ ± zσx̄

Where: μ is the population mean
x̄ is the sample mean
z is the value for the appropriate confidence level
σx̄ is the standard error of the mean

2) The sample mean x̄ is 21 months, and s can be used as an estimate of σ = 6.

3) z = 1.96 for a 95% confidence level.

4) σx̄ = σ / √n = 6 / √100 = 0.6 months


5) So the confidence limits are:

x̄ + 1.96σx̄ = 21 + 1.96(0.6) = 22.18 months: upper confidence limit

x̄ − 1.96σx̄ = 21 − 1.96(0.6) = 19.82 months: lower confidence limit

Which confidence level, 90%, 95% or 99%, signifies a high degree of accuracy in the estimate?

• You may think that we should use a high confidence level, such as 99% in all estimation problems. It seems to signify a high degree of accuracy in the estimate.

• However, high confidence levels produce large confidence intervals, and large intervals are not precise: they give very fuzzy estimates. In other words, if you use a 99% confidence level the interval estimate becomes more vague and less precise. By convention, the most frequently used confidence level is 95%, followed by 90% and 99%.


Exercises

1) A survey of 180 motorists revealed that they were spending on average £287 per year on car maintenance. Assuming a standard deviation of £94, construct a 95% confidence interval.

Steps to solve the problem:

1. It is required to estimate the population mean by calculating the interval estimate.
2. Calculate the sample mean.
3. Determine the z value for the 95% confidence level.
4. Calculate the standard error of the mean.
5. Find the upper and lower confidence limits and make your conclusion.

Formula of the interval estimate of the population mean

μ = x̄ ± zσx̄

Where: μ is the population mean
x̄ is the sample mean
z is the value for the appropriate confidence level
σx̄ is the standard error of the mean

2) The sample mean x̄ is £287, and s can be used as an estimate of σ = £94.

3) z = 1.96 for a 95% confidence level

4) σx̄ = σ / √n = 94 / √180 = 94 / 13.42 = £7

5) So the confidence limits are:

x̄ + 1.96σx̄ = £287 + 1.96(7) = £300.72: upper confidence limit

x̄ − 1.96σx̄ = £287 − 1.96(7) = £273.28: lower confidence limit


So the mean spending of the population is between £273.28 and £300.72, with 95% confidence.
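As a cross-check, the confidence limits above can be reproduced with a few lines of Python. This is a minimal sketch, not part of the original text; it simply implements the formula μ = x̄ ± zσx̄ for the motorists' data.

import math

n, x_bar, s, z = 180, 287.0, 94.0, 1.96
se = s / math.sqrt(n)                      # standard error, about 7.01
lower, upper = x_bar - z * se, x_bar + z * se
# about 273.27 to 300.73; the text's 273.28-300.72 uses se rounded to 7
print(round(se, 2), round(lower, 2), round(upper, 2))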

2) From a random sample of 576 of a company’s 20,000 employees, it was found that the average number of days each person was absent from work due to illness was eight days a year, with a standard deviation of 3.6 days.

What are the confidence limits of the average number of days absence a year through sickness per employee for the company as a whole?

(a) At the 95% level of confidence?
(b) At the 99% level of confidence?

Formula of the interval estimate of the population mean

μ = x̄ ± zσx̄

2) The sample mean x̄ is 8 days, and s can be used as an estimate of σ = 3.6.

3) z = 1.96 for a 95% confidence level

4) σx̄ = σ / √n = 3.6 / √576 = 3.6 / 24 = 0.15

5) So the confidence limits are:

x̄ + 1.96σx̄ = 8 + 1.96(0.15) = 8.29: upper confidence limit

x̄ − 1.96σx̄ = 8 − 1.96(0.15) = 7.71: lower confidence limit

(a) So absence is between 7.71 and 8.29 days a year, with 95% confidence. (b) At the 99% level, the limits are 8 ± 2.576(0.15), i.e. 7.61 and 8.39 days.


3) The mileages recorded for a sample of company vehicles during a given week yielded the following data:

138 164 150 132 144 125 149 157
146 158 140 147 136 148 152 144
168 126 138 176 163 119 154 165
146 173 142 147 135 153 140 135
161 145 135 142 150 156 145 128

Calculate the mean and standard deviation, and construct a 95% confidence interval.

x̄ = 146.8, s = 13.05

Formula of the interval estimate of the population mean

μ = x̄ ± zσx̄

Using s = 13.05 as an estimate of σ:

σx̄ = σ / √n = 13.05 / √40 = 13.05 / 6.325 = 2.06

At the 95% level of confidence: 146.8 ± 1.96(2.06) = 146.8 ± 4.04

146.8 + 4.04 = 150.84: upper confidence limit

146.8 − 4.04 = 142.76: lower confidence limit


4) In a study to investigate the average rental costs faced by UK insurance companies, a random sample of 64 companies was found to have a mean rental cost of £25 per square foot, with a standard deviation of £ 6.

Construct a 95% confidence interval for the true mean rental cost.

Solution

Formula of the interval estimate of the population mean

μ = x̄ ± zσx̄

2) The sample mean x̄ is £25, and s can be used as an estimate of σ = £6.

3) z = 1.96 for a 95% confidence level

4) σx̄ = σ / √n = 6 / √64 = 6 / 8 = 0.75

5) So the confidence limits are:

x̄ + 1.96σx̄ = £25 + 1.96(0.75) = £26.47: upper confidence limit

x̄ − 1.96σx̄ = £25 − 1.96(0.75) = £23.53: lower confidence limit


A survey of 300 home buyers in a particular area found that 18 had mortgage payments in arrears. Construct 95% and 99% confidence intervals for the arrears percentage.

Solution

n = 300, p = 18/300 * 100 = 6%

95% Confidence Interval:

6% ± 1.96 √(6 × 94 / 300) = 6% ± 2.69

99% Confidence Interval:

6% ± 2.576 √(6 × 94 / 300) = 6% ± 3.53


Estimating a population mean using confidence intervals with unknown population standard deviation σ.

To estimate a population mean using confidence intervals from a sample mean with unknown standard deviation, follow the steps of the previous section.

1. It is required to estimate the population mean by calculating the interval estimate:

μ= x±zσ x

2. Calculate the sample mean.

3. As σ the population standard deviation is usually unknown, the sample standard deviation (s) may be substituted as an estimate of σ. The denominator (n-1) should be used to eliminate the bias.

s = √( Σ(x − x̄)² / (n − 1) )

4. Calculate the standard error of the mean.

5. Then apply the confidence level to find the area under the standard normal distribution, and determine the appropriate z value (for example, 1.96 for a 95% confidence interval) from the standard normal distribution table.

6. Find the upper and lower confidence limit and make your conclusion.


Hypothesis testing assuming a normal distribution

One-tail tests and two-tail tests

There are two different types of significance test, known as one-tail (one-sided) and two-tail (two-sided) tests.

In a two-tail test the alternative hypothesis is of the form '... does not equal ...'. Thus in a two-tailed test there are two rejection regions. So H0: μ = μ0 and H1: μ ≠ μ0.

Different levels of confidence, or significance levels (α), have different critical values:

95% confidence level, or α = 5% significance level: the critical value is 1.96.

99% confidence level, or α = 1% significance level: the critical value is 2.576.

In a one-tail test, the alternative hypothesis is of the form '... is greater than ...' or '... is less than ...'. There is one rejection region, on the left or on the right. In general, a left-tailed test is used when H0: μ = μ0 and H1: μ < μ0.

In contrast, a right-tailed test is used when H0: μ = μ0 and H1: μ > μ0. Different levels of confidence, or significance levels (α), have different critical values:

95% confidence level, or α = 5% significance level: the critical value is +1.645 or −1.645.

99% confidence level, or α = 1% significance level: the critical value is +2.33 or −2.33.


Inferences concerning means based on two samples

As an example of a two-sample test, consider the hypothesis that two population means (μ1 and μ2) are equal. To test this, we need to collect two independent random samples from the two populations and calculate their means (x̄1 and x̄2) and standard deviations (s1 and s2). Now follow these steps:

1. Formulate the null and alternative hypotheses:

H0: μ1 − μ2 = 0, H1: μ1 − μ2 ≠ 0

2. Select the level of significance if α is not given.

3. Calculate the test statistic:

z = (x̄1 − x̄2) / √(σ1²/n1 + σ2²/n2)

using s1 and s2 as estimates of σ1 and σ2.

4. Compare the calculated value with the critical value and then either reject or do not reject the null hypothesis.


Exercises

1) Sample I is a sample of 60 employees at department A, which shows that the mean output per employee is 106 units with a standard deviation of 8.067 units. Sample II is a sample of 50 employees at department B which shows a mean output per employee of 103 units with a standard deviation of 6.0605 units.

Test whether there is a significant difference between the mean outputs per employee at the 5% significance level.

Solution

1. Formulate the null and alternative hypotheses:

H0: μ1 − μ2 = 0, H1: μ1 − μ2 ≠ 0

2. The level of significance is α = 0.05.

3. Calculate the test statistic:

z = (x̄1 − x̄2) / √(σ1²/n1 + σ2²/n2)

using s1 and s2 as estimates of σ1 and σ2.

Sample I:

Standard deviation s1 = 8.067

Estimate of population variance σ1² = (8.067)² = 65.08

Mean of Sample I: x̄1 = 106 units

Size of Sample I: n1 = 60 employees

Sample II:


Standard deviation s2 = 6.0605

Estimate of population variance σ2² = (6.0605)² = 36.73

Mean of Sample II: x̄2 = 103 units

Size of Sample II: n2 = 50 employees

z = (106 − 103) / √(65.08/60 + 36.73/50) = 2.22

4. Compare the calculated value with the critical value and then either reject or do not reject the null hypothesis.

The null hypothesis is that there is no difference between the two population means. The alternative is that there is difference.

The null hypothesis would be accepted if the actual difference between the sample means did not exceed 1.96 standard errors. However, because the difference is 2.22 standard errors, the null hypothesis is rejected at the 5% level of significance, and management would conclude that average output per employee differs between departments A and B.


Figure 1. Two-tailed hypothesis test of the difference between two sample means at the 0.05 level of significance, showing the acceptance and rejection regions: the acceptance region of H0 runs from −1.96 to +1.96 on the z scale, and the calculated value of +2.22 falls in the right-hand rejection region.
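The calculation in this exercise can be reproduced with the short Python sketch below (not part of the original text; it implements the two-sample z statistic with the sample values given above).

import math

x1, s1, n1 = 106.0, 8.067, 60   # department A
x2, s2, n2 = 103.0, 6.0605, 50  # department B

z = (x1 - x2) / math.sqrt(s1**2 / n1 + s2**2 / n2)
print(round(z, 2))   # 2.22; |z| > 1.96, so reject H0 at the 5% level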


2) A manpower development statistician is asked to determine whether the hourly wages of semiskilled workers are the same in two cities.

Using the following table, test at the 95% confidence level the hypothesis that there is no difference between hourly wages for semiskilled workers in the two cities.

Data from a sample survey of hourly wages

City | Mean hourly earnings from sample | Standard deviation of sample | Size of sample
Rome | x̄1 = €8.95 | s1 = €0.40 | 200
Milan | x̄2 = €9.10 | s2 = €0.60 | 175

Solution

1. Formulate the null and alternative hypotheses:

H0: μ1 − μ2 = 0, H1: μ1 − μ2 ≠ 0

2. The level of significance is α = 0.05.

3. Calculate the test statistic:

z = (x̄1 − x̄2) / √(σ1²/n1 + σ2²/n2)

using s1 and s2 as estimates of σ1 and σ2.

Sample I:

Standard deviation s1 = €0.40

Estimate of population variance σ1² = (0.40)² = 0.16

Mean of Sample I: x̄1 = €8.95

Size of Sample I: n1 = 200


Sample II:

Standard deviation s2 = €0.60

Estimate of population variance σ2² = (0.60)² = 0.36

Mean of Sample II: x̄2 = €9.10

Size of Sample II: n2 = 175

z = (8.95 − 9.10) / √(0.16/200 + 0.36/175) = −2.81

4. Compare the calculated value with the critical value and then either reject or do not reject the null hypothesis.

The null hypothesis is that there is no difference between the two population means. The alternative is that there is difference.

The null hypothesis would be accepted if the actual difference between the sample means did not exceed 1.96 standard errors. However, because the difference is −2.81 standard errors, the null hypothesis is rejected at the 5% level of significance, and the manpower development statistician would conclude that the population means (the average semiskilled wages in the two cities) differ.


Figure 2. Two-tailed hypothesis test of the difference between two sample means at the 0.05 level of significance, showing the acceptance and rejection regions: the acceptance region of H0 runs from −1.96 to +1.96 on the z scale, and the calculated value of −2.81 falls in the left-hand rejection region.

Hypothesis tests concerning proportions. Inferences concerning proportions based on a single sample

As an example of a single-sample test, consider the hypothesis that the population proportion is equal to some assumed value, say π0. To test this, we need to collect a random sample (n > 30) from the population and calculate its proportion (p). Now follow these steps:

1. Formulate the null and alternative hypotheses:

H0: π = π0, H1: π ≠ π0

2. Select a significance level (say, α = 0.05). This fixes the level of confidence at 1 − α = 0.95.

3. Calculate the test statistic:


z = (p − π0) / √( π0(1 − π0) / n )

Compare the result with the critical value of z, which depends on the chosen level of significance and on whether the test is a one-tailed or a two-tailed test. In a two-tailed test with α = 0.05, the critical value of z is 1.96.

4. If the calculated z-value is greater than 1.96 or less than -1.96, reject the null hypothesis in favour of the alternative hypothesis. Otherwise, do not reject the null hypothesis.


Exercises

1) Test the hypothesis at the 95% confidence level that a random sample of 100 invoices, of which 10 contain errors, comes from a population in which the proportion of error invoices is 6%.

Solution

1. Formulate the null and alternative hypotheses: H0: π = 0.06, H1: π ≠ 0.06.

2. The significance level is α = 0.05, so the level of confidence is 1 − α = 0.95.

3. Calculate the test statistic:

z = (p − π0) / √( π0(1 − π0) / n )

z = (0.10 − 0.06) / √( 0.06(1 − 0.06) / 100 ) = 0.04 / 0.0237 = 1.69 (to 2 d.p.)

4. If the calculated z-value is greater than 1.96 or less than -1.96, reject the null hypothesis in favour of the alternative hypothesis. Otherwise, do not reject the null hypothesis.


Figure 3. Two-tailed hypothesis test concerning a proportion on a single sample at the 0.05 level of significance, showing the acceptance and rejection regions: the acceptance region of H0 runs from −1.96 to +1.96 on the z scale, and the calculated value of +1.69 falls inside it.

So, we accept the null hypothesis that the proportion of error invoices is 6%.
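A minimal Python sketch of this test (not part of the original text) reproduces the result; the unrounded standard error gives about 1.68, which becomes the text's 1.69 when the standard error is first rounded to 0.0237.

import math

p, pi0, n = 0.10, 0.06, 100
z = (p - pi0) / math.sqrt(pi0 * (1 - pi0) / n)
print(round(z, 2))   # about 1.68; within +/-1.96, so do not reject H0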


Inferences concerning proportions based on two samples

As an example of a two-sample test, consider the hypothesis that two population proportions (π1 and π2) are equal. To test this, we need to collect two independent random samples from the two populations and calculate their proportions (p1 and p2). Now follow these steps:

1. Formulate the null and alternative hypotheses:

H0: π1 − π2 = 0, H1: π1 − π2 ≠ 0

2. Select the level of significance, α.

3. Calculate the test statistic:

z = (p1 − p2) / √( p̄q̄ (1/n1 + 1/n2) )

where p̄ = (n1p1 + n2p2) / (n1 + n2) and q̄ = 1 − p̄

4. Compare the calculated value with the critical value and then either reject or do not reject the null hypothesis.


Exercises

1) A new product is to be tested on two groups of people. In the first group, 33 out of a random sample of 60 say they will buy the product. The corresponding figure for the second group is 67 out of a random sample of 90. Test with 95% confidence level whether the proportion of purchasers is the same in each group.

Solution

1. Formulate the null and alternative hypotheses:

H0: π1 − π2 = 0, H1: π1 − π2 ≠ 0

2. The level of significance is α = 0.05, so the level of confidence is 1 − α = 0.95.

3. Calculate the test statistic:

z = (p1 − p2) / √( p̄q̄ (1/n1 + 1/n2) )

where p̄ = (n1p1 + n2p2) / (n1 + n2) and q̄ = 1 − p̄

First group: p1 = 33/60 = 0.55, n1 = 60.

Second group: p2 = 67/90 = 0.74, n2 = 90.

z = (0.55 − 0.74) / √( 0.664 × 0.336 × (1/60 + 1/90) )


z = −0.19 / 0.0787 = −2.41 (to 2 d.p.)

4. Compare the calculated value with the critical value and then either reject or do not reject the null hypothesis.

Figure 4. Two-tailed hypothesis test concerning proportions on two samples at the 0.05 level of significance, showing the acceptance and rejection regions: the acceptance region of H0 runs from −1.96 to +1.96 on the z scale, and the calculated value of −2.41 falls in the left-hand rejection region.

So, the null hypothesis is rejected. In other words, the proportion of purchasers is not the same in each group.
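The pooled two-proportion test can be written out in a few lines of Python. This sketch (not part of the original text) uses the unrounded sample proportions, which gives z of about −2.47; the text's −2.41 comes from rounding p2 to 0.74 first.

import math

x1, n1, x2, n2 = 33, 60, 67, 90
p1, p2 = x1 / n1, x2 / n2
p_bar = (x1 + x2) / (n1 + n2)       # pooled proportion
q_bar = 1 - p_bar
se = math.sqrt(p_bar * q_bar * (1 / n1 + 1 / n2))
print(round((p1 - p2) / se, 2))     # about -2.47; beyond -1.96, so reject H0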

2) In 2003 a random sample of 400 invoices issued by the accounts section of a company was found to contain 36 invoices with errors. After a reorganization of the department, a further random sample of 400 invoices issued during 2004 was inspected and found to contain 20 invoices with errors. Test at the 95% confidence level whether the proportion of invoices with errors is the same in each group.


Solution

1. Formulate the null and alternative hypotheses:

H0: π1 − π2 = 0, H1: π1 − π2 ≠ 0

2. The level of significance is α = 0.05, so the level of confidence is 1 − α = 0.95.

3. Calculate the test statistic:

z = (p1 − p2) / √( p̄q̄ (1/n1 + 1/n2) )

where p̄ = (n1p1 + n2p2) / (n1 + n2) and q̄ = 1 − p̄

First group: p1 = 36/400 = 0.09, n1 = 400.

Second group: p2 = 20/400 = 0.05, n2 = 400.

z = (0.09 − 0.05) / √( 0.07 × 0.93 × (1/400 + 1/400) ) = 0.04 / 0.018 = 2.22

4. Compare the calculated value with the critical value and then either reject or do not reject the null hypothesis.


Figure 5. Two-tailed hypothesis test concerning proportions on two samples at the 0.05 level of significance, showing the acceptance and rejection regions: the acceptance region of H0 runs from −1.96 to +1.96 on the z scale, and the calculated value of +2.22 falls in the right-hand rejection region.

So, the null hypothesis is rejected. In other words, the proportion of invoices with errors is not the same in each group, so there appears to have been some improvement following the reorganization of the department.

A sample of 120 housewives was randomly selected from those reading a particular magazine, and 18 were found to have purchased a new household product. Another sample of 150 housewives was randomly selected from those not reading the particular magazine, and only six were found to have purchased the product. Construct a 95% confidence interval for the difference in the purchasing behaviour.

Solution

Sample1: n1 = 120, p1 = 18/120 * 100 = 15%

Sample2: n2 = 150, p2 = 6/150 * 100 = 4%

95% Confidence Interval:

(15% − 4%) ± 1.96 √( 15 × 85/120 + 4 × 96/150 ) = 11% ± 1.96(3.63) = 11% ± 7.12


So the difference in the proportion purchasing lies between 3.88% and 18.12%, with 95% confidence.

A manufacturer claims that only 2% of the items produced are defective. If seven defective items were found in a sample of 200 would you accept or reject the manufacturer’s claim?

Solution

n = 200, p = 7/200 * 100 = 3.5%


z = (0.035 − 0.02) / √( 0.02 × 0.98 / 200 ) = 0.015 / 0.0099 = 1.52, which is less than the critical value of 1.96. The sample evidence suggests that we cannot reject H0, so we accept the manufacturer's claim.

A dispute exists between workers on two production lines. The workers on production line A claim that they are paid less than those on production line B. The company investigates the claim by examining the pay of 70 workers from each production line. The results were as follows:

Sample statistics | Production line A | Production line B
Mean | £393 | £394.50
Standard deviation | £6 | £7.50

Formulate and perform an appropriate test.

Solution

This is a one-tailed two-sample test of H0: μA − μB = 0 against H1: μA − μB < 0. z = (393 − 394.50) / √( 6²/70 + 7.5²/70 ) = −1.50 / 1.148 = −1.31, which does not fall below the one-tailed critical value of −1.645. The sample evidence suggests that we cannot reject H0.


A photocopying machine produces copies, 18% of which are faulty. The supplier of the machine claims that by using a different type of paper the percentage of faulty copies will be reduced. If 45 are found to be faulty from a sample of 300 using the new paper, would you accept the claim of the supplier?

Solution

n = 300, p = 45/300 * 100 = 15%

This is a one-tailed test of H0: π = 0.18 against H1: π < 0.18. z = (0.15 − 0.18) / √( 0.18 × 0.82 / 300 ) = −0.03 / 0.0222 = −1.35, which does not fall below the one-tailed critical value of −1.645. The sample evidence suggests that we cannot reject H0.


Market awareness of a new chocolate bar has been tested by two surveys, one in the Midlands and one in the South East. In the Midlands of 150 people questioned, 23 were aware of the product, whilst in the South East 20 out of 100 people were aware of the chocolate bar. Test at the 5% level of significance if the level of awareness is higher in the South East.

Solution

Midlands: n1 = 150, p1 = 23/150 × 100 = 15.33%
South East: n2 = 100, p2 = 20/100 × 100 = 20%

This is a one-tailed two-proportion test of H0: π1 − π2 = 0 against H1: π2 > π1. The pooled proportion is p̄ = (23 + 20)/250 = 0.172, so z = (0.20 − 0.1533) / √( 0.172 × 0.828 × (1/150 + 1/100) ) = 0.0467 / 0.0487 = 0.96, which is below the one-tailed critical value of 1.645. The sample evidence suggests that we cannot reject H0.


Estimating a population proportion using confidence intervals

The arithmetic mean is a very important statistic, and sampling is often concerned with estimating the mean of a population. Many surveys, however, attempt to estimate a proportion rather than a mean. Examples include surveys concerned with:

(a) Attitudes or opinions about an issue.
(b) The percentage of times an event occurs (for example, the proportion of defective items out of the total number of items produced in a manufacturing department).

To estimate a population proportion π through confidence intervals from a sample proportion p, follow these steps:

1. It is required to estimate the population proportion by calculating the interval estimate:

π = p ± z √( π(1 − π) / n )

Unfortunately, the right-hand side contains π, which is the proportion we are trying to estimate and so is unknown. We therefore have to use the sample proportion, p, as an estimate of π (but we must not confuse this with the constant π = 3.14159).

So we have:

π = p ± z √( p(1 − p) / n )

2. Calculate the sample proportion.

3. Then apply the confidence level to find the area under the standard normal distribution, and determine the appropriate z value (for example, 1.96 for a 95% confidence interval).

4. Find the upper and lower confidence limits and make your conclusion.


Exercises

1) In a random sample 320 out of 500 employees were members of a trade union. Estimate the population proportion of trade union members in the entire organisation at the 95% confidence level.

Solution

1. It is required to estimate the population proportion by calculating the interval estimate:

π = p ± z √( π(1 − π) / n )   (1)

As π is unknown we have to use the sample proportion, p, as an estimate of π. So we have:

π = p ± z √( p(1 − p) / n )   (2)

2. Calculate the sample proportion.

The sample proportion p is 320/500 = 0.64.   (3)

Substituting (3) into equation (2):

π = 0.64 ± 1.96 √( 0.64(1 − 0.64) / 500 )

π = 0.64 ± (1.96 × 0.0215) = 0.64 ± 0.042

So the confidence limits are: 0.64 + 0.042 = 0.68, or 68%: upper confidence limit

0.64 − 0.042 = 0.60, or 60%: lower confidence limit


So we estimate that the proportion of employees who are trade union members is between 60% and 68%, at the 95% level of confidence.
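A minimal Python sketch (not part of the original text) reproducing the trade-union interval:

import math

p, n, z = 320 / 500, 500, 1.96
se = math.sqrt(p * (1 - p) / n)
print(round(p - z * se, 3), round(p + z * se, 3))   # about 0.598 and 0.682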

2) In a random sample of 200 invoices, 100 were found to contain errors. Construct a 95% confidence interval for the true population proportion of invoices containing errors.

Solution

1. It is required to estimate the population proportion by calculating the interval estimate:

π = p ± z √( π(1 − π) / n )   (1)

As π is unknown we have to use the sample proportion, p, as an estimate of π. So we have:

π = p ± z √( p(1 − p) / n )   (2)

2. Calculate the sample proportion.

The sample proportion p is 100 / 200 = 0.5 (3)

Substituting (3) into equation (2):

π = 0.5 ± 1.96 √( 0.5(1 − 0.5) / 200 )

π = 0.5 ± (1.96 × 0.0353) = 0.5 ± 0.069

So the confidence limits are: 0.5 + 0.069 = 0.57, or 57%: upper confidence limit

0.5 − 0.069 = 0.43, or 43%: lower confidence limit


So we estimate that the proportion of invoices containing errors is between 43% and 57%, at the 95% level of confidence.

Introduction to Hypothesis Testing

In testing a hypothesis, we start by making an assumption with regard to an unknown population parameter. We then take a random sample from the population, and on the basis of the corresponding sample statistics, we either accept or reject the hypothesis with a particular degree of confidence.

The aim of a hypothesis test is to check whether sample results differ from the results that would be expected if the null hypothesis were true. In most hypothesis tests, we attempt to reject a stated hypothesis. Such a hypothesis is called a null hypothesis and is denoted by H0. Hypotheses which differ from the null hypothesis are called alternative hypotheses, usually denoted by H1 or HA.

Hypothesis tests concerning means. Inferences based on a single sample

The procedure for hypothesis testing is as follows:

(a) We establish a hypothesis, for example that the mean value of all of a company's invoices is £200. This is the null hypothesis (H0). We also state an alternative hypothesis (H1), for example that the mean value is not £200. In other words, we consider the hypothesis that the population mean is equal to some assumed value, say μ0.

Formulate the null and alternative hypotheses: H0: μ = μ0, H1: μ ≠ μ0

Here: H0: μ = 200, H1: μ ≠ 200

(b) Select a significance level, say 5% (a 95% confidence level).

(c) Calculate the test statistic

z = (x̄ − μ0) / (σ/√n)


Using s as an estimate of σ, compare the result with the critical value of z, which depends on the chosen level of significance and on whether the test is a one-tailed or a two-tailed test. In this two-tailed test at the 95% confidence level, the critical value of z is 1.96.

(d) If in this two-tailed test the calculated z value is greater than 1.96 or less than – 1.96, reject the null hypothesis in favour of the alternative hypothesis. Otherwise, do not reject the null hypothesis.

Exercise of a two-tail test based on a single sample

A company’s management accountant has estimated that the average cost of providing a certain service to a customer is £40. A sample has been taken, consisting of 150 service provisions, and the mean cost for the sample was £ 45 with a standard deviation of £ 10.

Is the sample consistent with the estimate of an average cost of £40, at the 95% confidence level?

Solution

To apply a hypothesis test, we begin by stating an initial view, called the null hypothesis, that the average cost per unit of service is £40. The alternative hypothesis will be that it is not £ 40.

(a) Formulate the null and alternative hypotheses: H0: μ = 40, H1: μ ≠ 40

(b) Our choice of a 5% significance level (a 100% − 5% = 95% confidence level) means that if the z value is within ±1.96 we accept the null hypothesis. If the z value is greater than 1.96 or less than −1.96, we reject the null hypothesis in favour of the alternative hypothesis.

(c) Calculate the test statistic

z = (x̄ − μ0) / (σ/√n) = (45 − 40) / (10/√150) = 6.12 standard errors above the mean

(d) Conclusion: 6.12 standard errors is above 1.96 so the average cost per unit of service is not £40, and the management accountant is wrong.
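The test statistic can be verified with a short Python sketch (not part of the original text; it implements z = (x̄ − μ0)/(σ/√n) with the sample values above).

import math

x_bar, mu0, s, n = 45.0, 40.0, 10.0, 150
z = (x_bar - mu0) / (s / math.sqrt(n))
print(round(z, 2))   # 6.12; well beyond 1.96, so reject H0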


Truncated, dummy variables and panel data analysis

Dummy explanatory and dependent variables, also known as categorical variables, are qualitative data that are used in the regression equation alongside the quantitative variables. They are important as they give us additional information for variables that cannot be measured on a continuous scale. For example, incomes and expenditures can be quantified. In contrast, religion, nationality and gender cannot be quantified and are qualitative variables.


Dummy variables take the values 0 and 1. In hypothesis testing, 0 shows the absence of the characteristic and 1 the presence of the characteristic of the variable. For example, for a gender characteristic, 1 could indicate that a person is male and 0 that a person is female. Dummy variables are used for seasonal factors in sales prediction, and they can be used in time-series analysis. We will examine the linear probability model and the logit and probit models. Dummy explanatory variables are used to show differences in the slope coefficients and the intercept, and to test whether the regression coefficients are stable. The error term follows the assumptions of the linear regression model. For example, use two regression equations with two different dummy variables and check the differences in the intercepts and the slope coefficients. Use the F-test to check for equal and unequal variances.

Let's consider the relationship between consumption, incomes and the age of households. You are provided with quantitative data for consumption and incomes denominated in thousands of pounds, and qualitative data for the age of households. The dependent variable is consumption and the independent variables are incomes and the age of households. The dummy variable for the households is as follows:

D = 0: households aged between 30 and 40.
D = 1: households aged between 45 and 55.

The mathematical equation is as follows:

y = α + β1x1 + β2d1

Where: y is the dependent variable (consumption, the measured variable), x1 is the independent variable (incomes), and d1 is the dummy variable for household age.

The data are as follows:


Consumption | Incomes | Age of households
10000 | 20000 | 0
20000 | 25000 | 1
30000 | 35000 | 1
40000 | 45000 | 0
42000 | 50000 | 1
44000 | 52000 | 0
45000 | 55000 | 1
46000 | 58000 | 1
48000 | 60000 | 1
50000 | 64000 | 0
52000 | 70000 | 0
53000 | 77000 | 1
54000 | 78000 | 0
55000 | 80000 | 1
60000 | 81000 | 0
61000 | 83000 | 1
62000 | 84000 | 1
68000 | 85000 | 0
70000 | 90000 | 1

I have added the regression summary output.

SUMMARY OUTPUT

Regression Statistics
Multiple R         0.972682
R Square           0.946111
Adjusted R Square  0.939375
Standard Error     3762.791
Observations       19

ANOVA
           | df | SS       | MS       | F        | Significance F
Regression | 2  | 3.98E+09 | 1.99E+09 | 140.4536 | 7.11202E-11
Residual   | 16 | 2.27E+08 | 14158594 |          |
Total      | 18 | 4.20E+09 |          |          |

                  | Coefficients | Standard Error | t Stat   | P-value  | Lower 95%   | Upper 95%
Intercept         | 3421.568     | 2935.648       | 1.165524 | 0.260889 | -2801.7255  | 9644.861
Incomes           | 0.708338     | 0.042293       | 16.74818 | 1.45E-11 | 0.618680    | 0.797996
Age of households | 59.17822     | 1749.552       | 0.033825 | 0.973435 | -3649.70603 | 3768.062

Please check the R² and the F-statistic. They are both significant and show a good fit of the data in terms of explaining the proportion of variation of the dependent variable. Please check the coefficients in relation to the standard errors, the t-statistics and the p-values.

ŷ = 3421.568 + 0.708x1 + 59.178d1
SE            (2935.648)  (0.042)   (1749.552)
t-statistics  (1.1655)    (16.748)  (0.0338)

F = 140.4536, R² = 0.946
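The summary output above can be reproduced, approximately, with the statsmodels package in Python. The following is a sketch under the assumption that statsmodels is available; the data are the 19 observations listed in the table.

import numpy as np
import statsmodels.api as sm

consumption = np.array([10000, 20000, 30000, 40000, 42000, 44000, 45000,
                        46000, 48000, 50000, 52000, 53000, 54000, 55000,
                        60000, 61000, 62000, 68000, 70000])
incomes = np.array([20000, 25000, 35000, 45000, 50000, 52000, 55000,
                    58000, 60000, 64000, 70000, 77000, 78000, 80000,
                    81000, 83000, 84000, 85000, 90000])
age = np.array([0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1])

X = sm.add_constant(np.column_stack([incomes, age]))
fit = sm.OLS(consumption, X).fit()
print(fit.params)     # roughly 3421.57, 0.708, 59.18, as in the output above
print(fit.rsquared)   # roughly 0.946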


Let's consider another example that shows the relationship between consumption, incomes and gender. You are provided with quantitative data for consumption and incomes denominated in thousands of dollars, and qualitative data for gender. The dependent variable is consumption and the independent variables are incomes and gender. The dummy variable for gender is as follows:

D = 0: male.
D = 1: female.

The mathematical equation is as follows:

y = α + β1x1 + β2d1

Where: y is the dependent variable (consumption, the measured variable), x1 is the independent variable (incomes), and d1 is the dummy variable for gender.

The data are as follows:

Consumption | Incomes | Gender
10000 | 20000 | 0
20000 | 25000 | 0
30000 | 35000 | 0
40000 | 45000 | 1
42000 | 50000 | 1
44000 | 52000 | 1
45000 | 55000 | 0
46000 | 58000 | 0
48000 | 60000 | 1
50000 | 64000 | 0
52000 | 70000 | 1
53000 | 77000 | 0
54000 | 78000 | 1
55000 | 80000 | 1
60000 | 81000 | 1
61000 | 83000 | 1
62000 | 84000 | 1
68000 | 85000 | 0
70000 | 90000 | 0

I have added the regression summary output.


SUMMARY OUTPUT

Regression Statistics
Multiple R         0.972684
R Square           0.946114
Adjusted R Square  0.939378
Standard Error     3762.683
Observations       19

ANOVA
           | df | SS       | MS       | F        | Significance F
Regression | 2  | 3.98E+09 | 1.99E+09 | 140.4622 | 7.10876E-11
Residual   | 16 | 2.27E+08 | 14157782 |          |
Total      | 18 | 4.20E+09 |          |          |

          | Coefficients | Standard Error | t Stat   | P-value  | Lower 95%    | Upper 95%
Intercept | 3459.644     | 2792.848       | 1.238751 | 0.233304 | -2460.927297 | 9380.214885
Incomes   | 0.708965     | 0.044123       | 16.06807 | 2.71E-11 | 0.615429338  | 0.802500744
Gender    | -81.9561     | 1804.825       | -0.04541 | 0.964343 | -3908.012261 | 3744.100076

Please check the R² and the F-statistic, and the coefficients in relation to the standard errors, the t-statistics and the p-values.

ŷ = 3459.644 + 0.708965x1 − 81.9561d1
SE            (2792.848)  (0.044)     (1804.825)
t-statistics  (1.23875)   (16.06807)  (−0.0454)

F = 140.4622, R² = 0.946


Exercise

You are given the demand functions for gold and silver futures contracts in relation to their prices and to gender. The dummy variable is gender.

QG = 3.24 + 1200p + 2.31d
t-statistics: (0.45) (4.45) (1.23), R² = 0.87

QS = 2.12 + 33.12p + 3.21d
t-statistics: (0.23) (5.24) (0.12), R² = 0.90

You are required to do the following:

(a) Interpret the coefficients in relation to the t-statistics.
(b) Is the demand for gold and silver futures contracts elastic?
(c) How do you interpret the dummy variables?
(d) How do you interpret the slope coefficient in relation to the dependent variable?


Exercise

Please examine the multiplicative (interaction) effect that the dummy variables have on the dependent variable, consumption. The mathematical formula is as follows:

yt = α + β1x1 + β2d1 + β3d2 + β4(d1d2) + εt

The data are as follows:

Consumption | Incomes | Gender | Age of households
10000 | 20000 | 0 | 0
20000 | 25000 | 0 | 1
30000 | 35000 | 0 | 1
40000 | 45000 | 1 | 0
42000 | 50000 | 1 | 1
44000 | 52000 | 1 | 0
45000 | 55000 | 0 | 1
46000 | 58000 | 0 | 1
48000 | 60000 | 1 | 1
50000 | 64000 | 0 | 0
52000 | 70000 | 1 | 0
53000 | 77000 | 0 | 1
54000 | 78000 | 1 | 0
55000 | 80000 | 1 | 1
60000 | 81000 | 1 | 0
61000 | 83000 | 1 | 1
62000 | 84000 | 1 | 1
68000 | 85000 | 0 | 0
70000 | 90000 | 0 | 1

You are required to do the following (a sketch of the estimation appears after this list):

(a) Estimate the regression equation.
(b) Check, at the 5% significance level, whether the coefficients are statistically significant.
(c) Examine the standard errors and the t-statistics in relation to the p-values.
(d) Is there a significant effect of the interaction of the two dummy variables?
(e) How do you interpret the negative or positive sign of the slope coefficient in relation to the dependent variable?
(f) Check the R² and the F-statistic.
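The sketch below (assuming the statsmodels package; not part of the original text) shows one way to build the interaction term d1d2 and estimate the equation. The column vectors would be taken from the data table above.

import numpy as np
import statsmodels.api as sm

def fit_with_interaction(y, x1, d1, d2):
    # Multiplicative dummy term: 1 only when both characteristics are present
    interaction = d1 * d2
    X = sm.add_constant(np.column_stack([x1, d1, d2, interaction]))
    return sm.OLS(y, X).fit()

# Usage: fit_with_interaction(consumption, incomes, gender, age).summary()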


I have added a hedge fund article to help you understand in practice how a dummy variable is applied in a probit regression equation. The dependent variable is a dummy variable.


An empirical analysis of the performance of Hedge Funds over the period 1998 to 2003 in terms of incentive fees, management fees, size, age, hurdle rate, high watermark provision and lockup period.

Abstract:

This paper aims at testing empirically the major building blocks that affect the performance of hedge funds: incentive fees, management fees, size, age, hurdle rate, high watermark provision and lockup period. The sample is drawn from the DataFeeder dataset. It is very comprehensive and includes 680 funds for the period 1998 to 2003. My results are mixed. Management fees and age significantly affect the performance of hedge funds. My findings suggest that there are other factors that could contribute to this deviation, such as lock-up periods, the hurdle rate and the high water mark. Dummy variables applied in the probit binary regression equation suggest that these three factors constitute a significant explanation of the performance persistence.

Keywords: incentive fees, management fees, hurdle rate, high watermark provision, lockup period, DataFeeder3

Introduction

3 I would like to thank Karen Henseleit, sales manager, for passing the data from DataFeeder to me; the Alternative Asset Center (AAC), the Fund of Hedge Funds Specialist (www.aa-center.net); and the Fund of Hedge Funds DataFeeder.


Hedge funds were first created by Alfred Winslow Jones in 1949. As a matter of fact, there is no universally accepted definition of hedge funds. They are owned by private managers and wealthy individuals. Since the early 1990s, there has been a growing interest in the use of hedge funds. They are a growing business with more than $200 billion under management. They are suitable for sophisticated investors as they offer greater flexibility in their asset allocation and have a relatively low covariance with other asset classes. The fact that they have low correlation with traditional asset classes like equities and bonds allows them to offer better diversification. Their fee structure is the most challenging issue among academics and practitioners, as it affects the performance return.

There has also been a lot of interest shown by both academics and practitioners in estimating the raw returns and risk-adjusted multi-factor performance of hedge funds and other investment vehicles (Fama and French (1993, 1996); Carhart (1997); Agarwal and Naik (2004); Capocci and Hubner (2004)). Various studies have documented the comparison of hedge fund performance with that of the S&P 500 used as a universe index. We are going to compare the performance of these funds with the S&P 500 index.

This paper seeks to shed light on the complexity of the structure of hedge funds by examining their performance in terms of management fees, hurdle rate, and watermark provision. We look at these changing exposures for individual hedge funds. Thus, we seek to provide empirical evidence on the type of fees or other factors that sustain the fund and directly affect the fluctuations of return.

The paper is organized as follows. Section 2 describes the interrelation of structure and performance. Section 3 provides information on data collection issues, sampling and descriptive statistics. Section 4 presents methodological issues and empirical results. Section 5 summarizes and concludes.

2. Structure and performance


The most interesting feature of hedge funds is the interrelation of their structure and their performance. As a result, the payroll structure, which is based on three axes, namely performance fees, incentive fees and management fees, is closely related to performance. The payroll structure is based on a minimum investment of $250,000, an annual fee of 1% - 2%, and an incentive fee of 5% to 25% of annual profits. This structure usually includes another component, known as the high water mark, that adds past performance to current performance. On the other hand, investors in hedge funds are often restricted by lockup periods, which are the time period during which the initial investment cannot be redeemed from the fund. Limitations on cash withdrawals are a result of cash fluctuations and give fund managers more freedom in setting up long-term positions. This lockup period could have an adverse effect for active investors.

Another factor that directly affects the performance of the fund is manager incentive fees, which are related to the hurdle rate. The former are additional fees based on new profits from the fund according to its performance: the manager is rewarded only when the fund does well. The latter is the return above which the manager begins to take incentive fees. For example, if the fund has a hurdle rate of 10% and the fund returns 25% for the year, the fund will take incentive fees on the 15% return above the hurdle rate. Thus, if the fund performance is below the high water mark limit, then the manager will be restricted from charging incentive fees, which could lead to a consequent bad performance of the fund and finally liquidation of the fund. This incentive fee is generally subject to a high water mark provision. It is a benchmark used by funds to take fees only when profits are recorded from the positive performance of the fund. For example, a $1 million investment is made in year 1 and the fund declines by 50%, leaving $500 thousand. In year 2, the fund returns 100%, which brings it back to the initial investment. It will not take incentive fees on the return in year 2, as the initial investment was not increased. If the fund shows a negative performance, then the manager must cover the loss in the next year before the incentive fee becomes applicable. We relax the assumption that, if new investors have different high-water marks, returns are the same for all investors in the fund. Usually, databases report the returns for the initial or the average investor. Finally, management fees are taken by the manager on the entire asset level of the investment.

The investment styles and asset allocation of hedge funds have changed significantly over time to reflect the performance of the fund. Due to the diversity of the industry, there is no standard method to classify hedge funds. They can be classified either on the basis of domicile or on the nature of the trading strategies employed by them.

Based on domicile, hedge funds can be classified into two broad categories: onshore and offshore. Onshore funds are domiciled in the US and are usually in the Limited Partnership form, where the investors' goals are aligned with fund managers' incentives. In addition, they are free from regulatory controls imposed by the Investment Company Act of 1940. In contrast, offshore funds are established in tax-neutral jurisdictions such as the British Virgin Islands, the Bahamas, Bermuda, the Cayman Islands, Dublin and Luxembourg, allowing the investors to minimize their tax liabilities by investing outside their country. They are not regulated by the Securities and Exchange Commission (SEC), so it is very interesting for practitioners to examine their behaviour in terms of risk and return. The investors that focus on offshore funds are generally non-US or US tax-exempt investors. In this paper, we focus on the whole industry.

On the basis of the nature of the trading strategies followed by hedge funds, they can be divided into two broad categories: non-directional and directional. Non-directional strategies do not depend on the direction of any specific market movement and are commonly referred to as market-neutral strategies. In contrast, directional strategies aim to benefit from broad market movements.

Our perspective differs from that of other researchers in terms of database, methodology, and empirical results. Brown, Goetzmann, and Ibboston (1999) examine the performance of offshore hedge funds. They attribute offshore hedge fund performance to style effects rather than manager skills. Ackermann, McEnally, and Ravenscraft (1999) report that the comparison of hedge funds and market indexes yields mixed findings. They conclude that hedge funds outperform mutual funds. The above papers all use different hedge fund data. For example, Fung and Hsieh (1997) use combined data from Paradigm LDC and TASS Management Limited. Brown, Goetzmann, and Ibboston (1999) employ the hand-collected data from the US Offshore Funds Directory. Ackermann, McEnally, and Ravenscraft (1999) utilize combined data from Hedge Fund Research, Inc. (HFR) and Managed Account Reports (MAR).

3. Data collection issues, sampling and descriptive statistics

Data are collected by a number of data vendors and fund advisors, such as the TASS and Credit Suisse First Boston/Tremont (CSFB/Tremont) databases and the Alternative Asset Center Fund of Hedge Funds DataFeeder. Specifically, the TASS and CSFB/Tremont databases track more than 3000 funds. There are strict rules for fund selection. The universe consists only of funds with a minimum of USD 10m under management, a minimum one-year track record, and current audited financial statements, with more than 900 funds in the Index Universe (www.hedgeindex.com).

We took our sample of 680 hedge funds from the Alternative Asset Center Fund of Hedge Funds DataFeeder database. We use the data to investigate hedge fund performance and the question of performance persistence among hedge fund managers. The Alternative Asset Center (AAC) was formed in July 1999 as an independent publishing company. In appendix 1, we present the various categories of hedge funds according to the style mandates used in our analysis. The DataFeeder is very powerful and includes general information on each of the 680 funds profiled, in terms of fund name/code, fund domicile, investment manager, fund NAV, redemption frequency, management fee, performance fee, style mandate, investor type, investment objective and much other information, including monthly performances since 1989. A major and important advantage of this database is that it keeps historical information on defunct funds.

The use of monthly data has some strong advantages over the annual returns used by Brown et al. (1999). Monthly returns greatly enhance the accuracy of our standard deviation measure of risk. The audited reports that hedge funds send to investors generally include monthly returns, and these are the same returns the funds supply to the databases. Subscription and redemption opportunities typically do not correspond to the incentive fee period.

The different strategies used by funds, the total number of funds, and descriptive statistics of the average NAV return, the Jarque-Bera test of normality and the Sharpe ratio are presented in Table 1. It is worth mentioning that for the period 1998-2003 the funds did not have a complete data history, due to different inception periods.

Table 1 shows descriptive statistics of the average NAV return, together with the Jarque-Bera statistic, which is used to test whether the series is normal or non-normal (this test uses the chi-squared distribution and is specifically a goodness-of-fit test), and the Sharpe ratio, which is defined as the excess return divided by the standard deviation of the style category. The whole sample consists of 680 hedge funds covering the period January 1998 to January 2003.

AAC style category | N | Mean (%) | SD (%) | Min (%) | Max (%) | Skew | Kurt | ADF test | Jarque-Bera | Sharpe ratio
Alternative hedge and private equity investments | 1 | 0.35 | 1.01 | -2.20 | 3.37 | 0.03 | 2.99 | -4.24* | 0.001 | 2.54
Arbitrage | 11 | 0.21 | 0.39 | -0.80 | 1.15 | 0.16 | 3.32 | -5.65* | 0.51 | 6.23
Arbitrage - Long/Short Equity | 2 | 0.47 | 1.27 | -2.21 | 4.61 | 1.36 | 6.25 | -4.56* | 38.85 | 2.12
Broad based but with larger allocation to Equity Hedged Strategies | 2 | 0.81 | 0.64 | -0.04 | 1.83 | 0.47 | 2.01 | -5.39* | 0.70 | 4.73
Conservative | 131 | 0.47 | 1.27 | -2.21 | 4.61 | 1.36 | 6.25 | -4.56* | 38.85 | 2.12
Diversified | 324 | 0.65 | 1.62 | -6.45 | 5.80 | -0.19 | 8.86 | -4.22* | 103.50 | 1.77
Diversified across several strategies | 1 | 0.56 | 1.30 | -1.86 | 5.19 | 1.19 | 6.63 | -3.83* | 43.24 | 2.14
Equity | 2 | 0.32 | 0.74 | -1.61 | 2.30 | 0.09 | 4.25 | -4.13* | 4.05 | 3.43
Equity Hedge | 1 | 0.52 | 1.97 | -6.44 | 6.04 | -0.02 | 5.62 | -4.73* | 17.40 | 1.39
Equity/Hedge/event | 1 | 0.15 | 1.00 | -1.44 | 2.84 | 0.79 | 3.62 | -4.38* | 2.62 | 2.37
Equity/defensive | 2 | 0.51 | 1.97 | -6.44 | 6.04 | -0.01 | 5.61 | -4.71* | 17.37 | 1.39
Event driven | 5 | 0.27 | 0.70 | -1.67 | 2.76 | 0.26 | 4.86 | -3.66* | 9.47 | 3.56
Global macro | 1 | 0.35 | 1.98 | -2.75 | 3.17 | -0.17 | 1.74 | -5.69* | 0.92 | 1.30
Global multi manager fund | 3 | 0.19 | 1.08 | -2.05 | 2.40 | 0.15 | 2.81 | -4.50* | 0.19 | 2.23
Hedged bond | 2 | 0.71 | 1.56 | -2.63 | 4.43 | 0.40 | 3.12 | -3.92* | 1.66 | 1.88
Hedged equity | 5 | 0.20 | 0.33 | -0.94 | 0.90 | -0.76 | 4.81 | -5.54* | 14.23 | 7.33
Long/Short arbitrage | 2 | 1.00 | 1.77 | -3.92 | 5.12 | 0.45 | 3.64 | -6.26* | 3.11 | 1.82
Long/Short equity | 25 | 0.42 | 1.03 | -2.70 | -2.87 | -0.12 | 4.52 | -4.96* | 4.84 | 2.56
Macro/CTA | 1 | 0.21 | 0.66 | -0.95 | 1.82 | 0.45 | 3.34 | -5.21* | 0.72 | 3.68
Market defensive | 53 | 0.64 | 1.49 | -5.76 | 4.54 | -0.47 | 8.26 | -3.84* | 72.58 | 1.92
Market neutral | 10 | 0.70 | 1.40 | -3.85 | 4.65 | 0.25 | 4.85 | -3.82* | 9.33 | 2.09
Market neutral diversified | 2 | 0.41 | 2.90 | -9.94 | 8.25 | -0.29 | 5.21 | -4.59* | 13.33 | 0.91
Multi-strategy | 70 | 0.61 | 1.51 | -6.56 | 4.41 | -1.05 | 10.49 | -4.32* | 153.87 | 1.87
Primarily arbitrage strategies | 1 | 0.39 | 0.77 | -0.76 | 2.23 | 0.66 | 3.00 | -5.91* | 1.65 | 3.39
Strategic | 102 | 0.52 | 1.73 | -6.54 | 5.65 | -0.34 | 7.33 | -3.61* | 48.92 | 1.58
Trading | 2 | 0.36 | 0.50 | -0.57 | 1.76 | 0.71 | 3.37 | -5.93* | 2.87 | 5.16
Trading/Equity/Arbitrage | 1 | 0.08 | 0.89 | -2.94 | 2.66 | -0.50 | 6.16 | -4.33* | 33.04 | 2.58
Trading strategies | 10 | 0.89 | 1.62 | -3.38 | 6.60 | 0.71 | 5.76 | -4.14* | 24.52 | 1.92


Source: calculated by the author.
* Significant at the 1% level.

According to Table 1, the highest absolute returns were achieved by Long/Short arbitrage (1.00%), Trading strategies (0.89%), Broad based but with larger allocation to Equity Hedged Strategies (0.81%), Hedged bond (0.71%) and Market neutral (0.70%). On the other hand, the lowest absolute returns were achieved by Equity/Hedge/event (0.15%), Global multi manager fund (0.19%), Hedged equity (0.20%) and Arbitrage (0.21%).

Almost all the style categories show a standard deviation that is high in relation to their mean. It is interesting to see that the Sharpe ratio shows a different picture when we take into consideration a risk-adjusted measure based on the US 3-month Treasury bill risk-free rate. After adjusting for risk, Arbitrage, which was low in terms of absolute return, shows a high figure of 6.23; Hedged equity shows a similar result of 7.33. The Jarque-Bera test of normality shows a very different picture according to the style of the category. For example, the Diversified style category shows a Jarque-Bera statistic of 103.50, which implies that the time series is not normally distributed. Global macro shows a figure of 0.92, which is below the critical value at the 5% significance level, and therefore the series is normally distributed. Most of the categories show a significant measure of leptokurtic kurtosis and a mixed picture of positive or negative skewness, which implies that actual data lie below or above the mean according to the performance of the fund.
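As an illustration of the two statistics discussed above, the following Python sketch (not from the paper; the simulated return series and risk-free rate are stand-ins, since the DataFeeder data are proprietary) computes a Jarque-Bera test and a Sharpe ratio for a series of monthly returns.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
monthly_returns = rng.normal(0.005, 0.02, size=61)  # stand-in NAV returns
risk_free = 0.001                                    # stand-in monthly T-bill rate

jb_stat, jb_p = stats.jarque_bera(monthly_returns)
sharpe = (monthly_returns.mean() - risk_free) / monthly_returns.std(ddof=1)
print(round(jb_stat, 2), round(jb_p, 3), round(sharpe, 2))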

Table 2 displays descriptive statistics of the hedge fund independent variables that affect the performance measure: management fees; incentive fees, the percentage of annual profits above the high water mark provision or hurdle rate; size, the fund's net assets under management; and age, the number of months since the inception of the fund. The whole sample is calculated for the period January 1998 to January 2003.

Variable | Mean | Median | Standard deviation | Range | Minimum | Maximum
Management fees (%) | 1.37 | 1.50 | 0.69 | 12.00 | 0.00 | 12.00
Incentive fees (%) | 7.75 | 10.00 | 6.44 | 30.00 | 0.00 | 30.00
Size ($million) | 135.98 | 324.7 | 418.99 | 791.82 | 450 | 791.87
Age (months) | 41.78 | 24.00 | 36.55 | 226.00 | 2.00 | 228.00
S&P 500 index | -0.11 | -0.46 | 5.31 | 23.32 | -10.62 | 12.69

Source: calculated by the author.

According to Table 2, management fees and incentive fees vary significantly across the various style categories of hedge funds. Managers of hedge funds are given strong incentives to manage the fund properly and to earn a return above the high water mark provision. According to the table, the incentive fees given to the managers have a median of 10% and a mean of 7.75%; the maximum value that they can get is 30%.

Incentive fees constitute an important factor that can significantly affect the performance of the fund. The annual management fees charged in addition to incentive fees are another variable affecting the willingness of investors to increase their shareholding in the fund's assets. The range across the different styles of hedge funds is 12%. The average manager gets a fee of 1.37% charged on the assets, the median figure is 1.50%, and the maximum fee charged was 12%.

The average fund has an age of 41.78 months, with a median of 24 months and a range of 226 months among the various funds. A consistently positive performance has a direct influence on the age of the fund, as it is the source of survival and long-term existence in the database. Finally, the S&P 500 index is used as a benchmark against which the various style categories of hedge funds are compared, to identify whether the funds are outperforming or underperforming the universe index. According to Table 2, the index shows a negative performance of -0.11 percentage points for the period 1998-2003. The standard deviation of 5.31 percentage points indicates a wide dispersion of the actual data around the mean.

Table 3 shows the augmented Dickey-Fuller (ADF) test statistics of the independent variables that are going to be used in the multiple regression equation: management fees; incentive fees, defined as the percentage of annual profits above the high water mark provision or hurdle rate; size, calculated as the fund's net assets under management; and age, the number of months since the inception of the fund. The whole sample is calculated for the period January 1998 to January 2003.

Variable | ADF test statistic
Management fees | -8.40*
Incentive fees | -9.87*
Size | -11.26*
Age | -9.86*

Source: calculated by the author.
* Significant at the 1% level.

From the ADF unit root tests reported in Table 3, it is clear that all independent variables are stationary at the 1% level of significance. Since all variables are stationary, standard regression may be used.
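A unit-root check of this kind can be run with the adfuller function in the statsmodels package. The sketch below is illustrative only (the random series stands in for the proprietary fee, size and age series used in the paper).

import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
series = rng.normal(size=61)   # stand-in for, e.g., the management-fee series

adf_stat, p_value, lags, nobs, crit, _ = adfuller(series)
print(round(adf_stat, 2), round(p_value, 3))
print(crit)   # reject a unit root if adf_stat is below the 1% critical value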

Table 4 presents a Pearson correlation matrix of the dependent and independent variables for the major style categories for the period 1998 to 2003. The final sample consists of the large style categories, those with ≥ 30 funds. We excluded the small style categories with a limited number of funds, such as 1 or 2, as they create multicollinearity problems in our regression equation due to limited observations. We excluded funds with the acronyms Alternative Equity and private (AEP), arbitrage (A), arbitrage long/short equity (AL/SE), Broad based with larger allocation (BBWLA), Diversified across several strategies (DAST), Equity (E), Equity/Hedge (EH), Equity Hedge Event (EHE), Event driven (ED), Equity defensive (ED), Global macro (GM), Global multi manager (GMM), Hedged bond (HB), Hedged equity (HE), Long/Short Arbitrage (L/SA), Long/Short Equity (L/SE), Macro CTA (MC), Market defensive (MD), Market neutral (MN), Market neutral diversified (MND), Primarily arbitrage strategy (PAS), Trading (T), Trading Equity Arbitrage (TEA), Trading strategies (TS). We include a representative sample of 680 hedge funds that belong to the style categories with the acronyms Conservative (C), Diversified (D), Market defensive (MD), Multi-strategy (MS), Strategic (S). The dependent variables are compared with the independent variables age, size, management fees (MF), incentive fees (IF), and the S&P 500 index.

                C      D      MD     MS     S      Age    Size   MF     IF     S&P500 INDEX
C               1.00
D               0.09   1.00
MD              0.10   0.10   1.00
MS              0.09   0.09   0.10   1.00
S               0.09   0.09   0.10   0.09   1.00
Age            -0.29  -0.32  -0.30  -0.30  -0.31   1.00
Size           -0.17  -0.18  -0.17  -0.20  -0.24   0.01   1.00
MF              0.11   0.11   0.12   0.12   0.13   0.03   0.03   1.00
IF              0.11   0.08   0.08   0.08   0.05   0.01  -0.09   0.11   1.00
S&P500 INDEX    0.36   0.44   0.40   0.44   0.45  -0.33  -0.04   0.27   0.06   1.00

Source: calculated by the author


The correlation matrix shown in table 4 reveals no statistically significant relationship between the independent and dependent variables. Most of the figures reveal small and insignificant values among the various categories and the independent variables. There are no signs of multicollinearity among the variables, and therefore we can use them in our multiple regressions.
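A Pearson correlation matrix of this kind can be reproduced with pandas; this is a minimal sketch assuming the dependent and independent variables are columns of a hypothetical DataFrame named data.

import pandas as pd

def correlation_matrix(data: pd.DataFrame) -> pd.DataFrame:
    # Pairwise Pearson correlations, rounded to two decimals as in table 4.
    # Values near zero between regressors suggest no multicollinearity problem.
    return data.corr(method="pearson").round(2)

# Hypothetical usage: data holds the style-category returns plus Age, Size, MF, IF
# and the S&P 500 index as columns.
# print(correlation_matrix(data))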

4. Methodological issues and empirical results.

In this paper we empirically analyze the persistence of hedge fund performance over the period 1998 to 2003 by applying two methodologies. The first methodology is a multiple regression that takes into consideration a range of independent variables, such as management fees, incentive fees, and the age and size of the fund. The statistical, econometric and descriptive properties of the independent and dependent variables were examined in the previous section, where we concluded that there is no sign of multicollinearity and that the series are stationary.

The first methodology follows a multiple regression model of the following form:

RNAV Style categories = a + b1(Rm-Rf) + b2(IF) + b3(MF) + b4(Age) + b5(Size) + εt

Where:

RNAV Style categories: is the excess return of the style category's NAV over the 3-month US Treasury bill

Rm: is the market return, proxied by the S&P 500 index; a value-weighted index such as the Credit Suisse First Boston (CSFB) Tremont Hedge Fund Index could also be used

Rf: is the risk-free rate of the 3-month US Treasury bill

IF: is the incentive fee

MF: is the management fee

Age: the number of months since fund inception.


Size: the assets of the fund under management control.
εt: is the random or tracking error.

The hypotheses to be tested are as follows:

H0: α ≤ 0, Fund managers have an inferior or neutral performance

H1: α > 0, Fund managers have a superior performance
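A regression of this form can be estimated by ordinary least squares with statsmodels; the sketch below is illustrative, and the column names excess_nav, excess_mkt, IF, MF, Age and Size are assumptions about how the data might be arranged.

import pandas as pd
import statsmodels.api as sm

def run_style_regression(df: pd.DataFrame):
    # Dependent variable: excess NAV return of the style category.
    y = df["excess_nav"]
    # Independent variables: excess market return, fees, age and size.
    X = sm.add_constant(df[["excess_mkt", "IF", "MF", "Age", "Size"]])
    result = sm.OLS(y, X).fit()
    # The estimated intercept corresponds to α; a positive, significant
    # t-statistic on it supports H1 (superior manager performance).
    return result

# print(run_style_regression(df).summary())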

In terms of the signs of the independent variables, we expect to find positive values for the coefficients of the incentive fee, management fee, size, and age. Incentive and management fees constitute a compensation structure that affects the ability of the manager to outperform the index, so that the fund shows a positive performance. It is well documented in the literature by Fama and French (1993, 1996) that size affects the performance of the fund according to the fee structure. Finally, age is directly related to the fees and performance of the fund.

Table 5 shows the results of the multiple regressions of the 680 hedge funds for the sample period 1998 to 2003. We have regressed the average NAV performance of each style category against the excess market return, incentive fee, management fee, age and size. Excess market return is calculated as the difference between the S&P 500 index and the three-month US Treasury bill. Management fee is defined according to the style category of the hedge fund. Incentive fees are defined as the percentage of annual profits over a high water mark provision or hurdle rate, size is calculated as the fund's net assets under management, and age is the number of months since the inception of the fund.

AAC style categories   α          Rm – Rf    IF         MF        Age        Size
Conservative           2.37       0.02       -0.00      0.07      0.01       0.03
                       (21.01)*   (1.56)     (-0.27)    (1.84)*   (3.28)*    (0.61)
Diversified            2.68       -0.00      0.01       0.024     0.00       -0.03
                       (18.76)*   (-0.15)    (0.68)     (0.22)    (1.58)     (-0.31)
Market defensive       2.42       0.00       -0.01      0.21      0.00       -0.01
                       (23.23)*   (0.27)     (-1.47)    (3.01)*   (4.15)*    (-1.26)
Multi-strategy         2.51       0.01       -0.01      0.10      0.00       -0.01
                       (20.61)*   (2.20)*    (-2.15)*   (1.26)    (2.18)*    (-0.27)
Strategic              2.39       0.01       0.03       0.13      -0.00      -0.04
                       (4.32)*    (0.75)     (1.28)     (0.45)    (-0.04)    (-0.24)

Source: calculated by the author
* represents a t-value that is statistically significant at the 5% significance level
** represents a t-value that is statistically significant at the 1% significance level

According to table 5, a positive and statistically significant α indicates a skilled fund manager whose decisions add value to the fund. On the other hand, negative α values or statistically


insignificant values represent an inferior or neutral performance of the manager. All style categories display an α that is positive and statistically significant at the 5% significance level. Two out of five style categories display a management fee variable that is positive and statistically significant. Three out of five display an age variable that is positive and statistically significant.

The second methodology is a probit model, which measures the relationship between a binary variable, such as the lockup or hurdle rate provision, and a number of other variables. This technique helps us to measure how the binary variable affects the performance measure of our sample for the specified period of time. Specifically, we are going to test the effect of the lockup period, the hurdle rate and the watermark provision upon the performance of the style categories.

Lockup period = a + b1(Rm-Rf) + b2(IF) + b3(MF) + b4(Age) + b5(Size) + εt

Hurdle rate = a + b1(Rm-Rf) + b2(IF) + b3(MF) + b4(Age) + b5(Size) + εt

Watermark provision = a + b1(Rm-Rf) + b2(IF) + b3(MF) + b4(Age) + b5(Size) + εt

Where:

Lockup period, hurdle rate, watermark provision: are the binary variables

Rm: is the market return, proxied by the S&P 500 index; a value-weighted index such as the Credit Suisse First Boston (CSFB) Tremont Hedge Fund Index could also be used

Rf: is the risk-free rate of the 3-month US Treasury bill

IF: is the incentive fee

MF: is the management fee

Age: the number of months since fund inception.

Size: the assets of the fund under management control.
εt: is the random or tracking error.

The hypotheses to be tested are as follows:


H0: Lockup = 0 The performance of the style category is independent of the lockup period.

H1: Lockup = 1 The performance of the style category is dependent on the lockup period.

H0: Hurdle rate = 0 The performance of the style category is independent of the hurdle rate.

H1: Hurdle rate = 1 The performance of the style category is dependent on the hurdle rate.

H0: Watermark provision = 0 The performance of the style category is independent of the watermark provision.

H1: Watermark provision = 1 The performance of the style category is dependent on the watermark provision.
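A probit of this kind can be sketched with statsmodels as follows; the binary column name lockup and the regressor names are assumptions, and the likelihood-ratio statistic reported in the next table corresponds to the fitted model's llr attribute.

import pandas as pd
import statsmodels.api as sm

def run_probit(df: pd.DataFrame, binary_col: str):
    # Binary dependent variable, e.g. 1 if the fund has a lockup provision.
    y = df[binary_col]
    X = sm.add_constant(df[["excess_mkt", "IF", "MF", "Age", "Size"]])
    result = sm.Probit(y, X).fit(disp=False)
    # Likelihood-ratio statistic and its p-value for joint significance.
    print(f"LR statistic = {result.llr:.2f}, Prob(LR stat) = {result.llr_pvalue:.2f}")
    return result

# Hypothetical usage for the lockup binary variable:
# run_probit(df, "lockup")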

Table 6 shows the results of the probit regressions of the 680 hedge funds for the sample period 1998 to 2003. For each style category we have regressed three binary variables, watermark provision (WP), hurdle rate (HR), and lockup period (LP), against the performance of the category, the excess market return, incentive fee, management fee, age and size. Excess market return is calculated as the difference between the S&P 500 index and the three-month US Treasury bill. Management fee is defined according to the style category of the hedge fund. Incentive fees are defined as the percentage of annual profits over a high water mark provision or hurdle rate, size is calculated as the fund's net assets under management, and age is the number of months since the inception of the fund. Coefficients are reported with p-values in parentheses.

AAC style categories   Binary variable   LR statistic (6 df)   Probability (LR stat)   Rm – Rf        IF             MF             Age            Size
Conservative           WP                11.73                 0.07                    -0.05 (0.12)   0.04 (0.14)    0.26 (0.26)    -0.00 (0.92)   0.36 (0.03)*
                       HR                14.13                 0.03*                   0.04 (0.38)    0.08 (0.06)    -0.62 (0.16)   0.01 (0.47)    0.24 (0.24)
                       LP                15.91                 0.01*                   0.04 (0.63)    -0.02 (0.71)   -1.60 (0.01)*  -0.03 (0.26)   -0.09 (0.79)
Diversified            WP                15.75                 0.02*                   0.01 (0.82)    0.09 (0.01)*   0.09 (0.81)    0.01 (0.10)    -0.04 (0.89)
                       HR                9.11                  0.17                    -0.02 (0.63)   0.06 (0.09)    0.16 (0.69)    -0.00 (0.72)   0.53 (0.13)
                       LP                N/A                   N/A                     N/A            N/A            N/A            N/A            N/A
Market defensive       WP                23.09                 0.00*                   0.00 (0.93)    0.14 (0.00)*   -0.19 (0.66)   0.01 (0.33)    0.14 (0.03)*
                       HR                7.56                  0.27                    0.07 (0.01)*   0.07 (0.08)    0.22 (0.63)    -0.00 (0.88)   0.06 (0.43)
                       LP                2.19                  0.90                    0.06 (0.50)    -0.17 (0.49)   0.46 (0.54)    0.01 (0.72)    0.01 (0.45)
Multi-strategy         WP                4.45                  0.62                    0.04 (0.23)    0.00 (0.93)    0.25 (0.52)    0.01 (0.14)    -0.14 (0.24)
                       HR                5.97                  0.43                    0.02 (0.44)    0.02 (0.59)    0.09 (0.80)    0.01 (0.33)    0.08 (0.30)
                       LP                11.97                 0.06                    0.04 (0.28)    -0.03 (0.43)   0.51 (0.20)    -0.00 (0.70)   0.07 (0.41)
Strategic              WP                7.61                  0.27                    0.03 (0.15)    0.01 (0.74)    0.63 (0.08)    0.00 (0.84)    -0.28 (0.21)
                       HR                13.44                 0.04*                   -0.00 (0.96)   0.07 (0.03)*   -0.50 (0.17)   0.01 (0.25)    -0.50 (0.08)
                       LP                6.71                  0.35                    -0.09 (0.25)   -0.14 (0.09)   -1.56 (0.19)   0.00 (0.82)    -0.24 (0.67)

Source: calculated by the author
* represents a p-value that is statistically significant at the 5% significance level

The above table shows that, at the 5% significance level, the p-value is significant for several binary variables, and the sample evidence implies that for those variables we can reject the null hypothesis. Specifically, for the conservative style category the p-values for the hurdle rate and lockup period are significant, as the LR statistics are greater than the critical value of 12.59. Hurdle rate and lockup period therefore significantly affect the performance of the conservative style category. In contrast, the watermark provision does not affect the performance of the conservative category, and this may be due to the fact that most of the observations in our sample are funds that do not offer a watermark provision. The diversified

style category shows a different picture. The watermark provision binary variable is significant, as the test statistic with 6 degrees of freedom is greater than the critical value of 12.59. The sample evidence suggests that we can reject the null hypothesis in favour of the alternative. The lockup binary variable shows an N/A value, as the matrix was near singular. The hurdle rate shows an insignificant value. The third category, the market defensive style category, shows a significant p-value for the watermark provision. The other two binary variables show insignificant p-values.

The multi-strategy style category did not show a significant p-value for any of the three binary variables. Finally, the strategic style category shows a significant p-value of 0.04 for the hurdle rate. The other two binary variables, namely the lockup period and watermark provision, show insignificant p-values. Thus, the sample evidence suggests that performance is dependent only on the hurdle rate.


5. Conclusion

This paper examined the performance persistence of hedge funds in terms of

management fees, hurdle rate, incentive fees, size, age, high watermark provision

and lockup period. The sample under consideration covers the period 1998 to 2003 and includes 680 funds. The first methodology applied was a multiple

regression that takes into consideration a range of independent variables such as

management fees, incentive fees, age and size of the fund.

According to the findings, all style categories display an α that is positive and statistically significant at the 5% significance level. Two out of five style categories display a management fee variable that is positive and statistically significant. Three out of five display an age variable that is positive and statistically significant.

The second methodology is a probit model, which measures the relationship between a binary variable, such as the lockup or hurdle rate provision, and a number of other variables. According to the results, the hurdle rate and lockup period significantly affect the performance of the conservative style category. The watermark provision binary variable for the diversified style category is significant, as the test statistic with 6 degrees of freedom is greater than the critical value of 12.59. The third category, the market defensive style category, shows a significant p-value for the watermark provision.


Finally, the strategic style category shows a significant p-value of 0.04 for the hurdle rate. Thus, the dummy variables applied in the probit binary regression provide a significant explanation of the performance persistence of hedge funds.

References

Jones, A. W. (1949), "Creation of the first hedge fund".

Ackermann, C., R. McEnally and D. Ravenscraft (1999), "The Performance of Hedge Funds: Risk, Return, and Incentives", The Journal of Finance, Vol. 54, No. 3, pp. 833-874.

Agarwal, V. and N. Y. Naik (2004), "Risks and Portfolio Decisions Involving Hedge Funds", Review of Financial Studies, Vol. 17, pp. 63-98.

Brown, S. and W. Goetzmann (1997), "Hedge Funds with Style", NBER Working Paper No. 8173.

Brown, S., W. N. Goetzmann and R. G. Ibbotson (1999), "Offshore Hedge Funds: Survival and Performance", Journal of Business, Vol. 72, No. 1, pp. 91-118.

Carhart, M. (1997), "On Persistence in Mutual Fund Performance", Journal of Finance, Vol. 52, No. 1.

Capocci, D. and G. Hubner (2004), "An Analysis of Hedge Fund Performance", Journal of Empirical Finance, Vol. 11, No. 1, pp. 55-89.

Fama, E. F. and K. R. French (1993), "Common Risk Factors in the Returns on Stocks and Bonds", Journal of Financial Economics, Vol. 33, pp. 3-56.

Fama, E. F. and K. R. French (1996), "Multifactor Explanations of Asset Pricing Anomalies", Journal of Finance, Vol. 51, pp. 55-84.

Fung, W. and D. A. Hsieh (1997), "Performance Characteristics of Hedge Funds and Commodity Funds: Natural versus Spurious Biases", Journal of Financial and Quantitative Analysis, Vol. 35, pp. 291-307.

Lavinio, S. (2000), The Hedge Fund Handbook: A Definitive Guide for Analyzing and Evaluating Alternative Investments, New York: McGraw-Hill.


Appendix 1

The following section presents the style mandates classified by the Alternative Asset Center (AAC) for the sample of 680 hedge funds.

1. Alternative, hedge, and private-equity investments:


The manager allocates its capital to ten investment funds specialized in US distressed

investing. The Fund aims to achieve an attractive risk reward profile and low

correlation to the traditional markets over the medium to long term. This makes it

an ideal complement to a traditional portfolio to attain the desired diversification

effect.

2. Arbitrage

The overall objective of this category is to deliver consistent capital appreciation

and to achieve a superior risk-adjusted performance. The fund manager aims to achieve above-average risk-adjusted returns by investing mainly in a selection of hedge funds specializing in US and Canadian equities. It is expected that the

performance of the Fund will be greater than the benchmark of S&P 500 and less

dependent on the overall direction of those markets than a traditional long-only

manager.

3. Arbitrage - Long/Short Equity

The Fund seeks to deliver consistent long-term capital appreciation in USD terms but with a lower level of volatility.

4. Broad-based but with larger allocation to Equity Hedged Strategies


Distressed Focus invests predominantly in a focused portfolio of distressed fund

managers on a global basis. The desired return and risk characteristics are: Return

target over a cycle: Greater than MSCI World Index; Volatility tolerance: Less

than MSCI World Index; and Loss target. The Fund seeks to produce consistent

absolute returns while limiting risk by investing predominantly in non-directional

managers, i.e. the underlying managers have portfolios of long and short

securities where there is some economic relationship between the long securities

and the short securities. These are typically arbitrage or relative value managers,

seeking to capture a spread between the prices of two securities.

5. Conservative

The primary objective of this category is the preservation of capital, independent

of global equity and debt market conditions. The investment strategy of the Fund

is to invest in multiple alternative strategies to create diversified absolute return

portfolios. Capital is allocated to multiple managers within each strategy. The

Investment Manager invests in multiple arbitrage, relative value and market

neutral strategies in an effort to preserve capital, protect investors from substantial

market fluctuations and provide diversified participation in alternative investment

strategies. The approach is designed to produce non-correlated returns with lower

volatility than traditional asset classes. The trading strategies of the sub-funds

may consist of, but are not limited to, convertible arbitrage, risk/merger arbitrage,

commodity arbitrage, fixed income arbitrage, market neutral long/short equity,

and mortgage arbitrage. The Fund's objective is to achieve a superior risk-

adjusted rate of return by providing investors with an alternative to traditional


investment products through its investment in internally managed strategies. The

Fund's objective is to achieve long-term capital growth by investing either directly

or indirectly through selected funds or investment managers, in a strategically

determined mix of, equity securities, fixed income securities, derivative securities,

currencies and other investment assets with an emphasis on long term growth.

6. Diversified

The objective of this category is to provide long-term growth of principal, while

outperforming the broad equity market indices with significantly less volatility.

The investment objective of the Fund is to achieve significant capital appreciation

by investing up to 100% of its assets in a European style call option on a basket of

hedge funds. The call option will be issued by a major financial institution, and

the Basket will be selected and monitored by the Investment Manager. The Basket

will at all times include at least 18 different hedge funds, diversified among a

minimum of 5 strategies or substrategies, and the maximum position per hedge

fund will be 10%. The hedge funds selected by the Investment Manager (the

'Hedge Funds') generally will employ 'nondirectional' trading strategies, the

results of which generally are not expected to highly correspond with the direction

of stock and bond markets. The Fund's investment objective is to achieve capital

appreciation by investing a substantial portion of its assets among a diversified

group of money managers selected and monitored by the Investment Manager.

The money managers selected by the Investment Manager (the 'Money

Managers') generally will employ 'nondirectional' trading strategies, the results of

which generally are not expected to highly correspond with the direction of stock


and bond markets. Trading strategies which Money Managers will utilise may

include, without limitation, the following: Relative Value/Arbitrage, Hedged

Equity, Short-Biased Equity/Short Only Equity, Event Driven, Macro, Managed

Futures/CTA, and Emerging Markets.

7. Diversified across several strategies

This category invests in a number of different investment funds. A sufficient

number of funds will be selected in order to achieve a satisfactory reduction of the

risk, whereas each fund taken in isolation can present a high level of risk. In this

way the main instrument for reducing risk is diversification.

8. Equity

The Fund's investment objective is to achieve consistent, above average returns

regardless of market conditions whilst seeking a risk profile that is lower and less

volatile than market indices. The Fund will seek to achieve its investment

objective primarily by investing its assets in a diversified portfolio of Underlying

Funds which use traditional, non-traditional or alternative asset management

strategies.

9. Equity Hedge/ Event driven


The Fund's objective is to achieve capital appreciation by investing its assets in a

diversified portfolio of underlying funds which use hedge funds or similar

alternative asset management strategies.

10. Equity/Defensive

The investment objective of the Fund is to achieve significant capital appreciation

over time. The objective of the Fund will be met primarily through the allocation

of its assets to other funds or to separate accounts managed by advisors who

specialize in the trading of Asian equity securities, both long and short.

11. Event Driven

The Fund seeks to produce consistent absolute returns while limiting risk by

investing predominantly in non-directional managers, i.e. the underlying

managers have portfolios of long and short securities where there is some

economic relationship between the long securities and the short securities. These

are typically arbitrage or relative value managers, seeking to capture a spread

between the prices of two securities.

12. Global Macro

The Fund seeks to produce consistent absolute returns while limiting risk by

investing predominantly in non-directional managers, i.e. the underlying

managers have portfolios of long and short securities where there is some


economic relationship between the long securities and the short securities. These

are typically arbitrage or relative value managers, seeking to capture a spread

between the prices of two securities.

13. Global Multi-Manager Fund

The investment objective of the Fund is to achieve significant capital appreciation

over time. The objective of the Fund will be met primarily through the allocation

of its assets to other funds or to separate accounts managed by advisors who

specialize in the trading of European equity securities, both long and short. The

Fund seeks to achieve significant capital appreciation over time. The Fund aims to

generate, over the long term, rates of return above those that can generally be

expected from "traditional" equity investments, while at the same time

maintaining a disproportionately lower level of risk as measured by the volatility

of returns.

14. Hedged Bond

CTAs (commodity trading advisers) are another name for managed futures managers. With the multi-manager approach, investment risks are spread over

different organisations, asset classes and investment strategies. Due to the low

correlation to other hedge fund styles, this category provides an excellent

protection for diversified portfolios in difficult markets for other strategies. The

Fund invests its assets with carefully selected money managers or funds which

base their trading approach on relative value strategies. These funds engage


principally in arbitrage strategies in the global equity and corporate debt securities

markets taking advantage of mispricings between two related and correlated

securities. Typical arbitrage strategies include: convertible bond arbitrage, fixed

income arbitrage, mortgaged-backed arbitrage and derivative arbitrage.

15. Hedged Equity

The investment objective of the Fund is to achieve long-term, risk-adjusted capital

appreciation by investing primarily in a diversified portfolio of other collective

investment undertakings which use non-traditional or alternative asset

management strategies. The Fund's objective is to select and blend multiple

investment managers in order to achieve capital appreciation and asset

diversification. The Fund plans to diversify investments with multiple managers

and strategies in order to reduce market risk and correlation. The Fund's

strategies include but are not limited to long/short equity, arbitrage/relative value

and event driven investments. The fund does not invest in commodities,

currencies, technical trading systems or real estate.

16. Long/Short - Arbitrage

The investment objective of the Fund is to achieve long term capital growth. The

Fund will be managed on a fund of funds basis and the portfolio will consist of

investments in other funds which generally follow long/short investment

strategies. The Fund will aim to achieve its objective by investing primarily in


open-ended investment funds which have as their investment objective investment

in equities or equity-related securities or other financial instruments.

17. Long/Short Equity

The Fund invests solely in specialist arbitrage funds with the aim of generating

moderate absolute returns with low volatility, which are neither correlated nor

significantly dependent on rising or falling trends in world financial markets. The

Fund seeks to achieve over 14% annual net return with low volatility by investing

in a basket of very liquid securities that comprise the S&P 100 shares and to

hedge the basket with out of the money calls and puts. Leverage may be used at

the discretion of the investment manager, according to market conditions.

18. Macro/CTA

The investment objective of the Fund is to generate stable and consistent long-term rates of return and acceptable liquidity, and to eliminate default risk by investing

its assets exclusively in "Blue Chip"-Hedge Funds controlled by prime financial

institutions.

19. Market Defensive

The Fund was created to utilise an array of alternative investment strategies to

provide investors with consistent returns, capital preservation and capital

appreciation over the long-term. The Fund's investment objective is to achieve


consistent, above average returns regardless of market conditions whilst seeking a

risk profile that is lower and less volatile than market indices. The Fund will seek

to achieve its investment objective primarily by investing its assets in a diversified

portfolio of Underlying Funds which use traditional, non-traditional or alternative

asset management strategies.

20. Market Neutral

This category is designed to deliver consistent absolute capital growth

independent of stock and bond performance. It aims to provide a stable and

consistent return profile that has little or no correlation to either equity or bond

market movements, and to produce a consistent return of around 10 to 12% per

annum in USD terms. This targeted return is regardless of market conditions. The

Fund seeks to keep annualized volatility below 5%, while limiting the risk of any

losing quarters.

21. Market Neutral Diversified

The Fund seeks to achieve long-term capital appreciation. Current income is a

secondary objective. The Fund's goal is an average annual return of 15% to 20%.

The Fund including its predecessor has had average annual returns of 20% for 10

years.

22. Multi-strategy


The Fund's objective is to target 8% to 12% annualised performance over a

business cycle with capital preservation and volatility lower than global markets.

The objective of the Multi Strategy Fund is to achieve capital appreciation by

investing in broadly diversified and professionally managed investment funds and

limited partnerships. The Fund may also, to a limited degree, invest directly in

securities, although this facility would only be used for hedging purposes.

23. Primarily Arbitrage Strategies

The Fund's investment objective is to achieve consistent, above average returns

regardless of market conditions whilst seeking a risk profile that is lower and less

volatile than market indices. The Fund will seek to achieve its investment

objective primarily by investing its assets in a diversified portfolio of Underlying

Funds which use traditional, non-traditional or alternative asset management

strategies.

24. Strategic


This category employs a multi-advisor hedge fund portfolio to achieve an

absolute return on average which is designed to be largely independent of

Emerging Market economic conditions and movements in the major world market

indices. The Funds’ assets will be dynamically invested in various alternative

investment strategies such as Merger Arbitrage, Equity Short Bias, Mortgage

Arbitrage, Statistical Arbitrage, Distressed Strategy, Structural Arbitrage, Multi-

Strategy, Developed Country Equity Hedged, Market Neutral Strategies,

Currency Trading, Convertible Arbitrage, Acquisition Finance, Global Macro,

CTA Managers, Market Timing and Special Situations.

25. Trading

The Fund aims to achieve absolute returns from credit strategies. The Fund aims

to outperform USD 3-month LIBOR by 200-300 basis points with a low degree of

volatility.

26. Trading / Equity / Arbitrage

The Fund aims to provide absolute returns through investing with European

hedged managers with a growth focus.

27. Trading Strategies

Funds of hedge funds invest in a number of hedge funds spreading the risk across

different funds, with the objective of creating a diversified portfolio.

I have added a research article to help you understand the application of time series data in relation to a binary logit regression. Thanks for your participation and your attention!


Comparison of the Census X12 and TRAMO/SEATS additive ARIMA(p,d,q) seasonal adjustment models applied to the Credit Suisse asset management income and Templeton global income closed-end funds. Application of a binary logit regression.

Abstract

In this article, we have applied two methodologies. The first is the X12 ARIMA(p,d,q) seasonal adjustment method developed by the US Bureau of the Census. The second, developed by Gomez and Maravall (1996), is known as the TRAMO/SEATS seasonal adjustment method. We have used the additive decomposition seasonal adjustment method. We have found that the natural logarithmic monthly market price returns of all closed-end funds are stationary series. There is serial correlation in the natural logarithmic monthly market price returns of the Credit Suisse asset management income, (LNMPCS), and Templeton global income, (LNMPTGI) closed-end funds before applying the ARIMA model. The natural logarithmic monthly market price returns of all closed-end funds are not normally distributed. By applying the Census X12 ARIMA(1,1,1), the best-fit model is that of the Templeton global income, (LNMPTGI) closed-end fund. It has the highest log likelihood value of 259.2763, in comparison to 229.33 for the Credit Suisse asset management income, (LNMPCS) closed-end fund. In addition, it has the smallest error term in terms of the various criteria. Specifically, the AIC criterion was -504.55, the AICC was -503.63, the Hannan-Quinn criterion was -496.40 and the BIC criterion was -484.48. In contrast, the Credit Suisse asset management income, (LNMPCS) closed-end fund has an AIC value of -444.6547, an AICC of -443.7366, a Hannan-Quinn criterion of -436.50 and a BIC criterion of -424.58. By applying the forecast of the TRAMO/SEATS methodology to the Templeton global income, (LNMPTGI) and Credit Suisse asset management income, (LNMPCS) closed-end funds, we have found that there are minor fluctuations of the share prices of the closed-end funds after adjusting for the seasonal variation. The arbitrageurs are able to craft their investment strategy concerning holding, buying or selling shares of the closed-end funds. Management gender did not play an important role in the binary logit regression in terms of performance management of positive logarithmic returns. The whole dataset is from 31/01/1990 to 31/12/2001. The total observations are 144 and the logarithmic monthly market return observations are 143. The data was obtained from the Thomson Financial Investment View database.

Keywords: Autoregressive integrated moving average, (ARIMA) model, arbitrageurs and noise traders, natural logarithmic market price returns, X12 ARIMA(p,d,q) seasonal adjustment method, TRAMO/SEATS seasonal adjustment method.

Introduction

In the mutual fund industry, and specifically in closed-end funds, there is an interaction between irrational investors, or noise traders, and rational investors, or arbitrageurs. Noise created by irrational traders diverges prices from fundamental


values. They overestimate the expected return of the share of the closed-end fund in some periods and underestimate it in others. The second category includes rational investors, who make rational decisions in accordance with their preferences. Their investment decisions are based on rational expectations about assets' future returns, and these investors act within the efficient market hypothesis.

The purpose of this article is to apply and compare the Census X12 and TRAMO/SEATS ARIMA seasonal adjustment methods. TRAMO stands for time series regression with ARIMA noise, missing observations and outliers. SEATS stands for signal extraction in ARIMA time series. The aim of using both methodologies is to forecast the trend, remove the noise and adjust the seasonal and irregular components of the ARIMA model of the Credit Suisse asset management income and Templeton global income closed-end funds. The autoregressive integrated moving average model that we have used is ARIMA(1,1,1), of order AR(1), I(1) and MA(1), to test the natural logarithmic monthly market returns of both closed-end funds. The software that we have used is EViews 6. The aim of using both methodologies is twofold. Firstly, we aim to adjust and eliminate the seasonal patterns of the closed-end funds by decomposing the time series, using moving averages, into four components, namely trend (T), irregular (I), cyclical (C) and seasonal (S) components. To determine the order of the additive ARIMA model, we have checked the autocorrelation, the partial correlation, the Q-statistic and the probability of the correlogram of the time series of the closed-end funds. Because the autocorrelation p-values were below the 5% significance level, we have used first-order autoregressive and moving average terms and an integrated term of first order.

Secondly, we aim to compare and evaluate the seasonal adjustment of the prices of the closed-end funds after removing white noise from the residuals. The statistical tests that have been used to select the best-fit model are the log likelihood (L), the Akaike information criterion (AIC), the Hurvich and Tsai criterion (AICC), the Hannan and Quinn criterion and the Bayesian information criterion (BIC). Then, we plot in a graph the seasonal patterns of both time series of the closed-end funds. Based on the Hodrick-Prescott filter, after removing white noise from the residuals, we test whether or not the seasonal adjustment of the prices of the closed-end funds is significant. This is very helpful for the arbitrageurs to craft their investment strategy concerning holding, buying or selling shares of the closed-end funds. The Hodrick-Prescott filter produces a smooth estimate of the trend of a time series. The Hodrick-Prescott value of λ for monthly data is 14,400. The time series that we have used are the monthly natural logarithmic market price returns of the Credit Suisse asset management income and Templeton global income closed-end funds.
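The Hodrick-Prescott decomposition described above can be reproduced with statsmodels; this is a minimal sketch assuming the monthly log-return series is held in a hypothetical pandas Series named lnmp.

import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

def hp_trend(lnmp: pd.Series):
    # lamb=14400 is the standard Hodrick-Prescott smoothing value for monthly data.
    cycle, trend = hpfilter(lnmp.dropna(), lamb=14400)
    # trend is the smooth component; cycle is the deviation from trend.
    return cycle, trend

# Hypothetical usage:
# cycle, trend = hp_trend(lnmp)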

The rest of the paper is organized as follows. Section 1 describes the methodological issues and data explanations. Section 2 shows the results of statistical and econometrical tests and Section 3 summarizes and concludes.

1. Methodological issues and data explanations.


In this article, we are going to test the natural logarithmic monthly market price returns in order to select the best-fit model. In addition, we aim to compare the forecasted with the actual expectations that arise from the interaction of arbitrageurs and noise traders after adjusting for and eliminating seasonality. The methodology is based on the application of the autoregressive integrated moving average (ARIMA) model. Applications of such models have been developed by different academics, such as: Andrews, (1991), Bhargava, (1986), Box and Jenkins, (1976), Davidson and MacKinnon, (1993), Dickey and Fuller, (1979), Elliott, Rothenberg and Stock, (1996), Fisher, (1932), Greene, (1997), Gomez and Maravall, (1996), Hamilton, (1994a), Hayashi, (2000), Hodrick and Prescott, (1997), Johnston et al, (1997), Kwiatkowski, Phillips, Schmidt and Shin, (1992), Maddala and Wu, (1999), Newey and West, (1994), Ng and Perron, (2001), Phillips and Perron, (1988), Rao and Griliches, (1969), Said and Dickey, (1984).

In this article, we are going to apply an ARIMA(p,d,q) model of order AR(p), I(d) and MA(q). According to the Box-Jenkins notation, p is the nonseasonal AR order, d is the order of nonseasonal differences and q is the nonseasonal MA order. A first-order integrated parameter indicates that the model is constructed on the first difference of the time series.

Specifically, the order of the model is AR(1), I(1) and MA(1). The aim is to eliminate seasonality and estimate the predictability of the market price returns of the Credit Suisse asset management income and Templeton global income closed-end funds. The mathematical equation of the ARMA(p,q) model is as follows:

$y_t = a + \sum_{i=1}^{p} \rho_i\, y_{t-i} + \varepsilon_t + \sum_{i=1}^{q} \omega_i\, \varepsilon_{t-i}$   (1)

Equation (1) is derived from the addition of equation (2) and equation (3).

The mathematical equation of an AR(p) model is as follows:

$y_t = \alpha + \rho_1 y_{t-1} + \rho_2 y_{t-2} + \cdots + \rho_p y_{t-p} + \varepsilon_t$, or

$y_t = a + \sum_{i=1}^{p} \rho_i\, y_{t-i} + \varepsilon_t$   (2)

Where: $y_t$ is the dependent variable, in our case the logarithmic monthly returns of the market prices of the closed-end funds; $y_{t-i}$ is the lagged dependent variable of order $i$; $\alpha$ is a constant; $\rho_i$ are coefficients; and $\varepsilon_t$ is the error term.

The mathematical equation of an MA(q) model is as follows:


$y_t = \varepsilon_t + \omega_1 \varepsilon_{t-1} + \omega_2 \varepsilon_{t-2} + \cdots + \omega_q \varepsilon_{t-q}$, or

$y_t = \varepsilon_t + \sum_{i=1}^{q} \omega_i\, \varepsilon_{t-i}$   (3)

Where: $y_t$ is the logarithmic monthly returns of the market prices of the closed-end funds; $\omega_i$ are coefficients; $\varepsilon_t$ is the error term; and $\varepsilon_{t-i}$ is the lagged error term of order $i$.

The general mathematical equation of the ARIMA(p,d,q) model is as follows:

$\left(1 - \sum_{k=1}^{p} a_k L^k\right)(1-L)^d\, y_t = \left(1 + \sum_{k=1}^{q} \omega_k L^k\right)\varepsilon_t$   (4)

Where: $(1 - \sum_{k=1}^{p} a_k L^k)$ is the autoregressive term; $(1-L)^d\, y_t$ is the integrated term; $(1 + \sum_{k=1}^{q} \omega_k L^k)\,\varepsilon_t$ is the moving average term; $L$ is the lag operator; $a_k$ and $\omega_k$ are the coefficients; $\varepsilon_t$ is the error term; $d$ is the difference order; and $y_t$ is the variable under consideration, in our case the logarithmic monthly market price returns of the closed-end funds.
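A model of this order can be fitted with statsmodels, which also reports the log likelihood, AIC and BIC used later for model selection; a minimal sketch, again assuming a hypothetical log-return Series lnmp:

import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

def fit_arima(lnmp: pd.Series):
    # ARIMA(1,1,1): AR(1) and MA(1) terms applied to the first-differenced series.
    model = ARIMA(lnmp.dropna(), order=(1, 1, 1))
    result = model.fit()
    # Selection criteria comparable to those reported in the tables below.
    print(f"log L = {result.llf:.2f}, AIC = {result.aic:.2f}, BIC = {result.bic:.2f}")
    return result

# result = fit_arima(lnmp)
# twelve_month_forecast = result.forecast(steps=12)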

The discount / premium of a closed-end fund is calculated as the difference between the market price for month t and the market price for month t-1, divided by the market price for month t-1:

$\text{Discount/premium} = \dfrac{\text{Market Price}_{t} - \text{Market Price}_{t-1}}{\text{Market Price}_{t-1}}$   (5)

The logarithmic formula is as follows:

$R_t = \ln(P_t / P_{t-1})$   (6)

Where: $R_t$ is the monthly return for month t, $P_t$ is the closing price for month t, and $P_{t-1}$ is the closing price lagged one period for month t-1.
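Equations (5) and (6) translate directly into pandas; a minimal sketch, assuming a hypothetical Series named prices holding the monthly closing prices:

import numpy as np
import pandas as pd

def discount_premium(prices: pd.Series) -> pd.Series:
    # Equation (5): simple percentage change of the market price.
    return (prices - prices.shift(1)) / prices.shift(1)

def log_returns(prices: pd.Series) -> pd.Series:
    # Equation (6): natural logarithmic monthly market price returns.
    return np.log(prices / prices.shift(1))

# 144 monthly prices yield 143 logarithmic return observations, as in the text.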

The hypotheses that have been formulated are as follows:

H0: Forecasting and the seasonal adjustment of the prices of the closed-end funds after removing white noise from the residuals through Hodrick-Prescott Filter is significant. The arbitrageurs are able to craft their investment strategy concerning holding, buying or selling shares of the closed-end funds.

H1: Forecasting and the seasonal adjustment of the prices of the closed-end funds after removing white noise from the residuals through Hodrick-Prescott Filter is not significant. The arbitrageurs are not able to craft their investment strategy concerning holding, buying or selling shares of the closed-end funds.

The data that we have used are monthly, from 31/01/1990 to 31/12/2001. The total observations are 144 and the logarithmic monthly market return observations are 143. The data was obtained from the Thomson Financial Investment View database. We have checked for stationarity of the monthly market price returns by applying an ADF test. We have also checked the results of the correlogram and the normality test of the time series.

Figure 1 shows the fluctuations of the monthly percentage changes of the market price of the Credit Suisse asset management income closed-end fund for the period from 31/01/1990 to 31/12/2001. The changes represent the discount / premium of the fund.

[Figure 1: Discount / Premium of Credit Suisse asset management. Line chart of percentage returns (y-axis, -15 to 30) against date (x-axis, Feb-90 to Feb-01).]

Source: Author’s calculation based on Excel software. Data were obtained from Thomson Financial Investment View database.


According to Figure 1, there were large monthly percentage fluctuations in the market price of the Credit Suisse asset management income closed-end fund. Specifically, on 31/3/1990, the fund traded at a discount of -6.45%. On 31/10/1990, the fund traded at a premium of 4.08%. On 30/11/1993, the discount was -9.67%, and on 30/09/1996, the fund traded at a discount of -7.29%. On 31/01/2001, the fund recorded its highest premium, of 23.64%.

2. Statistical and econometrical tests.

Table 1 shows the ADF test of the natural logarithmic monthly market price returns of the Credit Suisse asset management income closed-end fund, (LNMPCS). The total observations are 144 and the logarithmic monthly market price return observations are 143. The data was obtained from the Thomson Financial Investment View database.

Null Hypothesis: LNMPCS has a unit root
Exogenous: Constant
Lag Length: 0 (Automatic based on SIC, MAXLAG=13)

                                            t-Statistic   Prob.*
Augmented Dickey-Fuller test statistic      -11.63889     0.0000
Test critical values:   1% level            -3.476805
                        5% level            -2.881830
                        10% level           -2.577668

*MacKinnon (1996) one-sided p-values.

Augmented Dickey-Fuller Test Equation
Dependent Variable: D(LNMPCS)
Method: Least Squares
Date: 10/26/13   Time: 14:42
Sample (adjusted): 1990M03 2001M12
Included observations: 142 after adjustments

Variable      Coefficient   Std. Error   t-Statistic   Prob.
LNMPCS(-1)    -1.019636     0.087606     -11.63889     0.0000
C             -0.003161     0.003619     -0.873409     0.3839

R-squared            0.491766     Mean dependent var      -0.000764
Adjusted R-squared   0.488136     S.D. dependent var       0.060176
S.E. of regression   0.043053     Akaike info criterion   -3.438791
Sum squared resid    0.259497     Schwarz criterion       -3.397160
Log likelihood       246.1542     Hannan-Quinn criter.    -3.421874
F-statistic          135.4637     Durbin-Watson stat       1.933443
Prob(F-statistic)    0.000000

Source: Author’s calculation based on EViews 6 software.

For a level of significance of one per cent, the critical value of the t-statistic from the Dickey-Fuller table is -3.48. According to Table 1 and the sample evidence, we can reject the null hypothesis, namely the existence of a unit root, at the one, five and ten per cent significance levels. The ADF test statistic is -11.64, which is smaller than the critical values (-3.48, -2.88, -2.58). In other words, the natural logarithmic monthly returns of the Credit Suisse asset management income closed-end fund form a stationary series.

Table 2 shows the ADF test of the natural logarithmic monthly market price returns of the Templeton global income, (LNMPTGI) closed-end fund. The total observations are 144 and the logarithmic monthly market price return observations are 143. The data was obtained from the Thomson Financial Investment View database.

Null Hypothesis: LNMPTGI has a unit root
Exogenous: Constant
Lag Length: 0 (Automatic based on SIC, MAXLAG=13)

                                            t-Statistic   Prob.*
Augmented Dickey-Fuller test statistic      -13.31403     0.0000
Test critical values:   1% level            -3.476805
                        5% level            -2.881830
                        10% level           -2.577668

*MacKinnon (1996) one-sided p-values.

Augmented Dickey-Fuller Test Equation
Dependent Variable: D(LNMPTGI)
Method: Least Squares
Date: 10/26/13   Time: 14:44
Sample (adjusted): 1990M03 2001M12
Included observations: 142 after adjustments

Variable      Coefficient   Std. Error   t-Statistic   Prob.
LNMPTGI(-1)   -1.114683     0.083722     -13.31403     0.0000
C             -0.002271     0.002730     -0.831818     0.4069

R-squared            0.558726     Mean dependent var       5.74E-05
Adjusted R-squared   0.555574     S.D. dependent var       0.048701
S.E. of regression   0.032467     Akaike info criterion   -4.003206
Sum squared resid    0.147574     Schwarz criterion       -3.961574
Log likelihood       286.2276     Hannan-Quinn criter.    -3.986289
F-statistic          177.2635     Durbin-Watson stat       2.024262
Prob(F-statistic)    0.000000

Source: Author’s calculation based on EViews 6 software.

For a level of significance of one per cent, the critical value of the t-statistic from the Dickey-Fuller table is -3.48. According to Table 2 and the sample evidence, we can reject the null hypothesis, namely the existence of a unit root, at the one, five and ten per cent significance levels. The ADF test statistic is -13.31, which is smaller than the critical values (-3.48, -2.88, -2.58). In other words, the natural logarithmic monthly returns of the Templeton global income closed-end fund form a stationary series.

Table 3 displays the results of the correlogram of the natural logarithmic monthly market price returns of Credit Suisse asset management income, (LNMPCS), and Templeton global income, (LNMPTGI) closed-end funds. The whole dataset is from 31/01/1990 to 31/12/2001. The total observations are 144 and the logarithmic monthly market price returns observations are 143. The data was obtained from Thomson Financial Investment View database.

LNMPCS

Lag    AC      PAC     Q-Stat   Prob
1     -0.394  -0.394   22.527   0.000
2     -0.159  -0.372   26.205   0.000
3      0.033  -0.273   26.370   0.000
4      0.098  -0.105   27.783   0.000
5     -0.203  -0.320   33.911   0.000
6      0.214  -0.036   40.810   0.000
7     -0.007   0.002   40.818   0.000
8     -0.100  -0.044   42.344   0.000
9      0.018   0.008   42.394   0.000
10     0.080   0.042   43.385   0.000
11    -0.131  -0.051   46.044   0.000
12     0.065  -0.011   46.709   0.000

Source: Author's calculation based on EViews 6 software.
Significant p-value at the 5% significance level.

LNMPTGI

Lag    AC      PAC     Q-Stat   Prob
1     -0.512  -0.512   38.080   0.000
2     -0.065  -0.444   38.695   0.000
3      0.168  -0.182   42.823   0.000
4     -0.173  -0.291   47.269   0.000
5      0.148  -0.104   50.557   0.000
6     -0.055  -0.102   51.019   0.000
7     -0.014  -0.048   51.051   0.000
8      0.043  -0.023   51.328   0.000
9     -0.071  -0.066   52.108   0.000
10     0.085   0.014   53.233   0.000
11    -0.085  -0.079   54.365   0.000
12     0.121   0.113   56.652   0.000

Source: Author's calculation based on EViews 6 software.
Significant p-value at the 5% significance level.

The hypotheses that have been formulated and tested are as follows:

H0: The time series of the natural logarithmic monthly market price returns of the Credit Suisse asset management income, (LNMPCS), and Templeton global income, (LNMPTGI) closed-end funds have no serial correlation.

H1: The time series of the natural logarithmic monthly market price returns of the Credit Suisse asset management income, (LNMPCS), and Templeton global income, (LNMPTGI) closed-end funds have serial correlation.

According to Table 3, the Q-statistics and the associated p-values are statistically significant, which is a sign that there is serial correlation in the natural logarithmic monthly market price returns of the Credit Suisse asset management income, (LNMPCS), and Templeton global income, (LNMPTGI) closed-end funds.
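The Q-statistics in Table 3 are Ljung-Box statistics, which can be computed with statsmodels; a minimal sketch, again assuming the hypothetical log-return Series lnmp:

import pandas as pd
from statsmodels.stats.diagnostic import acorr_ljungbox

def correlogram(lnmp: pd.Series) -> pd.DataFrame:
    # Ljung-Box Q-statistics and p-values for lags 1 to 12.
    return acorr_ljungbox(lnmp.dropna(), lags=12, return_df=True)

# p-values below 0.05 reject the null hypothesis of no serial correlation.
# print(correlogram(lnmp))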

Table 4 displays Jarque - Bera normality test of the natural logarithmic monthly market price returns of the Credit Suisse asset management income, (LNMPCS), and Templeton global income, (LNMPTGI) closed-end funds. The whole dataset is from 31/01/1990 to 31/12/2001. The total observations are 144 and the logarithmic monthly market price returns observations are 143. The data was obtained from Thomson Financial Investment View database.

                LNMPCS      LNMPTGI
Mean            -0.003315   -0.002417
Median           0.000000    0.000000
Maximum          0.212175    0.078155
Minimum         -0.140286   -0.144342
Std. Dev.        0.042823    0.032779
Skewness         0.127874   -0.386170
Kurtosis         7.617280    5.431913
Jarque-Bera      127.4171    38.79298
Probability      0.000000    0.000000
Sum             -0.474012   -0.345625
Sum Sq. Dev.     0.260404    0.152570
Observations     143         143

Source: Author's calculation based on EViews 6 software.
Significant p-value at the 5% significance level.

We state the hypotheses as follows:

H0: The natural logarithmic monthly market price returns of the Credit Suisse asset management income, (LNMPCS), and Templeton global income, (LNMPTGI) closed-end funds are normally distributed.

H1: The natural logarithmic monthly market price returns of the Credit Suisse asset management income, (LNMPCS), and Templeton global income, (LNMPTGI) closed-end funds are not normally distributed.
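Before turning to the results, a minimal sketch of how the Jarque-Bera statistic could be computed with statsmodels; the series name lnmp is again a hypothetical assumption:

import pandas as pd
from statsmodels.stats.stattools import jarque_bera

def normality_report(lnmp: pd.Series) -> None:
    # Joint test that the sample skewness equals 0 and the sample kurtosis equals 3.
    jb, pvalue, skew, kurtosis = jarque_bera(lnmp.dropna())
    print(f"JB = {jb:.2f}, p-value = {pvalue:.4f}, "
          f"skew = {skew:.2f}, kurtosis = {kurtosis:.2f}")

# A p-value below 0.05 rejects normality, as found for both funds in Table 4.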

According to Table 4, the χ² statistics of the Jarque-Bera test for both closed-end funds are highly significant at the 5% significance level. For example, the natural logarithmic monthly market price returns of the LNMPCS show a χ² statistic of 127.42, which is highly significant, as the p-value of 0.0000 is less than the 5% significance level. The joint null hypothesis that the sample skewness equals 0 and the sample kurtosis equals 3 is rejected. Thus, we can reject H0 of normality. The distribution of the various logarithmic market price returns shows excess kurtosis; it is leptokurtic and slightly positively or negatively skewed. For example, the kurtosis of the logarithmic monthly market price returns of LNMPTGI is 5.43, which is greater than 3.

Table 5 displays the Census X12 ARIMA(1,1,1) seasonal adjustment method, assuming stability, of the natural logarithmic monthly market price returns of the Credit Suisse asset management income, (LNMPCS) closed-end fund. The whole dataset is from 31/01/1990 to 31/12/2001. The total observations are 144 and the logarithmic monthly market price return observations are 143. The data was obtained from the Thomson Financial Investment View database.

                 Sum of Squares   Degrees of Freedom   Mean Square   F-value
Between months   0.0523           11                   0.00475       3.330**
Residual         0.1869           131                  0.00143
Total            0.2392           142

** Seasonality at the 0.1 per cent level
Source: Author's calculation based on EViews 6 software.

According to Table 5, the F-value of the Credit Suisse asset management income, (LNMPCS) closed-end fund between months is 3.330 and shows evidence of seasonality at the 0.1 per cent level.


Table 6 shows the Census X12 ARIMA(1,1,1) test for the presence of residual seasonality in the natural logarithmic monthly market price returns of the Credit Suisse asset management income, (LNMPCS) closed-end fund. The whole dataset is from 31/01/1990 to 31/12/2001. The total observations are 144 and the logarithmic monthly market price return observations are 143. The data was obtained from the Thomson Financial Investment View database.

No evidence of residual seasonality in the entire series at the 1 per cent level. F = 0.26
No evidence of residual seasonality in the last 3 years at the 1 per cent level. F = 0.74
No evidence of residual seasonality in the last 3 years at the 5 per cent level.
Source: Author's calculation based on EViews 6 software.

According to Table 6, there is no evidence of residual seasonality in the entire series of the Credit Suisse asset management income, (LNMPCS) closed-end fund at the 1 per cent level, or in the last 3 years at the 1 and 5 per cent levels.
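statsmodels can drive the Census Bureau's seasonal adjustment program directly, although it calls the newer X-13ARIMA-SEATS binary, which must be installed separately; the sketch below is therefore only illustrative under that assumption.

from statsmodels.tsa.x13 import x13_arima_analysis

def seasonal_adjust(lnmp):
    # Requires the Census Bureau's X-13ARIMA-SEATS executable on the PATH
    # (or passed via the x12path argument); lnmp is assumed to be a pandas
    # Series with a monthly DatetimeIndex.
    res = x13_arima_analysis(lnmp.dropna())
    # res.seasadj is the seasonally adjusted series; res.trend and
    # res.irregular are the remaining decomposition components.
    return res

# res = seasonal_adjust(lnmp)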

Table 7 shows the Census X12 ARIMA(1,1,1) sample autocorrelations of the residuals of the natural logarithmic monthly market price returns of the Credit Suisse asset management income, (LNMPCS) closed-end fund. The whole dataset is from 31/01/1990 to 31/12/2001. The total observations are 144 and the logarithmic monthly market price return observations are 143. The data was obtained from the Thomson Financial Investment View database.

Lag    1      2      3      4      5      6      7      8      9      10     11     12
ACF   -0.05  -0.07   0.01  -0.03   0.02   0.06   0.03  -0.09  -0.04  -0.07  -0.04  -0.12
Q      0.34   0.93   0.95   1.09   1.16   1.67   1.76   3.02   3.23   3.98   4.17   6.18
P      0.000  0.000  0.000  0.000  0.000  0.196  0.414  0.389  0.520  0.552  0.654  0.519

Lag    13     14     15     16     17     18     19     20     21     22     23     24
ACF    0.14   0.02   0.02   0.06  -0.03  -0.04  -0.02   0.01   0.09   0.07  -0.07  -0.11
Q      9.07   9.12   9.19   9.82   9.97   10.18  10.24  10.26  11.50  12.29  13.11  15.14
P      0.336  0.426  0.514  0.547  0.619  0.679  0.745  0.803  0.778  0.782  0.785  0.714

Lag    25     26     27     28     29     30     31     32     33     34     35     36
ACF   -0.03  -0.07   0.09  -0.01  -0.05  -0.01   0.03   0.02  -0.07   0.04   0.10  -0.04
Q      15.27  16.09  17.40  17.41  17.77  17.78  17.96  18.04  18.81  19.18  20.90  21.15
P      0.761  0.764  0.741  0.788  0.814  0.851  0.877  0.902  0.904  0.917  0.891  0.908

Source: Author's calculation based on EViews 6 software.

The hypotheses that have been formulated and tested are as follows:

H0: The time series of the natural logarithmic monthly market price returns of the Credit Suisse asset management income, (LNMPCS) closed-end fund has no serial correlation.

H1: The time series of the natural logarithmic monthly market price returns of the Credit Suisse asset management income, (LNMPCS), closed-end fund has serial correlation.

According to Table 7, the Q-statistics and the associated p-values are not statistically significant, which is a sign that there is no serial correlation in the residuals of the natural logarithmic monthly market price returns of the Credit Suisse asset management income, (LNMPCS) closed-end fund.

Table 8 shows the Census X12 ARIMA(1,1,1) forecast of the natural logarithmic monthly market price returns of the Credit Suisse asset management income, (LNMPCS) closed-end fund. The whole dataset is from 31/01/1990 to 31/12/2001. The total observations are 144 and the logarithmic monthly market price return observations are 143. The data was obtained from the Thomson Financial Investment View database.

Date        Forecast    Error
2002.Jan     0.01739    0.036649
2002.Feb    -0.02294    0.036477
2002.Mar    -0.02614    0.036539
2002.Apr    -0.02344    0.036515
2002.May    -0.01015    0.036571
2002.Jun    -0.02867    0.036546
2002.Jul    -0.01322    0.036602
2002.Aug    -0.04411    0.036578
2002.Sep    -0.03140    0.036633
2002.Oct    -0.03722    0.036609
2002.Nov    -0.00538    0.036665
2002.Dec    -0.04846    0.036640

Source: Author's calculation based on EViews 6 software.

Table 8 shows the 12-month forecast for 2002 together with the associated ARIMA error. For example, in February 2002, the forecast value for the Credit Suisse asset management income closed-end fund was -0.02294 and the error was 0.036477. In November 2002, the forecast value was -0.00538 and the error was 0.036665. The forecast took place after adjusting for the seasonal variation and removing the noise from the ARIMA model. The arbitrageurs are able to craft their investment strategy concerning holding, buying or selling shares of the closed-end funds.

Table 9 shows the Census X12 ARIMA(1,1,1) average absolute percentage error of the within-sample forecasts of the Credit Suisse asset management income, (LNMPCS) closed-end fund. The whole dataset is from 31/01/1990 to 31/12/2001. The total observations are 144 and the logarithmic monthly market price return observations are 143. The data was obtained from the Thomson Financial Investment View database.

Last year          82.26%
Last-1 year        97.14%
Last-2 year        167.16%
Last three years   115.52%

Source: Author's calculation based on EViews 6 software.

According to Table 9, there is variation in the average absolute percentage error of the within-sample forecasts of the Credit Suisse asset management income, (LNMPCS) closed-end fund. For example, it was 97.14% for the last-1 year and 115.52% for the last three years.

Table 10 shows the Census X12 ARIMA(1,1,1) model likelihood statistics of the Credit Suisse asset management income, (LNMPCS) closed-end fund. The whole dataset is from 31/01/1990 to 31/12/2001. The total observations are 144 and the logarithmic monthly market price return observations are 143. The data was obtained from the Thomson Financial Investment View database.

ARIMA Model: (1 1 1)(0 1 1)
Nonseasonal differences: 1
Seasonal differences: 1

Parameter                Estimate    Standard Error
---------------------------------------------------
Nonseasonal AR, Lag 1     -0.9275        0.00001
Nonseasonal MA, Lag 1     -0.0259        0.04195
Variance               0.13178E-02

Likelihood Statistics
---------------------------------------------------
Effective number of observations (nefobs)        130
Number of parameters estimated (np)                7
Log likelihood (L)                          229.3273
AIC                                        -444.6547
AICC (F-corrected-AIC)                     -443.7366
Hannan-Quinn                               -436.4984
BIC                                        -424.5819

Source: Author’s calculation based on EViews 6 software.

According to Table 10, the estimate of the nonseasonal AR lag 1 of the Credit Suisse asset management income, (LNMPCS) closed-end fund was -0.9275 and the nonseasonal MA lag 1 was -0.0259. The value of the log likelihood was 229.33. The AIC was -444.6547 and the AICC was -443.7366. The Hannan-Quinn criterion was -436.50 and the BIC criterion was -424.58.

Table 11 displays the Census X 12 ARIMA(1,1,1) seasonal adjustment method assuming stability of the natural logarithmic monthly market price returns of the Templeton global income, (LNMPTGI) closed-end fund. The whole dataset is from 31/01/1990 to 31/12/2001. The total observations are 144 and the logarithmic monthly market price returns observations are 143. The data was obtained from Thomson Financial Investment View database.

                  Sum of Squares   Degrees of Freedom   Mean Square   F-value
Between months        0.0197               11             0.00179      2.411
Residual              0.0971              131             0.00074
Total                 0.1168              142

No evidence of seasonality assuming stability at the 0.1 per cent level.

Source: Author’s calculation based on EViews 6 software.

According to Table 11, the F-value between months of the Templeton global income, (LNMPTGI) closed-end fund is 2.411 and shows no evidence of seasonality assuming stability at the 0.1 per cent level.
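
The between-months test above is, in spirit, a one-way analysis of variance of the returns grouped by calendar month. The sketch below reproduces that idea in Python with scipy; the exact X 12 stable-seasonality test differs in detail, and the data here are simulated placeholders.

import numpy as np
import pandas as pd
from scipy.stats import f_oneway

# Hypothetical placeholder for the 143 monthly log returns.
rng = np.random.default_rng(2)
y = pd.Series(rng.normal(0.0, 0.03, 143),
              index=pd.date_range("1990-02-28", periods=143, freq="M"))

# One-way ANOVA of the returns classified by calendar month.
groups = [g.values for _, g in y.groupby(y.index.month)]
f_stat, p_value = f_oneway(*groups)
print(f_stat, p_value)  # compare with the between-months F-value in Table 11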

Table 12 shows the Census X 12 ARIMA(1,1,1) seasonal adjustment method moving seasonality test of the natural logarithmic monthly market price returns of the Templeton global income, (LNMPTGI) closed-end fund. The whole dataset is from 31/01/1990 to 31/12/2001. The total observations are 144 and the logarithmic monthly market price returns observations are 143. The data was obtained from Thomson Financial Investment View database.

                 Sum of Squares   Degrees of Freedom   Mean Square   F-value
Between years        0.0027               10             0.000272     1.012
Error                0.0296              110             0.000269
Total

No evidence of moving seasonality at the five per cent level.

Source: Author’s calculation based on EViews 6 software.

According to Table 12, the F-value between years is 1.012 and shows no evidence of moving seasonality at the five percent level.

Table 13 shows the Census X 12 ARIMA(1,1,1) test for the presence of residual seasonality in the natural logarithmic monthly market price returns of the Templeton global income, (LNMPTGI) closed-end fund. The whole dataset is from 31/01/1990 to 31/12/2001. The total observations are 144 and the logarithmic monthly market price returns observations are 143. The data was obtained from Thomson Financial Investment View database.

No evidence of residual seasonality in the entire series at the 1 per cent level. F = 0.83
No evidence of residual seasonality in the last 3 years at the 1 per cent level. F = 0.55
No evidence of residual seasonality in the last 3 years at the 5 per cent level.

Source: Author’s calculation based on EViews 6 software.

According to Table 13, there is no evidence of residual seasonality of the Templeton global income, (LNMPTGI) closed-end fund in the entire series, nor in the last 3 years, at the 1 and 5 per cent levels.

Table 14 shows the Census X 12 ARIMA(1,1,1) sample autocorrelations of the residuals of the natural logarithmic monthly market price returns of the Templeton global income, (LNMPTGI) closed-end fund. The whole dataset is from 31/01/1990 to 31/12/2001. The total observations are 144 and the logarithmic monthly market price returns observations are 143. The data was obtained from Thomson Financial Investment View database.

Lag     1      2      3      4      5      6      7      8      9     10     11     12
ACF   0.00   0.03  -0.01  -0.05   0.05   0.02   0.03   0.06   0.02   0.12  -0.06  -0.07
Q     0.00   0.13   0.15   0.51   0.90   0.96   1.05   1.60   1.68   3.72   4.24   4.95
P    0.000  0.000  0.000  0.000  0.000  0.328  0.591  0.658  0.794  0.591  0.644  0.667

Lag    13     14     15     16     17     18     19     20     21     22     23     24
ACF  -0.09  -0.07  -0.07  -0.01   0.11  -0.03  -0.03   0.04  -0.01  -0.07  -0.03  -0.04
Q     6.16   6.92   7.70   7.72   9.61   9.74   9.91  10.15  10.16  10.86  11.01  11.25
P    0.630  0.646  0.658  0.738  0.650  0.715  0.769  0.810  0.858  0.864  0.894  0.915

Lag    25     26     27     28     29     30     31     32     33     34     35     36
ACF  -0.02  -0.02   0.03  -0.03  -0.05  -0.09  -0.12  -0.14  -0.02   0.04   0.07  -0.03
Q    11.33  11.39  11.54  11.65  12.06  13.55  16.00  19.66  19.72  20.08  20.96  21.16
P    0.937  0.955  0.966  0.975  0.979  0.969  0.936  0.845  0.874  0.891  0.889  0.908

Source: Author’s calculation based on EViews 6 software.

The hypotheses that have been formulated and tested are as follows:

H0: The time series of the natural logarithmic monthly market price returns of Templeton global income, (LNMPTGI) closed-end fund has no serial correlation.

H1: The time series of the natural logarithmic monthly market price returns of the Templeton global income, (LNMPTGI) closed-end fund has serial correlation.

According to Table 14, the Q-statistics and their associated p-values are not statistically significant, which is a sign that there is no serial correlation in the natural logarithmic monthly market price returns of the Templeton global income, (LNMPTGI) closed-end fund.

Table 15 shows the Census X 12 ARIMA(1,1,1) forecasts of the natural logarithmic monthly market price returns of the Templeton global income, (LNMPTGI) closed-end fund. The whole dataset is from 31/01/1990 to 31/12/2001. The total observations are 144 and the logarithmic monthly market price returns observations are 143. The data was obtained from Thomson Financial Investment View database.

Date        Forecast     Error
2002.Jan     0.024957    0.0298612
2002.Feb     0.000936    0.0298655
2002.Mar    -0.025045    0.0301795
2002.Apr    -0.016328    0.0304063
2002.May    -0.002302    0.0305026
2002.Jun     0.017867    0.0305407
2002.Jul     0.008202    0.0305533
2002.Aug     0.000871    0.0307034
2002.Sep     0.010761    0.0305597
2002.Oct    -0.000352    0.0305604
2002.Nov    -0.001682    0.0305605
2002.Dec    -0.005935    0.0305606

Source: Author’s calculation based on EViews 6 software.

Table 15 shows the 12-month forecast for 2002 together with the ARIMA forecast error. For example, in March 2002, the forecast value for the Templeton global income, (LNMPTGI) closed-end fund was -0.025045 and the error was 0.0301795. In August 2002, the forecast value of the closed-end fund was 0.000871 and the error was 0.0307034. The forecast shows minor fluctuations after removing the seasonal components and the noise from the ARIMA model. The arbitrageurs are able to craft their investment strategy concerning holding, buying or selling shares of the closed-end funds.

Table 16 shows the Census X 12 ARIMA(1,1,1) average absolute percentage error of the within-sample forecasts of the Templeton global income, (LNMPTGI) closed-end fund. The whole dataset is from 31/01/1990 to 31/12/2001. The total observations are 144 and the logarithmic monthly market price returns observations are 143. The data was obtained from Thomson Financial Investment View database.

Last year         146.72%
Last-1 year       254.62%
Last-2 year       379.28%
Last three years  260.21%

Source: Author’s calculation based on EViews 6 software.

According to Table 16, there is variation in the average absolute percentage error of the within-sample forecasts of the Templeton global income, (LNMPTGI) closed-end fund. For example, it was 254.62% in the last-1 year and 379.28% in the last-2 year.

Table 17 shows the Census X 12 ARIMA(1,1,1) model in terms of likelihood statistics of the Templeton global income, (LNMPTGI) closed-end fund. The whole dataset is from 31/01/1990 to 31/12/2001. The total observations are 144 and the logarithmic monthly market price returns observations are 143. The data was obtained from Thomson Financial Investment View database.

ARIMA Model: (1 1 1)(0 1 1)
Nonseasonal differences: 1
Seasonal differences: 1

Parameter                Estimate    Standard Error
---------------------------------------------------
Nonseasonal AR, Lag 1     -0.8403        0.11811
Nonseasonal MA, Lag 1      0.1630        0.09515
Variance               0.89169E-03

Likelihood Statistics
---------------------------------------------------
Effective number of observations (nefobs)        130
Number of parameters estimated (np)                7
Log likelihood (L)                          259.2763
AIC                                        -504.5525
AICC (F-corrected-AIC)                     -503.6345
Hannan-Quinn                               -496.3963
BIC                                        -484.4798

Source: Author’s calculation based on EViews 6 software.

According to Table 17, the estimate of the nonseasonal AR lag 1 of the Templeton global income, (LNMPTGI) closed-end fund was -0.8403 and the nonseasonal MA lag 1 was 0.1630. The value of the log likelihood was 259.2763. The AIC was -504.55 and the AICC was -503.63. The Hannan-Quinn criterion was -496.40 and the BIC criterion was -484.48.

Table 18 shows the TRAMO seasonal adjustment method in terms of test statistics on white noise residuals of the Credit Suisse asset management income, (LNMPCS) closed-end fund. The whole dataset is from 31/01/1990 to 31/12/2001. The total observations are 144 and the logarithmic monthly market price returns observations are 143. The data was obtained from Thomson Financial Investment View database.

MEAN                      = -0.0049048
ST.DEV. OF MEAN           =  0.0046276
T-VALUE                   = -1.0599
NORMALITY TEST            =  6.878 (CHI-SQUARED(2))
SKEWNESS                  =  0.2726 (SE = 0.2157)
KURTOSIS                  =  3.9912 (SE = 0.4313)
SUM OF SQUARES            =  0.3594726
DURBIN-WATSON             =  2.0453
STANDARD ERROR OF RESID.  =  0.5362631E-01
MSE OF RESID.             =  0.2875781E-02

Source: Author’s calculation based on EViews 6 software.

According to Table 18, there is no evidence of TRAMO white noise in the residuals of the Credit Suisse asset management income, (LNMPCS) closed-end fund. There is no autocorrelation, as the Durbin-Watson is 2.0453. The time series is slightly positively skewed. The distribution of the logarithmic market price returns shows excess kurtosis. The natural logarithmic monthly market price returns of the Credit Suisse asset management income, (LNMPCS), closed-end fund is normally distributed.

Table 19 shows the SEATS seasonal adjustment method in terms of test statistics on white noise residuals of the Credit Suisse asset management income, (LNMPCS) closed-end fund. The whole dataset is from 31/01/1990 to 31/12/2001. The total observations are 144 and the logarithmic monthly market price returns observations are 143. The data was obtained from Thomson Financial Investment View database.

MEAN                      = -0.5177D-03
ST.DEV. OF MEAN           =  0.4559D-02
T-VALUE                   = -0.1136
NORMALITY TEST            =  8.028 (CHI-SQUARED(2))
SKEWNESS                  =  0.2358 (SE = 0.2148)
KURTOSIS                  =  4.1224 (SE = 0.4297)
SUM OF SQUARES            =  0.3512D+00
DURBIN-WATSON             =  2.1392
STANDARD DEV. OF RESID.   =  0.5322D-01
VARIANCE OF RESID.        =  0.2833D-02

Source: Author’s calculation based on EViews 6 software.

According to Table 19, there is no evidence of SEATS white noise in the residuals of the Credit Suisse asset management income, (LNMPCS) closed-end fund. There is no autocorrelation, as the Durbin-Watson is 2.14. The time series is slightly positively skewed. The distribution of the logarithmic market price returns shows excess kurtosis of 4.12, as it is greater than three. The natural logarithmic monthly market price returns of the Credit Suisse asset management income, (LNMPCS), closed-end fund is normally distributed.
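
The residual diagnostics reported in Tables 18 and 19 (Durbin-Watson, skewness, kurtosis and a chi-squared(2) normality test) can be computed as in the sketch below. This is an illustration on placeholder residuals, not the TRAMO/SEATS output itself.

import numpy as np
from scipy import stats
from statsmodels.stats.stattools import durbin_watson, jarque_bera

# Hypothetical placeholder for the 143 model residuals.
rng = np.random.default_rng(3)
resid = rng.normal(0.0, 0.053, 143)

print("Durbin-Watson:", durbin_watson(resid))
print("Skewness:", stats.skew(resid))
print("Kurtosis:", stats.kurtosis(resid, fisher=False))  # 3 under normality
jb_stat, jb_pvalue, _, _ = jarque_bera(resid)  # chi-squared(2) normality test
print("Jarque-Bera:", jb_stat, "p-value:", jb_pvalue)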

Table 20 shows the TRAMO / SEATS Forecast of the seasonal adjustment method of the Credit Suisse asset management income, (LNMPCS) closed-end fund. The whole dataset is from 31/01/1990 to 31/12/2001. The total observations are 144 and the logarithmic monthly market price returns observations are 143. The data was obtained from Thomson Financial Investment View database.

ORIGIN: 143   NUMBER: 24

OBS   FORECAST (actual)    STD ERROR (residual)
144     0.200402             0.542735E-01
145    -0.622746E-01         0.545024E-01
146    -0.110287             0.545191E-01
147    -0.245604E-01         0.545427E-01
148     0.511368E-01         0.545663E-01
149    -0.709488E-01         0.545904E-01
150    -0.271919E-01         0.546147E-01
151     0.695119E-02         0.546394E-01
152    -0.554714E-01         0.546644E-01
153     0.128216E-01         0.546898E-01
154     0.533586E-01         0.547154E-01
155    -0.146331             0.547414E-01
156     0.194269             0.785061E-01
157    -0.684947E-01         0.788021E-01
158    -0.116594             0.788617E-01
159    -0.309554E-01         0.789323E-01
160     0.446544E-01         0.790034E-01
161    -0.775185E-01         0.790754E-01
162    -0.338491E-01         0.791483E-01
163     0.206647E-03         0.792220E-01
164    -0.623033E-01         0.792966E-01
165     0.590223E-02         0.793721E-01
166     0.463519E-01         0.794485E-01
167    -0.153425             0.795257E-01

Source: Author’s calculation based on EViews 6 software.

Table 20 shows the forecast of the time series of the Credit Suisse asset management income, (LNMPCS) closed-end fund after the 143 observations. The additive forecast includes 24 observations. The forecast took place after decomposing the time series, by using moving averages, into four components, namely the trend, (T), irregular, (I), cyclical, (C), and seasonal, (S) components. Then, the seasonal factors were adjusted to sum to zero. Finally, adjustment for the seasonal variation was done by adding on the appropriate seasonal factor for the various months. According to the table, there are minor fluctuations of the share price of the closed-end fund after adjusting for the seasonal variation. The arbitrageurs are able to craft their investment strategy concerning holding, buying or selling shares of the closed-end funds.
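
The decomposition described above can be sketched in Python as follows. statsmodels' seasonal_decompose is a simple moving-average decomposition into trend, seasonal and irregular components (the cyclical part remains inside the trend here), so it is only an approximation of the X 12 procedure, run on placeholder data.

import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Hypothetical placeholder for the 143 monthly log returns.
rng = np.random.default_rng(4)
y = pd.Series(rng.normal(0.0, 0.05, 143),
              index=pd.date_range("1990-02-28", periods=143, freq="M"))

dec = seasonal_decompose(y, model="additive", period=12)
seasonally_adjusted = y - dec.seasonal  # subtract the additive seasonal factors
print(dec.seasonal.head(12))            # one seasonal factor per calendar month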

Table 21 shows the TRAMO seasonal adjustment method in terms of test statistics of white noise residuals of the Templeton global income, (LNMPTGI) closed-end fund. The whole dataset is from 31/01/1990 to 31/12/2001. The total observations are 144 and the logarithmic monthly market price returns observations are 143. The data was obtained from Thomson Financial Investment View database.

MEAN                      = -0.0005425
ST.DEV. OF MEAN           =  0.0027537
T-VALUE                   = -0.1970
NORMALITY TEST            =  0.6485 (CHI-SQUARED(2))
SKEWNESS                  =  0.1621 (SE = 0.2165)
KURTOSIS                  =  2.8715 (SE = 0.4330)
SUM OF SQUARES            =  0.1242745
DURBIN-WATSON             =  1.9568
STANDARD ERROR OF RESID.  =  0.3178619E-01
MSE OF RESID.             =  0.1010362E-02

Source: Author’s calculation based on EViews 6 software.

According to Table 21, there is no evidence of TRAMO white noise in the residuals of the Templeton global income, (LNMPTGI) closed-end fund. There is no autocorrelation, as the Durbin-Watson is 1.9568. The time series is slightly positively skewed. The natural logarithmic monthly market price returns of the Templeton global income, (LNMPTGI), closed-end fund is normally distributed.

Table 22 shows the SEATS seasonal adjustment method in terms of test statistics of white noise residuals of the Templeton global income, (LNMPTGI) closed-end fund. The whole dataset is from 31/01/1990 to 31/12/2001. The total observations are 144 and the logarithmic monthly market price returns observations are 143. The data was obtained from Thomson Financial Investment View database.

MEAN                      =  0.1478D-03
ST.DEV. OF MEAN           =  0.2434D-02
T-VALUE                   =  0.0607
NORMALITY TEST            =  0.7305 (CHI-SQUARED(2))
SKEWNESS                  =  0.1330 (SE = 0.2056)
KURTOSIS                  =  3.2296 (SE = 0.4111)
SUM OF SQUARES            =  0.1194D+00
DURBIN-WATSON             =  2.0239
STANDARD DEV. OF RESID.   =  0.3116D-01
VARIANCE OF RESID.        =  0.9710D-03

Source: Author’s calculation based on EViews 6 software.

According to Table 22, there is no evidence of SEATS white noise in the residuals of the Templeton global income, (LNMPTGI) closed-end fund. There is no autocorrelation, as the Durbin-Watson is 2.02. The time series is slightly positively skewed, and the value is 0.13. The distribution of the logarithmic market price returns shows excess kurtosis of 3.23, as it is greater than three. The natural logarithmic monthly market price returns of the Templeton global income, (LNMPTGI) closed-end fund is normally distributed.

Table 23 shows the TRAMO / SEATS Forecast of the seasonal adjustment method of the Templeton global income, (LNMPTGI) closed-end fund. The whole dataset is from 31/01/1990 to 31/12/2001. The total observations are 144 and the logarithmic monthly market price returns observations are 143. The data was obtained from Thomson Financial Investment View database.

ORIGIN: 143   NUMBER: 24

OBS   FORECAST (actual)    STD ERROR (residual)
144     0.232518E-01         0.325554E-01
145    -0.473846E-02         0.324685E-01
146    -0.207987E-01         0.324795E-01
147    -0.196237E-01         0.325604E-01
148    -0.110866E-02         0.326569E-01
149     0.165685E-01         0.327537E-01
150     0.802519E-02         0.328527E-01
151    -0.400952E-03         0.330534E-01
152     0.100996E-01         0.330594E-01
153    -0.911702E-03         0.331671E-01
154    -0.160741E-02         0.332779E-01
155    -0.634085E-02         0.333936E-01
156     0.209655E-01         0.340763E-01
157    -0.712830E-02         0.340819E-01
158    -0.213478E-01         0.342131E-01
159    -0.199381E-01         0.343627E-01
160    -0.147820E-02         0.345179E-01
161     0.161727E-01         0.346762E-01
162     0.761890E-02         0.348379E-01
163    -0.816882E-03         0.350822E-01
164     0.967351E-02         0.351716E-01
165    -0.134810E-02         0.353434E-01
166    -0.205409E-02         0.355190E-01
167    -0.679779E-02         0.357011E-01

Source: Author’s calculation based on EViews 6 software.

Table 23 shows the forecast of the time series of the Templeton global income, (LNMPTGI) closed-end fund after the 143 observations. The additive forecast includes 24 observations. The forecast took place after decomposing the time series, by using moving averages, into four components, namely the trend, (T), irregular, (I), cyclical, (C), and seasonal, (S) components. Then, the seasonal factors were adjusted to sum to zero. Finally, adjustment for the seasonal variation was done by adding on the appropriate seasonal factor for the various months. According to the table, there are minor fluctuations of the share price of the closed-end fund after adjusting for the seasonal variation. The arbitrageurs are able to craft their investment strategy concerning holding, buying or selling shares of the closed-end funds.

Graph 1 displays the mean seasonal patterns that needed to be adjusted so that the arbitrageurs can eliminate noise and craft their investment strategy concerning holding, buying or selling shares of the Credit Suisse asset management income, (LNMPCS) closed-end fund. The whole dataset is from 31/01/1990 to 31/12/2001. The total observations are 144 and the logarithmic monthly market price returns observations are 143. The data was obtained from Thomson Financial Investment View database.

[Graph 1: Means by Season, LNMPCS by Season; horizontal axis: January to December.]

Source: Author’s calculation based on EViews 6 software.

According to Graph 1, the time series of the natural logarithmic monthly price returns of the Credit Suisse asset management income, (LNMPCS) closed-end fund shows the mean seasonal fluctuations that needed to be adjusted. The sample evidence suggests that the seasonal fluctuations of the prices of the closed-end funds are not helping the arbitrageurs to craft their investment strategy concerning holding, buying or selling shares of the closed-end funds. The seasonal components create noise that distracts the arbitrageurs from deciding on their investment strategy.

Graph 2 displays the seasonal adjustment of the prices of the Credit Suisse asset management income, (LNMPCS) closed-end fund after removing white noise from the residuals through the Hodrick-Prescott filter. The whole dataset is from 31/01/1990 to 31/12/2001. The total observations are 144 and the logarithmic monthly market price returns observations are 143. The data was obtained from Thomson Financial Investment View database.

[Graph 2: Hodrick-Prescott Filter (lambda=14400); series LNMPCS with Trend and Cycle; horizontal axis: 1990 to 2001.]

Source: Author’s calculation based on EViews 6 software.

Graph 2 shows the seasonal adjustment of the prices of the Credit Suisse asset management income, (LNMPCS) closed-end fund after removing white noise from the residuals through the Hodrick-Prescott filter. The noise and seasonality have been removed from the ARIMA(1,1,1) model. The Hodrick-Prescott filter produces a smooth estimate of the trend of a time series. The Hodrick-Prescott value of λ for monthly data is 14400. The sample evidence suggests accepting the null hypothesis stated in the methodological section. The hypothesis was that forecasting and the seasonal adjustment of the prices of the closed-end funds, after removing white noise from the residuals through the Hodrick-Prescott filter, are significant. The arbitrageurs are able to craft their investment strategy concerning holding, buying or selling shares of the closed-end funds.
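
A minimal sketch of the Hodrick-Prescott step in Graph 2 is given below, assuming the LNMPCS series is held in a pandas Series (the name and data are hypothetical placeholders); statsmodels implements the filter directly, and lamb=14400 matches the monthly setting used in the text.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical placeholder for the monthly LNMPCS series.
rng = np.random.default_rng(5)
lnmpcs = pd.Series(rng.normal(0.0, 0.05, 143),
                   index=pd.date_range("1990-02-28", periods=143, freq="M"))

# Hodrick-Prescott filter with the monthly smoothing parameter.
cycle, trend = sm.tsa.filters.hpfilter(lnmpcs, lamb=14400)
print(trend.tail())  # smooth trend estimate
print(cycle.tail())  # cyclical deviations from the trend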

Graph 3 displays the mean seasonal patterns that needed to be adjusted so that the arbitrageurs can eliminate noise and craft their investment strategy concerning holding, buying or selling shares of the Templeton global income, (LNMPTGI) closed-end fund. The whole dataset is from 31/01/1990 to 31/12/2001. The total observations are 144 and the logarithmic monthly market price returns observations are 143. The data was obtained from Thomson Financial Investment View database.

[Graph 3: Means by Season, LNMPTGI by Season; horizontal axis: January to December.]

Source: Author’s calculation based on EViews 6 software.

According to Graph 3, the time series of the natural logarithmic monthly price returns of the Templeton global income, (LNMPTGI) closed-end fund shows the mean seasonal fluctuations that needed to be adjusted. The sample evidence suggests that the seasonal fluctuations of the prices of the closed-end funds are not helping the arbitrageurs to craft their investment strategy concerning holding, buying or selling shares of the closed-end funds. The seasonal components create noise that distracts the arbitrageurs from deciding on their investment strategy.

Graph 4 displays the seasonal adjustment of the prices of the Templeton global income, (LNMPTGI) closed-end fund after removing white noise from the residuals through the Hodrick-Prescott filter. The whole dataset is from 31/01/1990 to 31/12/2001. The total observations are 144 and the logarithmic monthly market price returns observations are 143. The data was obtained from Thomson Financial Investment View database.

[Graph 4: Hodrick-Prescott Filter (lambda=14400); series LNMPTGI with Trend and Cycle; horizontal axis: 1990 to 2001.]

Source: Author’s calculation based on EViews 6 software.

Graph 4 shows the seasonal adjustment of the prices of the Templeton global income, (LNMPTGI) closed-end fund after removing white noise from the residuals through the Hodrick-Prescott filter. The noise and seasonality have been removed from the ARIMA(1,1,1) model. The Hodrick-Prescott filter produces a smooth estimate of the trend of a time series. The Hodrick-Prescott value of λ for monthly data is 14400. The sample evidence suggests accepting the null hypothesis stated in the methodological section. The hypothesis was that forecasting and the seasonal adjustment of the prices of the closed-end funds, after removing white noise from the residuals through the Hodrick-Prescott filter, are significant. The arbitrageurs are able to craft their investment strategy concerning holding, buying or selling shares of the closed-end funds.

Table 24 shows the results of the binary logit regression. The dummy variable that we have used is management gender: 1 was used for male and 0 for female. The dependent variable is management gender and the independent variables are the logarithmic performance returns.

Dependent Variable: Gender
Method: ML - Binary Logit
Date: 01/12/16   Time: 04:50
Sample: 1 143
Included observations: 143
Convergence achieved after 4 iterations
Covariance matrix computed using second derivatives

Variable                                 Coefficient   Std. Error   z-Statistic   Prob.
C                                         -0.698420     0.179525     -3.890375    0.0001
Credit Suisse asset management
  income logarithmic returns              -0.332125     0.233139     -1.424578    0.1543
Templeton global income
  logarithmic returns                      0.329809     0.234682      1.405342    0.1599

Mean dependent var       0.332000    S.D. dependent var      0.471167
S.E. of regression       0.470733    Akaike info criterion   1.273196
Sum squared resid        220.9247    Schwarz criterion       1.287919
Log likelihood          -633.5980    Hannan-Quinn criter.    1.278792
Restr. log likelihood   -635.5860    Avg. log likelihood    -0.633598
LR statistic             3.976024    McFadden R-squared      0.003128
Prob(LR statistic)       0.136967

Obs with Dep=0    90    Total obs    143
Obs with Dep=1    53

The LR statistic tests the joint null hypothesis that all slope coefficients except the constant are zero; this hypothesis is not rejected at the 5% significance level. The McFadden R-squared indicates poor goodness of fit of the model. There is no significant effect of the independent variables upon the dependent variable, which is gender. In other words, gender is not related to the returns of the funds. Please check the coefficients, the standard errors, the z-statistics and the p-values. The probability of the LR statistic is not significant, as it is above the 5% significance level.
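
A minimal sketch of the same binary logit in Python with statsmodels is given below; the variable names and the simulated data are hypothetical placeholders, so only the structure, not the numbers, mirrors Table 24.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical placeholder data: gender (1 = male, 0 = female) and the two
# funds' logarithmic returns.
rng = np.random.default_rng(6)
df = pd.DataFrame({"gender": rng.integers(0, 2, 143),
                   "lnmpcs": rng.normal(0.0, 0.05, 143),
                   "lnmptgi": rng.normal(0.0, 0.03, 143)})

X = sm.add_constant(df[["lnmpcs", "lnmptgi"]])
res = sm.Logit(df["gender"], X).fit()
print(res.summary())            # coefficients, z-statistics, p-values
print(res.prsquared)            # McFadden R-squared
print(res.llr, res.llr_pvalue)  # LR statistic and its probability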

3. Summary and conclusions

In this article, we have applied two methodologies. The first one is the Census X 12 ARIMA(1,1,1) seasonal adjustment method, developed by the US Bureau of the Census. The second methodology was developed by Gomez and Maravall, (1996), and is known as the TRAMO/SEATS seasonal adjustment method. The whole dataset is from 31/01/1990 to 31/12/2001. The total observations are 144 and the logarithmic monthly market returns observations are 143. The data was obtained from Thomson Financial Investment View database.

We have found that the natural logarithmic monthly market price returns of all closed-end funds are stationary series. There is serial correlation in the natural logarithmic monthly market price returns of the Credit Suisse asset management income, (LNMPCS), and Templeton global income, (LNMPTGI), closed-end funds before applying the ARIMA model. The natural logarithmic monthly market price returns of all closed-end funds are not normally distributed.

By applying the Census X 12 ARIMA(1,1,1), we have found that the F-value between months of the Credit Suisse asset management income, (LNMPCS) closed-end fund is 3.330, which shows evidence of seasonality at the 0.1 per cent level. There is no evidence of residual seasonality in the entire series of the Credit Suisse asset management income, (LNMPCS) closed-end fund, nor in the last 3 years, at the 1 and 5 per cent levels. The Q-statistics and their associated p-values are not statistically significant, which is a sign that there is no serial correlation in the natural logarithmic monthly market price returns of the Credit Suisse asset management income, (LNMPCS) closed-end fund. The estimate of the nonseasonal AR lag 1 of the Credit Suisse asset management income, (LNMPCS) closed-end fund was -0.9275 and the nonseasonal MA lag 1 was -0.0259. The value of the log likelihood was 229.33. The AIC was -444.6547 and the AICC was -443.7366. The Hannan-Quinn criterion was -436.50 and the BIC criterion was -424.58.

By applying the Census X 12 ARIMA(1,1,1), we have found that the F-value between months of the Templeton global income, (LNMPTGI) closed-end fund is 2.411, which shows no evidence of seasonality assuming stability at the 0.1 per cent level. The F-value between years is 1.012 and shows no evidence of moving seasonality at the five per cent level. There is no evidence of residual seasonality of the Templeton global income, (LNMPTGI) closed-end fund in the entire series, nor in the last 3 years, at the 1 and 5 per cent levels. The Q-statistics and their associated p-values are not statistically significant, which is a sign that there is no serial correlation in the natural logarithmic monthly market price returns of the Templeton global income, (LNMPTGI) closed-end fund. The estimate of the nonseasonal AR lag 1 of the Templeton global income, (LNMPTGI) closed-end fund was -0.8403 and the nonseasonal MA lag 1 was 0.1630. The value of the log likelihood was 259.2763. The AIC was -504.55 and the AICC was -503.63. The Hannan-Quinn criterion was -496.40 and the BIC criterion was -484.48.

By applying the Census X 12 ARIMA(1,1,1), the best-fitting model is that of the Templeton global income, (LNMPTGI) closed-end fund. It has the highest log likelihood value, 259.2763, in comparison to 229.33 for the Credit Suisse asset management income, (LNMPCS) closed-end fund. In addition, it has the smallest error in terms of the various criteria. Specifically, the AIC was -504.55, the AICC was -503.63, the Hannan-Quinn criterion was -496.40 and the BIC criterion was -484.48. In contrast, the Credit Suisse asset management income, (LNMPCS) closed-end fund has an AIC value of -444.6547, an AICC of -443.7366, a Hannan-Quinn criterion of -436.50 and a BIC criterion of -424.58.

There is no evidence of TRAMO white noise in the residuals of the Credit Suisse asset management income, (LNMPCS) closed-end fund. There is no autocorrelation as the Durbin – Watson is 2.0453. The time series is slightly positively skewed. The distribution of the logarithmic market price returns shows excess kurtosis. The natural logarithmic monthly market price returns of the Credit Suisse asset management income, (LNMPCS), closed-end fund is normally distributed.

There is no evidence of SEATS white noise in the residuals of the Credit Suisse asset management income, (LNMPCS) closed-end fund. There is no autocorrelation as the Durbin – Watson is 2.14. The time series is slightly positively skewed. The distribution of the logarithmic market price returns shows excess kurtosis of 4.12 as it is greater than three. The natural logarithmic monthly market price returns of the Credit Suisse asset management income, (LNMPCS), closed-end fund is normally distributed.

There is no evidence of TRAMO white noise in the residuals of the Templeton global income,(LNMPTGI), closed-end fund. There is no autocorrelation as the Durbin – Watson is 1.9568. The time series is slightly positively skewed. The natural logarithmic monthly market price returns of the Templeton global income, (LNMPTGI), closed-end fund is normally distributed.

There is no evidence of SEATS white noise in the residuals of the Templeton global income, (LNMPTGI) closed-end fund. There is no autocorrelation as the Durbin – Watson is 2.02. The time series is slightly positively skewed and the value is 0.13. The distribution of the logarithmic market price returns shows excess kurtosis of 3.23 as it is greater than three. The natural logarithmic monthly market price returns of the Templeton global income, (LNMPTGI) closed-end fund is normally distributed.

By applying the forecast of the TRAMO/SEATS methodology to the Templeton global income, (LNMPTGI) and the Credit Suisse asset management income, (LNMPCS) closed-end funds, we have found that there are minor fluctuations of the share prices of the closed-end funds after adjusting for the seasonal variation. The arbitrageurs are able to craft their investment strategy concerning holding, buying or selling shares of the closed-end funds.

The time series of the natural logarithmic monthly market price returns of the Credit Suisse asset management income, (LNMPCS) and the Templeton global income, (LNMPTGI) closed-end funds show the mean seasonal fluctuations that needed to be adjusted. The sample evidence suggests that the seasonal fluctuations of the prices of the closed-end funds are not helping the arbitrageurs to craft their investment strategy concerning holding, buying or selling shares of the closed-end funds. The seasonal components create noise that distracts the arbitrageurs from deciding on their investment strategy.

The market prices of the Credit Suisse asset management income, (LNMPCS) and the Templeton global income, (LNMPTGI) closed-end funds were seasonally adjusted after removing white noise from the residuals through the Hodrick-Prescott filter. The noise and seasonality have been removed from the ARIMA(1,1,1) model. The Hodrick-Prescott filter produces a smooth estimate of the trend of a time series. The Hodrick-Prescott value of λ for monthly data is 14400. The sample evidence suggests accepting the null hypothesis stated in the methodological section. The arbitrageurs are able to craft their investment strategy concerning holding, buying or selling shares of the closed-end funds. Forecasting and the seasonal adjustment of the market prices of the closed-end funds, after removing white noise from the residuals through the Hodrick-Prescott filter, are significant.

The LR statistic tests the joint null hypothesis that all slope coefficients except the constant are zero; this hypothesis is not rejected at the 5% significance level. The McFadden R-squared indicates poor goodness of fit of the model. There is no significant effect of the independent variables upon the dependent variable, which is gender. In other words, gender is not related to the returns of the funds. The probability of the LR statistic is not significant, as it is above the 5% significance level.

References

Andrews, D.W.K. (1991), "Heteroskedasticity and Autocorrelation Consistent Covariance Matrix Estimation". Econometrica, 59, pp.817-858.

Bhargava, A. (1986), "On the Theory of Testing for Unit Roots in Observed Time Series". Review of Economic Studies, 53, pp.369-384.

Box, G.E.P. and Jenkins, G.M. (1976), Time Series Analysis: Forecasting and Control, Revised Edition, Oakland, CA: Holden-Day.

Davidson, R. and MacKinnon, J.G. (1993), Estimation and Inference in Econometrics, Oxford: Oxford University Press.

Dickey, D.A. and Fuller, W.A. (1979), "Distribution of the Estimators for Autoregressive Time Series with a Unit Root". Journal of the American Statistical Association, 74, pp.427-431.

Elliott, G., Rothenberg, T.J. and Stock, J.H. (1996), "Efficient Tests for an Autoregressive Unit Root". Econometrica, 64, pp.813-836.

EViews User's Guide II (2007), Quantitative Micro Software, pp.71-72.

Fisher, R.A. (1932), Statistical Methods for Research Workers, 4th Edition, Edinburgh: Oliver & Boyd.

Greene, W.H. (1997), Econometric Analysis, 3rd Edition, Upper Saddle River, NJ: Prentice Hall.

Gomez, V. and Maravall, A. (1996), "Programs TRAMO and SEATS: Instructions for the User". Working Paper 9628, Servicio de Estudios, Banco de Espana.

Hamilton, J.D. (1994a), Time Series Analysis, Princeton, NJ: Princeton University Press.

Hayashi, F. (2000), Econometrics, Princeton, NJ: Princeton University Press.

Hodrick, R.J. and Prescott, E.C. (1997), "Postwar US Business Cycles: An Empirical Investigation". Journal of Money, Credit, and Banking, 29, pp.1-16.

Johnston, J. and DiNardo, J. (1997), Econometric Methods, 4th Edition, New York: McGraw-Hill.

Kwiatkowski, D., Phillips, P.C.B., Schmidt, P. and Shin, Y. (1992), "Testing the Null Hypothesis of Stationarity against the Alternative of a Unit Root". Journal of Econometrics, 54, pp.159-178.

Maddala, G.S. and Wu, S. (1999), "A Comparative Study of Unit Root Tests with Panel Data and a New Simple Test". Oxford Bulletin of Economics and Statistics, 61, pp.631-652.

Newey, W.K. and West, K.D. (1994), "Automatic Lag Selection in Covariance Matrix Estimation". Review of Economic Studies, 61, pp.631-653.

Ng, S. and Perron, P. (2001), "Lag Length Selection and the Construction of Unit Root Tests with Good Size and Power". Econometrica, 69(6), pp.1519-1554.

Phillips, P.C.B. and Perron, P. (1988), "Testing for a Unit Root in Time Series Regression". Biometrika, 75, pp.335-346.

Rao, P. and Griliches, Z. (1969), "Small-Sample Properties of Several Two-Stage Regression Methods in the Context of Auto-Correlated Errors". Journal of the American Statistical Association, 64, pp.253-272.

Said, S.E. and Dickey, D.A. (1984), "Testing for Unit Roots in Autoregressive-Moving Average Models of Unknown Order". Biometrika, 71, pp.599-607.

Censored regression models are used when the dependent variable is not fully observed. Part of the data and information is not disclosed, for confidentiality reasons. As examples, we can mention income survey data or private banking earnings data on individual investors. An additional example is firm bankruptcy records. In this case, the dependent variable is assigned censoring points above or below zero. Negative values are coded as zero; such data are left-censored. You can assign positive values as right censoring points. Check your dataset and decide, according to the positive and negative values, the range that you are going to use. First of all, plot your data in EViews 6. Then, to estimate the model, select Quick/Estimate Equation. Choose the censored estimation method from the equation estimation dialog. In the equation specification box, enter the censored dependent variable followed by the independent variables. Then select one of the three distributions of the error term; the available choices are normal, logistic and extreme value. You also have the possibility to choose the optimization algorithm by clicking the Options tab next to Specification. Check in the EViews handbook the definitions and explanations of the different optimization algorithms. They are very important and are used in investment programming.

Thus, the dependent variable y has the following two limitations:

$$y = \begin{cases} 0 & \text{if } y^{*} \le 0 \\ y^{*} & \text{if } y^{*} > 0 \end{cases}$$

Here $y^{*}$ represents the latent variable, which is not directly observed.

In this example, enter the following formula:

y c x1 x2

The dependent variable represents censored data from a private bank relating to wealthy private individuals, based on their investment records.

I have used two censoring points. For negative values, which represent losses, I have inserted 0 in the left edit field. On the other hand, in the right edit field (the right censoring point), I have inserted the value 10000 for potential gains. You can adjust these numerical values according to your dataset.

The following table includes information related to the coefficient values, the standard errors, the z-statistics and their probabilities. Please pay attention to the coefficient called SCALE. It is used to estimate the standard deviation of the residuals. In addition, EViews 6 provides likelihood summary statistics for the dependent variable.

Dependent Variable: Y
Method: ML - Censored Normal (TOBIT)
Date: 10/23/16   Time: 16:02
Sample: 1 10
Included observations: 10
Left censoring (value) series: 0
Right censoring (value) series: 10000
Convergence achieved after 5 iterations
Covariance matrix computed using second derivatives

Variable      Coefficient   Std. Error   z-Statistic   Prob.
C               345.4160     483.1773      0.714884    0.4747
X1              0.177116     0.559433      0.316599    0.7515
X2              0.786860     0.566223      1.389664    0.1646

Error Distribution
SCALE:C(4)      491.9895     110.0122      4.472137    0.0000

Mean dependent var      5609.100    S.D. dependent var      2562.232
S.E. of regression      587.6591    Akaike info criterion   16.03479
Sum squared resid       2417402.    Schwarz criterion       16.15583
Log likelihood         -76.17396    Hannan-Quinn criter.    15.90202
Avg. log likelihood    -7.617396

Left censored obs     0    Right censored obs     0
Uncensored obs       10    Total obs             10

Truncated regression models are performed in EViews 6 in the same way as censored regression models. In this case, there are numerical values that are not observed at all: the dependent variable is only observed below or above a threshold, and this is the definition of truncation. Again, this is due to confidentiality of the data; for example, when reporting tax incomes for wealthy private families that are using offshore companies. In contrast, these models could also be used for poor households with incomes above or below threshold values assigned by the econometrician.

Use the truncation points with caution. For example, use zero if observations at or below zero should be removed from your final sample. Use negative or positive values according to your dataset. Use the available statistics, including the likelihood function, to describe your data.

Thus, the dependent variable y has the following two limitations:

$$y = y^{*} \;\text{ if } y^{*} > 0; \qquad y \text{ is not observed if } y^{*} \le 0$$

Here $y^{*}$ represents the latent variable, which is not observed below the truncation point. The censored counterpart of this specification, estimated above, is known as the tobit model.
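
For comparison, a truncated-normal log likelihood differs from the censored one: truncated observations are absent from the sample altogether, so the density of each remaining observation is renormalised by the probability of not being truncated. The sketch below is illustrative only, with a lower truncation point of zero and simulated placeholder data.

import numpy as np
from scipy import stats
from scipy.optimize import minimize

L = 0.0  # lower truncation point

def negloglik(params, y, X):
    beta, sigma = params[:-1], np.exp(params[-1])
    mu = X @ beta
    # log f(y | y > L) = log pdf - log P(y* > L)
    ll = stats.norm.logpdf(y, mu, sigma) - stats.norm.logsf((L - mu) / sigma)
    return -ll.sum()

# Hypothetical placeholder data; rows with y <= L never enter the sample.
rng = np.random.default_rng(8)
X = np.column_stack([np.ones(200), rng.normal(0.0, 1.0, 200)])
y = X @ np.array([1.0, 0.5]) + rng.normal(0.0, 1.0, 200)
keep = y > L
y, X = y[keep], X[keep]

res = minimize(negloglik, np.zeros(3), args=(y, X), method="BFGS")
print(res.x[:-1], np.exp(res.x[-1]))  # coefficients and sigma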

Count data regression models in EViews 6

In this type of regression, the dependent variable y takes integer values or whole numbers. For example, the days that a treasury department in an investment bank needs to process invoices. In other words, we are focusing on the frequency that an event is occurring. Another example could be the payroll claims that an insurance company is receiving each week or the number of contracts that a credit derivatives department is buying.

To estimate such a model, select in EViews 6 Quick/Estimate Equation and then select COUNT - Integer Count Data. EViews 6 will display the count estimation dialog box. Insert the regression equation of the dependent variable followed by the independent variables or regressors. For example, consider the following regression model:

y c x1 x2

Select your count estimation method, such as Poisson, negative binomial or exponential. Specify a value for the variance parameter. Check which log likelihood function of the parameters or coefficients you want to be maximized under the Poisson, exponential or other model.

Then, click on Options and select the robust covariances and the optimization algorithm type. Check the EViews 6 User's Guide II for further explanations of the algorithms and the count estimation methods. For example, you have a choice of three algorithms: quadratic hill climbing, Newton-Raphson and Berndt-Hall-Hall-Hausman. The robust covariances option is used to compute two types of standard errors. The Huber/White option computes the quasi-maximum likelihood standard errors.

I have attached an example of how your output will be shown in EViews 6.

Dependent Variable: Y
Method: ML/QML - Poisson Count
Date: 10/24/16   Time: 11:24
Sample: 1 10
Included observations: 10
Convergence achieved after 6 iterations
QML (Huber/White) standard errors & covariance

Variable      Coefficient   Std. Error   z-Statistic   Prob.
C               7.602760     0.173898     43.71968     0.0000
X1              4.42E-05     0.000241      0.183441    0.8545
X2              0.000129     0.000238      0.540097    0.5891

R-squared               0.902279    Mean dependent var      5609.100
Adjusted R-squared      0.874358    S.D. dependent var      2562.232
S.E. of regression      908.2076    Akaike info criterion   136.5331
Sum squared resid       5773888.    Schwarz criterion       136.6238
Log likelihood         -679.6654    Hannan-Quinn criter.    136.4335
Restr. log likelihood  -5714.043    LR statistic            10068.75
Avg. log likelihood    -67.96654    Prob(LR statistic)      0.000000

Check the coefficients, the standard errors, the z-statistics and the probabilities for significance at the 95% confidence level. Check the LR statistic and the related probability. Check the log likelihood, the information criteria and the R-squared.
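
A minimal sketch of the same Poisson count estimation with QML (Huber/White) robust standard errors in Python is shown below; the data are simulated placeholders, so the numbers will not match the EViews output above.

import numpy as np
import statsmodels.api as sm

# Hypothetical placeholder count data with two regressors.
rng = np.random.default_rng(9)
x = rng.normal(0.0, 1.0, (50, 2))
X = sm.add_constant(x)
y = rng.poisson(lam=np.exp(1.0 + 0.3 * x[:, 0] + 0.1 * x[:, 1]))

# Poisson MLE with Huber/White (QML) robust covariance, as in the output.
res = sm.Poisson(y, X).fit(cov_type="HC0")
print(res.summary())
print(res.llf, res.llnull, res.llr)  # log likelihood, restricted LL, LR stat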

Detailed steps on how to organize your workfile with pooled time series, cross-section and panel data

Pooled time series and cross-section data are very important for analysing financial and economic data. For example, consider the money supply or industrial production time series of different European countries; another example could be the inflation and unemployment rates of different time periods. EViews 6 helps you to manage your data and perform calculations when using time series or cross-section data. In EViews 6, select Object, then New Object, then Pool. Set up the pool workfile by including both the time series and the cross-section data. Once the pool window is open, enter the names of your variables by using the _ character at the beginning of each variable. The pool object is a description of the type of data that you are using. You can also define groups by entering the keyword @group; a subset of the variables will be included under the group name. Cross-section data on an economic variable, such as the inflation, GDP or money supply of different countries, should be included in the pool object. Use the Proc function and then select Group. Write the names of the variables that will be included in the group and then start your analysis. You can also include dummy variables in the pool object by using the following format:

series IND_USA = 0

series IND_GBP = 1

Use Proc/Estimate from the pool menu and the dialog box will open. Insert the pool dependent series, the cross-section and period effects, the estimation method, and the regressors or independent variables.

Panel data are also known as longitudinal data; they combine cross-section and time series dimensions. These data are obtained from firms or households at periodic intervals to obtain information that will be used for further analysis. In other words, panel data are based on repeated observations at periodic intervals, to obtain a concise picture of the sample under study. Another example that we could mention is the income tax rate for different countries and for different years. Once you open EViews 6, select Balanced Panel under the workfile structure type. Choose the frequency of your data, the start and end dates and the number of cross-sections. For example, the frequency could be quarterly. Click OK and EViews will create a structured workfile. Check the range and sample description at the top of the workfile window. Arrange your variables in a group and then select View/Descriptive Statistics and Tests/Stats by Classification. The Statistics by Classification dialog will open. Select the statistics that you are interested in, such as the mean, median, standard deviation, etc. You also have the option of clicking on View to bring up the graph options dialog. Select the line, bar or boxplot option, according to how you wish to display your data. If you choose to stack the cross-sections with bars, EViews 6 will display a single graph of the stacked data. You also have the options of individual cross-sections and combined cross-sections; thus, in the same sheet, you could have several graphs of your data. Finally, for line graphs, in the panel options select mean plus standard deviation bounds; you could select one, two or three standard deviation bounds. You could also select median plus quantiles and then choose extreme quantiles. To calculate the unit root, select View/Unit Root Test. You also have the option to calculate a Pedroni residual cointegration test. Once your series are entered in a group, select View/Descriptive Statistics for the stacked data. If you wish to perform hypothesis testing on a single series, then select View/Descriptive Statistics and Tests/Simple Hypothesis Tests.
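
A minimal sketch of a balanced panel and the "stats by classification" idea in Python with pandas is given below; the countries, frequency and variable are hypothetical placeholders.

import numpy as np
import pandas as pd

# Hypothetical balanced panel: 3 countries x 20 quarters of inflation data.
rng = np.random.default_rng(10)
idx = pd.MultiIndex.from_product(
    [["USA", "GBR", "DEU"], pd.period_range("2000Q1", periods=20, freq="Q")],
    names=["country", "quarter"])
panel = pd.DataFrame({"inflation": rng.normal(2.0, 1.0, 60)}, index=idx)

# Descriptive statistics classified by cross-section (country).
print(panel.groupby(level="country")["inflation"]
           .agg(["mean", "median", "std"]))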

I have attached a research paper to help you understand the use of quantile regression and Pedroni residual cointegration test in EViews 6. Thanks for your participation and for your effort to learn new academic concepts.

Application of a quantile regression to estimate across which quantiles the US Federal Reserve sets monetary policy in relation to short-, medium- and long-term yields of US interest rates.

Dr Michel Zaki Guirguis

Bournemouth University (4)
Institute of Business and Law
Fern Barrow
Poole, BH12 5BB, UK
Tel: 0030-210-9841550
Mobile: 0030-6982044429
Email: [email protected]

Biographical notes

I hold a PhD in Finance from Bournemouth University in the UK. I have worked for several multinational companies, including JP Morgan Chase and Interamerican Insurance and Investment Company in Greece. Through seminars, I learned how to manage and select the right mutual funds according to various clients' needs. I supported and assisted the team in terms of a six sigma project and accounts reconciliation. The application of a six sigma project at JP Morgan Chase, in terms of statistical analysis, is important to improve the efficiency of the department. Professor Philip Hardwick and I have published a chapter in a book entitled "International Insurance and Financial Markets: Global Dynamics and Local Contingencies", edited by Cummins and Venard at the Wharton School (University of Pennsylvania in the US). I am working on several papers that focus on the financial services sector.

Abstract

In this article, we are investigating the effects of the macroeconomic variables. We have applied a quantile regression (including LAD) in EViews 6 to test the quantiles of the natural logarithmic returns of the seasonally adjusted money supply, (M2), in relation to the natural logarithmic returns of the 3-month, 5-year and 10-year Treasury yields with constant maturities. The aim of using this methodology is to extend the conditional mean analysis of the dependent variable in relation to the independent variables. We want to test across which quantiles the US Federal Reserve sets monetary policy in relation to interest rates. Therefore, we estimate the quantile values 0.5, 0.85, 0.90 and 0.95. By using other measures of location, such as the median or the 90th and 95th percentiles of the cumulative distribution function, we are able to better describe, understand and analyse the median regression. We have found mixed results by using two methods that estimate the covariance matrix of the quantile regression. The total dataset includes 277 observations. The data that we have used are monthly returns starting from 01/01/1990 to 01/01/2013 and totalling 276 observations. The data was obtained from the Federal Reserve Statistical Release Department and the symbols of the series are H.6 and H.15.

Note 4: I left Bournemouth University in 2006. The author’s permanent address is: 94, Terpsichoris road, Palaio Faliro, Post Code: 17562, Athens, Greece.

Keywords: quantile regression, seasonally adjusted money supply (M2), 3-month Treasury with constant maturity, 5-year and 10-year Treasury with constant maturities, symmetric quantiles tests, process coefficients, slope equality test.

Introduction

Monetary policy has been seen as a macroeconomic policy instrument used by the US Federal Open Market Committee, (FOMC), to affect output, employment, interest rates or the exchange rate mechanism. In this article, we have focused on the relationship of the seasonally adjusted money supply, M2, to the US term structure of interest rates. The purpose of this article is, by using quantile regression, to extend the conditional mean analysis of the dependent variable, namely the seasonally adjusted money supply, M2, across the 0.5, 0.85, 0.90 and 0.95 quantiles. We want to detect whether the independent or explanatory variables, which are the 3-month, 5-year and 10-year Treasury yields with constant maturities, significantly influence the money supply.

In January 1990, the seasonally adjusted money supply, M2, was $3,168.5 billion; the 3-month Treasury yield with constant maturity was 7.9%, the 5-year Treasury yield with constant maturity was 8.12% and the 10-year Treasury yield with constant maturity was 8.21%. In January 2000, the seasonally adjusted money supply, M2, had increased to $4,642.6 billion, while the short, medium and long-term yields had decreased. Specifically, the 3-month Treasury yield with constant maturity was 5.5%, the 5-year Treasury yield with constant maturity was 6.58% and the 10-year Treasury yield with constant maturity was 6.66%. In January 2008, due to the growth of downside risks and the tightening of credit, the committee increased the money supply and decreased the interest rates. For example, the seasonally adjusted money supply, M2, increased from $6,694.5 billion in January 2006 to $7,483 billion in January 2008. On the other hand, the 3-month Treasury yield with constant maturity decreased from 4.34% in January 2006 to 2.82% in January 2008. On January 27th, 2010, the Federal Reserve was in the process of purchasing $1.25 trillion of agency mortgage-backed securities and about $175 billion of agency debt. There was a substantial increase in the money supply in 2011 and 2012. The Federal Open Market Committee decided to purchase $600 billion of longer-term Treasury securities by the end of the second quarter of 2011. The purpose of the asset purchase program is to maximise employment and achieve price stability. The committee adopted an expansionary monetary policy, with low interest rates and a low federal funds rate of 0 to 1/4 percent, to reduce the unemployment rate as a result of the recession of 2008.

Evaluation of the performance of the quantile regression models has been based on the Forecast function in EViews. Indicators such as the root mean square error, (RMSE), the mean absolute error, (MAE), the Theil inequality coefficient, the bias proportion, the variance proportion and the covariance proportion have been used to choose the best model, that is, the one that minimizes the forecast error.

The rest of the paper is organized as follows. Section 1 describes the methodology and the data. Section 2 is an analysis of statistics and econometrics tests and Section 3 summarizes and concludes.

1. Methodological issues and data explanations.

In this article, we are going to test the relationship of macroeconomic indicators by applying a quantile regression. The methodology was initiated by Koenker and Bassett, (1978), in an effort to capture not only the conditional mean of the dependent variable in relation to the regressors, but the whole conditional distribution. The aim was to get a holistic understanding of the cumulative distribution function in relation to other measures of location and percentiles. This area of research has attracted the attention of many academics, such as: Buchinsky, (1995), Chamberlain, (1994), Falk, (1986), He and Hu, (2002), Hendricks and Koenker, (1992), Jones, (1992), Kocherginsky, Xuming and Yunming, (2005), Koenker, (1994, 2005), Koenker and Hallock, (2001), Koenker and Jose, (1999), Powell, (1986), Siddiqui, (1960), and Welsh, (1988).

Let’s assume a constant model where the natural logarithmic monthly return of the seasonally adjusted money supply, (M2), is linearly related to a constant. The mathematical equation (1) is as follows:

$$y_t = \alpha_0 + \varepsilon_t \quad (1)$$

where $y$ = lnM2 is the natural logarithmic monthly return of the seasonally adjusted money supply, (M2), and is the dependent variable; $\alpha_0$ is the constant term of the regression; and $\varepsilon_t$ is the error term, which has i.i.d. properties.

According to EViews User’s Guide II, (p.271), taking into consideration the empirical quantile, the mathematical equation is as follows:

$$Q_n(\tau) = \inf \{\, y : F_n(y) \ge \tau \,\} \quad (2)$$

Koenker and Bassett, (1978), proposed the following minimization problem to find the $\tau$-th sample quantile, based on the fact that the $\tau$-th quantile of the dependent variable, which in our case is the seasonally adjusted money supply, M2, can be obtained as the solution for $\alpha_0(\tau)$ based on equation (1). Thus, the empirical quantile equation is as follows:

$$Q_n(\tau) = \arg\min_{\alpha_0} \left\{ \sum_{i:\, y_i \ge \alpha_0} \tau \,\lvert y_i - \alpha_0 \rvert + \sum_{i:\, y_i < \alpha_0} (1 - \tau)\,\lvert y_i - \alpha_0 \rvert \right\} \quad (3)$$

$$Q_n(\tau) = \arg\min_{\alpha_0} \left\{ \sum_i \rho_\tau ( y_i - \alpha_0 ) \right\} \quad (4)$$

We extend the concept of finding the $\tau$-th quantile by assuming a linear regression model that incorporates the independent variables X. Thus, equation (5) is as follows:

$$y_t = \alpha_0 + \beta_1 \ln 3month_t + \beta_2 \ln 5year_t + \beta_3 \ln 10year_t + \varepsilon_t \quad (5)$$

where $y$ = lnM2 is the natural logarithmic monthly return of the seasonally adjusted money supply, (M2), and is the dependent variable; $\alpha_0$ is the constant term of the regression; $\beta_i$ are the coefficients to be estimated on the independent variables; and $\varepsilon_t$ is the error term, which has i.i.d. properties.

According to EViews User’s Guide II, ( p.271), given values for the p-vector of independent variables X as in equation (5), we assume a linear specification for the conditional quantile of the dependent variable y, thus, we have:

Q(τ|X i , β (τ ))=Χ i' β (τ ) (6)

Where β (τ ) is the vector of coefficients associated with the τ -th quantile .

The conditional quantile regression (6) based on the analog of the unconditional quantile minimization of equation (4) becomes:

βn( τ )=argminβ (τ ){∑iρτ ( y i−X i

' β( τ ))} (7)

We have used two methods to estimate the covariance matrix of the quantile regression. The purpose of using both methods is to check if there is substantial deviations of the standard errors and the QLR statistic among the two methods. The first methodology is based on the Huber sandwich standard errors and covariance and the method for computing the scalar sparsity is the Kernel residual. The second methodology is based on the Bootstrap standard errors and covariance and the method that we have used to compute the scalar sparsity is the Siddiqui mean fitted. The bootstrap method that we have used are the XY pair bootstrap resampling with replacement of subsamples of size m from the original data. We have used a maximum number of replications of 500 iterations. According to EViews User’s Guide II, ( p.277), the estimator of the asymptotic covariance matrix is estimated as follows:

V ( β )=n(mn ) 1

B∑j=1

B

( β j (τ )− β ( τ ))( β j(τ )− β ( τ ))' (8)

Where V ( β ) is the bootstrap covariance matrix and β ( τ ) is the mean of bootstrap elements. m and n are vectors of resampled residuals. On the other hand, to calculate the Siddiqui mean fitted as developed by Bassett and Koenker,

383

Page 384: Introduction to Econometrics 2

(1982), we assume that the conditional quantile denoted by Qn(τ ) is the inverse

of the conditional cumulative distribution function, F−1(τ ). According to EViews

User’s Guide II, ( p.273), a simple difference quotient of the empirical quantile function is as follows:

s( τ )=[ F−1( τ+hn )−F−1( τ−hn) ] /(2hn ) (9)

Where: hn is a bandwidth, which tend to zero as the sample size approach infinity.

The Siddiqui mean fitted is based on estimating two additional quantile regression

models for τ−hn and τ+hn and then use the coefficients to calculate fitted quantiles.

According to EViews User’s Guide II, ( p.274), the kernel residual is computed by estimating s(τ ) using the inverse of a kernel density function estimator. Thus the mathematical equation is as follows:

s( τ )=1/ [ (1 /n )∑i=1

n

cn−1 K (u i(τ )/cn )] (10)

Where u( τ ) are the residuals from the quantile regression.

The hypotheses that have been formulated are as follows:

H0: Increase in the seasonally adjusted money supply, M2, was not accompanied by decrease in the yields of the 3-month Treasury constant maturity, the 5-year Treasury constant maturity and the 10-year Treasury constant maturity across the 0.5, 0.85, 0.90 and 0.95 quantiles.

H1: Increase in the seasonally adjusted money supply, M2, was accompanied by decrease in the yields of the 3-month Treasury constant maturity, the 5-year Treasury constant maturity and the 10-year Treasury constant maturity across the 0.5, 0.85, 0.90 and 0.95 quantiles.

Descriptive statistics will be displayed and to test for normality the Jarque – Bera statistic is analysed. We have checked the stationery of the series by applying the Augmented Dickey – Fuller’s stationary test, (ADF), statistic to calculate and compare the critical values.

The data that we have used are monthly returns starting from 01/01/1990 to 01/01/2013, which total to 276 observations. The data has been derived from money stock measures, and selected interest rates. All the data were obtained from the Federal Reserve Statistical Release Department and they are denoted by the symbols, H.6 and H.15. According to the Federal Reserve Statistical Release, the seasonally adjusted money supply, (M2), consists of M1, namely currency outside the US Treasury, Federal Reserve Banks, the vaults of depository

384

Page 385: Introduction to Econometrics 2

institutions, traveller’s checks of nonblank issuers, demand deposits at commercial banks less cash items in the process of collection and Federal Reserve Float, other checkable deposits, credit union share draft accounts, and demand deposits at thrift institutions. In addition to M1 the M2 seasonally adjusted series include savings deposit, small – denomination time deposits and retail money funds.

The natural logarithmic formula that we have used is as follows:

Rt= ln (Pt /P t−1 ) (11)

Where: Rt is the monthly return for month t, Pt is the closing price for month t, and Pt-1 is the closing price lagged one period for month t-1.

385

Page 386: Introduction to Econometrics 2

2. Statistics and econometrics tests.

Table 1 shows descriptive statistics and normality tests of the natural logarithmic monthly returns of the seasonally adjusted money supply, M2 denoted as, LNM2 and natural logarithmic monthly returns of the 10 - year Treasury constant maturity, 5 - year Treasury constant maturity and 3 – month Treasury constant maturity denoted as LN10, LN5 and LN3.

Table 1 displays Jarque - Bera normality test. LN10 shows logarithmic monthly returns of the 10 - year Treasury constant maturity. LN3 represents logarithmic monthly returns of the 3 - month Treasury constant maturity. LN5 represents logarithmic monthly returns of the 5 - year Treasury constant maturity. LNM2 represents the logarithmic monthly returns of the seasonally adjusted money supply, M2.

LN10 LN3 LN5 LNM2 Mean -0.005284 -0.017124 -0.008352  0.004323 Median -0.008242  0.000000 -0.010953  0.004013 Maximum  0.178310  1.466337  0.357415  0.023941 Minimum -0.377530 -1.845827 -0.411980 -0.008032 Std. Dev.  0.056167  0.240634  0.080410  0.003743 Skewness -1.041721 -0.669940 -0.313518  1.487112 Kurtosis  11.17080  25.82268  8.542075  9.580017

 Jarque-Bera  817.6817  6010.707  357.7393  599.6403 Probability  0.000000  0.000000  0.000000  0.000000

 Sum -1.458250 -4.726123 -2.305051  1.193229 Sum Sq. Dev.  0.867561  15.92375  1.778078  0.003854

 Observations  276  276  276  276

Source: Author’s calculation based on EViews software.Significant p-value at 5% significance level.

We state the hypotheses as follows:

H0: The log difference of the monthly returns of the 3 - month, 5-year and 10-year Treasury constant maturities and the log difference of the monthly returns of the seasonally adjusted money supply, M2, are normally distributed.

386

Page 387: Introduction to Econometrics 2

H1: The log difference of the monthly returns of the 3- month, 5-year and 10-year Treasury constant maturities and the log difference of the monthly returns of the seasonally adjusted money supply, M2, are not normally distributed.

According to Table 1, the Jarque – Bera χ2

statistics for all variables are very significant at the 5% significance value. For example, the logarithmic monthly returns of seasonally adjusted money supply, M2, which represents the growth or

contraction in an economy shows a χ2

statistic of 599.6403, which is very significant, as the p-value is 0.0000. The joint test of the null hypothesis that sample skewness equals 0 and sample kurtosis equals 3 is rejected. Thus, The sample evidence suggest that we can reject H0 of normality. The distribution of the various variables shows excess kurtosis. It is leptokurtic and slightly positively or negatively skewed. For example, the kurtosis of the logarithmic monthly returns of the 3 – month Treasury with constant maturity is 25.82, which is greater than 3.

Tables 2 - 5 show the ADF tests of the natural logarithmic differences of the US seasonally adjusted money supply,(M2), the 3 - month Treasury constant maturity, the 5 - year Treasury constant maturity, and the 10 - year Treasury constant maturity.

Table 2 shows the ADF test of the natural logarithmic monthly difference of the US seasonally adjusted money supply,(M2), for the period 01/01/1990 to 01/01/2013.

Null Hypothesis: LNM2 has a unit rootExogenous: ConstantLag Length: 4 (Automatic based on SIC, MAXLAG=15)

t-Statistic   Prob.*

Augmented Dickey-Fuller test statistic -4.997789  0.0000Test critical values: 1% level -3.454353

5% level -2.87200110% level -2.572417

*MacKinnon (1996) one-sided p-values.

Augmented Dickey-Fuller Test EquationDependent Variable: D(LNM2)Method: Least SquaresDate: 09/15/13 Time: 16:28Sample (adjusted): 7 277Included observations: 271 after adjustments

Variable Coefficient Std. Error t-Statistic Prob.

LNM2(-1) -0.413270 0.082691 -4.997789 0.0000D(LNM2(-1)) -0.260029 0.082776 -3.141373 0.0019D(LNM2(-2)) -0.135864 0.080517 -1.687388 0.0927D(LNM2(-3)) 0.013954 0.073519 0.189808 0.8496D(LNM2(-4)) -0.176664 0.060777 -2.906785 0.0040

C 0.001810 0.000410 4.415128 0.0000

387

Page 388: Introduction to Econometrics 2

R-squared 0.378184    Mean dependent var -7.89E-07Adjusted R-squared 0.366451    S.D. dependent var 0.004206S.E. of regression 0.003348    Akaike info criterion -8.539021Sum squared resid 0.002970    Schwarz criterion -8.459269Log likelihood 1163.037    Hannan-Quinn criter. -8.506999F-statistic 32.23417    Durbin-Watson stat 2.014182Prob(F-statistic) 0.000000

Source: Author’s calculation based on EViews software.

For a level of significance of one per cent, the critical value of the t-statistic of the Dickey-Fuller’s table is -3.45. According to Table 2 and to the sample evidence, we can reject the null hypothesis namely the existence of a unit root with one, five and ten per cent significance level. The ADF test statistic is -4.998, which is smaller than the critical values, (-3.4564, -2.87, -2.57). In other words, the natural monthly logarithmic difference of the US seasonally adjusted money supply,(M2) is a stationary series.

Table 3 shows the ADF test of the natural logarithmic monthly difference of the 3 - month Treasury constant maturity for the period 01/01/1990 to 01/01/2013.

Null Hypothesis: LN3 has a unit rootExogenous: ConstantLag Length: 1 (Automatic based on SIC, MAXLAG=15)

t-Statistic   Prob.*

Augmented Dickey-Fuller test statistic -13.43752  0.0000Test critical values: 1% level -3.454085

5% level -2.87188310% level -2.572354

*MacKinnon (1996) one-sided p-values.

Augmented Dickey-Fuller Test EquationDependent Variable: D(LN3)Method: Least SquaresDate: 09/15/13 Time: 16:30Sample (adjusted): 4 277Included observations: 274 after adjustments

Variable Coefficient Std. Error t-Statistic Prob.

LN3(-1) -0.978777 0.072839 -13.43752 0.0000D(LN3(-1)) 0.266550 0.058664 4.543703 0.0000

C -0.016748 0.013801 -1.213521 0.2260

R-squared 0.429880    Mean dependent var -7.67E-05Adjusted R-squared 0.425673    S.D. dependent var 0.300248S.E. of regression 0.227541    Akaike info criterion -0.112085Sum squared resid 14.03099    Schwarz criterion -0.072525Log likelihood 18.35561    Hannan-Quinn criter. -0.096206F-statistic 102.1693    Durbin-Watson stat 1.935150Prob(F-statistic) 0.000000

388

Page 389: Introduction to Econometrics 2

Source: Author’s calculation based on EViews software.

For a level of significance of one per cent, the critical value of the t-statistic from Dickey-Fuller’s table is -3.45. According to Table 3 and to the sample evidence, we can reject the null hypothesis namely the existence of a unit root with one, five and ten per cent significance level. The ADF test statistic is -13.44, which is smaller than the critical values, (-3.4563, -2.87, -2.57). In other words, the natural logarithmic monthly difference of the returns of the 3 - month Treasury constant maturity is a stationary series.

Table 4 shows the ADF test of the natural logarithmic monthly returns of the 5 - year Treasury constant maturity for the period 01/01/1990 to 01/01/2013.

Null Hypothesis: LN5 has a unit rootExogenous: ConstantLag Length: 0 (Automatic based on SIC, MAXLAG=15)

t-Statistic   Prob.*

Augmented Dickey-Fuller test statistic -13.14101  0.0000Test critical values: 1% level -3.453997

5% level -2.87184510% level -2.572334

*MacKinnon (1996) one-sided p-values.

Augmented Dickey-Fuller Test EquationDependent Variable: D(LN5)Method: Least SquaresDate: 09/15/13 Time: 16:31Sample (adjusted): 3 277Included observations: 275 after adjustments

Variable Coefficient Std. Error t-Statistic Prob.

LN5(-1) -0.781114 0.059441 -13.14101 0.0000C -0.006563 0.004777 -1.373921 0.1706

R-squared 0.387461    Mean dependent var 0.000399Adjusted R-squared 0.385217    S.D. dependent var 0.100407S.E. of regression 0.078727    Akaike info criterion -2.238417Sum squared resid 1.692035    Schwarz criterion -2.212113Log likelihood 309.7823    Hannan-Quinn criter. -2.227860F-statistic 172.6861    Durbin-Watson stat 1.966135Prob(F-statistic) 0.000000

Source: Author’s calculation based on EViews software.

For a level of significance of five per cent, the critical value of the t-statistic from Dickey-Fuller’s table is -2.87. According to Table 4 and to the sample evidence, we can reject the null hypothesis namely the existence of a unit root with one, five

389

Page 390: Introduction to Econometrics 2

and ten per cent significance level. The ADF test statistic is -13.14, which is smaller than the critical values, (-3.4563, -2.87, -2.57). In other words, the natural logarithmic monthly difference of the 5 - year Treasury constant maturity returns is a stationary series.

Table 5 shows the ADF test of the monthly mean log difference of the 10 - year Treasury constant maturity for the period 01/01/1990 to 01/01/2013.

Null Hypothesis: LN10 has a unit rootExogenous: ConstantLag Length: 1 (Automatic based on SIC, MAXLAG=15)

t-Statistic   Prob.*

Augmented Dickey-Fuller test statistic -11.97065  0.0000Test critical values: 1% level -3.454085

5% level -2.87188310% level -2.572354

*MacKinnon (1996) one-sided p-values.

Augmented Dickey-Fuller Test EquationDependent Variable: D(LN10)Method: Least SquaresDate: 09/15/13 Time: 16:32Sample (adjusted): 4 277Included observations: 274 after adjustments

Variable Coefficient Std. Error t-Statistic Prob.

LN10(-1) -0.903092 0.075442 -11.97065 0.0000D(LN10(-1)) 0.161771 0.060349 2.680583 0.0078

C -0.004929 0.003316 -1.486693 0.1383

R-squared 0.400766    Mean dependent var 0.000331Adjusted R-squared 0.396344    S.D. dependent var 0.070016S.E. of regression 0.054399    Akaike info criterion -2.974048Sum squared resid 0.801962    Schwarz criterion -2.934488Log likelihood 410.4446    Hannan-Quinn criter. -2.958170F-statistic 90.62216    Durbin-Watson stat 1.955152Prob(F-statistic) 0.000000

Source: Author’s calculation based on EViews software.

For a level of significance of one per cent, the critical value of the t-statistic from Dickey-Fuller’s table is -3.45. According to Table 5 and to the sample evidence, we can reject the null hypothesis namely the existence of a unit root with one, five and ten per cent significance level. The ADF test statistic is -11.97, which is

390

Page 391: Introduction to Econometrics 2

smaller than the critical values, (-3.45, -2.87, -2.57). In other words, the natural logarithmic difference of the monthly 10 - year Treasury constant maturity is a stationary series.

Table 6 displays the results of the Quantile regression, (including LAD). The method that we have chosen for computing the coefficient covariances is the Huber Sandwich and the method for computing the scalar sparsity is the Kernel residual. The dependent variable is LNM2, which represents the logarithmic monthly returns of the seasonally adjusted money supply, M2. The independent or explanatory variables are LN10, which shows natural logarithmic monthly returns of the 10-year Treasury constant maturity. LN3 represents natural logarithmic monthly returns of the 3-month Treasury constant maturity. LN5 represents logarithmic monthly returns of the 5-year Treasury constant maturity. The quantile to estimate is 0.5 and the sample covers the period 01/01/1990 to 01/01/2013.

Dependent Variable: LNM2Method: Quantile Regression (Median)Date: 09/16/13 Time: 19:05Sample: 2 277Included observations: 276Huber Sandwich Standard Errors & CovarianceSparsity method: Kernel (Epanechnikov) using residualsBandwidth method: Hall-Sheather, bw=0.14922Estimation successfully identifies unique optimal solution

Variable Coefficient Std. Error t-Statistic Prob.

C 0.004008 0.000224 17.86485 0.0000LN3 -0.001003 0.001216 -0.824815 0.4102LN5 -0.008029 0.008792 -0.913149 0.3620

LN10 0.007850 0.014024 0.559804 0.5761

Pseudo R-squared 0.016853    Mean dependent var 0.004323Adjusted R-squared 0.006010    S.D. dependent var 0.003743S.E. of regression 0.003655    Objective 0.350851Quantile dependent var 0.004010    Objective (const. only) 0.356865Sparsity 0.007407    Quasi-LR statistic 6.495842Prob(Quasi-LR stat) 0.089827

Source: Author’s calculation based on EViews software.Significant results at the 5% significance level.

Our estimates are based on the median or 0.5 quantile that displays the relationship of the dependent variable, LNM2, which represents the logarithmic monthly returns of the seasonally adjusted money supply, M2, in relation to the explanatory variables. LN10, shows the natural logarithmic monthly returns of the 10-year Treasury constant maturity. LN3 represents natural logarithmic monthly returns of the 3-month Treasury constant maturity. LN5 represents logarithmic monthly returns of the 5-year Treasury constant maturity. According to Table 6, the coefficient of the constant of the quantile regression is significant at the 5% significance level. The coefficient is 0.004, the t-statistic is 17.86 and the p-probability is 0.0000. The coefficients of LN3, LN5 and LN10 are not statistically significant and based on the 0.5 quantile, they do not affect the seasonally adjusted money supply, M2. We have concluded that increase in the seasonally

391

Page 392: Introduction to Econometrics 2

adjusted money supply, M2, was not accompanied by decrease in the yields of the 3-month Treasury constant maturity, the 5-year Treasury constant maturity and the 10-year Treasury constant maturity across the 0.5 quantile.

The bandwidth uses a value of 0.14922. The adjusted R-squared is very low and the numeric value that explains the variation of the independent variables that have in the dependent variable is 0.6%. The Quasi –LR statistic is 6.50 and the probability of the (Quasi-LR stat) is accounted as 0.09, which is not statistically significant, as it is above the 5% significance level. The minimized value of equation (3) and (4) are reported in the objective function. The value recorded is 0.35. The sparsity is 0.007. We have also tested the residuals of the quantile regression and we have found that it is stationary. The Augmented Dickey – Fuller test statistic was - 4.92, which is smaller than the critical values at the one,(-3.45), five,(-2.87) and ten percent,(-2.57). Thus, we can reject the null hypothesis namely the existence of a unit root in the residual series.

Table 7 displays the results of the Quantile regression, (including LAD). The method that we have chosen for computing the coefficient covariances is the Bootstrap standard errors and covariance and the method for computing the scalar sparsity is the Siddiqui using fitted quantiles. The dependent variable is LNM2, which represents the logarithmic monthly returns of the seasonally adjusted money supply, M2. The independent or explanatory variables are LN10, which shows natural logarithmic monthly returns of the 10-year Treasury constant maturity. LN3 represents natural logarithmic monthly returns of the 3-month Treasury constant maturity. LN5 represents logarithmic monthly returns of the 5-year Treasury constant maturity. The quantile to estimate is 0.5 and the sample covers the period 01/01/1990 to 01/01/2013.

Dependent Variable: LNM2Method: Quantile Regression (Median)Date: 09/16/13 Time: 15:06Sample: 2 277Included observations: 276Bootstrap Standard Errors & CovarianceBootstrap method: XY-pair, reps=500, rng=kn, seed=2001381543Sparsity method: Siddiqui using fitted quantilesBandwidth method: Hall-Sheather, bw=0.14922Estimation successfully identifies unique optimal solution

392

Page 393: Introduction to Econometrics 2

Variable Coefficient Std. Error t-Statistic Prob.

C 0.004008 0.000188 21.36299 0.0000LN3 -0.001003 0.002322 -0.432028 0.6661LN5 -0.008029 0.009703 -0.827461 0.4087

LN10 0.007850 0.014494 0.541652 0.5885

Pseudo R-squared 0.016853    Mean dependent var 0.004323Adjusted R-squared 0.006010    S.D. dependent var 0.003743S.E. of regression 0.003655    Objective 0.350851Quantile dependent var 0.004010    Objective (const. only) 0.356865Sparsity 0.007099    Quasi-LR statistic 6.777516Prob(Quasi-LR stat) 0.079338

Source: Author’s calculation based on EViews software.Significant results at the 5% significance level.

The purpose of running this quantile regression that calculate the covariance matrix based on XY bootstrap pair and uses as a sparsity method the Siddiqui fitted quantiles is to check, if there is substantial deviations of the standard errors and the QLR statistic from the quantile regression reported in Table 6. The quantile regression in Table 6 uses as the covariance matrix the Huber Sandwich and the scalar sparsity is the Kernel residual. By comparing Table 6 and Table 7, we have found that the standard errors of the coefficients are identical or close to each other. For example, the standard errors of LN5 are (0.0088 versus 0.0097) and the standard errors of LN3 are (0.001 versus 0.002). The sparsity has similar value of 0.007 and the coefficients of LN3, LN5 and LN10 in both quantiles regressions are not statistically significant and they do not affect the money supply at the 0.5 quantile level. Similarly, the other statistics such as the adjusted R-squared, the Quasi –LR statistic and the probability of the (Quasi-LR stat) have very close or identical numerical values as in Table 6. Specifically, the adjusted R-squared is very low and the numeric value that explains the variation of the independent variables that have in the dependent variable is only 0.6%. The Quasi –LR statistic is 6.77 and the probability of the (Quasi-LR stat) is accounted as 0.08, which is not statistically significant, as it is above the 5% significance level. The minimized value of equation (3) and (4) are reported in the objective function. The value recorded is 0.35. We have also tested the residuals of the quantile regression and we have found that it is stationary. The Augmented Dickey – Fuller test statistic was - 4.92, which is smaller than the critical values at the one,(-3.45), five,(-2.87) and ten percent,(-2.57). Thus, we can reject the null hypothesis namely the existence of a unit root in the residual series.

Table 8 and Graph 1 displays the forecasting results of the quantile regression. The quantile to estimate is 0.5 and the sample covers the period 01/01/1990 to 01/01/2013.

393

Page 394: Introduction to Econometrics 2

-.008

-.004

.000

.004

.008

.012

.016

.020

50 100 150 200 250

LNM2F ± 2 S.E.

Forecast: LNM2FActual: LNM2Forecast sample: 2 277Included observations: 276

Root Mean Squared Error 0.003628Mean Absolute Error 0.002542Mean Abs. Percent Error 313.4202Theil Inequality Coefficient 0.370750 Bias Proportion 0.005637 Variance Proportion 0.839548 Covariance Proportion 0.154815

Source: Author’s calculation based on EViews software.

According to Table 8 and Graph 1, the root mean squared error, (RMSE), and mean absolute error, (MAE), have a low value of 0.004 and 0.003. Theil inequality coefficient should be between zero and one. In our case, it is 0.37, which shows that the model is good fit. The closer is this value to zero indicates that the model is best fit. The bias proportion is very low and has a value of 0.006 and the value of the covariance proportion is 0.15.

Table 9 shows the results of the Quantile regression, (including LAD). The method that we have chosen for computing the coefficient covariances is the Huber Sandwich and the method for computing the scalar sparsity is the Kernel residual. The dependent variable is LNM2, which represents the logarithmic monthly returns of the seasonally adjusted money supply, M2. The independent or explanatory variables are LN10, which shows natural logarithmic monthly returns of the 10-year Treasury constant maturity. LN3 represents natural logarithmic monthly returns of the 3-month Treasury constant maturity. LN5 represents logarithmic monthly returns of the 5-year Treasury constant maturity. The quantile to estimate is 0.85 and the quantile regression covers the period 01/01/1990 to 01/01/2013.

Dependent Variable: LNM2Method: Quantile Regression (tau = 0.85)Date: 09/15/13 Time: 18:19Sample: 2 277Included observations: 276Huber Sandwich Standard Errors & CovarianceSparsity method: Kernel (Epanechnikov) using residualsBandwidth method: Hall-Sheather, bw=0.07117Estimation successfully identifies unique optimal solution

Variable Coefficient Std. Error t-Statistic Prob.

C 0.007587 0.000354 21.45546 0.0000LN3 -0.003956 0.001203 -3.288054 0.0011LN5 -0.034039 0.007625 -4.464028 0.0000

394

Page 395: Introduction to Econometrics 2

LN10 0.045355 0.012385 3.661991 0.0003

Pseudo R-squared 0.058617    Mean dependent var 0.004323Adjusted R-squared 0.048234    S.D. dependent var 0.003743S.E. of regression 0.004943    Objective 0.234169Quantile dependent var 0.007261    Objective (const. only) 0.248750Sparsity 0.015247    Quasi-LR statistic 15.00057Prob(Quasi-LR stat) 0.001816

Source: Author’s calculation based on EViews software.Significant results at the 5% significance level.

Our estimates are based on the 0.85 quantile that displays the relationship of the dependent variable, LNM2, which represents the logarithmic monthly returns of the seasonally adjusted money supply, M2, in relation to the explanatory variables. LN10, shows the natural logarithmic monthly returns of the 10-year Treasury constant maturity. LN3 represents natural logarithmic monthly returns of the 3-month Treasury constant maturity. LN5 represents logarithmic monthly returns of the 5-year Treasury constant maturity. According to Table 9, all the coefficients of the quantile regression are significant at the 5% significance level. The coefficient of the constant is 0.008, the t-statistic is 21.46 and the p-probability is 0.0000. The coefficients of LN3, LN5 and LN10 are statistically significant and based on the 0.85 quantile, they do affect the seasonally adjusted money supply, M2. The interpretation of the results of the quantile regression was used in combination with the descriptive statistics in the introductory section that were related to the seasonally adjusted money supply, M2 and interest rates. We have concluded that increase in the seasonally adjusted money supply, M2, was accompanied by decrease in the yields of the 3-month Treasury constant maturity, the 5-year Treasury constant maturity and the 10-year Treasury constant maturity across the 0.85 quantile. The bandwidth uses a value of 0.07117. The adjusted R-squared is very low and the numeric value that explains the variation of the independent variables that have in the dependent variable is only 4.8%. The Quasi –LR statistic is 15 and the probability of the (Quasi-LR stat) is accounted as 0.002, which is statistically significant, as it is below the 5% significance level. The minimized value of equation (3) and (4) are reported in the objective function. The value recorded is 0.23. The sparsity is 0.02. We have also tested the residuals of the quantile regression and we have found that it is stationary. The Augmented Dickey – Fuller test statistic was - 4.87, which is smaller than the critical values at the one,(-3.45), five,(-2.87) and ten percent,(-2.57). Thus, we can reject the null hypothesis namely the existence of a unit root in the residual series.

Table 10 displays the results of the Quantile regression, (including LAD). The method that we have chosen for computing the coefficient covariances is the Bootstrap standard errors and covariance and the method for computing the scalar sparsity is the Siddiqui using fitted quantiles. The dependent variable is LNM2, which represents the logarithmic monthly returns of the seasonally adjusted money supply, M2. The independent or explanatory variables are LN10, which shows natural logarithmic monthly returns of the 10-year Treasury constant maturity. LN3 represents natural logarithmic monthly returns of the 3-month Treasury constant maturity. LN5 represents logarithmic monthly returns of the 5-year Treasury constant maturity. The

395

Page 396: Introduction to Econometrics 2

quantile to estimate is 0.85 and the quantile regression covers the period 01/01/1990 to 01/01/2013.

Dependent Variable: LNM2Method: Quantile Regression (tau = 0.85)Date: 09/16/13 Time: 15:24Sample: 2 277Included observations: 276Bootstrap Standard Errors & CovarianceBootstrap method: XY-pair, reps=500, rng=kn, seed=2001381543Sparsity method: Siddiqui using fitted quantilesBandwidth method: Hall-Sheather, bw=0.07117Estimation successfully identifies unique optimal solution

Variable Coefficient Std. Error t-Statistic Prob.

C 0.007587 0.000383 19.80691 0.0000LN3 -0.003956 0.003028 -1.306491 0.1925LN5 -0.034039 0.012992 -2.619992 0.0093

LN10 0.045355 0.021512 2.108374 0.0359

Pseudo R-squared 0.058617    Mean dependent var 0.004323Adjusted R-squared 0.048234    S.D. dependent var 0.003743S.E. of regression 0.004943    Objective 0.234169Quantile dependent var 0.007261    Objective (const. only) 0.248750Sparsity 0.019363    Quasi-LR statistic 11.81207Prob(Quasi-LR stat) 0.008055

Source: Author’s calculation based on EViews software.Significant results at the 5% significance level.

The purpose of running this quantile regression that calculate the covariance matrix based on XY bootstrap pair and uses as a sparsity method the Siddiqui fitted quantiles is to check, if there is substantial deviations of the standard errors and the QLR statistic from the quantile regression reported in Table 9. We have used a maximum number of replications of 500 iterations. The quantile regression in, Table 9, uses as the covariance matrix the Huber Sandwich and the scalar sparsity is the Kernel residual. By comparing Table 9 and Table 10, we have found that the standard errors of the coefficients are identical or close to each other. For example, the standard errors of LN3 are (0.001 versus 0.003) and LN10, (0.012 versus 0.022).

The coefficients of LN5 and LN10 are statistically significant and they affect the money supply at the 0.85 quantile level. The coefficient of LN3 is not statistically significant across the 0.85 quantile of the bootstrap quantile regression method. We have concluded that increase in the seasonally adjusted money supply, M2, was accompanied by decrease in the yields of the 5-year Treasury constant maturity and the 10-year Treasury constant maturity across the 0.85 quantile. In contrast, we found that increase in the money supply, M2, was not accompanied by decrease in the yields of the 3-month Treasury constant maturity by using the bootstrap method.

The other statistics such as the adjusted R-squared, the Quasi –LR statistic and the probability of the (Quasi-LR stat) have very close or identical numerical values as in Table 9. Specifically, the adjusted R-squared is very low and the numeric value

396

Page 397: Introduction to Econometrics 2

that explains the variation of the independent variables that have in the dependent variable is only 4.8%. The Quasi –LR statistic is 11.81 and the probability of the (Quasi-LR stat) is accounted as 0.008, which is statistically significant, as it is below the 5% significance level. The minimized value of equation (3) and (4) are reported in the objective function. The value recorded is 0.23. We have also tested the residuals of the quantile regression and we have found that it is stationary. The Augmented Dickey – Fuller test statistic was - 4.87, which is smaller than the critical values at the one,(-3.45), five,(-2.87) and ten percent,(-2.57). Thus, we can reject the null hypothesis namely the existence of a unit root in the residual series.

Table 11 and Graph 2 displays the forecasting results of the quantile regression. The quantile to estimate is 0.85 and the sample covers the period 01/01/1990 to 01/01/2013.

-.010

-.005

.000

.005

.010

.015

.020

.025

.030

50 100 150 200 250

LNM2F ± 2 S.E.

Forecast: LNM2FActual: LNM2Forecast sample: 2 277Included observations: 276

Root Mean Squared Error 0.004907Mean Absolute Error 0.004060Mean Abs. Percent Error 604.1479Theil Inequality Coefficient 0.362679 Bias Proportion 0.473298 Variance Proportion 0.238381 Covariance Proportion 0.288321

Source: Author’s calculation based on EViews software.

According to Table 11 and Graph 2, the root mean squared error, (RMSE), and mean absolute error, (MAE), have a low value of 0.005 and 0.004. Theil inequality coefficient should be between zero and one. In our case, it is 0.36, which shows that the model is good fit. The bias proportion is 0.47 and the value of the covariance proportion is 0.29.

Table 12 displays the results of the Quantile regression, (including LAD). The method that we have chosen for computing the coefficient covariances is the Huber Sandwich and the method for computing the scalar sparsity is the Kernel residual. The dependent variable is LNM2, which represents the logarithmic monthly returns of the seasonally adjusted money supply, M2. The independent or explanatory variables are LN10, which shows natural logarithmic monthly returns of the 10-year Treasury constant maturity. LN3 represents natural logarithmic monthly returns of the 3-month Treasury constant maturity. LN5 represents logarithmic monthly returns of the 5-year Treasury constant maturity. The quantile to estimate is 0.90 and the quantile regression covers the period 01/01/1990 to 01/01/2013.

Dependent Variable: LNM2

397

Page 398: Introduction to Econometrics 2

Method: Quantile Regression (tau = 0.9)Date: 09/15/13 Time: 18:21Sample: 2 277Included observations: 276Huber Sandwich Standard Errors & CovarianceSparsity method: Kernel (Epanechnikov) using residualsBandwidth method: Hall-Sheather, bw=0.053141Estimation successfully identifies unique optimal solution

Variable Coefficient Std. Error t-Statistic Prob.

C 0.008283 0.000353 23.46179 0.0000LN3 -0.004340 0.000944 -4.598709 0.0000LN5 -0.034227 0.006669 -5.132085 0.0000

LN10 0.041775 0.010856 3.848001 0.0001

Pseudo R-squared 0.102578    Mean dependent var 0.004323Adjusted R-squared 0.092680    S.D. dependent var 0.003743S.E. of regression 0.005458    Objective 0.183094Quantile dependent var 0.008045    Objective (const. only) 0.204022Sparsity 0.018321    Quasi-LR statistic 25.38398Prob(Quasi-LR stat) 0.000013

Source: Author’s calculation based on EViews softwareSignificant results at the 5% significance level.

Our estimates are based on the 0.90 quantile that displays the relationship of the dependent variable, LNM2, which represents the logarithmic monthly returns of the seasonally adjusted money supply, M2, in relation to the explanatory variables. LN10, shows the natural logarithmic monthly returns of the 10-year Treasury constant maturity. LN3 represents natural logarithmic monthly returns of the 3-month Treasury constant maturity. LN5 represents logarithmic monthly returns of the 5-year Treasury constant maturity. According to Table 12, all the coefficients of the quantile regression are significant at the 5% significance level. The coefficient of the constant is 0.008, the t-statistic is 23.46 and the p-probability is 0.0000. The coefficients of LN3, LN5 and LN10 are statistically significant and based on the 0.90 quantile, they affect the seasonally adjusted money supply, M2. We have concluded that increase in the seasonally adjusted money supply, M2, was accompanied by decrease in the yields of the 3-month Treasury constant maturity, the 5-year Treasury constant maturity and the 10-year Treasury constant maturity across the 0.90 quantile. The bandwidth uses a value of 0.0531. The adjusted R-squared is very low and the numeric value that explains the variation of the independent variables that have in the dependent variable is 9.3%. The Quasi –LR statistic is 25.38 and the probability of the (Quasi-LR stat) is accounted as 0.0000, which is statistically significant, as it is below the 5% significance level. The minimized value of equation (3) and (4) are reported in the objective function. The value recorded is 0.18. The sparsity is 0.018.

We have also tested the residuals of the quantile regression and we have found that it is stationary. The Augmented Dickey – Fuller test statistic was – 11.92, which is smaller than the critical values at the one,(-3.45), five,(-2.87) and ten percent,(-2.57). Thus, we can reject the null hypothesis namely the existence of a unit root in the residual series.

398

Page 399: Introduction to Econometrics 2

Table 13 displays the results of the Quantile regression, (including LAD). The method that we have chosen for computing the coefficient covariances is the Bootstrap standard errors and covariance and the method for computing the scalar sparsity is the Siddiqui using fitted quantiles. The dependent variable is LNM2, which represents the logarithmic monthly returns of the seasonally adjusted money supply, M2. The independent or explanatory variables are LN10, which shows natural logarithmic monthly returns of the 10-year Treasury constant maturity. LN3 represents natural logarithmic monthly returns of the 3-month Treasury constant maturity. LN5 represents logarithmic monthly returns of the 5-year Treasury constant maturity. The quantile to estimate is 0.90 and the quantile regression covers the period 01/01/1990 to 01/01/2013.

Dependent Variable: LNM2Method: Quantile Regression (tau = 0.9)Date: 09/16/13 Time: 15:26Sample: 2 277Included observations: 276Bootstrap Standard Errors & CovarianceBootstrap method: XY-pair, reps=500, rng=kn, seed=2001381543Sparsity method: Siddiqui using fitted quantilesBandwidth method: Hall-Sheather, bw=0.053141

399

Page 400: Introduction to Econometrics 2

Estimation successfully identifies unique optimal solution

Variable Coefficient Std. Error t-Statistic Prob.

C 0.008283 0.000430 19.25999 0.0000LN3 -0.004340 0.002686 -1.616076 0.1072LN5 -0.034227 0.012516 -2.734529 0.0067

LN10 0.041775 0.021145 1.975684 0.0492

Pseudo R-squared 0.102578    Mean dependent var 0.004323Adjusted R-squared 0.092680    S.D. dependent var 0.003743S.E. of regression 0.005458    Objective 0.183094Quantile dependent var 0.008045    Objective (const. only) 0.204022Sparsity 0.027394    Quasi-LR statistic 16.97713Prob(Quasi-LR stat) 0.000714

Source: Author’s calculation based on EViews softwareSignificant results at the 5% significance level.

The purpose of running this quantile regression that calculate the covariance matrix based on XY bootstrap pair and uses as a sparsity method the Siddiqui fitted quantiles is to check, if there is substantial deviations of the standard errors and the QLR statistic from the quantile regression reported in Table 12. We have used a maximum number of replications of 500 iterations. The quantile regression in, Table 12, uses as the covariance matrix the Huber Sandwich and the scalar sparsity is the Kernel residual. By comparing Table 12 and Table 13, we have found that the standard errors of the coefficients are identical or close to each other. For example, the standard errors LN5 are (0.007 versus 0.013) and the standard errors of LN10, (0.011 versus 0.021).

The coefficients of LN5 and LN10 are statistically significant and they affect the money supply at the 0.90 quantile level. The coefficient of LN3 is not statistically significant across the 0.90 quantile of the bootstrap quantile regression method. We have concluded that increase in the seasonally adjusted money supply, M2, was accompanied by decrease in the yields of the 5-year Treasury constant maturity and the 10-year Treasury constant maturity across the 0.90 quantile. In contrast, we found that increase in the money supply, M2, was not accompanied by decrease in the yields of the 3-month Treasury constant maturity by using the bootstrap method.

The other statistics such as the adjusted R-squared, the Quasi –LR statistic and the probability of the (Quasi-LR stat) have very close or identical numerical values as in Table 12. Specifically, the adjusted R-squared is very low and the numeric value that explains the variation of the independent variables that have in the dependent variable is 9.3%. The Quasi –LR statistic is 16.98 and the probability of the (Quasi-LR stat) is accounted as 0.0007, which is statistically significant, as it is below the 5% significance level. The minimized value of equation (3) and (4) are reported in the objective function. The value recorded is 0.18. The sparsity value of the bootstrap method is 0.03 in relation to 0.02 Huber Sandwich method.

We have also tested the residuals of the quantile regression and we have found that it is stationary. The Augmented Dickey – Fuller test statistic was – 11.92, which is smaller than the critical values at the one,(-3.45), five,(-2.87) and ten

400

Page 401: Introduction to Econometrics 2

percent,(-2.57). Thus, we can reject the null hypothesis namely the existence of a unit root in the residual series.

Table 14 and Graph 3 display the forecasting results of the quantile regression. The quantile to estimate is 0.90 and the sample covers the period 01/01/1990 to 01/01/2013.

-.010

-.005

.000

.005

.010

.015

.020

.025

.030

50 100 150 200 250

LNM2F ± 2 S.E.

Forecast: LNM2FActual: LNM2Forecast sample: 2 277Included observations: 276

Root Mean Squared Error 0.005418Mean Absolute Error 0.004606Mean Abs. Percent Error 666.8042Theil Inequality Coefficient 0.379729 Bias Proportion 0.572191 Variance Proportion 0.170073 Covariance Proportion 0.257736

Source: Author’s calculation based on EViews software.

According to Table 14 and Graph 3, the root mean squared error, (RMSE), and mean absolute error, (MAE), have a low value of 0.005 and 0.005. Theil inequality coefficient should be between zero and one. In our case, it is 0.38, which shows that the model is good fit. The bias proportion is 0.57 and the value of the covariance proportion is 0.26.

Table 15 displays the results of the Quantile regression, (including LAD). The method that we have chosen for computing the coefficient covariances is the Huber Sandwich and the method for computing the scalar sparsity is the Kernel residual. The dependent variable is LNM2, which represents the logarithmic monthly returns of the seasonally adjusted money supply, M2. The independent or explanatory variables are LN10, which shows natural logarithmic monthly returns of the 10-year Treasury constant maturity. LN3 represents natural logarithmic monthly returns of the 3-month Treasury constant maturity. LN5 represents logarithmic monthly returns of the 5-year Treasury constant maturity. The quantile to estimate is 0.95 and the quantile regression covers the period 01/01/1990 to 01/01/2013.

Dependent Variable: LNM2Method: Quantile Regression (tau = 0.95)Date: 09/15/13 Time: 18:22Sample: 2 277Included observations: 276Huber Sandwich Standard Errors & CovarianceSparsity method: Kernel (Epanechnikov) using residualsBandwidth method: Hall-Sheather, bw=0.032598Estimation successfully identifies unique optimal solution

401

Page 402: Introduction to Econometrics 2

Variable Coefficient Std. Error t-Statistic Prob.

C 0.010311 0.000624 16.52067 0.0000LN3 -0.003782 0.000909 -4.159674 0.0000LN5 -0.023802 0.006310 -3.772213 0.0002

LN10 0.009016 0.010455 0.862411 0.3892

Pseudo R-squared 0.168511    Mean dependent var 0.004323Adjusted R-squared 0.159340    S.D. dependent var 0.003743S.E. of regression 0.007229    Objective 0.116870Quantile dependent var 0.009810    Objective (const. only) 0.140555Sparsity 0.039346    Quasi-LR statistic 25.34616Prob(Quasi-LR stat) 0.000013

Source: Author’s calculation based on EViews softwareSignificant results at the 5% significance level.

According to Table 15, all the coefficients of the quantile regression are significant at the 5% significance level. The coefficient of the constant is 0.010, the t-statistic is 16.52 and the p-probability is 0.0000. The coefficients of LN3 and LN5 are statistically significant and based on the 0.95 quantile, they do affect the seasonally adjusted money supply, M2. The coefficient of the LN10 is not statistically significant as the p-value is 0.39, which is above the 5% significance level. We conclude that increase in the seasonally adjusted money supply, M2, was accompanied by decrease in the yields of the 3-month Treasury constant maturity and the 5-year Treasury constant maturity across the 0.95 quantile, but not for the 10-year Treasury constant maturity. The bandwidth uses a value of 0.032598. The numeric value of the adjusted R-squared that explains the variation of the independent variables that have in the dependent variable is 15.93%. The Quasi –LR statistic is 25.35 and the probability of the (Quasi-LR stat) is accounted as 0.0000, which is statistically significant, as it is below the 5% significance level. The minimized value of equation (3) and (4) are reported in the objective function. The value recorded is 0.12. We have also tested the residuals of the quantile regression and we have found that it is stationary. The Augmented Dickey – Fuller test statistic was – 12.35, which is smaller than the critical values at the one,(-3.45), five,(-2.87) and ten percent,(-2.57). Thus, we can reject the null hypothesis namely the existence of a unit root in the residual series.

Table 16 displays the results of the Quantile regression, (including LAD). The method that we have chosen for computing the coefficient covariances is the Bootstrap standard errors and covariance and the method for computing the scalar sparsity is the Siddiqui using fitted quantiles. The dependent variable is LNM2, which represents the logarithmic monthly returns of the seasonally adjusted money supply, M2. The independent or explanatory variables are LN10, which shows natural logarithmic monthly returns of the 10-year Treasury constant maturity. LN3 represents natural logarithmic monthly returns of the 3-month Treasury constant maturity. LN5 represents logarithmic monthly returns of the 5-year Treasury constant maturity. The quantile to estimate is 0.95 and the quantile regression covers the period 01/01/1990 to 01/01/2013.

Dependent Variable: LNM2Method: Quantile Regression (tau = 0.95)Date: 09/16/13 Time: 15:29

402

Page 403: Introduction to Econometrics 2

Sample: 2 277Included observations: 276Bootstrap Standard Errors & CovarianceBootstrap method: XY-pair, reps=500, rng=kn, seed=2001381543Sparsity method: Siddiqui using fitted quantilesBandwidth method: Hall-Sheather, bw=0.032598Estimation successfully identifies unique optimal solution

Variable Coefficient Std. Error t-Statistic Prob.

C 0.010311 0.000760 13.57214 0.0000LN3 -0.003782 0.002181 -1.733744 0.0841LN5 -0.023802 0.015362 -1.549384 0.1225

LN10 0.009016 0.025571 0.352599 0.7247

Pseudo R-squared 0.168511    Mean dependent var 0.004323Adjusted R-squared 0.159340    S.D. dependent var 0.003743S.E. of regression 0.007229    Objective 0.116870Quantile dependent var 0.009810    Objective (const. only) 0.140555Sparsity 0.072621    Quasi-LR statistic 13.73249Prob(Quasi-LR stat) 0.003293

Source: Author’s calculation based on EViews softwareSignificant results at the 5% significance level.

The purpose of running this quantile regression that calculate the covariance matrix based on XY bootstrap pair and uses as a sparsity method the Siddiqui fitted quantiles is to check, if there is substantial deviations of the standard errors and the QLR statistic from the quantile regression reported in Table 15. We have used a maximum number of replications of 500 iterations. By comparing Table 15 and Table 16, we have found that the standard errors of the coefficients are identical or close to each other. For example, the standard errors of LN3 are (0.001 versus 0.002).

The coefficients of LN3, LN5 and LN10 are not statistically significant as the p-values are 0.08, 0.12 and 0.72 respectively. They do not affect the money supply at the 0.95 quantile level. We conclude that increase in the seasonally adjusted money supply, M2, was not accompanied by decrease in the yields of the 3-month Treasury with constant maturity, 5-year Treasury constant maturity and the 10-year Treasury constant maturity across the 0.95 quantile by using the bootstrap covariance method.

Similarly, the other statistics such as the adjusted R-squared, the Quasi –LR statistic and the probability of the (Quasi-LR stat) have very close or identical numerical values as in Table 15. Specifically, the adjusted R-squared is very low and the numeric value that explains the variation of the independent variables that have in the dependent variable is 15.93%. The Quasi –LR statistic is 13.73 and the probability of the (Quasi-LR stat) is accounted as 0.003, which is statistically significant, as it is below the 5% significance level. The minimized value of equation (3) and (4) are reported in the objective function. The value recorded is 0.12. We have also tested the residuals of the quantile regression and we have found that it is stationary. The Augmented Dickey – Fuller test statistic was – 12.35, which is smaller than the critical values at the one,(-3.45), five,(-2.87) and

403

Page 404: Introduction to Econometrics 2

ten percent,(-2.57). Thus, we can reject the null hypothesis namely the existence of a unit root in the residual series.

Table 17 and Graph 4 display the forecasting results of the quantile regression. The quantile to estimate is 0.95 and the sample covers the period 01/01/1990 to 01/01/2013.

-.02

-.01

.00

.01

.02

.03

.04

50 100 150 200 250

LNM2F ± 2 S.E.

Forecast: LNM2FActual: LNM2Forecast sample: 2 277Included observations: 276

Root Mean Squared Error 0.007177Mean Absolute Error 0.006430Mean Abs. Percent Error 855.3818Theil Inequality Coefficient 0.436872 Bias Proportion 0.747150 Variance Proportion 0.059089 Covariance Proportion 0.193761

Source: Author’s calculation based on EViews software.

According to Table 17 and Graph 4, the root mean squared error, (RMSE), and mean absolute error, (MAE), have a low value of 0.007 and 0.006. Theil inequality coefficient should be between zero and one. In our case, it is 0.44, which shows that the model is good fit. The bias proportion is 0.75 and the value of the covariance proportion is 0.19.

Graph 5 and Table 18 show the process coefficients of the independent variables that are estimated across 0.5, 0.85, 0.90 and 0.95 quantiles. The independent or explanatory variables are LN10, which shows natural logarithmic monthly returns of the 10-year Treasury constant maturity. LN3 represents natural logarithmic monthly returns of the 3-month Treasury constant maturity. LN5 represents logarithmic monthly returns of the 5-year Treasury constant maturity and the constant C of the quantile regression. The sample covers the period 01/01/1990 to 01/01/2013.

404

Page 405: Introduction to Econometrics 2

.002

.004

.006

.008

.010

.012

0.0 0.2 0.4 0.6 0.8 1.0

Quantile

C

-.008

-.006

-.004

-.002

.000

.002

0.0 0.2 0.4 0.6 0.8 1.0

Quantile

LN3

-.05

-.04

-.03

-.02

-.01

.00

.01

0.0 0.2 0.4 0.6 0.8 1.0

Quantile

LN5

-.04

-.02

.00

.02

.04

.06

.08

0.0 0.2 0.4 0.6 0.8 1.0

Quantile

LN10

Quantile Process Estimates (95% CI)

Source: Author’s calculation based on EViews softwareSignificant results at the 5% significance level.

Graph 5 shows the results of the constant and the explanatory variables across 0.5, 0.85, 0.90 and 0.95 quantiles with 95% confidence level. The coefficients estimates of the constant show a positive relationship between the quantile value and the estimated coefficients. The LN3, represents the natural logarithmic monthly returns of the 3-month Treasury constant maturity. The coefficients show a negative relationship between the quantile value and the estimated coefficients. LN10, represents the natural logarithmic monthly returns of the 10-year Treasury constant maturity. The coefficients show a positive relationship between the quantile value and the estimated coefficients. LN5, represents logarithmic monthly returns of the 5-year Treasury constant maturity. The coefficients show a negative relationship between the quantile value and the estimated coefficients

Table 18 shows the process coefficients of the independent variables that are estimated across 0.5, 0.85, 0.90 and 0.95 quantiles. The independent or explanatory variables are LN10, which shows natural logarithmic monthly returns of the 10-year Treasury constant maturity. LN3 represents natural logarithmic monthly returns of the 3-month Treasury constant maturity.

405

Page 406: Introduction to Econometrics 2

LN5 represents logarithmic monthly returns of the 5-year Treasury constant maturity and the constant C of the quantile regression. The sample covers the period 01/01/1990 to 01/01/2013.

Quantile Process EstimatesEquation: EQ01Specification: LNM2 C LN3 LN5 LN10

Quantile Coefficient Std. Error t-Statistic Prob.

C 0.500 0.004008 0.000224 17.86485 0.00000.850 0.007587 0.000354 21.45546 0.00000.900 0.008283 0.000353 23.46179 0.00000.950 0.010311 0.000624 16.52067 0.0000

LN3 0.500 -0.001003 0.001216 -0.824815 0.41020.850 -0.003956 0.001203 -3.288054 0.00110.900 -0.004340 0.000944 -4.598709 0.00000.950 -0.003782 0.000909 -4.159674 0.0000

LN5 0.500 -0.008029 0.008792 -0.913149 0.36200.850 -0.034039 0.007625 -4.464028 0.00000.900 -0.034227 0.006669 -5.132085 0.00000.950 -0.023802 0.006310 -3.772213 0.0002

LN10 0.500 0.007850 0.014024 0.559804 0.57610.850 0.045355 0.012385 3.661991 0.00030.900 0.041775 0.010856 3.848001 0.00010.950 0.009016 0.010455 0.862411 0.3892

Source: Author’s calculation based on EViews softwareSignificant results at the 5% significance level.

Table 18, shows the results of the constant and the explanatory variables across 0.5, 0.85, 0.90 and 0.95 quantiles with 95% confidence level. The coefficients estimates of the constant are all positive with significant t-statistics and p-values for all quantiles.The LN3, which represents the natural logarithmic monthly returns of the 3-month Treasury constant maturity displays negative coefficients across all quantiles. All the t-statistics are significant at the 5% significance level. The only coefficient that is not significant is across the 0.5 quantile. It has a coefficient of -0.001, a t-statistic of -0.82 and a p-value of 0.41, which is greater than the 5% significance level. LN10, which represents the natural logarithmic monthly returns of the 10-year Treasury constant maturity shows positive coefficients across the quantiles 0.5, 0.85, 0.90 and 0.95. The coefficients that are not significant are across the 0.5 and 0.95 quantile. They have coefficients of 0.008 and 0.009, a t-statistics of 0.56 and 0.86 and a p-values of 0.58 and 0.39, which are greater than the 5% significance level

Finally, LN5, which represents logarithmic monthly returns of the 5-year Treasury constant maturity shows negative coefficients across all quantiles. The t-statistics across the 0.5, 0.85, 0.90 and 0.95 quantiles are negative and statistically significant. The only coefficient that is not significant is across the 0.5 quantile. It has a coefficient of -0.008, a t-statistic of -0.91 and a p-value of 0.36, which is greater than the 5% significance level.

Table 19 shows Koenker and Bassett, (1982a), test for the equality of the slope coefficients across 0.5, 0.85, 0.90 and 0.95 quantiles. The dependent

406

Page 407: Introduction to Econometrics 2

variable is LNM2, which represents the logarithmic monthly returns of the seasonally adjusted money supply, M2. The independent or explanatory variables are LN10, which shows natural logarithmic monthly returns of the 10-year Treasury constant maturity. LN3 represents natural logarithmic monthly returns of the 3-month Treasury constant maturity. LN5 represents logarithmic monthly returns of the 5-year Treasury constant maturity. The sample covers the period 01/01/1990 to 01/01/2013.

Quantile Slope Equality Test
Equation: EQ01
Specification: LNM2 C LN3 LN5 LN10

Test Summary    Chi-Sq. Statistic    Chi-Sq. d.f.    Prob.
Wald Test       57.99703             9               0.0000

Restriction Detail: b(tau_h) - b(tau_k) = 0

Quantiles    Variable   Restr. Value   Std. Error   Prob.
0.5, 0.85    LN3         0.002953      0.001323     0.0256
             LN5         0.026010      0.008974     0.0038
             LN10       -0.037504      0.014327     0.0089
0.85, 0.9    LN3         0.000384      0.000734     0.6006
             LN5         0.000188      0.004719     0.9682
             LN10        0.003579      0.007697     0.6419
0.9, 0.95    LN3        -0.000558      0.000997     0.5754
             LN5        -0.010425      0.005653     0.0652
             LN10        0.032759      0.009752     0.0008

Source: Author's calculation based on EViews software.
Significant results at the 5% significance level.

According to Table 19, the χ²-statistic of the Wald test for quantile slope equality is 57.997, which is statistically significant at the 5% significance level. The coefficients differ across the quantile values 0.5, 0.85, 0.90 and 0.95, and the conditional quantiles are not identical. For example, LN3, which represents the natural logarithmic monthly returns of the 3-month Treasury constant maturity, shows a significant probability of 0.0256 between the 0.5 and 0.85 quantile levels. Similarly, LN5 and LN10 at the same quantile levels show significant p-values.

Table 20 displays the Newey and Powell (1987) conditional symmetric quantiles test across the 0.5, 0.85, 0.90 and 0.95 quantiles. The dependent variable is LNM2, which represents the logarithmic monthly returns of the seasonally adjusted money supply, M2. The independent or explanatory variables are LN10, the natural logarithmic monthly returns of the 10-year Treasury constant maturity; LN3, the natural logarithmic monthly returns of the 3-month Treasury constant maturity; and LN5, the natural logarithmic monthly returns of the 5-year Treasury constant maturity. The sample covers the period 01/01/1990 to 01/01/2013.


Symmetric Quantiles Test
Equation: EQ01
Specification: LNM2 C LN3 LN5 LN10
Test statistic compares all coefficients

Test Summary    Chi-Sq. Statistic    Chi-Sq. d.f.    Prob.
Wald Test       25.65211             12              0.0120

Restriction Detail: b(tau) + b(1-tau) - 2*b(.5) = 0

Quantiles     Variable   Restr. Value   Std. Error   Prob.
0.05, 0.95    C           0.001630      0.000722     0.0239
              LN3        -0.004725      0.007465     0.5268
              LN5         0.007334      0.022120     0.7402
              LN10       -0.014298      0.037471     0.7028
0.1, 0.9      C           0.000430      0.000506     0.3945
              LN3        -0.005961      0.006734     0.3761
              LN5        -0.020611      0.025417     0.4174
              LN10        0.039757      0.039270     0.3113
0.15, 0.85    C           0.000710      0.000457     0.1200
              LN3        -0.005297      0.002459     0.0312
              LN5        -0.030577      0.020740     0.1404
              LN10        0.050049      0.028188     0.0758

Source: Author's calculation based on EViews software.
Significant results at the 5% significance level.

According to Table 20, the χ²-statistic of the Wald test for symmetric quantiles is 25.65, which is statistically significant at the 5% significance level; with a p-value of 0.012, there is evidence of departures from symmetry. Most of the individual coefficient restriction tests, however, show no evidence of asymmetry. For example, the p-value for LN3 across the 0.1 and 0.9 quantiles is not significant at 0.38, and the p-value for LN5 across the 0.15 and 0.85 quantiles is not significant at 0.14.

Section 3 summarizes and concludes.

In this article, we have attempted to model the effects of a macroeconomic variable, namely the natural logarithmic returns of the seasonally adjusted money supply (M2), on the natural logarithmic monthly returns of the US term structure of interest rates. We have applied a quantile regression (including LAD) in EViews. The purpose was to test across which quantiles the US Federal Reserve sets monetary policy in relation to interest rates.


We have found that the Jarque-Bera χ² statistics for all variables are highly significant at the 5% significance level. We have rejected the null hypothesis, H0, in favour of the alternative, H1. All the variables were found to be stationary.

The quantile regression model with the lowest root mean squared error and mean absolute error was the one with quantile estimation 0.5. It also had the lowest bias and covariance proportions, 0.0056 (0.56%) and 0.1548 (15.48%) respectively, and therefore the lowest systematic error. The Theil inequality coefficient lies between zero and one; in our case it is 0.37, which suggests a reasonable fit. The closer this value is to zero, the better the fit, as a smaller value means a smaller gap between the actual and predicted values.
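For reference, a sketch of the standard definition of the Theil inequality coefficient, as reported in EViews forecast evaluation output, with $y_t$ the actual and $\hat{y}_t$ the predicted value over $T$ periods:

$U = \dfrac{\sqrt{\frac{1}{T}\sum_{t=1}^{T}(\hat{y}_t - y_t)^2}}{\sqrt{\frac{1}{T}\sum_{t=1}^{T}\hat{y}_t^{2}} + \sqrt{\frac{1}{T}\sum_{t=1}^{T}y_t^{2}}}$

The numerator is the root mean squared forecast error, so U equals zero only when the forecasts match the actual values exactly.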

We have used two methods to estimate the covariance matrix of the quantile regression. The purpose of using both methods is to check whether there are substantial deviations in the standard errors and the QLR statistic between the two methods. The first methodology is based on the Huber sandwich standard errors and covariance, with the scalar sparsity computed by the kernel residual method. The second methodology is based on bootstrap standard errors and covariance, with the scalar sparsity computed by the Siddiqui (mean fitted) method.
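A rough Python analogue of the first methodology uses statsmodels' QuantReg, whose fit method offers a vcov='robust' (Huber sandwich) covariance with kernel and bandwidth options; the bootstrap covariance would have to be coded separately by resampling. The arrays y and X below are hypothetical stand-ins for LNM2 and the regressors LN3, LN5 and LN10:

    import numpy as np
    import statsmodels.api as sm

    # Hypothetical stand-ins for LNM2 and the regressors LN3, LN5, LN10
    rng = np.random.default_rng(3)
    X = sm.add_constant(rng.normal(size=(277, 3)))
    y = 0.004 + X[:, 1:] @ np.array([-0.001, -0.008, 0.008]) + rng.normal(scale=0.004, size=277)

    model = sm.QuantReg(y, X)
    for q in (0.5, 0.85, 0.90, 0.95):
        # vcov='robust' is the Huber sandwich; kernel and bandwidth control the sparsity estimate
        res = model.fit(q=q, vcov='robust', kernel='epa', bandwidth='hsheather')
        print(f"q = {q:.2f}: coefficients {np.round(res.params, 4)}")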

By using both methods, we have found that the coefficients of LN3, LN5 and LN10 are not statistically significant at the 0.5 quantile, so they do not affect the seasonally adjusted money supply, M2, there. We have concluded that an increase in the seasonally adjusted money supply, M2, was not accompanied by a decrease in the yields of the 3-month, 5-year and 10-year Treasury constant maturities across the 0.5 quantile.

Using the Huber sandwich standard errors and covariance method, the coefficients of LN3, LN5 and LN10 are statistically significant at the 0.85 quantile and do affect the seasonally adjusted money supply, M2. We have concluded that, under this first method, an increase in the seasonally adjusted money supply, M2, was accompanied by a decrease in the yields of the 3-month, 5-year and 10-year Treasury constant maturities across the 0.85 quantile.

Using the bootstrap standard errors and covariance method, the coefficients of LN5 and LN10 are statistically significant and affect the money supply at the 0.85 quantile, while the coefficient of LN3 is not statistically significant. We have concluded that an increase in the seasonally adjusted money supply, M2, was accompanied by a decrease in the yields of the 5-year and 10-year Treasury constant maturities across the 0.85 quantile, but not by a decrease in the yield of the 3-month Treasury constant maturity.

The coefficients of LN3, LN5 and LN10 are statistically significant at the 0.90 quantile and affect the seasonally adjusted money supply, M2. We have concluded that, using the Huber sandwich standard errors and covariance method, an increase in the seasonally adjusted money supply, M2, was accompanied by a decrease in the yields of the 3-month, 5-year and 10-year Treasury constant maturities across the 0.90 quantile.

Using the bootstrap standard errors and covariance method, the coefficients of LN5 and LN10 are statistically significant and affect the money supply at the 0.90 quantile, while the coefficient of LN3 is not statistically significant. We have concluded that an increase in the seasonally adjusted money supply, M2, was accompanied by a decrease in the yields of the 5-year and 10-year Treasury constant maturities across the 0.90 quantile, but not by a decrease in the yield of the 3-month Treasury constant maturity.

The coefficients of LN3 and LN5 are statistically significant at the 0.95 quantile and do affect the seasonally adjusted money supply, M2. The coefficient of LN10 is not statistically significant, as its p-value of 0.39 is above the 5% significance level. We have concluded that an increase in the seasonally adjusted money supply, M2, was accompanied by a decrease in the yields of the 3-month and 5-year Treasury constant maturities across the 0.95 quantile, but not of the 10-year Treasury constant maturity.

Using the bootstrap covariance method, the coefficients of LN3, LN5 and LN10 are not statistically significant, as their p-values are 0.08, 0.12 and 0.72 respectively; they do not affect the money supply at the 0.95 quantile. We have concluded that, under this method, an increase in the seasonally adjusted money supply, M2, was not accompanied by a decrease in the yields of the 3-month, 5-year and 10-year Treasury constant maturities across the 0.95 quantile.

Finally, we have found that the coefficients differ across the quantile values 0.5, 0.85, 0.90 and 0.95 and that the conditional quantiles are not identical. There is evidence of departures from symmetry, as the p-value of the symmetric quantiles test is 0.012, although most of the individual coefficient restriction tests show no evidence of asymmetry.

References

Bassett, G., Jr., and Koenker, R. (1982), "An Empirical Quantile Function for Linear Models with i.i.d. Errors", Journal of the American Statistical Association, 77(378), pp. 407-415.

Buchinsky, M. (1995), "Estimating the Asymptotic Covariance Matrix for Quantile Regression Models: A Monte Carlo Study", Journal of Econometrics, 68, pp. 303-338.

Chamberlain, G. (1994), "Quantile Regression, Censoring and the Structure of Wages", in Advances in Econometrics, Christopher Sims, ed., New York: Elsevier, pp. 171-209.

EViews 6 (2007), "User's Guide II", Quantitative Micro Software, pp. 271, 273, 274.

Falk, M. (1986), "On the Estimation of the Quantile Density Function", Statistics and Probability Letters, 4, pp. 69-73.

He, X., and Hu, F. (2002), "Markov Chain Marginal Bootstrap", Journal of the American Statistical Association, 97(459), pp. 783-795.

Hendricks, W., and Koenker, R. (1992), "Hierarchical Spline Models for Conditional Quantiles and the Demand for Electricity", Journal of the American Statistical Association, 87(417), pp. 58-68.

Jones, M.C. (1992), "Estimating Densities, Quantiles, Quantile Densities and Density Quantiles", Annals of the Institute of Statistical Mathematics, 44(4), pp. 721-727.

Kocherginsky, M., He, X., and Mu, Y. (2005), "Practical Confidence Intervals for Regression Quantiles", Journal of Computational and Graphical Statistics, 14(1), pp. 41-55.

Koenker, R., and Bassett, G., Jr. (1978), "Regression Quantiles", Econometrica, 46(1), pp. 33-50.

Koenker, R. (1994), "Confidence Intervals for Regression Quantiles", in Asymptotic Statistics, P. Mandl and M. Huskova, eds., New York: Springer-Verlag, pp. 349-359.

Koenker, R. (2005), "Quantile Regression", New York: Cambridge University Press.

Koenker, R., and Bassett, G., Jr. (1982a), "Robust Tests for Heteroskedasticity Based on Regression Quantiles", Econometrica, 50(1), pp. 43-62.

Koenker, R., and Hallock, K.F. (2001), "Quantile Regression", Journal of Economic Perspectives, 15(4), pp. 143-156.

Koenker, R., and Machado, J.A.F. (1999), "Goodness of Fit and Related Inference Processes for Quantile Regression", Journal of the American Statistical Association, 94(448), pp. 1296-1310.

Newey, W.K., and Powell, J.L. (1987), "Asymmetric Least Squares Estimation", Econometrica, 55(4), pp. 819-847.

Powell, J. (1986), "Censored Regression Quantiles", Journal of Econometrics, 32, pp. 143-155.

Siddiqui, M.M. (1960), "Distribution of Quantiles in Samples from a Bivariate Population", Journal of Research of the National Bureau of Standards - B, 64(3), pp. 145-150.

Welsh, A.H. (1988), "Asymptotically Efficient Estimation of the Sparsity Function at a Point", Statistics and Probability Letters, 6, pp. 427-432.

I have attached a short article to show the application of the Pedroni residual cointegration test in applied econometrics.

Application of a Pedroni residual cointegration test, and an unrestricted cointegration rank test in terms of trace and maximum eigenvalue of pooled data. Evidence from the US macroeconomic indicators in terms of GDP and industrial production.

Dr Michel Zaki Guirguis

Bournemouth University⁵
Institute of Business and Law
Fern Barrow
Poole, BH12 5BB, UK
Tel: 0030-210-9841550
Mobile: 0030-6982044429
Email: [email protected]

Biographical notes

I hold a PhD in Finance from Bournemouth University in the UK. I have worked for several multinational companies, including JP Morgan Chase and Interamerican Insurance and Investment Company in Greece. Through seminars, I learned how to manage and select the right mutual funds according to various clients' needs. I supported and assisted the team with a six sigma project and accounts reconciliation; applying six sigma statistical analysis at JP Morgan Chase is important for improving the efficiency of a department. Professor Philip Hardwick and I have published a chapter in a book entitled "International Insurance and Financial Markets: Global Dynamics and Local Contingencies", edited by Cummins and Venard at Wharton Business School (University of Pennsylvania in the US). I am working on several papers that focus on the financial services sector.

⁵ I left Bournemouth University in 2006. The author's permanent address is 94 Terpsichoris Road, Palaio Faliro, Post Code 17562, Athens, Greece.


Abstract

In this article, we have tested whether there is a long-run relationship, in pooled data in EViews 6, between two macroeconomic variables, namely US GDP and industrial production. The results of the Johansen cointegration tests, in terms of both the trace and the maximum eigenvalue tests, show that there is no long-run relationship between GDP and industrial production; the p-values for both statistics are not significant at the 5% significance level. We have then applied the Pedroni residual cointegration test and found that all the tests, in terms of the panel v-statistic, panel rho-statistic, panel PP-statistic and panel ADF-statistic for individual and common AR coefficients, show insignificant statistical values with insignificant probabilities. The sample evidence supports the hypothesis that the variables are not cointegrated and the residuals are therefore I(1). The dataset consists of annual cross-section data from 1980 to 2012, with 33 observations for GDP and 31 observations for industrial production. The data was obtained from the US Bureau of Economic Analysis (BEA) and the Federal Reserve Statistical Release Department.

Keywords: Pedroni residual cointegration test, unrestricted cointegration rank test in terms of trace and maximum eigenvalue, GDP, industrial production.


Introduction

GDP measures the value of the goods and services produced by the US economy during a time period. Measured as the sum of expenditures or purchases by households, government and businesses, it identifies the final goods and services purchased by them. It is the aggregate of consumption, investment, government spending and net exports of goods and services.

In 1980, US gross domestic product (GDP) was 2862.5 billion dollars, and in 1981 it was 3210.9 billion dollars, an increase of 348.4 billion dollars. From 1980 to 1990, GDP increased by 3117.1 billion dollars, and from 1991 to 2005 it recorded an increase of 6921.4 billion dollars. In 2009, it was 14417.9 billion dollars, and from 2010 to 2012 it rose from 14958.3 billion dollars to 16244.6 billion dollars, an increase of 1286.3 billion dollars.

In 1982, the US industrial production index (IP) stood at 48.280, and in 1989 at 61.636, an increase of 13.356 index points. From 1990 to 1995, industrial production recorded an increase of 9.53 index points, and from 2000 to 2007 an increase of 8.44%. From 2007 to 2008, industrial production decreased from 100.000 to 96.623, and in 2012 the figure was 97.042.

The rest of the paper is organized as follows. Section 1 describes the methodological issues and data explanations. Section 2 shows the results of statistics and econometrics tests and Section 3 summarizes and concludes.


1. Methodological issues and data explanations.

In this article, we have tested the relationship between the macroeconomic indicators by applying a Pedroni residual cointegration test and an unrestricted cointegration rank test in terms of trace and maximum eigenvalue. The application of pooled cross-section data in cointegration tests has attracted the attention of many academics, such as Arellano (1987), Baltagi (2001), Baltagi and Chang (1994), Beck and Katz (1995), Breitung (2000), Choi (2001), Davis (2002), Fisher (1932), Hadri (2000), Im, Pesaran and Shin (2003), Kao (1999), Levin, Lin and Chu (2002), Maddala and Wu (1999), Pedroni (1999), Pedroni (2004), Wansbeek and Kapteyn (1989), and Wooldridge (2002).

According to the EViews User's Guide II (p. 374), the Pedroni (Engle-Granger based) cointegration tests are based on testing the residuals of a spurious regression performed with I(1) variables. Assume the following regression:

$y_{i,t} = \alpha_i + \delta_i t + \beta_i x_{i,t} + \varepsilon_{i,t}$   (1)

where $y_{i,t}$ and $x_{i,t}$ are the dependent and independent variables, both integrated of order one, I(1). In our case, the dependent variable is US GDP and the independent variable is industrial production; $\alpha_i$ and $\delta_i$ are individual and trend effects. To test whether the residuals are I(1), an auxiliary regression is formulated as follows:

$\varepsilon_{i,t} = \rho_i \varepsilon_{i,t-1} + u_{i,t}$   (2)

According to equation (2), the null hypothesis of no cointegration is H0: ρ = 1, and the alternative hypothesis of stationary residuals is H1: ρ < 1. Pedroni (1995) proposed, in the residual cointegration results, Phillips-Perron and Dickey-Fuller tests of the individual AR coefficients.

The hypotheses that have been formulated and tested are as follows:

H0: The variables are cointegrated, then the residuals are I(0).

H1: The variables are not cointegrated, then the residuals are I(1).
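Pedroni's panel version of this residual-based test is an EViews procedure, but the underlying Engle-Granger logic for a single pair of I(1) series can be sketched in Python with statsmodels; the gdp and ip arrays below are hypothetical stand-ins for the annual series used in this article:

    import numpy as np
    from statsmodels.tsa.stattools import coint

    # Hypothetical I(1) series standing in for the GDP and IP data
    rng = np.random.default_rng(0)
    gdp = np.cumsum(rng.normal(size=33))
    ip = np.cumsum(rng.normal(size=33))

    # Engle-Granger test: regress gdp on ip, then test the residuals for a unit root
    t_stat, p_value, crit = coint(gdp, ip)
    print(f"t-statistic: {t_stat:.3f}, p-value: {p_value:.3f}")
    # A large p-value means we cannot reject the null of no cointegration,
    # i.e. the residuals behave as I(1).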

According to the EViews User's Guide II (p. 363), the VAR-based cointegration tests developed in Johansen (1991, 1995) consider a VAR of order p:


$y_t = A_1 y_{t-1} + \cdots + A_p y_{t-p} + B x_t + \varepsilon_t$   (3)

where $y_t$ is a k-vector of non-stationary I(1) variables, $x_t$ is a d-vector of deterministic variables, and $\varepsilon_t$ is a vector of innovations.

According to the EViews User's Guide II (p. 363), the VAR equation can be rewritten as:

$\Delta y_t = \Pi y_{t-1} + \sum_{i=1}^{p-1} \Gamma_i \Delta y_{t-i} + B x_t + \varepsilon_t$   (4)

where

$\Pi = \sum_{i=1}^{p} A_i - I, \qquad \Gamma_i = -\sum_{j=i+1}^{p} A_j$   (5)

To find the number of cointegrating vectors, Johansen (1991, 1995) used two test statistics. The first is the trace test. According to the EViews User's Guide II, it tests the null hypothesis of r cointegrating relations against the alternative of k cointegrating relations, where k is the number of endogenous variables, for r = 0, 1, ..., k-1. The alternative of k cointegrating relations corresponds to the case where none of the series has a unit root and a stationary VAR may be specified in terms of the levels of all of the series. The trace statistic for the null hypothesis of r cointegrating relations is computed as:

$LR_{tr}(r|k) = -T \sum_{i=r+1}^{k} \log(1-\lambda_i)$   (6)

where $\lambda_i$ is the i-th largest eigenvalue of the matrix $\Pi$ in equation (5).

The second block of the output reports the maximum eigenvalue statistic, which tests the null hypothesis of r cointegrating relations against the alternative of r+1 cointegrating relations. This test statistic is computed as:

$LR_{max}(r|r+1) = -T \log(1-\lambda_{r+1}) = LR_{tr}(r|k) - LR_{tr}(r+1|k)$   (7)

for r = 0, 1, ..., k-1.
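The trace and maximum eigenvalue statistics can be reproduced in Python with statsmodels' Johansen procedure; a minimal sketch, assuming data is a T x 2 array holding the two series (here hypothetical random walks):

    import numpy as np
    from statsmodels.tsa.vector_ar.vecm import coint_johansen

    # Hypothetical two-column array standing in for the GDP and IP series
    rng = np.random.default_rng(1)
    data = np.cumsum(rng.normal(size=(33, 2)), axis=0)

    # k_ar_diff=1 lagged difference, matching the "Lags interval: 1 to 1" setting used later
    result = coint_johansen(data, det_order=0, k_ar_diff=1)
    print("trace statistics:     ", result.lr1)   # LR_tr(r|k) for r = 0, 1
    print("trace 5% critical:    ", result.cvt[:, 1])
    print("max-eigen statistics: ", result.lr2)   # LR_max(r|r+1) for r = 0, 1
    print("max-eigen 5% critical:", result.cvm[:, 1])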


According to the EViews User's Guide II (p. 495), models estimated using a pool object of cross-section or time-series data can be expressed mathematically as follows:

$Y_{it} = \alpha + X_{it}'\beta_{it} + \delta_i + \gamma_t + \varepsilon_{it}$   (8)

where $\alpha$ is the constant of the model, $Y_{it}$ is the dependent variable, $X_{it}$ is a k-vector of regressors, $\varepsilon_{it}$ are the error terms, and $\delta_i$ and $\gamma_t$ represent cross-section or period specific effects.

The data was obtained from the US Bureau of Economic Analysis (BEA) and the Federal Reserve Statistical Release Department. The industrial production index (IP) measures the real output of all manufacturing, mining, and electric and gas industries. Manufacturing consists of those industries included in the North American Industry Classification System (NAICS). The index is constructed from 312 individual series, covering market groups and industry groups. The current formula used to measure IP is the geometric mean of the change in output, calculated using the unit value estimate for the current month and the estimate for the previous month. Production indexes for a restricted number of industries are calculated by dividing estimated nominal output by a corresponding Fisher price index.

Gross domestic product (GDP) is the sum of personal consumption expenditures (PCE), gross private domestic investment (GPDI), net exports of goods and services (NEGS), and government consumption expenditures and gross investment (GCEGI). Specifically, personal consumption expenditures consist of durable and non-durable goods and services. Gross private domestic investment includes fixed investment (non-residential structures, equipment, intellectual property products and residential) and the change in private inventories. Net exports of goods and services is the difference between exports and imports of goods and services. Finally, government consumption expenditures and gross investment include federal (national defense and non-defense) and state and local spending. In this article, we use the aggregate GDP figure.

Descriptive statistics are displayed, and the Jarque-Bera statistic is analysed to test for normality. We have checked the stationarity of the series by applying a pooled unit root summary test and comparing the statistical values with the p-values.
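A sketch of the same pre-testing steps for a single series in Python (the panel versions reported in the tables below are EViews procedures); the series here is a hypothetical random walk:

    import numpy as np
    from scipy.stats import jarque_bera
    from statsmodels.tsa.stattools import adfuller

    # Hypothetical stand-in for one of the annual series (e.g. GDP)
    rng = np.random.default_rng(2)
    series = np.cumsum(rng.normal(size=33))

    jb_stat, jb_p = jarque_bera(series)        # H0: the series is normally distributed
    adf_stat, adf_p = adfuller(series)[:2]     # H0: the series has a unit root
    print(f"Jarque-Bera: {jb_stat:.2f} (p = {jb_p:.3f})")
    print(f"ADF in levels: {adf_stat:.2f} (p = {adf_p:.3f})")

    # First differences: rejecting the unit root here indicates the series is I(1)
    adf_d, adf_dp = adfuller(np.diff(series))[:2]
    print(f"ADF in first differences: {adf_d:.2f} (p = {adf_dp:.3f})")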


2. Statistics and econometrics tests.

Table 1 shows descriptive statistics and the Jarque-Bera normality test of US GDP, measured in billions of dollars, and the industrial production index for the period 1980 to 2012.

               GDP         IP
Mean           9185.219    76.56343
Median         8608.500    80.35430
Maximum        16244.60    100.0000
Minimum        3345.000    48.27980
Std. Dev.      4040.569    16.92745
Skewness       0.242217   -0.191180
Kurtosis       1.727786    1.496620
Jarque-Bera    2.393722    3.108202
Probability    0.302141    0.211379
Sum            284741.8    2373.466
Sum Sq. Dev.   4.90E+08    8596.152
Observations   31          31

Source: Author's calculation based on EViews software.
Significant p-value at 5% significance level.

We state the hypotheses as follows:

H0: GDP and industrial production are normally distributed.

H1: GDP and industrial production are not normally distributed.

According to Table 1, the Jarque-Bera χ² statistics of both variables are not significant at the 5% significance level. For example, GDP shows a χ² statistic of 2.39, which is not significant, as the p-value is 0.30; similarly, the χ² statistic of industrial production is 3.11, with a p-value of 0.21. The sample evidence suggests that we cannot reject H0 of normality. The distribution of each variable is only slightly positively or negatively skewed. Kurtosis, a measure of the peakedness of the distribution, is positive but below 3 for both variables; GDP, for example, has a platykurtic distribution. Finally, there is a large dispersion around the mean of both variables: for example, the mean of GDP is 9185.219 billion dollars and the standard deviation is 4040.569 billion dollars.


Table 2 shows the pool unit root test of US GDP and industrial production for the period 1980 to 2012.

Pool unit root test: Summary
Series: GDP, IP
Date: 09/28/13  Time: 18:50
Sample: 1980 2012
Exogenous variables: Individual effects
Automatic selection of maximum lags
Automatic lag length selection based on SIC: 0 to 1 and Bartlett kernel

Method                          Statistic   Prob.**   Cross-sections   Obs

Null: Unit root (assumes common unit root process)
Levin, Lin & Chu t*             1.43619     0.9245    2                61

Null: Unit root (assumes individual unit root process)
Im, Pesaran and Shin W-stat     2.42983     0.9924    2                61
ADF - Fisher Chi-square         0.83947     0.9331    2                61
PP - Fisher Chi-square          0.86593     0.9294    2                62

** Probabilities for Fisher tests are computed using an asymptotic Chi-square distribution. All other tests assume asymptotic normality.
Source: Author's calculation based on EViews software.
Significant p-value at 5% significance level.

According to Table 2 and the sample evidence, we cannot reject the null hypothesis of a unit root under both the common and the individual unit root process assumptions. All the statistical methods reported in the table (Levin, Lin & Chu t*; Im, Pesaran and Shin W-stat; ADF-Fisher Chi-square; and PP-Fisher Chi-square) display insignificant statistics and probabilities. For example, the ADF-Fisher Chi-square has a statistic of 0.84 with a probability of 0.933, which is evidence of a unit root. In other words, the GDP and industrial production series are not stationary in levels.


Table 3 shows the pool unit root test in first differences of US GDP and industrial production for the period 1980 to 2012.

Pool unit root test: Summary
Series: GDP, IP
Date: 09/28/13  Time: 18:51
Sample: 1980 2012
Exogenous variables: Individual effects
Automatic selection of maximum lags
Automatic lag length selection based on SIC: 0 and Bartlett kernel

Method                          Statistic   Prob.**   Cross-sections   Obs

Null: Unit root (assumes common unit root process)
Levin, Lin & Chu t*             -4.43262    0.0000    2                60

Null: Unit root (assumes individual unit root process)
Im, Pesaran and Shin W-stat     -3.41623    0.0003    2                60
ADF - Fisher Chi-square         18.5608     0.0010    2                60
PP - Fisher Chi-square          16.6499     0.0023    2                60

** Probabilities for Fisher tests are computed using an asymptotic Chi-square distribution. All other tests assume asymptotic normality.
Source: Author's calculation based on EViews software.

According to Table 3 and the sample evidence, we can reject the null hypothesis of a unit root under both the common and the individual unit root process assumptions. All the statistical methods reported in the table (Levin, Lin & Chu t*; Im, Pesaran and Shin W-stat; ADF-Fisher Chi-square; and PP-Fisher Chi-square) display significant statistics and probabilities. For example, the ADF-Fisher Chi-square has a statistic of 18.56 with a probability of 0.0010. In other words, GDP and industrial production are integrated of order one, I(1): their first differences are stationary.


Table 4 shows the trace and the maximum eigenvalue statistics of Johansen's cointegration test. GDP is gross domestic product and IP is industrial production; the adjusted sample covers 1984 to 2012.

Date: 09/28/13  Time: 19:58
Sample (adjusted): 1984 2012
Included observations: 29 after adjustments
Trend assumption: Linear deterministic trend
Series: GDP IP
Lags interval (in first differences): 1 to 1

Unrestricted Cointegration Rank Test (Trace)

Hypothesized No. of CE(s)   Eigenvalue   Trace Statistic   0.05 Critical Value   Prob.**
None                        0.194696     6.685121          15.49471              0.6145
At most 1                   0.013889     0.405601          3.841466              0.5242

Trace test indicates no cointegration at the 0.05 level
* denotes rejection of the hypothesis at the 0.05 level
** MacKinnon-Haug-Michelis (1999) p-values

Unrestricted Cointegration Rank Test (Maximum Eigenvalue)

Hypothesized No. of CE(s)   Eigenvalue   Max-Eigen Statistic   0.05 Critical Value   Prob.**
None                        0.194696     6.279520              14.26460              0.5776
At most 1                   0.013889     0.405601              3.841466              0.5242

Max-eigenvalue test indicates no cointegration at the 0.05 level
* denotes rejection of the hypothesis at the 0.05 level
** MacKinnon-Haug-Michelis (1999) p-values
Source: Author's calculation based on EViews software.

The hypotheses that have been formulated and tested are as follows:

H0: There is no cointegration or long-run relationship between the variables GDP and industrial production.

H1: There is cointegration or long-run relationship between the variables GDP and industrial production.

According to Table 4, the results of the Johansen cointegration tests, in terms of both the trace and the maximum eigenvalue tests, show that there is no long-run relationship between GDP and industrial production. The p-values for both statistics are not significant at the 5% significance level: the trace and the maximum eigenvalue statistics have values of 6.69 and 6.28, with probabilities of 0.61 and 0.58. Therefore, we do not reject H0, and we reject the alternative hypothesis H1.

Table 5 shows the Pedroni residual cointegration test. GDP is gross domestic product and IP is industrial production, for the period 1980 to 2012.

Pedroni Residual Cointegration Test
Series: GDP IP
Date: 09/28/13  Time: 18:54
Sample: 1980 2012
Included observations: 33
Cross-sections included: 2
Null Hypothesis: No cointegration
Trend assumption: No deterministic trend
User-specified lag length: 1
Newey-West automatic bandwidth selection and Bartlett kernel

Alternative hypothesis: common AR coefs. (within-dimension)
                                              Weighted
                     Statistic   Prob.        Statistic   Prob.
Panel v-Statistic    0.202948    0.4196       0.202948    0.4196
Panel rho-Statistic  0.719383    0.7640       0.719383    0.7640
Panel PP-Statistic   1.082106    0.8604       1.082106    0.8604
Panel ADF-Statistic  0.800634    0.7883       0.800634    0.7883

Alternative hypothesis: individual AR coefs. (between-dimension)
                     Statistic   Prob.
Group rho-Statistic  1.385023    0.9170
Group PP-Statistic   1.806748    0.9646
Group ADF-Statistic  1.472627    0.9296

Cross section specific results

Phillips-Perron results (non-parametric)
Cross ID   AR(1)   Variance    HAC         Bandwidth   Obs
_GDP       0.958   317712.5    502655.6    3.00        30
_IP        0.958   317712.5    502655.6    3.00        30

Augmented Dickey-Fuller results (parametric)
Cross ID   AR(1)   Variance    Lag   Max lag   Obs
_GDP       0.899   292781.4    1     --        29
_IP        0.899   292781.4    1     --        29

Source: Author's calculation based on EViews software.

The hypotheses that have been formulated and tested are as follows:

H0: The variables are cointegrated, then the residuals are I(0).

H1: The variables are not cointegrated, then the residuals are I(1).


According to Table 5, all the tests, in terms of the panel v-statistic, panel rho-statistic, panel PP-statistic and panel ADF-statistic for individual and common AR coefficients, show insignificant statistical values associated with insignificant probabilities. For example, the panel PP-statistic shows a statistical value of 1.08 with an insignificant p-value of 0.86; similarly, the group PP-statistic shows a statistical value of 1.81 with an insignificant p-value of 0.96. The sample evidence supports the hypothesis that the variables are not cointegrated, and therefore the residuals are I(1). Combined with Table 4, the results show that the macroeconomic variables are not cointegrated.

Section 3 summarizes and concludes.


In this article, we have tested whether there is a long-run relationship, using pooled cross-section data in EViews 6, between the macroeconomic variables US GDP and industrial production. We have applied a Pedroni residual cointegration test and an unrestricted cointegration rank test in terms of trace and maximum eigenvalue. The dataset consists of annual cross-section data from 1980 to 2012, with 33 observations for GDP and 31 observations for industrial production. The data was obtained from the US Bureau of Economic Analysis (BEA) and the Federal Reserve Statistical Release Department.

We have found that the Jarque-Bera χ² statistics of both variables are not significant at the 5% significance level. GDP and industrial production are integrated of order one, I(1). The results of the Johansen cointegration tests, in terms of both the trace and the maximum eigenvalue tests, show that there is no long-run relationship between GDP and industrial production; the p-values for both statistics are not significant at the 5% significance level.

We have then applied the Pedroni residual cointegration test. We have found that all the tests, in terms of the panel v-statistic, panel rho-statistic, panel PP-statistic and panel ADF-statistic for individual and common AR coefficients, show insignificant statistical values with insignificant probabilities. The sample evidence supports the hypothesis that the variables are not cointegrated, and therefore the residuals are I(1).


References

Arellano, M. (1987), "Computing Robust Standard Errors for Within-groups Estimators", Oxford Bulletin of Economics and Statistics, 49, pp. 431-434.

Baltagi, B.H. (2001), "Econometric Analysis of Panel Data", Second Edition, West Sussex, England: John Wiley and Sons.

Baltagi, B.H., and Chang, Y.J. (1994), "Incomplete Panels: A Comparative Study of Alternative Estimators for the Unbalanced One-way Error Component Regression Model", Journal of Econometrics, 62, pp. 67-89.

Beck, N., and Katz, J.N. (1995), "What to Do (and Not to Do) With Time-Series Cross-Section Data", American Political Science Review, 89(3), pp. 634-647.

Breitung, J. (2000), "The Local Power of Some Unit Root Tests for Panel Data", in Advances in Econometrics, Vol. 15: Nonstationary Panels, Panel Cointegration, and Dynamic Panels, Amsterdam: JAI Press, pp. 161-178.

Choi, I. (2001), "Unit Root Tests for Panel Data", Journal of International Money and Finance, 20, pp. 249-272.

Davis, P. (2002), "Estimating Multi-Way Error Components Models with Unbalanced Data Structures", Journal of Econometrics, 106, pp. 67-95.

EViews User's Guide II (2007), Quantitative Micro Software, pp. 363, 374, 495.

Fisher, R.A. (1932), "Statistical Methods for Research Workers", 4th Edition, Edinburgh: Oliver & Boyd.

Hadri, K. (2000), "Testing for Stationarity in Heterogeneous Panel Data", Econometric Journal, 3, pp. 148-161.

Im, K.S., Pesaran, M.H., and Shin, Y. (2003), "Testing for Unit Roots in Heterogeneous Panels", Journal of Econometrics, 115, pp. 53-74.

Johansen, S. (1991), "Estimation and Hypothesis Testing of Cointegration Vectors in Gaussian Vector Autoregressive Models", Econometrica, 59, pp. 1551-1580.

Johansen, S. (1995), "Likelihood-based Inference in Cointegrated Vector Autoregressive Models", Oxford: Oxford University Press.

Kao, C. (1999), "Spurious Regression and Residual-Based Tests for Cointegration in Panel Data", Journal of Econometrics, 90, pp. 1-44.

Levin, A., Lin, C.F., and Chu, C. (2002), "Unit Root Tests in Panel Data: Asymptotic and Finite-sample Properties", Journal of Econometrics, 108, pp. 1-24.

Maddala, G.S., and Wu, S. (1999), "A Comparative Study of Unit Root Tests with Panel Data and A New Simple Test", Oxford Bulletin of Economics and Statistics, 61, pp. 631-652.

Pedroni, P. (1999), "Critical Values for Cointegration Tests in Heterogeneous Panels with Multiple Regressors", Oxford Bulletin of Economics and Statistics, 61, pp. 653-670.

Pedroni, P. (2004), "Panel Cointegration: Asymptotic and Finite Sample Properties of Pooled Time Series Tests with an Application to the PPP Hypothesis", Econometric Theory, 20, pp. 597-625.

Wansbeek, T., and Kapteyn, A. (1989), "Estimation of the Error Components Model with Incomplete Panels", Journal of Econometrics, 41, pp. 341-361.

Wooldridge, J.M. (2002), "Econometric Analysis of Cross Section and Panel Data", Cambridge, MA: The MIT Press.

Resampling methods

Simulate share prices in Excel by using the functions rand() and NORMSINV(). The Monte Carlo method compared with the Black and Scholes method. The Monte Carlo method using antithetic variables.

The function rand() is used in Excel to generate a random number; it returns uniform random numbers in the range 0 to 1. The function NORMSINV() converts a uniform random number into a standard normal variable (in practice the values fall roughly between -3 and +3). By pressing F9, the random numbers, and with them the standard normal ones, are regenerated automatically.
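For readers working outside Excel, the same two-step transformation can be sketched in Python, where scipy's norm.ppf plays the role of NORMSINV():

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng()   # analogue of Excel's rand()
    u = rng.uniform(size=10)        # uniform random numbers in (0, 1)
    z = norm.ppf(u)                 # analogue of NORMSINV(): standard normal values
    print(np.column_stack([u, z]))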

Consider the following exercise related to share prices simulation. The following data are given:

Current share price: 25 USD
Time period: 10
Expected return: 5%
Volatility: 25%

Required: Using the functions rand() and NORMSINV(), construct simulated share prices for 10 observations.

Current share price   25
Expected return       0.05
Volatility            0.25

Days   Random     Standard normal   Simulated share price
1      0.555495    0.139556938      25.40
2      0.456186   -0.110047819      24.94
3      0.410829   -0.225412798      24.61
4      0.96409     1.800250776      33.90
5      0.53459     0.086812363      26.03
6      0.417113   -0.20928523       24.74
7      0.676326    0.457450615      28.49
8      0.412777   -0.22040649       24.77
9      0.097506   -1.295896311      19.23
10     0.814514    0.894654836      32.87

Insert in Excel in column B the function =rand() in each cell from 1 to 10; pressing F9 regenerates the numbers automatically. Insert in column C the function =NORMSINV(B1) to convert the random number into a standard normal one, and use the same function for cells 1 to 10. Insert in column D the following equation for the first cell:

S = Share price * exp[expected return * day1/10 + sig * standard normal * sqrt(day1/10)]

or

=$B$1*EXP($B$2*A5/10+$B$3*C5*SQRT(A5/10))

Then copy and paste the equation for the remaining cells. Every time you press F9, the numbers change, and the same applies to the chart: you will get a different layout every time you press F9.
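A compact Python version of the same worksheet, under the same assumptions (share price 25, expected return 5%, volatility 25%, 10 observations); like pressing F9, each run generates fresh random numbers, so the output will differ from the table above:

    import numpy as np
    from scipy.stats import norm

    s0, mu, sigma, n = 25.0, 0.05, 0.25, 10   # data from the exercise above

    rng = np.random.default_rng()
    u = rng.uniform(size=n)                   # column B: =rand()
    z = norm.ppf(u)                           # column C: =NORMSINV(B)
    t = np.arange(1, n + 1) / n               # day/10, as in the worksheet formula
    s = s0 * np.exp(mu * t + sigma * z * np.sqrt(t))   # column D: simulated share price

    for day, price in enumerate(s, start=1):
        print(f"day {day:2d}: {price:6.2f}")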

The following chart shows the simulated share prices in relation to days.

[Chart: Simulated share prices (y-axis: share prices, 0 to 40) plotted against days 1 to 10.]


Please consider the following Monte Carlo simulation problem and compare the value with the Black and Scholes call option value.

The formula to calculate the simulated share price is as follows:

S = Share price * exp[(r - q - 0.5σ²)T + ε * σ * sqrt(T)]

where ε is the standard normal value for each observation. The standard normal function is NORMSINV(rand()) and the random function is rand(); you will get different numbers each time you press F9. The drift is calculated from the formula (r - q - 0.5σ²)T and the volatility from the formula σ * sqrt(T).

The Monte Carlo value is found from the call option payoff table. The call option payoff for each observation is calculated as share price 1 - exercise price, share price 2 - exercise price, and so on; when the difference is negative, the call option has zero value and you input zero, i.e. the payoff is max(S - K, 0).

Once you have calculated the call option payoff table, you calculate the average and multiply it by the discount factor to get the Monte Carlo value. The discount factor is calculated from the formula exp(-r*T).
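A sketch of the same comparison in Python, using the inputs from the table below (share price 90, exercise 80, r = 0.07, q = 0.02, T = 0.5, σ = 0.3); with only 40 draws, the Monte Carlo value fluctuates around the Black and Scholes value of roughly 14.48:

    import numpy as np
    from scipy.stats import norm

    s0, k, r, q, t, sigma, n = 90.0, 80.0, 0.07, 0.02, 0.5, 0.3, 40

    # Black and Scholes call value with a continuous dividend yield q
    d1 = (np.log(s0 / k) + (r - q + 0.5 * sigma**2) * t) / (sigma * np.sqrt(t))
    d2 = d1 - sigma * np.sqrt(t)
    bs = s0 * np.exp(-q * t) * norm.cdf(d1) - k * np.exp(-r * t) * norm.cdf(d2)

    # Monte Carlo: simulate terminal share prices and discount the average payoff
    rng = np.random.default_rng()
    eps = norm.ppf(rng.uniform(size=n))        # standard normal draws
    s_t = s0 * np.exp((r - q - 0.5 * sigma**2) * t + eps * sigma * np.sqrt(t))
    payoff = np.maximum(s_t - k, 0.0)          # zero when the difference is negative
    mc = np.exp(-r * t) * payoff.mean()

    print(f"Black and Scholes: {bs:.2f}, Monte Carlo ({n} draws): {mc:.2f}")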


Share price                      90
Exercise                         80
Interest rate (r)                0.07
Dividend yield (q)               0.02
Black and Scholes call option    14.48
Maturity (T)                     0.5
Monte Carlo value                12.67
Volatility (σ)                   0.3
Simulations                      40
Drift                            0.0025
sig                              0.2121
Discount factor                  0.9656

Simulation   Random   Standard normal   Share price   Call option payoff
1            0.38     -0.30              84.58         4.58
2            0.69      0.50             100.38        20.38
3            0.61      0.27              95.48         0.00
4            0.38     -0.32              84.36         4.36
5            0.50      0.01              90.46        10.46
6            0.17     -0.95              73.79         0.00
7            0.54      0.11              92.39        12.39
8            0.03     -1.86              60.78         0.00
9            0.43     -0.17              86.94         6.94
10           0.79      0.79             106.79        26.79
11           0.70      0.53             100.90        20.90
12           0.84      1.01             111.83        31.83
13           0.03     -1.84              61.05         0.00
14           0.07     -1.49              65.76         0.00
15           0.24     -0.70              77.78         0.00
16           0.83      0.97             110.86        30.86
17           0.79      0.80             106.85        26.85
18           0.26     -0.66              78.48         0.00
19           0.29     -0.54              80.44         0.44
20           0.88      1.20             116.32        36.32
21           0.68      0.47              99.71        19.71
22           0.56      0.14              92.98        12.98
23           0.21     -0.82              75.81         0.00
24           0.18     -0.91              74.33         0.00
25           0.22     -0.77              76.69         0.00
26           0.77      0.73             105.45        25.45
27           0.43     -0.18              86.85         6.85
28           0.69      0.49             100.07        20.07
29           0.87      1.13             114.78        34.78
30           0.87      1.12             114.49        34.49
31           0.37     -0.34              83.90         3.90
32           0.33     -0.43              82.29         2.29
33           0.37     -0.33              84.17         4.17
34           0.73      0.62             102.99        22.99
35           0.06     -1.55              64.94         0.00
36           0.69      0.48              99.98        19.98
37           0.17     -0.97              73.45         0.00
38           0.45     -0.12              88.04         8.04
39           0.90      1.30             118.99        38.99
40           0.89      1.23             117.14        37.14

Average      13.12
Source: author's illustration

The following chart shows the simulated share prices in relation to days.

[Chart: Simulated share prices (y-axis: share prices, 0 to 140) plotted against the 40 simulated days; series: Share price.]


Please consider the following Monte Carlo simulation problem with antithetic variables and compare the value with the Black and Scholes call option value.

The formulas to calculate the simulated share prices are as follows:

Share price 1 = Share price * exp[(r - q - 0.5σ²)T + ε * σ * sqrt(T)]
Share price 2 = Share price * exp[(r - q - 0.5σ²)T - ε * σ * sqrt(T)]

where ε is the standard normal value for each observation. The standard normal function is NORMSINV(rand()) and the random function is rand(); you will get different numbers each time you press F9. The drift is calculated from the formula (r - q - 0.5σ²)T and the volatility from the formula σ * sqrt(T).

The Monte Carlo value is found from the average call option payoff table. The call option payoff for each observation is calculated as share price 1 - exercise price and share price 2 - exercise price (floored at zero), and the two payoffs are averaged.

Once you have calculated the call option payoff table, you calculate the average and you multiply it by the discount factor to get the Monte Carlo value. The discount factor is calculated from the formula exp(-r*T).
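A Python sketch of the antithetic version under the same inputs: each standard normal draw ε is paired with -ε, and the two discounted payoffs are averaged, which reduces the variance of the estimate:

    import numpy as np
    from scipy.stats import norm

    s0, k, r, q, t, sigma, n = 90.0, 80.0, 0.07, 0.02, 0.5, 0.3, 40

    rng = np.random.default_rng()
    eps = norm.ppf(rng.uniform(size=n))   # standard normal draws
    drift = (r - q - 0.5 * sigma**2) * t
    vol = sigma * np.sqrt(t)

    s1 = s0 * np.exp(drift + eps * vol)   # share price 1 (uses +eps)
    s2 = s0 * np.exp(drift - eps * vol)   # share price 2 (uses -eps)
    payoff = 0.5 * (np.maximum(s1 - k, 0) + np.maximum(s2 - k, 0))   # average payoff

    mc = np.exp(-r * t) * payoff.mean()   # discounted Monte Carlo value
    print(f"Antithetic Monte Carlo value: {mc:.2f}")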


Share price                      90
Exercise                         80
Interest rate                    0.07
Dividend yield                   0.02
Black and Scholes call option    14.48
Maturity                         0.5
MC value                         13.28
Volatility                       0.3
Simulations                      40
Drift                            0.0025
sig                              0.2121
Discount factor                  0.9656

Simulation   Random   Standard normal   Share price 1   Share price 2   Call payoff 1   Call payoff 2   Average payoff
1            0.18     -0.90              80.45           103.75           0.45            23.75           12.10
2            0.49     -0.02              91.07            91.65          11.07            11.65           11.36
3            0.09     -1.36              75.41           110.68           0.00            30.68           15.34
4            0.80      0.83             102.75            81.23          22.75             1.23           11.99
5            0.87      1.12             107.11            77.93          27.11             0.00           13.55
6            0.43     -0.17              89.19            93.59           9.19            13.59           11.39
7            0.29     -0.54              84.62            98.64           4.62            18.64           11.63
8            0.19     -0.87              80.78           103.32           0.78            23.32           12.05
9            0.10     -1.26              76.41           109.23           0.00            29.23           14.62
10           0.15     -1.05              78.71           106.04           0.00            26.04           13.02
11           0.89      1.25             109.06            76.53          29.06             0.00           14.53
12           0.64      0.36              96.06            86.89          16.06             6.89           11.48
13           1.00      2.79             135.49            61.60          55.49             0.00           27.74
14           0.05     -1.66              72.24           115.53           0.00            35.53           17.77
15           0.11     -1.25              76.57           109.01           0.00            29.01           14.50
16           0.09     -1.31              75.87           110.01           0.00            30.01           15.01
17           0.17     -0.95              79.89           104.48           0.00            24.48           12.24
18           0.80      0.84             102.91            81.11          22.91             1.11           12.01
19           0.55      0.14              93.14            89.61          13.14             9.61           11.38
20           0.01     -2.33              65.67           127.11           0.00            47.11           23.55
21           0.19     -0.87              80.80           103.30           0.80            23.30           12.05
22           0.03     -1.92              69.59           119.93           0.00            39.93           19.97
23           0.94      1.56             113.88            73.29          33.88             0.00           16.94
24           0.04     -1.71              71.75           116.33          -8.25            36.33           14.04
25           0.83      0.95             104.45            79.91          24.45             0.00           12.22
26           0.75      0.68             100.65            82.93          20.65             2.93           11.79
27           0.25     -0.66              83.18           100.35           3.18            20.35           11.76
28           0.27     -0.62              83.73            99.69           3.73            19.69           11.71
29           0.72      0.57              99.08            84.24          19.08             4.24           11.66
30           0.72      0.59              99.30            84.05          19.30             4.05           11.68
31           0.30     -0.51              84.98            98.22           4.98            18.22           11.60
32           0.96      1.74             116.81            71.45          36.81             0.00           18.41
33           0.25     -0.67              83.12           100.42           3.12            20.42           11.77
34           0.65      0.39              96.50            86.50          16.50             6.50           11.50
35           0.32     -0.46              85.58            97.53           5.58            17.53           11.56
36           0.82      0.93             104.16            80.13          24.16             0.13           12.15
37           0.06     -1.59              72.93           114.46           0.00            34.46           17.23
38           0.37     -0.32              87.33            95.58           7.33            15.58           11.45
39           0.20     -0.85              80.96           103.10           0.96            23.10           12.03
40           0.52      0.05              91.96            90.76          11.96            10.76           11.36

Average      13.75
Source: author's illustration

The following chart shows the simulated share prices in relation to days.

[Chart: Simulated share prices 1 and 2 (y-axis: share prices, 0 to 160) plotted against the 40 simulated days; series: Share price 1, Share price 2.]


Please read the following article to help you understand the bootstrap method.

Performance Persistence. A Comparative Study of UK Investment Trusts.

Abstract:

Performance persistence has been a major area of investigation in the investment literature, for both academics and practitioners, for more than two decades. The results from various UK open-end mutual fund studies are mixed, and there is not enough statistical evidence of performance persistence in investment trusts. The interaction of both alive and dead investment trusts under various statistical performance tests will enable us to detect the effect of biases and shed light on the degree and direction, positive or negative, of performance existence and persistence. We find some evidence of managerial performance persistence in the short term, but little evidence of persistence in the long run.

Keywords: performance persistence, survivorship bias, contingency table, Treynor and Mazuy conditional model, General Method of Moments, bootstrapped model.

Jel Classification: G24


Introduction

An investment trust in the UK is a collective investment company that invests in the shares of other companies. The structure of investment trusts provides investors with several benefits: they allow investors to invest small amounts of money, to spread their risk and to use a professional manager's expertise across various sectors. Investment trusts issue shares that are publicly traded on the London Stock Exchange (LSE), so even people with small amounts of money can gain exposure to a diversified and professionally run portfolio of shares and spread the risk of stock market investment. According to the Association of Investment Trust Companies (AITC), there were 263 conventional trusts with total assets of £55.6 billion in the UK in 2005.

An investment trust is characterized by a fixed capitalization, a structure that allows the investment manager to plan better for the long term, since the size of the trust does not expand or contract continually. Because investment trust shares are publicly traded on the London Stock Exchange (LSE), their price is determined by the demand and supply of the market. In contrast, in an open-end fund, if a number of investors decide to sell their units at a certain point in time, the fund manager must sell part of the assets of the fund, reducing its capitalization. This can badly affect investment performance, especially over the long term.

The structure of investment trusts is divided into conventional and split capital investment trusts. Conventional investment trusts have just one class of share. On the other hand, split capital investment trusts offer two or more classes of shares, which can be used by investors to meet specific investment goals. At least one of the share classes is likely to have a limited life (usually between five and ten years), so there will be a fixed wind-up date when the company is terminated and the assets split between the various categories of shareholder. In this paper, we focus on conventional investment trusts.

The aim of this paper is to describe and investigate the performance persistence of UK investment trusts using a variety of statistical and methodological approaches. In my thesis, I investigated the anomalies documented in the finance literature, such as size, the book-to-market factor and the excess market return, that affect the performance of investment trusts. In addition, we tested whether fund managers have market timing ability and can predict the movement of the market. The results are mixed, and only some sectors over certain time periods show evidence of managerial performance persistence. I extended the analysis by using rank correlation analysis, but found no evidence of performance persistence. The question that remains is which methodological approach, over which time frame, will reveal the real sources of persistence.

My study will contribute significantly to the investment trust literature, as most previous studies focused on pension funds and open-end mutual funds in the US; very few studies have examined UK investment trusts. I extend the research contained in the thesis by applying new methodological approaches, such as contingency tables of UK investment trusts, to test for performance persistence among 'winners' and 'losers'. Chi-squared independence tests based on these contingency tables will test whether the observed and expected numbers of funds differ significantly in subsequent periods, using data from 1990 to 2006. This test should identify whether persistence arises from poorly-performing or well-performing funds. In addition, short-term and long-term performance persistence will be investigated by applying a multi-period rolling methodology based on a General Method of Moments (GMM) statistical method. To test these hypotheses, a rolling methodology for the first, third, fifth, ninth and twelfth years will be used. The term 'rolling' means that the third year includes the first, second and third years, and the twelfth year includes all the previous years starting from the fifth year. The rolling methodology approach is consistent with Gruber (1996), Fama and French (1993) and Carhart (1997). Finally, a bootstrapped methodology will be used to test any deviation between estimated and simulated NAV returns.

In addition, I intend to include dead funds to avoid survivorship bias. Survivorship bias may affect performance measures and indicate persistence when in reality it does not exist. Survivor bias is the effect of considering only the performance of funds that are alive and present and excluding the dead ones from the sample. There are other types of bias, such as selection and look-ahead bias (see Grinblatt and Titman, 1988, Brown, Goetzmann, Ibbotson and Ross, 1992, Brown and Goetzmann, 1995, and Malkiel, 1995).

Various performance persistence studies in the US examined survivorship bias by comparing test results for a survivor sample against a full sample that included both surviving and dead funds. Both Hendricks et al. (1993) and Carhart (1997) find that persistence is weaker in the sample of survivors. Carhart (1997) identifies two sources of bias, survivorship bias and look-ahead bias, either of which could lead to misleading interpretation of results. The former is due to the sample selection method, where the sample includes only funds that survive until the end of the sample period. The latter is the result of the methodology used. Specifically, he used two concepts relating to the ranking and evaluation of subsequent periods of the life of the funds. To detect look-ahead bias, he compared two periods: at the end of the first period, funds are ranked on their past performance, and in the second period, known as the evaluation period, the funds are allocated based on their performance in the first period. During these two periods of ranking and evaluation, funds disappear in a non-random way, which causes the look-ahead bias.

Brown and Goetzmann (1995) and Carhart (1997) found that biases are detected for mutual funds over a multi-period time scale and not in a single period. Elton et al. (1996b) find that survivor bias in average fund returns grows with the length of the sample period, given that the sample does not add new funds. Most of the empirical studies in the US found that mutual fund performance is persistent.


Another form of bias, mentioned by Allen and Tan (1999), is selection bias, which arises because fund managers voluntarily report and keep data for funds that show positive performance. This decision contaminates the return data on the performance of the funds. The continuous repetition of positive returns for these funds will force the losers, the funds with negative performance measures, to exit the industry. Therefore, the performance figure is overweighted and is not representative of the dataset; this overweighting is a consequence of high-variability performance funds relative to low-variability funds. Our sample is free from survivorship, look-ahead and selection bias.

The rest of the paper is organized as follows. Section 2 reviews the literature on performance persistence. Section 3 refers to data and sampling issues. Section 4 describes methodological issues in terms of the different statistical methods that are adopted. Section 5 refers to the performance persistence tests. Section 6 is a bootstrap test of performance persistence, and Section 7 concludes.

2. Literature Review

Most of the studies undertaken in the past focused on US closed-end funds, pension funds and unit trusts, with limited research on UK investment trusts. The first detailed study of UK investment trusts was Dimson and Minio-Kozerski (2001). They used return-based style analysis of NAV returns to measure past performance over one, two and three-year periods. Return-based style analysis is a constrained regression originally suggested by Sharpe (1992): the coefficients of each variable are constrained to lie between zero and one and must sum to one. They used fifteen indexes to measure the funds' NAV performance, using a rolling methodology over the first three years. They ranked the funds on the level of past performance and allocated them to deciles. Their analysis reveals no evidence of performance persistence in the UK market; the correlation coefficients and the t-tests of the performance difference between the top and bottom groups are not significant.

Gile, Wilsdon and Worboys (2002) presented the results of a study of performance persistence using a database of UK equity unit trusts between 1981 and 2001 that included both live and dead funds. Their main findings were as follows. Performance persisted in UK equity unit trusts between 1981 and 2001, and the importance of persistence depends on both the time horizon and the sector in which the fund is invested. The cumulative return for funds in the top quartile exceeds the return for funds in the bottom quartile for the various time horizon and sector combinations. They concluded that persistence is affected by the statistical test adopted, and that past persistence based on historical data is a guide to whether it will continue in future periods.

Allen and Tan (1999) used a sample of 131 UK funds for the period 1989-1995. They applied four major empirical tests, including a contingency table of winners and losers, ordinary least squares regression analysis of CAPM risk-adjusted excess returns, and Spearman rank correlation coefficient analysis of successive period performance rankings. They found that both raw and risk-adjusted returns show evidence of persistence in the long run but not in the very short run.

Wood Mackenzie Company (2002) studied the S&P UK All Companies sector for performance persistence. They found no evidence of significant persistence over five-year time frames, but showed evidence of short-term persistence, with a top quartile of trusts continuing to outperform the group in the following years. They extended their study to UK pension fund performance in the mid-1990s and found stronger evidence over medium-term periods of 3 to 5 years than over periods of more than 5 years. Their evidence is stronger when returns are adjusted for risk, but the results still differ according to the time period.

Heffernan (2001) examined the relative performance of eight categories of UK investment trusts, comprising 273 trusts, for the period 1994-99. He found evidence of positive serial correlation: funds with good performance in one year are associated with good returns the next year. In addition, he examined the effects of past variance on current performance and of past performance on current variance. The results show fund-level persistence in both relative performance and variance. He also applied a three-step utility function, which shows that worse-performing funds tend to display greater variance.

Tonks (2002) examined persistence in performance using data on the quarterly returns of 2,175 UK pension funds from Combined Actuarial Performance Services Ltd from 1983 to 1997. He used different time periods to classify each pension fund as winner-winner (WW), winner-loser (WL), loser-winner (LW), or loser-loser (LL). WW and LL denote winner and loser in two consecutive periods, LW denotes a loser in the first period and a winner in the second, and WL denotes a winner in the first period and a loser in the second. He tested for short-term persistence using quarterly rankings and for long-term persistence using a twelve-quarter ranking period. He calculated statistics based on the percentage of repeated winners, the cross-product ratio CP = (WW x LL)/(WL x LW), and a chi-squared test with 1 degree of freedom, rejecting independence if the statistic exceeded the critical value of 3.84 at the 5% significance level. His approach was similar to the one used by Agarwal and Naik (1999) and Carpenter and Lynch (1999) in their work on the multi-period performance persistence of hedge funds. In more detail, Agarwal and Naik (1999) used two performance measures (alphas and appraisal ratios) for 586 hedge funds from January 1982 to December 1998. In their study, alpha was defined as the return of the fund manager using a particular strategy minus the average return on all the funds using the same strategy. The appraisal ratio was defined as alpha divided by the standard error of the residuals from the regression of the fund return on the average return of all the funds following that strategy. Losers and winners were determined by comparing the alphas and appraisal ratios of individual fund managers to those of the median manager.
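A minimal Python sketch of these statistics, assuming the four cell counts of the 2x2 winner/loser table are known; the function name is ours, and the 1-df chi-squared test shown is the usual large-sample z-test on the log cross-product ratio, not necessarily the exact variant each study used:

    import math
    from scipy.stats import chi2

    def repeat_winner_stats(ww, wl, lw, ll):
        """Percentage of repeat winners, cross-product ratio
        CP = (WW x LL)/(WL x LW), and a 1-df chi-squared test
        based on the log of CP."""
        pct_repeat_winners = ww / (ww + wl)
        cp = (ww * ll) / (wl * lw)
        se_log_cp = math.sqrt(1/ww + 1/wl + 1/lw + 1/ll)  # s.e. of ln(CP)
        stat = (math.log(cp) / se_log_cp) ** 2            # z^2 ~ chi-squared(1)
        p_value = chi2.sf(stat, df=1)
        return pct_repeat_winners, cp, stat, p_value

    # e.g. repeat_winner_stats(60, 40, 40, 60); independence is rejected at the
    # 5% level when the statistic exceeds the 3.84 critical value quoted above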

3. Data and sampling


Each of the funds included in our study is allocated to one of the 16 categories described in Table 1 for UK investment trusts. This study investigates almost half of the entire investment trust industry, excluding funds that invest in unquoted securities such as venture and development, private equity and specialist funds, emerging market funds, hedge funds, and split capital trusts. The reason for excluding unquoted securities is that, if a significant proportion of the investments held are unquoted, there will be some uncertainty as to the true value of the underlying assets. After these exclusions, our sample consists of 16 categories with a total of 210 funds, including the "dead" ones.

The different categories of UK funds, the total number of funds, and descriptive statistics of NAV of the survival funds by category are described in Table 1. Table 2 details the various survival funds by category. Table 3 shows the descriptive statistics of the 120 dead investment trust funds. Table 4 shows the total number of alive and dead funds, their mean returns and the bias. Table 5 shows the correlation matrix of alive and dead funds. Table 6 shows the status of dead funds included in our dataset. We will use a comprehensive data set of NAV including both alive and “dead” funds from January 1990 to January 2006. The defunct funds that will be included have an incomplete dataset due to the limited period of their survival. They will be used for further tests to detect performance persistence bias and how they affect the whole sample. Bootstrap simulation methods will be applied to detect any pattern of persistence.

Table 1 summarises descriptive statistics of the average NAV return of UK investment trusts by AITC sectors for the period January 1990 to January 2006.

Table 1 Descriptive statistics of the UK average NAV return for the sample of the 90 alive investment trusts calculated for the period January 1990 to January 2006

AITC Category                        N     Mean    Std. dev.   Range    Min      Max
Global Growth                        20    0,63    1,34        4,28     -1,98    2,30
Global Growth & Income               3     0,44    1,31        5,31     -2,30    3,01
Global Smaller Companies             1     0,82    1,80        6,45     -2,61    3,84
UK Growth                            12    0,64    1,23        4,54     -2,17    2,37
UK Growth & Income                   9     0,59    1,17        4,19     -2,06    2,13
UK Smaller Companies                 12    0,51    2,00        7,89     -3,28    4,61
North America                        3     0,67    1,53        5,35     -2,43    2,92
North America Smaller Companies      2     0,89    1,64        6,96     -3,59    3,37
Far East (Including Japan)           2     0,44    2,40        8,95     -3,53    5,42
Far East (Excluding Japan)           6     0,72    2,77        9,34     -3,31    6,03
Japan                                3     0,35    3,33        13,24    -3,96    9,29
Japanese Smaller Companies           2     0,51    4,91        22,39    -6,66    15,73
Europe                               6     0,71    1,61        5,38     -2,25    3,13
European Smaller Companies           3     1,10    2,11        8,29     -3,73    4,56
Country Specialist - Far East        3     0,62    2,22        7,84     -3,92    3,92
Sector Specialist - Property         3     0,36    2,30        8,80     -4,21    4,59
Total                                90
Average                                    0,63    2,10        8,08     -3,25    4,83

Source: calculated by the author

According to Table 1, the dispersion between the lower and upper bounds varies across the AITC categories. For example, the Japanese Smaller Companies sector has a range of 22,39 percentage points, with a lower bound of -6,66 per cent and an upper bound of 15,73 per cent. The UK Smaller Companies sector has a range of 7,89 percentage points, with a lower bound of -3,28 per cent and an upper bound of 4,61 per cent. The Far East excluding Japan sector has a range of 9,34 percentage points, with a minimum value of -3,31 per cent and a maximum value of 6,03 per cent. In addition, we use the coefficient of variation to compare the relative dispersion of two or more data sets with different means, and find a high degree of dispersion between the AITC sectors. For example, the Global Smaller Companies sector has a coefficient of variation of 2,20, compared with 6,88 for Japanese Smaller Companies. Similarly, UK Growth has a lower coefficient of variation (1,92) than UK Smaller Companies. In general, the average fund by sector shows an average NAV return of 0,63 per cent, a standard deviation of 2,10 per cent, and a wide range of 8,08 percentage points, with the lower bound estimated at -3,25 per cent and the upper bound at 4,83 per cent.
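The coefficient of variation is simply the standard deviation divided by the mean; a short pandas sketch on two rows of Table 1 reproduces the 2,20 and 1,92 figures quoted above:

    import pandas as pd

    # two rows of Table 1: mean and standard deviation of monthly NAV returns
    stats = pd.DataFrame(
        {"mean": [0.82, 0.64], "std": [1.80, 1.23]},
        index=["Global Smaller Companies", "UK Growth"],
    )
    stats["cv"] = stats["std"] / stats["mean"]  # coefficient of variation
    print(stats["cv"].round(2))                 # 2.20 and 1.92 respectively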

Table 2 shows the various funds allocated to one of the 16 categories for UK investment trusts. Our sample consists of 90 survival UK investment trusts.

Table 2 details the funds in each AITC category of UK investment trusts for the period 1990 to 2006

Global Growth: ALLIANCE TRUST, BANKERS INV.TRUST, BRITISH EMPIRE SECS, BRUNNER INV.TST, ELECTRIC & GENERAL IT., FOREIGN & COLONIAL, GARTMORE GLOBAL TST, JUPITER PRIMADONA GROWTH, LAW DEBENTURE, LONDON & ST.LAWRENCE, MAJEDIE INVS., MONKS INV.TRUST, PERSONAL ASSETS, RIT CAPITAL PARTNERS, SCOTTISH AMERICAN, SCOTTISH INV., SCOTTISH MORTGAGE, SECOND ALLIANCE, TRIBUNE TRUST, WITAN INV.TRUST

Global Growth & Income: BRITISH ASSETS, MURRAY INTL., ECLECTIC INVESTMENT TST.

Global Smaller Companies: F&C SMALLER COS.

UK Growth: ALBANY INV.TRUST, CAPITAL GEARING TST., EDINBURGH INV.TRUST, EDINBURGH UK TRACKER, FINSBURY GROWTH TST., FLEMING MERCANTILE, HANSA TRUST, JPMF MERCANTILE IT, JPMF.CLAVERHOUSE, JPMF MID CAP IT., KEYSTONE IT., UK SELECT TRUST

UK Growth & Income: CITY OF LONDON, DUNEDIN INC.GROWTH, LOWLAND INV., MERCHANTS TRUST, FINSBURY GW & INC TST, MURRAY INCOME, SHIRES INCOME TST., TEMPLE BAR, VALUE & INCOME

UK Smaller Companies: ALLIANZ DRESDNER SMCOS., DUNEDIN SMALLER COS., GARTMORE SMALLER COS., HENDERSON SMALLER COS, I&S.UK SMALLER COS., INVESCO ENGLISH & INTERNATIONAL, INVESCO PERP.UK SMCOS., PLATINUM INV.TST., SCHRODER UK MID & SMALL, SMALLER COMPANIES IT., THROGMORTON TRUST, 3I SM.QUOTED COS.TRUST

North America: AMERICAN OPPOR.TST., EDINBURGH US TRACKER TST., JPMF. AMERICAN.IT

North America Smaller Companies: JPMF US DISCOVERY, NORTH ATLANTIC SMCOS.

Far East (Including Japan): F&C PACIFIC, MARTIN CURRIE PACIFIC

Far East (Excluding Japan): ABERDEEN NEW DAWN IT., EDINBURGH DRAGON TST., HENDERSON FAR EAST INC., HENDERSON TR PAC.IT, PACIFIC ASSETS, PACIFIC HORIZON

Japanese Smaller Companies: BAILLIE SHIN NIPPON, JPMF JAPANESE SMCOS.

Japan: BAILLIE GIFF.JAPAN, FLEMING JAPANESE, JPMF. JAPANESE

Europe: F&C EUROTRUST, FLEMING CONT.EUROPE, GARTMORE EUROPEAN, INVESCO PERP.EUR.IT., MARTIN CURRIE EUR., JPMF.CONT.EUROPE

European Smaller Companies: EUROPEAN ASSETS TST., JPMF EUROPEAN FLEDGELING, TR EUROPEAN GROWTH

Country Specialists - Far East: ABERDEEN NEW THAI, NEW ZEALAND INV., STOCKS CONVERTIBLE TST.

Sector Specialists - Property: TR PROPERTY INV., TRUST OF PROPERTY, GRESHAM HOUSE

Source: Datastream (University of Piraeus, Athens, Greece); Association of Investment Trust Companies (AITC).


Table 3 shows descriptive statistics of the average NAV return of dead UK investment trusts by date for the period January 1991 to January 2006.

Table 3 Descriptive statistics of the average NAV return for the dead sample of 120 investment trusts calculated by date for the period January 1991 to January 2006.

Year funds ceased     Number of funds    Mean    Std. dev.   Range    Min      Max
1991                  3                  -0,08   1,76        21,31    -10,74   10,57
1992                  6                  -0,15   1,95        23,42    -11,92   11,51
1993                  9                  -0,01   1,78        19,48    -9,46    10,02
1994                  1                  0,12    1,98        19,17    -10,00   9,18
1995                  7                  0,09    4,47        49,57    -13,36   36,21
1996                  13                 0,25    2,83        24,60    -10,61   13,99
1997                  8                  -0,13   5,06        58,36    -35,19   23,17
1998                  20                 0,29    5,86        69,58    -41,73   27,85
1999                  11                 0,33    4,32        45,59    -19,72   25,88
2000                  6                  0,38    3,37        25,84    -13,63   12,21
2001                  9                  0,33    3,54        27,98    -13,33   14,65
2002                  2                  0,27    4,67        32,24    -16,79   15,45
2003                  3                  0,24    5,69        40,92    -14,53   26,38
2004                  5                  0,20    4,06        25,69    -13,71   11,98
2005                  6                  0,76    7,57        66,78    -28,36   38,41
2006                  11                 0,32    4,89        53,12    -23,21   29,90
Total                 120
Average                                  0,20    3,99        37,73    -17,89   19,84

Source: calculated by the author

According to the descriptive statistics of Table 3, there is high dispersion between the lower and upper bounds by date for the various dead investment trusts. The high ranges of average NAV return before the death of a trust signal the importance of including them in the whole sample to avoid bias. For example, in 1998 there were 20 funds with a range of 69,58 percentage points, a lower bound of -41,73 per cent and an upper bound of 27,85 per cent. In addition, the comparison of the mean return of 0,29 per cent with the standard deviation of 5,86, which measures dispersion around the mean, denotes significant variability. In general, the average fund shows an average NAV return of 0,20 per cent, a standard deviation of 3,99 per cent, and a wide range of 37,73 percentage points, with the lower bound estimated at -17,89 per cent and the upper bound at 19,84 per cent.

Table 4 shows the total number of alive and dead funds, their raw yearly mean returns, and the bias, accounted for from 01/01/1990 to 01/01/2006.

Date    Survival funds    Dead funds     Bias (survival minus
        mean return       mean return    dead funds)
1990    -2,12             -2,31          0,19
1991    1,01              0,80           0,20
1992    1,35              0,53           0,82
1993    2,73              2,89           -0,16
1994    -0,54             -1,02          0,49
1995    1,20              0,76           0,43
1996    0,60              0,51           0,08
1997    0,56              0,14           0,41
1998    0,67              0,38           0,29
1999    3,31              0,97           2,34
2000    -0,42             0,20           -0,62
2001    -1,13             -0,51          -0,63
2002    -1,91             -1,20          -0,71
2003    1,99              -0,31          2,31
2004    1,05              1,45           -0,40
2005    1,84              0,16           1,68
2006    0,65              -0,04          0,69
Mean    0,64              0,20           0,44

Source: calculated by the author.

According to Table 4, the bias, defined as the difference between the mean return of alive funds (0,64 per cent) and dead funds (0,20 per cent), averages 0,44 percentage points. The number of funds liquidated over the successive years increased significantly relative to the alive funds. The mean return calculated before they ceased to operate is a first indication that excluding them from the whole sample would affect the performance measure and produce spurious persistence. Carhart (1997b) reported a non-survival rate of 3,6% over the period 1962-1995, but his sample was completely different from ours and he measured funds after accounting for expenses and dividends. We are going to test the frequency of mutual fund disappearance and its impact on the whole sample over different horizons. In addition, we are going to test the effect of biases through a probit model in the subsequent sections by using risk-adjusted measures.

A second test that substantiates the importance of including the dead funds is the correlation matrix of Table 5.

Table 5 shows the correlation matrix of dead and alive funds for the period 1990 to 2006.

            Survival    Dead
Survival    1
Dead        0,76        1

Source: calculated by the author

According to Table 5, the sample evidence suggests a strong positive linear relationship between the rates of return of surviving and dead funds.


To substantiate the above, we use a t-test for the difference between two sample means. The hypotheses are as follows:

H0: μ1 − μ2 = 0

H1: μ1 − μ2 ≠ 0

The t-statistic of the two-sample test is 1,95, which exceeds the critical value of 1,75. The sample evidence therefore suggests, at the 5% significance level, that the average NAV returns of survival and dead funds differ. Excluding the dead funds would therefore cause a bias that overestimates or underestimates the performance measure.
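The mechanics of the test are sketched below in Python, using the yearly means of Table 4 as inputs; the 1,95 statistic above was presumably computed on the underlying fund-level data, so this illustrates the procedure rather than reproducing the exact figure:

    from scipy import stats

    # yearly mean NAV returns, 1990-2006, from Table 4
    survival = [-2.12, 1.01, 1.35, 2.73, -0.54, 1.20, 0.60, 0.56, 0.67,
                3.31, -0.42, -1.13, -1.91, 1.99, 1.05, 1.84, 0.65]
    dead = [-2.31, 0.80, 0.53, 2.89, -1.02, 0.76, 0.51, 0.14, 0.38,
            0.97, 0.20, -0.51, -1.20, -0.31, 1.45, 0.16, -0.04]

    t_stat, p_value = stats.ttest_ind(survival, dead)  # H0: equal means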

The following table summarises the sample of 120 "dead" UK investment trusts. As mentioned above, the term "dead" can mean liquidated, delisted, or suspended. In the sample investigated, we included the different types of dead funds in order to avoid survivorship, look-ahead and selection bias.

Table 6 UK investment trust dead funds included in our dataset, classified as liquidated, delisted, unitised, converted, suspended, open-ended or merged.

Name of fund – Status of dead fund (dates and Datastream cross-reference notes as reported; most return series are Datastream N.A.V. (PAR) series)

ACORN INV.TST. – Delisted 24/03/93
ALLIANZ DRESDNER SMCOS. – Dead 22/04/04
AMBROSE INV.CAP. – Dead
AMERICAN ENDEAVOUR – Delisted 20/01/99
ANGLO & OVERSEAS – Dead 27/07/05
ANGLO SCANDINAVIAN – Dead
ARCHIMEDES CAPITAL – Dead 30/09/04
BETA GLOBAL EMRG.MKTS. – Delisted 15/03/02
BARING STRATTON – Unitised 05/05/98
BRITISH INV.TRUST – Dead 19/05/97
CHARTER EUROPEAN – Delisted 22/04/02
CHINA & EASTERN – Dead
CITY OF OXFORD ORD. – Delisted 29/09/99
CITY&COML.CAP. – Delisted 05/02/93
CONTINENTAL ASSETS – Delisted 15/06/98
CST EMERGING ASIA – Delisted 11/08/93
DERBY TRUST CAP. – Dead 29/12/03
DRAYTON CONS. – Dead
DRAYTON FAR EAST – Dead (reconstruction)
DUNEDIN WORLDWIDE – Delisted 10/07/98
ENG.& SCOT. 'B' – Converted
ENGLISH NAT.DFD. – Dead (use 960820)
ENSIGN TRUST – Dead
EQ.CONSORT DFD. – Dead
EUROLAND PLUS (SMCOS.) – Delisted 27/09/99
EXMOOR DUAL ORD. – Delisted 20/07/98
F&C EMERGING MKTS.IT. – Dead 10/04/06
F&C GERMAN – Delisted 01/06/98
F&C PACIFIC – Dead
FINSBURY SMCOS B ORD. – Delisted 10/04/00
FIRST PHILIPPINE – Suspended 26/06/97
FIRST TOKYO IDX. – Dead
FLEMING FAR EASTERN – Dead 12/09/97
FLEMING HIGH INCOME – Suspended 25/11/96
FLEMING INC.& GW.CAP. 00 – Delisted 27/04/00
FLEMING INTL.HI.INC.ORD. – Delisted 31/10/96
FRAMLINGTON DT CAP. – Delisted 29/07/99
GART.VALUE ORD. – Dead
GARTMORE AMERICAN SECS. – Dead
GARTMORE EMRG.PACIFIC – Delisted 04/10/99
GENERAL CONS.CAP. – Delisted 06/01/98
GERMAN SMALLER COS. – Delisted 25/10/99
GOVETT ATLANTIC – Dead
GOVETT ORIENTAL – Delisted 30/09/98
GOVETT STRATEGIC – Dead 23/07/02
GROUP TRUST – Delisted 17/08/01
GT JAPAN INV. – Delisted 09/11/01
HENDERSON AMERICAN CAP. – Delisted 26/02/99
HENDERSON FAR EAST INC. – Dead
HENDERSON GREENFRIAR – Delisted 14/05/98
HENDERSON HIGH INC.PKG. – Dead 03/10/05
HOTSPUR INV. – Dead
I&S.OPTIMUM INC.ORD. – Delisted 05/06/97
INDEPENDENT INV. – Delisted 17/09/93
INVESCO TECHMARK ENTER. – Dead (see 29154W)
INVESTORS CAP.PKG.UNITS – Delisted 25/06/01
JERSEY PHOENIX ORD. – Dead 08/04/05
JF PACIFIC – Delisted 06/02/97
JOS HOLDINGS – Delisted 28/10/92
JOVE INV.CAPITAL – Dead 28/10/04
JPMF.TECHNOLOGY TRUST – Dead 07/01/03
JUP.EXTRA INC.ORD. – Delisted 28/09/00
JUP.INTL.GREEN ORD. – Delisted 19/03/01
JUPITER GEARED CAP. – Delisted 14/01/99
KLEINWORT OVERSEAS – Delisted 19/03/98
KOREA EUROPE FUND – Dead 20/02/03
LANICA TRUST – Delisted 17/06/98
LONDON AMERICAN GW. – Dead 09/09/96
LONDON SMALLER COS. – Dead (taken over)
M&G 2ND.DUAL CAPITAL – Delisted 21/09/99
M&G DUAL CAPITAL – Delisted
MANAKIN HOLDINGS – Delisted 29/05/96
MARTIN CURRIE EUR. – Dead 10/06/05
MART.CURRIE MOORGATE – Delisted 23/09/99
MCIT CAPITAL IT. – Delisted 25/06/98
MURRAY GLB.RTN.'B' – Delisted 01/08/01
MURRAY GLOBAL RETURN – Dead 20/03/06
MURRAY INCOME 'B' – Delisted 28/02/02
NATWEST ENTERPRISE IT – Delisted 26/06/00
NEWMARKET VENTURES – Delisted 24/10/00
NORTH AMERICAN GAS – Dead 09/09/96
OLIM CONVERTIBLE ORD. – Delisted 26/05/99
OVERSEAS INV. – Delisted 31/03/98
PACIFIC PROPERTY – Dead
PRIVATE INVESTORS CAP.IT – Dead 19/11/04
PARIBAS FRENCH INV. – Dead 31/08/97
PRACTICAL INV. – Delisted 30/12/93
PRECIOUS METALS TRUST – Dead
RADIOTRUST – Dead (reconstruction, see 962332)
RALSTON INV.TST. – Delisted 01/02/93
RIVER PLATE CAPITAL – Dead 20/3/97
S&P LINKED CAPITAL – Dead 12/12/96
SCHRODER MEDITERRANEAN – Delisted 06/08/96
SCOTTISH EASTERN – Delisted 19/03/99
SCOTTISH ASIAN – Delisted 16/06/00
SCOTTISH NAT.CAPITAL – Delisted 29/09/98
SECOND ALLIANCE – Dead 21/06/06
SECURITIES TST.SCTL. – Dead 28/06/05
SELECTIVE ASSETS – Dead (takeover, 901533)
SIAM SELECTIVE GW. – Delisted 23/07/01
SOMERSET TRUST – Dead (reverse takeover, 319719)
SMALLER COMPANIES IT. – Dead 07/08/03
SPHERE INC.& RESI.CAP. – Dead
SR PAN EUROPEAN ORD. – Delisted 14/08/01
ST.ANDREW TRUST – Delisted 10/09/99
ST.DAVID'S CAPITAL – Delisted 26/11/98
SUMIT – Dead (taken over by 900743)
THORNTON ASIAN EMRG. – Delisted 26/03/97
THROGMORTON USM – Dead
TOR INV.CAPITAL – Delisted 11/10/00
TR TECHNOLOGY B ORD. – Delisted 22/10/99
TRIBUNE TRUST – Dead 13/03/06
TRIPLEVEST CAP. – Dead
TRUST OF PROPERTY – Dead 27/03/06
TURKEY TRUST – Delisted 25/11/99
UPDOWN INV. – Delisted 05/11/99
USDC INV.TST. – Converted
WHITBREAD INV. – Delisted 06/01/94
WORTH INV.TST. – Dead
YEOMAN INV.TRUST CAP. – Dead (reconstruction, see 962546)

Total: 120
Source: Datastream, University of Piraeus.

4. Methodological issues

4.1 Raw NAV returns

We define the monthly NAV total return as the percentage change in the monthly NAV, as given in equation (1). We use a gross return figure, before dividends and expenses, because Guirguis (2005) found that net NAV total return after dividends and expenses cannot fully explain performance persistence. We do not take logarithms, as the NAV return is normally distributed.

RNAV(i,t) = [(NAV(i,t) − NAV(i,t−1)) / NAV(i,t−1)] × 100    (1)
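Equation (1) is an ordinary percentage change, so a two-line pandas sketch suffices (the NAV figures below are invented for illustration):

    import pandas as pd

    nav = pd.Series([100.0, 102.5, 101.3, 104.0])  # hypothetical monthly NAVs
    rnav = nav.pct_change() * 100                  # equation (1), in per cent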


4.2 Multi-period risk-adjusted excess NAV returns

Fund performance is measured using the Jensen (1968) alpha, which is the intercept from the linear regression of the fund's excess returns on the excess return of the index.

The intercepts (α's) from this regression are used to measure the contribution of the manager to the performance of the fund. Thus, a positive and statistically significant alpha indicates superior performance of the fund, whereas zero, negative, or statistically insignificant values represent inferior or neutral managerial performance.

The methodology we use is Hansen's (1982) generalized method of moments (GMM). We initially use a prewhitening option, which runs a preliminary VAR(1) prior to GMM. The parameters must satisfy orthogonality conditions with a set of instrumental variables: GMM chooses the parameters so that the sample correlations between the instruments and the residuals are zero.

Thus E[f(θ)z] = 0, where θ denotes the parameters and z a set of instrumental variables as defined from a linear equation. We use the independent variables as instruments. GMM is a robust estimator because it requires no a priori information about, and imposes no restrictions on, the exact distribution of the residuals. Another reason for using this technique instead of ordinary least squares (OLS) is that the t-ratios are robust to heteroskedasticity and serial correlation (HAC). We use four lags under the Newey-West bandwidth selection criterion to estimate standard errors robust to serial correlation. The t-statistic is calculated as follows:

t-statistic = coefficient / heteroskedasticity-consistent standard error    (2)
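Since the instruments here are the regressors themselves, the estimator collapses to least squares with Newey-West (HAC) standard errors. A minimal Python sketch of that calculation (the paper itself uses EViews; the simulated series below are placeholders, not our data):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    excess_mkt = rng.normal(0.5, 2.0, 192)            # placeholder market excess returns
    excess_fund = 0.1 + 0.9 * excess_mkt + rng.normal(0, 1, 192)

    X = sm.add_constant(excess_mkt)
    fit = sm.OLS(excess_fund, X).fit(cov_type="HAC",
                                     cov_kwds={"maxlags": 4})  # four Newey-West lags
    alpha, alpha_t = fit.params[0], fit.tvalues[0]    # Jensen alpha, equation (2) t-ratio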

The null hypothesis tested is that the coefficient alpha is zero for the fund's NAV return. The results for Jensen's alpha and the t-ratios are reported in Table 27. The Jensen measure provides little evidence that funds outperform the index or that performance persists over the rolling time spans.

4.3 Chi-square statistic

Performance persistence tests are analysed using contingency tables similar to Goetzmann and Ibbotson (1994), "Do winners repeat?". Half of the funds are winners and half are losers in each period. If performance does not persist, the numbers in each bin should be the same. There is evidence of persistence if the numbers of funds are higher on the diagonal (top left and bottom right). Chi-square tests provide a more formal measure of performance persistence.

χ² = Σ (O − E)² / E    (3)

where O = observed frequency, E = expected frequency, and the sum runs over all cells in the table.


The degrees of freedom which determine the critical values are given by (R - 1)(C - 1) where R is the number of rows in the contingency table and C is the number of columns. The rest of the paper refers to the various performance tests used to identify performance persistence.
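Equation (3) and the (R − 1)(C − 1) degrees of freedom translate directly into code; a minimal Python sketch for an arbitrary R x C contingency table (the function name is ours):

    import numpy as np
    from scipy.stats import chi2

    def pearson_chi2(table):
        """Chi-squared statistic of equation (3) for an R x C table."""
        obs = np.asarray(table, dtype=float)
        # expected counts under independence: row total x column total / N
        expected = np.outer(obs.sum(axis=1), obs.sum(axis=0)) / obs.sum()
        stat = ((obs - expected) ** 2 / expected).sum()
        df = (obs.shape[0] - 1) * (obs.shape[1] - 1)
        return stat, chi2.sf(stat, df)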

5. Performance Persistence Tests

5.1. Decile performance-ranked portfolio strategies

Funds are grouped into portfolios ranked on the level of past performance and allocated to deciles. Performance is measured over one-, three-, five- and nine-year periods using monthly data from 01/01/1990 to 01/01/2006. The statistical test used is the Spearman rank correlation coefficient. The rationale of the method is to break the distribution into deciles based on past performance in order to detect persistence among the selected groups through the subsequent years. A weakness of this methodology is that researchers arbitrarily choose the past-performance benchmark that is compared with subsequent years. Dimson and Minio-Kozerski (2001) found that there is managerial performance persistence during the first two years of the life of the funds. Similarly, Allen and Tan (1999) used the first two-year period as a benchmark, compared with the subsequent two-year period. Our approach differs in the choice of subsequent years: we test for short- and long-term persistence by using the first two years, measured as the average NAV return, as the benchmark, and test for persistence over the first year, the first three years, the first five years, and the nine-year period starting at the fifth year.

Table 7 shows the deciles and the Spearman rank correlation coefficients, together with t-tests. Spearman rank correlation is a non-parametric test measuring the correlation between the ranks of the deciles over the subsequent performance periods. In other words, Spearman’s rank correlation test checks for the existence, strength and direction of a relationship between two rankings.
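As a check on the mechanics, a short Python sketch of the test; feeding in the decile ranks against the 3Y column of Table 7 below reproduces the reported coefficient of -0,73 (p ≈ 0,02):

    from scipy.stats import spearmanr

    deciles = list(range(1, 11))                 # 1 = highest past performance
    perf_3y = [0.8, 1.38, 2.15, 1.13, 1.65,      # 3Y column of Table 7
               -0.02, -0.23, -0.09, -0.35, -0.07]

    rho, p_value = spearmanr(deciles, perf_3y)   # rho ~ -0.73, p ~ 0.02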

Table 7 Managerial performance persistence of the UK market

Funds are ranked on past performance, grouped into portfolios over one-, three-, five-, nine- and twelve-year horizons, and allocated to deciles. Spearman rank correlation coefficients are computed in SPSS between the value of each decile's average performance and its rank. The sample includes 210 UK investment trusts over the period 1990-2006.

Decile of average NAV performance    1Y       3Y       5Y       9Y       12Y
1 (highest performance)              -0,7     0,8      0,86     0,89     1,1
2                                    0,92     1,38     1,29     0,96     1,29
3                                    0,42     2,15     1,94     0,99     1,2
4                                    -1,36    1,13     1,62     1,41     1,76
5                                    0,35     1,65     1,51     1,1      -0,29
6                                    -0,4     -0,02    -0,17    -0,79    -0,59
7                                    -0,52    -0,23    -0,12    -0,1     -0,08
8                                    -0,45    -0,09    -0,08    -0,004   -0,003
9                                    -0,15    -0,35    -0,09    -0,01    -0,01
10 (lowest performance)              -0,28    -0,07    -0,02    -0,01    -0,01

Spearman rank correlation test
Correlation coefficient              -0,04    -0,73*   -0,57    -0,57    -0,52
Significance (t-statistic)           0,90     0,02     0,08     0,08     0,12
Number of observations               12       36       60       108      144

Source: author's calculations. ** denotes a t-value statistically significant at the 1% level; * denotes a t-value statistically significant at the 5% level.

The weak evidence of performance persistence in the UK contradicts Gruber's (1996) findings of performance persistence in US mutual funds. Gruber used the same methodology as ours: funds were ranked and placed into deciles on the basis of past returns. His sample was free of survivorship bias, as is ours, and he used five years of data (1990-1994) to examine S&P 500 index funds and bond index funds; we use a larger sample of 16 years of data. He found that all of the rank correlations were statistically significant at the 1% significance level. In our study, the three-year horizon shows a negative and statistically significant Spearman coefficient (-0,73, p = 0,02), which is evidence of weak performance persistence driven by funds that performed badly during the short term of the first three years. Our results are consistent with Hendricks, Patel, and Zeckhauser (1993), Goetzmann and Ibbotson (1994), Brown and Goetzmann (1995), and Wermers (1996), who find evidence of persistence in US mutual fund performance over relatively short-term horizons of one to three years.

5.2. Contingency tables over two consecutive periods

Performance persistence of UK investment trusts is tested using contingency tables of winners and losers. The purpose of this technique is to test whether the observed and expected frequencies of both alive and dead funds are significantly different in various periods of time, using data from 1990 to 2006. The test identifies whether persistence arises from funds that have positive returns during successive periods or negative returns in consecutive periods. The funds are ranked based on their raw average NAV return. Our methodology is similar to that used by Tonks (2000), Allen and Tan (1999) and Carpenter and Lynch (1999), who find that the chi-square test based on the number of winners and losers is well specified and robust to the presence of survivorship bias. Our sample size varies among the different periods because the dead funds have an incomplete data set and exit the industry at various times throughout the life of the sample.

The hypotheses to be tested are as follows:

H0: Performance persistence is independent among winners and losers over successive one-year periods.
H1: There is an association in performance persistence among winners and losers over successive one-year periods.

Table 8 shows the results of a 2x2 contingency table of the raw average NAV return of the whole sample, comprising 210 investment trusts both alive and dead, for the successive period 1990 - 1991. We use the chi-square statistic to test for an association of performance persistence among winners and losers, or for independence and a weak form of persistence.

Table 8 Contingency table for winners and losers for the period 1990 - 1991.

Funds are ranked on past average NAV performance and grouped into portfolios over a one-year period. A chi-square test is computed to test for independence or association among winners (W) and losers (L). The sample includes 210 UK investment trusts over the period 1990 - 1991.

                       1991
1990        W               L              Total
W           2 (1,02%)       4 (28,57%)     6
L           194 (98,98%)    10 (71,42%)    204
Total       196             14             210

Sample size: 210; DF: 1; critical value: 3,841; χ² = 35,74*

Source: author calculation. * denotes a chi-square statistic statistically significant at the 5% level for a two-tailed test.

The results from the above table provide evidence of strong persistence over a one-year interval. The chi-squared test reveals a significant statistic of 35,74, a sign of strong association of persistence among winners and losers. In terms of percentage points, losers in the previous year that continued to be losers in the following year accounted for 71,42%. In this case we reject the null hypothesis in favour of the alternative. We also applied the Yates correction for a 2 x 2 contingency table; since the statistic lies far from the critical value, the correction leads to the same conclusion as the original test.
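For reference, the counts in Table 8 reproduce the reported statistic exactly, and scipy applies the Yates correction on request; a short sketch:

    import numpy as np
    from scipy.stats import chi2_contingency

    table8 = np.array([[2, 4],        # 1990 winners: 1991 W, L
                       [194, 10]])    # 1990 losers:  1991 W, L

    stat, p, dof, expected = chi2_contingency(table8, correction=False)  # 35.74
    stat_y, p_y, _, _ = chi2_contingency(table8, correction=True)        # Yates-corrected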

Table 9 shows the results of a 2x2 contingency table of the raw average NAV return of the whole sample, comprising 210 investment trusts both alive and dead, for the successive period 1991 - 1992.

Table 9 Contingency table for winners and losers for the period 1991 - 1992.

Funds are ranked on past average NAV performance and grouped into portfolios over a one-year period. A chi-square test is computed to test for independence or association among winners (W) and losers (L). The sample includes 210 UK investment trusts over the period 1991 - 1992.

                       1992
1991        W               L              Total
W           174 (95,60%)    8 (28,57%)     182
L           8 (4,40%)       20 (71,42%)    28
Total       182             28             210

Sample size: 210; DF: 1; critical value: 3,841; χ² = 94,36*

Source: author calculation. * denotes a chi-square statistic statistically significant at the 5% level for a two-tailed test.

The results from Table 9 provide evidence of strong persistence over a one-year interval. The chi-squared test reveals a significant statistic of 94,36, a sign of strong association of persistence among winners and losers. In this case we reject the null hypothesis in favour of the alternative. We also applied the Yates correction for a 2 x 2 contingency table; since the statistic lies far from the critical value, the correction leads to the same conclusion as the original test.

Table 10 shows the results of a 2x2 contingency table of the raw average NAV return of the whole sample, comprising 210 investment trusts both alive and dead, for the successive period 1992 - 1993.

Table 10 Contingency table for winners and losers for the period 1992 - 1993.

Funds are ranked on past average NAV performance and grouped into portfolios over a one-year period. A chi-square test is computed to test for independence or association among winners (W) and losers (L). The sample includes 210 UK investment trusts over the period 1992 - 1993.

                       1993
1992        W               L              Total
W           172 (85,57%)    3 (33,33%)     175
L           29 (14,43%)     6 (66,66%)     35
Total       201             9              210

Sample size: 210; DF: 1; critical value: 3,841; χ² = 16,93*

Source: author calculation. * denotes a chi-square statistic statistically significant at the 5% level for a two-tailed test.

The results from the above table provide evidence of strong persistence over a one-year interval. The chi-squared test reveals a significant statistic of 16,93, a sign of strong association of persistence among winners and losers. In this case we reject the null hypothesis in favour of the alternative.

Table 11 shows the results of a 2x2 contingency table of the raw average NAV return of the whole sample, comprising 210 investment trusts both alive and dead, for the successive period 1993 - 1994.

Table 11 Contingency table for winners and losers for the period 1993 - 1994.

Funds are ranked on past average NAV performance and grouped into portfolios over a one-year period. A chi-square test is computed to test for independence or association among winners (W) and losers (L). The sample includes 210 UK investment trusts over the period 1993 - 1994.

                       1994
1993        W               L              Total
W           43 (84,31%)     157 (98,74%)   200
L           8 (15,69%)      2 (1,26%)      10
Total       51              159            210

Sample size: 210; DF: 1; critical value: 3,841; χ² = 17,71*

Source: author calculation. * denotes a chi-square statistic statistically significant at the 5% level for a two-tailed test.

The results from the above table provide evidence of strong persistence over a one-year interval. The chi-squared test reveals a significant statistic of 17,71, a sign of strong association of persistence among winners and losers. In this case we reject the null hypothesis in favour of the alternative.

Table 12 shows the results of a 2x2 contingency table of the raw average NAV return of the whole sample, comprising 191 investment trusts both alive and dead, for the successive period 1994 - 1995.

Table 12 Contingency table for winners and losers for the period 1994 - 1995.

Funds are ranked on past average NAV performance and grouped into portfolios over a one-year period. A chi-square test is computed to test for independence or association among winners (W) and losers (L). The sample includes 191 UK investment trusts over the period 1994 - 1995.

                       1995
1994        W               L              Total
W           7 (5,88%)       23 (31,94%)    30
L           112 (94,12%)    49 (68,06%)    161
Total       119             72             191

Sample size: 191; DF: 1; critical value: 3,841; χ² = 23,01*

Source: author calculation. * denotes a chi-square statistic statistically significant at the 5% level for a two-tailed test.

The results from the above table provide evidence of strong persistence over a one-year interval. The chi-squared test reveals a significant statistic of 23,01, a sign of strong association of persistence among winners and losers. In terms of percentage points, only 5,88 per cent of the previous year's winners repeated as winners, while losers over the two consecutive years account for 68,06 per cent in column-percentage terms.

Table 13 shows the results of a 2x2 contingency table of the raw average NAV return of the whole sample, comprising 210 investment trusts both alive and dead, for the successive period 1995 - 1996.

Table 13 Contingency table for winners and losers for the period 1995 - 1996.

Funds are ranked on past average NAV performance and grouped into portfolios over a one-year period. A chi-square test is computed to test for independence or association among winners (W) and losers (L). The sample includes 210 UK investment trusts over the period 1995 - 1996.

                       1996
1995        W               L              Total
W           160 (91,43%)    25 (71,43%)    185
L           15 (8,57%)      10 (28,57%)    25
Total       175             35             210

Sample size: 210; DF: 1; critical value: 3,841; χ² = 11,65*

Source: author calculation. * denotes a chi-square statistic statistically significant at the 5% level for a two-tailed test.

The results from the above table provide evidence of strong persistence over a one-year interval. The chi-squared test reveals a significant statistic of 11,65, a sign of strong association of persistence among winners and losers. In this case we reject the null hypothesis in favour of the alternative. We also applied the Yates correction for a 2 x 2 contingency table; since the statistic lies far from the critical value, the correction leads to the same conclusion as the original test. In column-percentage terms, repeat winners (WW) account for 91,43% and repeat losers (LL) for 28,57%.

Table 14 shows the results of a 2x2 contingency table of the raw average NAV return of the whole sample, comprising 210 investment trusts both alive and dead, for the successive period 1996 - 1997.

Table 14 Contingency table for winners and losers for the period 1996 - 1997.

Funds are ranked on past average NAV performance and grouped into portfolios over a one-year period. A chi-square test is computed to test for independence or association among winners (W) and losers (L). The sample includes 210 UK investment trusts over the period 1996 - 1997.

                       1997
1996        W               L              Total
W           172 (93,45%)    12 (46,15%)    184
L           12 (6,52%)      14 (53,85%)    26
Total       184             26             210

Sample size: 210; DF: 1; critical value: 3,841; χ² = 47,01*

Source: author calculation. * denotes a chi-square statistic statistically significant at the 5% level for a two-tailed test.

The results from the above table provide evidence of strong persistence over a one-year interval. The chi-squared test reveals a significant statistic of 47,01, a sign of strong association of persistence among winners and losers. In this case we reject the null hypothesis in favour of the alternative. In column-percentage terms, winner persistence (WW) accounted for 93,45% and loser persistence (LL) for 53,85%.

Table 15 shows the results of a 2x2 contingency table of the raw average NAV return of the whole sample, comprising 163 investment trusts both alive and dead, for the successive period 1997 - 1998.

Table 15 Contingency table for winners and losers for the period 1997 - 1998.

Funds are ranked on past average NAV performance and grouped into portfolios over a one-year period. A chi-square test is computed to test for independence or association among winners (W) and losers (L). The sample includes 163 UK investment trusts over the period 1997 - 1998.

                       1998
1997        W               L              Total
W           109 (90,83%)    22 (51,16%)    131
L           11 (9,17%)      21 (48,84%)    32
Total       120             43             163

Sample size: 163; DF: 1; critical value: 3,841; χ² = 31,59*

Source: author calculation. * denotes a chi-square statistic statistically significant at the 5% level for a two-tailed test.

The results from the above table provide evidence of strong persistence over a one-year interval. The chi-squared test reveals a significant statistic of 31,59, a sign of strong association of persistence among winners and losers.

Table 16 shows the results of a 2x2 contingency table of the raw average NAV return of the whole sample, comprising 145 investment trusts both alive and dead, for the successive period 1998 - 1999.

Table 16 Contingency table for winners and losers for the period 1998 - 1999.

Funds are ranked on past average NAV performance and grouped into portfolios over a one-year period. A chi-square test is computed to test for independence or association among winners (W) and losers (L). The sample includes 145 UK investment trusts over the period 1998 - 1999.

                       1999
1998        W               L              Total
W           86 (67,19%)     13 (76,47%)    99
L           42 (32,81%)     4 (23,53%)     46
Total       128             17             145

Sample size: 145; DF: 1; critical value: 3,841; χ² = 0,59

Source: author calculation

The results from the above table provide evidence of only weak persistence over the one-year interval. The chi-squared statistic of 0,59 is a sign of no association of persistence among winners and losers, so we cannot reject the null hypothesis. The Yates correction for a 2 x 2 contingency table gives a value of 0,79, which likewise cannot reject the null hypothesis.

Table 17 shows the results of a 2x2 contingency table of the raw average NAV return of the whole sample, comprising 145 investment trusts both alive and dead, for the successive period 1999 - 2000.

Table 17 Contingency table for winners and losers for the period 1999 - 2000.

Funds are ranked on past average NAV performance and grouped into portfolios over a one-year period. A chi-square test is computed to test for independence or association among winners (W) and losers (L). The sample includes 145 UK investment trusts over the period 1999 - 2000.

                       2000
1999        W               L              Total
W           58 (93,55%)     79 (95,18%)    137
L           4 (6,45%)       4 (4,82%)      8
Total       62              83             145

Sample size: 145; DF: 1; critical value: 3,841; χ² = 0,18

Source: author calculation

The results from the above table provide evidence of only weak persistence over the one-year interval. The chi-squared statistic of 0,18 is a sign of no association of persistence among winners and losers, so we cannot reject the null hypothesis. The Yates correction gives a statistic of 0,28, still well below the critical value, and therefore we cannot reject the null hypothesis.

Table 18 shows the results of a 2x2 contingency table of the raw average NAV return of the whole sample, comprising 128 investment trusts both alive and dead, for the successive period 2000 - 2001.

Table 18 Contingency table for winners and losers for the period 2000 - 2001.

Funds are ranked on past average NAV performance and grouped into portfolios over a one-year period. A chi-square test is computed to test for independence or association among winners (W) and losers (L). The sample includes 128 UK investment trusts over the period 2000 - 2001.

                       2001
2000        W               L              Total
W           18 (27,27%)     11 (17,74%)    29
L           48 (72,73%)     51 (82,26%)    99
Total       66              62             128

Sample size: 128; DF: 1; critical value: 3,841; χ² = 1,98

Source: author calculation

The results from the above table provide evidence of no persistence over the one-year interval. The chi-squared statistic of 1,98 is a sign of no association of persistence among winners and losers. The Yates correction gives a statistic of 2,07, below the critical value, which again favours the null hypothesis.

Table 19 shows the results of a 2x2 contingency table of the raw average NAV return of the whole sample, comprising 120 investment trusts both alive and dead, for the successive period 2001 - 2002.

Table 19 Contingency table for winners and losers for the period 2001 - 2002.

Funds are ranked on past average NAV performance and grouped into portfolios over a one-year period. A chi-square test is computed to test for independence or association among winners (W) and losers (L). The sample includes 120 UK investment trusts over the period 2001 - 2002.

                       2002
2001        W               L              Total
W           10 (62,50%)     8 (7,69%)      18
L           6 (37,50%)      96 (92,31%)    102
Total       16              104            120

Sample size: 120; DF: 1; critical value: 3,841; χ² = 32,67*

Source: author calculation. * denotes a chi-square statistic statistically significant at the 5% level for a two-tailed test.

The results from the above table provide evidence of strong persistence over a one-year interval. The chi-squared test reveals a significant statistic of 32,67, a sign of strong association of persistence among winners and losers. In this case we reject the null hypothesis in favour of the alternative. The Yates correction does not change this conclusion, as the statistic lies far above the critical value.

Table 20 shows the results of a 2x2 contingency table of the raw average NAV return of the whole sample, comprising 115 investment trusts both alive and dead, for the successive period 2002 - 2003.

Table 20 Contingency table for winners and losers for the period 2002 - 2003.

Funds are ranked on past average NAV performance and grouped into portfolios over a one-year period. A chi-square test is computed to test for independence or association among winners (W) and losers (L). The sample includes 115 UK investment trusts over the period 2002 - 2003.

                       2003
2002        W               L              Total
W           16 (14,68%)     3 (50%)        19
L           93 (85,32%)     3 (50%)        96
Total       109             6              115

Sample size: 115; DF: 1; critical value: 3,841; χ² = 1,02

Source: author calculation

The results from the above table provide evidence of no persistence over the one-year interval. The chi-squared statistic of 1,02 is a sign of no association of persistence among winners and losers. In column-percentage terms, the persistence of losers over both years accounts for 50%, while winner persistence (WW) generates 14,68%. In this case we cannot reject the null hypothesis. The Yates correction for a 2 x 2 contingency table generates a figure of 2,18, still below the critical value, so we cannot reject the null hypothesis in favour of the alternative.

Table 21 shows the results of a 2x2 contingency table of the raw average NAV return of the whole sample, comprising 112 investment trusts both alive and dead, for the successive period 2003 - 2004.

Table 21 Contingency table for winners and losers for the period 2003 - 2004.

Funds are ranked on past average NAV performance and grouped into portfolios over a one-year period. A chi-square test is computed to test for independence or association among winners (W) and losers (L). The sample includes 112 UK investment trusts over the period 2003 - 2004.

                       2004
2003        W               L              Total
W           103 (97,17%)    6 (100%)       109
L           3 (2,83%)       0 (0%)         3
Total       106             6              112

Sample size: 112; DF: 1; critical value: 3,841; χ² = 0,29

Source: author calculation

The results from the above table provide no evidence of persistence over the one-year interval. The chi-squared statistic of 0,29 is a sign of no association of persistence among winners and losers, so we cannot reject the null hypothesis. The Yates correction generates a figure of 3,07, which is still below the critical value of 3,84, so we cannot reject the null hypothesis.

Table 22 shows the results of a 2x2 contingency table of the raw average NAV return of the whole sample, comprising 106 investment trusts both alive and dead, for the successive period 2004 - 2005.

Table 22 Contingency table for winners and losers for the period 2004 - 2005.

Funds are ranked on past average NAV performance and grouped into portfolios over a one-year period. A chi-square test is computed to test for independence or association among winners (W) and losers (L). The sample includes 106 UK investment trusts over the period 2004 - 2005.

                       2005
2004        W               L              Total
W           99 (97,06%)     4 (100%)       103
L           3 (2,94%)       0 (0%)         3
Total       102             4              106

Sample size: 106; DF: 1; critical value: 3,841; χ² = 0,12

Source: author calculation

The results from the above table provide no evidence of persistence over the one-year interval. The chi-squared statistic of 0,12 is a sign of no association of persistence among winners and losers. The Yates correction for a 2 x 2 contingency table generates a figure of 3,48, below the critical value of 3,84, so we cannot reject the null hypothesis.

Table 23 shows the results of a 2x2 contingency table of the raw average NAV return of the whole sample, comprising 102 investment trusts both alive and dead, for the successive period 2005 - 2006.

Table 23 Contingency table for winners and losers for the period 2005 - 2006.

Funds are ranked on past average NAV performance and grouped into portfolios over a one-year period. A chi-square test is computed to test for independence or association among winners (W) and losers (L). The sample includes 102 UK investment trusts over the period 2005 - 2006.

                       2006
2005        W               L              Total
W           86 (96,63%)     13 (100%)      99
L           3 (3,37%)       0 (0%)         3
Total       89              13             102

Sample size: 102; DF: 1; critical value: 3,841; χ² = 0,45

Source: author calculation

The results from the above table provide no evidence of persistence over the one-year interval. The chi-squared statistic of 0,45 is a sign of no association of persistence among winners and losers, so we cannot reject the null hypothesis. The Yates correction generates a figure of 2,05, below the critical value, and therefore we cannot reject the null hypothesis.

In summary, most of the chi-square tests show significant evidence of persistence, from funds with positive returns during successive periods or negative returns in consecutive periods. In terms of percentage points, losers in the previous year continued to be losers in the following years, and similarly for winners. A possible explanation is the market timing ability of the managers in the case of the winners and the continuation of negative sentiment in the case of the losers.

5.3. Multi-period analysis of risk-adjusted excess NAV return


We test the hypothesis of Grinblatt and Titman (1992), Elton, Gruber, Das and Hlavka (1993), and Elton, Gruber, Das and Blake (1996a), who documented mutual fund return persistence over longer horizons of five to ten years and attributed the persistence to skilled managerial performance. The method also allows us to test the results of Hendricks, Patel and Zeckhauser (1993), Goetzmann and Ibbotson (1994), Brown and Goetzmann (1995) and Wermers (1996), who found evidence of performance persistence in mutual fund performance over relatively short-term horizons of one to three years and attributed it to the market timing abilities of portfolio managers. The rolling approach is consistent with Gruber (1996), Fama and French (1993) and Carhart (1997).

Following the Treynor and Mazuy (1966) model, the extra variable tested is the square of the market return, included as an attempt to capture market timing ability. The intercepts (α's) from the regression equations for one, three, five, nine and twelve years respectively are used to measure the contribution of the manager to the performance of the fund. A positive and statistically significant alpha indicates superior managerial performance of the fund, whereas negative or statistically insignificant values represent inferior or neutral managerial performance. In their paper, Treynor and Mazuy (1966) assume that if a mutual fund is not engaged in market timing and maintains a constant fund beta, the relationship between the fund return and the return on the benchmark will be linear. However, if the fund is successful at market timing, the fund return will be higher than the benchmark return, and the relationship between the two will be non-linear. Thus we can test for timing ability by testing for this non-linearity. To do this, we include the square of the market return as an additional independent variable (with coefficient γ). A negative or zero γ means that fund managers do not have market timing ability, whereas a positive γ would imply that they do.

The hypotheses to be tested are as follows:

H0: α = 0, γ ≤ 0: fund managers have inferior or neutral performance over the successive years

H1: α ≠ 0, γ > 0: fund managers have superior performance over the successive years

The model that will be used is the following:

RNAV(s,t) = α + β1(Rm,t − Rf,t) + γ(Rm,t)² + εt    (4)

where β1 and γ are the coefficients measuring the sensitivity to each factor, RNAV(s,t) is the monthly excess NAV return for each sector, and Rf,t is the return on one-month Treasury bills. The intercept, α, from regressing NAV excess returns on the market measures the contribution of the manager, and γ is the coefficient that measures market timing ability. Before conducting the regression tests, we test for autocorrelation, stationarity and normality of the excess NAV return and for multicollinearity of the independent variables.


Autocorrelation of the excess NAV return

Autocorrelation analysis is the first step in characterising the time-series properties of the excess NAV return. Table 24 shows the autocorrelation of the excess NAV return of all UK investment trusts.

Table 24 Autocorrelation of the excess NAV return of UK investment trusts.

The table shows the average first to twelfth-order autocorrelation levels. The t-statistics of the autocorrelation coefficients are shown. The results are based on the average for all UK investment trusts. We use monthly data from January 1990 to January 2006.

Lag    AC        t-stat
1      -0,577    -10,24
2      0,048     0,70
3      0,128     1,87
4      -0,124    -1,81
5      0,032     0,46
6      0,043     0,62
7      -0,06     -0,87
8      -0,006    -0,09
9      0,02      0,29
10     0,043     0,62
11     -0,023    -0,33
12     -0,057    -0,83

Source: calculated by the author

According to Table 24, the first-order autocorrelation coefficient is negative at -0,58, and the higher-order coefficients decay towards zero. The t-statistics of the autocorrelation coefficients of the excess NAV return are insignificant at all lags except the first, implying that there is no serious autocorrelation problem beyond the first lag.
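Coefficients of this kind can be computed as sketched below (Python with statsmodels; the placeholder series stands in for our excess NAV return, and r·√T is only the usual large-sample approximation to the t-ratio, not necessarily the exact statistic reported above):

    import numpy as np
    from statsmodels.tsa.stattools import acf

    rng = np.random.default_rng(2)
    excess_nav = rng.normal(0, 3, 193)             # placeholder monthly excess NAV returns

    ac = acf(excess_nav, nlags=12)                 # first- to twelfth-order coefficients
    t_ratios = ac[1:] * np.sqrt(len(excess_nav))   # approximate t-statistics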

Stationarity

A non-stationary series tends to show statistically significant but spurious correlation when variables are regressed on one another. We therefore test whether the excess NAV return on UK investment trusts is stationary.

Table 25 shows the ADF test for the excess NAV return of the UK investment trust sector.

Table 25 ADF test of the UK excess NAV return by excluding a constant and a trend.

Table 20 shows ADF test of the excess NAV return by all investment trusts for the period January 1990 to January 2006 for two different critical values one per cent and five per cent. We test if excess NAV return follows a random walk by excluding a constant and a linear time trend.

ADF Test Statistic: -9,86
1% Critical Value*: -3,46
5% Critical Value:  -2,87

* MacKinnon critical values for rejection of the hypothesis of a unit root.
Source: calculated by the author

For a significance level of 1 per cent and a sample size larger than 100 observations, the critical value of the t-statistic from Dickey-Fuller's tables for no intercept and no trend is -3,46. According to Table 25, we can reject the null hypothesis of a unit root at the one per cent significance level: the ADF test statistic is -9,86. In other words, the excess NAV return is stationary.
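A minimal sketch of the ADF test with no constant and no trend follows, again on a placeholder series; regression="n" is the statsmodels option for this specification (spelled "nc" in older versions).

import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(2)
excess_nav = rng.normal(0, 3, 193)        # placeholder monthly excess NAV returns

stat, pvalue, usedlag, nobs, crit, _ = adfuller(excess_nav, regression="n")
print(f"ADF statistic: {stat:.2f}")
print(f"1% critical value: {crit['1%']:.2f}, 5%: {crit['5%']:.2f}")
# A statistic well below the 1% critical value rejects the unit-root null,
# i.e. the series is stationary.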

Normality

This section focuses on tests of normality of the dependent variable, namely the excess NAV return. We show a histogram, the Jarque-Bera test and related descriptive statistics. Table 26 and Figure 1 show the results of the Jarque-Bera test, which is used to test whether the series is normal or non-normal. The test statistic follows the chi-squared distribution; it is a goodness-of-fit test. We state the hypotheses as follows:

H0: Excess NAV return is normally distributed
H1: Excess NAV return is not normally distributed

Table 26 and Figure 1 Jarque-Bera normality test of UK excess NAV return.

Table 26 and Figure 1 show the results of the Jarque-Bera test of normality and related descriptive statistics for the UK excess NAV return. We use monthly data from January 1990 to January 2006.

[Figure 1: histogram of the UK excess NAV return over the range -10.0 to 7.5]

Series: RT
Sample: 2 211
Observations: 210
Mean: -2.422089
Median: -0.753087
Maximum: 7.052118
Minimum: -10.18822
Std. Dev.: 3.181549
Skewness: -0.211289
Kurtosis: 2.666643
Jarque-Bera: 2.534870
Probability: 0.281553

From the above table, the χ² statistic of 2.53 is below the critical value at the 5% significance level, so we do not reject H0, even though the distribution is slightly negatively skewed and its kurtosis (2.67) is slightly below that of the normal distribution.
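For illustration, the Jarque-Bera statistic can be computed directly from the sample skewness and kurtosis, as in the sketch below; the series is a simulated placeholder with moments loosely resembling those reported above.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
excess_nav = rng.normal(-2.4, 3.2, 210)   # placeholder for the 210 observations

s = stats.skew(excess_nav)
k = stats.kurtosis(excess_nav, fisher=False)   # raw kurtosis (normal = 3)
n = len(excess_nav)
jb = n / 6 * (s ** 2 + (k - 3) ** 2 / 4)       # Jarque-Bera statistic
crit = stats.chi2.ppf(0.95, df=2)              # 5% chi-squared(2) critical value, about 5.99
print(f"JB = {jb:.2f}, 5% critical value = {crit:.2f}")
# A JB statistic below the critical value means we do not reject normality,
# as with the 2.53 reported above.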

Table 27 summarises the results of the GMM conditional model of Treynor and Mazuy (1966) that includes the market timing ability variable. The sample includes 16 sectors of UK investment trusts with a total number of 210 funds.

Table 27 Conditional model of UK excess NAV return

We use 12, 36, 60 and 144 observations by applying a rolling methodology. The sample includes 16 sectors of UK investment trusts with a total of 210 funds, both alive and dead. The intercept, α, from regressing NAV excess returns on the market measures the contribution of the manager, and γ is the coefficient that measures market timing ability.

Columns: 1Y W | 1Y L | 3Y W | 3Y L | 5Y W | 5Y L | 12Y W | 12Y L (coefficient with t-statistic in parentheses; W = winners, L = losers)

Global Growth
  α: -0,17 (-0,17) | 0,11 (0,23) | 0,51 (1,57) | 0,50 (1,82)* | 0,56 (2,18)* | 0,39 (1,99)* | 0,34 (1,72) | 0,28 (2,05)*
  γ: -0,01 (-1,12) | -0,03 (-3,49)** | -0,01 (-1,81)* | -0,01 (-2,55)* | -0,01 (-1,50) | -0,01 (-1,63) | -0,01 (-0,98) | -0,01 (-1,36)

Global Growth & Income
  α: 0,30 (0,44) | -1,59 (-1,74) | 1,01 (1,22) | -0,35 (-0,54) | 0,15 (0,22) | -0,24 (-1,08) | 0,39 (0,51) | -0,01 (-0,03)
  γ: -0,05 (-4,51)** | 0,00 (0,11) | -0,05 (2,09)* | -0,01 (-1,29) | 0,04 (1,01) | 0,01 (0,17) | -0,03 (-2,00)* | -0,03 (-1,10)

Global Smaller Companies
  α: -0,61 (-1,11) | -0,12 (-0,10) | 0,40 (0,71) | -0,41 (-0,99) | 0,36 (0,89) | -0,21 (-0,55) | N/A | 0,95 (1,26)
  γ: -0,01 (-1,61) | -0,04 (-2,62)* | -0,02 (2,57)* | -0,01 (-0,86) | -0,01 (-0,86) | -0,01 (-0,58) | N/A | 0,01 (0,73)

UK Growth
  α: -0,07 (-0,29) | 0,48 (0,27) | -0,22 (-0,86) | 0,27 (0,58) | -0,09 (-0,35) | 0,34 (1,36) | 0,33 (0,75) | 0,33 (0,75)
  γ: -0,01 (-3,64)** | -0,04 (-1,45) | -0,01 (-0,83) | -0,01 (-1,60) | -0,00 (-0,04) | -0,01 (-2,55)* | 0,01 (0,77) | 0,01 (0,77)

UK Growth & Income
  α: -0,30 (-1,13) | 0,16 (0,46) | -0,41 (-1,80)* | 0,11 (0,17) | 0,30 (1,14) | 0,04 (0,17) | 0,40 (1,04) | 0,51 (1,03)
  γ: -0,01 (-2,66)** | -0,01 (-3,03)** | 0,00 (0,21) | -0,01 (-1,06) | -0,00 (-0,83) | -0,01 (-1,36) | 0,01 (0,71) | 0,01 (0,89)

UK Smaller Companies
  α: 0,21 (0,33) | -0,57 (-1,30) | 0,72 (1,38) | 0,14 (0,30) | 0,59 (1,30) | 0,80 (1,85)* | 0,69 (1,35) | 0,64 (0,90)
  γ: -0,02 (-1,69) | -0,04 (-2,59)* | -0,02 (-1,32) | -0,02 (3,42)** | -0,02 (-1,76)* | -0,04 (-3,07)** | -0,00 (-0,01) | 0,01 (0,85)

North America
  α: -0,27 (-0,20) | -0,70 (-0,60) | 0,67 (0,88) | 0,32 (0,46) | 0,70 (1,13) | 0,15 (0,35) | -0,04 (-0,11) | 0,04 (0,07)
  γ: -0,02 (-1,12) | -0,03 (-1,86)* | -0,02 (1,11) | -0,01 (-0,98) | -0,01 (-0,78) | -0,01 (-0,38) | 0,01 (1,12) | 0,02 (1,50)

North America Smaller Companies
  α: -1,93 (-2,56)** | -0,39 (-0,48) | -0,81 (-1,41) | -0,41 (-0,69) | -0,09 (-0,17) | 0,22 (0,68) | N/A | -1,12 (-0,88)
  γ: -0,00 (-0,10) | -0,06 (-4,04)** | -0,01 (-1,11) | 0,01 (0,42) | -0,01 (-0,65) | -0,01 (-1,17) | N/A | 0,02 (1,01)

Far East (Including Japan)
  α: -1,09 (-2,21) | 0,80 (1,43) | 0,47 (1,36) | 0,40 (1,21) | 0,51 (1,79) | 0,60 (2,03)* | 0,64 (1,74) | -0,27 (-0,44)
  γ: -0,01 (-0,82) | -0,05 (-5,67)** | -0,03 (2,62)* | -0,01 (-2,18)* | -0,01 (-1,25) | -0,02 (-1,56) | 0,01 (0,51) | 0,04 (1,39)

Far East (Excluding Japan)
  α: -0,08 (-0,22) | 0,31 (0,23) | 0,64 (0,83) | 0,49 (0,81) | 0,91 (1,40) | 0,09 (0,54) | 0,51 (0,66) | 0,42 (0,70)
  γ: -0,02 (-2,37)* | -0,07 (-2,30)* | -0,02 (-1,30) | -0,03 (-1,83) | -0,02 (-1,04) | -0,00 (-0,98) | 0,04 (1,77) | 0,01 (0,91)

Japan
  α: N/A | -4,47 (-1,38) | N/A | 1,06 (0,61) | 0,74 (0,63) | 0,65 (0,89) | 0,00 (0,00) | N/A
  γ: N/A | 0,08 (0,62) | N/A | -0,03 (-0,60) | -0,03 (-0,96) | -0,03 (-1,21) | 0,04 (1,99)* | N/A

Japanese Smaller Companies
  α: -2,50 (-4,33)** | -3,47 (-2,32)* | -1,72 (-1,93)* | 0,21 (0,19) | -1,22 (-1,45) | -0,48 (-1,65) | -0,72 (-1,43) | -0,05 (-0,16)
  γ: 0,00 (0,20) | -0,07 (-0,44) | 0,03 (1,09) | -0,01 (-0,16) | 0,03 (0,85) | -0,01 (-0,10) | 0,04 (2,20)* | 0,01 (1,09)

Europe
  α: 0,23 (0,21) | -0,76 (-0,59) | -1,22 (-0,92) | -3,75 (-2,39)* | 0,86 (1,66) | 0,91 (1,95) | 0,76 (1,15) | 0,59 (0,98)
  γ: -0,02 (-0,35) | -0,03 (-0,36) | 0,03 (1,02) | 0,01 (0,14) | -0,03 (-1,70) | -0,03 (-2,50) | -0,01 (-0,53) | 0,01 (0,83)

European Smaller Companies
  α: 3,16 (2,08)* | 1,36 (1,54) | 0,02 (0,06) | -0,46 (-0,71) | 1,59 (2,51)* | 0,76 (1,17) | -0,34 (-0,55) | N/A
  γ: -0,06 (-1,87) | -0,07 (-2,76)* | -0,02 (-1,45) | -0,01 (-0,79) | -0,03 (-1,29) | -0,03 (-2,08)* | 0,02 (0,98) | N/A

Country Specialists: Far East
  α: -4,42 (-1,18) | -0,78 (-0,91) | 4,47 (0,91) | -0,04 (-0,15) | 5,93 (1,08) | 0,08 (0,29) | -2,33 (-0,75) | 0,38 (0,64)
  γ: -1,05 (-1,59) | -0,06 (-2,91)* | -1,81 (-1,63) | -0,01 (-1,06) | -2,11 (-1,48) | -0,02 (-2,57)* | 2,33 (1,68) | 0,01 (0,85)

Sector Specialists: Property
  α: -0,94 (-1,21) | -0,19 (-0,56) | -0,61 (-0,89) | 2,53 (0,83) | -0,15 (-0,29) | 3,71 (1,19) | 0,42 (1,02) | 0,32 (0,68)
  γ: 0,01 (0,28) | -0,05 (-5,99)** | 0,00 (0,01) | -0,28 (-1,64) | 0,00 (0,26) | -0,33 (-1,61) | 0,02 (1,20) | 0,02 (1,89)

Total observations: 12 (1Y), 36 (3Y), 60 (5Y), 144 (12Y)
Source: author's calculation
* represents a t-value that is statistically significant at the 5% significance level
** represents a t-value that is statistically significant at the 1% significance level

As indicated above, a positive and statistically significant α indicates a skilled fund manager whose decisions add value to the fund, whereas negative or statistically insignificant α values represent inferior or neutral managerial performance. According to Table 27, the results are mixed. Four out of sixteen sectors display an α that is positive and statistically significant at the 5% or 1% significance level, while five out of sixteen sectors display an α that is negative and statistically significant. These mixed results provide a picture of semi-strong and strong form market efficiency in the UK through the various years. Efficiency is said to be semi-strong if today's prices reflect all publicly available information and strong if all available information, public and inside, is reflected in current share or fund prices. Fifteen out of the sixteen sectors show a negative and statistically significant market timing ability (γ). For example, Sector Specialists Property displays a negative γ that is statistically significant at the 1% significance level in the first year for losers, and UK Growth & Income shows a negative γ with t-statistics of -2,66 and -3,03, significant at the 1% level, in the first year for both winners and losers. So while there is some evidence of managerial performance persistence in the short term, there is little evidence of persistence in the long run. According to the above results, the persistence of winners and losers could be attributed to the skill of fund managers and their market timing ability to predict the movement of the market, or the lack of it.

6. Bootstrap Method

This method involves taking the whole sample under investigation and re-sampling it: we draw N samples from the initial one and re-calculate the regression coefficients and t-statistic for each new sample. The method was first introduced by Efron (1979, 1982) and then used by Freedman (1984), Freedman and Peters (1984a,b) and Efron and Tibshirani (1993). The advantage of this technique is that we can make inferences from the risk-adjusted excess NAV return regression equation without strong distributional assumptions. The sampling distribution of the mean is thus a frequency distribution of a large number of new samples, and the re-sampling can be done with or without replacement. Karolyi and Kho (2004) and Allen and Tan (1999) argue that simulations without replacement face the danger of small-sample bias. As the sample size increases, the sampling distribution of the mean approaches the normal distribution regardless of the shape of the frequency distribution of the population; the approximation is sufficiently good for n ≥ 30. This is known as the central limit theorem. The regression reported in the risk-adjusted excess NAV return section will be bootstrapped on the actual data.
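A minimal sketch of the re-sampling idea follows, with hypothetical this_year and last_year return arrays; the 80 per cent subsample size is purely an illustrative assumption, as is the simulated relationship between the two years.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n_funds = 210
last_year = rng.normal(0.5, 2.0, n_funds)                  # placeholder fund returns
this_year = 0.6 * last_year + rng.normal(0, 1.5, n_funds)  # placeholder fund returns

coefs, tstats = [], []
for _ in range(210):                       # 210 bootstrap iterations, as in the text
    # draw a subsample WITHOUT replacement and re-estimate the persistence regression
    idx = rng.choice(n_funds, size=int(0.8 * n_funds), replace=False)
    X = sm.add_constant(last_year[idx])
    res = sm.OLS(this_year[idx], X).fit()
    coefs.append(res.params[1])            # slope: persistence coefficient
    tstats.append(res.tvalues[1])

print(f"mean slope {np.mean(coefs):.2f}, mean t-stat {np.mean(tstats):.2f}")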


In addition, we test whether there is a linear trend in the time series by adding a drift term, which will reveal short- or long-term persistence or a mean-reverting tendency. The random walk (RW) equation is as follows:

$r_{NAV,i,t} = \mu + \gamma Y_{t-1} + \sum_{\lambda=1}^{4} a_\lambda \Delta Y_{t-\lambda} + \beta t + \varepsilon_t$   (5)

where $r_{NAV,i,t}$ is the total average return of all funds i at period t, the term $\sum_{\lambda=1}^{4} a_\lambda \Delta Y_{t-\lambda}$ contains the lags included so that $\varepsilon_t$ contains no autocorrelation, γ is the measure of stationarity and $\beta t$ is a measure of the time trend. $\varepsilon_t$ is the stochastic error term, assumed to be non-autocorrelated with zero mean and constant variance; such an error term is also known as a white noise error term. We test whether the excess NAV return on UK investment trusts follows a random walk with drift and trend or is stationary.

We state the hypotheses as follows:
H0: β, γ = 0 (existence of a unit root)
H1: β, γ < 0 (stationarity)

The existence of a unit root is measured using an ADF test. For a 1 per cent significance level and a sample size larger than 100 observations, the critical value of the t-statistic from Dickey-Fuller's tables is -4.02. Table 28 summarises the unit root test of the excess NAV return for UK investment trusts, including a constant and a trend.

Table 28 ADF test of UK excess NAV return by including a constant and a trend.

Table 28 shows the ADF test for the period January 1990 to January 2006, with critical values at the one per cent and five per cent significance levels. We test whether the excess NAV return follows a random walk by including a constant and a linear time trend.

ADF Test Statistic: -9.83
1% Critical Value*: -4.00
5% Critical Value:  -3.43

* MacKinnon critical values for rejection of the hypothesis of a unit root.
F-statistic: 168.07

Source: calculated by the author

According to Table 28, the sample evidence suggests that we can reject the null hypothesis of a unit root at the one per cent significance level: the test statistic of -9.83 for all UK sectors is well below the critical value of -4.00. Thus the excess NAV return is stationary. To check whether there is a time trend, we compare the F-statistic of the model with the value given in the ADF tables: our F-statistic of 168.07 exceeds 6.34, so we reject the null hypothesis.
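The same test with a constant and a linear trend can be sketched as follows; again the excess_nav series is a placeholder, and the four lags mirror equation (5).

import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(5)
excess_nav = rng.normal(0, 3, 193)        # placeholder monthly excess NAV returns

# regression="ct" includes both a constant (drift) and a linear time trend
stat, pvalue, usedlag, nobs, crit, _ = adfuller(excess_nav, regression="ct", maxlag=4)
print(f"ADF statistic: {stat:.2f}")
print(f"1% critical value: {crit['1%']:.2f}")
# A statistic more negative than the 1% critical value rejects the unit root.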

We replicate this method 210 times by sampling without replacement from the monthly returns of both surviving and dead funds over two successive years for the whole period 1990 to 2006. The regression test is computed by comparing this year's estimated and simulated returns with last year's estimated and simulated returns.


This method is superior to Monte Carlo simulation as it gives us the opportunity to test our sample repeatedly, free from distributional assumptions. The hypotheses to be tested are as follows:

H0: There is no evidence of performance persistence over the successive years among the estimated and simulated average returns of winners and losers.
H1: There is evidence of performance persistence over the successive years among the estimated and simulated average returns of winners and losers.

Table 29 summarises the bootstrapped test on the monthly returns over successive years for the whole period from 01/01/1990 to 01/01/2006. The estimated and simulated returns were computed by sampling without replacement. The test is performed 210 times over the actual data by regressing estimated on simulated data. The coefficients and t-statistics indicate the significance of performance persistence among winners and losers.

Table 29 Bootstrapped test of estimated and simulated average returns

Period       Estimated return    Simulated return
1990-1991    -2,27 (-20,42)*     -2,26 (-19,59)*
1991-1992     0,90 (8,32)*        0,99 (7,58)*
1992-1993     2,44 (2,84)*       -0,16 (-0,12)
1993-1994     1,98 (13,65)*       2,77 (22,86)*
1994-1995    -0,90 (-7,72)*      -0,69 (-4,84)*
1995-1996     0,90 (10,81)*       1,01 (12,48)*
1996-1997     0,49 (4,46)*        0,34 (3,84)*
1997-1998     0,13 (0,57)         0,11 (0,63)*
1998-1999     0,41 (3,30)*        0,59 (2,88)*
1999-2000     1,96 (11,11)*       1,98 (19,32)*
2000-2001     0,14 (0,42)         0,01 (0,04)
2001-2002    -0,44 (-5,90)*      -0,80 (-7,50)*
2002-2003    -1,83 (-8,32)*      -2,52 (-5,39)*
2003-2004     1,75 (7,42)*        1,22 (4,18)*
2004-2005     1,29 (1,57)         1,11 (2,02)*
2005-2006     0,86 (10,04)*       0,90 (10,08)*

Source: calculated by the author
* represents a t-value that is statistically significant at the 5% significance level


According to Table 29, there is evidence of performance persistence over the successive years among the estimated and simulated average returns of winners and losers, computed through the bootstrap test of 210 iterations. Most of the two-year intervals show a positive and significant t-statistic for the average estimated return, and the average simulated returns obtained from the bootstrapped test show similarly significant t-statistics. The test suggests that the persistence could be explained by a long-term mean-reversion process, as revealed through replication without replacement.

7. Conclusion

In summary, the results of most of the chi-square tests show significant evidence of persistence among funds that have positive returns during successive periods or negative returns in consecutive periods. In terms of percentage points, losers in the previous year continued to be losers in the following years, and similarly for winners. A possible explanation is the market timing ability of the managers of winning funds and the continuation of negative sentiment for losing funds.

So while there is some evidence of managerial performance persistence in the short term, there is little evidence of persistence in the long run. According to the above results, the persistence of winners and losers could be attributed to the skill of fund managers and their market timing ability to predict the movement of the market, or the lack of it.

The sample evidence also points to performance persistence over successive years among the estimated and simulated average returns of winners and losers, and the bootstrap test indicates that this persistence could be explained by a long-term mean-reversion process through replication without replacement.

References


Allen, D.E. and Tan, M.L. (1999). A Test of the Persistence in the Performance of UK Managed Funds. Journal of Business Finance and Accounting, Vol. 24, No. 2, pp. 155-178.

Agarwal, V. and Naik, N.Y. (1999). On Taking the Alternative Route: Risks, Rewards, Style and Performance Persistence of Hedge Funds. IFA Working Paper 289, London Business School.

Brown, S.J., Goetzmann, W.N., Ibbotson, R.G. and Ross, S.A. (1992). Survivorship bias in performance studies. Review of Financial Studies, 5, pp. 553-580.

Brown, S.J. and Goetzmann, W.N. (1995). Performance persistence. Journal of Finance, 50, pp. 679-698.

Carhart, M. (1997). On Persistence in Mutual Fund Performance. Journal of Finance, 52(1), pp. 57-82.

Carpenter, J.N. and Lynch, A.W. (1999). Survivorship bias and attrition effects in measures of performance persistence. Journal of Financial Economics, 54, pp. 337-374.

Dimson, E. and Minio-Kozerski (2001). The closed-end fund discount and performance persistence. Working Paper, London Business School.

Elton, E., Gruber, M., Das, S. and Hlavka, M. (1993). Efficiency with costly information: A re-interpretation of evidence from managed portfolios. Review of Financial Studies, 6, pp. 1-21.

Elton, E.J., Gruber, M.J. and Blake, C.R. (1996a). The persistence of risk-adjusted mutual fund performance. Journal of Business, 69, pp. 153-157.

Fama, E.F. and French, K.R. (1993). Common risk factors in the returns of stocks and bonds. Journal of Financial Economics, 33, pp. 3-56.

Gile, Wilsdon and Worboys (2002). Performance persistence in UK equity funds: An empirical analysis. Charles River Associates Limited (CRA) report.

Grinblatt, M. and Titman, S. (1992). The persistence of mutual fund performance. Journal of Finance, 47, pp. 1977-1984.

Grinblatt, M. and Titman, S. (1988). The evaluation of mutual fund performance: an analysis of monthly returns. Working Paper, University of California, Los Angeles.

Goetzmann, W.N. and Ibbotson, R.G. (1994). Do winners repeat? Journal of Portfolio Management, Vol. 20 (Winter), pp. 9-18.

Gruber, M. (1996). Another puzzle: The Growth in Actively Managed Mutual Funds. Journal of Finance, 51, pp. 783-810.

Heffernan, S. (2001). All UK investment trusts are not the same. City University Business School Working Paper No. 1.

Hendricks, D., Patel, J. and Zeckhauser, R. (1993). Hot hands in mutual funds: the persistence of performance, 1974-1988. Journal of Finance, 48, pp. 93-130.

Malkiel, B.G. (1995). Returns from investing in equity mutual funds 1971-1991. Journal of Finance, 50, pp. 549-572.

Sharpe, W.F. (1992). Asset Allocation: Management Style and Performance Measurement. Journal of Portfolio Management, pp. 7-19.

Tonks, I. (2002). Performance Persistence of Pension Fund Managers. Centre for Market and Public Organisation, University of Bristol.

Wermers, R. (1996). Momentum investment strategies of mutual funds, performance persistence, and survivorship bias. Working Paper, Graduate School of Business and Administration, University of Colorado at Boulder.

Wood Mackenzie Company (2002). A Comparison of Active and Passive Management of Unit Trusts. Produced for Virgin Money Personal Financial Services, Edinburgh, pp. 1-20.
