Vector Error Correction Models


02 Nov 2017


Contents

Introduction

Literature review

Methodology Plan

Vector Error Correction Models

Cointegration model – Johansen test

Phillips–Perron Test

Error Correction Model - ECM

Bibliography

Introduction

This proposal consists of two parts. The first part presents a review of the international literature on out-of-sample forecasting of stock prices and on the comparison of alternative time series models. The second part sets out a methodology plan in which the research case study is described and the models to be used are presented.

 

Literature review

Latif et al. (2011) examined stock forecasting using the Dart Board Theory, one of the most popular theories in this area. The "Dart Board Theory of Stock Selection" holds that one may choose a share by throwing a dart at the Wall Street Journal, and that the chance of success is higher than for investors who rely on expert advice. Their research argues that the important mechanism behind the success of the dart board theory is a chain reaction in stock values. Data on thirty shares chosen from the KSE 100 index (stocks reflecting about 50-60% of total market volume) were arranged in four sub-categories according to their weights in the KSE 100 index. Stock price movements were measured in percentage terms over horizons from a single day to four months. The findings indicate that movements in individual shares have a strong impact on the value of other shares. The value of expert recommendations is also assessed by relating changes in mutual fund earnings to changes in the average KSE 100 index. The findings support the chain reaction concept, since a positive correlation exists between the performance of the KSE 100 index and mutual fund earnings, and they suggest that expert recommendations are the keystones that begin a chain reaction.

According to Harrison and Moore (2012), considerable attention has been devoted to modeling and forecasting stock market volatility. Stock market volatility matters because stock markets play an integral role in the financial and economic architecture of market economies, acting as an intermediary that channels funds from savers to investors. The primary aim of their project is to forecast stock market volatility in Central and East European (CEE) countries. The basic questions are how volatility can be forecast and whether one technique consistently outperforms others. A range of techniques is explored, from the relatively simple to the more complex conditional heteroscedastic models of the GARCH family. They compare the forecasts of 12 models of volatility in the CEE countries. Their findings show that models which allow for asymmetric volatility consistently outperform all other models.

Abd El Aal (2011) investigated the performance of five models for forecasting Egyptian stock market return volatility. He used the period from 1 January 1998 until 31 December 2009 as the in-sample period and the following 30 days as the out-of-sample period. The competing models are EWMA, ARCH, GARCH, GJR, and EGARCH. He also tested for ARCH effects to check the validity of using the GARCH family to predict the volatility of the market indices. The empirical findings show that EGARCH is the best of the examined models according to the usual statistical evaluation metrics (RMSE, MAE, and MAPE). However, when he used the Diebold and Mariano (DM) test to examine the significance of the difference between the errors of the volatility forecasting models, he found no significant difference between the errors of the competing models. The results also reject the null hypothesis of a homoscedastic normal process for both the EGX30 and CIBC100 indices.

In addition, Janchung (2009) applied a general equilibrium model of stock index futures with both stochastic market volatility and stochastic interest rates to the TAIFEX and the SGX, and compared the forecasting power of the general equilibrium models. His work is also the first attempt to examine which of five volatility estimators can improve the forecasting performance of the general equilibrium model. The effects of the up-tick rule and of various other explanatory indicators on mispricing are also examined within a regression framework. Overall, the general equilibrium model outperforms the cost-of-carry model in predicting prices of TAIFEX and SGX futures. This result suggests that, given the high volatility of the Taiwan stock market, incorporating stochastic market volatility into the pricing model helps in forecasting the values of these two futures. Moreover, the comparison of different volatility estimators indicates that the EWMA and GARCH(1,1) estimators improve the forecasting performance of the general equilibrium model relative to the other estimators, and that relaxation of the up-tick rule decreases the degree of mispricing.

Utilizing a time-varying regime-switching vector error correction approach, Kanas (2008) tries to discover which factors best explain the transition across regimes in the US and UK stock index futures markets. The results suggest that the basis exerts an important influence on regime transition. The basis effect is related to a dividend yield effect in the UK, and to both a dividend yield effect and an interest rate effect in the USA. The volatility of the underlying index is another important factor, and there is evidence of a regime transition spillover from the UK to the USA. In most cases, forecasts based on time-varying regime transition models are more precise than forecasts based on models with constant transition probabilities.

According to Srinivasan (2011), volatility forecasting is a crucial area of research in financial markets, and vast effort has been expended on improving volatility models, since better forecasts translate into better pricing of options and better risk management. In this direction, his research attempts to model and forecast the volatility (conditional variance) of the S&P 500 Index returns of the United States stock market, using daily data covering the period from January 1, 1996 to January 29, 2010. The forecasting models in his research range from the relatively simple GARCH(1,1) model to relatively complex GARCH models, including the Exponential GARCH(1,1) and Threshold GARCH(1,1) models. Based on out-of-sample forecasts and a majority of the evaluation measures, his research shows that the symmetric GARCH model performs better in forecasting the conditional variance of the S&P 500 Index return than the asymmetric GARCH models, despite the presence of a leverage effect. His results are consistent with the evidence of Gokcan (2000) that the relatively parsimonious symmetric GARCH model is superior to the asymmetric GARCH model in forecasting the conditional variance of emerging stock market return series.

Moreover, using data from July 1997 to July 2007, Vega et al. (2012) examined whether the FTSE index is affected by the past behavior of the DOW, DAX, NIKKEI, Hang Seng and Shanghai indices. They compare three different methods of estimating regression parameters. The results show that the FTSE lagged variable and the past performance of the NIKKEI and DOW are good indicators of the future performance of the FTSE. The models produce different predictive values, but the effect of the variables is the same when examining the direction of the coefficients. Both the Newey-West OLS and GARCH models are better predictive models than OLS with standard errors. The predictive power of the model increases as a result of allowing time-varying variances.

Kumar (2011) proposed a hybrid machine learning system based on a Genetic Algorithm (GA) and time series analysis. In stock markets, technical trading rules are a popular tool for analysts and users in deciding whether to buy or sell shares. The key issue for the success of a trading rule is the selection of values for all of its parameters and their combinations. However, the parameters can range over a large domain, so it is difficult for users to find the best parameter combination. In his paper, he presented a GA-based approach that overcomes this problem in two steps: first, identifying a sub-domain of the parameters with the GA; second, finding a near-optimal value in that sub-domain with the GA and time series analysis in a very reasonable time.

Furthermore, utilizing monthly data from 1953 to 2003, Bohl (2008) used a real-time modeling approach to examine the implications of U.S. political stock market anomalies for predicting excess stock returns in real time. His empirical results show that political variables, selected on the basis of widely used model-selection criteria, are often included in real-time forecasting models. On the other hand, political factors do not contribute systematically to improving the performance of simple trading rules. Thus, political stock market anomalies are not necessarily an indication of market inefficiency.

Koutroumanidis (2011) aimed at constructing confidence intervals (C.I.) for the forecast values of a time series through the application of a hybrid method. The proposed methodology is involved and is therefore completed in several stages. Initially, Artificial Neural Networks (ANNs) are applied to the raw time series in order to estimate C.I.s for the forecasts. Then the bootstrap method is employed on the residuals generated by the preceding process. On the upper and lower limits of the estimated C.I., two new ANNs are employed in order to make point estimations (of the upper and lower limits) using Object-Oriented Programming. For the empirical analysis, daily observations of the closing prices of Alpha Bank stocks are used, with a sample period extending from 28/01/2004 until 30/11/2005. The nonstationarity of the time series employed in the study is not a prohibitive condition for the estimation of the confidence intervals, since the bootstrap still provides a satisfactory approximation for roots arbitrarily close to unity (Berkowitz and Kilian, 1996). The accuracy of the forecasts was assessed with several criteria, and the results were satisfactory.

Furthermore, Sutheebanjard (2010) proposed a new forecasting function for the Stock Exchange of Thailand (SET) index. He used significant economic factors, namely the Dow Jones, Nikkei, and Hang Seng indexes; the minimum loan rate (MLR); and the previous SET index. The tuning coefficients of each indicator were estimated using the two-membered evolution strategy (ES) technique. The experiment analyzed the SET index over three different time periods. The first period covers January 2004 to December 2004, and the second extends from 9 August 2005 to December 2005; these data were used to assess the performance of the proposed forecasting function over short horizons by comparing the findings with those obtained using existing methods. Lastly, long-run data covering January 2005 to March 2009, 1040 days in total, were used to predict the SET index. The results indicate that the proposed forecasting function not only yields the lowest mean absolute percentage error (MAPE) in the short run but also yields a MAPE lower than 1% in the long run.

According to Yalama (2008), volatility forecasting is highly significant for option pricing, risk management and portfolio management, yet which volatility model forecasts best remains controversial. The goal of his research is to employ seven different GARCH-class models to produce in-sample forecasts of daily stock market volatility in 10 different countries. The results of the study emphasize that asymmetric volatility models perform better in forecasting stock market volatility than the historical model.

Also, according to Merh (2011), data mining techniques hold a significant place in finance, as the size of the data is growing exponentially and the accuracy with which the data are analyzed is very important. His research develops two models, one utilizing a three-layer feed-forward back-propagation Artificial Neural Network (ANN 4-4-1) and the other an Autoregressive Integrated Moving Average model (ARIMA(1,1,1)), for predicting the future index value of the Sensex (BSE 30). Simulations were executed using daily open, high, low and close values of the Sensex as input data, with the predicted next-day closing price of the Sensex as output. Convergence and performance of the models were assessed on the basis of the simulation findings.

A comparison of the predictive performance of Artificial Neural Network and linear regression strategies on the Istanbul Stock Exchange was made by Altay et al. (2005). They present several pieces of evidence of the statistical and financial outperformance of Artificial Neural Network models. Although the statistical accuracy measures (RMSE, MAE and Theil's U) of Artificial Neural Network models using daily and monthly data are not better than those of the alternative regression models, the ANN models yield interesting evidence for market forecasting: they correctly predict the sign of stock index movements in up to 57.8% of cases for daily, 67.1% for weekly and 78.3% for monthly data. In addition, as strategic trading tools, the Artificial Neural Network models generate higher returns than the linear regression models. For instance, a portfolio with an initial value of 1 YTL reaches 2.76 YTL for daily, 2.63 YTL for weekly and 3.35 YTL for monthly data, respectively. These results are considerably better than the corresponding results of the regression and buy-and-hold strategies.

In conformity with Aditya et al. (2008), the Indian economy appears upbeat, with an increasing trend in stock market investment. At the same time, the fluctuations observed in the market are considerably high, which makes predicting future values and making decisions extremely difficult. Volatility is also an input to pricing options via the Black-Scholes model, so knowing future volatility would clearly be useful. Just as a price index can be used as an indicator of total market value, a volatility index can be used as an indicator of the expected volatility of the whole market over a specific period. Such indices exist in Europe and the United States; an apt example is the C.B.O.E. Volatility Index, the first indicator to provide continuous volatility forecasts for the Chicago Board Options Exchange. Aditya et al. (2008) aim to establish a similar index in the Indian market. More specifically, they focus on the construction of an Indian Market Volatility Index (I.M.V.I.) using the Nifty option series, benchmarked against the Standard & Poor's C.N.X. Nifty Index. Furthermore, the study tries to confirm the predictive properties and general effectiveness of the I.M.V.I. The conclusions of this work could be a valuable guide for future researchers. Lastly, the work computes the index on a daily basis and proposes developing it for continuous prediction.

In addition to the above, Sarno (2005) established a vector equilibrium model. This model is able to take into account information in the futures market, while allowing for both regime-switching behaviour and international spillovers across stock market indexes. Applying the model to three stock market indexes from 1989 onward, he finds that:

the model outperforms a number of alternative models on standard statistical criteria;

the model does not produce large gains in point forecast accuracy relative to simpler alternative specifications, but it does so in terms of market timing ability and density forecasting performance.

On the other hand, Malik (2011) observes that the literature agrees that bad news increases volatility but disagrees over the impact of positive news on stock market volatility, often reporting it as statistically insignificant. His article shows that accounting for endogenously determined structural breaks within the asymmetric Generalized Autoregressive Conditional Heteroscedastic (GARCH) model reduces volatility persistence, and that good news then significantly decreases volatility; if structural breaks are ignored, however, good news does not affect volatility. He validates his empirical results with Monte Carlo simulations and provides an intuitive explanation for them. His results resolve earlier inconsistencies in the literature and have important practical implications for building accurate asset pricing models and for forecasting stock market volatility.

Kuan (2006) applied a novel neural-network-related technique, Support Vector Regression (S.V.R.), to predict financial time series. The main goal of this work is to measure the effectiveness of S.V.R. as a financial time series forecasting tool by comparing it with the traditional random walk (R.W.) model and an Artificial Neural Network (A.N.N.). To build an effective Support Vector Regression model, its parameters must be set carefully. He proposes an approach, GA-SVR, which searches for optimal parameters using real-valued genetic algorithms, and then uses the optimal parameters to construct the S.V.R. model. The Taiwan Stock Exchange Market Weighted Index (T.A.I.E.X.) from January 2, 2001 to January 23, 2003 was chosen as the data source. The empirical results show that S.V.R. outperforms the A.N.N. and traditional R.W. models based on the Normalized Mean Square Error (N.M.S.E.), Mean Square Error (M.S.E.) and Mean Absolute Percentage Error (M.A.P.E.). Furthermore, in order to evaluate the significance and understand the features of the S.V.R. model, the work measures the effect of the number of input nodes.

In addition, Hen (2007) evaluates the performance of the conditional autoregressive range (CARR) model formulated by Chou (2004). Using daily data on the British stock market from 1990 to 2000, he finds that the CARR model gives sharper volatility forecasts than the generalized autoregressive conditional heteroscedasticity (GARCH) model. Moreover, he finds that the inclusion of the lagged return and trading volume can significantly improve the forecasting ability of the CARR model. The results also show the presence of a leverage effect in the local stock market.

Moreover, McMillan (2000) analyzed the forecasting performance of a large number of statistical and econometric models of UK FTA All Share and FTSE 100 stock index volatility at monthly, weekly and daily frequencies, taking into consideration both symmetric and asymmetric loss functions. Under symmetric loss, the results show that the random walk model provides vastly superior monthly volatility forecasts, while random walk, moving average, and recursive smoothing models provide moderately superior weekly volatility forecasts, and GARCH, moving average and exponential smoothing models provide marginally superior daily volatility forecasts. If attention is restricted to one forecasting method for all frequencies, the most consistent forecasting performance is provided by the moving average and GARCH models. More generally, the results suggest that previous findings that the class of GARCH models gives relatively poor volatility forecasts are not robust at higher frequencies, failing to hold here for the crash-adjusted FTSE 100 index in particular.

Finally, Rozhkov (2005) focused on forecasts of trends in the Russian stock market. The market value of a corporation increases if it develops successfully; consequently, shareholders expect an increase in total income and hence a rise in the return from owning the stock. The present value of corporate revenue is negatively associated with the level of interest rates: the higher the interest rates, the lower the present value of the income that could be received from buying the stock and, consequently, the less it is worth. Company earnings are often closely associated with the business cycle.

Methodology Plan

This part presents the methodology that will be used. Specifically, it explains in detail what each econometric model is, using mathematics and statistics. This information is provided because the reader needs to know the meaning and use of these econometric models. Thus, this part gives details about VECM models, the Johansen cointegration test, the Phillips–Perron test, and ECM models.

Vector Error Correction Models

A vector error correction model (VECM) allows a better and more accurate understanding of the nature of nonstationarity among different time series. In addition, a VECM is an effective tool which may improve longer-term forecasting over an unconstrained model. A simple VECM can be written as follows (Brooks, 2009; Halkos, 2006):

ΔYt = β1ΔXt – (1 – γ1)(Yt-1 – α0 – α1Xt-1) + εt

where Δ is the differencing operator, such that ΔYt = Yt – Yt-1.
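To make the estimation concrete, the following is a minimal sketch of fitting a two-variable VECM in Python with the statsmodels package (assumed available). The simulated pair of cointegrated series and the lag and rank settings are illustrative assumptions only, not part of any study cited above.

# Minimal VECM sketch (illustrative simulated data).
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=500))        # random walk: the common trend
y = x + rng.normal(scale=0.5, size=500)    # y - x is the stationary equilibrium error
data = pd.DataFrame({"x": x, "y": y})

# k_ar_diff: number of lagged differences; coint_rank: number of cointegrating
# relations; deterministic="ci" places a constant inside the cointegration relation.
model = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="ci")
res = model.fit()

print(res.alpha)             # adjustment (loading) coefficients
print(res.beta)              # cointegrating vector
print(res.predict(steps=5))  # five-step-ahead forecasts for both series

The estimated adjustment coefficients indicate how quickly each series responds to deviations from the long-run relation, which is exactly the error correction mechanism in the equation above.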

Cointegration model – Johansen test

The Johansen test in econometrics is a procedure used to test for cointegration among several time series. There are two versions of the test, one based on the trace statistic and one on the maximum eigenvalue statistic, and unlike residual-based procedures it permits more than one cointegrating relationship. The test builds on the Dickey–Fuller test for unit roots and is formulated within the VAR framework: we estimate a congruent, unrestricted, closed kth-order VAR (vector autoregressive model) in m variables,

Yt = A1Yt-1 + A2Yt-2 + … + AkYt-k + εt

Also, we assume that either all variables are integrated of order one or all are integrated of order zero. The model may then be rewritten as:

ΔYt = BYt-1 + B1ΔYt-1 + B2ΔYt-2 + … + Bk-1ΔYt-k+1 + εt

where

B = –(I – A1 – A2 – … – Ak)

And

Bj = –(Aj+1 + Aj+2 + … + Ak) for j = 1, 2, …, k–1

In this form the model is a vector error correction model (VECM). If all m variables of the model are integrated of order one, then the differenced terms ΔYt-j are stationary. If, in addition, the variables are co-integrated, then the error correction term BYt-1 is stationary too.

If the rank of the matrix B is 0, then every bij = 0. In this case the error correction term BYt-1 vanishes and the variables are not co-integrated.

If the rank of B is m, then the vector {Yt} is stationary. In this case the variables are integrated of order zero and the question of co-integration does not arise.

If the rank of B is r, where r < m, the matrix may be written as:

B = D·C′, where D and C are m×r matrices. The matrix C is called the co-integration matrix and the matrix D the adaptation (loading) matrix. When Yt ~ I(1), then C′Yt ~ I(0), which means that the variables are co-integrated (Katos, 2004; Johansen, 1995).
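As an illustration of how the rank of B is determined in practice, the sketch below applies the Johansen trace test using the coint_johansen function from statsmodels; the simulated cointegrated pair and the deterministic/lag settings are illustrative assumptions.

# Johansen trace test sketch (illustrative simulated data).
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(0)
trend = np.cumsum(rng.normal(size=500))    # common stochastic trend
data = pd.DataFrame({"x": trend,
                     "y": trend + rng.normal(scale=0.5, size=500)})

# det_order=0 includes a constant; k_ar_diff is the number of lagged differences.
jres = coint_johansen(data, det_order=0, k_ar_diff=1)

# jres.lr1 holds the trace statistics for H0: rank(B) <= r, r = 0, 1, ...;
# jres.cvt holds the matching 90%/95%/99% critical values, one row per hypothesis.
for r, (stat, cvals) in enumerate(zip(jres.lr1, jres.cvt)):
    verdict = "reject" if stat > cvals[1] else "fail to reject"
    print(f"H0: rank <= {r}: trace = {stat:.2f}, 95% cv = {cvals[1]:.2f} -> {verdict}")

For a genuinely cointegrated pair such as this, one expects to reject rank 0 but not rank <= 1, pointing to a single cointegrating vector (r = 1).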

Phillips–Perron Test

Phillips and Perron proposed a test for unit roots. The Phillips–Perron (PP) test builds on the Dickey–Fuller test, but allows the process generating the data for Xt to have a higher order of autocorrelation than is admitted in the test equation:

ΔXt = δXt-1 + εt

tt

ttt

The correction to the t-statistic on δ is non-parametric and takes into account both the heteroscedasticity and the autocorrelation of unknown order in the residuals (Verbeek, 2008; Perron, 1987).
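A minimal sketch of the PP test in Python follows, using the PhillipsPerron class from the third-party arch package (assumed installed); the simulated random walk is an illustrative assumption.

# Phillips-Perron unit root test sketch using the arch package (assumed installed).
import numpy as np
from arch.unitroot import PhillipsPerron

rng = np.random.default_rng(0)
walk = np.cumsum(rng.normal(size=500))   # a pure random walk, so delta should be ~0

pp = PhillipsPerron(walk)
print(pp.summary())
# pp.stat is the corrected t-statistic on delta; a large p-value means the
# null hypothesis of a unit root cannot be rejected, as expected for a random walk.
print(pp.stat, pp.pvalue)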

Error Correction Model - ECM

An error correction model in econometrics is a dynamic system with the characteristic that deviations of the current state from its long-run relationship are fed into its short-run dynamics. An error correction model is not a model that corrects the error in another model. A rough long-run relationship can be determined by the cointegration vector, and this relationship can then be used to develop a refined dynamic model which may focus on long-run or transitory aspects, such as the VECM representation of a standard VAR in the Johansen test. Specifically, we consider a simple, proportional, long-run equilibrium relationship between two variables:

Yt = KXt

For instance, we might think of Y as inventory and X as sales, or Y as consumption and X as income. A fully specified equilibrium model may well include more variables, and the equilibrium relationship need not be one of direct proportionality.

The relationship above might be written in log form as

yt = k + xt (1)

where we follow the convention of letting a lower-case letter designate the natural log of the variable represented by the corresponding upper case letter. (Taking logs reduces the multiplicative relationship to an additive one, which is a helpful mathematical simplification.)

Then, we may write down a general dynamic relationship between y and x:

yt = β0 + β1xt + β2xt-1 + α1yt-1 + ut (2)

By including lagged values of both x and y this specification allows for a wide variety of dynamic patterns in the data.

However, the basic question is: under what conditions is the generic dynamic equation (2) consistent with the long-run equilibrium relationship (1)? To assess this, we "zero out" the factors that could cause divergence from equilibrium, namely changes in xt and the stochastic fluctuations ut.

Hence, we set yt = y* and xt = x* for all t, and set ut = 0. Therefore,

y* = β0 + β1x* + β2x* + α1y*

which can be solved for y* to give

y* = β0/(1 – α1) + [(β1 + β2)/(1 – α1)]x*

If the above is to correspond with equation (1), we require

(β1 + β2)/(1 – α1) = 1, with k = β0/(1 – α1)

Suppose that this is the case. Then the first condition means that β1 + β2 = 1 – α1. Let γ denote the common value of these two terms; hence β2 may be written as γ – β1 and α1 may be written as 1 – γ. Therefore, equation (2) may be rewritten as

yt = β0 + β1xt + (γ – β1)xt-1 + (1 – γ)yt-1 + ut

Or

yt – yt-1 = β0 + β1(xt – xt-1) + γ(xt-1 – yt-1) + ut

Hence, Δyt = β0 + β1Δxt + γ(xt-1 – yt-1) + ut (3)

where Δxt = xt – xt-1. This is the characteristic "error correction" specification, where the change in one variable is related to the change in another variable, as well as to the gap between the two variables in the previous period (Brooks, 2009; Engle, 1987).
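As a worked sketch of equation (3), the code below estimates a two-step error correction model in Python: a long-run OLS regression of y on x, followed by a regression of Δy on Δx and the lagged equilibrium error. This follows the Engle–Granger two-step logic; the simulated series are illustrative assumptions, not data from any study cited above.

# Two-step ECM sketch for equation (3) (illustrative simulated data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = pd.Series(np.cumsum(rng.normal(size=500)), name="x")
y = pd.Series(x + rng.normal(scale=0.5, size=500), name="y")  # y tracks x

# Step 1: long-run relation y_t = k + x_t estimated by OLS; the residuals
# are the equilibrium error, i.e. the deviation from the long-run relation.
longrun = sm.OLS(y, sm.add_constant(x)).fit()
ec = longrun.resid

# Step 2: equation (3): regress dy_t on dx_t and the lagged equilibrium error.
df = pd.DataFrame({"dy": y.diff(), "dx": x.diff(), "ec_lag": ec.shift(1)}).dropna()
ecm = sm.OLS(df["dy"], sm.add_constant(df[["dx", "ec_lag"]])).fit()

# Because ec_lag measures y_{t-1} minus its long-run value (the opposite sign
# of x_{t-1} - y_{t-1} in equation (3)), its coefficient estimates -gamma
# and should therefore be negative.
print(ecm.params)

A significantly negative coefficient on the lagged error confirms that deviations from the long-run relation are partially corrected each period, which is the defining feature of the error correction specification.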


