The History Of Vector Autoregressive Model


02 Nov 2017


3.0 Methodology

This paper examines the relationship between stock index futures (FKLI), the stock index (FBMKLCI), the exchange rate (Ringgit vs US dollar) and the interest rate (lending rate) in Malaysia from 1996 to 2012. Monthly data were extracted (204 observations in total) to build an empirical model of these relationships.

The research period is divided into four parts:

Period 1 – Learning period (1/1/1996 – 1/2/1997)

This period is placed first because it reflects the learning phase following the recent introduction of the stock index futures market. Trading volume was relatively low during this period and market conditions were stable, so it is a good period for testing the correlations among the variables before the crisis occurred.

Period 2 – Crisis 1 period (1/3/1997 – 1/3/2000)

The Asian Financial Crisis occurred during this period, and its heavy impact caused huge market volatility. This period tests whether the other variables were affected by the crisis; it is established to reflect the onset of the financial crisis, with highly fluctuating prices and high trading volume.

Period 3 – Stable period (1/4/2000 – 1/10/2007)

The price chart shows this to be a period of mildly volatile prices and fairly high trading volume. It therefore tests whether the relationships changed after the Asian Financial Crisis compared with before the crisis.

Period 4 – Crisis 2 period (1/11/2007 – 1/12/2012)

The last period of the research also tests the relationships, because this crisis was triggered by the Greek budget deficit, which pushed the whole world into another crisis storm. It determines whether the Malaysian variables were affected by the Eurozone financial crisis.

3.1 Data Sampling

The first step is collecting the time series data. The stock index futures (FKLI) series was extracted from the DBS Dwang investment bank databank; the stock index (FBMKLCI) from the Bursa Malaysia website; and the exchange rate and interest rate from Thomson Datastream.

Quantitative data are needed because causal relationships are studied by manipulating factors thought to influence the phenomena of interest while controlling other variables relevant to the experimental outcomes. Quantitative methods of data analysis can also be of great value to a researcher attempting to draw meaningful results from a large body of data: they provide the means to separate out the large number of confounding factors that often obscure the main findings, and they allow summary results to be reported in numerical terms with a specified degree of confidence. Quantitative approaches are most meaningful when data must be summarized across many repetitions of a process; such summarization in turn implies that common features emerge across those repetitions. The value of a quantitative analysis therefore arises when it is possible to identify features that occur frequently across the many discussions aimed at studying this research. Finally, quantitative approaches are particularly helpful when the information has been collected in some structured way, even if it was elicited through participatory discussions.

3.2 Descriptive Statistics

Second, after preparing the data sample, the next step is computing descriptive statistics. Descriptive statistics is the discipline of quantitatively describing the main features of a collection of data. It is also a set of brief descriptive coefficients that summarize a given data set, representing either the entire population or a sample. The measures that describe the data set are measures of central tendency and measures of variability or dispersion. Measures of central tendency include the mean, median and mode, while measures of variability include the standard deviation, the minimum and maximum values, kurtosis and skewness.

Descriptive statistics serve as an introduction to the data sample: they reveal the trend of the data, its basic features and so on. Skewness, for example, is a measure of the asymmetry of the probability distribution of a real-valued random variable; its value can be positive, negative or even undefined. Kurtosis, like skewness, is a descriptor of the shape of a probability distribution; there are different ways of quantifying it for a theoretical distribution and corresponding ways of estimating it from a sample drawn from a population. In a nutshell, descriptive statistics involves activities such as counting, measuring, describing, tabulating, ordering and taking censuses of data sets.

Appropriate Measures of Central Tendency

Mode: the most frequent value (the highest frequency) in a data set.

Median: the middle value in a data set.

Mean: the average value of the data set.

Appropriate Measures of Dispersion

Range: the difference between the highest and lowest values of a data set.

Variance: the sum of squared deviations divided by n − 1 (one less than the sample size).

Standard Deviation: the positive square root of the variance, s = √s².
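The measures above can be computed directly. The following sketch (hypothetical helper name, Python standard library only) summarizes a sample the way this section describes, adding skewness and kurtosis estimated from sample moments:

```python
import statistics as st

def describe(data):
    """Summary statistics as listed in Section 3.2."""
    n = len(data)
    mean = st.mean(data)
    # population central moments, used for skewness and kurtosis
    m2 = sum((x - mean) ** 2 for x in data) / n
    m3 = sum((x - mean) ** 3 for x in data) / n
    m4 = sum((x - mean) ** 4 for x in data) / n
    return {
        "mean": mean,
        "median": st.median(data),
        "mode": st.mode(data),
        "range": max(data) - min(data),
        "variance": st.variance(data),   # n-1 denominator, as in the text
        "std_dev": st.stdev(data),       # positive square root of the variance
        "skewness": m3 / m2 ** 1.5,      # asymmetry of the distribution
        "kurtosis": m4 / m2 ** 2,        # shape/peakedness; 3 for a normal curve
    }
```

In practice these figures would be produced by the statistics package (e.g. EViews) on the 204 monthly observations; the sketch only makes the definitions concrete.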

3.3 Stationarity Test (Unit Root Test)

A unit root test checks whether a data series is stationary or non-stationary. If a time series is non-stationary, its mean and variance are not constant; conversely, a stochastic process is said to be stationary if its mean and variance are constant over time. This research conducts two unit root tests: the Augmented Dickey-Fuller (ADF) test, developed by Dickey and Fuller (1979), and the Phillips-Perron (PP) test. The data must be checked for stationarity because non-stationary series (series containing a unit root) lead to spurious regression: a regression that may pass the usual statistical criteria even though the variables are not cointegrated. If a time series has to be differenced d times to become stationary, it is integrated of order d, denoted I(d). Level data, i.e. the original series in undifferenced form, are typically non-stationary.
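As a quick illustration of the I(d) notation (a sketch, not from the paper): a random walk is I(1), so its level contains a unit root, but differencing it once recovers a stationary series.

```python
import numpy as np

rng = np.random.default_rng(42)
noise = rng.standard_normal(500)   # stationary white noise, I(0)
walk = np.cumsum(noise)            # random walk: contains a unit root, I(1)

# Differencing once undoes the cumulative sum, recovering the I(0) noise
diffed = np.diff(walk)
```

The level series wanders without a constant mean, while its first difference behaves like the original white noise, which is exactly what I(1) means.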

3.3.1 ADF Test Methodology

The ADF test here consists of estimating the following regression:

∆yt = α + γyt-1 + β1∆yt-1 + β2∆yt-2 + ...... + βp∆yt-p + ɛt (1)

Where ɛt is an error term and the distribution of the test statistic on γ is independent of the number of lagged first differences included in the ADF regression. This augmented specification in equation (1) is then used to carry out the test.

The ADF test hypotheses are:

H0 : γ = 0 (there is a unit root)

HA : γ < 0 (there is no unit root)

Running EViews 6, the ADF test is first applied to the level data, choosing the intercept in the test equation with a lag length of 1 selected by the Akaike Information Criterion (AIC). First, check the ADF t-statistic: if it is smaller in absolute value than the MacKinnon critical values at all three significance levels, the null of a unit root cannot be rejected, and most of the time the level data are indeed not stationary. Then repeat the unit root test (ADF) on the first-differenced data and check the t-statistic again: if it is now larger in absolute value than the MacKinnon critical values at all three significance levels, the null hypothesis (H0) of a unit root is rejected. Once the first-differenced data are stationary, the series behaves like a random walk whose first differences are stationary.
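To make the mechanics of equation (1) concrete, here is a minimal numpy sketch of the ADF regression. The function name, the single-lag choice and the approximate 5% MacKinnon critical value of −2.88 (intercept-only specification, large sample) are assumptions for illustration; in the paper the test is run in EViews 6, which reports the exact critical values.

```python
import numpy as np

def adf_tstat(y, lags=1):
    """t-statistic on γ in: Δy_t = α + γ·y_{t-1} + Σ β_i·Δy_{t-i} + ε_t."""
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)
    Y = dy[lags:]                              # left-hand side Δy_t
    cols = [np.ones_like(Y), y[lags:-1]]       # intercept α and level y_{t-1}
    for i in range(1, lags + 1):
        cols.append(dy[lags - i:-i])           # lagged differences Δy_{t-i}
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    sigma2 = resid @ resid / (len(Y) - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se                        # compare against MacKinnon values

# A stationary (white noise) series should reject H0: t-stat far below -2.88
rng = np.random.default_rng(0)
stationary = rng.standard_normal(300)
t_stat = adf_tstat(stationary)
```

A t-statistic below the critical value rejects the unit-root null; a random walk would typically yield a value well above −2.88, so H0 could not be rejected at level, matching the stepwise procedure described above.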

3.3.2 Philip-Perron test Methodology

The Phillips-Perron (PP) test uses a nonparametric method of controlling for higher-order serial correlation in a series. The PP t-statistic has the same asymptotic distribution as the ADF t-statistic, and the EViews 6 package again reports the MacKinnon critical values. This test requires specifying the truncation lag for the Newey-West correction, i.e. the number of periods of serial correlation to include. The PP test uses the default Newey-West automatic truncation lag of 4, as suggested by Newey and West (1994).

First, specify the data, choose the intercept in the test equation, and select the Bartlett kernel method and the Newey-West truncation lag for the PP test. Then check whether the PP t-statistic is greater in absolute value than the MacKinnon critical values at all three significance levels. If so, the data are stationary: the series does not contain a unit root and HA is accepted. The probability value will also be significant at the 0.05 level.

The residuals for the dependent variable are estimated from the single-equation model:

∆yt = α + Φ1yt-1 + ɛt

3.4 Granger Causality Test

Granger causality is a statistical concept of causality based on prediction (Granger, 1969). The Engle-Granger tests are statistical procedures that can be used to determine whether two time series are cointegrated. If a variable X is said to "Granger-cause" or "G-cause" a variable Y, the direction of the Granger causality relationship is X→Y, and it can be shown by significant F-tests of X on Y. The stationary linear combination is called the cointegrating equation and may be interpreted as a long-term equilibrium relationship among the variables. The Granger causality relationship is expressed in two pairs of regression equations, obtained by swapping the independent (X) and dependent (Y) variables as follows:

Unrestricted equations:

Xt = α1Xt-1 + α2Xt-2 + … + αpXt-p + β1Yt-1 + β2Yt-2 + … + βpYt-p + u1,t

Yt = α1Yt-1 + α2Yt-2 + … + αpYt-p + β1Xt-1 + β2Xt-2 + … + βpXt-p + u2,t

Restricted equations:

Xt = α1Xt-1 + α2Xt-2 + … + αpXt-p + u1,t

Yt = α1Yt-1 + α2Yt-2 + … + αpYt-p + u2,t

The significance of the lags of the other variable is assessed with the standard F-statistic comparing the restricted and unrestricted equations:

F = [(R2UR − R2R) / m] / [(1 − R2UR) / (n − k)]

Where:

R2UR = the coefficient of determination of the unrestricted equation

R2R = the coefficient of determination of the restricted equation

n = the number of observations

m = the number of lagged periods

k = the number of parameters estimated in the unrestricted equation
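A minimal numpy sketch of this F-test (the function name and the simulated example are illustrative assumptions; the paper's tests are run in EViews). It compares the residual sum of squares of the restricted and unrestricted equations for Y, which is equivalent to the R²-based form of the statistic:

```python
import numpy as np

def granger_f(x, y, p=2):
    """F-statistic of H0: x does not Granger-cause y, using p lags."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(y)
    Y = y[p:]
    ylags = np.column_stack([y[p - i:n - i] for i in range(1, p + 1)])
    xlags = np.column_stack([x[p - i:n - i] for i in range(1, p + 1)])
    ones = np.ones((n - p, 1))
    Xr = np.hstack([ones, ylags])            # restricted: own lags of y only
    Xu = np.hstack([ones, ylags, xlags])     # unrestricted: plus lags of x

    def rss(X):
        b, *_ = np.linalg.lstsq(X, Y, rcond=None)
        e = Y - X @ b
        return e @ e

    rss_r, rss_u = rss(Xr), rss(Xu)
    k = Xu.shape[1]                          # parameters in unrestricted eq.
    return ((rss_r - rss_u) / p) / (rss_u / (n - p - k))

# Simulated example: x drives y with one lag, so x should G-cause y strongly
rng = np.random.default_rng(1)
x = rng.standard_normal(400)
y = np.zeros(400)
for t in range(1, 400):
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] + 0.3 * rng.standard_normal()
```

A large F-statistic for the X lags in the Y equation, but not vice versa, indicates one-way Granger causality X→Y.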

3.5 Vector Autoregressive Model (VAR Model)

A vector autoregressive model (VAR model) is an unrestricted vector autoregression designed for use with stationary series (Gilbert, 1986; Hendry and Ericsson, 2001). The VAR approach is based on a simultaneous system in which all variables are treated as endogenous (Box and Jenkins, 1976; Engle and Granger, 1991). The main objective of a VAR is to develop a model primarily for forecasting and modeling purposes. Note that unless the underlying variables are stationary or cointegrated, these procedures should not be used.

The simple autoregressive model:

The basic idea: the vector of variables Yt is modeled as a function of its own lagged values, Yt = f(Yt-1, …, Yt-p).

VAR Model Procedure

Step 1:

Estimate the model by Ordinary Least Squares (OLS) or Two-Stage Least Squares (TSLS).

Step 2:

Produce the forecasts (ex-post forecast).

Check which model is best using RMSE, MAE, MAPE and Theil's U.

Run the VAR model based on the best model.

Step 3:

Lag selection: the lag length can be selected automatically (e.g. by information criteria).

Step 4:

Run the VAR model with the selected lag; exogenous variables may be added.
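The steps above can be sketched in numpy for the simplest VAR(1) case (hypothetical helper names and simulated data; the actual study uses EViews with lag selection and the forecast-accuracy checks described in Step 2):

```python
import numpy as np

def fit_var1(Y):
    """OLS estimate of the VAR(1) system Y_t = c + A·Y_{t-1} + e_t; Y is (T, k)."""
    X = np.hstack([np.ones((len(Y) - 1, 1)), Y[:-1]])   # intercept + lagged values
    B, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)
    return B[0], B[1:].T                                # c (k,), A (k, k)

def forecast_var1(c, A, y_last, steps=1):
    """Iterate the fitted system forward to produce an h-step ahead forecast."""
    for _ in range(steps):
        y_last = c + A @ y_last
    return y_last

# Simulate a stable 2-variable VAR(1) and recover its coefficients by OLS
rng = np.random.default_rng(7)
A_true = np.array([[0.5, 0.1], [0.2, 0.4]])
c_true = np.array([0.1, -0.2])
Y = np.zeros((800, 2))
for t in range(1, 800):
    Y[t] = c_true + A_true @ Y[t - 1] + 0.5 * rng.standard_normal(2)
c_hat, A_hat = fit_var1(Y)
```

Because every equation in the system has the same right-hand-side regressors, equation-by-equation OLS is efficient here, which is why Step 1 lists OLS as the default estimator.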


