**Vector autoregression**
**Vector autoregression (VAR)** is an econometric model used to capture the evolution of, and the interdependencies between, multiple time series, generalizing the univariate AR models. All the variables in a VAR are treated symmetrically: each variable has an equation explaining its evolution based on its own lags and the lags of all the other variables in the model. Based on this feature, Christopher Sims advocated the use of VAR models as a theory-free method to estimate economic relationships, an alternative to the "incredible identification restrictions" in structural models (Christopher A. Sims, 1980, "Macroeconomics and Reality", *Econometrica* 48).

**Specification**

**Definition**

A VAR model describes the evolution of a set of "k" variables (called "**endogenous variables**") over the same sample period ("t" = 1, ..., "T") as a linear function of only their past evolution. The variables are collected in a "k" × 1 vector "y"_{"t"}, whose "i"^{th} element "y"_{"i","t"} is the time-"t" observation of variable "y"_{"i"}. For example, if the "i"^{th} variable is GDP, then "y"_{"i","t"} is the value of GDP at time "t".

A "**(reduced) p-th order VAR**", denoted "**VAR(p)**", is

:$y_t = c + A_1 y_{t-1} + A_2 y_{t-2} + \cdots + A_p y_{t-p} + e_t,$

where "c" is a "k" × 1 vector of constants (intercepts), "A"_{"i"} is a "k" × "k" matrix (for every "i" = 1, ..., "p") and "e"_{"t"} is a "k" × 1 vector of error terms satisfying

#$\mathrm{E}(e_t) = 0$ — every error term has mean zero;
#$\mathrm{E}(e_t e_t') = \Omega$ — the contemporaneous covariance matrix of error terms is Ω (a "k" × "k" positive-definite matrix);
#$\mathrm{E}(e_t e_{t-k}') = 0$ for any non-zero "k" — there is no correlation across time; in particular, no serial correlation in individual error terms.

The "l"-periods-back observation "y"_{"t"−"l"} is called the "l"-th "**lag**" of "y". Thus, a "p"th-order VAR is also called a **VAR with "p" lags**.

**Order of integration of the variables**

Note that all the variables used have to be of the same order of integration. The following cases are possible:

*All the variables are I(0) (stationary): this is the standard case, i.e. a VAR in levels.
*All the variables are I(d) (non-stationary) with d > 0:
**The variables are cointegrated: the error correction term has to be included in the VAR. The model becomes a vector error correction model (VECM), which can be seen as a restricted VAR.
**The variables are not cointegrated: the variables first have to be differenced d times, giving a VAR in differences.

**Concise matrix notation**

One can write a VAR("p") in a concise matrix notation:

:$Y = BZ + U,$

Details of the matrices are in a separate page.

**Example**

For a general example of a VAR("p") with "k" variables, please see this page.

A VAR(1) in two variables can be written in matrix form (more compact notation) as

:$\begin{bmatrix} y_{1,t} \\ y_{2,t} \end{bmatrix} = \begin{bmatrix} c_1 \\ c_2 \end{bmatrix} + \begin{bmatrix} A_{1,1} & A_{1,2} \\ A_{2,1} & A_{2,2} \end{bmatrix} \begin{bmatrix} y_{1,t-1} \\ y_{2,t-1} \end{bmatrix} + \begin{bmatrix} e_{1,t} \\ e_{2,t} \end{bmatrix},$

or, equivalently, as the following system of two equations:

:$y_{1,t} = c_1 + A_{1,1} y_{1,t-1} + A_{1,2} y_{2,t-1} + e_{1,t},$
:$y_{2,t} = c_2 + A_{2,1} y_{1,t-1} + A_{2,2} y_{2,t-1} + e_{2,t}.$

Note that there is one equation for each variable in the model. Also note that the current (time "t") observation of each variable depends on its own lags as well as on the lags of each other variable in the VAR.
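As a minimal sketch of these two equations, the system can be simulated directly; the coefficient values below are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Illustrative parameters for a bivariate VAR(1) (assumed values)
c = np.array([0.5, 1.0])                # k x 1 intercept vector
A1 = np.array([[0.5, 0.2],
               [0.1, 0.4]])             # k x k coefficient matrix A_1
Omega = np.array([[1.0, 0.3],
                  [0.3, 0.8]])          # contemporaneous error covariance

rng = np.random.default_rng(0)
T = 500
y = np.zeros((T, 2))
for t in range(1, T):
    e_t = rng.multivariate_normal(np.zeros(2), Omega)  # e_t ~ N(0, Omega)
    y[t] = c + A1 @ y[t - 1] + e_t      # y_t = c + A_1 y_{t-1} + e_t

# The process is stationary here: all eigenvalues of A_1 lie inside the unit circle
print(np.abs(np.linalg.eigvals(A1)).max() < 1)  # True
```

Each row of `y` depends on its own lag and the lag of the other variable, exactly as in the two-equation form above.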

**Writing VAR("p") as VAR(1)**

A VAR with "p" lags can always be equivalently rewritten as a VAR with only one lag by appropriately redefining the dependent variable. The transformation amounts to stacking the lags of the VAR("p") variable in the new VAR(1) dependent variable and appending identities to complete the number of equations.

For example, the VAR(2) model

:$y_t = c + A_1 y_{t-1} + A_2 y_{t-2} + e_t$

can be recast as the VAR(1) model

:$\begin{bmatrix} y_t \\ y_{t-1} \end{bmatrix} = \begin{bmatrix} c \\ 0 \end{bmatrix} + \begin{bmatrix} A_1 & A_2 \\ I & 0 \end{bmatrix} \begin{bmatrix} y_{t-1} \\ y_{t-2} \end{bmatrix} + \begin{bmatrix} e_t \\ 0 \end{bmatrix},$

where "I" is the identity matrix.

The equivalent VAR(1) form is more convenient for analytical derivations and allows more compact statements.
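A numeric check of this stacking, with assumed values for A_1 and A_2: the companion (VAR(1)) matrix reproduces the VAR(2) one-step map, and its eigenvalues govern stability.

```python
import numpy as np

# Illustrative VAR(2) coefficients (assumed values), k = 2 variables
A1 = np.array([[0.4, 0.1],
               [0.0, 0.3]])
A2 = np.array([[0.2, 0.0],
               [0.1, 0.1]])
c = np.array([1.0, 0.5])
k = 2

# Companion (VAR(1)) form: the state stacks [y_t, y_{t-1}]
A_comp = np.block([[A1, A2],
                   [np.eye(k), np.zeros((k, k))]])
c_comp = np.concatenate([c, np.zeros(k)])

# One step in both representations must agree (shocks set to zero)
y_tm1 = np.array([1.0, -1.0])
y_tm2 = np.array([0.5, 2.0])
y_t_var2 = c + A1 @ y_tm1 + A2 @ y_tm2                    # VAR(2) step
state = c_comp + A_comp @ np.concatenate([y_tm1, y_tm2])  # VAR(1) step
assert np.allclose(state[:k], y_t_var2)  # first block equals y_t
assert np.allclose(state[k:], y_tm1)     # identity rows carry the lag forward

# The VAR(p) is stable iff all companion eigenvalues lie inside the unit circle
print(np.abs(np.linalg.eigvals(A_comp)).max() < 1)  # True for these values
```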

**Structural vs. reduced form**

**Structural VAR**

A "**structural VAR with p lags**" is

:$B_0 y_t = c_0 + B_1 y_{t-1} + B_2 y_{t-2} + \cdots + B_p y_{t-p} + \epsilon_t,$

where "c"_{0} is a "k" × 1 vector of constants, "B"_{"i"} is a "k" × "k" matrix (for every "i" = 0, ..., "p") and "ε"_{"t"} is a "k" × 1 vector of error terms. The main diagonal terms of the "B"_{0} matrix (the coefficients on the "i"^{th} variable in the "i"^{th} equation) are scaled to 1.

The error terms "ε"_{"t"} ("**structural shocks**") satisfy the conditions (1)–(3) in the definition above, with the particularity that all the elements off the main diagonal of the covariance matrix $\mathrm{E}(\epsilon_t \epsilon_t') = \Sigma$ are zero. That is, the structural shocks are uncorrelated.

For example, a two-variable structural VAR(1) is:

:$\begin{bmatrix} 1 & B_{0;1,2} \\ B_{0;2,1} & 1 \end{bmatrix} \begin{bmatrix} y_{1,t} \\ y_{2,t} \end{bmatrix} = \begin{bmatrix} c_{0;1} \\ c_{0;2} \end{bmatrix} + \begin{bmatrix} B_{1;1,1} & B_{1;1,2} \\ B_{1;2,1} & B_{1;2,2} \end{bmatrix} \begin{bmatrix} y_{1,t-1} \\ y_{2,t-1} \end{bmatrix} + \begin{bmatrix} \epsilon_{1,t} \\ \epsilon_{2,t} \end{bmatrix},$

where

:$\Sigma = \mathrm{E}(\epsilon_t \epsilon_t') = \begin{bmatrix} \sigma_1^2 & 0 \\ 0 & \sigma_2^2 \end{bmatrix};$

that is, the variances of the structural shocks are denoted $\mathrm{var}(\epsilon_i) = \sigma_i^2$ ("i" = 1, 2) and the covariance is $\mathrm{cov}(\epsilon_1, \epsilon_2) = 0$.

Writing the first equation explicitly and passing "y"_{2,"t"} to the right-hand side one obtains

:$y_{1,t} = c_{0;1} - B_{0;1,2} y_{2,t} + B_{1;1,1} y_{1,t-1} + B_{1;1,2} y_{2,t-1} + \epsilon_{1,t}.$

Note that "y"_{2,"t"} can have a contemporaneous effect on "y"_{1,"t"} if "B"_{0;1,2} is not zero. This is different from the case when "B"_{0} is the identity matrix (all off-diagonal elements zero — the case in the initial definition), when "y"_{2,"t"} can impact directly "y"_{1,"t"+1} and subsequent future values, but not "y"_{1,"t"}.

Because of the parameter identification problem, ordinary least squares estimation of the structural VAR would yield inconsistent parameter estimates. This problem can be overcome by rewriting the VAR in reduced form.

From an economic point of view, if the joint dynamics of a set of variables can be represented by a VAR model, then the structural form is a depiction of the underlying, "structural", economic relationships. Two features of the structural form make it the preferred candidate to represent the underlying relations:

:1. "Error terms are not correlated". The structural, economic shocks which drive the dynamics of the economic variables are assumed to be independent, which implies zero correlation between error terms as a desired property. This is helpful for separating out the effects of economically unrelated influences in the VAR. For instance, there is no reason why an oil price shock (as an example of a supply shock) should be related to a shift in consumers' preferences towards a style of clothing (as an example of a demand shock); therefore one would expect these factors to be statistically independent.

:2. "Variables can have a contemporaneous impact on other variables". This is a desirable feature, especially when using low-frequency data. For example, an indirect tax rate increase would not affect tax revenues the day the decision is announced, but one could find an effect in that quarter's data.

**Reduced VAR**

By premultiplying the structural VAR with the inverse of "B"_{0}

:$y_t = B_0^{-1} c_0 + B_0^{-1} B_1 y_{t-1} + B_0^{-1} B_2 y_{t-2} + \cdots + B_0^{-1} B_p y_{t-p} + B_0^{-1} \epsilon_t,$

and denoting

:$B_0^{-1} c_0 = c, \quad B_0^{-1} B_i = A_i \text{ for } i = 1, \dots, p \text{ and } B_0^{-1} \epsilon_t = e_t,$

one obtains the **"p"th order reduced VAR**

:$y_t = c + A_1 y_{t-1} + A_2 y_{t-2} + \cdots + A_p y_{t-p} + e_t.$

Note that in the reduced form all right hand side variables are predetermined at time "t". As there are no time "t" endogenous variables on the right hand side, no variable has a "direct" contemporaneous effect on other variables in the model.

However, the error terms in the reduced VAR are composites of the structural shocks: "e"_{"t"} = "B"_{0}^{−1}"ε"_{"t"}. Thus, the occurrence of one structural shock "ε"_{"i","t"} can potentially lead to the occurrence of shocks in all error terms "e"_{"j","t"}, thus creating contemporaneous movement in all endogenous variables. Consequently, the covariance matrix of the reduced VAR,

:$\Omega = \mathrm{E}(e_t e_t') = \mathrm{E}(B_0^{-1} \epsilon_t \epsilon_t' (B_0^{-1})') = B_0^{-1} \Sigma (B_0^{-1})',$

can have non-zero off-diagonal elements, thus allowing non-zero correlation between error terms.
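The mapping Ω = B₀⁻¹ Σ (B₀⁻¹)′ can be illustrated numerically; the values of B₀ and Σ below are assumed for illustration only.

```python
import numpy as np

# Assumed structural parameters: unit diagonal on B0, diagonal shock covariance
B0 = np.array([[1.0, 0.5],
               [-0.3, 1.0]])
Sigma = np.diag([1.0, 2.0])         # uncorrelated structural shocks

B0_inv = np.linalg.inv(B0)
# Covariance of the reduced-form errors e_t = B0^{-1} eps_t
Omega = B0_inv @ Sigma @ B0_inv.T

# Even though Sigma is diagonal, Omega generally is not:
print(Omega)
```

The off-diagonal entries of `Omega` are non-zero here, which is exactly the correlation between reduced-form error terms described above.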

**Estimation**

**Estimation of the regression parameters**

Starting from the concise matrix notation (for details see this annex):

:$Y = BZ + U,$

*Ordinary least squares (OLS) estimation of each equation in the reduced VAR is both consistent and asymptotically efficient. It is furthermore equal to the maximum likelihood estimator (MLE) (Hamilton 1994, p. 293).

The OLS estimator for B is given by:

:$\hat{B} = Y Z' (Z Z')^{-1}$
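A sketch of this formula on simulated data. The layout of Y and Z is an assumption (each column of Z holding a constant and the lagged values), since the annex defining the matrices is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
k, p, T = 2, 1, 2000

# True VAR(1) parameters, assumed for the simulation
c_true = np.array([0.5, -0.2])
A_true = np.array([[0.6, 0.1],
                   [0.2, 0.3]])
y = np.zeros((T + 1, k))
for t in range(1, T + 1):
    y[t] = c_true + A_true @ y[t - 1] + rng.standard_normal(k)

# Y is k x T; Z is (1 + kp) x T with columns [1, y_{t-1}']'
Y = y[1:].T
Z = np.vstack([np.ones(T), y[:-1].T])

B_hat = Y @ Z.T @ np.linalg.inv(Z @ Z.T)  # OLS: B = Y Z'(ZZ')^{-1}
c_hat, A_hat = B_hat[:, 0], B_hat[:, 1:]
print(np.round(A_hat, 2))                 # close to A_true for large T
```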

*Generalized least squares (GLS) yields the same estimates, as was shown by Zellner (1962).

The GLS estimator for B is given by:

:$\mathrm{Vec}(\hat{B}) = ((Z Z')^{-1} Z \otimes I_k) \, \mathrm{Vec}(Y),$

where $\otimes$ denotes the Kronecker product and Vec the vectorization of the matrix "Y".

**Estimation of the covariance matrix of the errors**

As in the standard case, the MLE estimator of the covariance matrix differs from the OLS estimator.

MLE estimator: $\hat{\Sigma} = \frac{1}{T} \sum_{t=1}^T \hat{\epsilon}_t \hat{\epsilon}_t'$

OLS estimator: $\hat{\Sigma} = \frac{1}{T - kp - 1} \sum_{t=1}^T \hat{\epsilon}_t \hat{\epsilon}_t'$ for a model with a constant, "k" variables and "p" lags.

In matrix notation, this gives:

:$\hat{\Sigma} = \frac{1}{T - kp - 1} (Y - \hat{B} Z)(Y - \hat{B} Z)'.$
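The two residual covariance estimators differ only in their divisor, which can be made concrete with a short sketch (the residual matrix U is drawn directly here, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
k, p, T = 2, 1, 500

# Stand-in for the residual matrix U = Y - B_hat Z (k x T), drawn for illustration
U = rng.standard_normal((k, T))

Sigma_mle = (U @ U.T) / T                 # MLE divisor: T
Sigma_ols = (U @ U.T) / (T - k * p - 1)   # OLS divisor: T - kp - 1 (d.o.f. correction)

# The OLS estimate exceeds the MLE estimate by the factor T / (T - kp - 1)
ratio = T / (T - k * p - 1)
print(np.allclose(Sigma_ols, ratio * Sigma_mle))  # True
```

For large T the two estimators coincide asymptotically, since the ratio tends to 1.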

Note that for the GLS estimator the covariance matrix of the errors becomes:

:$\hat{\boldsymbol{\Sigma}}_\epsilon = I_T \otimes \hat{\Sigma}_\epsilon.$

**Estimation of the covariance matrix of the parameters**

The covariance matrix of the parameters can be estimated as

:$\hat{\mathrm{Cov}}(\mathrm{Vec}(\hat{B})) = (Z Z')^{-1} \otimes \hat{\Sigma}.$

**References**

* Walter Enders, "Applied Econometric Time Series", 2nd Edition, John Wiley & Sons, 2003, ISBN 0-471-23065-0.

* James D. Hamilton, "Time Series Analysis", Princeton University Press, 1994.

* Helmut Lütkepohl, "New Introduction to Multiple Time Series Analysis", Springer, 2005.

* Arnold Zellner, "An Efficient Method of Estimating Seemingly Unrelated Regressions and Tests for Aggregation Bias", "Journal of the American Statistical Association", Vol. 57, No. 298 (Jun. 1962), pp. 348–368.

