The vector autoregressive (VAR) model

We will see how the vector autoregressive model of order p, VAR(p), extends the AR(p) model to k series by creating a system of k equations, each of which contains p lagged values of all k series. In the simplest case, a VAR(1) model for k=2 takes the following form:
$$
\begin{aligned}
y_{1,t} &= c_1 + a_{1,1}\,y_{1,t-1} + a_{1,2}\,y_{2,t-1} + \epsilon_{1,t} \\
y_{2,t} &= c_2 + a_{2,1}\,y_{1,t-1} + a_{2,2}\,y_{2,t-1} + \epsilon_{2,t}
\end{aligned}
$$
This model can be expressed somewhat more concisely in matrix form:
$$
\begin{bmatrix} y_{1,t} \\ y_{2,t} \end{bmatrix}
=
\begin{bmatrix} c_1 \\ c_2 \end{bmatrix}
+
\begin{bmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \end{bmatrix}
\begin{bmatrix} y_{1,t-1} \\ y_{2,t-1} \end{bmatrix}
+
\begin{bmatrix} \epsilon_{1,t} \\ \epsilon_{2,t} \end{bmatrix}
$$
The coefficients on a series' own lags describe its individual dynamics, whereas the cross-variable coefficients capture the interactions across series. This notation extends to k series and lag order p as follows:
$$
\mathbf{y}_t = \mathbf{c} + \mathbf{A}_1 \mathbf{y}_{t-1} + \dots + \mathbf{A}_p \mathbf{y}_{t-p} + \boldsymbol{\epsilon}_t
$$

where y_t, c, and ε_t are k×1 vectors and each A_i is a k×k coefficient matrix.
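To make the system concrete, the following sketch simulates a bivariate VAR(1) with hypothetical parameter values (the intercepts, lag matrix, and noise scale below are chosen for illustration, not taken from the text) and recovers the coefficients by equation-wise OLS:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical true parameters of a stable bivariate VAR(1)
c = np.array([0.1, -0.2])
A = np.array([[0.5, 0.2],    # series 1: own-lag 0.5, cross-lag 0.2
              [0.1, 0.4]])   # series 2: cross-lag 0.1, own-lag 0.4

# Simulate T observations
T = 2000
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = c + A @ y[t - 1] + rng.normal(scale=0.1, size=2)

# Equation-wise OLS: regress y_t on [1, y_{t-1}]
X = np.column_stack([np.ones(T - 1), y[:-1]])       # (T-1, 3)
coeffs, *_ = np.linalg.lstsq(X, y[1:], rcond=None)  # (3, 2)
c_hat, A_hat = coeffs[0], coeffs[1:].T

print("estimated intercepts:", c_hat)
print("estimated lag matrix:\n", A_hat)
```

The diagonal of the estimated lag matrix corresponds to the own-lag dynamics, the off-diagonal entries to the cross-series interactions.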
VAR(p) models also require stationarity, so the initial steps from univariate time-series modeling carry over: first, explore the series and determine the necessary transformations; then, apply the augmented Dickey-Fuller test to verify that each series meets the stationarity criterion, and apply further transformations otherwise. The model can be estimated with OLS conditional on initial values, or with maximum likelihood; the two are equivalent for normally distributed errors but not otherwise.
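In practice you would call statsmodels' `adfuller` for the full augmented test; the following numpy-only sketch of the plain (no-augmentation) Dickey-Fuller regression shows the idea — a strongly negative t-statistic on the lagged level argues against a unit root:

```python
import numpy as np

def df_tstat(y):
    """t-statistic of the simplified Dickey-Fuller regression
    dy_t = alpha + rho * y_{t-1} + e_t (no lagged differences)."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    sigma2 = resid @ resid / (len(dy) - 2)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

rng = np.random.default_rng(0)
e = rng.normal(size=1000)
stationary = np.zeros(1000)          # AR(1) with phi = 0.5
for t in range(1, 1000):
    stationary[t] = 0.5 * stationary[t - 1] + e[t]
random_walk = np.cumsum(e)           # unit-root process

# The 5% critical value of the DF distribution (with constant) is about -2.86
print(df_tstat(stationary), df_tstat(random_walk))
```

The stationary AR(1) produces a t-statistic far below the critical value, while the random walk typically does not, so the latter would be differenced before estimating the VAR.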

If some or all of the k series are unit-root non-stationary, they may be cointegrated. Cointegration extends the unit-root concept to multiple time series: it means that a linear combination of two or more series is stationary and, hence, mean-reverting. The VAR model is not equipped to handle this case without differencing; instead, use the vector error correction model (VECM; see the references on GitHub). We will further explore cointegration because, if it is present and assumed to persist, it can be leveraged for a pairs-trading strategy.
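A minimal sketch of the idea, using simulated data and the first (OLS) step of the Engle-Granger procedure — the series, hedge ratio of 0.5, and noise scale are hypothetical choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Both series share a common stochastic trend, so each has a unit root,
# but the combination y1 - 0.5 * y2 is stationary (cointegrated pair)
y2 = np.cumsum(rng.normal(size=n))
y1 = 0.5 * y2 + rng.normal(scale=0.5, size=n)

# Engle-Granger step 1: estimate the hedge ratio by OLS
X = np.column_stack([np.ones(n), y2])
(alpha, beta), *_ = np.linalg.lstsq(X, y1, rcond=None)
spread = y1 - alpha - beta * y2

# Step 2 (omitted here) would apply a unit-root test to the spread
print("hedge ratio:", beta)
print("spread std:", spread.std(), "vs trending series std:", y2.std())
```

The spread's dispersion stays bounded while the individual series wander, which is exactly the mean-reverting behavior a pairs-trading strategy exploits.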

The determination of the lag order also takes its cues from the ACF and PACF of each series, but is constrained by the fact that the same lag order applies to all equations. After estimation, residual diagnostics should again show a result resembling white noise. Model selection can use in-sample information criteria or, preferably, out-of-sample predictive performance to cross-validate alternative model designs when the ultimate goal is prediction.
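Lag-order selection by information criterion can be sketched as follows; this minimal BIC implementation and the simulated VAR(1) parameters are illustrative assumptions, and all candidate orders are fit on a common sample so their criteria are comparable:

```python
import numpy as np

def var_bic(y, p, p_max):
    """BIC of an OLS-estimated VAR(p), fit on a common sample that
    drops the first p_max observations so orders are comparable."""
    T, k = y.shape
    Y = y[p_max:]
    X = np.column_stack([np.ones(T - p_max)] +
                        [y[p_max - i: T - i] for i in range(1, p + 1)])
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ B
    n = len(Y)
    sigma = resid.T @ resid / n                  # residual covariance
    n_params = k * (1 + k * p)                   # intercepts + lag matrices
    return np.log(np.linalg.det(sigma)) + n_params * np.log(n) / n

# Simulate a bivariate VAR(1) with hypothetical parameters, then pick p
rng = np.random.default_rng(7)
c = np.array([0.1, -0.2])
A = np.array([[0.5, 0.2], [0.1, 0.4]])
T = 2000
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = c + A @ y[t - 1] + rng.normal(scale=0.1, size=2)

best_p = min(range(1, 5), key=lambda p: var_bic(y, p, p_max=4))
print("selected lag order:", best_p)
```

Because the penalty term grows with k²p, the criterion guards against the rapid parameter growth that makes overfitting a particular risk for VAR models.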

As mentioned in the univariate case, predictions of the original time series require us to reverse the transformations applied to make a series stationary before training the model.
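For example, if log prices were differenced to obtain stationary returns, forecasts of those returns must be cumulated from the last known level and exponentiated back to the price scale — a minimal sketch with made-up prices:

```python
import numpy as np

prices = np.array([100.0, 101.0, 99.5, 102.0, 103.5])
log_prices = np.log(prices)
returns = np.diff(log_prices)     # stationarity transform: log, then difference

# Invert the transform: cumulate (possibly predicted) returns from the
# last observed log level, then undo the log
reconstructed = np.exp(log_prices[0] + np.cumsum(returns))

print(reconstructed)              # recovers the price path after the first point
```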
