Variance decomposition or forecast error variance decomposition (FEVD) indicates the amount of information each variable contributes to the other variables in a vector autoregression (VAR) model.[1] Variance decomposition determines how much of the forecast error variance of each variable can be explained by exogenous shocks to the other variables.
Calculating the forecast error variance
For the VAR(p) of the form

y_{t} = \nu + A_{1} y_{t-1} + \dots + A_{p} y_{t-p} + u_{t} ,
change this to a VAR(1) by writing it in companion form (see general matrix notation of a VAR(p)):
Y_{t} = \boldsymbol{\nu} + A Y_{t-1} + U_{t}
where
A = \begin{bmatrix} A_{1} & A_{2} & \dots & A_{p-1} & A_{p} \\ \mathbf{I}_{k} & 0 & \dots & 0 & 0 \\ 0 & \mathbf{I}_{k} & & 0 & 0 \\ \vdots & & \ddots & \vdots & \vdots \\ 0 & 0 & \dots & \mathbf{I}_{k} & 0 \end{bmatrix} ,
Y_{t} = \begin{bmatrix} y_{t} \\ \vdots \\ y_{t-p+1} \end{bmatrix} ,
\boldsymbol{\nu} = \begin{bmatrix} \nu \\ 0 \\ \vdots \\ 0 \end{bmatrix}
and
U_{t} = \begin{bmatrix} u_{t} \\ 0 \\ \vdots \\ 0 \end{bmatrix} ,
where y_{t}, \nu and u_{t} are k-dimensional column vectors, A is a kp × kp matrix, and Y_{t}, \boldsymbol{\nu} and U_{t} are kp-dimensional column vectors.
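As a concrete illustration, the companion matrix can be assembled mechanically from the coefficient matrices. The following is a minimal sketch in Python with NumPy; the function name companion_matrix and the coefficient values in the example are illustrative assumptions, not from the source.

import numpy as np

def companion_matrix(coef_list):
    """Stack VAR(p) coefficient matrices A_1..A_p into the kp-by-kp companion matrix."""
    p = len(coef_list)
    k = coef_list[0].shape[0]
    A = np.zeros((k * p, k * p))
    A[:k, :] = np.hstack(coef_list)    # top block row: [A_1  A_2  ...  A_p]
    A[k:, :-k] = np.eye(k * (p - 1))   # identity blocks below the top row
    return A

# Hypothetical example with k = 2 variables and p = 2 lags:
A1 = np.array([[0.5, 0.1], [0.0, 0.4]])
A2 = np.array([[0.2, 0.0], [0.1, 0.1]])
A = companion_matrix([A1, A2])         # 4-by-4 companion matrix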
Calculate the mean squared error of the h-step forecast of variable j, \mathbf{MSE}[y_{j,t}(h)], which is given by the j-th diagonal element of the forecast MSE matrix \Sigma_{y}(h):

\mathbf{MSE}[y_{j,t}(h)] = \left( \operatorname{diag}(\Sigma_{y}(h)) \right)_{j} .
\Sigma_{y}(h) = \sum_{i=0}^{h-1} \Phi_{i} \Sigma_{u} \Phi_{i}'
where \Phi_{i} = J A^{i} J' and

J = \begin{bmatrix} \mathbf{I}_{k} & 0 & \dots & 0 \end{bmatrix} ,

so that J is a k × kp matrix.
\Sigma_{u} is the covariance matrix of the errors u_{t}.
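The h-step forecast MSE matrix can be computed directly from these quantities. Below is a hedged Python/NumPy sketch of that computation; the function name forecast_mse_matrix and the example values for A and \Sigma_{u} are assumptions, continuing the hypothetical k = 2, p = 2 example above.

import numpy as np

def forecast_mse_matrix(A, Sigma_u, k, h):
    """Sigma_y(h) = sum_{i=0}^{h-1} Phi_i Sigma_u Phi_i', with Phi_i = J A^i J'."""
    kp = A.shape[0]
    J = np.hstack([np.eye(k), np.zeros((k, kp - k))])  # k-by-kp selection matrix J
    Sigma_y = np.zeros((k, k))
    A_pow = np.eye(kp)                                 # A^0
    for _ in range(h):
        Phi = J @ A_pow @ J.T                          # Phi_i = J A^i J'
        Sigma_y += Phi @ Sigma_u @ Phi.T
        A_pow = A_pow @ A                              # advance to A^{i+1}
    return Sigma_y

# Hypothetical companion matrix (k = 2, p = 2) and error covariance:
A = np.array([[0.5, 0.1, 0.2, 0.0],
              [0.0, 0.4, 0.1, 0.1],
              [1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])
Sigma_u = np.array([[1.0, 0.3],
                    [0.3, 0.5]])
mse = np.diag(forecast_mse_matrix(A, Sigma_u, k=2, h=4))  # MSE[y_{j,t}(4)] for each j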
The proportion of the forecast error variance of variable j accounted for by exogenous shocks to variable k is given by \omega_{jk,h}:
\omega_{jk,h} = \sum_{i=0}^{h-1} \left( e_{j}' \Theta_{i} e_{k} \right)^{2} \Big/ \mathbf{MSE}[y_{j,t}(h)] ,
where e_{j} is the j-th column of \mathbf{I}_{k} and \Theta_{i} = \Phi_{i} P.
P is a lower triangular matrix obtained by a Cholesky decomposition of \Sigma_{u}, such that \Sigma_{u} = PP'.
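Putting the pieces together, the full decomposition can be sketched as follows; as before, the function name fevd and the example inputs are hypothetical, not from the source.

import numpy as np

def fevd(A, Sigma_u, k, h):
    """omega[j, m]: share of the h-step forecast error variance of variable j
    attributed to orthogonalized shocks in variable m."""
    kp = A.shape[0]
    J = np.hstack([np.eye(k), np.zeros((k, kp - k))])
    P = np.linalg.cholesky(Sigma_u)        # lower triangular, Sigma_u = P P'
    contrib = np.zeros((k, k))
    A_pow = np.eye(kp)
    for _ in range(h):
        Theta = J @ A_pow @ J.T @ P        # Theta_i = Phi_i P
        contrib += Theta ** 2              # (e_j' Theta_i e_m)^2, elementwise
        A_pow = A_pow @ A
    mse = contrib.sum(axis=1)              # row sums equal MSE[y_{j,t}(h)]
    return contrib / mse[:, None]          # omega_{jm,h}; each row sums to one

A = np.array([[0.5, 0.1, 0.2, 0.0],
              [0.0, 0.4, 0.1, 0.1],
              [1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])
Sigma_u = np.array([[1.0, 0.3],
                    [0.3, 0.5]])
omega = fevd(A, Sigma_u, k=2, h=4)

Since \Sigma_{u} = PP' implies \Phi_{i}\Sigma_{u}\Phi_{i}' = \Theta_{i}\Theta_{i}', the squared entries of \Theta_{i} accumulate to the diagonal of \Sigma_{y}(h), so each row of \omega sums to one.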
Notes
1. Lütkepohl, H. (2007). New Introduction to Multiple Time Series Analysis. Springer. p. 63.