378 Handbook of Discrete-Valued Time Series
$\pi^*_{i,t} > \pi_{i,t}$ only when the observed number of nonzero spatial neighbors is greater than the expected number of nonzero spatial neighbors under independence; that is, $\sum_{j \in N_i} y_{j,t} > \sum_{j \in N_i} \pi_{j,t}$. If $\theta_{p+1} = 0$, then $\pi^*_{i,t} > \pi_{i,t}$ only when the observed number of nonzero temporal neighbors is greater than the expected number of nonzero temporal neighbors under independence; that is, $y_{i,t-1} + y_{i,t+1} > \pi_{i,t-1} + \pi_{i,t+1}$. Thus, the interpretation of $\theta_{p+1}$ and $\theta_{p+2}$ as local dependence parameters is more sensible. Further, the simulation study in Wang (2013) showed that the marginal expectation of $Y_{i,t}$ under the centered parameterization remains constant over moderate levels of spatial and temporal dependence (i.e., $E(Y_{i,t} \mid x_{k,i,t},\ k = 1, \ldots, p) \approx \pi_{i,t}$). The interpretation of the regression coefficients as effects of covariates is more sensible as well.
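This near-invariance of the marginal mean is easy to check empirically. The sketch below is a minimal, covariate-free illustration assuming a one-dimensional spatial chain for $N_i$; the function name `gibbs_centered` and the parameter values are hypothetical choices, not the chapter's. It simulates the centered autologistic field by Gibbs sampling from the full conditionals and lets the empirical mean of $Y_{i,t}$ be compared with $\pi$.

```python
import numpy as np

def gibbs_centered(theta0, th_sp, th_tm, S, T, sweeps=200, seed=1):
    """Gibbs sampler for a covariate-free centered autologistic field on a
    1-D chain of S sites observed over T times (illustrative neighborhood).
    Full conditional: logit P(Y_it = 1 | rest) = theta0
        + th_sp * sum of (y_jt - pi) over chain neighbors j of i
        + th_tm * ((y_{i,t-1} - pi) + (y_{i,t+1} - pi))."""
    rng = np.random.default_rng(seed)
    pi = 1.0 / (1.0 + np.exp(-theta0))      # expectation under independence
    y = (rng.random((S, T)) < pi).astype(float)
    for _ in range(sweeps):
        for i in range(S):
            for t in range(T):
                sp = 0.0                     # centered spatial neighbor sum
                if i > 0:
                    sp += y[i - 1, t] - pi
                if i < S - 1:
                    sp += y[i + 1, t] - pi
                tm = 0.0                     # centered temporal neighbor sum
                if t > 0:
                    tm += y[i, t - 1] - pi
                if t < T - 1:
                    tm += y[i, t + 1] - pi
                eta = theta0 + th_sp * sp + th_tm * tm
                y[i, t] = float(rng.random() < 1.0 / (1.0 + np.exp(-eta)))
    return y, pi
```

For moderate values of the dependence parameters, the empirical mean of the sampled field stays close to $\pi$, mirroring the behavior reported in Wang (2013).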
17.3.2 Statistical Inference
Statistical inference for the model with the centered parameterization has been developed based on expectation–maximization pseudo-likelihood, Monte Carlo expectation–maximization likelihood, and Bayesian inference (Wang and Zheng, 2013).
17.3.2.1 Expectation–Maximization Pseudo-Likelihood Estimator
To obtain the maximum pseudo-likelihood estimates of the model parameters, a combination of an expectation–maximization (EM) algorithm and a Newton–Raphson algorithm, called the expectation–maximization pseudo-likelihood estimator (EMPLE), is considered. Specifically, at the E step, update $\pi_{i,t}$, the expectation of $Y_{i,t}$ under the independence model; then, at the M step, update $\hat{\theta}^{(l)}$ by maximizing
$$
\prod_{i,t} \frac{\exp\!\left(\sum_{k=0}^{p} \theta_k x_{k,i,t}\, y_{i,t} + \theta_{p+1}\, y_{i,t} \sum_{j \in N_i} y^{*(l-1)}_{j,t} + \theta_{p+2}\, y_{i,t}\left(y^{*(l-1)}_{i,t-1} + y^{*(l-1)}_{i,t+1}\right)\right)}{1 + \exp\!\left(\sum_{k=0}^{p} \theta_k x_{k,i,t} + \theta_{p+1} \sum_{j \in N_i} y^{*(l-1)}_{j,t} + \theta_{p+2}\left(y^{*(l-1)}_{i,t-1} + y^{*(l-1)}_{i,t+1}\right)\right)},
$$
where $y^{*(l)}_{i,t}$ is the centered response at the $l$th iteration. The M step can be carried out by a Newton–Raphson algorithm for standard logistic regression, and the E and M steps are repeated until convergence. A parametric bootstrap can be used to compute the standard error of the EMPLE. The choice of the starting value $\theta^{(0)}$ can affect how long the algorithm takes to converge; the maximum pseudo-likelihood estimate from the uncentered autologistic regression model is a natural choice.
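Because each M step is exactly a logistic regression of $y_{i,t}$ on the covariates plus the two centered neighbor sums, the whole EMPLE loop is short to sketch. The code below is a minimal illustration assuming a one-dimensional spatial chain for $N_i$; the function names (`emple`, `fit_logistic`) and the fixed iteration counts are hypothetical choices, not the chapter's implementation.

```python
import numpy as np

def logistic(u):
    return 1.0 / (1.0 + np.exp(-np.clip(u, -35.0, 35.0)))

def fit_logistic(X, y, iters=25):
    """Newton-Raphson for plain logistic regression (one M step)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = logistic(X @ beta)
        w = p * (1.0 - p) + 1e-9            # jitter guards a singular Hessian
        grad = X.T @ (y - p)
        hess = (X * w[:, None]).T @ X
        beta = beta + np.linalg.solve(hess, grad)
    return beta

def emple(y, x, n_em=50):
    """EM pseudo-likelihood for a centered autologistic model.
    y: (S, T) binary array; x: (S, T, p+1) covariates with x[..., 0] == 1.
    Spatial neighbors are the adjacent sites on a 1-D chain (an
    illustrative assumption, not part of the general model)."""
    S, T, P = x.shape
    theta = np.zeros(P + 2)                 # (theta_0..theta_p, spatial, temporal)
    for _ in range(n_em):
        # E step: pi under the independence model, then center the responses
        pi = logistic(np.tensordot(x, theta[:P], axes=([2], [0])))
        ystar = y - pi
        sp = np.zeros_like(ystar)           # spatial sums of centered responses
        sp[1:, :] += ystar[:-1, :]
        sp[:-1, :] += ystar[1:, :]
        tp = np.zeros_like(ystar)           # temporal sums of centered responses
        tp[:, 1:] += ystar[:, :-1]
        tp[:, :-1] += ystar[:, 1:]
        # M step: logistic regression with the two neighbor sums appended
        X = np.column_stack([x.reshape(-1, P), sp.ravel(), tp.ravel()])
        theta = fit_logistic(X, y.ravel())
    return theta
```

On independent data the dependence estimates should be near zero and the intercept near the logit of the success probability; in practice $\theta^{(0)}$ would be taken from the uncentered model's pseudo-likelihood fit rather than from zero as in this sketch.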
17.3.2.2 Monte Carlo Expectation–Maximization Likelihood Estimator
Let
$$
z^*_\theta = \left(\sum_{i,t} x_{0,i,t}\, y^*_{i,t},\ \ldots,\ \sum_{i,t} x_{p,i,t}\, y^*_{i,t},\ \frac{1}{2}\sum_{i,t}\sum_{j \in N_i} y^*_{i,t}\, y^*_{j,t},\ \sum_{i,t} y^*_{i,t}\, y^*_{i,t-1}\right)'.
$$
We consider a rescaled version of the likelihood function
$$
c^*(\psi)\,L(\mathbf{Y};\theta) = \frac{c^*(\psi)}{c^*(\theta)}\,\exp\left(\theta' z^*_\theta\right) = \left\{E_\psi\!\left[\frac{\exp\left(\theta' z^*_\theta\right)}{\exp\left(\psi' z^*_\psi\right)}\right]\right\}^{-1}\exp\left(\theta' z^*_\theta\right),
$$