220 Handbook of Discrete-Valued Time Series
Hudecová [17] and Fokianos et al. [11] propose change point statistics for binary time
series models, while Franke et al. [13] and Doukhan and Kengne [7] consider changes
in Poisson autoregressive models. Related procedures have also been investigated by
Fokianos and Fried [9,10] for integer valued GARCH and log-linear Poisson autoregres-
sive time series, respectively, but with a focus on outlier detection and intervention effects
rather than change points.
Section 10.2 explains how change point statistics can be constructed and derives asymp-
totic properties under both the null and alternative hypotheses, based on regularity
conditions, which are summarized in Appendix 10.7.1 to lighten the presentation. This
methodology is then applied to binary time series in Section 10.3 and to Poisson autoregres-
sive models in Section 10.4, generalizing the statistics already discussed in the literature.
In Section 10.5, some simulations as well as applications to real data illustrate the perfor-
mance of these procedures. A short review of sequential (also called online) procedures for
count time series is given in Section 10.6. Finally, the proofs are given in Appendix 10.7.2.
10.2 General Principles of Retrospective Change Point Analysis
Assume that data $Y_1, \ldots, Y_n$ are observed with a possible structural break at the (unknown)
change point $k_0$. We will first look at likelihood ratio tests for structural breaks before
explaining how to generalize these ideas. To this end, we assume that the data before and
after the change can be parameterized by the same likelihood function but with different
(unknown) parameters $\theta_0, \theta_1 \in \Theta \subset \mathbb{R}^d$. A likelihood ratio approach yields the following
statistic:
$$\max_{1 \le k \le n} LR(k) := \max_{1 \le k \le n} \Big( \ell\big((Y_1, \ldots, Y_k), \hat{\theta}_k\big) + \ell\big((Y_{k+1}, \ldots, Y_n), \hat{\theta}_k^{\circ}\big) - \ell\big((Y_1, \ldots, Y_n), \hat{\theta}_n\big) \Big),$$
where $\ell(Y, \theta)$ is the log-likelihood function and $\hat{\theta}_k$ and $\hat{\theta}_k^{\circ}$ are the maximum likelihood
estimators based on $Y_1, \ldots, Y_k$ and $Y_{k+1}, \ldots, Y_n$, respectively. The maximum over $k$ is taken
because the change point is unknown, so the likelihood ratio statistic maximizes over
all possible change points. A similar approach based on an asymptotic Bayes statistic
leads to a sum-type statistic, where the sum over $LR(k)$ is considered (see, e.g., Kirch [19]).
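As a concrete illustration, the following sketch computes the max-type statistic for a single mean change in Gaussian data with known unit variance, where each log-likelihood term reduces to a sum of squared deviations from the segment mean. The helper name `lr_statistic` and the simulated data are illustrative assumptions, not part of the text.

```python
import numpy as np

def lr_statistic(y):
    """Max-type likelihood ratio statistic for one mean change in
    i.i.d. N(mu, 1) data: LR(k) = 0.5 * (SSE_full - SSE_pre - SSE_post),
    maximized over all candidate change points k (hypothetical helper)."""
    y = np.asarray(y, dtype=float)
    n = len(y)

    def sse(x):
        # sum of squared deviations from the segment's sample mean
        return float(np.sum((x - x.mean()) ** 2))

    full = sse(y)
    # evaluate LR(k) for every candidate split point k = 1, ..., n - 1
    lr = [0.5 * (full - sse(y[:k]) - sse(y[k:])) for k in range(1, n)]
    k_hat = int(np.argmax(lr)) + 1      # estimated change point
    return lr[k_hat - 1], k_hat

# simulated example: mean shifts from 0 to 1 at k0 = 100
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0.0, 1.0, 100),
                    rng.normal(1.0, 1.0, 100)])
stat, k_hat = lr_statistic(y)
```

A large value of `stat` suggests a break, and `k_hat` estimates its location; calibrating the rejection threshold is exactly what the limit theory discussed below provides.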
Davis et al. [6] proposed this statistic for linear autoregressive processes of order $p$ with
standard normal errors:
$$Y_t = \beta_0 + \sum_{j=1}^{p} \beta_j Y_{t-j} + \varepsilon_t, \qquad \varepsilon_t \overset{\text{i.i.d.}}{\sim} N(0, 1). \tag{10.1}$$
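To make model (10.1) concrete, here is a minimal simulation sketch; the function `simulate_ar`, the burn-in length, and the chosen coefficients are illustrative assumptions, not from the text.

```python
import numpy as np

def simulate_ar(beta0, betas, n, burn_in=200, seed=0):
    """Simulate model (10.1): Y_t = beta0 + sum_j betas[j] * Y_{t-j} + eps_t
    with i.i.d. N(0, 1) errors; the burn-in removes the influence of the
    zero starting values (hypothetical helper)."""
    rng = np.random.default_rng(seed)
    p = len(betas)
    y = np.zeros(p + burn_in + n)
    eps = rng.normal(0.0, 1.0, p + burn_in + n)
    for t in range(p, p + burn_in + n):
        y[t] = beta0 + sum(betas[j] * y[t - 1 - j] for j in range(p)) + eps[t]
    return y[-n:]

# stationary AR(2) example (roots of 1 - 0.3z + 0.2z^2 lie outside the unit circle)
y = simulate_ar(beta0=0.5, betas=[0.3, -0.2], n=500)
```

The stationary mean of this example is $\beta_0 / (1 - \beta_1 - \beta_2) = 0.5 / 0.9 \approx 0.56$, which the sample mean of a long simulated path should roughly match.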
In this situation (which includes mean changes as a special case, $p = 0$), this maximum
likelihood statistic does not converge in distribution to a nondegenerate limit but diverges almost
surely to infinity (Davis et al. [6]). Nevertheless, asymptotic level-$\alpha$ tests based on this
maximum likelihood statistic can be constructed using a Darling–Erdös limit theorem as
stated in Theorem 10.1b. In small samples, however, the slow convergence of Darling–
Erdös limit theorems often leads to size distortions.