448 Handbook of Discrete-Valued Time Series
chain Monte Carlo (MCMC) sampling algorithm is devised for efficient estimation. Also
worth mentioning are the parameter-based (process-based) approach of Creal et al. (2013)
and the estimating equation approach of Thavaneswaran and Ravishanker (2015; Chapter 7
in this volume).
This chapter proceeds as follows. Section 21.2 shows why some classical approaches
will not generate discrete-valued long memory time series. Methods capable of generat-
ing long memory count time series are presented in Section 21.3. The special case of a
binary long memory series is discussed in Section 21.4. Bayesian methods are pursued
in Section 21.5, where some open research questions are suggested. Section 21.6 provides
concluding discussion.
21.2 Inadequacies of Classical Approaches
Integer autoregressive moving-average (INARMA) models (Steutel and van Harn 1979,
McKenzie 1985, 1986, 1988, Al-Osh and Alzaid 1988) and discrete ARMA (DARMA)
methods (Jacobs and Lewis 1978a, 1978b) cannot produce long memory series. A simple
first-order integer autoregression, for example, obeys the recursion

X_t = p ∘ X_{t−1} + Z_t,  t = 0, ±1, ±2, ...,   (21.1)
where p ∈ (0, 1) is a parameter, ∘ is the thinning operator, and {Z_t}_{t=−∞}^{∞} is an independent and identically distributed (IID) sequence supported on the nonnegative integers.
Clarifying, p ◦ M is a binomial random variable with M trials and success probabil-
ity p. The thinning in (21.1) serves to keep the series integer valued. While the solution
of (21.1) is stationary, its lag h autocovariance is proportional to p^h and is hence absolutely summable over all lags. Higher-order autoregressions have the same autocovariance
summability properties; that is, one cannot construct long memory count models with
INARMA methods.
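The geometric decay of the INAR(1) autocovariance is easy to see by simulation. The sketch below, a minimal illustration using numpy with Poisson innovations (an assumed choice; any nonnegative-integer IID innovation works), generates a series via binomial thinning as in (21.1) and compares the sample autocorrelation at lag h with p^h:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_inar1(n, p, lam, burn=500):
    """Simulate an INAR(1) series X_t = p o X_{t-1} + Z_t, where
    p o M is Binomial(M, p) thinning and Z_t is Poisson(lam)."""
    x = np.empty(n + burn, dtype=np.int64)
    x[0] = rng.poisson(lam / (1 - p))        # start near the stationary mean
    for t in range(1, n + burn):
        thinned = rng.binomial(x[t - 1], p)  # p o X_{t-1}: keeps the series integer valued
        x[t] = thinned + rng.poisson(lam)    # add the IID innovation
    return x[burn:]

def sample_acf(x, h):
    """Sample autocorrelation at lag h."""
    xc = x - x.mean()
    return np.dot(xc[:-h], xc[h:]) / np.dot(xc, xc)

x = simulate_inar1(20000, p=0.6, lam=2.0)

# The lag-h autocorrelation tracks p**h: geometric decay,
# hence absolutely summable over all lags (short memory).
for h in (1, 2, 3):
    print(h, round(sample_acf(x, h), 2), round(0.6**h, 2))
```

Since p^h is summable in h for any p ∈ (0, 1), no choice of innovation distribution can make this recursion long memory.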
Similarly, DARMA methods also cannot produce long memory sequences. A first-order
discrete autoregression with marginal distribution π obeys the recursion
X_t = A_t X_{t−1} + (1 − A_t) Z_t,  t = 1, 2, ...,
where {Z_t} is IID with distribution π and {A_t} is an IID sequence of Bernoulli trials with P[A_t = 1] ≡ p. The recursion commences with a draw from the specified marginal distribution π: X_0 =_D π. While any marginal distribution can be achieved, the lag h autocovariance is again proportional to p^h, which is absolutely summable in lag. Moving to higher-order
models does not alter this absolute summability.
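The DAR(1) recursion can be sketched in a few lines; this is an illustrative simulation (the `draw_pi` callable and the Poisson marginal are assumed choices, since any marginal π works) showing that the autocorrelation is p^h regardless of π:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_dar1(n, p, draw_pi, burn=500):
    """Simulate a DAR(1) series X_t = A_t X_{t-1} + (1 - A_t) Z_t,
    with A_t ~ Bernoulli(p) and Z_t drawn IID from the marginal pi."""
    x = np.empty(n + burn, dtype=np.int64)
    x[0] = draw_pi()              # X_0 is a draw from pi
    for t in range(1, n + burn):
        if rng.random() < p:      # A_t = 1: repeat the previous value
            x[t] = x[t - 1]
        else:                     # A_t = 0: fresh draw from pi
            x[t] = draw_pi()
    return x[burn:]

def sample_acf(x, h):
    """Sample autocorrelation at lag h."""
    xc = x - x.mean()
    return np.dot(xc[:-h], xc[h:]) / np.dot(xc, xc)

# Poisson(3) marginal is just one illustrative choice of pi.
x = simulate_dar1(20000, p=0.5, draw_pi=lambda: rng.poisson(3.0))

# corr(X_t, X_{t+h}) = p**h for every marginal pi: geometric decay again.
print([round(sample_acf(x, h), 2) for h in (1, 2, 3)])
```

The marginal distribution and the dependence structure are decoupled here, which is the appeal of DARMA models, but the dependence itself is always geometrically decaying.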
ARMA methods will not produce long memory series, even in noncount settings. The
classical ARMA(p, q) difference equation is
X_t − φ_1 X_{t−1} − ⋯ − φ_p X_{t−p} = Z_t + θ_1 Z_{t−1} + ⋯ + θ_q Z_{t−q},  t = 0, ±1, ...,   (21.2)