5
Renewal-Based Count Time Series
Robert Lund and James Livsey
CONTENTS
5.1 Introduction...................................................................................101
5.2 Models for Classical Count Distributions.................................................103
5.2.1 Binomial Marginals...................................................................103
5.2.2 Poisson Marginals....................................................................105
5.2.3 Geometric Marginals.................................................................108
5.2.4 Generalities..........................................................................110
5.3 Multivariate Series............................................................................113
5.4 Statistical Inference...........................................................................116
5.5 Covariates and Periodicities.................................................................118
5.6 Concluding Comments.......................................................................119
References............................................................................................119
5.1 Introduction
Some classical methods of generating stationary count time series do not yield a particularly flexible suite of autocovariance structures. For example, integer autoregressive moving-average (INARMA) (Steutel and Van Harn, 1979; McKenzie, 1985, 1986, 1988; Al-Osh and Alzaid, 1988) and discrete autoregressive moving-average (DARMA) methods (Jacobs and Lewis, 1978a,b) cannot produce negatively correlated series at any lags. This is because mixing ratios or thinning probabilities must always lie in (0, 1). Also, INARMA and DARMA models cannot generate series with long memory autocovariances.
This chapter takes a different approach to modeling stationary integer count time series:
the renewal paradigm. A renewal sequence is used as a natural model for a stationary
autocorrelated binary sequence. Independent copies of the renewal sequence can then be
superimposed in various ways to build the desired marginal distribution. Blight (1989) and
Cui and Lund (2009) are references for the general idea.
A renewal process is built from a nonnegative integer-valued lifetime L_0 and a sequence of independent and identically distributed (IID) lifetimes {L_i}_{i=1}^∞. Here, L_i ∈ {1, 2, ...} has an aperiodic support (the support set of L is not a multiple of some integer larger than unity) and a finite mean μ = E[L_1] (notice that the distribution of L_0 may be different than those of the other L_i's). Below, L will denote a random draw whose distribution is equivalent to that of any L_i for i ≥ 1. Define the random walk

S_n = L_0 + L_1 + ··· + L_n,    n = 0, 1, ....
A renewal is said to occur at time t if S_n = t for some n ≥ 0. Let X_t be unity if a renewal occurs at time t and zero otherwise. This is the classical discrete-time renewal sequence popularized in Smith (1958), Feller (1968), and Ross (1996) (among others). In fact, {X_t} is a correlated binary sequence. Copies of {X_t} will be used to build our count series shortly. For notation, let u_t be the probability of a renewal at time t in a nondelayed renewal process (a nondelayed process has L_0 = 0). The probabilities of renewal can be recursively calculated via u_0 = 1 and

u_t = P[L = t] + Σ_{ℓ=1}^{t−1} P[L = ℓ] u_{t−ℓ},    t = 1, 2, ...,    (5.1)

and E[X_t] = u_t. The elementary renewal theorem (Smith, 1958) states that lim_{t→∞} u_t = μ^{−1} when L has a finite mean E[L] = μ and is aperiodic, both of which we henceforth assume.
While u_t → E[L]^{−1} as t → ∞, {X_t} is not weakly stationary unless the distribution of L_0 is strategically selected. Specifically, if L_0 has the first tail distribution derived from L, viz.,

P(L_0 = k) = P(L > k)/μ,    k = 0, 1, ...,    (5.2)

then {X_t} is covariance stationary. Under this initial distribution, E[X_t] ≡ μ^{−1} (Ross, 1996) and the autocovariance function of {X_t}, denoted by γ_X(h) = Cov(X_t, X_{t+h}), is

γ_X(h) = P[X_t = 1, X_{t+h} = 1] − P[X_t = 1]P[X_{t+h} = 1] = μ^{−1}(u_h − μ^{−1})    (5.3)

for h = 0, 1, .... This calculation uses P[X_{t+h} = 1, X_t = 1] = P[X_{t+h} = 1 | X_t = 1] P[X_t = 1] = u_h μ^{−1}.
As an example, suppose that L is geometric with P(L = k) = p(1 − p)^{k−1} for k = 1, 2, ... and some p ∈ (0, 1). Then L is aperiodic and E[L] = 1/p. The initial lifetime L_0 in (5.2) that makes this binary sequence stationary has distribution P(L_0 = k) = p(1 − p)^k for k = 0, 1, ... (this is not geometric as the support set of L_0 contains zero). In the nondelayed case, (5.1) can be solved to get u_h = p for every h ≥ 1. When L_0 has distribution as in (5.2), u_h = p for every h ≥ 0. Obviously, u_h → 1/E[L] as h → ∞. While explicit expressions for u_h are seldom available, this case provides an example of a lifetime where u_h quickly (and exactly) achieves its limit.
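A quick standalone numerical check of this example (a sketch under the stated geometric lifetime; the variable names are illustrative and the pmf is truncated only for computational convenience) shows the recursion (5.1) returning u_h = p for every h ≥ 1.

```python
import numpy as np

p, t_max = 0.3, 20
pmf = np.zeros(t_max + 1)
pmf[1:] = p * (1 - p) ** (np.arange(1, t_max + 1) - 1)      # P[L = k] = p(1-p)^{k-1}
u = np.zeros(t_max + 1)
u[0] = 1.0
for t in range(1, t_max + 1):
    u[t] = sum(pmf[l] * u[t - l] for l in range(1, t + 1))  # recursion (5.1)
print(u[1:6])   # each entry equals p = 0.3 (up to floating-point rounding)
```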
The construction described above connects count time series with renewal processes.
First, stationary time series knowledge can be applied to discrete renewal theory. As an
example, stationary series have a well-developed spectral theory.
Specically, any station-
ary autocovariance γ(·) admits the Fourier representation γ(h) =
(π,π]
e
ihλ
dF(λ) for some
nondecreasing right continuous function F satisfying F(π) =0and F(π) =γ(0)
(Brockwell and Davis, 1991). A spectral representation for the renewal probabilities {u
t
}
t=0
immediately follows; specically,
γ
X
(h) =
1
u
h
1
=
e
ihλ
dF(λ) (5.4)
μ μ
(π,π]
holds for some “CDF like” F(·) supported over (π, π]. Solving this yields a spectral rep-
resentation for the renewal probabilities: u
h
= μ
1
+ μ
(π,π]
e
ihλ
dF(λ). Here, F may not
103 Renewal-Based Count Time Series
be a proper cumulative distribution function (CDF) (F(π) = 1), but rather has total mass
μ
1
(1 μ
1
). While a different spectral representation for u
h
is known from Daley and
Vere-Jones (2007) via other methods, the representation here follows from the time series
result.
Second, and perhaps more importantly, discrete-time renewal knowledge (which is vast) can now be used in count time series problems. For one example of what can be inferred, it is known (Kendall, 1959; Lund and Tweedie, 1996) that if L has a finite geometric moment (E[r^L] < ∞ for some r > 1), then the renewal sequence geometrically decays to its limit: |u_h − μ^{−1}| ≤ κ c^{−h} for h = 0, 1, ... for some finite κ and c > 1 (the values of c and r are not necessarily the same). Hence, for lifetimes L with some finite geometric moment, {X_t} must have short memory in that Σ_{h=0}^∞ |γ(h)| < ∞. Long memory count time series are explored further in Lund et al. (2015; Chapter 21 in this volume).
5.2 Models for Classical Count Distributions
To demonstrate the exibility of renewal-based methods, we now build stationary time
series models with classical count marginal distributions, including binomial, Poisson, and
geometric. For this, let {X
t,1
}, {X
t,2
}, ... denote independent copies of the binary discrete-
time renewal process {X
t
}.
5.2.1 Binomial Marginals
Constructing a count time series model with binomial marginal distributions with a fixed number of trials M is easy—just add independent Bernoulli renewal processes:

Y_t = Σ_{i=1}^{M} X_{t,i}    (5.5)

as in Blight (1989). The lag h autocovariance is

γ_Y(h) = Cov(Y_t, Y_{t+h}) = (M/μ)(u_h − 1/μ),    h = 0, 1, ...,    (5.6)

and autocorrelations of {Y_t} obey

ρ_Y(h) = Corr(Y_t, Y_{t+h}) = (u_h − μ^{−1})/(1 − μ^{−1}),    h = 0, 1, ....    (5.7)
The autocovariances produced by this scheme can be positive or negative. In fact, if L is such that u_1 = P(L = 1) < μ^{−1}, then (5.6) implies that ρ_Y(1) < 0. Let ε > 0 be a small probability and consider a lifetime L that takes values 1, 2, and 3 with probabilities P[L = 1] = P[L = 3] = ε and P[L = 2] = 1 − 2ε. Then E[L] = 2, u_1 = ε, and Corr(Y_t, Y_{t+1}) = 2ε − 1. Letting ε ↓ 0 shows existence of a joint random pair (Y_t, Y_{t+1}) with binomial marginal distributions and a negative correlation arbitrarily close to −1. This is but one choice for L; other choices yield different autocorrelations. Positively autocorrelated {Y_t} are also easily constructed. In fact, any lifetime L generating a renewal sequence with u_h ≥ μ^{−1} for all h = 0, 1, ... (these are plentiful) will produce stationary series with nonnegative autocorrelations.

FIGURE 5.1
A realization of length 1000 of a stationary time series with binomial marginal distributions with M = 5 trials and success probability 1/2. Sample autocorrelations are also shown with pointwise 95% critical bounds for white noise. Notice that negative correlations can be achieved.
Figure 5.1 shows a realization of length 1000 of a binomial count series with M = 5 and L supported on {1, 2, 3} with probabilities P[L = 1] = P[L = 3] = 1/10 and P[L = 2] = 8/10. This process has a negative lag one correlation of ρ_Y(1) = −4/5. The sample autocorrelations of this realization are graphed along with 95% critical bounds under the null hypothesis of IID noise; the plot reveals an oscillating structure, with some negative autocorrelations.
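A simulation sketch of this example follows (it is not code from the chapter; the function and variable names are assumptions). It generates M = 5 independent stationary renewal indicator sequences, each started with the delay distribution (5.2), and adds them as in (5.5).

```python
import numpy as np

rng = np.random.default_rng(0)
support, probs = np.array([1, 2, 3]), np.array([0.1, 0.8, 0.1])
mu = float(np.sum(support * probs))                      # E[L] = 2

def stationary_renewal_indicators(n):
    """One stationary binary renewal sequence X_0, ..., X_{n-1}."""
    ks = np.arange(support.max())                        # possible values of L_0: 0, 1, 2
    tail = np.array([probs[support > k].sum() for k in ks])
    x = np.zeros(n, dtype=int)
    t = rng.choice(ks, p=tail / mu)                      # delay L_0 drawn from (5.2)
    while t < n:
        x[t] = 1
        t += rng.choice(support, p=probs)                # IID lifetimes L_1, L_2, ...
    return x

M, n = 5, 1000
Y = sum(stationary_renewal_indicators(n) for _ in range(M))   # construction (5.5)
print(Y.mean())                                   # close to M/mu = 2.5
print(np.corrcoef(Y[:-1], Y[1:])[0, 1])           # close to rho_Y(1) = -4/5
```

With a realization of this length, the sample mean and lag-one sample autocorrelation should land near 2.5 and −0.8, respectively.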
Some properties of this process are worth discussing. First, the distributional relationship

Y_{t+h} =_D u_h ∘ Y_t + [(1 − u_h)/(μ − 1)] ∘ (M − Y_t),    h = 1, 2, ...    (5.8)

holds, where =_D denotes equality in distribution. Here, ∘ denotes binomial thinning, which is defined to operate on a nonnegative integer-valued random variable X via p ∘ X = Σ_{i=1}^{X} B_i, where B_1, B_2, ... are independent Bernoulli trials, independent of X, with the same success probability p. Equation (5.8) is justified as follows. At time t, Y_t of the M Bernoulli processes in the summation in (5.5) are unity and M − Y_t are zero. The distribution of Y_{t+h} is obtained by adding two components: (1) all +1 "particles" at time t that remain +1 at time t + h (this happens with probability P[X_{t+h} = 1 | X_t = 1] = u_h) and (2) all zero particles that turn to +1 at time t + h (this happens with probability (1 − u_h)/(μ − 1)). This relationship is useful for inference procedures. Cui and Lund (2010) discuss additional properties of this process.
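To make the thinning notation concrete, here is a minimal sketch (names and numbers are illustrative assumptions, not code from the chapter) of one conditional draw from (5.8) for the lifetime of Figure 5.1.

```python
import numpy as np

rng = np.random.default_rng(1)

def thin(p, x):
    """Binomial thinning: p ∘ x is a sum of x independent Bernoulli(p) trials."""
    return rng.binomial(n=x, p=p)

# One step of (5.8) for the Figure 5.1 lifetime (u_1 = 1/10, mu = 2, M = 5):
# given Y_t = y, draw Y_{t+1} as u_1 ∘ y + [(1 - u_1)/(mu - 1)] ∘ (M - y).
y, M, u1, mu = 3, 5, 0.1, 2.0
y_next = thin(u1, y) + thin((1 - u1) / (mu - 1), M - y)
print(y_next)   # a draw from the conditional distribution of Y_{t+1} given Y_t = 3
```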
5.2.2 Poisson Marginals
To construct stationary series with Poisson marginal distributions, suppose now that M in (5.5) is random, independent of {X_{t,i}} for all i ≥ 1, and has a Poisson distribution with mean λ. It is easy to check that {Y_t} is stationary in time t and has a Poisson marginal distribution with mean E[Y_t] ≡ λ/μ. The covariance function of {Y_t} is

γ_Y(h) = Cov(Y_t, Y_{t+h}) = λ/μ² + (λ/μ)(u_h − μ^{−1}),    h = 1, 2, ...,    (5.9)

with γ_Y(0) = λ/μ. One may view the covariance in (5.9) as unrealistic since the term λ/μ² appears in each and every lag (hence, this series has long memory). A simple modification remedies this—just examine
Y_t = Σ_{i=1}^{M_t} X_{t,i},    (5.10)

where {M_t}_{t=1}^∞ is an IID sequence of Poisson random variables each having mean λ. With such {M_t}, {Y_t} is stationary with Poisson marginal distributions and E[Y_t] ≡ λ/μ. A calculation shows that {Y_t} has γ_Y(0) = λ/μ and

γ_Y(h) = (C(λ)/μ)(u_h − 1/μ),    h = 1, 2, ...,    (5.11)

where C(λ) = E[min(M_t, M_{t+h})] does not depend on h ≥ 1. The form of C(λ) is identified in the following result.
Lemma 5.1  If M_1 and M_2 are independent and identically distributed Poisson random variables with mean λ, then

C(λ) = E[min{M_1, M_2}] = λ[1 − e^{−2λ}{I_0(2λ) + I_1(2λ)}],

where I_j(·) is the modified Bessel function

I_j(x) = Σ_{n=0}^∞ (x/2)^{2n+j} / (n!(n + j)!),    j = 0, 1.
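As a quick numerical sanity check of Lemma 5.1, the sketch below (assuming SciPy's modified Bessel function scipy.special.iv is available; not part of the chapter) compares the closed form with a Monte Carlo estimate of E[min(M_1, M_2)].

```python
import numpy as np
from scipy.special import iv          # modified Bessel function of the first kind, I_j

lam = 3.0
closed_form = lam * (1.0 - np.exp(-2.0 * lam) * (iv(0, 2.0 * lam) + iv(1, 2.0 * lam)))

rng = np.random.default_rng(0)
m1, m2 = rng.poisson(lam, 200_000), rng.poisson(lam, 200_000)
monte_carlo = np.minimum(m1, m2).mean()
print(closed_form, monte_carlo)       # the two values should agree to a couple of decimals
```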