Dynamic Models for Time Series of Counts with a Marketing Application
Let $\beta_{0,t,h} = (\beta_{0,t,1,h}, \ldots, \beta_{0,t,q,h})$. In the system equation, we assume a random walk evolution of the state parameter vector, so that for $h = 1, \ldots, H$,
$$\beta_{0,t,h} = G\beta_{0,t-1,h} + w_{t,h},$$
where $G$ is an identity matrix, and $w_{t,h} \sim N_p(0, W_h)$.
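Because $G$ is the identity, the system equation is a $p$-dimensional Gaussian random walk. A minimal simulation sketch of that state path (function and variable names here are illustrative, not from the chapter):

```python
import numpy as np

def simulate_state_path(T, W_h, beta_init, rng=None):
    """Simulate beta_{0,t,h} = beta_{0,t-1,h} + w_{t,h}, with w_{t,h} ~ N_p(0, W_h).

    Since the evolution matrix G is the identity, the state vector follows
    a p-dimensional random walk driven by the innovation covariance W_h.
    """
    rng = np.random.default_rng(rng)
    p = len(beta_init)
    path = np.empty((T, p))
    path[0] = beta_init
    for t in range(1, T):
        w_t = rng.multivariate_normal(np.zeros(p), W_h)  # system innovation
        path[t] = path[t - 1] + w_t                      # G = I_p
    return path

# Example: a 3-dimensional state simulated over 50 time points
path = simulate_state_path(50, 0.1 * np.eye(3), np.zeros(3), rng=0)
```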
20.5.2 Bayesian Inference for the MDFM Model
We write the mixture model in terms of missing (or incomplete) data; see Dempster et al.
(1977) and Diebolt and Robert (1994). For $i = 1, \ldots, N$ and $t = 1, \ldots, T$, recall that
$$p(y_{i,t} \mid \lambda_{i,t,h}) = \sum_{h=1}^{H} \pi_h\, MP_3(y_{i,t} \mid \lambda_{i,t,h}).$$
Let $z_{i,t} = (z_{i,t,1}, \ldots, z_{i,t,H})$ be an $H$-dimensional vector indicating the component to which $y_{i,t}$ belongs, so that $z_{i,t,h} \in \{0, 1\}$ and $\sum_{h=1}^{H} z_{i,t,h} = 1$. The pmf of the complete data $(y_{i,t}, z_{i,t})$ is
$$f(y_{i,t}, z_{i,t} \mid \lambda_{i,t,h}) = \prod_{h=1}^{H} \pi_h^{z_{i,t,h}} \left[ p(y_{i,t} \mid \lambda_{i,t,h}) \right]^{z_{i,t,h}}. \tag{20.13}$$
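Given the complete-data formulation, the latent indicator $z_{i,t}$ has a multinomial conditional with probabilities proportional to $\pi_h\, p(y_{i,t} \mid \lambda_{i,t,h})$. A hedged sketch of that draw, substituting a product of independent Poisson pmfs for the $MP_3$ density (all names and values below are illustrative):

```python
import numpy as np
from scipy.stats import poisson

def sample_indicator(y_it, lam, pi, rng=None):
    """Draw the latent membership vector z_{i,t} for one observation.

    P(z_{i,t,h} = 1 | y_{i,t}) is proportional to pi_h * p(y_{i,t} | lambda_h).
    Here p(. | lambda_h) is a product of independent Poisson pmfs, a stand-in
    for the multivariate Poisson MP_3 density used in the chapter.
    """
    rng = np.random.default_rng(rng)
    # log-likelihood of y_{i,t} under each of the H components
    loglik = np.array([poisson.logpmf(y_it, lam_h).sum() for lam_h in lam])
    logpost = np.log(pi) + loglik
    probs = np.exp(logpost - logpost.max())   # stabilized before normalizing
    probs /= probs.sum()
    h = rng.choice(len(pi), p=probs)          # sampled component index
    z = np.zeros(len(pi), dtype=int)
    z[h] = 1                                  # one-hot indicator, sums to 1
    return z

z = sample_indicator(np.array([4, 1, 0]),
                     lam=[np.array([5.0, 1.0, 0.5]), np.array([0.5, 0.5, 0.5])],
                     pi=np.array([0.75, 0.25]), rng=1)
```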
For $h = 1, \ldots, H$, we make the following prior assumptions. We assume that the $p$-dimensional initial state parameter vector $\beta_{0,1,h} \sim MVN(a_{0,h}, R_{0,h})$, the $m$-dimensional subject-specific coefficient $\beta_{1,i,h} \sim MVN(a_{1,i,h}, R_{1,i,h})$, and the $m$-dimensional parameter $\beta_{2,i,h} \sim MVN(a_{2,i,h}, R_{2,i,h})$. We assume inverse Wishart priors, $IW(n_h, S_h)$, for the variance terms $W_h$, and assume the conjugate Dirichlet$(d_1, \ldots, d_H)$ prior for $\pi = (\pi_1, \ldots, \pi_H)$, which simplifies to the Beta$(d_1, d_2)$ prior for $\pi_1$ when $H = 2$.
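The prior specification can be sketched by drawing once from each prior family; the hyperparameter values below are placeholders, not the chapter's choices:

```python
import numpy as np
from scipy.stats import invwishart, dirichlet, multivariate_normal

p, H = 3, 2  # illustrative dimensions

# Initial state prior: beta_{0,1,h} ~ MVN(a_{0,h}, R_{0,h})
beta_01 = multivariate_normal.rvs(mean=np.zeros(p), cov=np.eye(p), random_state=1)

# State innovation variance prior: W_h ~ IW(n_h, S_h); requires n_h >= p
W_h = invwishart.rvs(df=p + 2, scale=np.eye(p), random_state=2)

# Mixing weights: pi ~ Dirichlet(d_1, ..., d_H), i.e. Beta(d_1, d_2) when H = 2
pi = dirichlet.rvs(alpha=np.ones(H), random_state=3)[0]
```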
The joint posterior density of the unknown parameters is proportional to the product of the complete data likelihood and the priors discussed earlier. We collect the unknown parameters as $(\beta_0, \beta_1, \beta_2, W, \pi)$. The Gibbs sampler enables posterior inference: complete conditionals with known forms are sampled by direct draws, while the remaining parameters are sampled with algorithms such as Metropolis–Hastings. The complete conditional densities are proportional to the joint posterior and are shown in the Appendix, along with details of the sampling algorithms.
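One complete conditional that is standard for such mixtures is that of the mixing weights: given the indicators, the conjugate update is $\pi \mid z \sim$ Dirichlet$(d_1 + n_1, \ldots, d_H + n_H)$, where $n_h$ counts the observations currently assigned to component $h$. A sketch of that direct draw within a Gibbs sweep (names illustrative):

```python
import numpy as np

def update_pi(z, d, rng=None):
    """Direct draw of the mixing proportions within one Gibbs sweep.

    z : (n_obs, H) one-hot indicator matrix over all (i, t) pairs
    d : (H,) Dirichlet prior hyperparameters d_1, ..., d_H
    The conjugate full conditional is Dirichlet(d_h + n_h), where
    n_h is the number of observations assigned to component h.
    """
    rng = np.random.default_rng(rng)
    n = z.sum(axis=0)              # component occupation counts n_1, ..., n_H
    return rng.dirichlet(d + n)

# Example: 1000 observations, H = 2, roughly a 3:1 assignment
z = np.zeros((1000, 2), dtype=int)
z[:750, 0] = 1
z[750:, 1] = 1
pi_draw = update_pi(z, d=np.array([1.0, 1.0]), rng=0)
```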
We fit the MDFM model with $H = 2$. The posterior mean and standard deviation of the mixing proportion $\pi_1$ are, respectively, 0.758 and 0.008, and the 95% highest posterior density (HPD) interval is (0.742, 0.770). Table 20.1 shows the posterior summary of the state variances $W_h$ for $h = 1, 2$.
We calculate the independent Poisson means $\lambda_{i,t,h,1}, \ldots, \lambda_{i,t,h,q}$ for $h = 1, 2$ and $q = 6$. Through the unconditional expectation formula for the finite mixture of multivariate Poisson distributions, we obtain the predicted means of the own drug and the competing
drugs. We then make Poisson draws with the corresponding predicted means. Figure 20.1
drugs. We then make Poisson draws with the corresponding predicted means. Figure 20.1
shows time series plots of the observed prescription counts for the three drugs from four
randomly selected physicians, and Figure 20.2 shows time series plots of the correspond-
ing predicted means. Figure 20.3 shows one-month-ahead predictions for all physicians.
The absolute differences between observed counts and predicted counts are 1.869, 1.980,
and 2.996, respectively, for the own, challenger, and leader drugs.
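The prediction step rests on the mixture-expectation identity $E[y] = \sum_h \pi_h\, E[y \mid \text{component } h]$. The sketch below applies it with component mean vectors used directly, a simplification of the full $MP_3$ moment formula; the component means are illustrative, while the mixing proportion uses the posterior mean 0.758 reported above:

```python
import numpy as np

def predicted_mean(pi, lam):
    """Unconditional mean of a finite mixture: E[y] = sum_h pi_h * E[y | h].

    pi  : (H,) mixing proportions
    lam : (H, q) component mean vectors for the q count series
    """
    return pi @ lam

rng = np.random.default_rng(0)
pi = np.array([0.758, 0.242])        # posterior mean of pi_1 and its complement
lam = np.array([[4.0, 2.0, 6.0],     # illustrative component mean vectors
                [0.5, 0.3, 1.0]])
mu = predicted_mean(pi, lam)         # mixture predicted means
y_pred = rng.poisson(mu)             # Poisson draws at the predicted means
```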