5.2 Fill, Machida, Murdoch, and Rosenthal’s method
So the two problems with basic CFTP are that it is read twice and noninterruptible.
ROCFTP is read once but still noninterruptible.
Fill [35] was the first to introduce a variant of CFTP that was interruptible, but sadly is still read twice. Møller and Schladitz [103] extended his method to antimonotone chains, and then in [37], Fill, Machida, Murdoch, and Rosenthal generalized Fill's algorithm for general update functions. This last algorithm will be referred to here as FMMR.
To use ROCFTP, first you need an S block, followed by a number of F blocks, then followed by a second S block. The stationary state is the state at the beginning of the second S block. An S block is an update such that φ(Ω,U) = {x} for some state x in Ω.
Suppose we first fix an x ∈ Ω. Now call a block an S_x block if φ(Ω,U) = {x}. Otherwise call the block an F block. Then the same argument for why the output of ROCFTP is stationary means that if we generate an S_x block, followed by some number of F blocks, then a second S_x block, the state at the beginning of the second S_x block will be stationary.
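The distinction between S_x blocks and F blocks can be sketched in code. This is a minimal illustration, not from the text: `block_image`, `classify_block`, and the toy chain below are all hypothetical names, and the update function shown is just the biased walk used later in this section.

```python
# Sketch: classify a block of U-values as an S_x block or an F block,
# assuming a finite state space and an update function phi(state, u).
# All names here are illustrative, not from the text.

def block_image(phi, states, us):
    """Apply the block of updates us to every starting state; return the set of final states."""
    current = set(states)
    for u in us:
        current = {phi(x, u) for x in current}
    return current

def classify_block(phi, states, us, x):
    """An S_x block maps the whole state space to {x}; anything else is an F block."""
    return "S_x" if block_image(phi, states, us) == {x} else "F"

# Toy chain on {0, 1, 2}: move down with prob 2/3, up with prob 1/3, reflecting at the ends.
def phi(x, u):
    return max(x - 1, 0) if u <= 2/3 else min(x + 1, 2)

print(classify_block(phi, {0, 1, 2}, [0.1, 0.2, 0.3], 0))  # three down-moves coalesce: S_x
print(classify_block(phi, {0, 1, 2}, [0.9], 0))            # one up-move does not: F
```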
Suppose that we could just generate an S_x block, and it was possible to run the chain backwards in time, to figure out what the state y at the beginning of the block was given that it ended at state x. Then y ∼ π.
Here is the central FMMR idea: start with state X_t = x. Run X_t backwards in time to get X_{t−1}, X_{t−2}, ..., X_0. Now run the chain forward in time conditioned on X_0, X_1, ..., X_t. If φ(Ω,U) = {x}, then accept this as a draw from an S_x block. Otherwise reject and start over.
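The accept/reject loop above can be sketched as follows. This is a hedged sketch, not the authors' implementation: the helper names `reverse_step` (a draw from the time-reversed kernel) and `draw_u_given` (a draw of U conditioned on the observed transition) are assumptions standing in for model-specific code, and the demo chain is the biased walk on {0,1,2} used as this section's running example (that chain is reversible, so its reversed kernel is just the forward kernel).

```python
import random

# Sketch of the FMMR accept/reject loop. Assumed helpers (hypothetical names):
#   phi(x, u)          -- the chain's update function,
#   reverse_step(y)    -- draws X_{s-1} given X_s = y from the time-reversed chain,
#   draw_u_given(x, y) -- draws U conditioned on phi(x, U) = y.
def fmmr(states, x, t, phi, reverse_step, draw_u_given):
    while True:  # each attempt is an independent AR trial, so interruption is harmless
        # Run backwards in time from X_t = x down to X_0.
        path = [x]
        for _ in range(t):
            path.append(reverse_step(path[-1]))
        path.reverse()  # path[s] now holds X_s
        # Run the chain forward, drawing U's consistent with the realized path.
        us = [draw_u_given(path[s], path[s + 1]) for s in range(t)]
        # Accept if the U's form an S_x block: phi maps all of Omega to {x}.
        image = set(states)
        for u in us:
            image = {phi(y, u) for y in image}
        if image == {x}:
            return path[0]  # X_0 is then an exact draw from pi

# Demo on the biased walk on {0,1,2}: up with prob 1/3, down with prob 2/3.
def phi(x, u):
    return x + int(x < 2 and u > 2/3) - int(x > 0 and u <= 2/3)

def reverse_step(y):
    return phi(y, random.random())  # reversible chain: reversed kernel = forward kernel

def draw_u_given(x, y):
    # phi(x, 0.5) is where the "U <= 2/3" branch sends x, so compare against it.
    return random.uniform(0, 2/3) if y == phi(x, 0.5) else random.uniform(2/3, 1)

random.seed(42)
draws = [fmmr({0, 1, 2}, 0, 6, phi, reverse_step, draw_u_given) for _ in range(2000)]
print(draws.count(0) / 2000)  # should be near pi({0}) = 4/7
```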
This is similar to CFTP, but in AR form! So unlike CFTP it is interruptible. This makes FMMR the first widely applicable interruptible perfect simulation algorithm. (It was not the last, however; see Chapters 8 and 9.)
To see what is meant by running the Markov chain backwards in time, it helps to have an example. Consider the biased random walk on {0,1,2}, with update function

φ(x,U) = x + 1(x < 2, U > 2/3) − 1(x > 0, U ≤ 2/3).

Hence the chain adds one to the state with probability 1/3, and subtracts one from the state with probability 2/3 (unless the move would take the state out of {0,1,2}).
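The stationary distribution claimed just below can be verified exactly by checking πP = π. This is an illustrative check, not from the text; the transition matrix is read directly off the update function above.

```python
from fractions import Fraction as F

# Exact check that pi = (4/7, 2/7, 1/7) is stationary for the biased walk:
# up with prob 1/3, down with prob 2/3, reflecting at 0 and 2.
P = {
    0: {0: F(2, 3), 1: F(1, 3)},
    1: {0: F(2, 3), 2: F(1, 3)},
    2: {1: F(2, 3), 2: F(1, 3)},
}
pi = {0: F(4, 7), 1: F(2, 7), 2: F(1, 7)}

# Compute (pi P)(y) = sum_x pi(x) P(x, y) for each state y.
piP = {y: sum(pi[x] * P[x].get(y, F(0)) for x in P) for y in P}
print(piP == pi)  # True: pi is indeed the stationary distribution
```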
The stationary distribution of this chain is π({0}) = 4/7, π({1}) = 2/7, π({2}) = 1/7. Suppose X_4 ∼ π, X_5 ∼ π, and the goal is to figure out how to simulate [X_4 | X_5].
If X_5 = 1, then X_4 is either 0 or 2. By Bayes' rule,

P(X_4 = 0 | X_5 = 1) = P(X_4 = 0)P(X_5 = 1 | X_4 = 0)/P(X_5 = 1)
= π({0})(1/3)/π({1}) = (4/7)(1/3)/(2/7) = 2/3.
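The Bayes' rule computation, and the symmetry of forward and backward transition probabilities noted next, can both be checked exactly. This is an illustrative check with exact rational arithmetic, not from the text.

```python
from fractions import Fraction as F

# Stationary distribution and forward transition probabilities of the biased walk.
pi = {0: F(4, 7), 1: F(2, 7), 2: F(1, 7)}
P = {
    0: {0: F(2, 3), 1: F(1, 3)},
    1: {0: F(2, 3), 2: F(1, 3)},
    2: {1: F(2, 3), 2: F(1, 3)},
}

def backward(i, j):
    """P(X_4 = i | X_5 = j), computed via Bayes' rule as in the text."""
    return pi[i] * P[i].get(j, F(0)) / pi[j]

print(backward(0, 1))  # 2/3, matching the computation above

# Reversibility: backward transition probs equal the forward ones,
# i.e. P(X_4 = i | X_5 = j) = P(X_5 = i | X_4 = j) for every pair (i, j).
assert all(backward(i, j) == P[j].get(i, F(0)) for i in P for j in P)
```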
In fact, it turns out that P(X_4 = i | X_5 = j) = P(X_5 = i | X_4 = j): the transition probabilities for the Markov chain look exactly the same whether the chain is being run forward or backward in time. In general, this situation holds when π(dx)P(X_{t+1} ∈