continuous derivatives,
\[ f(B_T) - f(B_0) = \int_0^T f'(B_t)\,dB_t + \frac{1}{2}\int_0^T f''(B_t)\,dt. \tag{10.8} \]
Apply this to the function $A(x) = \int_0^x \alpha(s)\,ds$. Then $A'(x) = \alpha(x)$, $A''(x) = \alpha'(x)$, and Itô's Lemma gives
\[ \int_0^T \alpha(B_t)\,dB_t = A(B_T) - A(B_0) - \frac{1}{2}\int_0^T \alpha'(B_t)\,dt, \tag{10.9} \]
which means (using that $B_0 = x$ under $W^x$)
\[ \frac{dQ}{dW^x}(\{B_t\}) = \exp\Big( A(B_T) - A(x) - \frac{1}{2}\int_0^T \big[\alpha'(B_t) + \alpha^2(B_t)\big]\,dt \Big). \tag{10.10} \]
Under $W^x$, the distribution of $B_T$ is normal with mean $x$ and variance $T$. Say for simplicity that $x = 0$, and suppose that a user wanted to generate a Brownian motion $A$ with $A_0 = x$ but with $A_T$ uniform over $[-1,1]$.
Intuitively, acceptance/rejection could be used to accomplish this as follows. Generate a standard Brownian motion $B_t$ over $[0,T]$ from $W^0$, then accept as a draw from the target distribution with probability
\[ a(B_T) = \frac{\exp(-1/(2T))\,\mathbb{1}(B_T \in [-1,1])}{\exp(-B_T^2/(2T))}. \tag{10.11} \]
Then $A_T$ has the correct density, and given that this is an acceptance/rejection method whose chance of accepting is $f_{A_T}(B_T)/[c\varphi(B_T)]$ for a constant $c$, that means $(f_{A_T}/\varphi)(B_T)$ is also the density of the measure of $\{A_t\}$ with respect to the measure of $\{B_t\}$.
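To make (10.11) concrete, here is a minimal Python sketch of the endpoint draw (the function name is mine, not from [11]); it proposes $B_T \sim N(0,T)$ and accepts with the probability in (10.11), so the returned value is uniform over $[-1,1]$.

    import math, random

    def uniform_endpoint_via_ar(T):
        # Propose B_T ~ N(0, T); accept with probability
        # exp(-1/(2T)) 1(B_T in [-1,1]) / exp(-B_T^2/(2T)), as in (10.11).
        while True:
            b = random.gauss(0.0, math.sqrt(T))
            if abs(b) <= 1.0 and random.random() < math.exp((b * b - 1.0) / (2.0 * T)):
                return b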
More generally, the following is true.
Lemma 10.3 (Proposition 1 of [11]). Let $M = \{M_t\}_{t\in[0,T]}$ and $N = \{N_t\}_{t\in[0,T]}$ be two stochastic processes on $C$ with corresponding probability measures $\mathcal{M}$ and $\mathcal{N}$. Let $f_M$ and $f_N$ be the densities of $M_T$ and $N_T$ respectively, and both densities are supported on $\mathbb{R}$. When it holds that $[M \mid M_T = \rho] \sim [N \mid N_T = \rho]$ for all $\rho \in \mathbb{R}$, then
\[ \frac{d\mathcal{M}}{d\mathcal{N}}(\{N_t\}) = \frac{f_M}{f_N}(N_T). \tag{10.12} \]
The rigorous proof is in Appendix 1 of [11].
Recall the pinned Brownian motion $\{R_t\}$ from earlier. Then to draw $R_t$ so that $R_T$ has a desired distribution, first draw $R_T$ from the distribution, then fill in the rest of the points from $R_0 = x$ to $R_T$ using Lemma 10.1.
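A minimal Python sketch of that fill-in step, assuming Lemma 10.1 is the usual Brownian bridge interpolation (given $R_s = a$ and $R_u = c$ with $s < t < u$, the value $R_t$ is normal with the linearly interpolated mean and variance $(t-s)(u-t)/(u-s)$):

    import math, random

    def fill_bridge_point(s, a, u, c, t):
        # Given R_s = a and R_u = c with s < t < u, draw R_t: normal with
        # the interpolated mean and variance (t - s)(u - t)/(u - s).
        mean = a + ((t - s) / (u - s)) * (c - a)
        var = (t - s) * (u - t) / (u - s)
        return random.gauss(mean, math.sqrt(var))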
This lemma can be used to simplify (10.10) as follows. Let $H$ have unnormalized density $f_H(u) \propto \exp(A(u) - (u-x)^2/(2T))$. Assume here that $A(u) = \int_x^u \alpha(s)\,ds$ is a function that grows slowly enough that $f_H(u)$ can be normalized. Then for $\{R_t\}$ a draw from Brownian motion conditioned to have $R_T \sim H$,
\[ \frac{d\mathcal{L}\{R_t\}}{dW^x}(\{B_t\}) \propto \frac{\exp(A(B_T) - (B_T - x)^2/(2T))}{\exp(-(B_T - x)^2/(2T))} = \exp(A(B_T)). \tag{10.13} \]
Now Radon-Nikodym derivatives behave much like regular derivatives, including obeying the chain and inverse rules. So
\[ \frac{dQ}{d\mathcal{L}\{R_t\}}(\{R_t\}) = \frac{dQ}{dW^x}(\{R_t\}) \Big/ \frac{d\mathcal{L}\{R_t\}}{dW^x}(\{R_t\}) \propto \frac{\exp\big(A(R_T) - \frac{1}{2}\int_0^T [\alpha'(R_t) + \alpha^2(R_t)]\,dt\big)}{\exp(A(R_T))} = \exp\Big( -\int_0^T \frac{\alpha'(R_t) + \alpha^2(R_t)}{2}\,dt \Big). \]
At this point, to make acceptance/rejection work, assume that the function $(\alpha' + \alpha^2)/2$ is bounded below by $k_\alpha$. Then setting
\[ \gamma(u) = \frac{\alpha'(u) + \alpha^2(u)}{2} - k_\alpha \tag{10.14} \]
makes $\gamma(u) \ge 0$, and since $\exp(-k_\alpha T)$ is a constant,
\[ \frac{dQ}{d\mathcal{L}\{R_t\}}(\{R_t\}) \propto \exp\Big( -\int_0^T \gamma(R_t)\,dt \Big) \le 1, \quad \text{with probability 1 under } \mathcal{L}\{R_t\}. \tag{10.15} \]
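As a concrete illustration (this example is mine, not from [11]): for the drift $\alpha(u) = -\tanh(u)$, one has $\alpha'(u) + \alpha^2(u) = \tanh^2(u) - \operatorname{sech}^2(u)$, which is bounded below by $-1$, so $k_\alpha = -1/2$ works and $\gamma(u) = \tanh^2(u)$; conveniently, this $\gamma$ is also bounded above by $M = 1$.

    import math

    def alpha(u):
        # Drift pulling the diffusion back toward 0.
        return -math.tanh(u)

    def gamma(u):
        # (alpha'(u) + alpha(u)^2)/2 - k_alpha with k_alpha = -1/2
        # simplifies to tanh(u)^2, so 0 <= gamma(u) < 1.
        return math.tanh(u) ** 2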
It is necessary here that $A(u)$ be a function such that $\exp(A(u) - (u-x)^2/(2T))$ is a normalizable density. This is equivalent to saying that there is a constant $a_1$ such that $A(u) \le a_1 u^2$ for sufficiently large or small $u$. When $a_1$ is large, $T$ might need to be very small for the density to be normalizable.
When the density is normalizable, if $\alpha' + \alpha^2$ is bounded below, it is possible in theory to use an AR approach to generate draws from $Q$ using draws of $\{R_t\}$.
Beskos et al. [11] introduced a Poisson point process procedure for accomplishing this. Let $A = \{(t,y) : t \in [0,T],\ y \in [0, \gamma(R_t)]\}$. Recall that if $P$ is a Poisson point process over $B$ where $A \subseteq B$, then $\mathbb{P}(\#(P \cap A) = 0) = \exp(-\nu(A))$. Using $\nu(A) = \int_0^T \gamma(R_t)\,dt$ then gives the desired result.
So how to determine if $\#(P \cap A) = 0$? First consider the case where the function $\gamma$ is bounded above by $M$. Then $A \subseteq [0,T] \times [0,M]$, and it is possible to generate a PPP $P$ of constant intensity 1 over $[0,T] \times [0,M]$. Consider a point $(t_i, y_i) \in P$. When does this point fall into $A$?
Since $R_t$ and $\gamma$ are continuous maps, this point falls into $A$ if and only if $y_i \le \gamma(R_{t_i})$. So to determine if any point of $P$ falls into $A$, it is only necessary to evaluate $R_t$ at the finite set of time values $\{t_1, t_2, \ldots, t_{\#P}\}$.
Alternately, given the time value $t_i$ of a point of $P$, the chance that $y_i > \gamma(R_{t_i})$ is $\max\{0, (M - \gamma(R_{t_i}))/M\}$. So it is not necessary to draw $y_i$ explicitly.
When the PPP on $[0,T] \times [0,M]$ is projected onto the interval $[0,T]$ by removing the second coordinate, the result is a one-dimensional PPP of rate $M$. It is possible to generate such a PPP by adding exponential random variables of rate $M$ to 0 until the result is greater than $T$. An exponential random variable of rate $M$ has the same distribution as $(1/M)\ln(1/U)$, where $U$ is uniform over $[0,1]$.
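A short Python sketch of this generator (function name mine): the partial sums of Exp($M$) spacings that land in $[0,T]$ are the points of the projected PPP.

    import math, random

    def ppp_times(M, T):
        # Add Exp(M) spacings, each distributed as (1/M) ln(1/U),
        # until passing T; the partial sums in [0, T] are the PPP points.
        times, t = [], 0.0
        while True:
            t += math.log(1.0 / (1.0 - random.random())) / M
            if t > T:
                return times
            times.append(t)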
The authors referred to this method as retrospective sampling, since once the
Poisson process is simulated, the user needs to go back retrospectively and simulate
the proper number of points from the conditional Brownian motion to determine if
acceptance or rejection occurs.
The following pseudocode generates $X \sim X_T$ for $dX_t = \alpha(X_t)\,dt + dB_t$, where $\gamma(u) = (\alpha'(u) + \alpha(u)^2)/2 - \inf_w (\alpha'(w) + \alpha(w)^2)/2$ and $\gamma(u) \le M$ for all $u$.
Retrospective sampling for SDEs
Input: T, x_0    Output: X ~ X_T
1)  X ← x_0
2)  Repeat
3)      Draw R from density f_R(u) ∝ exp(A(u) − (u − X)^2/(2T)), a ← 1
4)      i ← 1, U ← Unif([0,1]), t_1 ← −(1/M) ln(U)
5)      While t_i ≤ T
6)          U ← Unif([0,1]), i ← i + 1, t_i ← t_{i−1} + (1/M) ln(1/U)
7)      Draw (B_{t_1}, ..., B_{t_{i−1}}, B_T) ← Standard Brownian Motion(t_1, ..., t_{i−1}, T)
8)      For j from 1 to i − 1 do
9)          V_j ← Unif([0,1])
10)         R_{t_j} ← x_0 + B_{t_j} + (t_j/T)(R − x_0 − B_T)
11)         a ← a · 1(V_j ≥ γ(R_{t_j})/M)
12) Until a = 1
13) X ← R
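Below is a Python sketch of this pseudocode (all names are mine). One extra assumption is made so that line 3 is easy: $\exp(A(u))$ is bounded above by $\exp(A_{\max})$, so the draw of $R$ can itself be done by acceptance/rejection from the $N(X,T)$ proposal. That is only one way to perform the endpoint draw, not the method prescribed by [11].

    import math, random

    def retrospective_sde(T, x0, A, gamma, M, A_max=0.0):
        # Draw X ~ X_T for dX_t = alpha(X_t) dt + dB_t, where A is the
        # integral of alpha, gamma is as in (10.14) with gamma(u) <= M,
        # and exp(A(u)) <= exp(A_max) for all u (assumed).
        X = x0
        while True:
            # Line 3: R from f_R(u) proportional to exp(A(u) - (u-X)^2/(2T)),
            # by acceptance/rejection against the N(X, T) proposal.
            while True:
                R = random.gauss(X, math.sqrt(T))
                if random.random() < math.exp(A(R) - A_max):
                    break
            # Lines 4-6: times of a rate-M Poisson point process on [0, T].
            times, t = [], 0.0
            while True:
                t += math.log(1.0 / (1.0 - random.random())) / M
                if t > T:
                    break
                times.append(t)
            # Line 7: standard Brownian motion at the PPP times and at T.
            B, prev_t, w = {}, 0.0, 0.0
            for s in times + [T]:
                w += random.gauss(0.0, math.sqrt(s - prev_t))
                B[s], prev_t = w, s
            # Lines 8-11: thin the PPP against the bridge from X to R; a
            # point at time s falls in the region A when V_j <= gamma(R_s)/M.
            accept = True
            for s in times:
                R_s = X + B[s] + (s / T) * (R - X - B[T])
                if random.random() <= gamma(R_s) / M:
                    accept = False
                    break
            if accept:
                return R

For the $\alpha(u) = -\tanh(u)$ example above, $A(u) = -\ln\cosh(u) \le 0$, so a call would be retrospective_sde(1.0, 0.0, lambda u: -math.log(math.cosh(u)), lambda u: math.tanh(u)**2, 1.0).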
Lemma 10.4. Suppose Retrospective sampling for SDEs uses $N$ random variables in a run. Then
\[ 2(1 + MT) \le \mathbb{E}[N] \le 2(1 + MT)e^{MT}. \tag{10.16} \]
Proof. One random variable $R$ is chosen at line 3. In line 4, one random variable is used to determine $\#P$, then if $\#P > 0$, $2MT$ more are needed on average to determine the PPP $P$. Note that for a random variable $X$ that is nonnegative, $\mathbb{P}(X > 0)\mathbb{E}[X \mid X > 0] = \mathbb{E}[X]$, so $\mathbb{E}[N] \ge 1 + 1 + 2MT = 2(1 + MT)$.

Suppose that rejection occurs. This can only happen when $\#P > 0$, but in this case, the process starts over and $\mathbb{E}[N]$ more variables are needed. The chance that $\#P > 0$ is $1 - \exp(-MT)$. Hence
\[ \mathbb{E}[N] \le 2(1 + MT) + (1 - \exp(-MT))\mathbb{E}[N], \tag{10.17} \]
and solving for $\mathbb{E}[N]$ gives $\mathbb{E}[N] \le 2(1 + MT)e^{MT}$.
This bound is far too wide to be useful when $MT$ is large. Therefore, a better approach is to break the interval $[0,T]$ into $n$ different intervals of width $T/n$. The diffusion $\{X_t\}$ has the Markov property, which means that one can simulate $X_{T/n} \mid X_0$, then $X_{2T/n} \mid X_{T/n}$, and so on until reaching $X_T$.
Divided retrospective sampling for SDEs
Input: T, x_0, n    Output: X ~ X_T
1) X ← x_0
2) For i from 1 to n
3)     X ← Retrospective sampling for SDEs(T/n, X)
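A direct Python transcription, assuming the retrospective_sde sketch above:

    def divided_retrospective_sde(T, x0, n, A, gamma, M):
        # Chain n endpoint draws over subintervals of width T/n,
        # using the Markov property of the diffusion.
        X = x0
        for _ in range(n):
            X = retrospective_sde(T / n, X, A, gamma, M)
        return X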
Lemma 10.5. Suppose Divided retrospective sampling for SDEs uses $N$ random variables. Then
\[ 2n(1 + MT/n) \le \mathbb{E}[N] \le 2n(1 + MT/n)\exp(MT/n). \tag{10.18} \]
In particular, if $n = \lceil MT\varphi \rceil$, where $\varphi = (1 + \sqrt{5})/2$ is the golden mean, then $3.61MT \le \mathbb{E}[N] \le 9.72MT$.

Proof. There are $n$ intervals each of width $T/n$, so the previous lemma immediately gives (10.18). So if $x = MT/n$, then $2(1 + \varphi)MT \le \mathbb{E}[N] \le MT(2/x)(1 + x)\exp(x)$. The right-hand side is minimized at $x = 1/\varphi$, and the result follows.
One more comment: usually acceptance/rejection algorithms are interruptible, since the number of times a proposal is made does not affect the distribution of the final outcome. But here there is an interesting situation where the time to determine if acceptance or rejection occurs depends on the number of points in the PPP over $[0,T]$, and that does affect the chance of acceptance. So if time is measured by the number of random variates generated, this algorithm (and the rest of the algorithms presented in this chapter) are noninterruptible.
10.3.1 Somewhat bounded
Now suppose that $\gamma$ is unbounded, but $\sup_{x \in [-M,\infty)} \gamma(x)$ exists for all $M$. Then the techniques of the previous section can still be used, but the execution is a bit trickier.
Again, the presentation of this algorithm follows [11] closely. The first thing that is needed is a way of finding the minimum of a Brownian bridge.

To accomplish this, first consider the distribution of the first time that Brownian motion reaches a fixed level. In this literature this is known as the inverse Gaussian distribution. The following lemma connects this distribution to first passage times.
Lemma 10.6. Let $B_t$ be standard Brownian motion, and $X_t = k_1 t + k_2 B_t$ for parameters $k_1$ and $k_2$ that satisfy $k_1 > 0$, $k_2 \ne 0$. Let $T_\alpha = \inf\{t : X_t = \alpha\}$. Then
\[ T_\alpha \sim \mathrm{InvGau}(\alpha/k_1,\ \alpha^2/k_2^2). \tag{10.19} \]
An inverse Gaussian can be simulated easily with the ability to generate normally distributed random variables, using an algorithm of Michael, Schucany, and Haas [92].
Inverse Gaussian
Input: μ, λ    Output: X ~ InvGau(μ, λ)
1) Draw W ← N(0,1), let Y ← W^2
2) X_1 ← μ + μ^2 Y/(2λ) − (μ/(2λ))√(4μλY + μ^2 Y^2)
3) Draw C ← Bern(μ/(μ + X_1))
4) X ← X_1 · C + (1 − C) · μ^2/X_1
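A Python sketch of this procedure (function name mine), following the Michael-Schucany-Haas recipe above:

    import math, random

    def inverse_gaussian(mu, lam):
        # Draw X ~ InvGau(mu, lam): compute the root X_1, keep it with
        # probability mu/(mu + X_1), and otherwise return mu^2/X_1.
        y = random.gauss(0.0, 1.0) ** 2
        x1 = (mu + mu * mu * y / (2.0 * lam)
              - (mu / (2.0 * lam)) * math.sqrt(4.0 * mu * lam * y + (mu * y) ** 2))
        if random.random() < mu / (mu + x1):
            return x1
        return mu * mu / x1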
Given the story behind an inverse Gaussian, it is unsurprising that this distribution
will enter into the simulation of the minimum of the Brownian bridge.
Minimum Brownian Bridge
Input: T, a    Output: Z_1 ~ min_{t∈[0,T]} R_t^{0,a}, Z_2 ~ argmin_{t∈[0,T]} R_t^{0,a}
1) Draw A ← Exp(1)
2) Z_1 ← (a − √(2AT + a^2))/2
3) c_1 ← (a − Z_1)^2/(2T), c_2 ← Z_1^2/(2T)
4) Draw U ← Unif([0,1])
5) Draw I_1 ← Inverse Gaussian(√(c_1/c_2), 2c_1)
6) Draw I_2 ← 1/Inverse Gaussian(√(c_2/c_1), 2c_2)
7) V ← I_1 · 1(U < (1 + √(c_1/c_2))^(−1)) + I_2 · 1(U ≥ (1 + √(c_1/c_2))^(−1))
8) Z_2 ← T/(1 + V)
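A Python transcription (assuming the inverse_gaussian sketch above):

    import math, random

    def minimum_brownian_bridge(T, a):
        # Value Z1 and location Z2 of the minimum of a Brownian
        # bridge running from 0 at time 0 to a at time T.
        A = -math.log(1.0 - random.random())        # A ~ Exp(1)
        Z1 = (a - math.sqrt(2.0 * A * T + a * a)) / 2.0
        c1 = (a - Z1) ** 2 / (2.0 * T)
        c2 = Z1 ** 2 / (2.0 * T)
        if random.random() < 1.0 / (1.0 + math.sqrt(c1 / c2)):
            V = inverse_gaussian(math.sqrt(c1 / c2), 2.0 * c1)
        else:
            V = 1.0 / inverse_gaussian(math.sqrt(c2 / c1), 2.0 * c2)
        return Z1, T / (1.0 + V)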
Lemma 10.7 (Proposition 2 of [11]). The output $(Z_1, Z_2)$ of Minimum Brownian Bridge is distributed as the value and location of the minimum of Brownian motion conditioned to have $B_0 = 0$ and $B_T = a$.
See [11] for the proof.
After running this procedure, the minimum value $b$ (and the time $t$ at which the minimum occurs) for the Brownian bridge is known. To get a Brownian bridge between $x$ and $y$, find a Brownian bridge between 0 and $y - x$, and add $x$ to each point. Let $M = \max_{u \in [b+x,\infty)} \gamma(u)$ be the largest that $\gamma(B_t)$ can be for times in $[0,T]$. Then generate the PPP over $[0,T]$ of rate $M$ as before, and compare to the Brownian motion simulated over $[0,T]$ at those times to see if acceptance occurs.
The slightly tricky part at this stage is simulating the Brownian bridge conditioned on the minimum value $b$ occurring at time $t$. The Brownian bridge over $[0,t]$ is conditioned to have $B_0 = 0$ and $B_t = b$, and $B_s \ge b$ for all $s \in [0,T]$.
Such a path can be built by considering the time interval $[0,t]$ and the interval $[t,T]$ separately. To build these paths, it is necessary to be able to simulate a Brownian motion conditioned to stay nonnegative. Such a path is called a Brownian meander. Because it is also an example of a Bessel process conditioned on the endpoints, it is also known as a Bessel bridge. Exact simulation of such entities follows from a result of Bertoin and Pitman [7].
Lemma 10.8. Suppose $\{B_t^1\}$, $\{B_t^2\}$, and $\{B_t^3\}$ are Brownian bridges over $[0,1]$ that begin and end at 0. Then
\[ B_s^{\mathrm{meander},y} = \sqrt{(ys + B_s^1)^2 + (B_s^2)^2 + (B_s^3)^2} \tag{10.20} \]
is Brownian motion over $[0,1]$ that is conditioned to stay nonnegative, and $B_1^{\mathrm{meander},y} = y$.
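A minimal Python sketch of this construction (function name mine): each Brownian bridge over $[0,1]$ begun and ended at 0 is realized as $W_s - sW_1$ for a standard Brownian motion $W$, and the three bridges are combined as in (10.20).

    import math, random

    def brownian_meander(y, times):
        # Brownian motion over [0,1] conditioned to stay nonnegative, with
        # value y at time 1, evaluated at the sorted grid `times` in (0,1).
        grid = list(times) + [1.0]
        bridges = []
        for _ in range(3):
            w, prev_t, acc = {}, 0.0, 0.0
            for t in grid:
                acc += random.gauss(0.0, math.sqrt(t - prev_t))
                w[t], prev_t = acc, t
            bridges.append({t: w[t] - t * w[1.0] for t in grid})
        b1, b2, b3 = bridges
        return [math.sqrt((y * s + b1[s]) ** 2 + b2[s] ** 2 + b3[s] ** 2)
                for s in times]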