Proof. From the last fact, the points −ln(U_1), −ln(U_1) − ln(U_2), −ln(U_1) − ln(U_2) − ln(U_3), ... form a Poisson process of rate 1. The number of such points that fall in the interval [−ln(ν(B)), −ln(ν(A))] will be Poisson distributed with mean −ln(ν(A)) − (−ln(ν(B))) = ln(ν(B)/ν(A)).
Poisson random variables, like normals, have the nice property that when independent values are added, you simply add the individual parameters. So if TPA is run k times and the results added, the result N_k is a Poisson random variable with mean k ln(ν(B)/ν(A)). For any choice of k by the simulator, it is possible to construct an exact confidence interval for the resulting estimate ν(B) ≈ â, where â = ν(A) exp(N_k/k).
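As a sanity check on this additivity and on the estimator â = ν(A) exp(N_k/k), the following sketch simulates k runs directly; the values of ν(A), ν(B), and k below are illustrative choices, not from the text.

```python
import math
import random

# Illustrative values: nu(A) and nu(B) are known here, so we can check
# that a_hat = nu(A) * exp(N_k / k) recovers nu(B).
nu_A, nu_B = 1.0, 20.0
mu = math.log(nu_B / nu_A)   # mean of a single run's Poisson count

def poisson(mean):
    """Draw Pois(mean) by counting rate-1 exponential arrivals in [0, mean]."""
    total, n = 0.0, 0
    while True:
        total += -math.log(random.random())   # Exp(1) increment
        if total > mean:
            return n
        n += 1

random.seed(1)
k = 10_000
N_k = sum(poisson(mu) for _ in range(k))   # sum of k Pois(mu) is Pois(k * mu)
a_hat = nu_A * math.exp(N_k / k)
```

For k = 10,000 runs, N_k/k concentrates around ln(ν(B)/ν(A)), so â lands close to ν(B).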
So far we have assumed that ν(A) is easy to compute, but ν(B) is hard. Of course, TPA just estimates the ratio between the two, so it is just as useful when ν(B) is easy to compute and the goal is to approximate ν(A).
11.2.2 TPA for Gibbs distributions
Suppose that a density can be written as f(x) = exp(−βH(x))/Z_β for a function H(x). A density of this form is said to represent a Gibbs distribution. TPA can be applied to Gibbs distributions when H(x) ≤ 0. (Actually, any upper or lower bound on H(x) suffices, but usually H(x) ≤ 0 in practice, so that is the case considered here.) For instance, the Ising model with no external magnetic field has H(x) = −∑_{{i,j}∈E} 1(x(i) = x(j)) ≤ 0. As usual, let the set of configurations be Ω.
Let B_β = {(x, y) : x ∈ Ω, y ∈ [0, exp(−βH(x))]}. The restriction that H(x) ≤ 0 ensures that for β < β′, B_β ⊆ B_β′. Now ν(B_0) = #Ω, which is usually easy to compute. For example, in the Ising model Ω = {−1, 1}^V, so ν(B_0) = 2^{#V}.
To draw (x, y) uniformly from B_β, first draw X from the Gibbs distribution with parameter β, and then Y | X can be drawn uniformly from [0, exp(−βH(X))]. When Y ≤ 1, the next value of β is at most 0 and the center has been reached. When Y > 1, the next value for β is the smallest value of b such that Y ∈ [0, exp(−bH(X))], so b = −ln(Y)/H(X). (When H(X) = 0, just set b = −∞.)
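The update just described can be sketched as a short routine; the function name and sample arguments are illustrative, and H(x) ≤ 0 is assumed as above.

```python
import math
import random

def next_beta(beta, H_of_X):
    """One beta-update of TPA: given the current beta and the value
    H(X) <= 0 for a draw X from the Gibbs distribution at beta,
    return the next value of beta."""
    # Y | X is uniform on [0, exp(-beta * H(X))]
    y = random.uniform(0.0, math.exp(-beta * H_of_X))
    if y <= 1.0 or H_of_X == 0:
        return 0.0                    # center reached
    return -math.log(y) / H_of_X      # smallest b with y <= exp(-b * H(X))
```

Since Y ≤ exp(−βH(X)) always, the returned value lies in [0, β], so the sequence of β values shrinks toward the center at 0.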
TPA for Gibbs
Input: β
Output: N ∼ Pois(ln(Z_β/#Ω))
1) N ← 0
2) While β > 0
3)     Draw X from the Gibbs distribution with parameter β
4)     Draw Y uniformly from [0, exp(−βH(X))]
5)     If Y ≤ 1 or H(X) = 0 then β ← 0
6)     Else β ← −ln(Y)/H(X) and N ← N + 1
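A minimal sketch of this pseudocode on a toy model: the triangle Ising graph below is an illustrative choice, and the Gibbs draw uses brute-force enumeration as a stand-in for the exact sampler the algorithm assumes.

```python
import math
import random
from itertools import product

# Toy Ising model on a triangle graph (3 vertices, illustrative choice).
edges = [(0, 1), (1, 2), (0, 2)]

def H(x):
    # Ising energy with no external field, so H(x) <= 0
    return -sum(1 for (i, j) in edges if x[i] == x[j])

def gibbs_draw(beta):
    # Exact Gibbs sample by enumerating all 2^3 = 8 configurations
    states = list(product([-1, 1], repeat=3))
    weights = [math.exp(-beta * H(x)) for x in states]
    return random.choices(states, weights=weights)[0]

def tpa_for_gibbs(beta):
    """Return N ~ Pois(ln(Z_beta / #Omega))."""
    n = 0
    while beta > 0:
        x = gibbs_draw(beta)
        y = random.uniform(0.0, math.exp(-beta * H(x)))
        if y <= 1.0 or H(x) == 0:
            beta = 0.0
        else:
            beta = -math.log(y) / H(x)
            n += 1
    return n

# Averaging k runs estimates ln(Z_beta / #Omega), so Z_beta is recovered
# by Z_est = #Omega * exp(N_k / k), with #Omega = 2^3 = 8 here.
random.seed(2)
k, beta = 2000, 1.0
N_k = sum(tpa_for_gibbs(beta) for _ in range(k))
Z_est = 8 * math.exp(N_k / k)
Z_true = sum(math.exp(-beta * H(x)) for x in product([-1, 1], repeat=3))
```

With k = 2000 runs the estimate is typically within a few percent of the true value, which for this graph is Z_β = 2e^{3β} + 6e^{β}.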
11.3 Omnithermal approximation
For Gibbs distributions, it is not only possible to use TPA to generate estimates for a single Z_β in this fashion, but also to simultaneously give estimates of Z_β for all β in some interval [β_A, β_B]. Since β is also known as the inverse temperature, getting