RANDOMIZED APPROXIMATION FOR #P COMPLETE PROBLEMS 33
Definition 2.12. A problem is NP complete if all problems in NP are polynomial
time reducible to it. A problem is #P complete if all problems in #P are polynomial
time reducible to it.
In an NP complete problem, the goal is to determine if there is at least one solu-
tion. In #P complete problems, the goal is to count exactly the number of solutions.
Therefore #P complete problems are in one sense more difficult than NP complete
problems.
So why bring all this up now? Because counting the number of satisfying assignments to a DNF formula is a #P complete problem.
This means that it is unlikely that an efficient algorithm for counting the number
of such solutions exactly will be found. It turns out that having an AR algorithm
means that there is an efficient method for estimating the number, though!
The number of draws used by an AR algorithm is geometric with parameter equal
to the probability of success. So in basic AR where draws from ν over B are made
until the result falls in A, the chance of success is ν(A)/ν(B). Say that T draws are
used by the algorithm. If ν(B) is known, then 1/T is an estimate of ν(A)/ν(B), so
ν(B)/T is an estimate of ν(A).
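As a concrete sketch (my own illustration, not from the text), the following Python snippet runs the basic AR loop with B the unit square under ν = area (so ν(B) = 1) and A the quarter disk of radius 1, so ν(A) = π/4. Averaging the draw count T over many runs before inverting gives a stable estimate of ν(A):

```python
import random

def ar_draw_count(in_A, sample_B):
    # Number of draws from nu over B until one lands in A
    # (geometric with success probability nu(A)/nu(B)).
    t = 1
    while not in_A(sample_B()):
        t += 1
    return t

# Illustration: B = unit square with nu = area (so nu(B) = 1),
# A = quarter disk of radius 1, so nu(A) = pi/4 ~ 0.785.
rng = random.Random(0)
in_disk = lambda p: p[0] ** 2 + p[1] ** 2 <= 1.0
point = lambda: (rng.random(), rng.random())

# nu(B)/T estimates nu(A); here T is averaged over many runs
# before inverting, which reduces the variance of the estimate.
runs = 20000
mean_T = sum(ar_draw_count(in_disk, point) for _ in range(runs)) / runs
estimate = 1.0 / mean_T  # nu(B) = 1 here
```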
Definition 2.13. Let P be a problem with input in I and output in R. Call a computable function f : I × [0,1] → R an (ε,δ)-randomized approximation scheme (ras) if
for U ∼ Unif([0,1]),

(∀I ∈ I)  P(|f(I,U)/P(I) − 1| > ε) ≤ δ.
For an (ε,δ)-ras, the chance that the relative error is greater than ε is at most δ.
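To make the definition concrete, here is a small empirical check (my own illustration, not the algorithm described below) that the plain sample-mean estimator of a Bernoulli mean p behaves as an (ε,δ)-ras when the sample size is chosen via the multiplicative Chernoff bound:

```python
import math
import random

# The multiplicative Chernoff bound gives
#   P(|mean/p - 1| > eps) <= 2 exp(-n eps^2 p / 3),
# so n = ceil(3 ln(2/delta) / (eps^2 p)) samples make the
# failure probability at most delta.
rng = random.Random(2)
p, eps, delta = 0.3, 0.2, 0.05
n = math.ceil(3 * math.log(2 / delta) / (eps ** 2 * p))

trials = 2000
failures = 0
for _ in range(trials):
    est = sum(rng.random() < p for _ in range(n)) / n
    if abs(est / p - 1) > eps:
        failures += 1
fail_rate = failures / trials  # should be at most delta
```

Note that this simple scheme needs to know p (or a lower bound on it) to pick n, which is exactly the difficulty the algorithm cited below avoids.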
Consider the problem of estimating the probability of heads for a coin that
can be flipped as often as needed. This is equivalent to estimating the mean of
Bernoulli random variables X₁, X₂, ... which are identically distributed with mean
p. Dagum, Karp, Luby, and Ross gave an (ε,δ)-ras for this problem that uses
O(p⁻¹ε⁻² ln(δ⁻¹)) samples [26]. Rather than describe this algorithm here, a more
advanced version is given in Section 2.5.
This is exactly what is needed when AR is being used to estimate the solution to
a problem in #P. The value of p is ν(A)/ν(B), so AR gives an (ε,δ)-ras that
uses O(ν(B)ν(A)⁻¹ε⁻² ln(δ⁻¹)) samples.
Fact 2.1. An (ε,δ)-ras for counting the satisfying assignments of a DNF formula
containing n variables and m clauses exists that uses O((n + ln(m))mε⁻² ln(δ⁻¹))
Bernoulli random variables on average.
Proof. The underlying measure ν here is just counting measure on the 2ⁿ possible
true-false values for the variables. The target measure is f(x) = 1(x is satisfying),
and so Z_f is the number of satisfying assignments. Now g(x) = #{satisfied clauses},
and so Z_g ≤ mZ_f. Finally, c = 1. Hence Z_f/[cZ_g] ≥ 1/m.

Each draw requires choosing I ∼ Unif({1,...,m}) (which requires ln(m)
Bernoullis) and then an assignment of up to n variables (which requires n
Bernoullis).
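The draw-and-accept step of the proof can be sketched in code. One caveat: choosing the clause I uniformly matches the count exactly when all clauses have the same length; the sketch below (my own illustration, with hypothetical names and a made-up test formula) uses the standard Karp–Luby variant that weights clause i by 2^(n−|Cᵢ|) so that unequal clause lengths are handled correctly:

```python
import itertools
import random

# A DNF formula is a list of clauses; a clause is a list of
# (variable index, required truth value) literals.
def ar_count_dnf(clauses, n, samples, rng):
    # Clause i can be completed to 2^(n - len(C_i)) assignments.
    weights = [2 ** (n - len(c)) for c in clauses]
    total_w = sum(weights)
    accepted = 0
    for _ in range(samples):
        # Draw a clause proportionally to its weight, fix its literals,
        # and fill in the remaining variables uniformly at random.
        i = rng.choices(range(len(clauses)), weights=weights)[0]
        x = [rng.random() < 0.5 for _ in range(n)]
        for var, val in clauses[i]:
            x[var] = val
        # Accept with probability 1 / #{clauses that x satisfies};
        # each satisfying assignment is then accepted with the same
        # probability 1/total_w, so P(accept) = Z_f / total_w.
        s = sum(all(x[v] == b for v, b in c) for c in clauses)
        if rng.random() < 1.0 / s:
            accepted += 1
    return total_w * accepted / samples

# Tiny test formula: (x0 AND x1) OR (NOT x1 AND x2) OR x3 on 4 variables.
clauses = [[(0, True), (1, True)], [(1, False), (2, True)], [(3, True)]]
n = 4
exact = sum(any(all(x[v] == b for v, b in c) for c in clauses)
            for x in itertools.product([False, True], repeat=n))
approx = ar_count_dnf(clauses, n, 50000, random.Random(3))
```

Brute-force enumeration gives the exact count to compare against; on this formula the AR estimate concentrates near it as the number of samples grows.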