Appendix C

Theoretical Analysis of Stochastic Relaxation Radiosity

In this appendix, we show how the variance of the incremental shooting iterative algorithm of Section 6.3 can be analyzed, and demonstrate how a number of practical results can be derived from it. The analysis of the other algorithms is very similar and is a recommended exercise for the interested reader.

We start with the derivation of the variance of the incremental shooting iterative algorithm. The first thing to point out is that the resulting radiosities are obtained as the sum of increments computed in several iteration steps. We first derive the variance of a single iteration and next show how the variance on the converged results is composed from the single-iteration variances.

Variance of a single incremental shooting iteration. The variance of a single incremental shooting iteration can be derived by straightforward application of the definition of Monte Carlo summation variance:

$$S = \sum_{i=1}^{n} a_i \qquad \text{(sum to be computed, consisting of $n$ terms)},$$

$$\hat{S} = \frac{a_s}{p_s}, \quad s \text{ picked with probability } p_s \qquad \text{(single-sample estimate)},$$

$$V[\hat{S}] = \sum_{i=1}^{n} \frac{a_i^2}{p_i} - S^2 \qquad \text{(single-sample variance)}.$$

For N samples, the variance is V[Ŝ]/N.
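
As an aside (this sketch is not part of the original text), the estimator above can be written in a few lines of Python; the function name mc_sum and the input numbers are purely illustrative:

    import random

    def mc_sum(a, p, num_samples):
        """Estimate S = sum(a) by repeatedly picking an index i with
        probability p[i] and averaging the single-sample estimates a[i]/p[i]."""
        total = 0.0
        for _ in range(num_samples):
            i = random.choices(range(len(a)), weights=p, k=1)[0]  # pick one term
            total += a[i] / p[i]                                  # single-sample estimate
        return total / num_samples

    # Illustrative terms; choosing p[i] proportional to a[i] makes a[i]/p[i]
    # constant, so the single-sample variance above vanishes.  Sampling roughly
    # in proportion to the terms is what keeps the variance low.
    a = [3.0, 1.0, 4.0, 1.5]
    p = [x / sum(a) for x in a]
    print(mc_sum(a, p, 1000))  # close to sum(a) = 9.5 (here exact)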

The sum to be estimated here is given in Equation 6.11. The probabilities p for picking terms from the sum are in Equation 6.12. The resulting single-sample variance of the kth incremental shooting iteration is

$$V[\Delta\hat{P}_i^{(k+1)}] = \rho_i\,\Delta P_T^{(k)}\,\Delta P_i^{(k+1)} - \bigl(\Delta P_i^{(k+1)}\bigr)^2. \tag{C.1}$$

The latter term is usually negligible compared to the former (ΔPi(k+1) ≪ ΔPT(k)).
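
To sketch where Equation C.1 comes from, assume that the sum of Equation 6.11 has the form ΔPi(k+1) = Σj ρi Fji ΔPj(k) and that Equation 6.12 assigns the term for the pair (j, i) the sampling probability pji = Fji ΔPj(k)/ΔPT(k), i.e., source patches are chosen in proportion to their unshot power and destinations with form-factor probability. Substituting into the single-sample variance formula above gives

$$V[\Delta\hat{P}_i^{(k+1)}] = \sum_j \frac{\bigl(\rho_i F_{ji}\,\Delta P_j^{(k)}\bigr)^2}{F_{ji}\,\Delta P_j^{(k)}/\Delta P_T^{(k)}} - \bigl(\Delta P_i^{(k+1)}\bigr)^2 = \rho_i\,\Delta P_T^{(k)} \sum_j \rho_i F_{ji}\,\Delta P_j^{(k)} - \bigl(\Delta P_i^{(k+1)}\bigr)^2,$$

and the remaining sum is again ΔPi(k+1), which yields Equation C.1.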

Variance of a sequence of incremental shooting iterations until convergence. The solution Pi is eventually obtained as a sum of increments ΔPi(k) computed in each iteration step. The single-sample variance on each increment ΔPi(k) is given above in Equation C.1. Assuming that subsequent iterations are independent (which is true to good approximation in practice) and that Nk independent samples are used in the kth iteration, the variance on the result of K iterations will be

$$V[\hat{P}_i] = \sum_{k=1}^{K} \frac{1}{N_k}\, V[\Delta\hat{P}_i^{(k)}].$$

Optimal allocation of N = Σk Nk samples over the individual iterations is obtained if Nk is chosen proportional to the single-sample variance V[ΔP̂i(k)] (Section 3.6.5). For all patches i, V[ΔP̂i(k)] (Equation C.1) is approximately proportional to ΔPT(k−1), suggesting that we choose the number of samples in the kth iteration proportional to the total unshot power ΔPT(k−1) to be propagated in that iteration:

$$N_k \approx N\, \frac{\Delta P_T^{(k-1)}}{P_T}.$$

When Nk drops below a small threshold, convergence has been reached. Combining all above results, it can be shown that the variance on the radiosity Bi after convergence is to good approximation given by

$$V[\hat{B}_i] \approx \frac{P_T}{N}\, \frac{\rho_i\,(B_i - B_i^e)}{A_i}. \tag{C.2}$$
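
As a sketch of this combination (neglecting, as above, the second term of Equation C.1), substitute V[ΔP̂i(k)] ≈ ρi ΔPT(k−1) ΔPi(k) and Nk ≈ N ΔPT(k−1)/PT into the sum over iterations:

$$V[\hat{P}_i] \approx \sum_{k=1}^{K} \frac{P_T}{N\,\Delta P_T^{(k-1)}}\; \rho_i\,\Delta P_T^{(k-1)}\,\Delta P_i^{(k)} = \frac{P_T}{N}\,\rho_i \sum_{k=1}^{K} \Delta P_i^{(k)} = \frac{P_T}{N}\,\rho_i\,\bigl(P_i - P_i^{e}\bigr).$$

Since Bi = Pi/Ai, dividing by Ai² and using Pi − Pie = (Bi − Bie)Ai gives Equation C.2; the self-emitted power Pie is known exactly and contributes no variance.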

Time complexity. We now turn to the question of how the number of samples N needs to be varied as a function of the number of patches n in order to compute all radiosities Bi to prescribed accuracy ε with 99.7% confidence. According to the central limit theorem (Section 3.4.4), the number of samples N shall be chosen so that

$$3\sqrt{V[\hat{B}_i]} \leq \varepsilon$$

for all i. Filling in Equation C.2 then yields

$$N \geq \frac{9\,P_T}{\varepsilon^2}\, \max_i \frac{\rho_i\,(B_i - B_i^e)}{A_i}. \tag{C.3}$$

This formula allows us to examine how the number of rays to be shot must be increased as a scene to be rendered is “made larger.” There are, however, many possible scenarios for how a scene can be “made larger”: new objects can be added, for instance, or one can switch to a finer tessellation of the surfaces in the scene without adding new objects. If all patches in a scene are split in two, the required number of rays to obtain a given accuracy will need to be doubled, as dividing the patches (asymptotically) has no effect on reflectivities and radiosities. The number of rays thus grows linearly with the number of patches n. The cost of shooting a ray is often assumed to be logarithmic in the number of polygons, so Monte Carlo radiosity algorithms are often said to have log-linear complexity, although the truth is much more complicated. In any case, their complexity is much lower than quadratic. This result is valid not only for incremental stochastic shooting of power but also for other Monte Carlo radiosity algorithms based on shooting [169], [162], [15].

A heuristic for choosing the number of samples N. We have demonstrated that the number of samples in each incremental shooting iteration shall be chosen proportional to the amount of power to be distributed in that iteration. In other words, each ray to be shot shall propagate the same “quantum” of light energy. We have not yet answered the question of how large these quanta should be, however, or, equivalently, how many rays N to choose for a complete sequence of incremental shooting iterations to convergence. That is the question addressed here.

Equation C.3 allows us to derive the answer. Suppose one wants to choose N so that, with 99.7% confidence, the error ε on any patch i will be less than the average radiosity Bav = PT/AT in the scene. The total power PT in Equation C.3 can then be replaced by ATε. Typically, Bie = 0 for most patches in the scene. Approximating Bi − Bie by the average radiosity, and thus by ε, then yields

$$N \approx 9 \cdot \max_i \frac{\rho_i\, A_T}{A_i}. \tag{C.4}$$
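
Explicitly, substituting PT = ATε and Bi − Bie ≈ ε into Equation C.3 gives

$$N \approx \frac{9\,A_T\,\varepsilon}{\varepsilon^2}\, \max_i \frac{\rho_i\,\varepsilon}{A_i} = 9\, \max_i \frac{\rho_i\, A_T}{A_i},$$

so the accuracy ε cancels and the resulting ray budget depends only on the reflectivities and the geometry of the scene.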

In practice, it makes a lot of sense to skip, for instance, the 10% of patches in the scene with the largest ratio ρi/Ai when evaluating the maximum, since a few small patches with a high ratio would otherwise inflate N unnecessarily. Note that a rough heuristic for N suffices: higher accuracy can always be obtained by averaging the results of several independent runs of the algorithm.
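
The code below is an illustration only, not taken from the text; the function names, the skip_fraction parameter, and the example numbers are all hypothetical. It sketches the heuristic in Python: the total ray budget follows Equation C.4, optionally ignoring the most extreme fraction of the ratios ρi/Ai, and each incremental shooting iteration then receives rays in proportion to the unshot power it propagates, as derived earlier in this appendix.

    import math

    def total_ray_budget(reflectivity, area, skip_fraction=0.1):
        """Heuristic total number of rays, N ~ 9 * max_i (rho_i * A_T / A_i)
        (Equation C.4), ignoring the most extreme skip_fraction of rho_i/A_i."""
        A_T = sum(area)
        ratios = sorted(r / a for r, a in zip(reflectivity, area))
        idx = max(0, math.ceil(len(ratios) * (1.0 - skip_fraction)) - 1)
        return math.ceil(9.0 * A_T * ratios[idx])

    def rays_for_iteration(N, unshot_power, total_power):
        """Allocate N_k ~ N * unshot_power / total_power rays to the current
        incremental shooting iteration, so that every ray carries the same
        quantum of power (total_power / N)."""
        return round(N * unshot_power / total_power)

    # Hypothetical three-patch scene: per-patch reflectivities and areas.
    N = total_ray_budget([0.8, 0.5, 0.3], [1.0, 2.0, 4.0])
    # First iteration propagates the self-emitted power, here assumed to be
    # 40% of the total power in the scene.
    N_1 = rays_for_iteration(N, unshot_power=0.4, total_power=1.0)

In a full implementation, the loop over iterations would terminate as soon as rays_for_iteration returns a value below a small threshold, as noted above.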
