5 Three-dimensional image reconstruction

This chapter focuses on 3D tomographic imaging. In 3D, we consider parallel line-integral projections, parallel plane-integral projections, and cone-beam line-integral projections separately. For the 3D parallel line-integral and parallel plane-integral projections, there exist central slice theorems, from which image reconstruction algorithms can be derived. For cone-beam projections, however, the situation is different: there is no central slice theorem for the cone beam, and we have to establish a relationship between the cone-beam projections and the 3D image itself by some other means. Since cone-beam image reconstruction is an active research area, this chapter spends a significant effort on discussing cone-beam reconstruction algorithms, among which the Katsevich algorithm is the latest and the best.

5.1 Parallel line-integral data

In many cases, 3D image reconstruction can be decomposed into a series of slice-by-slice 2D image reconstructions if the projection rays can be divided into groups, where each group contains only those rays that are confined within a transaxial slice (see Figure 5.1, left).

In other cases, the projection rays cross transaxial slices, and the slice-by-slice 2D reconstruction approach does not work (see Figure 5.1, middle and right).

The foundation for 2D parallel-beam image reconstruction is the central slice theorem (Section 2.2). The central slice theorem in 3D states that the 2D Fourier transform P(ωu, ωv, θ̂) of the projection p(u, v, θ̂) of a 3D function f(x, y, z) is equal to a slice through the origin of the 3D Fourier transform F(ωx, ωy, ωz) of that function, with the slice parallel to the detector (see Figure 5.2). Here, θ̂ is the normal direction of both the uv plane and the ωuωv plane. The direction θ̂ represents a group of rays that are parallel to θ̂.

Based on this central slice theorem, we can determine some specific trajectories of θ̂ so that we are able to fill up the (ωx, ωy, ωz) Fourier space. One such option is shown in Figure 5.3, where the trajectory of θ̂ is a great circle. A great circle is a circle of unit radius that lies on the surface of the unit sphere (see Figure 5.4). Each unit vector θ̂ on the great circle corresponds to a measured P(ωu, ωv, θ̂) plane in the (ωx, ωy, ωz) Fourier space. After the unit vector θ̂ completes the great circle, the measured P(ωu, ωv, θ̂) planes fill up the entire (ωx, ωy, ωz) Fourier space. In fact, due to symmetry, a complete data set is already obtained once the unit vector completes half of the great circle.
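This theorem is easy to verify numerically for the simplest direction θ̂ = ẑ, where the detector plane is the xy plane. The following is a minimal numpy sketch using an arbitrary random test volume; the array sizes and names are illustrative assumptions:

```python
import numpy as np

# Numerical sanity check of the 3D central slice theorem for the
# direction theta_hat = z (the detector plane is then the xy plane):
# the 2D FFT of the parallel projection equals the omega_z = 0 slice
# of the 3D FFT of the volume.
rng = np.random.default_rng(0)
f = rng.random((32, 32, 32))           # arbitrary test volume f(x, y, z)

p = f.sum(axis=2)                      # parallel projection along z
P = np.fft.fft2(p)                     # 2D Fourier transform of the projection

F = np.fft.fftn(f)                     # 3D Fourier transform of the volume
print(np.allclose(P, F[:, :, 0]))      # True: the central slice matches
```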

The above example can be generalized, as stated in Orlov’s condition: A complete data set is obtained if every great circle intersects the trajectory of the unit vector θ̂, where θ̂ is the direction of the parallel rays. The trajectory can consist of curves on the unit sphere and can also be regions on the sphere. Some examples of trajectories (shaded in red) are shown in Figure 5.5, where the first three satisfy Orlov’s condition and the last two do not.

image

Fig. 5.1: The measurement rays can be in the planes perpendicular to the axial direction and can also be in the slant planes.

image

Fig. 5.2: The central slice theorem for the 3D line-integral projections.

The image reconstruction algorithm depends on the trajectory of the direction vector θ̂. The basic algorithm development can follow the guidelines below.

image

Fig. 5.3: One measuring direction gives a measured plane in the Fourier space. A great circle trajectory provides full Fourier space measurements.

image

Fig. 5.4: A great circle is a unit circle on the unit sphere.

5.1.1 Backprojection-then-filtering

If the data are sufficiently measured, the 3D image can be exactly reconstructed. Like 2D image reconstruction, one can reconstruct an image either by performing the backprojection first or by performing the backprojection last. If the backprojection is performed first, the algorithm is a backprojection-then-filtering algorithm, and it is described in the following steps:

image

Fig. 5.5: The directional vector trajectories are illustrated as curves or shaded areas on the unit sphere. The trajectories on the top row satisfy Orlov’s condition; the trajectories on the bottom row do not satisfy the condition.

(i) Use an arbitrary point source and find the 3D projection/backprojection point spread function (PSF) h (defined in Figure 3.2). If the original image is f(x, y, z) and the backprojected image is b(x, y, z), then

b = f ∗∗∗ h,

where “∗∗∗” denotes the 3D convolution. For example, if the trajectory of the directional vector θ̂ is the full unit sphere, as shown in the leftmost case in the first row of Figure 5.5, the PSF is

$$h(x,y,z)=\frac{1}{x^2+y^2+z^2}=\frac{1}{r^2},$$

where r is the distance to the point source. In 2D, this projection/backprojection PSF is 1/r. This implies that in 3D, the PSF is sharper than that in 2D because the PSF falls off at a faster rate as r increases.

(ii) Take the 3D Fourier transform of the relationship b = f ∗∗∗ h and obtain

B(ωx, ωy, ωz) = F(ωx, ωy, ωz) H(ωx, ωy, ωz).

After this Fourier transform, b, f, and h become B, F, and H, respectively, and convolution becomes multiplication. If we again use the example in the leftmost case in the upper row of Figure 5.5, the transfer function H is

$$H(\omega_x,\omega_y,\omega_z)=\frac{\pi}{\sqrt{\omega_x^2+\omega_y^2+\omega_z^2}}.$$

Thus, a 3D ramp filter

$$\frac{1}{H(\omega_x,\omega_y,\omega_z)}=\frac{\sqrt{\omega_x^2+\omega_y^2+\omega_z^2}}{\pi}$$

can be used for image reconstruction in this case.

Solve for F as

$$F(\omega_x,\omega_y,\omega_z)=B(\omega_x,\omega_y,\omega_z)\,\frac{\sqrt{\omega_x^2+\omega_y^2+\omega_z^2}}{\pi}.$$

Finally, the image f(x, y, z) is obtained by taking the 3D inverse Fourier transform of F.
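In code, the filtering step and the final inverse transform reduce to a few FFT calls. Below is a minimal numpy sketch for the full-unit-sphere geometry; the function name and the uniform voxel size are illustrative assumptions:

```python
import numpy as np

def filter_backprojection(b, voxel_size=1.0):
    """Recover f from the backprojected volume b when the directional
    vector covers the full unit sphere, i.e., apply the 3D ramp filter
    G = sqrt(wx^2 + wy^2 + wz^2)/pi in the Fourier domain.  A sketch:
    boundary effects and windowing of the ramp are ignored."""
    wx = np.fft.fftfreq(b.shape[0], d=voxel_size)
    wy = np.fft.fftfreq(b.shape[1], d=voxel_size)
    wz = np.fft.fftfreq(b.shape[2], d=voxel_size)
    WX, WY, WZ = np.meshgrid(wx, wy, wz, indexing="ij")
    G = np.sqrt(WX**2 + WY**2 + WZ**2) / np.pi   # 3D ramp filter 1/H
    return np.fft.ifftn(np.fft.fftn(b) * G).real
```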

In general, 3D line-integral data are measured with heavy redundancy. Therefore, the image reconstruction algorithm is not unique because you can always weigh redundant data differently.

5.1.2 Filtered backprojection

In the filtered backprojection (FBP) algorithm, the projection p(u, v, θ̂) is first filtered by a 2D filter (or a 2D convolution), obtaining q(u, v, θ̂). A backprojection of the filtered data q(u, v, θ̂) gives the reconstruction of the image f(x, y, z).

Due to data redundancy, the 2D filter is not unique. The filter generally differs for different data orientations θ̂. One way to obtain a Fourier domain filter is through the central slice theorem.

In Section 5.1.1, we had a projection/backprojection PSF h(x, y, z) and its Fourier transform H(ωx, ωy, ωz). If we let G(ωx, ωy, ωz) = 1/H(ωx, ωy, ωz), then G is the Fourier domain filter in the backprojection-then-filtering algorithm. The Fourier domain 2D filter for the projection p(u, v, θ̂) can be selected as the central slice of G(ωx, ωy, ωz) with the normal direction θ̂ (see Figure 5.2). Note that the filter in general depends on the direction θ̂.

5.2 Parallel plane-integral data

In 3D, the parallel plane-integral p(s, θ̂) of an object f(x, y, z) is referred to as the Radon transform (see Figure 5.6). In 2D, the Radon transform is the parallel line-integral p(s, θ) of f(x, y). In general n-D space, the (n−1)-D hyperplane integral of an n-D function f is called the Radon transform. On the other hand, the 1D integral of the object is called the line integral, ray sum, X-ray transform, or ray transform. In 2D, the Radon transform and the X-ray transform are the same thing.

image

Fig. 5.6: In 3D, the plane integral of an object is the Radon transform.

Unlike line-integral data, plane-integral data are not popular in medical imaging. Nevertheless, the Radon transform in 3D is still worth investigating, because it has a simple and elegant inversion formula and can be used to solve other related imaging problems.

To study the Radon transform in 3D, we imagine a 1D detector that is able to measure the plane integrals p(s, θ̂) of the planes orthogonal to the detector. The detector is along the direction θ̂. The central slice theorem for the Radon transform in 3D states that the 1D Fourier transform P(ω, θ̂) of the projection p(s, θ̂) of a 3D function f(x, y, z) is equal to a 1D profile through the origin of the 3D Fourier transform F(ωx, ωy, ωz) of that function, with the profile parallel to the detector (see Figure 5.7). Here, θ̂ is the direction of both the 1D detector and the 1D profile in the (ωx, ωy, ωz) space.

image

Fig. 5.7: The central slice theorem for the 3D Radon transform.

image

Fig. 5.8: Three-dimensional Radon backprojection is implemented as two steps: a point to a line and then a line to a plane.

We observe from Figure 5.7 that each detector position only measures the frequency components along one line in the (ωx, ωy, ωz) space. The direction θ̂ must go through half of the unit sphere to obtain enough measurements for image reconstruction. After the data are acquired, the image reconstruction algorithm is simple and is in the form of FBP. This form is also referred to as the Radon inversion formula.

In order to reconstruct the image, first take the second-order derivative of p(s, θ̂) with respect to the variable s. This step is called filtering. Then backproject the filtered data into the 3D image array. You will not find an image reconstruction algorithm simpler than this.

The 3D backprojector in the 3D Radon inversion formula backprojects a point into a 2D plane. There is a trick to perform the 3D backprojection with the Radon data: perform it in two steps, each of which is a 2D backprojection. In the first step, a point is backprojected into a line (see Figure 5.8(i)). All data points along a line are backprojected into a set of parallel lines, and these lines lie in a 2D plane. In the second step, each line is backprojected into a 2D plane (see Figure 5.8(ii)). The backprojection directions in these two steps are orthogonal to each other.
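For concreteness, here is a brute-force numpy sketch of the filter-then-backproject recipe. For simplicity it uses a direct one-step backprojection rather than the two-step trick, and the −1/(8π²) normalization anticipated in Section 5.4.3; the sampling grids and function name are illustrative assumptions:

```python
import numpy as np

def radon3d_inverse(p, s_grid, thetas, phis, xs):
    """Brute-force sketch of 3D Radon inversion: second s-derivative of
    p(s, theta_hat), then a direct one-step backprojection over the whole
    sphere of directions (thetas in [0, pi], phis in [0, 2*pi)),
    normalized by -1/(8*pi^2) as in Section 5.4.3.  p has shape
    (len(s_grid), len(thetas), len(phis)); xs is an (N, 3) array of
    reconstruction points."""
    ds = s_grid[1] - s_grid[0]
    d2p = np.gradient(np.gradient(p, ds, axis=0), ds, axis=0)   # filtering

    f = np.zeros(len(xs))
    for i, th in enumerate(thetas):          # polar angle of theta_hat
        for j, ph in enumerate(phis):        # azimuthal angle of theta_hat
            n = np.array([np.sin(th) * np.cos(ph),
                          np.sin(th) * np.sin(ph),
                          np.cos(th)])
            s = xs @ n                       # s = x . theta_hat
            # backproject with the sin(theta) solid-angle weight
            f += np.interp(s, s_grid, d2p[:, i, j]) * np.sin(th)
    dtheta = thetas[1] - thetas[0]
    dphi = phis[1] - phis[0]
    return -f * dtheta * dphi / (8.0 * np.pi**2)
```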

5.3 Cone-beam data

Cone-beam image reconstruction is considerably more complex than that of parallel line-integral and parallel plane-integral data. There is no equivalent central slice theorem known to us. Because the cone-beam imaging geometry (see the two lower figures in Figure 4.13) is extremely popular, for example, in X-ray computed tomography (CT) and in pinhole SPECT (single-photon emission computed tomography), we will spend some effort discussing its reconstruction methods.

First, we have a cone-beam data sufficiency condition (known as Tuy’s condition): Every plane that intersects the object of interest must contain a cone-beam focal point.

image

Fig. 5.9: The circular orbit does not satisfy Tuy’s condition. The circle-and-line and the helix orbits satisfy the condition.

This condition is very similar to the fan-beam data sufficiency condition: Every line that intersects the object of interest must contain a fan-beam focal point.

In Figure 5.9, the circular cone-beam focal-point orbit does not satisfy Tuy’s condition. If we draw a plane that cuts through the object above (or below) the orbit plane and is parallel to the orbit plane, this plane never intersects the circular orbit. The helical and circle-and-line orbits shown in Figure 5.9 satisfy Tuy’s condition, and they can be used to acquire cone-beam projections for exact image reconstruction. Modern CT scanners use a helical orbit to acquire projection data (see Figure 4.8).

5.3.1 Feldkamp’s algorithm

Feldkamp’s cone-beam algorithm is dedicated to the circular focal-point trajectory. It is an FBP algorithm and is easy to use. Because the circular trajectory does not satisfy Tuy’s condition, Feldkamp’s algorithm can only provide approximate reconstructions. Artifacts can appear especially at locations away from the orbit plane. The artifacts include reduction in activity in the regions away from the orbit plane, cross-talk between adjacent slices, and undershoots.

Feldkamp’s algorithm is practical and robust. Cone angle, as defined in Figure 5.10, is an important parameter in cone-beam imaging. If the cone angle is small, say less than 10°, this algorithm gives fairly good images. At the orbit plane, this algorithm is exact. This algorithm also gives an exact reconstruction if the object is constant in the axial direction (e.g., a tall cylinder).

image

Fig. 5.10: The coordinate system for Feldkamp’s cone-beam algorithm.

Feldkamp’s cone-beam algorithm (Section 5.4) is nothing but a modified fan-beam FBP algorithm (Section 3.4.1). It consists of the following steps:

(i) Pre-scale the projections by the cosine function cos α (see Figure 5.10 for the angle α).

(ii) Ramp filter the pre-scaled data row by row.

(iii) Cone-beam backproject the filtered data with a weighting function of the distance from the reconstruction point to the focal point.

5.3.2 Grangeat’s algorithm

Feldkamp’s algorithm converts the cone-beam image reconstruction problem to the fan-beam image reconstruction problem; Grangeat’s algorithm, on the other hand, converts it to the 3D Radon inversion problem (Section 5.2). Feldkamp’s algorithm is derived for the circular orbit; Grangeat’s algorithm can be applied to any orbit. If the orbit satisfies Tuy’s condition, Grangeat’s algorithm can provide an exact reconstruction.

Grangeat’s method first tries to convert cone-beam ray sums to plane integrals, by calculating the line integrals on the cone-beam detector (see Figure 5.11).

We observe that the line integral on the detector plane gives a weighted plane integral of the object with a special nonuniform weighting function 1/r. Here r is the distance to the cone-beam focal point. We must remove this 1/r weighting before we can obtain a regular unweighted plane integral.

From Figure 5.12 we observe that the angular differential dα multiplied by the distance r equals the tangential differential dt, that is, r dα = dt. If we take the angular derivative of the 1/r-weighted plane integral, the factor r cancels the 1/r weighting, and we obtain the derivative of the unweighted plane integral with respect to the variable t, which is in the normal direction of the plane, that is,

image

Fig. 5.11: Integration along a line on the cone-beam detector gives a weighted plane integral of the object.

image

Fig. 5.12: The differential dt in the tangent direction is equal to the angular differential dα times the distance r.

$$\frac{\partial}{\partial\alpha}\Bigl(\tfrac{1}{r}\text{-weighted plane integral}\Bigr)=\frac{\partial}{\partial t}\bigl(\text{Radon transform}\bigr).$$

Recall that the Radon inversion formula is the second-order derivative of the plane integral with respect to t, followed by the 3D Radon backprojection. Therefore, a cone-beam image reconstruction algorithm can be implemented as follows:

(i) Form all possible (all orientations and all locations) line integrals on each detector plane (see Figure 5.11), obtaining (1/r)-weighted plane integrals.

(ii) Perform the angular derivative on the results of (i).

(iii) Rebin the results of (ii) into the (s, θ̂) Radon space (see Figure 5.6).

(iv) Take the derivative of the results of (iii) with respect to t, in the normal direction of the plane.

(v) Perform the 3D Radon backprojection (see Figure 5.8).

We now expand on step (iii). For a practical focal-point orbit, the (s, θ̂) Radon space is not uniformly sampled, and data redundancy must be properly weighted. For example, if the value at a particular Radon space location (s, θ̂) is measured three times, then after rebinning this value needs to be divided by 3.
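Step (i) amounts to taking the 2D Radon transform of each detector image. A minimal sketch using scipy’s image rotation is shown below; the names and sampling choices are illustrative assumptions, and steps (ii)–(v) are not shown:

```python
import numpy as np
from scipy.ndimage import rotate

def detector_line_integrals(detector_img, angles_deg):
    """Step (i) of Grangeat's method, sketched: integrate the cone-beam
    detector image along lines of all orientations and offsets, i.e.,
    take its 2D Radon transform.  Column k holds the (1/r)-weighted
    plane integrals for line orientation angles_deg[k]; the angular
    derivative of step (ii) is not shown."""
    return np.stack(
        [rotate(detector_img, a, reshape=False, order=1).sum(axis=0)
         for a in angles_deg],
        axis=1)                      # shape: (num_offsets, num_angles)
```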

Grangeat’s algorithm is not an FBP algorithm, and it requires data rebinning, which can introduce large interpolation errors.

5.3.3 Katsevich’s algorithm

Katsevich’s cone-beam algorithm was initially developed for the helical orbit cone-beam geometry and was later extended to more general orbits. Katsevich’s algorithm is in the form of FBP, and the filtering can be made shift invariant. By shift invariant we mean that the filter is independent of the reconstruction location.

With a helical orbit (Figure 5.9), Tuy’s data sufficiency condition is satisfied. The main issue in developing an efficient cone-beam FBP algorithm is how to properly normalize the redundant data. Katsevich uses two restrictions to handle this issue.

It can be shown that for any point (x, y, z) within the volume surrounded by the helix, there is a unique line segment that passes through the point (x, y, z) and whose two endpoints touch the helix at two different points separated by less than one pitch, say, at sb and st as shown in Figure 5.13. This particular line segment is referred to as the π-segment (or π-line). The first restriction is to use only the cone-beam measurements acquired on the helix segment between sb and st.

The second restriction is the filtering direction, which handles the normalization of redundant data. In order to visualize the data redundancy problem, let us look at three cone-beam image reconstruction problems: (a) data are sufficient and not redundant; (b) data are insufficient; and (c) data are sufficient but redundant.

(a) The scanning cone-beam focal-point orbit is an arc (i.e., an incomplete circle). The point to be reconstructed is in the orbit plane and on the line that connects the two endpoints of the arc; this line is the π-segment of the reconstruction point. For the special case in which the object is only a point, the cone-beam measurement of this point at one focal-point position provides a set of plane integrals of the object. Those planes all contain the line that connects the focal point and the reconstruction point. After the focal point goes through the entire arc, all plane integrals are obtained. Recall the central slice theorem for the 3D Radon transform: an exact reconstruction requires that the plane integrals of the object be available for all directions θ̂, as indicated in Figure 5.14 (top).

image

Fig. 5.13: For any point (x, y, z) inside the helix there is one and only one π-segment.

image

Fig. 5.14: Top: At one focal-point position, the directional vectors trace a unit circle. A directional vector represents a measured plane integral. The plane containing this unit circle is perpendicular to the line that connects the reconstruction point and the focal point. Bottom: When the focal point travels through the entire arc, the directional vectors trace a full unit sphere.

The unit vector θ̂ traces a circle (let us call it a θ̂-circle) in a plane that is perpendicular to the orbit plane; the line connecting the focal point and the reconstruction point is normal to this plane. Thus, every focal point on the arc orbit corresponds to a θ̂-circle. Let the focal point travel through the entire arc; then the corresponding θ̂-circles form a complete unit sphere, which we can call a θ̂-sphere (see Figure 5.14, bottom).

(b) If the same arc orbit as in (a) is used and the object to be reconstructed is a point above the orbit plane, then the data are insufficient. When we draw the θ̂-circle for each focal-point position, the θ̂-circle is not in a vertical plane, but in a plane that has a slant angle. If we let the focal point travel through the entire arc, the corresponding θ̂-circles do not form a complete unit sphere anymore – both the Arctic Circle and the Antarctic Circle regions are missing (see Figure 5.15).

image

Fig. 5.15: If the reconstruction point is above the orbit plane, the unit sphere is not completely covered by all the directional vectors.

image

Fig. 5.16: For the helical orbit, the unit sphere is completely measured. Near the North and South Poles, a small region is covered (i.e., measured) three times.

(c) Now we consider a helical orbit with the reconstruction point inside. We determine the π-segment for the reconstruction point and find its endpoints. The segment of the helical orbit between the π-segment endpoints is shown in Figure 5.16.

In Figure 5.16, the unit sphere is fully measured. In fact, it is over-measured: the small triangle-like regions near the North and South Poles are measured three times. Let us look at this situation in another way. Draw a plane passing through the reconstruction point. In most cases, the plane intersects the helix segment at one point. However, there is a small chance that the plane intersects the helix segment at three points (see the side view of the helix in Figure 5.17).

If a plane intersects the orbit three times, we must normalize the data by assigning a proper weight to each measurement; the weights must sum to 1. Common knowledge teaches us that we should use all available data and weight each redundant measurement by the inverse of its noise variance. However, in order to derive a shift-invariant FBP algorithm, we need to do something against common sense. In Katsevich’s algorithm, each measurement is weighted by either +1 or −1. If a plane is measured once, we must make sure that it is weighted by +1. If a plane is measured three times, we need to make sure that two of the measurements are weighted by +1 and the third is weighted by −1. In other words, we keep one and throw the other two away. Ouch! Further discussion of the weighting and filtering is given in the next section.

image

Fig. 5.17: The π-segment defines a section of the helix. A cutting plane is a plane passing through the reconstruction point. The cutting plane intersects the helix section either one or three times.

In Katsevich’s algorithm, the normalization issue is taken care of by selecting a proper filtering direction. Here, filtering means performing the Hilbert transform. After the filtering direction and the normalization issues have been taken care of, Katsevich’s algorithm is implemented with the following steps:

(i) Take the derivative of the cone-beam data with respect to the orbit parameter along the helix orbit.

(ii) Perform the Hilbert transform along the directions that have been carefully selected.

(iii) Perform a cone-beam backprojection with a weighting function, similar to the backprojection in Feldkamp’s algorithm.

Some later versions of the algorithm replace the derivative with respect to the orbit parameter by a partial derivative with respect to the detector coordinates.
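The Hilbert filtering in step (ii) is commonly realized with FFTs. Below is a minimal numpy sketch of a 1D Hilbert transform applied along rows; in an actual Katsevich implementation the data must first be resampled so that the carefully selected filtering lines become rows, a step omitted here:

```python
import numpy as np

def hilbert_rows(data):
    """FFT-based Hilbert transform along the last axis, a sketch of the
    filtering in step (ii).  A real Katsevich implementation first
    resamples the data so that the selected filtering lines become rows;
    that resampling is omitted here."""
    n = data.shape[-1]
    H = -1j * np.sign(np.fft.fftfreq(n))     # Hilbert transfer function
    return np.fft.ifft(np.fft.fft(data, axis=-1) * H, axis=-1).real
```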

5.4 Mathematical expressions

Some 3D image reconstruction algorithms are presented here without proofs. For the 3D parallel line-integral projections, we have a backprojection-then-filtering algorithm and FBP algorithms, which are not unique. For the parallel plane-integral projections (i.e., the Radon transform), we also have a backprojection-then-filtering algorithm and an FBP algorithm which is the Radon inversion formula.

For cone-beam projections, Feldkamp’s circular-orbit algorithm and Katsevich’s helical-orbit algorithm are highlighted, because they are in the form of convolution followed by cone-beam backprojection. Tuy’s relation and Grangeat’s relation are also discussed in this section.

5.4.1 Backprojection-then-filtering for parallel line-integral data

For this type of algorithm, the projections are backprojected into the image domain first, obtaining b(x, y, z). Then, the 3D Fourier transform is applied to b(x, y, z), obtaining B(ωx, ωy, ωz). Next, B(ωx, ωy, ωz) is multiplied by a Fourier domain filter G(ωx, ωy, ωz), obtaining F(ωx, ωy, ωz) = B(ωx, ωy, ωz) G(ωx, ωy, ωz). Finally, the 3D inverse Fourier transform is applied to F(ωx, ωy, ωz) to obtain the reconstructed image f(x, y, z). Here, the filter transfer function G(ωx, ωy, ωz) depends on the imaging geometry. Some of the imaging geometries are shown in Figure 5.5, where the trajectories are displayed as shaded regions. Let Ω denote the region occupied by the trajectory on the unit sphere.

When Ω = Ω4π, that is, when the trajectory region is the entire unit sphere, G is a ramp filter:

$$G(\omega_x,\omega_y,\omega_z)=\frac{\sqrt{\omega_x^2+\omega_y^2+\omega_z^2}}{\pi}.$$

If Ω is not the full sphere, the ramp filter is modified by the geometry of Ω. The general form of G is

$$G(\omega_x,\omega_y,\omega_z)=\frac{\sqrt{\omega_x^2+\omega_y^2+\omega_z^2}}{D(\hat\theta)},$$

where D(θ̂) is half of the arc length of the intersection of a great circle with Ω. The normal direction of the great circle in the Fourier domain is θ̂, where θ̂ is the direction from the origin to the point (ωx, ωy, ωz).

When Ω = Ωψ is the region shown in Figure 5.18, D(θ̂) is the arc length γ, which is orientation dependent. Using the geometry, we have γ = π if θ ≤ ψ, and sin(γ/2) = sin ψ / sin θ if θ > ψ.
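A small numeric helper, sketched directly from these two cases (the function name and the numerical guard are illustrative assumptions):

```python
import numpy as np

def arc_length_gamma(theta, psi):
    """Arc length gamma used as D in the filter G for the geometry of
    Figure 5.18: gamma = pi if theta <= psi, otherwise
    gamma = 2 * arcsin(sin(psi) / sin(theta))."""
    theta = np.asarray(theta, dtype=float)
    ratio = np.clip(np.sin(psi) / np.maximum(np.sin(theta), 1e-12), 0.0, 1.0)
    return np.where(theta <= psi, np.pi, 2.0 * np.arcsin(ratio))
```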

5.4.2 FBP algorithm for parallel line-integral data

In the FBP algorithm, we need to find a 2D filter transfer function for each orientation θ̂ ∈ Ω. If Ω = Ω4π, this filter is a ramp filter

image

Fig. 5.18: The definition of Kψ and the arc length γ, which is part of a great circle.

$$Q(\omega_u,\omega_v)=\frac{\sqrt{\omega_u^2+\omega_v^2}}{\pi},$$

which is the same for all orientations θ̂ ∈ Ω.

If Ω is not the full sphere Ω4π, this ramp filter Q(ωu, ωv) becomes orientation dependent, written Qθ̂(ωu, ωv) (note the added subscript θ̂), and can be obtained by selecting the “central slice” of G(ωx, ωy, ωz) with the normal direction θ̂. Here, the u-axis and v-axis are defined by the unit vectors û and v̂, respectively. The three vectors θ̂, û, and v̂ form an orthogonal system in 3D, where θ̂ represents the direction of a group of parallel rays perpendicular to the detector, û and v̂ lie in the detector plane, and û also lies in the xy plane of the global (x, y, z) system.

If we consider the case of Ω = Ωψ shown in Figure 5.18, Qθ̂(ωu, ωv) has two expressions in two separate regions (see Figure 5.19):

$$Q_{\hat\theta}(\omega_u,\omega_v)=\frac{\sqrt{\omega_u^2+\omega_v^2}}{\pi}$$

if

$$0\le\sqrt{\omega_u^2+\omega_v^2\cos^2\theta}\le\sqrt{\omega_u^2+\omega_v^2}\,\sin\psi,$$

and

$$Q_{\hat\theta}(\omega_u,\omega_v)=\frac{\sqrt{\omega_u^2+\omega_v^2}}{2\sin^{-1}\!\left(\dfrac{\sqrt{\omega_u^2+\omega_v^2}\,\sin\psi}{\sqrt{\omega_u^2+\omega_v^2\cos^2\theta}}\right)}$$

if

$$\sqrt{\omega_u^2+\omega_v^2}\,\sin\psi<\sqrt{\omega_u^2+\omega_v^2\cos^2\theta}\le\sqrt{\omega_u^2+\omega_v^2}.$$
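The two-region expression translates directly into code. Below is a minimal numpy sketch that evaluates Qθ̂ on a frequency grid; the function name and the small numerical guards are illustrative assumptions:

```python
import numpy as np

def Q_theta(wu, wv, theta, psi):
    """Evaluate the two-region filter Q_theta(wu, wv) for the geometry
    Omega_psi on frequency grids wu, wv; theta is the polar angle of the
    projection direction theta_hat."""
    rho = np.sqrt(wu**2 + wv**2)
    slant = np.sqrt(wu**2 + wv**2 * np.cos(theta)**2)
    ratio = np.clip(rho * np.sin(psi) / np.maximum(slant, 1e-12), 0.0, 1.0)
    gamma = 2.0 * np.arcsin(ratio)           # denominator of region 2
    return np.where(slant <= rho * np.sin(psi),
                    rho / np.pi,             # region 1: D = pi
                    rho / np.maximum(gamma, 1e-12))
```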

image

Fig. 5.19: The 2D filter transfer function Q for the imaging geometry Kψ.

5.4.3 Three-dimensional Radon inversion formula (FBP algorithm)

The 3D Radon inversion formula can only be applied to 3D plane-integral data:

$$f(x,y,z)=-\frac{1}{8\pi^2}\iint_{4\pi}\left.\frac{\partial^2 p(s,\hat\theta)}{\partial s^2}\right|_{s=\vec{x}\cdot\hat\theta}\sin\theta\,d\theta\,d\phi,$$

where θ̂ = (sin θ cos φ, sin θ sin φ, cos θ)ᵀ and x⃗ = (x, y, z)ᵀ. The coordinate systems are shown in Figure 5.20. If we backproject over a 2π solid angle, then 1/(8π²) should be replaced by 1/(4π²).

5.4.4 Three-dimensional backprojection-then-filtering algorithm for Radon data

Let the backprojected image be

$$b(x,y,z)=\iint_{2\pi}p(s,\hat\theta)\Big|_{s=\vec{x}\cdot\hat\theta}\sin\theta\,d\theta\,d\phi.$$

For the 3D Radon case, the Fourier transform of the projection/backprojection PSF is 1/(ωx² + ωy² + ωz²). Therefore, the Fourier transform of the image f(x, y, z) can be obtained as

$$F(\omega_x,\omega_y,\omega_z)=B(\omega_x,\omega_y,\omega_z)\times(\omega_x^2+\omega_y^2+\omega_z^2).$$

In the spatial domain, the backprojection-then-filtering algorithm for Radon data can be expressed as

$$f(x,y,z)=\Delta b(x,y,z)=\frac{\partial^2 b(x,y,z)}{\partial x^2}+\frac{\partial^2 b(x,y,z)}{\partial y^2}+\frac{\partial^2 b(x,y,z)}{\partial z^2},$$

where Δ is the Laplacian operator.
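A minimal numpy sketch of this spatial-domain filtering, using a standard 7-point finite-difference Laplacian (the stencil choice and the voxel spacing h are illustrative assumptions):

```python
import numpy as np

def laplacian_filter(b, h=1.0):
    """Apply the 3D Laplacian to the backprojected Radon image b using a
    standard 7-point finite-difference stencil (interior voxels only);
    h is the voxel spacing."""
    lap = np.zeros_like(b)
    lap[1:-1, 1:-1, 1:-1] = (
        b[2:, 1:-1, 1:-1] + b[:-2, 1:-1, 1:-1] +
        b[1:-1, 2:, 1:-1] + b[1:-1, :-2, 1:-1] +
        b[1:-1, 1:-1, 2:] + b[1:-1, 1:-1, :-2] -
        6.0 * b[1:-1, 1:-1, 1:-1]) / h**2
    return lap
```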

image

Fig. 5.20: The coordinate system for the 3D Radon inversion formula.

5.4.5 Feldkamp’s algorithm

First, let us write down the fan-beam FBP reconstruction algorithm for the flat detector and express the image in polar coordinates (see Figure 5.21):

$$f(r,\varphi)=\frac{1}{2}\int_0^{2\pi}\left(\frac{D}{D-s}\right)^2\int_{-\infty}^{\infty}\frac{D}{\sqrt{D^2+l^2}}\,g(l,\beta)\,h(l'-l)\,dl\,d\beta,$$

where h(l) is the convolution kernel of the ramp filter, D is the focal length, g(l, β) is the fan-beam projection, l is the linear coordinate on the detector, s = r sin(φ − β), and l′ = D r cos(φ − β)/(D − r sin(φ − β)). In this formula, D/√(D² + l²) is the cosine of the incidence angle. When this algorithm is implemented, we first multiply the projections by this cosine function. Then we apply the ramp filter to the pre-scaled data. Finally, we perform the fan-beam backprojection with the distance-dependent weighting [D/(D − s)]², where s is the distance from the reconstruction point to the virtual detector, which is placed at the center of rotation for convenience.
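A common discrete choice for the convolution kernel h(l) is the band-limited (Ram-Lak) ramp kernel; a minimal numpy sketch (one standard option, not the only one):

```python
import numpy as np

def ramp_kernel(n_taps, dl=1.0):
    """Band-limited ramp-filter convolution kernel h(l) at sample spacing
    dl (the classic Ram-Lak kernel).  Use an odd n_taps so the kernel is
    centered at l = 0."""
    n = np.arange(n_taps) - n_taps // 2
    h = np.zeros(n_taps)
    h[n == 0] = 1.0 / (4.0 * dl**2)          # center tap
    odd = (n % 2 == 1)                       # odd offsets (both signs)
    h[odd] = -1.0 / (np.pi * n[odd] * dl)**2
    return h

# Example: q_row = np.convolve(row, ramp_kernel(129, dl), mode="same") * dl
```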

Feldkamp’s algorithm is almost the same as this fan-beam algorithm, except that the backprojection is a cone-beam backprojection. The ramp filtering is performed row by row; no filtering is performed in the axial direction. Let the axial direction be the z direction (see Figure 5.22); then

$$f(r,\varphi,z)=\frac{1}{2}\int_0^{2\pi}\left(\frac{D}{D-s}\right)^2\int_{-\infty}^{\infty}\frac{D}{\sqrt{D^2+l^2+\hat z^2}}\,g(l,\hat z,\beta)\,h(l'-l)\,dl\,d\beta.$$

In this formula, g(l, ẑ, β) is the cone-beam projection, D/√(D² + l² + ẑ²) is the cosine of the incidence angle, and l, ẑ, and β are defined in Figure 5.22.
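As a concrete illustration, the following numpy sketch strings the three steps together directly from this formula. It reuses ramp_kernel from the sketch above; the array layout, sampling grids, and nearest-neighbor interpolation are simplifying assumptions, not a definitive implementation:

```python
import numpy as np

def fdk_reconstruct(proj, betas, ls, zs, D, xs):
    """Sketch of Feldkamp's algorithm built directly on the formula
    above.  proj[k] holds g(l, z_hat, beta_k) on the flat virtual
    detector grid ls x zs placed at the rotation center, D is the focal
    length, and xs is an (N, 3) array of reconstruction points."""
    dl, dz = ls[1] - ls[0], zs[1] - zs[0]
    dbeta = betas[1] - betas[0]
    L, Z = np.meshgrid(ls, zs, indexing="ij")
    cosine = D / np.sqrt(D**2 + L**2 + Z**2)      # step (i): cosine pre-scaling
    h = ramp_kernel(2 * len(ls) + 1, dl)
    r = np.hypot(xs[:, 0], xs[:, 1])
    phi = np.arctan2(xs[:, 1], xs[:, 0])
    f = np.zeros(len(xs))
    for g, beta in zip(proj, betas):
        # step (ii): row-by-row ramp filtering of the pre-scaled data
        q = np.apply_along_axis(
            lambda col: np.convolve(col, h, mode="same") * dl, 0, g * cosine)
        s = r * np.sin(phi - beta)                # distance to the virtual detector
        lp = D * r * np.cos(phi - beta) / (D - s) # detector coordinate l'
        zp = D * xs[:, 2] / (D - s)               # detector coordinate z_hat'
        il = np.clip(np.rint((lp - ls[0]) / dl).astype(int), 0, len(ls) - 1)
        iz = np.clip(np.rint((zp - zs[0]) / dz).astype(int), 0, len(zs) - 1)
        # step (iii): weighted cone-beam backprojection (nearest neighbor)
        f += (D / (D - s))**2 * q[il, iz]
    return 0.5 * f * dbeta
```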

image

Fig. 5.21: The coordinate system for the flat-detector fan-beam imaging geometry.

image

Fig. 5.22: The coordinate system for Feldkamp’s cone-beam algorithm.

5.4.6 Tuy’s relationship

Tuy published a paper in 1983 in which he established a relationship between the cone-beam data and the original image; this relationship plays a role similar to that of the central slice theorem. Let us derive this relationship in this section.

The object to be imaged is f. Let the cone-beam focal-point trajectory be denoted by a vector Φ, and let α be a unit vector indicating the direction of a projection ray. The cone-beam data can then be expressed as

$$g(\Phi,\alpha)=\int_0^\infty f(\Phi+t\alpha)\,dt,\qquad\|\alpha\|=1.$$

We now replace the unit vector α by a general 3D vector x, and the above 2D projection becomes an extended 3D function:

$$g(\Phi,x)=\int_0^\infty f(\Phi+tx)\,dt.$$

Taking the 3D Fourier transform of this function with respect to x, and using β as the frequency domain variable, we have

$$G(\Phi,\beta)=\int g(\Phi,x)\,e^{-2\pi i\,x\cdot\beta}\,dx=\int\!\!\int_0^\infty f(\Phi+tx)\,e^{-2\pi i\,x\cdot\beta}\,dt\,dx.$$
