1 Basic principles of tomography

This is an introductory chapter that presents the fundamental concepts in tomography. It first defines what tomography is and shows, using two simple examples, how a tomographic image can be obtained from its measurements. The concept of projection is then explained. Next, the filtered backprojection (FBP) image reconstruction method is introduced using a point source example. Finally, the concept of backprojection is discussed.

1.1 Tomography

The Greek word tomos means a section, a slice, or a cut. Tomography is the process of imaging a cross section. For example, if you are given a watermelon and would like to see inside, the easiest way to do so is to cut it open (Figure 1.1). Clearly, this approach to obtaining a cross-sectional image is not a good idea in medicine. Nobody wants to be cut open in order to see what is inside.

Let us look at another example. You are visiting a small park, which is closed for maintenance. You walk around the park and take a few pictures of it. After you get home, you can use your pictures to make a map of the park. To make your life easier, let us assume that there are two large trees in the park and that you take two pictures, one from the east and one from the south (Figure 1.2, left). Using these two pictures, you can map out where the two trees are (Figure 1.2, right). This can be done by positioning the pictures at the orientations at which they were taken, drawing a line from each tree, and finding the intersections. If you have enough pictures, it is not hard to find out where the trees are.

Tomography is a mathematical problem. Let us do a fun mathematical exercise here. We have a 2 × 2 matrix. We do not tell you what it is yet. Here are the hints: The sum of the first row is 5, the sum of the second row is 4, the sum of the first column is 7, and the sum of the second column is 2 (see Figure 1.3). Now, you figure out what this 2 × 2 matrix is.

You can solve this puzzle by setting up a system of linear equations with the matrix entries as unknowns:

$$x_1 + x_2 = 5, \quad x_3 + x_4 = 4, \quad x_1 + x_3 = 7, \quad \text{and} \quad x_2 + x_4 = 2.$$

Solving these equations, you will get

$$x_1 = 3, \quad x_2 = 2, \quad x_3 = 4, \quad \text{and} \quad x_4 = 0.$$

Congratulations! You have just mathematically solved a tomography problem. Usually, a tomography problem is solved mathematically; hence, the term computed tomography (CT). The row sum or column sum in this example can be generalized as a ray sum, a line integral, or a projection. The procedure to produce a tomographic image from projections is called image reconstruction.
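
For readers who like to verify things numerically, here is a minimal sketch (assuming Python with NumPy, which is not part of the text) that builds this 2 × 2 matrix and computes its row and column sums, i.e., the four ray sums used as hints above.

```python
import numpy as np

# The 2 x 2 "image" from the puzzle (Figure 1.3).
image = np.array([[3, 2],
                  [4, 0]])

row_sums = image.sum(axis=1)   # ray sums along rows    -> [5 4]
col_sums = image.sum(axis=0)   # ray sums along columns -> [7 2]

print(row_sums, col_sums)      # [5 4] [7 2]
```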


Fig. 1.1: Cutting open to see what is inside.


Fig. 1.2: Reconstruct a map from two pictures.


Fig. 1.3: A 2 × 2 matrix puzzle.

What if the tomography problem gets more complicated? If there are many more trees in the park, taking only two pictures may not provide us enough information to map out the park. If the matrix size is larger than 2 × 2, the row sum and column sum alone do not form enough equations to solve for the matrix entries.

We need more views! For the matrix identification case, we need to sum the matrix diagonally at various angles. In turn, more sophisticated mathematics is required to solve the tomography problem.

1.2 Projection

In order to understand the concept of projection (ray sum, line integral, or Radon transform), we will present more examples here.

In the first example, the object is a uniform disc on the xy plane, the center of the disc is at the origin, and the (linear) density of the disc is ρ (see Figure 1.4). The projection (i.e., the line integral) of this object can be calculated as the chord length t times the linear density ρ. That is,

$$p(s) = \rho t = 2\rho\sqrt{R^2 - s^2} \quad \text{if } |s| < R; \qquad p(s) = 0 \quad \text{otherwise}.$$

In this particular example, the projection p(s) is the same for any view angle θ, which is the orientation of the detector.
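
A minimal numerical sketch of this chord-length formula is given below (Python/NumPy assumed; the radius, density, and detector sampling are arbitrary illustrative choices, not values from the text).

```python
import numpy as np

def disc_projection(s, R=1.0, rho=1.0):
    """Line integral through a centered uniform disc: 2*rho*sqrt(R^2 - s^2) for |s| < R, else 0."""
    s = np.asarray(s, dtype=float)
    p = np.zeros_like(s)
    inside = np.abs(s) < R
    p[inside] = 2.0 * rho * np.sqrt(R**2 - s[inside]**2)
    return p

s = np.linspace(-1.5, 1.5, 7)
print(disc_projection(s))   # zero outside the disc, maximum 2*rho*R at s = 0
```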

If the object is more complicated, the projection p(s, θ) is angle θ dependent (see Figure 1.5).

In the next example, we use a point source on the y-axis, at a distance r from the origin, to further illustrate the angle θ dependency of the projection p(s, θ) (see Figure 1.6). Here we pay attention to the location s of the spike on the one-dimensional detector, which can be evaluated as

$$s = r\sin\theta.$$

This is a sine function with respect to θ. If you display this point source projection data set p(s, θ) in the s–θ coordinate system (see Figure 1.6, right), you will see the trajectory of a sine wave. Because of this observation, people refer to the projection data set as a sinogram.
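
The sinusoidal trajectory can be generated directly; the sketch below (Python/NumPy assumed) samples s = r sin θ over a full rotation, with the source distance r chosen arbitrarily.

```python
import numpy as np

r = 0.6                                      # distance of the point source from the origin
theta = np.linspace(0.0, 2.0 * np.pi, 361)   # view angles over a full rotation
s = r * np.sin(theta)                        # detector location of the spike at each angle

# Plotting s versus theta traces one period of a sine wave: the sinogram of a point source.
print(s.min(), s.max())                      # -0.6 0.6
```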

The fourth example is a discrete object, a 2 × 2 matrix similar to that in Figure 1.3. The detector is also discrete, with four detector bins (see Figure 1.7). Each matrix element represents a uniform pixel, and xi (i = 1, 2, 3, 4) is the linear density in the ith pixel. Here we would like to find the line integrals p(i, θ) of the matrix at a view angle θ. The quantity aij (i = 1, 2, 3, 4 and j = 1, 2, 3, 4) is the length of the intersection of the path towards detector bin i with pixel j, and aij = 0 if the jth pixel is not on the path to the ith detector bin. The projection p(i, θ) is calculated as


Fig. 1.4: The line integral across the disc is the length of a chord times the density.


Fig. 1.5: The projections are usually different at different view angles.


Fig. 1.6: A sinogram is a representation of the projections on the s–θ plane.


Fig. 1.7: The projections are weighted by the line length within each pixel.

$$p(i,\theta) = a_{i1}x_1 + a_{i2}x_2 + a_{i3}x_3 + a_{i4}x_4 \quad \text{for } i = 1, 2, 3, 4.$$
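
In matrix form, these four equations are just p = A x. The sketch below (Python/NumPy assumed) evaluates them for the two axis-aligned views of the matrix in Figure 1.3, for which every nonzero intersection length equals one pixel width (taken as 1); the weights for an oblique view would differ.

```python
import numpy as np

x = np.array([3, 2, 4, 0])           # pixel values x1..x4 (Figure 1.3)

# Each row holds the intersection lengths a_i1..a_i4 for one detector bin.
# For axis-aligned rays the lengths are either 0 or 1 (one full pixel width).
A = np.array([[1, 0, 1, 0],          # vertical ray through the left column:  x1 + x3
              [0, 1, 0, 1],          # vertical ray through the right column: x2 + x4
              [1, 1, 0, 0],          # horizontal ray through the top row:    x1 + x2
              [0, 0, 1, 1]])         # horizontal ray through the bottom row: x3 + x4

p = A @ x
print(p)                             # [7 2 5 4], i.e., the column and row sums
```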

1.3 Image reconstruction

In this section, we illustrate the common image reconstruction strategy by considering a point source. Let us consider an empty two-dimensional (2D) plane with an xy coordinate system, and place a small dot with a value, say 1, somewhere on this plane, not necessarily at the origin (see Figure 1.8). We now imagine that there is a detector (e.g., a camera) rotating around the origin, acquiring projections. At a particular angle θ, we denote the projection as p(s, θ), where s is the coordinate on the detector.


Fig. 1.8: Projection of a point source object.

The projection p(s, θ) is formed by drawing a line across the xy plane, orthogonal to the detector and meeting the detector at location s. Then we evaluate the line integral along this line, and the integral value is p(s, θ). In our example, if the line does not touch the point source, p(s, θ) is zero. If the line passes through the point source, then p(s, θ) is 1.

Now we are going to reconstruct the image using the projections p(s, θ). Our strategy is similar to that in the tree-map example in Section 1.1, where we drew a line from each tree in each picture and found the locations of the intersections. In image reconstruction, we need to find not only the location but also the intensity value of the object of interest.

As shown in Figure 1.9(a), a number of projections are taken from the point source at various view angles. We attempt to reconstruct the point source image in the following manner.

When you look at the projection p(s, θ) at one view θ, you see a spike of intensity 1. This spike is the sum of all activity along the projection path. To reconstruct the image, you must redistribute the activity in the spike back along its original path. The problem is that you do not know where along the path to put more activity and where to put less. Before you give up, you decide to put an equal amount of activity everywhere along the path, and that amount is the magnitude of the projection spike (see Figure 1.9b). If you do that for a few more angles, you get the situation shown in Figure 1.9(c). Due to the superposition effect, there will be a tall spike in the xy plane at the location of the point source.

What you have just done is a standard mathematical procedure called backprojection. If you backproject from all angles from 0° to 360°, you will produce an image similar to the one shown in Figure 1.9(d).

After backprojection, the image is still not quite the same as the original image but rather is a blurred version of it. To eliminate the blurring, we introduce negative “wings” around the spike in the projections before backprojection (see Figure 1.9e). The procedure of adding negative wings around the spike is called filtering. The use of the negative wings results in a clear image (see Figure 1.9f). This image reconstruction algorithm is very common and is referred to as an FBP algorithm.
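The sketch below (Python with NumPy and SciPy assumed; the grid size, the rotation-based projector built on scipy.ndimage.rotate, and the simple FFT ramp filter are illustrative choices, not the author's implementation) backprojects a point-source sinogram with and without filtering, qualitatively reproducing the blurred image of Figure 1.9(d) and the sharp image of Figure 1.9(f).

```python
import numpy as np
from scipy.ndimage import rotate

N = 128
angles = np.arange(0.0, 180.0, 1.0)              # view angles in degrees

phantom = np.zeros((N, N))                        # point source, off center
phantom[N // 2 + 20, N // 2 + 10] = 1.0

def project(image, angles):
    """Parallel-beam projections: rotate the image, then sum over the vertical direction."""
    return np.array([rotate(image, ang, reshape=False, order=1).sum(axis=0)
                     for ang in angles])

def backproject(sino, angles, N):
    """Smear each 1D projection uniformly back along its own ray direction and sum."""
    recon = np.zeros((N, N))
    for p, ang in zip(sino, angles):
        smear = np.tile(p, (N, 1))                # constant along the ray direction
        recon += rotate(smear, -ang, reshape=False, order=1)
    return recon / len(angles)

def ramp_filter(sino):
    """Multiply each projection by |frequency| in the Fourier domain (creates the negative wings)."""
    freqs = np.abs(np.fft.fftfreq(sino.shape[1]))
    return np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * freqs, axis=1))

sino = project(phantom, angles)
blurred = backproject(sino, angles, N)            # roughly a 1/r blur around the point
sharp = backproject(ramp_filter(sino), angles, N) # close to the original point source
```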


Fig. 1.9: Reconstruction of a point source image by backprojecting unfiltered and filtered data. (a) Project the point source; (b) Backproject from one view; (c) Backproject from a few views; (d) Backproject from all views; (e) Add negative wings; and (f) Backproject modified data.

In this section, we use a point source to illustrate the usefulness of filtering and backprojection with many views in image reconstruction. We must point out that if the object is a point source, we only need two views to reconstruct the image, just like the map-making example in Section 1.1.

1.4 Backprojection

One must first define projection before backprojection can be defined. We must make it clear that backprojection is not the inverse of projection. Backprojection alone is not sufficient to reconstruct an image. After you backproject the data, you do not get the original image back. We will illustrate this point by a simple discrete 2 × 2 problem below (see Figure 1.10).

The original image is defined as x1 = 3, x2 = 2, x3 = 4, and x4 = 0. The associated projections are p(1, 0°) = 7, p(2, 0°) = 2, p(1, 270°) = 5, and p(2, 270°) = 4. The projections are formed one view at a time (see Figure 1.10). The backprojected image is also formed one view at a time. The final backprojected image is the summation of the backprojections from all views, as shown in Figure 1.11. Please note that the backprojected image is different from the original image.


Fig. 1.10: View-by-view projection.


Fig. 1.11: View-by-view backprojection, then sum all backprojected images.

Even though the backprojected image is not the original image, they are closely related. Their relationship will be further discussed in Chapter 2.

1.5 Mathematical expressions

In every chapter we dedicate a section to mathematical expressions. These expressions help advanced readers better grasp the main concepts discussed in the chapter. Mathematical expressions for projection and backprojection in 2D parallel-beam imaging are presented in this section. The Dirac δ-function plays an important role in analytic algorithm development; its definition and some of its properties are also covered here.

1.5.1 Projection

Let f(x, y) be a density function in the xy plane. The projection (ray sum, line integral, or Radon transform) p(s, θ) has many equivalent expressions such as

$$\begin{aligned}
p(s,\theta) &= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x,y)\,\delta(x\cos\theta + y\sin\theta - s)\,dx\,dy,\\
p(s,\theta) &= \int\!\!\int f(\vec{x})\,\delta(\vec{x}\cdot\vec{\theta} - s)\,d\vec{x},\\
p(s,\theta) &= \int_{-\infty}^{\infty} f(s\cos\theta - t\sin\theta,\; s\sin\theta + t\cos\theta)\,dt,\\
p(s,\theta) &= \int_{-\infty}^{\infty} f(s\vec{\theta} + t\vec{\theta}^{\perp})\,dt,\\
p(s,\theta) &= \int_{-\infty}^{\infty} f_{\theta}(s, t)\,dt,
\end{aligned}$$

where $\vec{x} = (x, y)$, $\vec{\theta} = (\cos\theta, \sin\theta)$, $\vec{\theta}^{\perp} = (-\sin\theta, \cos\theta)$, $\delta$ is the Dirac delta function, and $f_{\theta}$ is the function $f$ rotated by $\theta$ clockwise. We assume that the detector rotates counterclockwise around the object or, equivalently, that the object rotates clockwise while the detector stays still. The coordinate systems are shown in Figure 1.12.

1.5.2 Backprojection

Backprojection is the adjoint of projection. Here, “adjoint” is a mathematical term; it refers to the conjugate transpose in linear algebra. For a real matrix A, its adjoint is simply the transposed matrix $A^T$. In the discrete case, as in Section 1.4, the projection is

$$P = AX,$$


Fig. 1.12: Coordinate systems for 2D parallel-beam imaging.

where X represents an image written in column form. For example, the 2 × 2 image is expressed as (see Figure 1.3 or 1.10)

$$X = [x_1, x_2, x_3, x_4]^T.$$

The column matrix P represents the projections. If we use the example in Figure 1.10,

$$P = [p(1, 0^\circ),\; p(2, 0^\circ),\; p(1, 270^\circ),\; p(2, 270^\circ)]^T = [7, 2, 5, 4]^T.$$

The matrix A is the projection operator. Its entries aij are defined in Figure 1.7. Using the example of Figure 1.10, the backprojection of P can be calculated using matrix multiplication as

$$B = A^T P = \begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \end{bmatrix}^T \begin{bmatrix} 7 \\ 2 \\ 5 \\ 4 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 \end{bmatrix} \begin{bmatrix} 7 \\ 2 \\ 5 \\ 4 \end{bmatrix} = \begin{bmatrix} 12 \\ 7 \\ 11 \\ 6 \end{bmatrix},$$

which is the same as the result obtained “graphically” in Figure 1.11.
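
A direct numerical check of this matrix calculation (a sketch assuming Python with NumPy):

```python
import numpy as np

A = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 1, 0, 0],
              [0, 0, 1, 1]])      # projection operator of Figure 1.10
P = np.array([7, 2, 5, 4])        # measured projections

B = A.T @ P                       # backprojection = adjoint (transpose) applied to P
print(B)                          # [12  7 11  6], matching Figure 1.11

X = np.array([3, 2, 4, 0])        # the original image
print(np.array_equal(B, X))       # False: backprojection alone does not recover the image
```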

For the continuous case, the backprojection image b(x, y) can be expressed in the following equivalent ways:

$$\begin{aligned}
b(x,y) &= \int_{0}^{\pi} p(s,\theta)\big|_{s = x\cos\theta + y\sin\theta}\,d\theta,\\
b(x,y) &= \int_{0}^{\pi} p(s,\theta)\big|_{s = \vec{x}\cdot\vec{\theta}}\,d\theta,\\
b(x,y) &= \int_{0}^{\pi} p(\vec{x}\cdot\vec{\theta}, \theta)\,d\theta,\\
b(x,y) &= \frac{1}{2}\int_{0}^{2\pi} p(x\cos\theta + y\sin\theta, \theta)\,d\theta.
\end{aligned}$$

1.5.3 The Dirac δ-function

The Dirac δ-function is not a regular function that maps a value in the domain to a value in the range; it is a generalized function, or distribution. The δ-function can be defined in many ways. Here, we use a sequence of Gaussian functions to define it. Each Gaussian (see Figure 1.13) has unit area underneath its curve, and as the parameter n gets larger, the curve gets narrower and taller:

$$\left(\frac{n}{\pi}\right)^{1/2} e^{-nx^2}.$$


Fig. 1.13: Using a train of Gaussian functions to define the δ-function.

Let f(x) be a smooth function that is infinitely differentiable and satisfies $\lim_{x \to \pm\infty} x^N f(x) = 0$ for all $N$. Then the δ-function is defined implicitly as

$$\lim_{n\to\infty}\int_{-\infty}^{\infty}\left(\frac{n}{\pi}\right)^{1/2} e^{-nx^2} f(x)\,dx = \int_{-\infty}^{\infty}\delta(x)\, f(x)\,dx = f(0).$$
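
A quick numerical illustration of this limit (Python/NumPy assumed; the test function cos x and the values of n are arbitrary choices):

```python
import numpy as np

f = np.cos                                         # smooth test function with f(0) = 1
x = np.linspace(-10.0, 10.0, 400001)
dx = x[1] - x[0]

for n in [1, 10, 100, 1000]:
    g_n = np.sqrt(n / np.pi) * np.exp(-n * x**2)   # unit-area Gaussian, narrower as n grows
    print(n, np.sum(g_n * f(x)) * dx)              # tends to f(0) = 1 as n increases
```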

The δ-function has some properties:

$$\begin{aligned}
&\int_{-\infty}^{\infty}\delta(x - a)\, f(x)\,dx = \int_{-\infty}^{\infty}\delta(x)\, f(x + a)\,dx = f(a),\\
&\int_{-\infty}^{\infty}\delta(ax)\, f(x)\,dx = \frac{1}{|a|}\, f(0),\\
&\int_{-\infty}^{\infty}\delta^{(n)}(x)\, f(x)\,dx = (-1)^n f^{(n)}(0) \quad [\text{the } n\text{th-order derivative}],\\
&\delta(g(x))\, f(x) = f(x)\sum_{n}\frac{1}{|g'(\lambda_n)|}\,\delta(x - \lambda_n),
\end{aligned}$$

where the $\lambda_n$ are the zeros of $g(x)$.

In the 2D and 3D cases, $\delta(\vec{x}) = \delta(x)\delta(y)$ and $\delta(\vec{x}) = \delta(x)\delta(y)\delta(z)$, respectively. In the last property, $|g'|$ is replaced by $|\mathrm{grad}(g)| = \sqrt{(\partial g/\partial x)^2 + (\partial g/\partial y)^2}$ in the 2D case and by $\sqrt{(\partial g/\partial x)^2 + (\partial g/\partial y)^2 + (\partial g/\partial z)^2}$ in the 3D case.

In 2D imaging, we use a 2D δ-function $\delta(\vec{x} - \vec{x}_0)$ to represent a point source at location $\vec{x} = \vec{x}_0$. The Radon transform of $f(\vec{x}) = \delta(\vec{x} - \vec{x}_0) = \delta(x - x_0)\delta(y - y_0)$ is given as

$$\begin{aligned}
p(s,\theta) &= \int\!\!\int f(\vec{x})\,\delta(\vec{x}\cdot\vec{\theta} - s)\,d\vec{x}\\
&= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\delta(x - x_0)\,\delta(y - y_0)\,\delta(x\cos\theta + y\sin\theta - s)\,dx\,dy\\
&= \int_{-\infty}^{\infty}\delta(y - y_0)\left[\int_{-\infty}^{\infty}\delta(x - x_0)\,\delta(x\cos\theta + y\sin\theta - s)\,dx\right]dy\\
&= \int_{-\infty}^{\infty}\delta(y - y_0)\,\delta(x_0\cos\theta + y\sin\theta - s)\,dy\\
&= \delta(x_0\cos\theta + y_0\sin\theta - s),
\end{aligned}$$

which is a sinogram similar to that shown in Figure 1.6.

1.6 Worked examples

Example 1: If you see two separate trees in both views, can you uniquely reconstruct the map of the trees (see Figure 1.14)? If not, you may need to take more pictures. If you are only allowed to take one more picture, in which direction should you take it?

Solution

Both of the situations shown in Figure 1.15 are consistent with the two views.

If we take another picture at 45°, we are able to resolve the ambiguity.

Example 2: Find the projections of a uniform disc. The center of the disc is not at the center of detector rotation.

Solution

We already know that if the center of the disc is at the center of detector rotation, the projection can be evaluated as

$$p(s,\theta) = 2\rho\sqrt{R^2 - s^2} \quad \text{if } |s| < R; \qquad p(s,\theta) = 0 \quad \text{otherwise}.$$


Fig. 1.14: Two trees can be seen in both views.


Fig. 1.15: Two potential solutions for the mapping problem.


Fig. 1.16: Imaging of an off-centered disc.

Without loss of generality, we now assume that the center of the disc is on the positive x-axis with the coordinates (r, 0) (Figure 1.16).

For this new setup, we need to shift the projection data on the s-axis. The shifting distance is r cos θ. That is,

$$p(s,\theta) = 2\rho\sqrt{R^2 - (s - r\cos\theta)^2} \quad \text{if } |s - r\cos\theta| < R; \qquad p(s,\theta) = 0 \quad \text{otherwise}.$$
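
A sketch of this shifted profile (Python/NumPy assumed; R, ρ, and r are arbitrary illustrative values):

```python
import numpy as np

def offcenter_disc_projection(s, theta, R=1.0, rho=1.0, r=0.5):
    """Projection of a uniform disc of radius R centered at (r, 0): the centered profile
    2*rho*sqrt(R^2 - u^2) with u = s - r*cos(theta), i.e., shifted by r*cos(theta)."""
    u = np.asarray(s, dtype=float) - r * np.cos(theta)
    p = np.zeros_like(u)
    inside = np.abs(u) < R
    p[inside] = 2.0 * rho * np.sqrt(R**2 - u[inside]**2)
    return p

s = np.linspace(-2.0, 2.0, 9)
print(offcenter_disc_projection(s, theta=0.0))         # profile centered at s = r
print(offcenter_disc_projection(s, theta=np.pi / 2))   # profile centered at s = 0
```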

Example 3: Show that the parallel-beam data redundancy condition is p(s, θ)= p(–s, θ + π).

Proof. Using the projection definition in Section 1.5, we have

$$\begin{aligned}
p(-s,\theta + \pi) &= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x,y)\,\delta\big(x\cos(\theta + \pi) + y\sin(\theta + \pi) - (-s)\big)\,dx\,dy\\
&= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x,y)\,\delta(-x\cos\theta - y\sin\theta + s)\,dx\,dy\\
&= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x,y)\,\delta\big(-(x\cos\theta + y\sin\theta - s)\big)\,dx\,dy\\
&= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x,y)\,\delta(x\cos\theta + y\sin\theta - s)\,dx\,dy\\
&= p(s,\theta).
\end{aligned}$$

[The δ-function is an even function].
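
A small numerical check of this redundancy using the point-source sinogram of Section 1.5.3 (Python/NumPy assumed; the source location is arbitrary): the spike position for view θ + π is the mirror image of the spike position for view θ.

```python
import numpy as np

x0, y0 = 0.3, -0.7                     # arbitrary point-source location
theta = np.linspace(0.0, np.pi, 181)

# The projection at angle theta is a spike at s0 = x0*cos(theta) + y0*sin(theta).
s0 = x0 * np.cos(theta) + y0 * np.sin(theta)
s0_opposite = x0 * np.cos(theta + np.pi) + y0 * np.sin(theta + np.pi)

print(np.allclose(s0_opposite, -s0))   # True, so p(s, theta) = p(-s, theta + pi)
```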

Example 4: Show that the point spread function of the projection/backprojection operator is $1/r$, where $r = \|\vec{x} - \vec{x}_0\|$ and the point source object is $f(\vec{x}) = \delta(\vec{x} - \vec{x}_0)$.

Proof. Using the definition of the backprojection, we have

$$b(\vec{x}) = \int_{0}^{\pi} p(\vec{x}\cdot\vec{\theta}, \theta)\,d\theta = \int_{0}^{\pi}\int_{-\infty}^{\infty} f\big((\vec{x}\cdot\vec{\theta})\vec{\theta} + t\vec{\theta}^{\perp}\big)\,dt\,d\theta.$$

We realize that the line integral $\int_{-\infty}^{\infty} f\big((\vec{x}\cdot\vec{\theta})\vec{\theta} + t\vec{\theta}^{\perp}\big)\,dt$ is taken along the line that passes through the point $\vec{x}$ in the direction of $\vec{\theta}^{\perp}$ (see Figure 1.17), so

$$\int_{-\infty}^{\infty} f\big((\vec{x}\cdot\vec{\theta})\vec{\theta} + t\vec{\theta}^{\perp}\big)\,dt = \int_{-\infty}^{\infty} f(\vec{x} - \hat{t}\,\vec{\theta}^{\perp})\,d\hat{t}.$$

Therefore, the projection/backprojection image can be obtained as

$$b(\vec{x}) = \int_{0}^{\pi}\int_{-\infty}^{\infty} f(\vec{x} - \hat{t}\,\vec{\theta}^{\perp})\,d\hat{t}\,d\theta.$$

Let $\hat{\vec{x}} = \hat{t}\,\vec{\theta}^{\perp}$ with $\|\hat{\vec{x}}\| = |\hat{t}|$ and $d\hat{\vec{x}} = |\hat{t}|\,d\hat{t}\,d\theta$. The above expression becomes

$$b(\vec{x}) = \int\!\!\int \frac{f(\vec{x} - \hat{\vec{x}})}{\|\hat{\vec{x}}\|}\,d\hat{\vec{x}}.$$


Fig. 1.17: The line integral is performed on a line passing through the backprojection point.

Let $f(\vec{x}) = \delta(\vec{x} - \vec{x}_0)$. The point spread function of the projection/backprojection operator is

$$b(\vec{x}) = \int\!\!\int \frac{\delta(\vec{x} - \vec{x}_0 - \hat{\vec{x}})}{\|\hat{\vec{x}}\|}\,d\hat{\vec{x}} = \frac{1}{\|\vec{x} - \vec{x}_0\|} = \frac{1}{r}.$$

Example 5: Evaluate $\int_{-\infty}^{\infty}\delta\big(e^{2(x-3)(x+4)} - 1\big)\, f(x)\,dx.$

Solution

Let

$$g(x) = e^{2(x-3)(x+4)} - 1.$$

Solving $g(x) = e^{2(x-3)(x+4)} - 1 = 0$, we obtain the zeros of $g(x)$ as

$$\lambda_1 = 3 \quad \text{and} \quad \lambda_2 = -4.$$

The derivative of g(x) is

$$g'(x) = 2\big[(x - 3) + (x + 4)\big]\,e^{2(x-3)(x+4)} = 2(2x + 1)\,e^{2(x-3)(x+4)}.$$

Thus, at the two zeros of g(x), we have

$$g'(3) = 14 \quad \text{and} \quad g'(-4) = -14.$$

Hence,

$$\int_{-\infty}^{\infty}\delta\big(e^{2(x-3)(x+4)} - 1\big)\, f(x)\,dx = \int_{-\infty}^{\infty}\frac{\delta(x - 3)}{|14|}\, f(x)\,dx + \int_{-\infty}^{\infty}\frac{\delta(x + 4)}{|-14|}\, f(x)\,dx = \frac{f(3) + f(-4)}{14}.$$
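
A rough numerical check of this result, replacing the δ-function by the narrow Gaussian of Section 1.5.3 with a large n (Python/NumPy assumed; the test function cos x, the value of n, and the integration grid are arbitrary choices):

```python
import numpy as np

f = np.cos
n = 1.0e4
x = np.linspace(-6.0, 5.0, 1_000_001)       # both zeros, x = 3 and x = -4, lie inside
dx = x[1] - x[0]

g = np.exp(2.0 * (x - 3.0) * (x + 4.0)) - 1.0
delta_n = np.sqrt(n / np.pi) * np.exp(-n * g**2)   # narrow-Gaussian approximation of delta(g(x))

print(np.sum(delta_n * f(x)) * dx)          # approximately -0.1174
print((f(3.0) + f(-4.0)) / 14.0)            # -0.1174...
```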

1.7 Summary

Tomography is a process of taking projection data and converting the data into cross-sectional images. Projection data from multiple views are required.

A projection is a line integral (or ray sum, Radon transform) of an object. Projection data are acquired with detectors. Objects overlap on the detectors.

Backprojection is a superposition procedure that sums the data from all projection views. Backprojection evenly distributes the projection domain data back along the same lines from which the line integrals were formed.

Image reconstruction is a mathematical procedure that undoes the overlapping effect in the projection data and recreates the original image with non-overlapping objects. Such a mathematical procedure is called an algorithm.

Dirac’s δ-function usually acts as a point source in algorithm development.

The readers are expected to understand two main concepts in this chapter: projection and backprojection.

Problems

Problem 1.1 If a 2D object is a point source, it is sufficient to use projection data from two different detector views to obtain an exact reconstruction. Let us consider a 2D object that consists of three point sources which do not lie on the same straight line (i.e., they are not collinear). Determine the smallest number of detector views for which sufficient projection data are available to obtain an exact reconstruction.
Problem 1.2 It is known that the Radon transform of a shifted point source δ(xx0, yy0) is δ(x0 cos θ + y0 sin θs). This result can be extended to a general object f(x, y). If p(s, θ) is the Radon transform of the unshifted object f(x, y), determine the Radon transform of the shifted object f(xx0, yy0).
Problem 1.3 Use the definition of the δ-function to prove that the following two definitions of the Radon transform are equivalent:
$$p(s,\theta) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x,y)\,\delta(x\cos\theta + y\sin\theta - s)\,dx\,dy, \qquad p(s,\theta) = \int_{-\infty}^{\infty} f(s\cos\theta - t\sin\theta,\; s\sin\theta + t\cos\theta)\,dt.$$
Problem 1.4 The backprojection in the Cartesian coordinate system is defined as
$$b(x,y) = \int_{0}^{\pi} p(x\cos\theta + y\sin\theta, \theta)\,d\theta.$$
Give an equivalent expression $b_{\text{polar}}(r, \varphi)$ of the backprojection in the polar coordinate system, where $x = r\cos\varphi$ and $y = r\sin\varphi$.

