In Chapter 5 we discussed how functions could be represented as power series using the expansions of Taylor and Maclaurin. Those expansions are valid only within certain radii of convergence, where the functions must be continuous and infinitely differentiable. However, this is not the only way that functions can be expressed as a series. In this chapter we will consider another expansion, which may also be used for functions that are neither continuous nor differentiable at certain points. To start with, the discussion will be centred on functions f(x) that are periodic, that is, they obey the relation f(x) = f(x + np), where p is the period, and n = 1, 2, …. Many functions that occur in physical science are of this type. For example, solutions of the equations for problems concerning wave motion involve sinusoidal functions. The form of f(x) can be arbitrarily complicated, such as the continuous function shown in Figure 13.1. We shall also see that the method can be applied to functions that are only defined in a finite range of x. This leads naturally to an important extension in which non-periodic functions f(x), defined for all x, can be expressed in terms of simple sinusoidal functions, provided f(x) → 0 rapidly enough as x → ∞. Such expressions, called Fourier transforms, are extremely useful and will be discussed in Section 13.3.
Initially, we will assume for convenience that the function to be expanded is periodic with a period p = 2π, so that
f(x + 2πn) = f(x)   (13.1)
for any integer n. Then, in certain circumstances, f(x) may be written as a sum of trigonometric functions of the form
fN(x) = a0/2 + Σ_{n=1}^{N} (an cos nx + bn sin nx)   (13.2)
where the coefficients an and bn are chosen so that fN(x) is the best representation of f(x). (The reason for the factor 1/2 in the first term will be made clear presently.) Since fN(x) also satisfies (13.1), it is sufficient to consider only the range − π ≤ x ≤ π, because outside this range both f(x) and fN(x) repeat themselves. Two questions now have to be considered: firstly, what do we mean by ‘best’, and secondly, does the convergence of the series in (13.2) ensure that fN(x) → f(x) as N → ∞?
There is no unique way of defining ‘best’, but the one that is most convenient is to choose the coefficients to minimise the integral
IN = ∫_{−π}^{π} [f(x) − fN(x)]² dx   (13.3)
In this case, fN(x) is said to be the best approximation in the mean to f(x). To find the coefficients an and bn, we start by substituting (13.2) into (13.3), giving
Then expanding the integrand on the right-hand side of (13.4a) gives
This may be simplified by using the general integrals
∫_{−p/2}^{p/2} cos(2πmx/p) cos(2πnx/p) dx = p/2 if m = n ≠ 0, and 0 if m ≠ n   (13.5a)
∫_{−p/2}^{p/2} sin(2πmx/p) sin(2πnx/p) dx = p/2 if m = n ≠ 0, and 0 if m ≠ n   (13.5b)
∫_{−p/2}^{p/2} sin(2πmx/p) cos(2πnx/p) dx = 0   (13.5c)
which for p = 2π reduce to
∫_{−π}^{π} cos mx cos nx dx = π if m = n ≠ 0, and 0 if m ≠ n   (13.6a)
∫_{−π}^{π} sin mx sin nx dx = π if m = n ≠ 0, and 0 if m ≠ n   (13.6b)
∫_{−π}^{π} sin mx cos nx dx = 0   (13.6c)
Finally, we define
An = (1/π) ∫_{−π}^{π} f(x) cos nx dx,  Bn = (1/π) ∫_{−π}^{π} f(x) sin nx dx   (13.7)
for n = 0, 1, 2, …. Then, using (13.6) and (13.7) in (13.4b), we have, after simplification,
IN = ∫_{−π}^{π} [f(x)]² dx − π[A0²/2 + Σ_{n=1}^{N} (An² + Bn²)] + π[(a0 − A0)²/2 + Σ_{n=1}^{N} ((an − An)² + (bn − Bn)²)],
which is a minimum when an = An and bn = Bn for all n = 0, 1, 2, …, N.
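The ‘best in the mean’ property can be checked numerically: a brute-force least-squares fit of the truncated trigonometric sum to a sample function recovers the same coefficients as the integral formulas. A minimal sketch in Python (the test function f(x) = x and the grid sizes are illustrative choices, not from the text):

```python
import numpy as np

# Fit f_N(x) = a0/2 + sum_n (a_n cos nx + b_n sin nx) to f(x) = x on [-pi, pi]
# by least squares, and compare with the integral coefficients
# A_n = (1/pi) * integral of f cos nx, B_n = (1/pi) * integral of f sin nx.
N = 5
x = np.linspace(-np.pi, np.pi, 4001)
f = x
dx = x[1] - x[0]

cols = [0.5 * np.ones_like(x)]                      # the a0/2 term
cols += [np.cos(n * x) for n in range(1, N + 1)]
cols += [np.sin(n * x) for n in range(1, N + 1)]
M = np.column_stack(cols)
coeffs, *_ = np.linalg.lstsq(M, f, rcond=None)      # minimises the mean-square error
a_fit, b_fit = coeffs[:N + 1], coeffs[N + 1:]

def trap(y):
    # trapezoidal rule on the uniform grid
    return (y.sum() - 0.5 * (y[0] + y[-1])) * dx

A = np.array([trap(f * np.cos(n * x)) / np.pi for n in range(N + 1)])
B = np.array([trap(f * np.sin(n * x)) / np.pi for n in range(1, N + 1)])
print(np.abs(a_fit - A).max(), np.abs(b_fit - B).max())  # both tiny
```

The two routes agree to the accuracy of the discretisation, which is the content of the statement that the minimum occurs at an = An, bn = Bn.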
Setting an = An and bn = Bn for all n = 0, 1, 2, …, N in the equation for IN, and using the fact that from its definition IN ≥ 0, gives Bessel's inequality
A0²/2 + Σ_{n=1}^{N} (An² + Bn²) ≤ (1/π) ∫_{−π}^{π} [f(x)]² dx   (13.8)
which becomes an equality if fN(x) is an exact representation of f(x) in the mean, that is, if IN = 0. It can be shown that this occurs for all reasonably well-behaved functions in the limit N → ∞. In this case the expansion (13.2) becomes the Fourier series
f(x) = a0/2 + Σ_{n=1}^{∞} (an cos nx + bn sin nx)   (13.9)
where the Fourier coefficients an and bn are given by
an = (1/π) ∫_{−π}^{π} f(x) cos nx dx,  n = 0, 1, 2, …   (13.10a)
bn = (1/π) ∫_{−π}^{π} f(x) sin nx dx,  n = 1, 2, …   (13.10b)
The Fourier series (13.9) is simplified if the function f(x) has a definite symmetry. If f(x) is an even function, that is, f( − x) = f(x), then the integral (13.10b) vanishes for all n [cf. (4.32c)] so that bn = 0 and the Fourier expansion (13.9) reduces to a cosine series
f(x) = a0/2 + Σ_{n=1}^{∞} an cos nx   (13.11a)
Likewise, if f(x) is an odd function, so that f( − x) = −f(x), then the coefficients an = 0 and the expansion is a sine series
f(x) = Σ_{n=1}^{∞} bn sin nx   (13.11b)
For example, if f(x) = x for − π < x < π, and has period 2π, then an = 0 by symmetry and the coefficients bn are given by
bn = (1/π) ∫_{−π}^{π} x sin nx dx = (2/n)(−1)^{n+1}   (13.12a)
on integrating by parts, and the Fourier series becomes
x = 2 Σ_{n=1}^{∞} [(−1)^{n+1}/n] sin nx   (13.12b)
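As a quick numerical check of this coefficient calculation (a sketch; the grid size is an arbitrary choice), the integrals (1/π) ∫ x sin nx dx can be evaluated by quadrature and compared with 2(−1)^{n+1}/n:

```python
import numpy as np

# b_n = (1/pi) * integral of x sin(nx) over [-pi, pi], expected 2(-1)^(n+1)/n
x = np.linspace(-np.pi, np.pi, 20001)
dx = x[1] - x[0]

def trap(y):
    # trapezoidal rule on the uniform grid
    return (y.sum() - 0.5 * (y[0] + y[-1])) * dx

b = np.array([trap(x * np.sin(n * x)) / np.pi for n in range(1, 6)])
exact = np.array([2 * (-1) ** (n + 1) / n for n in range(1, 6)])
print(b)  # close to [2, -1, 2/3, -1/2, 2/5]
```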
We now return to the second question posed above: under what conditions does the Fourier series converge to f(x)? This is important because even if the Fourier series converges in the mean to f(x), it does not follow that it converges to f(x) for all values of x. The proof of this convergence property of Fourier series is lengthy and we will not reproduce it here,1 but the conditions under which convergence holds, called the Dirichlet conditions, may be summarised by the following theorem.
Dirichlet theorem If f(x) is periodic with period 2π, and in the interval − π to π is a single-valued continuous function, except for a possible finite number of jump discontinuities, with a finite number of maxima and minima, and in addition the integral
∫_{−π}^{π} |f(x)| dx   (13.13)
is finite, then the Fourier series converges to f(x) at all points where f(x) is continuous. At discontinuous points, the series converges to the mid-point of the jump discontinuity.
An example of a discontinuous function is shown in Figure 13.4, and in accordance with the Dirichlet theorem, as x → x0 the series converges to π/2.
To establish convergence in principle requires the integral (13.13) to be evaluated. However, this is usually not necessary in practice, because if the value of f(x) is bounded, that is, it lies between ± B for some positive value B, then
∫_{−π}^{π} |f(x)| dx ≤ 2πB,
and so is finite. It is worth noting that the converse of Dirichlet's theorem is not true. Thus, if a function fails to satisfy the conditions of the theorem, it may still be possible to expand it as a Fourier series. However, functions of this type are seldom encountered in physical situations.
In practice, only a finite number of terms can be evaluated in a Fourier series, so it is of interest to see how well such a series represents a given function when a limited number of terms is used. For simple functions, the convergence of the series is usually quite rapid. For example, consider the ‘rectified half-wave’ function. This is a continuous function periodic in x with period 2π and satisfies the Dirichlet conditions. For the interval − π ≤ x ≤ π it is given by
f(x) = sin x for 0 ≤ x ≤ π,  f(x) = 0 for −π ≤ x < 0   (13.14)
and is shown in Figure 13.5. The coefficients of its Fourier series may be found using (13.10). Thus, from (13.10a),
For n = 0,
and for n = 1,
while for n ≠ 0, 1, using (2.36c), we have
using cos (n + 1)π = cos (n − 1)π = ( − 1)n + 1. Similarly, from (13.10b),
So, for n = 1,
and for n ≠ 1, using (2.36e), we have
Finally, collecting terms, the Fourier series is
f(x) = 1/π + (1/2) sin x − (2/π) Σ_{n=1}^{∞} cos 2nx/(4n² − 1)   (13.15)
The goodness of this representation is shown in Figure 13.6, where it can be seen that convergence is very rapid.
A second example is the function shown in Figure 13.7. This is discontinuous and coincides with the [0, 1] step function in the range − π ≤ x ≤ π, i.e.
f(x) = 1 for 0 < x < π,  f(x) = 0 for −π < x < 0   (13.16)
The coefficients of its Fourier expansion are found in an analogous way to those for the previous example, and without further detail, the Fourier series is
f(x) = 1/2 + (2/π) Σ_{n=0}^{∞} sin[(2n + 1)x]/(2n + 1)   (13.17)
Figure 13.8 shows the comparison of the Fourier approximation (13.17) with the exact function (13.16), for 4 and 11 terms in the series. Again the convergence is rapid and tends towards the mid-point of the jump discontinuity at x = 0 in accordance with the Dirichlet theorem.
An interesting point about Fourier series for discontinuous functions is that near to a discontinuity, the series will overshoot the true value of the function being represented, the size of the overshoot being proportional to the magnitude of the discontinuity. This behaviour is known as the Gibbs phenomenon, and is clearly seen in Figure 13.8. Although the overshoot moves closer to the discontinuity as the number of terms is increased, it does not vanish even with an infinite number of terms.2
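The Gibbs overshoot is easy to see numerically. A sketch (assuming the step-function series 1/2 + (2/π) Σ sin[(2m+1)x]/(2m+1); the grid and term counts are arbitrary choices): the maximum of the partial sums just to the right of the jump exceeds 1 by roughly 9% of the unit jump, however many terms are kept.

```python
import numpy as np

x = np.linspace(1e-4, np.pi / 2, 200001)

def partial_sum(terms):
    # 1/2 + (2/pi) * sum over the first `terms` odd harmonics
    s = 0.5 * np.ones_like(x)
    for m in range(terms):
        n = 2 * m + 1
        s += (2 / np.pi) * np.sin(n * x) / n
    return s

over_25 = partial_sum(25).max() - 1.0    # overshoot with 25 terms
over_400 = partial_sum(400).max() - 1.0  # overshoot with 400 terms
print(over_25, over_400)                 # both near 0.09: it does not die away
```

The overshoot moves closer to x = 0 as the number of terms grows, but its height tends to a fixed fraction (about 8.95%) of the jump.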
In the above discussion we have assumed that f(x) is periodic with a period p = 2π, but this can easily be generalised to an arbitrary period p, by using the trigonometric functions
cos(2πnx/p) and sin(2πnx/p), which also have period p. The Fourier expansion then becomes
f(x) = a0/2 + Σ_{n=1}^{∞} [an cos(2πnx/p) + bn sin(2πnx/p)]   (13.18)
where
an = (2/p) ∫_{−p/2}^{p/2} f(x) cos(2πnx/p) dx,  n = 0, 1, 2, …   (13.19a)
and
bn = (2/p) ∫_{−p/2}^{p/2} f(x) sin(2πnx/p) dx,  n = 1, 2, …   (13.19b)
In (13.19) the integration range is symmetric, over an interval − p/2 < x < p/2. However, for any periodic function f(x), the integral over a single period x0 < x < x0 + p is independent of x0, since
∫_{x0}^{x0+p} f(x) dx = ∫_{−p/2}^{p/2} f(x) dx − ∫_{−p/2}^{x0} f(x) dx + ∫_{p/2}^{x0+p} f(x) dx   (13.20)
and the last two integrals cancel because f(x + p) = f(x). Hence (13.19) can be replaced by the more general forms
an = (2/p) ∫_{x0}^{x0+p} f(x) cos(2πnx/p) dx   (13.21a)
bn = (2/p) ∫_{x0}^{x0+p} f(x) sin(2πnx/p) dx   (13.21b)
where x0 can be any convenient value.
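The independence of the starting point x0 can be confirmed numerically. A sketch (the test function cos x + 0.3 sin 2x and the values of x0 are arbitrary choices):

```python
import numpy as np

p = 2 * np.pi

def a1(x0, n=200001):
    # a1 = (2/p) * integral over [x0, x0 + p] of f(x) cos x,
    # with f(x) = cos x + 0.3 sin 2x, so a1 should be 1 for any x0
    x = np.linspace(x0, x0 + p, n)
    dx = x[1] - x[0]
    y = (np.cos(x) + 0.3 * np.sin(2 * x)) * np.cos(x)
    return (2 / p) * (y.sum() - 0.5 * (y[0] + y[-1])) * dx

print(a1(-np.pi), a1(0.0), a1(1.234))  # all close to 1, whatever x0 is
```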
Up to now we have assumed that the function is periodic. If we want to find the Fourier decomposition of a non-periodic function only within a fixed range, then we may continue the function outside this range to make it periodic. The Fourier series of the resulting periodic function will represent the non-periodic function in the desired range. Since the function can often be continued in a number of different ways, it is convenient to make the extension so that the resulting function has a definite symmetry.
For example, consider the non-periodic function shown in Figure 13.9 and defined for 0 < x < L. Three ways of continuing this outside the interval 0 < x < L are shown in Figure 13.9. In (b) there is no definite symmetry. The coefficients are given by (13.21) with x0 = 0 and p = L, and both an and bn are in general non-zero in the Fourier expansion (13.18). In (c) there is odd symmetry about x = 0. The coefficients are now given by (13.19) with period p = 2L and the an will vanish by symmetry leaving a sine series of the form (13.11b); whereas in (d) there is even symmetry about x = 0 leading to a cosine series of the form (13.11a). Using either of (c) or (d) therefore reduces the number of integrals to be calculated and leads to a simpler series. Other than this, provided the series is only used to represent the function in the range 0 < x < L, the choice of which continuation to use is unimportant.
In Section 5.4.2, we saw that the Taylor expansion of a given function f(x) can be integrated or differentiated term by term to give a new series that converges to the integral or derivative of the original function. Thus, for example, the power series expansion for cos x given in Table 5.1 can be obtained by differentiating the corresponding series for sin x, assuming that this series has already been obtained.
Similarly, integrating a well-defined Fourier expansion for a function f(x) yields a series that converges to the integral of f(x). For simplicity, consider a function of period 2π that has a given functional form f(x) in the range − π < x < π. Then integrating (13.9), one obtains
∫ f(x) dx = c + (a0/2) x + Σ_{n=1}^{∞} [(an/n) sin nx − (bn/n) cos nx]   (13.22)
for the indefinite integral in the range − π < x < π, where c is an integration constant that can be evaluated by equating the right-hand and left-hand sides of (13.22) at any value in the range − π < x < π. However, because of the second term in (13.22), this convergent series only reduces to a Fourier series if the mean value vanishes, i.e.
a0 = (1/π) ∫_{−π}^{π} f(x) dx = 0   (13.23)
The differentiation of Fourier series is more problematic. Consider, for example, the function
(13.24)
shown in Figure 13.11a. Since f(x) = −f( − x), it has a sine series, with coefficients
on integrating by parts. The Fourier series is therefore
where we have assumed − π < x < π on the left-hand side. Differentiating this gives
which is a meaningless result because the series on the right-hand side diverges by (5.15), since cos nx does not tend to zero as n → ∞. On the other hand, consider the function
(13.25)
shown in Figure 13.11b. The corresponding Fourier series can be differentiated without difficulty, as shown in Example 13.5 below.
Clearly, a criterion is needed that tells us whether the expansion of a given function can be differentiated or not. It is provided by the following theorem:
If f(x) is periodic and continuous for all values of x, then at each point for which f(x) is continuous, the Fourier series for f(x) may be differentiated term by term and the resultant Fourier series will converge to f′(x), provided the latter satisfies the Dirichlet conditions.
From Figure 13.11, we see that the function defined in (13.24) fails these conditions because f(x) is discontinuous at x = nπ, whereas the function defined in (13.25) is continuous at these points.
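A concrete illustration (with f(x) = |x| taken as an assumed stand-in for a continuous periodic function; it is not necessarily the function of (13.25)): the cosine series |x| = π/2 − (4/π) Σ_{n odd} cos nx/n², differentiated term by term, gives (4/π) Σ_{n odd} sin nx/n, which does converge to f′(x) = sign(x) away from the points x = nπ:

```python
import numpy as np

# Term-by-term derivative of the series for |x| on [-pi, pi]:
# d/dx [pi/2 - (4/pi) * sum_{n odd} cos(nx)/n^2] = (4/pi) * sum_{n odd} sin(nx)/n,
# which should converge to sign(x) where f'(x) is continuous.
x0 = 1.0  # any point away from x = n*pi
deriv_series = sum((4 / np.pi) * np.sin(n * x0) / n for n in range(1, 20002, 2))
print(deriv_series)  # approaches sign(1.0) = 1
```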
It remains to prove this theorem. In doing so, we shall initially assume for simplicity that the function has period 2π. Then, since f′(x) satisfies the Dirichlet conditions, it has a Fourier expansion
f′(x) = a0′/2 + Σ_{n=1}^{∞} (an′ cos nx + bn′ sin nx)   (13.27)
where
an′ = (1/π) ∫_{−π}^{π} f′(x) cos nx dx,  bn′ = (1/π) ∫_{−π}^{π} f′(x) sin nx dx   (13.28)
Furthermore, by Dirichlet's theorem, this converges to f′(x) at all points where f′(x) is continuous. Hence, to establish the theorem, we just have to show that (13.27) is identical to the series
Σ_{n=1}^{∞} n(bn cos nx − an sin nx)   (13.29)
obtained by differentiating the Fourier series (13.9) for f(x) itself. To do this, we integrate (13.28) by parts to obtain
an′ = n bn,  a0′ = 0   (13.30a)
using (13.10) together with f(π) = f( − π) by periodicity. Similarly,
bn′ = −n an   (13.30b)
Then substituting (13.30) into (13.27) gives (13.29) as required.
This proof is easily extended to an arbitrary period p. The essential point is that, provided the conditions (13.26) on f(x) and f′(x) are satisfied, the Fourier series can be safely differentiated away from discontinuities, where f(x) is ill-defined.
If f(x) is a function periodic in some interval, then Parseval's theorem relates the average, or mean, value of [f(x)]2 over the interval, to the coefficients of the Fourier expansion of f(x) in the interval. From (13.19a), we see that the mean value of f(x) over a single period is related to the first Fourier coefficient a0. Explicitly,
(1/p) ∫_{−p/2}^{p/2} f(x) dx = a0/2   (13.31)
In addition, the average value of the squared function f²(x) can be related to the Fourier coefficients that occur in the expansion of f(x). Squaring the expansion (13.18) and integrating term by term over a single period, from − p/2 to p/2, the cross terms vanish by the orthogonality relations (13.5), and using (13.19a) and (13.19b) one obtains
(1/p) ∫_{−p/2}^{p/2} [f(x)]² dx = (a0/2)² + (1/2) Σ_{n=1}^{∞} (an² + bn²)   (13.32)
This is Parseval's theorem.
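As a sketch of Parseval's theorem in action, take f(x) = x with period 2π, for which an = 0 and bn = 2(−1)^{n+1}/n: the mean of f² over a period is π²/3, and the coefficient side (1/2) Σ bn² converges to the same value.

```python
import numpy as np

mean_f2 = np.pi ** 2 / 3                # (1/2pi) * integral of x^2 over [-pi, pi]
coeff_side = 0.5 * sum((2.0 / n) ** 2 for n in range(1, 200001))  # (1/2) sum b_n^2
print(mean_f2, coeff_side)              # agree to about 1e-5
```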
In this section, we show how to rewrite Fourier series in a complex form. This is simpler in some ways than the original series, and serves as the foundation of the discussion of Fourier transforms that follows. We also use it to point out the strong analogy between Fourier expansions and the expansion (9.18) of a vector in terms of a set of orthogonal basis vectors.
Using Euler's formula (6.18),
e^{iθ} = cos θ + i sin θ   (13.33)
one easily shows that the complex Fourier series
f(x) = Σ_{n=−∞}^{∞} cn e^{2πinx/p}   (13.34)
is identical to the Fourier series (13.18), provided that
c0 = a0/2,  cn = (an − ibn)/2,  c−n = (an + ibn)/2,  n = 1, 2, …   (13.35)
Together with (13.19a) and (13.19b), these equations imply
cn = (1/p) ∫_{−p/2}^{p/2} f(x) e^{−2πinx/p} dx   (13.36)
for all integer n, irrespective of sign. Equations (13.35) also allow Parseval's theorem (13.32) to be expressed in terms of the coefficients cn, giving
(1/p) ∫_{−p/2}^{p/2} |f(x)|² dx = Σ_{n=−∞}^{∞} |cn|²   (13.37)
In the above discussion, we have stressed the equivalence of the complex Fourier series (13.34) to the real Fourier series (13.18) and obtained key results, such as (13.36) and (13.37), by relating the complex coefficients cn to the real coefficients an, bn. However, it is important to note that they can be obtained directly from (13.34), without reference to the original series. In doing this, it will be instructive to introduce basis functions
fn(x) = e^{2πinx/p},  n = 0, ±1, ±2, …   (13.38a)
which are easily shown to satisfy the orthogonality relations
(1/p) ∫_{−p/2}^{p/2} fn*(x) fm(x) dx = δnm   (13.38b)
where δnm is the Kronecker delta symbol (9.24b). The complex Fourier series (13.34) then becomes
f(x) = Σ_{n=−∞}^{∞} cn fn(x)   (13.39a)
and on multiplying by fm*(x) and integrating using (13.38b), one obtains
cm = (1/p) ∫_{−p/2}^{p/2} fm*(x) f(x) dx   (13.39b)
for any m, which is identical to (13.36) as required. Parseval's theorem in the form (13.37) can also be derived directly from the complex Fourier series (13.34) by using (13.38b) in a similar way.
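Both the orthogonality relation and the coefficient formula can be checked numerically. A sketch for f(x) = x with p = 2π, for which cn = (an − ibn)/2 = −i(−1)^{n+1}/n for n ≠ 0 (the grid size is an arbitrary choice):

```python
import numpy as np

p = 2 * np.pi
x = np.linspace(-p / 2, p / 2, 40001)
dx = x[1] - x[0]

def trap(y):
    # trapezoidal rule on the uniform grid
    return (y.sum() - 0.5 * (y[0] + y[-1])) * dx

def c(n):
    # c_n = (1/p) * integral of f(x) exp(-i n x) with f(x) = x, p = 2 pi
    return trap(x * np.exp(-1j * n * x)) / p

# orthogonality (1/p) * integral of f_n* f_m = delta_nm for the basis exp(inx)
same = trap(np.exp(-1j * 2 * x) * np.exp(1j * 2 * x)) / p   # n = m = 2
cross = trap(np.exp(-1j * 1 * x) * np.exp(1j * 3 * x)) / p  # n = 1, m = 3
print(c(1), c(2), c(3), same, cross)
```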
The discussion of complex Fourier series also highlights the similarities between the Fourier expansion (13.39a) and the expansion (9.24) of an n-dimensional vector a in terms of orthonormal basis vectors ei, discussed in Section 9.2. On comparing (13.39a) and (9.24), one sees that f(x) is the analogue of the vector a, the fn(x) are the analogues of the basis vectors ei, while the integral on the left-hand side of (13.38b) plays the same role in the discussion as the scalar product (ei, ej). To emphasise this, we define the scalar product of any two periodic functions f(x) and g(x) to be
(f, g) ≡ (1/p) ∫_{−p/2}^{p/2} f*(x) g(x) dx   (13.40)
With this notation, (13.38b) and (13.39b) take the form
(fn, fm) = δnm   (13.41a)
and
cn = (fn, f)   (13.41b)
corresponding to [cf. (9.24) and (9.25)]
(em, en) = δmn
and
an = (en, a),
where we have relabelled the indices i, j as m, n in the vector case. Furthermore, if we substitute (13.39a) into (13.40) with g = f, and use the orthogonality relation (13.41a), we obtain
(f, f) = Σ_{n=−∞}^{∞} cn* cn   (13.41c)
compared to
(a, a) = Σ_n an²
in the vector case. At this point, we note that there is no need to restrict f(x) to real functions in (13.39), but if we do c− n = c*n and (13.41c) reduces to Parseval's theorem (13.37) for real f(x). On the other hand, if we do not restrict ourselves to real f(x), the coefficients cn and c− n are in general independent, and Parseval's theorem becomes
This discussion shows that the analogy between the expansion of a vector in terms of a complete set of basis vectors and the expansion of a periodic function in terms of its Fourier components is very close, and one can introduce a more general formalism that includes both.3 We shall not pursue this further here, except to note that the expansion of functions in terms of sets of basis functions is not confined to periodic functions. Two further examples will occur later, in Sections 15.3.1 and 15.4.2.
Fourier series are widely used in the study of waves because they allow complicated waveforms to be decomposed into a sum of harmonic waves, that is, simple sines or cosines. However, many situations occur in practice where waves are localised in space and time. For example, at a given time, the waveform could be described by the function
f(x) = e^{iqx} e^{−x²/a²}   (13.42)
whose real part is shown in Figure 13.12 for the case a = 10, q = 1. When they move through space, such localised waveforms are called wave packets and fail to satisfy the periodicity condition
f(x + p) = f(x)   (13.43)
required to formulate a Fourier series for any finite period p. This is one reason, among others, why it is useful to take the limit p → ∞ in the complex Fourier series (13.34), so that a single period covers the whole x-axis − ∞ < x < ∞.
We will show below that taking this limit leads to the expansion
f(x) = (1/2π) ∫_{−∞}^{∞} g(k) e^{ikx} dk   (13.44a)
where g(k) is given by
g(k) = ∫_{−∞}^{∞} f(x) e^{−ikx} dx   (13.44b)
The relation (13.44b) is called a Fourier transform of f(x) and (13.44a) is called the inverse Fourier transform of g(k). The latter is a generalisation of the complex Fourier series (13.34) and allows a non-periodic function f(x) to be decomposed into a continuous sum of harmonic waves of definite wave number k ≡ 2π/λ, where λ is the wavelength. This distribution, which is characterised by g(k), is called the spectrum of f(x). It converges provided that the integral
∫_{−∞}^{∞} |f(x)| dx   (13.45)
converges and that f(x) is continuous except for finite (or jump) discontinuities, where, as for the Fourier series, the expansion (13.44) converges to the value of f(x) at the mid-point of the discontinuity.
In the case of functions with definite symmetry, either even or odd, the Fourier transforms can be written as Fourier cosine and Fourier sine transforms, respectively. Explicitly, if f+(x) = f+( − x), then
(13.46a)
where g+(k) ≡ g(k); and if f−(x) = −f−( − x), then
(13.46b)
where in this case g−(k) ≡ −ig(k). However, the more general form (13.44) is often retained even for symmetric and antisymmetric functions.
It remains to obtain the key relations (13.44) by taking the limit p → ∞ in the complex Fourier series (13.34). To do this, we define variables
kn = 2πn/p,  n = 0, ±1, ±2, …,
with spacing
Δk = k_{n+1} − kn = 2π/p,
which tends to zero as p → ∞. In terms of these, the Fourier expansion (13.34) becomes
(13.47a)
where
(13.47b)
and the dummy variable x in (13.47b) has been relabelled z for later convenience. Substituting (13.47b) into (13.47a) then gives
(13.48a)
where
(13.48b)
Comparing (13.48a) with the Riemann definition of an integral (4.15), and recalling that Δkn → 0 as p → ∞, then gives
where k is now a continuous variable, and since p → ∞,
Hence (13.48a) becomes
where
These are identical to the relations (13.44a) and (13.44b).
Finally, it is worth noting that some authors use different definitions to those above. Thus the Fourier transform is sometimes defined as
g(k) = (1/√(2π)) ∫_{−∞}^{∞} f(x) e^{−ikx} dx,
so that
f(x) = (1/√(2π)) ∫_{−∞}^{∞} g(k) e^{ikx} dk,
and the external factors in the Fourier and inverse Fourier transforms are then the same. The only requirement is that the product of the two external factors must equal 1/2π. We will use the convention (13.44) throughout.
The rest of this section will be devoted to a discussion of the properties of Fourier transforms (13.44b) and the corresponding Fourier expansion (13.44a).4
In this section we will introduce some general properties of Fourier transforms. In doing so, we shall use the abbreviation g(k) = F[f(x)] to mean the Fourier transform of the function f(x), so that (13.44b) becomes
g(k) = F[f(x)] = ∫_{−∞}^{∞} f(x) e^{−ikx} dx   (13.49a)
and the abbreviation f(x) = F⁻¹[g(k)] to mean the inverse Fourier transform, so that (13.44a) becomes
f(x) = F⁻¹[g(k)] = (1/2π) ∫_{−∞}^{∞} g(k) e^{ikx} dk   (13.49b)
The operators F and F − 1 are linear, so for example
F[αf1(x) + βf2(x)] = α F[f1(x)] + β F[f2(x)],  α, β constants   (13.49c)
and F F⁻¹ = 1. Similarly, if in the inverse Fourier transform (13.44a) we re-label the variables k, x as x, −k, it becomes
f(−k) = (1/2π) ∫_{−∞}^{∞} g(x) e^{−ikx} dx,
which, on multiplying by 2π, becomes
F[g(x)] = 2π f(−k)   (13.50)
Equation (13.50) is just one of a number of relations which, given the Fourier transform of a function f(x), enable the Fourier transforms of related functions – in this case g(x), where g(k) is the Fourier transform of f(x) – to be written down immediately, without further calculation. For example, since taking the complex conjugate of (13.44a) gives
f*(x) = (1/2π) ∫_{−∞}^{∞} g*(−k) e^{ikx} dk,
it follows from (13.44a) that
F[f*(x)] = g*(−k)   (13.51a)
and similarly,
F[f(−x)] = g(−k)   (13.51b)
where g(k) is defined by (13.44b) as usual. From (13.51b) we see that the Fourier transform of a symmetric or antisymmetric function is itself symmetric or antisymmetric, i.e.
g(−k) = ±g(k)  if  f(−x) = ±f(x)   (13.52)
However, from (13.49a) and (13.51b) one sees that the Fourier transform of a real function f(x) = f*(x) is not a real function unless g(k) = g( − k). Hence the Fourier transform of a real symmetric function is also real and symmetric, as shown in Figure 13.13 above.
Further useful results are as follows.
F[f′(x)] = ik g(k)   (13.53a)
F[f″(x)] = −k² g(k)   (13.53b)
and so on for higher derivatives of f(x).
F[f(ax)] = (1/|a|) g(k/a)   (13.54)
where a is a constant.
F[f(x − a)] = e^{−ika} g(k)   (13.55a)
where a is a constant. Inverting (13.55a), we also have
F[e^{iax} f(x)] = g(k − a)   (13.55b)
These results are easily proved, as shown in Example 13.10. Here we illustrate a very important property of Fourier transforms arising from (13.54) by considering the Fourier transform
F[e^{−|x|}] = 2/(1 + k²),
evaluated in Example 13.9a. Then from (13.54) we see that the Fourier transform of
f(x) = e^{−a|x|}   (13.56a)
is
g(k) = 2a/(a² + k²)   (13.56b)
The functions f(x) and g(k) are shown in Figure 13.14. They are peaked, with widths at half maximum height of 2ln 2/a = 1.4/a and 2a, respectively. Hence, if the peak in f(x) is broadened by reducing a, the corresponding peak in g(k) is narrowed; and vice versa if a is increased. This behaviour is a general characteristic of Fourier transforms and is a direct consequence of (13.54). For a Fourier transform to exist, we must have f(x) → 0 as x → ±∞ by (13.45); and if the intervening distribution is broadened without changing shape by a transformation x → ax, with a < 1, then the Fourier transform g(k) is narrowed by a corresponding transformation k → k/a; and vice versa if a > 1. Another example is given in Example 13.12 below.
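This reciprocal behaviour is easy to reproduce numerically. A sketch taking f(x) = e^{−a|x|}, whose transform in the convention g(k) = ∫ f e^{−ikx} dx is 2a/(a² + k²); the cut-off L and grid sizes are arbitrary choices:

```python
import numpy as np

def g_numeric(k, a, L=60.0, n=400001):
    # direct quadrature of g(k) = integral of exp(-a|x|) exp(-ikx)
    x = np.linspace(-L, L, n)
    dx = x[1] - x[0]
    y = np.exp(-a * np.abs(x)) * np.exp(-1j * k * x)
    return ((y.sum() - 0.5 * (y[0] + y[-1])) * dx).real

a = 2.0
vals = np.array([g_numeric(k, a) for k in (0.0, 1.0, 2.0)])
exact = np.array([2 * a / (a ** 2 + k ** 2) for k in (0.0, 1.0, 2.0)])

# half-maximum widths: 2 ln2 / a in x and 2a in k, so their product is fixed
width_product = (2 * np.log(2) / a) * (2 * a)
print(vals, exact, width_product)  # width product = 4 ln 2 for every a
```

Broadening f by decreasing a narrows g in exactly compensating fashion, so the product of the two half-maximum widths is independent of a.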
In physical applications, one is often more interested in the widths of the distributions |f(x)|² and |g(k)|² than in those of |f(x)| and |g(k)| themselves. The former are conveniently characterised by the root mean square deviations Δx and Δk from the means ⟨x⟩ and ⟨k⟩, respectively. These are defined by
(Δx)² = ⟨(x − ⟨x⟩)²⟩ = ⟨x²⟩ − ⟨x⟩²,
where the mean ⟨h(x)⟩ of any function h(x) is defined by
⟨h(x)⟩ = ∫_{−∞}^{∞} h(x) |f(x)|² dx / ∫_{−∞}^{∞} |f(x)|² dx,
with analogous expressions for (Δk)² and ⟨k⟩ in terms of g(k). For example, if one uses the distribution (13.56), one finds that ⟨x⟩ = ⟨k⟩ = 0 because the relevant integrands are odd, and using these results in the integrals for the mean square deviations gives (Δx)² = 1/(2a²) and (Δk)² = a², so that Δx Δk = 1/√2. More generally, it can be shown that
Δx Δk ≥ 1/2   (13.57)
where the minimum value is obtained for the Gaussian distribution, whose Fourier transform is given in (13.71) below. This result is called the bandwidth theorem and is closely related to an important result in quantum mechanics. In this case, if f(x) is the wavefunction of a particle, then |f(x)|2 is proportional to the probability of finding the particle at position x, and |g(k)|2 is proportional to the probability that the particle has an associated wave number k = 2π/λ. Since in quantum mechanics the momentum p of the particle is given by the de Broglie relation p = h/λ, where h is the Planck constant, using k = 2π/λ we may write the bandwidth theorem as
Δx Δp ≥ ℏ/2,
where ℏ ≡ h/2π is the ‘reduced’ Planck constant. This result is called the Heisenberg uncertainty principle and sets a fundamental limit on the precision with which the position and momentum of a particle can be simultaneously known.
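These deviations can be verified directly. A sketch taking f(x) = e^{−a|x|} and g(k) = 2a/(a² + k²) (assumed to be the distribution of (13.56); a = 1.5 is an arbitrary choice); the means vanish by symmetry, so the mean square deviations are just weighted averages of x² and k²:

```python
import numpy as np

a = 1.5

# <x> = <k> = 0 by symmetry, so (dx)^2 = <x^2> etc., weighted by |f|^2 and |g|^2
x = np.linspace(-40, 40, 400001)
w_x = np.exp(-a * np.abs(x)) ** 2
dx2 = (x ** 2 * w_x).sum() / w_x.sum()        # expect 1/(2 a^2)

k = np.linspace(-2000, 2000, 2000001)
w_k = (2 * a / (a ** 2 + k ** 2)) ** 2
dk2 = (k ** 2 * w_k).sum() / w_k.sum()        # expect a^2

product = np.sqrt(dx2 * dk2)                  # expect 1/sqrt(2) >= 1/2
print(dx2, dk2, product)
```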
In applications of Fourier transforms, it is often convenient to extend their definition to include F[e^{iqx}], even though e^{iqx} does not tend to zero as x → ∞. This is achieved by introducing the Dirac delta function δ(x), defined by the relation
∫_{x1}^{x2} f(x) δ(x − a) dx = f(a)   (13.58a)
where f(x) is any continuous function and a is a real constant that lies within the range of integration, that is, x1 < a < x2. If a lies outside the range of integration, the integral is defined to be zero. In particular,
∫_{−∞}^{∞} δ(x) dx = 1   (13.58b)
and δ(x) = 0 except at the single point x = 0. This property of the δ function is unlike that of any function we have met previously, and for this reason it was viewed with some scepticism by mathematicians when Dirac, a theoretical physicist, first introduced it in his classic formulation of quantum mechanics. It is now recognised as the first of a new class of functions called generalised functions,5 to which a set of well-behaved functions fn(x) approximates ever more closely as the integer n → ∞. The choice of these functions is not unique and for the δ function, possible sequences include
(13.59)
and
(13.60)
as shown in Figures 13.16a and 13.16b, respectively.
In both cases, fn(x) satisfies the normalisation condition (13.58b) for all n and fn(x) → 0 as n → ∞ provided x ≠ 0. Other useful properties of the δ function that follow from (13.58) are
δ(−x) = δ(x),  x δ(x) = 0,  δ(ax) = δ(x)/|a|  (a ≠ 0)   (13.61)
Finally, let us consider δ[f(x)], which is clearly non-zero only at points x = ai, where f(ai) = 0. If these are all ‘simple zeros’, so that
f(x) ≈ (x − ai) f′(ai),  f′(ai) ≠ 0,
near x = ai, then using δ(ax) = δ(x)/|a| we have
δ[f(x)] = δ(x − ai)/|f′(ai)|   (13.62a)
near x = ai, and adding the contributions from all the zeros gives the useful result
δ[f(x)] = Σ_i δ(x − ai)/|f′(ai)|   (13.62b)
for any functions f(x) with only simple zeros on the real axis.
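This rule can be tested by replacing the delta function with a very narrow normalised Gaussian, in the spirit of approximating sequences. A sketch for f(x) = x² − a², whose simple zeros are ±a with |f′(±a)| = 2a (the width eps and grid are arbitrary choices):

```python
import numpy as np

a, eps = 2.0, 1e-4
x = np.linspace(-5, 5, 2000001)
dx = x[1] - x[0]

# narrow normalised Gaussian standing in for delta(x^2 - a^2)
d = np.exp(-((x ** 2 - a ** 2) ** 2) / (2 * eps ** 2)) / (eps * np.sqrt(2 * np.pi))
lhs = (np.exp(-x) * d).sum() * dx          # integral of e^{-x} delta(x^2 - a^2)
rhs = (np.exp(-a) + np.exp(a)) / (2 * a)   # [h(a) + h(-a)] / (2a) with h = e^{-x}
print(lhs, rhs)
```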
The delta function is very useful in physical applications and many situations may be analysed using it even if they are not described exactly by a delta function, but by a very narrow distribution like (13.59) and (13.60) for large n. It also enables us to define the Fourier transform F [eiqx], even though the integral (13.45) is manifestly divergent in this case. From the Fourier integral formulas (13.44) and the definition (13.58), we have
f(x) = (1/2π) ∫_{−∞}^{∞} dk e^{ikx} ∫_{−∞}^{∞} dz f(z) e^{−ikz}   (13.63)
for any f(x), so that
δ(x − z) = (1/2π) ∫_{−∞}^{∞} e^{ik(x−z)} dk   (13.64a)
or
δ(k) = (1/2π) ∫_{−∞}^{∞} e^{ikx} dx   (13.64b)
implying that the Fourier transform
F[e^{iqx}] = 2π δ(k − q)   (13.65)
In any conventional definition the integrals in (13.64) do not exist. Nonetheless, these expressions may be safely used inside integrals,6 as in (13.63). To illustrate their use, suppose
and consider
where we have integrated over x and used (13.64). Integrating over q using (13.65) then gives Parseval's relation
∫_{−∞}^{∞} f1(x) f2*(x) dx = (1/2π) ∫_{−∞}^{∞} g1(k) g2*(k) dk   (13.66)
Finally, setting f1 = f2 = f and hence g1 = g2 = g, we obtain
∫_{−∞}^{∞} |f(x)|² dx = (1/2π) ∫_{−∞}^{∞} |g(k)|² dk   (13.67)
which is the generalisation of Parseval's theorem for Fourier transforms.
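A sketch checking this for the Gaussian f(x) = e^{−x²}, whose transform in the convention g(k) = ∫ f e^{−ikx} dx is √π e^{−k²/4}; both sides should equal √(π/2):

```python
import numpy as np

x = np.linspace(-10, 10, 200001)
k = np.linspace(-40, 40, 200001)

lhs = (np.exp(-x ** 2) ** 2).sum() * (x[1] - x[0])       # integral of |f|^2
g = np.sqrt(np.pi) * np.exp(-k ** 2 / 4)
rhs = (np.abs(g) ** 2).sum() * (k[1] - k[0]) / (2 * np.pi)  # (1/2pi) integral of |g|^2
print(lhs, rhs)  # both close to sqrt(pi/2)
```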
Given two functions f1(x) and f2(x), their convolution is defined by
f1*f2 ≡ ∫_{−∞}^{∞} f1(y) f2(x − y) dy   (13.68)
assuming that the integral converges. Substituting z = x − y for fixed x gives
so that the apparent asymmetry in (13.68) is illusory and f1*f2 = f2*f1.
Now suppose we have two functions f1(x) and f2(x), which have Fourier transforms
g1(k) = F[f1(x)]  and  g2(k) = F[f2(x)],
and are such that
F[f1(x) f2(x)] = ∫_{−∞}^{∞} f1(x) f2(x) e^{−ikx} dx   (13.69)
is convergent. Then the convolution theorem states that the Fourier transform of the product f1(x)f2(x) is the convolution of the Fourier transform g1(k) and g2(k), i.e.
F[f1(x) f2(x)] = (1/2π) g1*g2 = (1/2π) ∫_{−∞}^{∞} g1(k′) g2(k − k′) dk′   (13.70a)
and conversely, the Fourier transform of the convolution f1*f2 is given by
F[f1*f2] = g1(k) g2(k)   (13.70b)
In what follows, we will first derive (13.70b) and then give some examples of its application. The Fourier transform of the convolution is
F[f1*f2] = ∫_{−∞}^{∞} dx e^{−ikx} ∫_{−∞}^{∞} dy f1(y) f2(x − y).
Substituting z = x − y then gives
F[f1*f2] = ∫_{−∞}^{∞} dy f1(y) e^{−iky} ∫_{−∞}^{∞} dz f2(z) e^{−ikz} = g1(k) g2(k),
which is the required result. The proof of (13.70a) proceeds in a similar way and is left as an exercise for the reader.
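The theorem can be checked numerically on two Gaussians, for which everything is known in closed form: with f1 = e^{−x²} and f2 = e^{−2x²}, one has g1 = √π e^{−k²/4} and g2 = √(π/2) e^{−k²/8} in the convention g(k) = ∫ f e^{−ikx} dx. A sketch (grid sizes and sample wave numbers are arbitrary choices):

```python
import numpy as np

L, n = 20.0, 4001
x = np.linspace(-L, L, n)
dx = x[1] - x[0]
f1 = np.exp(-x ** 2)
f2 = np.exp(-2 * x ** 2)
conv = np.convolve(f1, f2, mode="same") * dx   # (f1*f2)(x) sampled on the same grid

def ft(h, k):
    # direct quadrature of the transform of h at wave number k
    return ((h * np.exp(-1j * k * x)).sum() * dx).real

ks = np.array([0.0, 0.7, 1.5])
lhs = np.array([ft(conv, k) for k in ks])                  # F[f1 * f2]
rhs = (np.sqrt(np.pi) * np.exp(-ks ** 2 / 4)
       * np.sqrt(np.pi / 2) * np.exp(-ks ** 2 / 8))        # g1(k) g2(k)
print(lhs, rhs)
```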
In the introduction to Fourier transforms, we were motivated by the need to Fourier analyse wave-packets like (13.42). From (13.65) the Fourier transform of e^{iqx} is 2πδ(k − q), while the Fourier transform of a Gaussian is7
F[e^{−x²/a²}] = a√π e^{−a²k²/4}   (13.71)
Hence from the convolution theorem (13.70a), we have
F[e^{iqx} e^{−x²/a²}] = a√π e^{−a²(k−q)²/4}   (13.72)
This is illustrated in Figure 13.17, where we show the Fourier transforms (13.65) and (13.71) and their convolution, which is the Fourier transform of their product (13.72). From this we can see that the effect of multiplying eiqx by the Gaussian is to smear out its Fourier transform δ(k − q) into a Gaussian; and the narrower the Gaussian in x, the broader it becomes in k.8
13.1 Find the Fourier expansion of the function
with a period of 2π.
13.2 Find the Fourier series for the function
with a period of 2π, and hence deduce the sum of the series
13.3 Find the Fourier series of the function
with periodicity 2π.
13.4 Show that the Fourier series with period 2π for the function f(x) = cos (μx), where − π ≤ x ≤ π and μ is non-integer, is
Hence deduce an expansion for and show that
13.5 The following functions f(x) have period 2π and are given by
in the range − π < x < π. Which of these functions are guaranteed to have convergent Fourier series by the Dirichlet conditions? If so, find the series.
13.6 Find the Fourier series of period 2π for the function
and hence deduce the value of .
13.7 Find the Fourier series of the function f(x) of period 2L if f(x) = x in the interval (0, 2L).
13.8 Find the Fourier cosine series for the function
13.9 The function defined by f(x) = f(x + 2π) and
has a Fourier expansion
Integrate this series twice and hence deduce a Fourier series for f(x) = x4 in the interval [ − π, π] and with period 2π. Hence show that
13.10 Use the expansion of x2 given in Example 13.4 to show that
13.11 Find the Fourier series of period 2 for the function f(x) = (x − 1)2 in the interval [0, 2] and hence evaluate the sum
13.12 Use the Fourier series for cos (μx) found in Problem 13.4 to show that
13.13 Expand the function f(x) = x2 + x in a complex Fourier series in the interval − π < x < π. A useful integral is
for integer k ≥ 0.
13.14 Find a complex Fourier series for the function
13.15 Find the Fourier transform of the function shown in Figure 13.19.
13.16 Derive the Fourier sine transform (13.46b) for antisymmetric functions f−(x) = −f( − x).
13.17 Find the Fourier sine transform of the function
and hence show that
13.18 Find the Fourier cosine transform of the function f(x) given by
and hence deduce the value of
13.19 Given the Fourier transform
where α > 0, deduce the Fourier transforms of the functions
13.20 Show that the Fourier transform of a real, antisymmetric function is purely imaginary. What are the Fourier transforms of (a) f(x) = cos (2x)sin (3x), (b) δ(x2 − a2)?
*13.21 Show that the function
is the convolution of the ‘top hat’ function
with itself. Check the convolution theorem in this case by calculating the Fourier transform F[Λ] from (a) the convolution theorem and (b) directly from the definition (13.44b).
*13.22 A quantity f(x) is measured using an instrument with known resolution
which is a Gaussian form centred at zero and with a ‘standard deviation’ σr. The experimentally observed distribution e(x) is also of Gaussian form with standard deviation σe > σr. Use the result e = f*r and the convolution theorem, to shown that f(x) also has a Gaussian from and deduce its standard deviation.
Note the integral
*13.23 (a) Show that the Fourier transform of the function
is F[f(x)] = sinc k, where sinc x ≡ sin x/x [cf. (5.63)]. Hence evaluate the integrals
(b) Find the Fourier transform of sinc x, and show that the convolution