Perhaps the most important application of the higher derivatives introduced in Section 3.4 is that, provided they exist, they enable functions in the neighbourhood of a given point to be approximated by polynomials in such a way that the accuracy of the approximation increases as the order of the polynomials increases. Such so-called Taylor expansions are useful because the resulting polynomials are often much easier to study and evaluate than the original functions themselves, and they have many applications, as we will see. Firstly, however, we must introduce some basic ideas about series and expansions in general.
A series is the sum u_0 + u_1 + u_2 + ⋅⋅⋅ of an ordered sequence {u_n} of elements u_n (n = 0, 1, 2, …). The elements may be numbers, for example, obtained from
or functions, such as u_n = 1, x, x^2, …, obtained from u_n = x^n (n = 0, 1, 2, …),
and the sequence may contain a finite number of (N + 1) terms,

U_N = u_0 + u_1 + ⋅⋅⋅ + u_N,    (5.3)

or an infinite number of terms,

U = u_0 + u_1 + u_2 + ⋅⋅⋅,    (5.4)

where U_N and U are the sums of the series. In the latter case, we often require the limiting form of u_n as n becomes arbitrarily large. In general, for any sequence {u_n}, the statement

lim_{n→∞} u_n = u,    (5.5a)

or equivalently

u_n → u as n → ∞,    (5.5b)

means that for any ϵ > 0, however small, we can find an integer p such that

|u_n − u| < ϵ for all n > p.
Thus for the series (5.1) we have
whereas for the series (5.2) the behaviour depends on the variable x. For example, for |x| < 1, un → 0 as n → ∞, whereas for x > 1, un → ∞. For x ≤ −1 the terms oscillate in sign as n increases and there is no definite limit.
At this point, we note that in writing (5.3) and (5.4) with the same elements u_n, we have implicitly assumed the existence of a well-defined finite limit

U = lim_{N→∞} U_N.    (5.6)
If such a limit does exist, the infinite series (5.4) is said to converge, that is, the infinite number of terms yields a finite sum U. On the other hand, if a well-defined finite limit U does not exist, the series does not converge and has no obvious meaning.
The question of whether or not a given sequence {un} leads to a convergent series will be discussed in general in Section 5.2. Here we will consider just two examples that occur frequently in applications.
Arithmetic series
These are any series that can be written in the form

A_N = a + (a + x) + (a + 2x) + ⋅⋅⋅ + (a + Nx),

for any values of a and x that are independent of n. The series contains (N + 1) terms and, since they increase at a steady rate, their average value is given by

[a + (a + Nx)]/2 = a + Nx/2,

that is, the average of the first and last terms. The sum of the series is therefore given by

A_N = (N + 1)(a + Nx/2) = (N + 1)(2a + Nx)/2.
As N → ∞, AN → ∞, so that the arithmetic series does not lead to a convergent infinite series.
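The closed-form sum can be checked numerically against a direct term-by-term sum. The following Python sketch uses the names a, x and N from the text; the function name arithmetic_sum is ours.

```python
# Numerical check of the arithmetic-series sum A_N = (N + 1)(2a + Nx)/2.

def arithmetic_sum(a, x, N):
    """Closed-form sum of a + (a + x) + (a + 2x) + ... + (a + Nx)."""
    return (N + 1) * (2 * a + N * x) / 2

a, x, N = 3.0, 0.5, 100
direct = sum(a + n * x for n in range(N + 1))   # sum the (N + 1) terms directly
assert abs(direct - arithmetic_sum(a, x, N)) < 1e-9
print(arithmetic_sum(a, x, N))   # -> 2828.0
```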
Geometric series
These are series that can be written in the form

G_N = a + ax + ax^2 + ⋅⋅⋅ + ax^N,    (5.9)

where again, a and x are independent of n. Explicitly,

G_N = a + ax + ax^2 + ⋅⋅⋅ + ax^N

and

xG_N = ax + ax^2 + ⋅⋅⋅ + ax^N + ax^{N+1}.

Hence

G_N − xG_N = a(1 − x^{N+1}),

and the geometric series (5.9) sums to

G_N = a(1 − x^{N+1})/(1 − x),  x ≠ 1.    (5.10)

In this case, provided |x| < 1, x^{N+1} → 0 as N → ∞, so that

G = a + ax + ax^2 + ⋅⋅⋅ = a/(1 − x)

is a well-defined convergent series. In particular, setting a = 1 gives the useful result

1 + x + x^2 + ⋅⋅⋅ = 1/(1 − x),  |x| < 1.    (5.12)
For |x| ≥ 1, however, the limit of (5.10) as N → ∞ is not finite and well-defined, so that

a + ax + ax^2 + ⋅⋅⋅,  |x| ≥ 1,

is not a convergent series.
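The convergence of the partial sums G_N to a/(1 − x) for |x| < 1 is easy to see numerically. A minimal Python sketch (the function name geometric_partial_sum is ours):

```python
# Partial sums G_N = a(1 - x^(N+1))/(1 - x) of the geometric series,
# illustrating convergence to a/(1 - x) for |x| < 1.

def geometric_partial_sum(a, x, N):
    return a * (1 - x ** (N + 1)) / (1 - x)

a, x = 1.0, 0.5
limit = a / (1 - x)   # = 2.0 for this choice of a and x
for N in (5, 10, 20):
    print(N, geometric_partial_sum(a, x, N))   # approaches 2.0
assert abs(geometric_partial_sum(a, x, 50) - limit) < 1e-12
```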
For most series of the form (5.3), the evaluation of the sum UN as an explicit function of N, enabling the limit (5.6) to be directly taken, is far from easy, if not impossible. Nonetheless, we will often need to know whether such a limit exists, in order to determine whether or not the corresponding infinite series (5.4) converges. In this section we shall introduce two simple tests that will enable this question to be answered in most cases.
The simplest test is that an infinite series can only converge if

u_n → 0 as n → ∞.    (5.15)

To see this, we note that the existence of a finite limit (5.6), together with the definition of a limit (5.5), implies

lim_{N→∞} (U_N − U_{N−1}) = U − U = 0,

and substituting (5.3) into this equation leads directly to (5.15) as required, since U_N − U_{N−1} = u_N. The condition (5.15) is very useful, since we can immediately conclude that any series which does not satisfy (5.15) does not converge. However, we cannot conclude the converse, that any series satisfying (5.15) does converge: while many such series do converge, others, such as the series with u_n = 1/n, do not.
The most useful single result on the convergence of infinite series is d'Alembert's ratio test. It is formulated in terms of the behaviour of the ratio

r_n = |u_{n+1}|/|u_n|

at large n, and can be stated in two forms:

(i) The series (5.4) converges if r_n < r < 1, where r is a constant, for all n ≥ p, where p is a finite integer; and does not converge if r_n > r > 1 for all n ≥ p.    (5.17a)

(ii) If r_n has a well-defined limit r_n → ρ as n → ∞, then if ρ < 1, the series (5.4) converges, and does not converge if ρ > 1.    (5.17b)

The second form (5.17b) of the test follows directly from the first form (5.17a). This is because if r_n → ρ < 1, for example, then the definition of a limit (5.5a) implies that we can always find an integer p such that r_n < r < 1 for all n ≥ p, for any r between ρ and 1. A similar argument applies to the case r_n > ρ > 1. Turning to (5.17a) itself, non-convergence for the case r_n > r > 1 follows because r_n > 1 for all n ≥ p requires |u_n| to increase as n increases, which is clearly incompatible with (5.15). For the case r_n < r < 1, however, the proof of convergence is lengthy and complicated. It will be given for completeness in Section 5.5. Here we simply illustrate its use by example, after first considering its implication for power series of the form

a_0 + a_1(x − x_0) + a_2(x − x_0)^2 + ⋅⋅⋅,    (5.18)
where x_0 and a_n are constants and x is a variable. From the ratio test, if

r_n = |a_{n+1}(x − x_0)^{n+1}|/|a_n(x − x_0)^n| → ρ as n → ∞,

the series will converge for ρ < 1 and diverge for ρ > 1. Since |ab| = |a||b|, this in turn implies that the series converges for values of x such that

|x − x_0| < R = lim_{n→∞} |a_n/a_{n+1}|,    (5.19)

where R is called the radius of convergence of the series (5.18). Conversely, the series does not converge outside the radius of convergence, that is, for |x − x_0| > R, while the case |x − x_0| = R, corresponding to ρ = 1, requires special treatment in each case.
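The ratio |a_n/a_{n+1}| can be evaluated numerically to estimate R. A minimal Python sketch, using our own example a_n = 1/2^n (for which R should be 2) and our own function name radius_estimate:

```python
# Ratio-test estimate of the radius of convergence R = lim |a_n / a_{n+1}|
# for the power series sum of a_n x^n, with a_n = 1/2^n as an example.

def radius_estimate(coeff, n):
    """Finite-n approximation |a_n / a_{n+1}| to the radius of convergence."""
    return abs(coeff(n) / coeff(n + 1))

a = lambda n: 1 / 2 ** n
print(radius_estimate(a, 50))   # -> 2.0
```

Here the ratio is exactly 2 for every n; for coefficients where the ratio varies with n, one would inspect radius_estimate at increasing n to see the limit emerge.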
In this section, we introduce a fundamental theorem that enables differentiable functions to be approximated by polynomials in such a way that the approximation becomes more accurate as the order of the polynomial increases. We will first state and prove the theorem, before discussing some of its simpler applications.
If f(x) is continuous in the range x_0 to x_0 + h inclusive, and all its derivatives up to and including f^{(N+1)}(x) exist in the same range, then Taylor's theorem states that f(x_0 + h) can be written in the form

f(x_0 + h) = f(x_0) + hf'(x_0) + (h^2/2!)f''(x_0) + ⋅⋅⋅ + (h^N/N!)f^{(N)}(x_0) + R_N,    (5.21)

where the remainder

R_N = (h^{N+1}/(N + 1)!)f^{(N+1)}(x_0 + θh)

for at least one value of θ in the range 0 < θ < 1.
To prove this result, we define a number P for any given h by

f(x_0 + h) = f(x_0) + hf'(x_0) + ⋅⋅⋅ + (h^N/N!)f^{(N)}(x_0) + (h^{N+1}/(N + 1)!)P,    (5.22)

and introduce a function F(x) defined by

F(x) = f(x_0 + h) − f(x) − (x_0 + h − x)f'(x) − ⋅⋅⋅ − ((x_0 + h − x)^N/N!)f^{(N)}(x) − ((x_0 + h − x)^{N+1}/(N + 1)!)P.    (5.23)

We then have

F(x_0 + h) = F(x_0) = 0,    (5.24)

where the first of these relations follows directly from (5.23) and the second by setting x = x_0 in (5.23) and then using (5.22). Since F(x) is continuous in the range x_0 to x_0 + h, it follows from (5.24) that there must be at least one value x = ζ in this range where F(x) is either a maximum or a minimum; and since F(x) is differentiable in this range, we must have

F'(ζ) = 0

by (3.43). On substituting (5.23) into this equation, one finds that the resulting terms cancel in pairs, except for the last two, which give

−((x_0 + h − ζ)^N/N!)f^{(N+1)}(ζ) + ((x_0 + h − ζ)^N/N!)P = 0.

Hence P = f^{(N+1)}(ζ), and writing ζ = x_0 + θh, where 0 < θ < 1, we obtain the desired result (5.21). We now turn to its applications.
Most applications of Taylor's theorem rest upon its use to approximate functions by simple polynomials for small values of h. Thus from (5.21) we see that

f(x_0 + h) = f(x_0) + hf'(x_0) + O(h^2),    (5.26)

where the notation O(h^2) means that terms containing factors of h^2 or higher powers are neglected. (This is referred to as ‘neglecting terms of order h^2’.) For sufficiently small h = δx, it is often a good approximation to write

f(x_0 + δx) ≈ f(x_0) + δx f'(x_0),    (5.27)
where ≈ means ‘approximately equal to’. Equation (5.26) also tells us how the limit f(x) → f(x_0) is approached for small h = (x − x_0). This is particularly useful for taking limits of ratios of two functions f(x) and g(x) that both vanish at x = x_0, so that f(x_0)/g(x_0) = 0/0 is indeterminate. In this case,

lim_{x→x_0} f(x)/g(x) = f'(x_0)/g'(x_0)    (5.28a)

if

g'(x_0) ≠ 0.

This result is known as l'Hôpital's rule. Of course, if f'(x_0) and g'(x_0) also both vanish, i.e.,

f'(x_0) = g'(x_0) = 0,

then (5.28a) is again indeterminate, and repeating the argument gives

lim_{x→x_0} f(x)/g(x) = f''(x_0)/g''(x_0),
and so on.
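The rule can be illustrated numerically. The following Python sketch uses our own example f(x) = sin x, g(x) = x at x_0 = 0, where f(0) = g(0) = 0 and f'(0)/g'(0) = cos 0 / 1 = 1:

```python
import math

# Numerical illustration of l'Hopital's rule for f(x) = sin x, g(x) = x at x0 = 0:
# both vanish at 0, and the ratio tends to f'(0)/g'(0) = 1.

def ratio(x):
    return math.sin(x) / x

for h in (0.1, 0.01, 0.001):
    print(h, ratio(h))   # approaches 1 as h -> 0

assert abs(ratio(1e-6) - 1.0) < 1e-9
```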
At the end of Section 2.1.1, we introduced the bisection method for finding a solution to any desired precision for equations of the form

f(x) = 0,    (5.30)

given an approximate solution x = x_0. In Newton's method, solutions are found by substituting the approximation (5.27) into (5.30) to give

f(x_0) + (x − x_0)f'(x_0) ≈ 0,

and hence a new solution

x_1 = x_0 − f(x_0)/f'(x_0),

which will be an improvement on x_0 provided the latter is close enough to the exact solution for (5.27) to be a reasonable approximation. This can be iterated in the same way to give a second improved solution

x_2 = x_1 − f(x_1)/f'(x_1),
and so on until the desired precision is achieved.
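The iteration x_{k+1} = x_k − f(x_k)/f'(x_k) is easily coded. A minimal Python sketch, using our own example f(x) = x^2 − 2 (root √2) and starting guess x_0 = 1:

```python
# Newton's method: iterate x <- x - f(x)/f'(x) from an approximate solution x0.
# Example (ours): f(x) = x^2 - 2, whose positive root is sqrt(2).

def newton(f, fprime, x0, steps=6):
    x = x0
    for _ in range(steps):
        x = x - f(x) / fprime(x)   # one Newton improvement per step
    return x

root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
print(root)   # close to sqrt(2) = 1.41421356...
assert abs(root - 2 ** 0.5) < 1e-12
```

A fixed step count suffices here because the convergence is very rapid once x_k is close to the root; in practice one would stop when |x_{k+1} − x_k| falls below the desired precision.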
When using Taylor's theorem (5.21), the remainder function R_N can often be bounded, enabling the function f(x) = f(x_0 + h) to be evaluated with known accuracy. In this section, we shall illustrate this by evaluating Euler's number e. To do this, we first apply Taylor's theorem to f(x) = e^x, using f^{(n)}(x) = e^x for all n. Choosing x_0 = 0 in (5.21), we obtain

e^h = 1 + h + h^2/2! + ⋅⋅⋅ + h^N/N! + R_N,  R_N = (h^{N+1}/(N + 1)!)e^{θh},

where 0 < θ < 1. Setting h = 1 gives

e = 1 + 1 + 1/2! + ⋅⋅⋅ + 1/N! + R_N,    (5.33a)

where the remainder

R_N = e^θ/(N + 1)!.    (5.33b)

From this we see that

1/(N + 1)! < R_N < 3/(N + 1)!,    (5.34)

where we have used the fact that e < 3 to bound R_N. Hence if we take R_N ≈ 2/(N + 1)!, in the middle of the allowed range (5.34), then (5.33a) will give a value of e that is accurate to better than 1/(N + 1)!.

To illustrate this, we will calculate e to six significant figures, which requires (N + 1)! ≥ 10^5, i.e. N ≥ 8. Taking N = 8, we have

e ≈ 1 + 1 + 1/2! + ⋅⋅⋅ + 1/8! + 2/9! = 2.71828

to six significant figures. This precision increases very rapidly with N, and indeed as N → ∞ we obtain the convergent series

e = 1 + 1 + 1/2! + 1/3! + ⋅⋅⋅ = Σ_{n=0}^{∞} 1/n!,
as RN → 0.
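The truncated series and the remainder bound (5.34) can be verified directly. A minimal Python check for N = 8:

```python
import math

# Truncated series e = sum_{n=0}^{N} 1/n! + R_N, with the remainder bound
# 1/(N+1)! < R_N < 3/(N+1)! from (5.34), checked numerically for N = 8.

N = 8
partial = sum(1 / math.factorial(n) for n in range(N + 1))
remainder = math.e - partial

print(round(partial, 6))   # -> 2.718279
assert 1 / math.factorial(N + 1) < remainder < 3 / math.factorial(N + 1)
```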
In this section, we investigate the existence of power series expansions of the form

f(x) = a_0 + a_1(x − x_0) + a_2(x − x_0)^2 + ⋅⋅⋅ = Σ_{n=0}^{∞} a_n(x − x_0)^n    (5.36)
for a given function f(x) about a point x = x0.
Once again, we start from Taylor's theorem (5.21), which we write in the form

f(x) = Σ_{n=0}^{N} (f^{(n)}(x_0)/n!)(x − x_0)^n + R_N,

obtained by setting x = x_0 + h, where

R_N = (f^{(N+1)}(x_0 + θh)/(N + 1)!)(x − x_0)^{N+1}

and 0 < θ < 1. Then taking the limit N → ∞ gives

f(x) = Σ_{n=0}^{∞} (f^{(n)}(x_0)/n!)(x − x_0)^n,    (5.38)

provided that: an infinite number of derivatives exist in the range x_0 to x inclusive; the series (5.38) converges; and that R_N → 0 as N → ∞. Equation (5.38) then has the form of the desired series (5.36) with

a_n = f^{(n)}(x_0)/n!,

and is called a Taylor series. In the special case x_0 = 0, it reduces to

f(x) = Σ_{n=0}^{∞} (f^{(n)}(0)/n!)x^n    (5.40)
and is called a Maclaurin series.
It is important to stress that a Taylor or Maclaurin series does not always exist. For example, ln x cannot be expanded as a Maclaurin series of the form (5.40), because it is singular at x = 0 and no derivatives f^{(n)}(0) exist. Alternatively, R_N may not tend to zero as N → ∞, or may do so only for a finite range of x, as we shall see shortly. However, none of these problems arise for f(x) = exp(x). Then f^{(n)}(x) = exp(x) for all n, and

R_N = (x^{N+1}/(N + 1)!)e^{θx} → 0

as N → ∞, since the exponential e^{θx} is always smaller than the larger of e^x and 1, and x^n/n! → 0 as n → ∞ by (5.20). The Taylor series (5.38) is

e^x = e^{x_0} Σ_{n=0}^{∞} (x − x_0)^n/n!

and converges for all finite values of x and x_0, as is easily confirmed using the d'Alembert ratio test. For x_0 = 0, it reduces to the Maclaurin series

e^x = 1 + x + x^2/2! + x^3/3! + ⋅⋅⋅ = Σ_{n=0}^{∞} x^n/n!.    (5.42)
Equation (5.42) is one of a number of standard Maclaurin series for important elementary functions. Some of the most important of these are listed together in Table 5.1 and two of them are derived as worked examples below. Before doing so, however, some comments are in order. Firstly, in the trigonometric functions the variable x must be measured in radians, as usual. Secondly, the even (odd) nature of some of the functions is reflected in the fact that only even (odd) powers of x appear in their expansions. Finally, the last series in the table is called the binomial series, because for positive integers α = m the series terminates and reduces to the binomial theorem (1.23) for y = 1, while for α = −1 it is identical to the geometric series (5.12).
Table 5.1 Standard Maclaurin series

e^x = 1 + x + x^2/2! + x^3/3! + ⋅⋅⋅    (all x)
sin x = x − x^3/3! + x^5/5! − ⋅⋅⋅    (all x)
cos x = 1 − x^2/2! + x^4/4! − ⋅⋅⋅    (all x)
sinh x = x + x^3/3! + x^5/5! + ⋅⋅⋅    (all x)
cosh x = 1 + x^2/2! + x^4/4! + ⋅⋅⋅    (all x)
ln(1 + x) = x − x^2/2 + x^3/3 − ⋅⋅⋅    (−1 < x ≤ 1)    (5.44)
(1 + x)^α = 1 + αx + (α(α − 1)/2!)x^2 + ⋅⋅⋅    (|x| < 1)
So far we have concentrated on deriving series expansions directly from Taylor's theorem. However, it is often easier to derive them using ‘standard series’ that have already been obtained, provided that care is taken to confine oneself to regions where the series converge. For example, consider the Maclaurin expansion of f(x) = ln(2 + x^2). Then

f(x) = ln[2(1 + x^2/2)] = ln 2 + ln(1 + z),

where z = x^2/2. Expanding ln(1 + z) using (5.44) and substituting for z then gives the Maclaurin expansion

ln(2 + x^2) = ln 2 + x^2/2 − x^4/8 + x^6/24 − ⋅⋅⋅.    (5.46)

However, since the expansion of ln(1 + z) is only valid for −1 < z ≤ 1, the expansion (5.46) only holds for the corresponding range x^2 ≤ 2, that is, |x| ≤ √2.
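The substitution z = x^2/2 is easy to verify numerically by comparing the truncated series with the exact logarithm. A minimal Python sketch (the function name approx is ours):

```python
import math

# Check of ln(2 + x^2) = ln 2 + x^2/2 - x^4/8 + x^6/24 - ...,
# obtained by substituting z = x^2/2 into the series for ln(1 + z).

def approx(x, terms=6):
    z = x * x / 2
    # ln(1 + z) = z - z^2/2 + z^3/3 - ... (first 'terms' terms)
    return math.log(2) + sum((-1) ** (k + 1) * z ** k / k
                             for k in range(1, terms + 1))

x = 0.5
print(approx(x), math.log(2 + x * x))   # the two values agree closely
assert abs(approx(x) - math.log(2 + x * x)) < 1e-6
```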
We next turn to the algebraic manipulation of series. Suppose we have two series

f(x) = Σ_{n=0}^{∞} u_n(x) and g(x) = Σ_{n=0}^{∞} v_n(x),    (5.47a, b)

which both converge in some given region of x. Then for any number α,

αf(x) = Σ_{n=0}^{∞} αu_n(x)    (5.48)

and

f(x) ± g(x) = Σ_{n=0}^{∞} [u_n(x) ± v_n(x)]    (5.49)

both hold and converge in the same region.
both hold and converge in the same region. These results are almost trivial – they follow easily from the definition (5.5) of a convergent series as a limit of a finite series, together with the rules of arithmetic before the limit is taken – but are very useful. For example, from
and
the series
and
given in Table 5.1 follow directly from the definitions (2.57), together with (5.48) and (5.49).
Another useful result, which we will state without proof, is the Cauchy product. This states that

f(x)g(x) = Σ_{n=0}^{∞} w_n(x),

where

w_n(x) = Σ_{m=0}^{n} u_m(x)v_{n−m}(x),

is the convergent series for the product of the convergent series (5.47a, b), provided that at least one of the series

Σ_{n=0}^{∞} |u_n(x)|,  Σ_{n=0}^{∞} |v_n(x)|

also converges.
Convergent series can also be differentiated and integrated term by term to give new series that converge in the same region. Suppose we have a series

f(x) = Σ_{n=0}^{∞} a_n(x − x_0)^n.    (5.52a)

Then differentiation and integration yield

f'(x) = Σ_{n=1}^{∞} na_n(x − x_0)^{n−1}    (5.52b)

and

∫f(x)dx = c + Σ_{n=0}^{∞} (a_n/(n + 1))(x − x_0)^{n+1},    (5.52c)

respectively, where c is an arbitrary constant; and one easily shows, using (5.19), that all three series converge in the same region

|x − x_0| < R.
For example, the series

cos x = 1 − x^2/2! + x^4/4! − ⋅⋅⋅

given in Table 5.1 follows directly, by term-by-term differentiation, from the corresponding series for sin x given in the same table.
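Term-by-term differentiation is easy to check numerically: differentiating each term of the sin x series gives the cos x series. A minimal Python sketch (the function name sin_series_derivative is ours):

```python
import math

# Term-by-term differentiation of sin x = x - x^3/3! + x^5/5! - ... :
# d/dx [(-1)^k x^(2k+1)/(2k+1)!] = (-1)^k x^(2k)/(2k)!, which is the cos x series.

def sin_series_derivative(x, terms=10):
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(terms))

x = 1.2
print(sin_series_derivative(x), math.cos(x))   # the two values agree closely
assert abs(sin_series_derivative(x) - math.cos(x)) < 1e-10
```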
Finally, before giving some more examples, we note that while we have for simplicity considered Maclaurin expansions about x0 = 0 in this section, the results extend quite easily to expansions about other values x0 ≠ 0.
In Section 5.2, we omitted the derivation of d'Alembert's ratio test and any discussion of series to which it cannot be applied. These omissions will be rectified in this and the following section.
We start by considering positive series, in which all the terms u_n ≥ 0, so that U_N, given by (5.3), can only increase as N increases. So either U_N → U as N → ∞ and the corresponding infinite series converges, or U_N → ∞ and the corresponding series diverges. To decide which, we will obtain two useful results by comparing the series with another series

V = v_0 + v_1 + v_2 + ⋅⋅⋅ = Σ_{n=0}^{∞} v_n,  v_n ≥ 0,    (5.54)

whose convergence properties are already known; and then obtain d'Alembert's ratio test by choosing (5.54) to be an appropriate geometric series.
The first of these results is called the comparison test and may be stated as follows:
If u_n ≤ cv_n for all n ≥ p, where c is a positive constant and p is a non-negative integer, then the series U converges if the series V converges; and if u_n ≥ cv_n, the series U diverges if the series V diverges.    (5.55)
We start by proving this for the case p = 0, that is, when the u_n are less than or greater than cv_n for all n ≥ 0. Then in the case u_n ≤ cv_n, we have

U_N = Σ_{n=0}^{N} u_n ≤ c Σ_{n=0}^{N} v_n = cV_N,

where V_N → V if the series V converges. Hence U_N → U ≤ cV and the series U also converges. A similar argument shows that U diverges if u_n ≥ cv_n and V diverges. Finally, since the convergence of the series is obviously unaffected by changing the values of the first p terms, the result follows for any finite integer p.
The second result is the ratio comparison test:
If u_{n+1}/u_n ≤ v_{n+1}/v_n for all n ≥ p, then the series U converges if the series V converges; and if u_{n+1}/u_n ≥ v_{n+1}/v_n for all n ≥ p, the series U diverges if the series V diverges.    (5.56)
To begin, we prove this for the case u_{n+1}/u_n ≤ v_{n+1}/v_n for all n ≥ p. Then for n > p,

u_n = (u_n/u_{n−1})(u_{n−1}/u_{n−2})⋅⋅⋅(u_{p+1}/u_p)u_p ≤ (v_n/v_{n−1})(v_{n−1}/v_{n−2})⋅⋅⋅(v_{p+1}/v_p)u_p = cv_n,

where the constant c = u_p/v_p. The result (5.56) then follows directly from the comparison test (5.55), and a similar argument holds for the case u_{n+1}/u_n ≥ v_{n+1}/v_n.
We can now use (5.56) to complete the proof of d'Alembert's ratio test for positive series u_n ≥ 0 by choosing V to be the geometric series (5.12) for positive x. This series has v_{n+1}/v_n = x for all n, and it converges for x < 1 and diverges for x > 1. Hence if

r_n = u_{n+1}/u_n < r < 1 for all n ≥ p,
and we choose V to be a geometric series with r < x < 1, then the series V converges and u_{n+1}/u_n < v_{n+1}/v_n for all n ≥ p. The convergence of the series U then follows directly from the ratio comparison test (5.56), as required. A similar argument establishes that U diverges if r_n > r > 1, completing the proof of the d'Alembert ratio test (5.17a) for positive series.
To generalise the proof of d'Alembert's test to any infinite series

U = u_0 + u_1 + u_2 + ⋅⋅⋅,    (5.4)

where the terms u_n may be of either sign, we first consider the related positive series

Σ_{n=0}^{∞} |u_n| = |u_0| + |u_1| + |u_2| + ⋅⋅⋅,    (5.57)
in which all the terms are replaced by their moduli. The key result is that if this series is convergent, then the original series (5.4) is also convergent. In this case, U is said to be absolutely convergent, to distinguish it from conditional convergence where (5.4) is convergent, but the related series (5.57) is not convergent.
To show that the convergence of the series (5.57) implies the convergence of (5.4) itself, as stated above, we define

w_n = u_n + |u_n| ≥ 0    (5.58)

and the corresponding series

W = Σ_{n=0}^{∞} w_n.

Then, since (5.57) converges and 0 ≤ w_n ≤ 2|u_n|, the series W converges by the comparison test (5.55); and since (5.58) implies U = W − Σ_{n=0}^{∞} |u_n|, if W and (5.57) converge, we see that U must also converge. We now obtain the desired result by noting that we have already proved that a positive series like (5.57) will converge if r_n < r < 1 for all n ≥ p, where p is a finite integer. Hence, if this condition is satisfied, the related series (5.4) is absolutely convergent, and therefore convergent, as required by the ratio test (5.17a). Since we have already proved, following (5.17a) and (5.17b), that the series (5.4) does not converge if r_n > r > 1 for all n ≥ p, this completes the proof of d'Alembert's ratio test in the form (5.17a). The form (5.17b) then follows directly from (5.17a) using the definition of a limit, as already noted.
D'Alembert's ratio test enables the convergence properties of most infinite series to be established rather easily, but it says nothing about the convergence of series for which

r_n → 1 as n → ∞.
Such series must be considered separately, case by case. However, we have already obtained several general results that can be applied in any given instance. To recapitulate:
There are also two important results that apply to alternating series of the form

u_0 − u_1 + u_2 − u_3 + ⋅⋅⋅ = Σ_{n=0}^{∞} (−1)^n u_n,  u_n ≥ 0.    (5.60)
They are
(iv) an alternating series of the form (5.60) converges if u_n → 0 as n → ∞ and u_{n+1} ≤ u_n for all n ≥ p, where p is a non-negative integer;    (5.61)

(v) the error in curtailing an alternating series in which the magnitude of the terms is monotonically decreasing is less than the magnitude of the first term omitted.    (5.62)
To prove these results, assume that p is even and consider the sum of the first 2r terms, starting at p. This can be written in the form

S_r = (u_p − u_{p+1}) + (u_{p+2} − u_{p+3}) + ⋅⋅⋅ + (u_{p+2r−2} − u_{p+2r−1}),

and since all the terms in brackets are positive, S_r can only increase as r increases. However, the same sum can also be written in the form

S_r = u_p − (u_{p+1} − u_{p+2}) − (u_{p+3} − u_{p+4}) − ⋅⋅⋅ − u_{p+2r−1},

implying S_r < u_p for all r, since all the terms in brackets are again positive. Since S_r increases with r but remains less than u_p as r → ∞, the original series (5.60) must converge, establishing (5.61). Furthermore, the error made in curtailing the series just before the term in u_p is, apart from its sign, just the limit of S_r as r → ∞; since 0 < S_r < u_p, (5.62) is also established. A similar argument holds for the case where p is odd.
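The error bound (v) can be checked numerically. A minimal Python sketch, using our own example of the alternating series 1 − 1/2 + 1/3 − ⋅⋅⋅ = ln 2:

```python
import math

# Alternating-series error bound: for 1 - 1/2 + 1/3 - ... = ln 2,
# the error after summing N terms is less than the first term omitted, 1/(N+1).

def partial(N):
    """Sum of the first N terms of the alternating harmonic series."""
    return sum((-1) ** n / (n + 1) for n in range(N))

for N in (10, 100, 1000):
    error = abs(math.log(2) - partial(N))
    first_omitted = 1 / (N + 1)
    assert error < first_omitted
    print(N, error, first_omitted)
```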
As well as deriving (5.61) and (5.62), the above proof illustrates the method of grouping terms to rewrite a series in such a way that simple arguments and standard results can be used. This technique is frequently used in determining the convergence of series with ρ = 1, as is illustrated in Example 5.12.
Sum all the odd integers from 23 to 771 inclusive.
Sum the series
for any r > 0. For what values of r, if any, does the series converge as N → ∞?
Sum the series
where x > 0. Does the series converge as N → ∞?
The arithmo-geometric series
where a, x and y are constants, may be summed in a similar manner to the geometric series (5.9). Sum the series and show that
provided |y | < 1.
Which of the following series are convergent?
For what range of x values do the following series converge?
Find the limit as x → 1 of
Find the limits of the following functions as x → 0:
Expand cos x as a Taylor series about and establish its region of validity.
How many terms must be retained in the Maclaurin expansion to evaluate sin x at x = 0.6 rad with an accuracy of 10^{−4}? Confirm your result by comparing it to the more precise value obtained by using a calculator.
Derive the form of the first three non-vanishing terms in the Maclaurin expansion of .
Show that there is no Maclaurin expansion for (a) , (b) and (c) e^{−1/x}.
If f(x) = exp(−1/x^2) for x ≠ 0, with f(0) = 0, show that f^{(n)}(0) = 0 for all n ≥ 0, so that no Maclaurin series exists. Sketch f(x).
Use the binomial expansion to find the first three terms in the Taylor expansion of about x = 1 and x = 2, respectively. What are the regions of validity of the corresponding expansions?
Find the following limits:
The polynomial x5 + x3 − 1 has a single root in the range 0 < x < 1. Use Newton's method to locate this root to 3 significant figures.
The function is defined by
Sketch the function and show that it has a maximum at x = 0. For x ≠ 0, the stationary points of occur approximately at , where n = ±1, ± 2, …, as should be clear from the sketch. Show that this approximation becomes increasingly precise as x → ∞. Use Newton's approximation to find the position of the first minimum to an accuracy of one degree.
Identify the basic rules of elementary algebra (cf. Section 1.2.1) needed to establish the identity
and hence the identity (5.49) in the limit N → ∞, when the series on the left-hand side converges.
Show that all three series (5.52a, b, c) converge in the same region.
Deduce the first three terms in the Maclaurin expansion of . For what values of x is the series valid?
Deduce the first four terms in the Maclaurin expansion of f(x) = excos x. For what values of x is the series valid?
Determine which of the following series are absolutely or conditionally convergent and indicate whether the result depends on the real variable α.
Determine which of the following series, where α > 0, are convergent, and state whether they are absolutely or conditionally convergent.