III.62 Normed Spaces and Banach Spaces


It is often useful to approximate a function f by a polynomial P. For example, if you are designing a pocket calculator and want it to calculate LOGARITHMS [III.25 §4], you cannot expect it to do so exactly, since a calculator cannot handle infinitely many digits, so instead you will get it to calculate a different function P(x) that approximates log(x) well. Polynomials are a good choice, because they can be built up from the basic operations of addition and multiplication. This idea raises two questions: which functions can you hope to approximate, and what counts as a good approximation?
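As a purely illustrative sketch (not the method an actual calculator uses), the Python fragment below fits a degree-5 polynomial to log x on the interval [1, 2]; the degree and the interval are arbitrary choices made for the example. It then reports the worst error over a fine grid of sample points.

```python
import numpy as np

# Fit a degree-5 polynomial to log(x) on [1, 2] and measure the worst error.
# (Purely illustrative; real calculators use more carefully engineered schemes.)
xs = np.linspace(1.0, 2.0, 1001)            # sample points in the interval
coeffs = np.polyfit(xs, np.log(xs), deg=5)  # least-squares polynomial fit
P = np.poly1d(coeffs)                       # the approximating polynomial P

max_error = np.max(np.abs(P(xs) - np.log(xs)))
print(f"max |P(x) - log(x)| on [1, 2] is about {max_error:.1e}")
```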

Clearly, the answer to the second question determines the answer to the first, but there is no single right answer to the second: it is up to you what you would like to declare to be a good approximation. However, not all decisions are equally natural. Suppose that P and Q are polynomials, f and g are more general functions, and x is a real number. If P(x) is close to f(x) and Q(x) is close to g(x), then P(x) + Q(x) will be close to f(x) + g(x). Also, if λ is a real number and P(x) is close enough to f(x), then λP(x) will be close to λf(x). This informal argument suggests that the functions that we can approximate well will form a VECTOR SPACE [I.3 §2.3].

We have arrived, by one of many possible routes, at the following general situation: we are given a vector space V (consisting, in our case, of certain functions) and we would like to be able to say in a precise way what it is for two elements of the vector space to be close.

The idea of closeness is formally captured by the notion of a METRIC SPACE [III.56], so the obvious approach is to define a metric d on the vector space V. Now a general principle, when putting two structures together (in this case, the linear structure of the vector space and the distance structure of the metric), is that the two structures should relate to one another in a natural way. In our case, there are two natural properties that one should ask for. The first is translation invariance. If u and v are two vectors and we translate them by adding w to both, then their distance should not change: that is, d(u + w, v + w) = d(u, v). The second is that the metric should scale correctly. For example, if one doubles two vectors u and v, then the distance between them should double. More generally, if one multiplies u and v by a scalar λ, then the distance between them should multiply by |λ|: that is, d(λu, λv) = |λ|d(u, v).
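A minimal numerical spot-check of these two compatibility conditions, assuming the familiar Euclidean distance on R^3 and arbitrarily chosen vectors and scalar:

```python
import numpy as np

# The Euclidean distance d(u, v) = |v - u| on R^3.
def d(u, v):
    return np.linalg.norm(v - u)

u = np.array([1.0, -2.0, 0.5])
v = np.array([3.0, 0.0, -1.0])
w = np.array([-4.0, 2.0, 7.0])
lam = -2.5

# Translation invariance: d(u + w, v + w) = d(u, v).
print(np.isclose(d(u + w, v + w), d(u, v)))                   # True
# Correct scaling: d(lam*u, lam*v) = |lam| * d(u, v).
print(np.isclose(d(lam * u, lam * v), abs(lam) * d(u, v)))    # True
```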

If a metric has the first of these properties, then, setting w = -u, we find that d(u, v) = d(0, v - u). It follows that if we know distances from 0, then we know all distances. Let us write ||v|| instead of d(0, v). Then what we have just shown is that d(u, v) = ||v - u||. The expression || · || is called a norm, and ||v|| is the norm of v. The following two properties of norms are easy to deduce from the fact that d is a metric that scales properly.

(i) For any vector v, ||v|| ≥ 0. Moreover, ||v|| = 0 only if v = 0.

(ii) For any vector v and any scalar λ, ||λv|| = |λ| ||v||.

We also have the so-called triangle inequality.

(iii) ||u + v|| ≤ ||u|| + ||v|| for any two vectors u and v.

This follows from translation invariance and the triangle inequality for metric spaces, since

||u + v|| = d(0, u + v) ≤ d(0, u) + d(u, u + v)

      = d(0, u) + d(0, v) = ||u|| + ||v||.

In general, any function || · || on a vector space V that has properties (i)–(iii) is called a norm on V. A vector space with a norm on it is called a normed space. Given a normed space V, we can say that two vectors u and v are close if their distance ||v - u|| is small.
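As a concrete sketch, assuming nothing beyond the definition just given, the following Python fragment spot-checks properties (i)–(iii) for two standard norms on R^4 (the Euclidean norm and the maximum norm) on randomly sampled vectors; sampling cannot, of course, verify the "only if v = 0" part of (i).

```python
import numpy as np

rng = np.random.default_rng(0)

def check_norm_axioms(norm, n=4, trials=1000):
    """Spot-check (i) nonnegativity, (ii) homogeneity, (iii) the triangle inequality."""
    for _ in range(trials):
        u = rng.normal(size=n)
        v = rng.normal(size=n)
        lam = rng.normal()
        assert norm(u) >= 0 and np.isclose(norm(np.zeros(n)), 0.0)   # (i), partially
        assert np.isclose(norm(lam * v), abs(lam) * norm(v))         # (ii)
        assert norm(u + v) <= norm(u) + norm(v) + 1e-12              # (iii)

check_norm_axioms(lambda v: np.linalg.norm(v))     # Euclidean norm
check_norm_axioms(lambda v: np.max(np.abs(v)))     # maximum norm on R^4
print("axioms (i)-(iii) hold on all sampled vectors")
```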

There are many important examples of normed spaces, several of which are discussed elsewhere in this volume. One class of examples that stands out is that of HILBERT SPACES [III.37], which can be thought of as the normed spaces whose norms are given by distances that stay the same not just when you translate but also when you rotate. Other examples are discussed in FUNCTION SPACES [III.29].

Let us return to the problem of how to discuss approximation by polynomials. The most commonly given answers to the two questions that arose earlier are as follows. The functions that one can approximate well are all continuous functions defined on some closed interval [a, b] of real numbers. These functions form a vector space which is denoted C[a, b]. To make the notion of good approximation precise, we introduce a norm on this space: ||f|| is defined to be the largest value of |f(x)| for any x in the interval (that is, for any x between a and b). With this definition, the distance ||f - g|| between two functions f and g will be small if and only if |f(x) - g(x)| is small for every x in the interval. In this situation one says that f uniformly approximates g. It is not obvious that every continuous function on [a, b] can be uniformly approximated by a polynomial: the statement that it can is called the Weierstrass approximation theorem.
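The Weierstrass approximation theorem has a classical constructive proof via Bernstein polynomials, which the discussion above does not mention; the sketch below is an illustration of the statement rather than a proof. It builds the Bernstein polynomial of degree n for the continuous but non-smooth function f(x) = |x - 1/2| on [0, 1] and estimates the sup-norm error on a fine grid.

```python
import numpy as np
from math import comb

def bernstein(f, n, x):
    """Bernstein polynomial B_n(f)(x) = sum_k f(k/n) C(n,k) x^k (1-x)^(n-k) on [0, 1]."""
    k = np.arange(n + 1)
    weights = np.array([comb(n, int(j)) for j in k], dtype=float)
    # Evaluate all n + 1 basis polynomials at every grid point x.
    basis = weights * x[:, None] ** k * (1 - x[:, None]) ** (n - k)
    return basis @ f(k / n)

f = lambda x: np.abs(x - 0.5)        # continuous on [0, 1], not differentiable at 1/2
xs = np.linspace(0.0, 1.0, 2001)     # grid on which we estimate the sup norm

for n in (5, 20, 80, 320):
    error = np.max(np.abs(bernstein(f, n, xs) - f(xs)))
    print(f"degree {n:3d}: sup-norm error about {error:.4f}")
```

The reported errors shrink as the degree grows, although only slowly for this particular f, whose kink at 1/2 makes it a relatively awkward function to approximate.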

Here is a different way in which normed spaces arise. For most PARTIAL DIFFERENTIAL EQUATIONS [I.3 §5.4] it is not possible to write down a tidy formula that solves them. However, there are many techniques for proving that solutions exist, and they usually involve limiting arguments. For example, sometimes one can generate a sequence of functions f1, f2, . . . and show that these functions “converge” to some “limiting function” f, which, owing to the way we constructed the sequence f1, f2, . . . , must be a solution to the equation. Again, if we want to make sense of this, we must know what it is for two functions to be close, which means that the functions fn should belong to a normed space.
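As a small illustration of such a limiting argument (using an ordinary rather than a partial differential equation, for simplicity), the sketch below runs Picard iteration for y' = y with y(0) = 1 on [0, 1]: each iterate is obtained from the previous one by integrating it, and the iterates approach the true solution e^x uniformly.

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 2001)

def cumtrapz(y, x):
    """Cumulative trapezoidal integral of y against x, starting from 0."""
    dx = np.diff(x)
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * dx)))

# Picard iteration for y' = y, y(0) = 1:  f_{n+1}(x) = 1 + integral from 0 to x of f_n(t) dt.
# (The numerical integral limits how small the final distances can get.)
f = np.ones_like(xs)                  # f_1 is the constant function 1
for n in range(2, 11):
    f = 1.0 + cumtrapz(f, xs)
    sup_dist = np.max(np.abs(f - np.exp(xs)))
    print(f"iterate {n:2d}: sup-norm distance to exp about {sup_dist:.2e}")
```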

How can we show that these functions converge to a limit f if we cannot already describe f? The answer is that most interesting normed spaces, including Hilbert spaces and most important function spaces, have an additional property, called completeness, which guarantees, under certain conditions, that limits do indeed exist. Informally, it says that if the vectors in a sequence v1, v2,... all get very close to each other when you go far enough along the sequence, then they must converge to a limit, v, that belongs to the normed space as well. A complete normed space is known as a Banach space, after the Polish mathematician STEFAN BANACH [VI.84], who developed much of the general theory of such spaces. Banach spaces have many useful properties that normed spaces do not have in general: the completeness property can be thought of as ruling out pathological examples.
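To make the distinction concrete, here is an illustrative sketch under the sup norm on [0, 1]. The partial sums of the exponential series are polynomials whose mutual sup-norm distances shrink (they form a Cauchy sequence), but their uniform limit, e^x, is not a polynomial; so the space of polynomials with this norm is not complete, whereas C[0, 1], which does contain the limit, is.

```python
import numpy as np
from math import factorial

xs = np.linspace(0.0, 1.0, 2001)

def partial_sum(n, x):
    """The degree-n Taylor polynomial of exp: sum of x^k / k! for k <= n."""
    return sum(x ** k / factorial(k) for k in range(n + 1))

def sup_dist(f_vals, g_vals):
    return np.max(np.abs(f_vals - g_vals))

# Consecutive partial sums get uniformly close to one another ...
for n in (3, 6, 9, 12):
    gap = sup_dist(partial_sum(n + 1, xs), partial_sum(n, xs))
    print(f"||p_{n+1} - p_{n}|| about {gap:.2e}")

# ... and the uniform limit is exp, which lies in C[0, 1] but is not a polynomial.
print(f"||p_15 - exp|| about {sup_dist(partial_sum(15, xs), np.exp(xs)):.2e}")
```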

The theory of Banach spaces is sometimes known as linear analysis, since by mixing vector spaces and metric spaces it mixes linear algebra and analysis. Banach spaces arise throughout modern analysis: see, for example, the articles in this volume on PARTIAL DIFFERENTIAL EQUATIONS [IV.12], HARMONIC ANALYSIS [IV.11], and OPERATOR ALGEBRAS [IV.15].
