70 CHAPTER 3 Solving for the Scalar Magnetic Potential
A little more abstractly, suppose a problem has been cast in the form
Ax = b, where b symbolizes the data, x the solution, and A some
mapping of type

SOLUTION → DATA.

Solving the problem means, at a high enough level of abstraction,
finding^7 the inverse A^{-1}, which may not be defined for some values
of b (those corresponding to sharp corners, let's say, for
illustration). But if there is a solution x_n for each element b_n of
some sequence that converges toward b, it's legitimate to define the
limit x = lim_{n → ∞} x_n as the solution, if there is such a limit,
and if there isn't, to invent one.
That's the essence of completion. Moreover, attributing to Ax the
value b, whereas A did not make sense, a priori, for the generalized
solution x, constitutes a prolongation of A beyond its initial domain,
a thing which goes along with completion (cf. A.4.1).
Physicists made much mileage out of this idea of a generalized solution,
as the eventual limit of a parameterized family, before the concepts of
modern functional analysis (complete spaces, distributions, etc.) were
elaborated in order to give it status.
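The limiting process can be made concrete in finite dimensions. In the following sketch (ours, not the text's), A is a singular matrix, so A^{-1} is undefined for most data b; yet the solutions x_n of a family of regularized problems converge to a well-defined "generalized solution", the minimal-norm least-squares one:

```python
import numpy as np

# A singular "mapping": A sends R^2 onto a line, so A^{-1} is not
# defined for every b.  Regularized solutions x_n still converge.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])          # rank 1, hence singular
b = np.array([1.0, 0.0])           # b is NOT in the range of A

x_prev = None
for n in (10, 100, 1000, 10000):
    # Tikhonov-regularized problem: (A^T A + I/n) x = A^T b
    x_prev = np.linalg.solve(A.T @ A + np.eye(2) / n, A.T @ b)

# The limit is the minimal-norm least-squares ("generalized")
# solution, i.e. the pseudo-inverse applied to b.
x_star = np.linalg.pinv(A) @ b
print(x_prev, x_star)   # the two agree to ~1e-3 or better
```

Here the "invented" solution is exactly what the pseudo-inverse delivers: the limit exists even though A^{-1} b does not.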
Summing up: We now attribute the symbol Φ to the completion of the
space of piecewise smooth functions in D, null on S^h_0 and equal to
some constant on S^h_1, with respect to the norm

||φ||_μ = [∫_D μ |grad φ|²]^{1/2}.

Same renaming for Φ^I (which is now the closure of the previous one in
Φ). Equation (1), or Problem (3), now has a (unique) solution. The
next item in order^8 is to solve for it.
3.3 DISCRETIZATION
But what do we mean by that? Solving an equation means being able to
answer specific questions about its solution with controllable accuracy,
whichever way. A century ago, or even more recently in the pre-computer
era, the only way was to represent the solution "in closed form", or as
the sum of a series, thus allowing a numerical evaluation with help of
formulas and tables. Computers changed this: They forced us to work
from the outset with finite representations. Eligible fields and solutions
7 An unpleasantly imprecise word. What is required, actually, is some
representation of the inverse, by a formula, a series, an
algorithm... anything that can give effective access to the solution.
8 Whether Problem (3) is well posed (cf. Note 1.16) raises other
issues, which we temporarily bypass, as to the continuous dependence
of φ on data: on I (Prop. 3.2 gave the answer), on μ (cf. Exers. 3.17
and 3.19), on the dimensions and shape of the domain (Exers. 3.18 and
3.20).
must therefore be parameterized, with a perhaps very large, but finite,
number of parameters.
3.3.1 The Ritz-Galerkin method
Suppose we have a finite catalog of elements of Φ, {λ_i : i ∈ J},
often called trial functions, where J is a (finite) set of indices.
Each λ_i must be a simple function, one which can be handled in closed
form. If we can find a family of real parameters {φ_i : i ∈ J} such
that Σ_i φ_i λ_i is an approximation of the solution, this will be
enough to answer questions the modelling was meant to address,
provided the approximation is good enough, because all the
data-processing will be done via the λ_i's. The parameters φ_i (set
in bold face) are called the degrees of freedom (abbreviated as DoFs
or DoF, as the case may be) of the field they generate. We shall
denote by φ, bold also, the family φ = {φ_i : i ∈ J}.
The Ritz-Galerkin idea consists in restricting the search for a field
of least energy to those of the form Σ_{i ∈ J} φ_i λ_i that belong to
Φ^I. The catalog of trial functions is then known as a Galerkin
basis. (We shall say that it defines an approximation method, and use
the subscript m to denote all things connected with it, when
necessary; most often, the m will be understood.) This is well in
line with the above constructive method for proving existence, for
successive enlargements of the Galerkin basis will generate a sequence
with decreasing energy, and if, moreover, this is a minimizing
sequence, the day is won.
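This enlargement process can be watched at work in one dimension. The setting below is our own illustration, not the text's: minimize F(φ) = (1/2) ∫₀¹ μ |φ'|² with μ(x) = 1 + x, φ(0) = 0, φ(1) = 1, over trial fields φ = x + Σ_k c_k sin(kπx); the minimal energy decreases toward the exact minimum as the basis grows:

```python
import numpy as np

# 1-D illustration (our choice of mu, domain and trial functions):
# enlarging the Galerkin basis can only decrease the minimal energy.
x = np.linspace(0.0, 1.0, 4001)
mu = 1.0 + x

def integ(f):                    # trapezoid rule on the fixed grid
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

def min_energy(m):
    # Derivatives of the trial functions sin(k pi x), which vanish
    # at both ends, so phi = x + sum c_k psi_k meets the BCs.
    dpsi = np.array([k * np.pi * np.cos(k * np.pi * x)
                     for k in range(1, m + 1)])
    K = np.array([[integ(mu * dpsi[j] * dpsi[k]) for k in range(m)]
                  for j in range(m)])            # "stiffness" matrix
    g = np.array([integ(mu * dpsi[j]) for j in range(m)])
    c = np.linalg.solve(K, -g)   # Euler equation of the quadratic form
    dphi = 1.0 + c @ dpsi
    return 0.5 * integ(mu * dphi ** 2)

energies = [min_energy(m) for m in (1, 2, 4, 8)]
exact = 0.5 / np.log(2.0)        # exact minimum: mu * phi' = 1/ln 2
print(energies, exact)           # a non-increasing sequence above exact
```

The successive minima form a non-increasing sequence bounded below by the exact energy, which is precisely the situation described in the paragraph above.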
To implement this, let us introduce some notation: Φ_m is the
finite-dimensional space of linear combinations of functions of the
catalog, that is to say, the space spanned by the λ_i's, and we define

(7)  Φ^I_m = Φ_m ∩ Φ^I,   Φ^0_m = Φ_m ∩ Φ^0.

The approximate problem is thus:

(8)  Find φ_m ∈ Φ^I_m such that F(φ_m) ≤ F(φ') ∀ φ' ∈ Φ^I_m.
This problem has a solution (by the compactness argument of A.2.3,
cf. Remark A.5), since we are considering here a positive definite
quadratic form on a finite-dimensional space. If in addition we
assume that Φ^I_m and Φ^0_m are parallel, just as Φ^I and Φ^0 were in
Fig. 3.2 (this is not automatic, and depends on a sensible choice of
trial functions), then (8) is equivalent, by exactly the same
reasoning we made earlier, to the following Euler equation:
(9)  Find φ_m ∈ Φ^I_m such that ∫_D μ grad φ_m · grad φ' = 0 ∀ φ' ∈ Φ^0_m.
This parallelism condition is usually easy to achieve: It is enough
that some specific combination φ^I_m = Σ_i φ^I_i λ_i satisfy φ^I_m = 0
on S^h_0 and φ^I_m = I on S^h_1. If necessary, such a function will
be built on purpose and added to the list of trial functions. Now all
functions of Φ^I_m are of the form φ^I_m + φ with φ ∈ Φ^0_m, which we
can write in compact form like this:

(10)  Φ^I_m = φ^I_m + Φ^0_m.
In words: Φ^I_m is the translate of Φ^0_m by vector φ^I_m (Fig. 3.4).
It would now be easy to show that (9) is a regular linear system. The
argument relies on uniqueness and on the equality between the number
of unknowns (which are the degrees of freedom) and the number of
equations in (9), which is the dimension of Φ^0_m (cf. Exer. 2.7). We
defer this, however, as well as close examination of the properties of
this linear system, till we have made a specific choice of trial
functions.
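The claim can nevertheless be checked on a small example. In this hedged 1-D sketch (hat trial functions on [0, 1] and elementwise-constant μ, all of our choosing, not the text's), the matrix of (9) is square, symmetric, and positive definite, hence the system is regular:

```python
import numpy as np

# Hat trial functions on a uniform grid of [0, 1]; the discrete Euler
# equation (9) becomes a square linear system over the interior nodes.
n = 8                                    # number of elements
h = 1.0 / n
mu = 1.0 + (np.arange(n) + 0.5) * h      # one constant mu per element

# A[j-1, k-1] = int_D mu grad(lambda_j) . grad(lambda_k), assembled
# element by element; rhs carries the affine part phi^I(x) = x.
A = np.zeros((n - 1, n - 1))
rhs = np.zeros(n - 1)
for e in range(n):                       # element e = [e*h, (e+1)*h]
    local = ((e, -1.0 / h), (e + 1, 1.0 / h))   # node, slope of its hat
    for j, sj in local:
        if 1 <= j <= n - 1:
            rhs[j - 1] -= mu[e] * sj * 1.0 * h
            for k, sk in local:
                if 1 <= k <= n - 1:
                    A[j - 1, k - 1] += mu[e] * sj * sk * h

np.linalg.cholesky(A)        # succeeds only for a positive definite A:
c = np.linalg.solve(A, rhs)  # hence (9) has a unique solution

# Nodal values of phi_m = phi^I + correction; in this 1-D setting the
# discrete flux mu * phi_m' comes out constant from element to element.
u = np.linspace(0.0, 1.0, n + 1)
u[1:n] += c
flux = mu * np.diff(u) / h
print(flux)
```

Note that the number of equations equals the number of DoFs, n − 1 = dim Φ^0_m, exactly as the argument above requires.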
FIGURE 3.4. Geometry of the Ritz-Galerkin method.
Problem (9) is called the "discrete formulation", as opposed to the
"continuous formulation" (1). Both are in weak form, but (9) is
obviously "weaker", since there are fewer test functions. In
particular, the weak solenoidality of b = μ grad φ has been destroyed
by restricting to a finite set of test functions. The span of such a
set cannot be dense in Φ^0, so the proof of Prop. 2.3 is not
available, and we can't expect Eqs. (2.23) and (2.24) in Chapter 2
(about div b = 0 and n · b = 0) to hold for b_m = μ grad φ_m. Still,
something must be preserved, which we shall call, for lack of a better
term, "m-weak solenoidality^9 of b" and "m-weak enforcement of the
n · b boundary condition" on S^b. This also will wait (till the next
chapter).
Meanwhile, it's interesting to examine the geometry of the situation
(Fig. 3.4). The figure suggests that φ_m, which is the projection of
0 on Φ^I_m, is also the projection of φ (the exact solution) on
Φ^I_m. This is correct: To see it, just restrict the test functions
in (1) to elements of Φ^0_m, which we have assumed (cf. (7)) are
contained in Φ^0, which gives

∫_D μ grad φ · grad φ' = 0  ∀ φ' ∈ Φ^0_m.

But by (9) we also have

∫_D μ grad φ_m · grad φ' = 0  ∀ φ' ∈ Φ^0_m,

therefore, by difference,

(11)  ∫_D μ grad(φ_m − φ) · grad φ' = 0  ∀ φ' ∈ Φ^0_m,

which expresses the observed orthogonality.
The figure also suggests a general method for error estimation. Let
r_m φ be an element of Φ^I_m that we would be able to associate with
φ, as an approximation, if we knew φ explicitly. Then we have, as
read off the figure, and proved by setting φ' = φ_m − r_m φ in (11),

||φ − φ_m||_μ ≤ ||φ − r_m φ||_μ

(r_m φ is farther from φ, in energy, than φ_m is). So if we are able
somehow to bound ||φ − r_m φ||_μ, an error bound on φ − φ_m will
ensue. The potential of the idea for error control is obvious, and we
shall return to it in Chapter 4, with a specific Galerkin basis and a
specific r_m.
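The inequality can also be verified numerically. In this sketch of ours (μ(x) = 1 + x on [0, 1], exact solution φ(x) = ln(1 + x)/ln 2, sine trial functions, and an r_m φ built from the truncated Fourier-sine expansion of φ − x, a legitimate element of Φ^I_m), the Galerkin error in the energy norm indeed does not exceed the "interpolation" error:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 4001)
mu = 1.0 + x

def integ(f):                         # trapezoid rule on the fixed grid
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

m = 3
psi = np.array([np.sin(k * np.pi * x) for k in range(1, m + 1)])
dpsi = np.array([k * np.pi * np.cos(k * np.pi * x) for k in range(1, m + 1)])

# Galerkin solution phi_m = x + sum_k c_k psi_k (energy projection):
K = np.array([[integ(mu * dpsi[j] * dpsi[k]) for k in range(m)]
              for j in range(m)])
g = np.array([integ(mu * dpsi[j]) for j in range(m)])
c = np.linalg.solve(K, -g)

# A candidate r_m phi, built "knowing phi explicitly":
phi = np.log1p(x) / np.log(2.0)       # exact solution for mu = 1 + x
a = np.array([2.0 * integ((phi - x) * psi[k]) for k in range(m)])

dphi = 1.0 / ((1.0 + x) * np.log(2.0))          # exact grad phi
err_gal = np.sqrt(integ(mu * (dphi - (1.0 + c @ dpsi)) ** 2))
err_int = np.sqrt(integ(mu * (dphi - (1.0 + a @ dpsi)) ** 2))
print(err_gal, err_int)               # err_gal <= err_int
```

Any computable r_m φ of this kind thus yields a rigorous bound on the otherwise inaccessible error ||φ − φ_m||_μ.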
The Ritz-Galerkin method is of surprising efficiency. If trial functions
are well designed, by someone who has a good feeling for the real solution,
a handful of them may be enough for good accuracy in estimating the
functional. But it's difficult to give guidelines of general value in this
respect, especially for three-dimensional problems. Besides, the computer
changed the situation. We can afford many degrees of freedom nowadays
(some modern codes use millions [We]) and can lavish machine time on
the systematic design of Galerkin bases in a problem-independent way:
This is what finite elements are about.
9 The terminology is hesitant: Some say these equations are approximately satisfied
"in the sense of weighted residuals", or "in the weak sense of finite elements", or even
simply "in the weak sense", which may induce confusion. "Discrete" solenoidality might
be used as a more palatable alternative to "m-weak" solenoidality.
3.3.2 Finite elements
So let be given a bounded domain D ⊂ E_3 with a piecewise smooth
boundary S, and also inner boundaries, corresponding to material
interfaces (discontinuity surfaces of μ, in our model problem).
A finite element mesh is a tessellation of D by volumes of various
shapes, but arranged in such a way that two of them intersect, if they
do, along a common face, edge, or node,^10 and never otherwise. We
shall restrict here to tetrahedral meshes, where all volumes have six
edges and four faces, but this is only for clarity. (In practice,
hexahedral meshes are more popular.^11) Note that a volume is not
necessarily a straight tetrahedron, but may be the image of some
reference tetrahedron by a smooth mapping (inset).^12 This may be
necessary to fit curved boundaries, or to cover infinite regions.
Usually, one also arranges for material interfaces to be paved by
faces of the mesh.
Exercise 3.7. Find all possible ways to mesh a cube by
tetrahedra, under the condition that no new vertex is added.
Drafting a mesh for a given problem is a straightforward, if tedious,
affair. But designing
mesh generators
is much more difficult, a scientific
specialty [Ge] and an industry. We shall not touch either subject, and
our only concern will be for the output of a mesh-generation process.
The mesh is a complex data structure, which can be organized in many
different ways, but the following elements are always present, more or
less directly: (1) a list of nodes of the mesh, pointing to their locations;
(2) a list of edges, faces, and volumes, with indirections allowing one to
know which nodes are at the ends of this and that edge, etc.; (3) parameters
describing the mapping of each volume to the reference one; (4) for each
volume, parameters describing the material properties (for instance, the
average value of ~t, in our case).
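As a hedged sketch of such a data structure (the six-tetrahedron "Kuhn" subdivision of the unit cube is a standard example; the code and the names in it are ours), items (1) and (2) fit in a few containers, with the edge and face lists derived from the volume list:

```python
import numpy as np
from itertools import permutations

# Item (1): the node list, pointing to locations (cube vertices here).
nodes = np.array([[i, j, k] for i in (0, 1) for j in (0, 1) for k in (0, 1)],
                 dtype=float)
index = {tuple(p): n for n, p in enumerate(nodes.astype(int).tolist())}

# Item (2): the volume list -- one tetrahedron per permutation of the
# axes (Kuhn subdivision), each a chain (0,0,0) -> ... -> (1,1,1).
tets = []
for perm in permutations(range(3)):
    v = np.zeros(3, dtype=int)
    chain = [tuple(v)]
    for axis in perm:
        v = v.copy()
        v[axis] = 1
        chain.append(tuple(v))
    tets.append(tuple(index[q] for q in chain))

# Edge and face lists, deduplicated, derived from the volumes:
edges = {tuple(sorted((t[a], t[b]))) for t in tets
         for a in range(4) for b in range(a + 1, 4)}
faces = {tuple(sorted((t[a], t[b], t[c]))) for t in tets
         for a in range(4) for b in range(a + 1, 4)
         for c in range(b + 1, 4)}

vols = [abs(np.linalg.det(nodes[list(t[1:])] - nodes[t[0]])) / 6.0
        for t in tets]
print(len(nodes), len(edges), len(faces), len(tets))  # 8 19 18 6
```

The six tetrahedra tile the cube (their volumes sum to 1), and the counts satisfy the Euler relation V − E + F − T = 1 of a topological ball, a quick sanity check on any mesh of this kind.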
For maximum simplicity in what follows, we assume that all volumes
are straight tetrahedra. This can always be enforced, by distorting D to
a polyhedron with plane faces, which is then chopped into tetrahedra.
10 Or vertex. For some, "vertex" and "node" specialize in distinct
meanings, vertices being the tips of the elementary volumes, and nodes
the points that will support degrees of freedom. This distinction
will not be made here.
11 Most software systems offer various shapes, including tetrahedra
and prisms, to be used in conjunction. This is required in practice
for irregular regions.
12 A more precise definition will be given in Chapter 7.