6.1 A SYMMETRICAL VARIATIONAL FORMULATION 169
Now, of course, neither φ nor a need be unique. Equivalent weak formulations are

(17) find φ ∈ Φ^I such that ∫_D μ grad φ · grad φ' = 0 ∀ φ' ∈ Φ⁰,

(18) find a ∈ A^F such that ∫_D μ⁻¹ rot a · rot a' = 0 ∀ a' ∈ A⁰.
Dualizing the constraints (6) and (7), as in Exer. 2.9, is also an option, which leads to

(11') find h ∈ IH such that ∫_D μ |h|² − 2 F γ(h) is minimum,

(12') find b ∈ IB such that ∫_D μ⁻¹ |b|² − 2 I γ̃(b) is minimum,

with again associated Euler equations and equivalent formulations with potentials φ ∈ Φ and a ∈ A, similar to (15)–(18).
6.1.3 Complementarity, hypercircle
At this stage, we have recovered the variational formulation in φ, Eq. (15), and derived the other one, on the "curl side", in a strictly symmetrical way. This is an encouragement to proceed in a similarly parallel fashion at the discrete level.
So let Φ_m be the subspace of mesh-wise affine functions in Φ (recall they are constant over both parts of S^h). Likewise, let's have Φ^I_m and Φ⁰_m as Galerkin subspaces for Φ^I and Φ⁰ (with, again, the now-standard caveat about variational crimes and polyhedral domains). As in Chapter 3, we replace Problem (15) by the approximation

(19) find φ_m ∈ Φ^I_m minimizing ∫_D μ |grad φ_m|²,

which is equivalent to

(20) find φ_m ∈ Φ^I_m such that ∫_D μ grad φ_m · grad φ' = 0 ∀ φ' ∈ Φ⁰_m.
There is a tiny difference in the present treatment, however: Observe that the solution is not unique! An additive constant has yet to be chosen, which can be achieved by, for instance, imposing φ_n = 0 for those nodes n that lie in S^h, and this is most often done without even thinking about it. So did we in Chapter 3. But this time, I do want to call attention to this "gauge-fixing" procedure, trivial as it is in this case.
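In matrix terms, gauge-fixing of this kind amounts to making a singular Galerkin system invertible by pinning one degree of freedom. A minimal numpy sketch, on a stand-in system (the 1-D Neumann-like stiffness matrix below is illustrative, not the actual system of Chapter 3; its kernel is the constant vector, just as an additive constant is left free in φ):

```python
import numpy as np

# A stiffness-like matrix whose kernel is the constants (singular,
# exactly as the un-gauged system for the scalar potential is).
K = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
f = np.array([1., 0., 0., -1.])   # compatible load (entries sum to zero)

# Gauge-fixing: impose phi_0 = 0 by deleting row and column 0.
K_red = K[1:, 1:]
phi = np.zeros(4)
phi[1:] = np.linalg.solve(K_red, f[1:])

# For a compatible load, the full system is then satisfied as well:
residual = np.linalg.norm(K @ phi - f)
```

The deleted equation holds automatically because the load is orthogonal to the kernel; any other node (or any single linear condition on the constant) would have done just as well.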
Since the minimization in (19) is performed on a smaller space than in (15), the minimum achieved is an upper estimate of the true one: I²/R_m instead of I²/R, with I²/R_m ≥ I²/R, hence R_m ≤ R. Solving (19) or (20) thus yields a lower bound for the reluctance.
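The mechanism behind this one-sided estimate is elementary: a quadratic functional minimized over a smaller trial space can only come out larger. A throwaway numpy illustration (matrix, load, and subspace are arbitrary, chosen only to exhibit the inequality):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
M = A @ A.T + 6 * np.eye(6)        # a symmetric positive definite "energy" matrix
f = rng.standard_normal(6)

def energy(x):
    # J(x) = (Mx, x) - 2(f, x), the generic quadratic functional
    return x @ M @ x - 2 * f @ x

# True minimum, over all of R^6:
x_star = np.linalg.solve(M, f)
J_true = energy(x_star)

# Galerkin minimum, over a 2-dimensional trial subspace span(V):
V = rng.standard_normal((6, 2))
y = np.linalg.solve(V.T @ M @ V, V.T @ f)
J_galerkin = energy(V @ y)

# J_galerkin >= J_true, always: minimizing over less can only raise the minimum.
```

Applied to (19) versus (15), with the minimum value equal to I²/R, this is exactly the statement R_m ≤ R.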
Now, it would be nice to have an upper bound as well! If we could perform the same kind of Galerkin approximation on the "vector potential" version of the problem, (16) or (18), we would indeed have one: For if A^F_m is some finite-dimensional subspace of A^F, solving either the quadratic optimization problem,

(21) find a_m ∈ A^F_m minimizing ∫_D μ⁻¹ |rot a_m|²,

in terms of still-to-be-defined degrees of freedom, or the associated linear system,

(22) find a_m ∈ A^F_m such that ∫_D μ⁻¹ rot a_m · rot a'_m = 0 ∀ a'_m ∈ A⁰_m,

will yield an upper bound R^m F² to R F². Hence the bilateral estimate

(23) R_m ≤ R ≤ R^m,

obviously a very desirable outcome. This is complementarity, as usually referred to in the literature [Fr, HP, PF].
There is more to it. Suppose one has solved both problems (19) and (21), and let us set

E(b_m, h_m) = ∫_D μ⁻¹ |b_m − μ h_m|² = ∫_D μ⁻¹ |rot a_m − μ grad φ_m|².

Be well aware that I and F are unrelated, a priori (we'll return to this later), so even for the exact solutions h and b of (11) and (12), the error in constitutive law E(b, h) does not vanish. Thus E(b_m, h_m), obtained by minimizing on finite-dimensional subspaces, will be even larger. Now,
Proposition 6.3. One has

(24) E(b_m, h_m) = ∫_D μ⁻¹ |b_m − b + b − μh + μ(h − h_m)|²
  = ∫_D μ⁻¹ |b_m − b|² + ∫_D μ⁻¹ |b − μh|² + ∫_D μ |h − h_m|².
Proof. Develop the first line and observe that all rectangle terms vanish, because of a priori orthogonality relations which Fig. 6.3 should help visualize. Indeed, b_m − b and h_m − h belong to rot A⁰ = IB⁰ and grad Φ⁰ = IH⁰, which are orthogonal by Lemma 6.1. This disposes of the term ∫_D μ⁻¹ (b_m − b) · (μ(h − h_m)). As for ∫_D μ⁻¹ (b − μh) · (μ(h − h_m)), this is equal to ∫_D (b − μh) · grad ψ for some ψ ∈ Φ⁰, which vanishes because both b and μh are solenoidal. Same kind of argument for the third term, ∫_D (b_m − b) · (μ⁻¹b − h), because rot h = 0 and rot(μ⁻¹b) = 0. ◊
FIGURE 6.3. The geometry of complementarity. All right angles in sight, marked by carets, do correspond to orthogonality properties, in the sense of the scalar product of IL²(D), which result from Lemma 6.1 and from variational characterizations. (All IB^F's [resp. all IH^I's] are orthogonal to IH⁰ [resp. to IB⁰].) Note how h, h_m, b, b_m all stand at the same distance r_m from c_m = (h_m + b_m)/2, on a common "hypercircle", and how the equality (24) can be read off the picture. For readability, this is drawn as if one had μ = 1, but all geometric relations stay valid if all symbols h and IH [resp. b and IB] are replaced by μ^{1/2} h and μ^{1/2} IH [resp. by μ^{−1/2} b and μ^{−1/2} IB].
Since E(b_m, h_m) = ∫_D μ⁻¹ |b_m − μ h_m|² = Σ_{T ∈ 𝒯} ∫_T μ⁻¹ |b_m − μ h_m|² is a readily computable quantity, (24) fills the gap we had to deplore in Chapter 4: At last, we have a posteriori bounds⁴ on the approximation errors for both h_m and b_m, which appear in first and third position on the right-hand side of (24). All it requires is to solve for both potentials, φ and a, by some Galerkin method. Of course, the smaller the middle term E(b, h), the sharper the bounds, so I and F should not be taken at random. For efficiency, one may set I, get φ_m, evaluate the flux F by the methods of Chapter 4 (cf. Subsection 4.1.3, especially Fig. 4.7), and finally compute a_m for this value of the flux.
Exercise 6.1. Show that this procedure is actually optimal, giving the
sharpest bound for a given I.
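With piecewise-constant fields, evaluating the estimator takes only a few lines. A minimal numpy sketch, on made-up element data (volumes, permeabilities, and fields below are illustrative, not from an actual mesh):

```python
import numpy as np

# Made-up element-wise data: volumes |T|, permeabilities, and the two
# (piecewise-constant) computed fields b_m = rot a_m and h_m = grad phi_m.
vol = np.array([0.1, 0.2, 0.15, 0.05])
mu  = np.array([1.0, 1.0, 100.0, 100.0])
b_m = np.array([[0.0, 0.0, 1.0]] * 4)
h_m = np.array([[0.0, 0.0, 1.0],
                [0.0, 0.1, 0.9],
                [0.0, 0.0, 0.012],
                [0.0, 0.0, 0.009]])

# Element-wise error in constitutive law: E_T = |T| mu^{-1} |b_m - mu h_m|^2
E_T = vol / mu * np.sum((b_m - mu[:, None] * h_m) ** 2, axis=1)
E = E_T.sum()

# By (24), both energy-norm errors are bounded by sqrt(E):
#   ||b - b_m|| (in the 1/mu norm)  and  ||h - h_m|| (in the mu norm).
error_bound = np.sqrt(E)

# Elements with large E_T are natural refinement candidates (cf. footnote 4).
worst = int(np.argmax(E_T))
```

Note, as the footnote warns, that E_T guides refinement heuristically: only the global sum E is a guaranteed bound.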
⁴Global bounds, not local ones: It is not necessarily true that E_T(b_m, h_m), that is, ∫_T μ⁻¹ |b_m − μ h_m|², is an upper bound for ∫_T μ⁻¹ |b − b_m|² and ∫_T μ |h − h_m|². Still, it's obviously a good idea to look for tetrahedra T with relatively high E_T, and to refine the mesh at such locations. Cf. Appendix C.
Remark 6.3. As Fig. 6.3 suggests, the radius r of the hypercircle is given by [E(b, h)]^{1/2}/2. This information allows one to get bilateral bounds on other quantities than the reluctance. Suppose some quantity of interest is a linear continuous functional of (say) h, L(h). There is a Riesz vector h_L for this functional (cf. A.4.3), such that L(h) = ∫_D μ h_L · h. What is output is L(h_m). But one has |L(h) − L(h_m)| = |∫_D μ h_L · (h − h_m)| ≤ ‖h_L‖_μ ‖h − h_m‖_μ ≤ 2 r_m ‖h_L‖_μ, hence the bounds. There is a way to express the value of the potential at a point x as such a functional [Gr]. Hence the possibility of pointwise bilateral estimates for the magnetic potentials. This was known long ago (see bibliographical comments at the end), but seems rarely applied nowadays, and some revival of the subject would perhaps lead to interesting applications. ◊
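The arithmetic of this remark is easy to mimic in a finite-dimensional analogue. A sketch with fabricated data (μ taken as the identity, a made-up "exact" field x and approximation x_m, and any computable upper bound E on the squared error standing in for the error in constitutive law):

```python
import numpy as np

# Finite-dimensional analogue of Remark 6.3, with mu = identity.
x   = np.array([1.0, 2.0, 3.0])     # pretend exact field
x_m = np.array([1.1, 1.9, 3.05])    # pretend Galerkin approximation

# A linear functional L(v) = (x_L, v); x_L plays the role of the Riesz vector.
x_L = np.array([0.5, -1.0, 0.25])
def L(v):
    return x_L @ v

# Any computable upper bound E on ||x - x_m||^2 will do; here we fabricate one.
E = float(np.sum((x - x_m) ** 2)) * 1.5
r = np.sqrt(E) / 2                  # hypercircle radius, as in the remark

# Cauchy-Schwarz: |L(x) - L(x_m)| <= ||x_L|| * ||x - x_m|| <= 2 r ||x_L||,
# hence a guaranteed bilateral estimate for the unknown L(x):
half_width = 2 * r * np.linalg.norm(x_L)
lo, hi = L(x_m) - half_width, L(x_m) + half_width
```

The interval [lo, hi] is guaranteed to contain L(x); its width is proportional to the hypercircle radius, which is why a small E(b_m, h_m) pays off twice.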
6.1.4 Constrained linear systems
With such incentives, it becomes almost mandatory to implement the vector potential method. All it takes is some Galerkin space A^F_m, and since the unknown a is vector-valued, whereas φ was scalar-valued, let's pretend we don't know about edge elements and try this: Assign a vector-valued DoF a_n to each node, and look for the vector field a as a linear combination

a = Σ_{n ∈ N} a_n w_n.
(We shall have to refer to the space spanned by such fields later, so let us name it IP¹_m, on the model of the P¹ of Chapter 3, the "blackboard" style reminding us that each DoF is a vector.) Now (21) is a quadratic optimization problem in terms of the Cartesian components of the vector DoFs. The difficulty is, these degrees of freedom are not independent, because a must belong to A^F. As such, it should first satisfy n · rot a = 0 on S^b, that is, on each face f of the mesh that belongs to S^b. Assume again flat faces, for simplicity, and let n_f be the normal to f. Remembering that N(f) denotes the subset of nodes that belong to f, we have, on face f,

n · rot a = Σ_{v ∈ N(f)} n_f · rot(a_v w_v) = Σ_{v ∈ N(f)} n_f · (∇w_v × a_v) = Σ_{v ∈ N(f)} (n_f × ∇w_v) · a_v,

since n × ∇w_v vanishes for all other nodes v (Fig. 6.4), hence the linear constraints to be verified by the DoFs:

Σ_{v ∈ N(f)} (n_f × ∇w_v) · a_v = 0
for each face f in S^b. The condition on the flux, ∫_C n · rot a = F, will also yield such a constraint. Taken all together, these constraints can be expressed as L a = F k, where L is some rectangular matrix and k a fixed vector. (We shall be more explicit later about L and k. Just note that entries of L are not especially simple, not integers at any rate, and frame-dependent.)
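For a single flat triangular face, the coefficient vectors n_f × ∇w_v can be written down explicitly: only the tangential part of ∇w_v matters, and that is the surface gradient of the barycentric coordinate of v, so n_f × ∇w_v = −e_v/(2 · area), with e_v the edge opposite v. A numpy sketch (face coordinates are hypothetical):

```python
import numpy as np

def constraint_coefficients(p0, p1, p2):
    """Coefficient vectors c_v = n_f x grad(w_v) on a flat triangular face,
    one per vertex; the face constraint reads sum_v c_v . a_v = 0."""
    p = [np.asarray(q, float) for q in (p0, p1, p2)]
    N = np.cross(p[1] - p[0], p[2] - p[0])        # area vector, |N| = 2*area
    n = N / np.linalg.norm(N)                     # unit normal n_f
    coeffs = []
    for v in range(3):
        e_v = p[(v + 2) % 3] - p[(v + 1) % 3]     # edge opposite vertex v
        grad_w = np.cross(n, e_v) / np.linalg.norm(N)   # surface gradient of w_v
        coeffs.append(np.cross(n, grad_w))        # equals -e_v / (2*area)
    return coeffs

# Hypothetical face in the plane z = 0:
c = constraint_coefficients([0, 0, 0], [1, 0, 0], [0, 1, 0])

# The three coefficient vectors sum to zero (since sum_v w_v = 1 on the face),
# so a constant field a automatically satisfies the face constraint.
total = c[0] + c[1] + c[2]
```

One sees directly why the entries of L are neither simple nor frame-independent: they are edge vectors of boundary faces, scaled by face areas, expressed in whatever Cartesian frame is in use.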
FIGURE 6.4. All surface fields n × ∇w_v vanish, except when node v belongs to the boundary.
To sum up: The quadratic optimization problem (21) is more complex than it would appear. Sure, the quantity to be minimized is a quadratic form in terms of the vector a of DoFs (beware, this is a vector of dimension 3N, if there are N nodes):

∫_D μ⁻¹ |rot a|² = (M a, a),

if we denote by M the associated symmetric matrix. But (M a, a) should be minimized under the constraint L a = F k, so the components of a are not independent unknowns.
Problems of this kind, that we may dub constrained linear systems,
happen all the time in numerical modelling, and there are essentially two
methods to deal with them. Both succeed in removing the constraints,
but one does so by increasing the number of unknowns, the other one by
decreasing it.
The first method⁵ consists in introducing Lagrange multipliers: minimize the Lagrangian (M a, a) + 2(λ, L a) with respect to a, without constraints, and adjust the vector of multipliers λ in order to enforce L a = F k. This amounts to solving the following augmented linear system, in block form:
⁵Often referred to as the "dualization of constraints", a systematic application of the trick we encountered earlier in Exer. 2.9.
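Stationarity of that Lagrangian with respect to a gives M a + Lᵀλ = 0, which together with L a = F k forms a symmetric but indefinite block system. A numpy sketch of this standard construction, with fabricated M, L, and k (sizes and values arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 3                                   # number of DoFs and of constraints
B = rng.standard_normal((n, n))
M = B @ B.T + n * np.eye(n)                   # symmetric positive definite
L = rng.standard_normal((m, n))               # full-rank constraint matrix
F = 2.0
k = rng.standard_normal(m)

# Augmented system:  [ M  L^T ] [ a   ]   [ 0   ]
#                    [ L   0  ] [ lam ] = [ F k ]
K = np.block([[M, L.T], [L, np.zeros((m, m))]])
rhs = np.concatenate([np.zeros(n), F * k])
sol = np.linalg.solve(K, rhs)
a, lam = sol[:n], sol[n:]

# Check both optimality conditions:
constraint_residual = np.linalg.norm(L @ a - F * k)
stationarity_residual = np.linalg.norm(M @ a + L.T @ lam)
```

The price of this route is visible in the sketch: the system grows from n to n + m unknowns and loses positive definiteness, which is what motivates the alternative, constraint-eliminating method.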