Introduction

Numerical simulation is a central tool in the design of new human artifacts. This is particularly true in the present decades due to the difficult challenge of climate evolution. Recently, climatic constraints have been translated simply into the need for further progress in reducing pollution, a demanding task, in particular for specialists in numerical simulation. Today, it is likely that the use of numerical simulation, and particularly computational mechanics, will be central to the study of a new generation of human artifacts related to energy and transport. Fortunately, these new constraints are contemporary with a remarkable maturation of numerical simulation methods. One sign of this maturity is the flourishing of mesh adaptation. Indeed, mesh adaptation is now able to manage in a seamless way the deviation between the theoretical physics and the numerical physics handled by the computer after discretization. A practical manifestation of this is that the engineer is freed from taking care of the mesh(es) needed for analysis and design. A second effect of mesh adaptation is an important reduction of the energy consumed by computations, which will be amplified by the use of so-called higher-order approximations. Mesh adaptation is thus the source of a new generation of more powerful numerical tools. This revolution will affect a generation of conceptualizers, numerical analysts and users, the latter being the engineers in design teams.

These books (Volumes 1 and 2) will be useful for researchers and engineers who work in computational mechanics, who deal with continuous media, and in particular who focus on computational fluid dynamics (CFD). They present novel mesh adaptation and mesh convergence methods developed over the last two decades, in part by the authors. They expand on a series of scientific articles, which have been rewritten, reorganized and completed in order to make the content up-to-date, self-contained and more educational.

Let us describe in our own way the central role of meshes in the numerical simulation process, which makes it possible to compute a prediction of a physical phenomenon. In short, real-life mechanics consists of molecules and their interactions. The notion of a continuous medium helps to transform a large but finitely complex system into an infinitely but smoothly complex system. For example, the understanding of gas flow relies today on kinetic gas theory, which says that a gas is made up of a large number of molecules. We imagine the molecules as balls (monoatomic gas), but this is only a model for our imagination. We next consider that these balls are playing some sort of 3D “billiards” and that, if we are lucky, a continuum model describing the macroscopic behavior is a representative mean of the individual behaviors. In a similar manner, solids consist of a large number of molecules interacting with each other and can be modeled as continuous media. The history of our billions of molecules is thus transformed into a continuous medium. Strictly speaking, the amount of information has gone from very large to infinitely large! But we have an extra assumption: the infinitely complex functions that describe the continuous medium are smooth almost everywhere, because there exists a small scale such that even smaller scales behave in an expected way (predictable by interpolation, for example) except for a very small error.

This assumption allows us many mathematical strategies:

– some laws for such functions can be constructed by considering that the functions have (regular enough) derivatives, with respect to time, and/or to space;

– such functions can be accurately represented by a set of special functions described by a small amount of information, such as polynomials, thanks, typically, to the Taylor formula. The first point corresponds to the construction of partial differential equations (PDEs). The second point corresponds to the approximation or interpolation of known functions, or to the approximation of PDE solutions.
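As a minimal illustration of the second point (our own sketch, not taken from the volumes), the following Python fragment interpolates a smooth function by piecewise-linear polynomials and checks the second-order error decay predicted by the Taylor formula; the function names and test case are arbitrary choices:

```python
import math

def pl_interp_error(f, a, b, n):
    """Maximum error of the piecewise-linear interpolant of f on n
    equal segments of [a, b], sampled at segment midpoints (where the
    error of a linear interpolant peaks for a smooth f)."""
    h = (b - a) / n
    err = 0.0
    for i in range(n):
        x0, x1 = a + i * h, a + (i + 1) * h
        midpoint_interp = 0.5 * (f(x0) + f(x1))  # linear interpolant at the midpoint
        err = max(err, abs(f(0.5 * (x0 + x1)) - midpoint_interp))
    return err

# The Taylor formula predicts err ~ C h^2 for a smooth f:
e16 = pl_interp_error(math.sin, 0.0, math.pi, 16)
e32 = pl_interp_error(math.sin, 0.0, math.pi, 32)
print(e16 / e32)  # close to 4: halving h divides the error by about 2^2
```

A small amount of information (the nodal values) thus represents the function to any prescribed accuracy, at a cost governed by the order of the representation.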

For clarity, we shall speak of “interpolation” for a discrete representation close to a known function, and of “approximation” only for a discrete representation close to an (a priori unknown) PDE solution. Both strategies, interpolation and approximation, are related to discretization, the purpose of which is to reduce the infinitely complex continuous model to finitely complex discrete models, so that both representing these functions and computing them for a particular physical situation require only a finite amount of computational resources:

– a finite number of digits;

– a finite number of operations.

With respect to digits, any real number can still be extremely, even infinitely, complex and has to be replaced by a floating point representation. We shall not analyze the difference between a real number and its floating point representation, which is beyond the scope of this book. For us, the only consequence of replacing real numbers by floating point representations is that round-off truncation may amplify error modes in iterative algorithms. We shall therefore choose only numerical algorithms that produce good results even in the presence of round-off error, namely stable algorithms.
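A toy illustration of these round-off effects (ours, not from the volumes): floating point addition is not associative, and a naive accumulation can be replaced by the compensated summation of Python's `math.fsum`, a simple example of a more stable algorithm:

```python
import math

# Floating point numbers carry finitely many digits, so addition is
# not associative: at 1e16 the spacing between consecutive doubles
# exceeds 1, and the intermediate 1.0 is rounded away.
print((1e16 + 1.0) - 1e16)  # 0.0, not 1.0

# Accumulated round-off in a naive sum of ten copies of 0.1:
naive = sum([0.1] * 10)
print(naive == 1.0)                   # False
# math.fsum compensates the round-off: a simple stable algorithm.
print(math.fsum([0.1] * 10) == 1.0)   # True
```

Stability, not exactness, is what we ask of our algorithms: `fsum` does not remove round-off from the inputs, it merely prevents the summation itself from amplifying it.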

With respect to operations, our discrete model will rarely be computed on a sheet of paper but more frequently with a computer. It is common to call the complexity of an algorithm the number of elementary operations (additions, multiplications, etc.) necessary to perform it on a given set of data. We extend this notion by defining the complexity of the interpolation of a function as the number of scalar values necessary to represent that function with a given accuracy. Two important ingredients of numerical analysis are connected by this notion:

– to approximate a known function with a certain level of accuracy, we need to handle (to store) some amount of real numbers, the degrees of freedom;

– to compute these degrees of freedom for an unknown function, using an algorithm that solves a system to provide the approximate solution, we accordingly need some amount of operations.

In both cases, we need to adapt a data structure to the geometrical domain on which the function is defined. This data structure, which we call a mesh, locates nodes on the domain. Approximating a smooth function can be done with a storage that increases, at best, only inverse-exponentially (spectral approximation) or, in most cases, inverse-polynomially (approximation of given order) with the prescribed error. The computation of this approximation can, at best, be done with linearly complex algorithms such as full multigrid. More precisely, let us consider the combination of

– a smooth unknown solution u of a PDE in a domain inside ℝ³;

– an approximation uN of the solution, of order α for a certain norm |.|, that is,

|u − uN| ≤ K1 N^(−α/3),

with K1 depending on the solution u and where N is the number of nodes of the computational domain; and

– a linearly complex solution algorithm, CPU(N) = K2N with K2 depending on the algorithm and where “CPU” is the computational time.

Then, for a prescribed error ε, the CPU effort should be at least

CPU(ε) = K2 (K1/ε)^(3/α).

If the order α is one, the CPU increases like ε⁻³ in 3D (steady case) or even ε⁻⁴ in the unsteady 3D case. In many cases, this indicates that computation at a useful accuracy level is simply not affordable. For a smooth function, affordability improves markedly when the order is increased.
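This trade-off can be made concrete with a short sketch (our illustration, with the constants K1 and K2 arbitrarily set to 1):

```python
# Error model in 3D: eps = K1 * N**(-alpha/3), hence N = (K1/eps)**(3/alpha),
# and CPU = K2 * N for a linearly complex solver. K1 = K2 = 1 (arbitrary).
def nodes_needed(eps, alpha, K1=1.0):
    return (K1 / eps) ** (3.0 / alpha)

def cpu_cost(eps, alpha, K1=1.0, K2=1.0):
    return K2 * nodes_needed(eps, alpha, K1)

for alpha in (1, 2, 3):
    print(alpha, cpu_cost(1e-3, alpha))
# With order 1, a 0.1% error already costs about 1e9 node-operations;
# with order 3 the same error costs only about 1e3.
```

The six orders of magnitude between the first-order and third-order costs at ε = 10⁻³ are what makes higher-order approximations so attractive for smooth solutions.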

In the case of a non-smooth function, the effort needed to represent it can be extremely large if the function involves a very large or infinite number of singularities. A more typical and interesting case is that of a function with a few singularities. To simplify our explanation, let us consider a Heaviside-type function, equal to 1 for an input larger than x0 and equal to −1 otherwise. This function is very easy to represent or approximate with a few (four) real numbers, but very complex to approximate with a series of smooth approximations lying on uniform meshes. Conversely, it can be accurately represented by smooth functions lying on an adapted mesh.
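The contrast can be sketched as follows (our toy example: the jump position x0 = 0.3, the mesh sizes and the clustering width delta are arbitrary choices). A piecewise-linear interpolant of the step on a uniform mesh keeps an L1 error of order h near the jump, while four nodes hugging the jump confine the error to an interval of width 2·delta:

```python
import bisect

X0 = 0.3  # jump position (arbitrary choice for this illustration)

def step(x):
    return 1.0 if x > X0 else -1.0

def pl_eval(nodes, vals, x):
    """Evaluate the piecewise-linear interpolant defined by (nodes, vals)."""
    i = min(max(bisect.bisect_left(nodes, x), 1), len(nodes) - 1)
    a, b = nodes[i - 1], nodes[i]
    return vals[i - 1] + (vals[i] - vals[i - 1]) * (x - a) / (b - a)

def l1_error(nodes, samples=20000):
    """Midpoint-rule L1 error of the interpolant of the step on [0, 1]."""
    vals = [step(x) for x in nodes]
    h = 1.0 / samples
    return sum(abs(step((i + 0.5) * h) - pl_eval(nodes, vals, (i + 0.5) * h)) * h
               for i in range(samples))

uniform = [i / 100 for i in range(101)]        # 101 uniformly spaced nodes
delta = 1e-6
adapted = [0.0, X0 - delta, X0 + delta, 1.0]   # 4 nodes hugging the jump
print(l1_error(uniform), l1_error(adapted))
# The 101-node uniform mesh leaves an L1 error of order h = 0.01; the
# 4-node adapted mesh confines the error to a 2*delta-wide interval,
# below the resolution of this quadrature.
```

Four real numbers on an adapted mesh beat a hundred on a uniform one: this is the phenomenon that mesh adaptation exploits systematically.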

The object of these volumes is to present a few analyses and methods devoted to the relation between an approximation and its mesh as well as how to adapt the mesh to both the approximation and the precise computational case.

Let us review what is presented in the chapters of this volume.

Recall that in Chapter 1 of Volume 1, we specified two particular approximation methods for compressible and two-fluid incompressible flows. We consider these particular approximation methods and try to cover a set of important questions to answer when we want to adapt a mesh to both the chosen approximation and the precise case to be computed.

The techniques presented in the remaining chapters of Volume 1 provide a new approach to computing an approximate solution, one that is automatic through the mastering of mesh convergence and safer through the mastering of the approximation error, leading toward the certification of the approximate solution presented in Chapter 1 of this volume.

In numerical simulation, we dream of an exact solution (let us call it “the grape”) and we get a basket with half figs and half grapes, the fig here being “an error”. It is then compulsory to know how many figs have replaced the grapes. Chapter 2 addresses the delicate problem of having a sufficiently clear quantification of the error. For this, we introduce the notion of corrector and describe means to compute a corrector.

Most physical processes of interest are unsteady and we have to build appropriate mesh adaptation strategies for unsteady phenomena (Chapter 3).

But adapted time steps can be small and penalizing for the global CPU efficiency. Chapter 4 addresses a multi-rate strategy (i.e. with non-uniform time steps) and explains its paramount impact on the efficiency of unsteady mesh adaptation.

An important objection to the minimization of interpolation error is the limited pertinence of this error in representing the approximation error, which is the difference between the exact solution of the PDE and the solution of the discretization of the PDE (approximate solution). A class of approximation-based mesh adaptation methods is the goal-oriented formulation, which minimizes the approximation error committed in the evaluation of a scalar output depending on the PDE solution. In aeronautics, several outputs can be considered (drag, lift, moment, etc.). The goal-oriented method involves expressing the error on the output in terms of the metric through a priori or a posteriori error estimates. We propose an analysis addressing this issue in Chapter 5 for steady inviscid flows.

An extension to viscous flows is described in Chapter 6.

Adapting the mesh for the best evaluation of a single scalar output, as is the case for the goal-oriented method, is in many cases a severe limitation. The engineer is often interested in a small error in the whole representation of the flow. To answer this need, a second approximation-error-based formulation is to seek the mesh that minimizes a norm of the error committed on the flow field. This is the norm-oriented formulation, presented in Chapter 7.

A goal-oriented approach can also be built for unsteady problems, and we present an example in Chapter 7. An extension to higher order is discussed in Chapter 8.

As a guide to the reader, an introduction to the basic methods can be obtained by reading Chapters 3–5 of Volume 1, which are restricted to feature-based/multiscale adaptation for steady models. The sequel for steady models should concern goal-oriented methods, and the reader can directly pass to Chapter 6 of this volume.

The reader interested in unsteady problems should continue in this volume with Chapter 3 for multiscale adaptation and Chapter 8 for goal-oriented adaptation.

Numerical experiments described in the book are performed with three different computational codes with different features. They are briefly described in Chapter 1 and mentioned in each numerical presentation in order to fix ideas concerning the important details of their implementation.
