Chapter 11

Model-driven Design and Simulation

 

11.1. General points

Since the industrial era, our world has undergone major changes, whether on a geostrategic, political, economic, social or technological level. Today’s world is multipolar. A multitude of agents interact with one another. The dynamics of such interactions are more complex than they used to be, both because of heightened connectivity and dependence between these agents, and because of the uncertainty about the emergent properties arising from those dynamics. The global economy is stimulated by growing activities and exchanges between governmental and international organizations, enterprises and individuals. It is also sustained by many new technologies, such as nanotechnologies, biomedicine, genetics, robotics, new information and communication technologies, etc. These technologies have deep repercussions on our society.

The notion of networks is of increasing importance: more and more often, individuals, organizations and systems are organized into networks: computer networks, influence networks, alumni networks, partnerships and alliances, etc. The digitalization of information and the mobility and modularity of systems only reinforce this tendency. Networking has even become a necessity, as individuals move more easily and more often than they used to. This need emphasizes the interoperability constraints between the components of a system or a system of systems.

Technical and technological progress allows the marketing of ever more integrated systems, which offer many and varied functions. To speed up the release of these products, business enterprises must optimize specification, design and development lead times. But they must also fight competition by lowering their production costs, among other things. Faced with such challenges, reuse seems able to meet these strong requirements, which are only getting stronger with the rapid evolution of technologies, the changes in the industrial landscape and the regulations.

It would be naive to think that all or part of a system or a system of systems might be reused regardless of the context. Generally speaking, an existing component is reused together with a set of components – already existing or still to be designed – in order to reach a given purpose. This shows how reuse requires a certain level of interoperability between the system’s constituents and between the system and its environment, but it must also take into account the need, the context and the purpose of use.

The increasing complexity of systems and systems of systems makes this double issue of interoperability and reuse all the more difficult to solve. According to Le Moigne [LEM 95], complexity can be attributed neither to the increasing number of components, nor to their level of interaction. Rather, it is linked to the unforeseeable character of emergent behaviors, knowing that a complex system is a combination of interacting implex components (which cannot be broken down any further without loss of information), rather than a disjointed sum of these components.

This complexity cannot be studied through an analytic, Cartesian approach. This is why Le Moigne advocates a systemic modeling approach, which helps achieve a better comprehension of complex systems by relying on the following basic question: what is the system doing? The answer to this question is a necessary, but not sufficient, condition to favor interoperability and the reuse of components, systems, or even systems of systems. Knowing the environment, the context of use (purpose), the structure (static aspects) and the temporal evolutions (dynamic aspects) of the system is imperative. This approach is at the base of the general system theory [LEM 94], and the systems engineering standards follow this methodological framework in detail, featuring support tools for its implementation.

The same methodological approach can be found within the software engineering community, thanks to the standardization efforts of the Object Management Group (OMG) since 2000. The OMG recommends model-driven engineering (MDE), with a particular instance of this methodology, namely model-driven architecture (MDA). MDE and MDA offer a methodological framework and associated tools which help with the interoperability and reuse of all or part of complex systems, for several reasons:

– model driven engineering is coherent with the ideas laid out in [LEM 95] about the modeling of complex systems;

– it is complementary to systems engineering: on the one hand, the current systems are software-intensive. On the other hand, since simulations are used with increasing frequency during the systems’ life cycle, the models associated with said simulations naturally belong to systems engineering;

– its basic principle, relative to the separation between business logic and technological aspects, facilitates the capitalization of knowledge, necessary for reuse;

– it helps verify and validate models, thanks to the control mechanisms of the (meta-) information featured in the models (see infra for the details).

Figure 11.1. MDE and MDA favor interoperability, reuse and capitalization


This chapter aims at showing how MDE and MDA can favor interoperability, reuse and capitalization (see Figure 11.1) in the upstream phases of a system’s life cycle, from needs analysis to design (development and deployment may be automated). In order to avoid any ambiguity, we will first recall some definitions and works about modeling and metamodeling, simulation, as well as testing and validation. We will then briefly present the state of the art on the MDE and MDA approaches. Finally, we will illustrate those concepts through examples of implementation within research and technology programs conducted by the French Ministry of Defense.

11.2. A few definitions

Experience shows that it is often better to define a common vocabulary for a particular field, so as to avoid any ambiguity in the understanding and interpretation of the ideas discussed between participants. This is why we deem it useful to dedicate a few paragraphs to the terminology used in the context of systems of systems engineering driven by models and simulations.

11.2.1. Modeling

The art of modeling goes back at least to the Paleolithic era (around 30,000 BC), during which human beings started recording their observations on a medium (rock, wood, bone, etc.). Cave paintings can thus be considered as examples of the modeling of animals, anthropomorphic subjects, hunting scenes, etc. (see Figure 11.2). It is important to note that, in these examples, the result of modeling – the model – is the representation of a subject (object, being, phenomenon) belonging to the real world [DOD 94]. This representation might be achieved through imitation or a mental process [VOL 04], whether the subject exists or not. These various ideas were well expressed by Joseph Nonga Honla [NON 00]: “the model is the artificial representation which ‘one constructs in his head’, and which is ‘drawn’ on a physical medium”.

Figure 11.2. Cave painting (Lascaux) showing a man crushed by a bison


As the cave painting demonstrates, this artificial representation is a simplification of the real world. It does not transcribe every detail or the whole complexity of the observed reality, but only certain points of view (or aspects) which the author is trying to express, at a certain level of abstraction, through the use of signs or symbols, which have a meaning (a signified) for the person who receives them. Several interoperability issues are apparent. Do the author and the receiver share the same knowledge and use the same signifying symbols? Do the latter properly transcribe what the author is trying to express? Do said symbols hold the same meaning for the receiver? The key to these answers lies in the link between the signified and the signifier, but also between the emitted signified and the received signified. This interface issue should be given due attention. The systemic approach undeniably follows this logic, by placing more importance on the interfaces than on the components of the studied system. We will see in the following that the MDE and MDA approaches complete this systemic approach through the concepts of metamodeling and transformation.

The elaboration of a model generally serves a purpose. With the cave painting, the author was probably looking to transmit a message or share his knowledge as an observer of the scene. The use of such a model is rather limited, since the description is static, and we then speak of a contemplative model. This is a particular case of analytic models, which help explain a precise number of properties, and the foreseeable and deterministic behaviors which characterize complicated – rather than complex – systems (see [LEM 95] or [MCX 05]). The drawback is that this type of model a priori requires exhaustive and explicit knowledge of the system, concerning the description of its components, the relationships between them and the precise links between the whole and its parts. This explicit character is very useful when it comes to controlling the description, in that we may think that what is precisely described is perfectly known. The mathematical advances of the 20th century, and in particular the demonstration of the impossibility of exhibiting explicit solutions to certain equations (for example the partial differential equations found in fluid mechanics, which are at the base of turbulence modeling in aeronautics or meteorology), have put an end to this hope of achieving control with fully analytical models.

A new family of models was therefore developed, based on another paradigm: modeling is used to formulate problems in order to achieve better comprehension, or reach a better solution, through simulation, within which these models can be executed. Modeling is then defined as [MCX 05]: “the operation through which a phenomenon’s model is established, so as to offer a representation of said phenomenon which can be interpreted, reproduced, and simulated.”

Executable models are a real improvement over contemplative models, for they help integrate every dynamic aspect of the modeled system. The expression “every dynamic aspect” is at the heart of the problem, so let us define it more precisely. From a strictly mathematical point of view, dynamic aspects represent the dependency of certain descriptive parameters on time, and there does not seem to be any paradigm shift in the models that are used. But if we take the informal definition – without getting into mathematical details – according to which an analytical model is a priori given by a set of equations whose form is fixed once and for all, it is obvious that there might be difficulties in taking some things into account, for example changes in structure, which may be caused by the failure of some components, or by the replacement or insertion of new components. On the contrary, an executable model is constructed on a generative paradigm. A priori control is replaced by an iterative search for approximate models, which eventually leads to control.

This having been said, the problem then lies in the construction of executable models able to represent the real world’s complexity. Attempts at using methods such as expert systems, hence with simple rules written along the lines of “if premises, then conclusion”, have turned out to be much too limiting for the modeling of the emergent behaviors which characterize complex systems. This result is not surprising, since expert systems define a finite number of foreseeable, deterministic rules. We find again the pitfalls highlighted for the analytical approach. The models obtained are therefore better suited to merely complicated systems, and it would be unrealistic to want to use this type of modeling to answer problems of a complex nature, even if acceptable solutions may sometimes be found and fitted. The systemic method of modeling should be better suited to remedying this [LEM 95].

Systemic modeling is based on phenomenological and teleological hypotheses, which means that the modeled phenomenon or system makes explicit its functions and functioning, as well as its purposes. Semantic cohesion (or congruence) matters more there than the formal coherence of the modeled system [MCX 05]. We will later see how model-driven engineering helps respect semantic cohesion. While systemic modeling helps achieve a better comprehension of complex systems, the complexity paradigm developed by Morin [MOR 77] offers a conceptual framework for modeling phenomena perceived as complex by the observer-designer. This paradigm considers that complexity is organized, recursive and organizing, depending on which point of view the modeler is interested in. Following systemic modeling principles, these points of view may be modeled as quasi-decomposable systems, which means they are defined by the networks of interrelations between subsystems, the input-output relationships of each subsystem, as well as the relationships which link the system’s inputs and outputs to the subsystems’ relationships with the environment. These principles form the epistemic basis of MDE and MDA modeling.

This brief review of modeling concepts and the associated works is enough to make the reader aware of some of the difficulties which may be encountered during the modeling of complex systems. More difficulties will arise with the reuse of those models, such as verification and validation, as well as the models’ capitalization. We will see how model-driven engineering claims to bring pragmatic elements of solution to those problems.

11.2.2. Metamodeling

Beyond its basic meaning of “after, beyond”, the Greek prefix “meta” is often used in scientific language to express self-reference. Thus, metamathematics is the mathematical theory of the foundations of mathematics, then considered as objects of study. In computer science, “meta” designates a higher level of abstraction, a model (e.g. metadata is a model of data).

The two following definitions shed some light on the meaning of metamodeling in model driven engineering:

– metamodeling is the “definition of a set of concepts, properties, operations and relationships between concepts, whose purpose is to define all the necessary entities during the modeling of a specific system” (see [KAD 05], p. 9);

– “metamodeling acts as a toolbox: it captures the variety of the models’ properties, articulates models with one another, ensures the data mapping between these models, etc.” (see [MET 05]).

In short, metamodeling goes beyond modeling, reaching a higher level of abstraction; we might say that metamodeling consists of modeling models (themselves produced through modeling), following the idea of self-reference. This notion of “model of a model” may seem too vague and difficult to grasp. To be more specific, computer terminology would consider a model as an “instance” of a metamodel (produced through metamodeling). However, in order to avoid any possible confusion between the object-oriented concepts – in which the notion of instance takes on an operative character – and those of model engineering, an ambiguity which is indeed not fortuitous, as will be demonstrated by the digital implementations of the methodologies, usage recommends the notion (or property) of conformity to qualify the relationship of a model to its metamodel [BEZ 04]. Thus, a metamodel “encompasses” a set of conforming models; in order to guarantee such conformity, metamodeling must define the methods and languages implemented during the modeling process.
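To make this conformity relationship more tangible, here is a minimal sketch in Python (all names are ours, purely for illustration; this is neither the MOF nor any MDA tool): a metamodel declares concepts and their properties, and a model is accepted only if each of its elements instantiates a declared concept with exactly the required properties.

```python
# Minimal sketch of the model / metamodel conformity relationship.
# All names (Metaclass, Metamodel, conforms_to, ...) are illustrative,
# not taken from the MOF or from any MDA tool.

class Metaclass:
    def __init__(self, name, properties):
        self.name = name
        self.properties = set(properties)   # property names required by the concept

class Metamodel:
    def __init__(self, metaclasses):
        self.metaclasses = {m.name: m for m in metaclasses}

    def conforms_to(self, model):
        """A model conforms if every element instantiates a declared concept
        and carries exactly the properties that concept requires."""
        return all(
            e["concept"] in self.metaclasses
            and set(e["attributes"]) == self.metaclasses[e["concept"]].properties
            for e in model
        )

# Metamodel of a (very small, invented) simulation domain.
mm = Metamodel([
    Metaclass("Entity", {"name", "position"}),
    Metaclass("Sensor", {"name", "range"}),
])

# A model is a set of elements, each claiming to instantiate a concept.
model = [
    {"concept": "Entity", "attributes": {"name": "tank", "position": (0, 0)}},
    {"concept": "Sensor", "attributes": {"name": "radar", "range": 80.0}},
]

print(mm.conforms_to(model))   # True: the model is "encompassed" by its metamodel
```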

Respect of this conformity helps favor interoperability between models, on both the syntactic and semantic levels, depending on the models’ level of abstraction. In [BEZ 01], the authors claim that the notion of metamodeling is strongly linked to the notion of ontology in knowledge engineering. Recall that an ontology is defined as the specification (formal description) of a conceptualization (a way of describing at a certain level of abstraction) of a knowledge field within a specific context [GRU 93].

The result is displayed as a semantic network which links the concepts together through taxonomical relationships (for example, a hierarchy of concepts), via, for example, composition and inheritance relationships, from the point of view of object-oriented languages and semantics. The existence of a description of those semantic relationships precisely helps align the signifier and the signified, and hence increases the level of interoperability.

Whether for a metamodel or an ontology, we must define a language fit to describe them. This language then becomes the meta-metamodel which, like the metamodel, must provide a set of concepts, properties, operations and relationships between concepts, in order to define all the entities needed during metamodeling. The concepts used in metamodeling apply here as well. The level of abstraction in meta-metamodeling is higher than in metamodeling: a metamodel must conform to its meta-metamodel. In fact, the latter provides a unique language for defining the set of metamodels (see [KAD 05], p. 9). This uniqueness guarantees interoperability between interacting metamodels, since those are described with the same language at a level of abstraction where semantics are taken into account in the description.

Now, the reader is certainly wondering whether this stack of “meta” could not be continued to further elevate the levels of abstraction. In fact, the OMG offers a four-layer architecture (see Figure 11.3): M0 (real world), M1 (model), M2 (metamodel) and M3 (meta-metamodel). The last level, M3, is reflexive, or self-referent, which means it can be described with its own language. The MOF (meta object facility) language recommended by the OMG features this reflexive property. Other languages are available for the design of metamodels, such as ECore, KM3 or DSMDL (which belongs with the DSL Tools). One goes down the metamodeling stack through vertical transformations, also called refinements, which can be of various kinds: specialization, elaboration, development, derivation, breakdown. Readers wishing to see the precise definition of such refinements can refer to [GRE 04] (Chapter 14, p. 458). In addition to these vertical transformations, there exist horizontal transformations, which operate on models or metamodels of the same level of abstraction. It is of course possible to combine vertical and horizontal transformations. We will study these subjects in a later section.
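The four-layer stack and the self-referent character of M3 can be pictured with a very short sketch (illustrative Python, not OMG tooling): each level conforms to the level above it, and the meta-metamodel conforms to itself.

```python
# Illustrative picture of the OMG four-layer stack: each level conforms
# to the one above, and M3 (the meta-metamodel, e.g. the MOF) is reflexive,
# i.e. it is described in its own language.

class Level:
    def __init__(self, name, description):
        self.name = name
        self.description = description
        self.conforms_to = None   # filled in below

m3 = Level("M3", "meta-metamodel (e.g. MOF)")
m2 = Level("M2", "metamodel (e.g. the UML metamodel, a DSL metamodel)")
m1 = Level("M1", "model (e.g. a UML model of a system)")
m0 = Level("M0", "the real world / the running system")

m0.conforms_to, m1.conforms_to, m2.conforms_to = m1, m2, m3
m3.conforms_to = m3               # reflexivity of the top level

level = m0
while True:
    print(f"{level.name} conforms to {level.conforms_to.name}")
    if level.conforms_to is level:    # stop once the reflexive level is reached
        break
    level = level.conforms_to
```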

Figure 11.3. OMG multilevel metamodeling stack


11.2.3. Simulation

A rather general definition of simulation likens it to “a method for implementing a model which will evolve over time” [DOD 94]. Sometimes, several models are necessary for the composition of a simulation. Depending on the criteria (field and purpose, techniques and means, temporal characteristics, etc.), simulations can be classified in several categories. Table 11.1 provides an example of such a classification. Of course, other classifications are possible: discrete event simulation (discrete temporal variable), distributed simulation, etc. (see [DGA 97, DOD 94] for other types of simulation). These few examples clearly show the complexity which might arise from a simulation, which becomes a complex system in its own right, a system which will itself have to be controlled.
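As a minimal illustration of this definition – a model made to evolve over time – the following sketch shows the skeleton of a discrete event simulation; the events and the “radar” model are invented for the example.

```python
import heapq

# Skeleton of a discrete event simulation: the model evolves over a
# discrete temporal variable by processing time-stamped events in order.
# The "radar detection" model used here is purely illustrative.

def run(events, horizon):
    queue = list(events)            # (time, label) pairs
    heapq.heapify(queue)
    while queue:
        clock, label = heapq.heappop(queue)
        if clock > horizon:
            break
        print(f"t={clock:5.1f}  {label}")
        if label == "target enters range":        # the model may schedule new events
            heapq.heappush(queue, (clock + 2.0, "detection reported"))

run(events=[(1.0, "radar switched on"), (4.0, "target enters range")], horizon=10.0)
```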

Within this chapter’s context, we focus on digital simulations, whose models are implemented as codes executable on a computer. This focus is motivated by the following reasons:

– the current systems are software-intensive;

– digital simulation offers many advantages, essentially in terms of cost and flexibility, compared to simulations which use physical models (e.g. scale models);

– digital simulation can be used all through a system’s life cycle, from its feasibility study to its disposal: this is all the more important in the light of today’s requirements on the ecodesign of complex systems, where we must, from the first stages, evaluate the impacts which the entire life cycle might have on the environment or in relation to a requirement on sustainable development.

Table 11.1. Simulation classification (featured in [KAM 02])


In the field of defense, simulation appears as a mandatory tool to control performance, schedules and costs in the design of large systems. The concept of an integrated and concurrent use of simulation means in the various stages of a system’s life cycle was first introduced by the Defense Systems Management College in 1998, in the United States, under the title Simulation Based Acquisition [JOH 98]. Works have demonstrated the economic benefit which may be garnered from this approach: [LUZ 03], for example, relies on the COCOMO 2.0 cost model to show that the resources allotted to simulation-based acquisition are quickly paid back by the cost overruns avoided from the first incident occurring during the acquisition process.

It is rather intuitive to imagine that the use of simulation contributes to the early identification of mistakes and therefore reduces the risk of cost overruns, or even of outright project failure. Indeed, it helps verify specifications and validate concepts developed in the upstream stages of the life cycle; reduce the risks of development and integration errors; prepare and scale qualification tests; and treat the problems of use and disposal of the system before they actually occur (such as integrated logistic support, recycling, etc.).

However, the implementation of a simulation requires us to ponder interoperability and reuse. These two themes imply the verification and validation of data, models and simulations, but also the capitalization of knowledge. The MDA approach, and more broadly model engineering, therefore present a certain potential for helping resolve such problems.

11.2.4. Interoperability

This neologism rather explicitly expresses the notion of the capability to operate together. This basic notion is found in many definitions, more or less precise and specific to a field of application (see [DGE 01, GRE 04, MIC 07, OTA 07]). Whichever meaning we choose to follow, interoperability is increasingly considered as an important aspect in the design of a system. It has a positive impact on the whole value chain, through increased productivity, decreased costs and, in the end, improved client satisfaction. To achieve this, efforts must be made to standardize and open up systems [MIC 07], despite the quasi-monopolies and the misgivings of some IT giants.

The field of telecommunications is a pioneer in the search for interoperability. As early as 1865, a great number of European countries had already grasped the necessity and utility of implementing a common coding for the transmission of telegraphic messages from one country to another. The creation of the International Telecommunication Union (ITU) broke another barrier by tackling the standardization of international telecommunications. However, it should be noted that the standards issued by the ITU work groups provide technical recommendations. Thus, following these standards only ensures interoperability on a technical level, even if semantic information may transit within the “presentation” and “application” layers of network communications. This is not surprising, for these standards are defined in relation to the layers of the OSI (Open Systems Interconnection) reference model, whose level of abstraction lies at the M1 level of the multilevel metamodeling stack.

It rapidly became clear that technical interoperability is not enough for the target system to operate properly in its context of use. Indeed, this level of interoperability is focused on the signifier of the exchanged data and not on the signified; in other words, it does not take semantic aspects into account. It is commonly acknowledged that these two levels of interoperability (technical and semantic) must be sufficiently covered in order to guarantee a system’s proper operation. In fact, more is needed to reach complete interoperability, whose definition and characterization still remain to be harmonized. Attempts at conceptualizing interoperability levels have resulted in reference models such as the Levels of Information Systems Interoperability [DOD 98], or the NC3TA Reference Model for Interoperability [OTA 03]. According to the authors of [TOL 03], both models ensure the coherence of data, which is a necessary but insufficient condition for ensuring interoperability between applications. They therefore offer an alternative model, the Levels of Conceptual Interoperability Model (LCIM), broken down into five levels, which also requires the documenting of the interfaces:

– level 0 (system specific data): the data is used within each system in a proprietary way; there is no interoperability required;

– level 1 (documented data): the data is accessible through interfaces; the data and interfaces are documented using a common protocol;

– level 2 (aligned static data): the data is documented using a common reference model, based on a common ontology;

– level 3 (aligned dynamic data): the use of the data is well defined using standard software engineering methods such as UML;

– level 4 (harmonized data): the semantic coherence of data is guaranteed by a conceptual model.

Level 1 corresponds to technical interoperability, in which the physical connections and network layers are taken care of: data can be exchanged following standard formats and protocols. Such is the level of interoperability reached by the OSI reference model discussed previously. Level 2 concerns the meaning of data. This is semantic interoperability, where the description of data must be precise and unambiguous, through the use of an ontology which structures the concepts in a semantic graph whose edges express the semantic relationships between these concepts. When combined, levels 1 and 2 answer the previously raised questions about the link between signifier and signified. The answers relative to the link between the received and the emitted signified are found from level 3 upwards, which studies what should and can be done with the received data so as to heighten the level of interoperability. As for the last level, it seeks to achieve a common and complete vision of the modeled field, through a conceptual model. The latter must describe what is modeled and what is not, namely the limits and constraints. Within a systemic approach, this means properly defining the borders of the modeled system, as well as its interfaces.

This succession of levels of interoperability can be compared with the multilevel metamodeling stack; in both cases, a higher level in the stack goes with higher abstraction. The two models can even be matched: LCIM level 1 corresponds to level M1, levels 2 and 3 can be associated with level M2, and level 4 can be compared to level M3. Following the hypothesis according to which a higher level of abstraction of the models favors reuse, it can be deduced from this comparison that a higher level of interoperability also favors reuse. Conversely, the positive correlation between interoperability and reuse also helps verify that a higher level of abstraction favors reuse.

It therefore seems natural to treat these issues of reuse and interoperability jointly. The fields of application concerned by these issues may very well make use of the LCIM model, which remains sufficiently general despite coming from the world of simulation.
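Before moving on, the correspondence suggested above between the LCIM levels and the metamodeling stack can be written down explicitly; the short sketch below merely encodes the levels and that mapping (the naming is ours, for illustration only).

```python
# The five LCIM levels and the correspondence with the metamodeling stack
# suggested in the text: level 1 -> M1, levels 2 and 3 -> M2, level 4 -> M3
# (level 0 requires no interoperability at all).

LCIM = {
    0: "system specific data",
    1: "documented data (technical interoperability)",
    2: "aligned static data (semantic interoperability)",
    3: "aligned dynamic data",
    4: "harmonized data (conceptual model)",
}

def metamodeling_level(lcim_level):
    if lcim_level == 1:
        return "M1"
    if lcim_level in (2, 3):
        return "M2"
    if lcim_level == 4:
        return "M3"
    return None    # level 0: no counterpart, no interoperability required

for lvl, name in LCIM.items():
    print(f"LCIM {lvl} ({name}) -> {metamodeling_level(lvl)}")
```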

11.2.5. Verification and validation

As with the other concepts we have discussed so far, no common definition of the activities of verification and validation (V&V) exists between the communities of systems engineering, software engineering, and modeling and simulation. In this last field, a general consensus is still to be found, essentially because of the partial character of the models’ and simulations’ representation of the real world. However, concepts common to the various definitions can be found.

To put it simply, verification checks that the work has been done properly, and validation ensures that the proper work has been done. To be more specific, verification is the check that the product of an activity or a process (for example: specification, design, development) meets the entry requirements of said activity or process, that is to say that the resulting technical characteristics meet the requirements. As for validation, it consists of ensuring that the product of an activity or a process meets the need for a precise purpose. The concept of “efficiency” is developed in the works of Le Moigne.

Therefore, conformity is judged against a referent, made of requirements in one case and of needs in the other. Both are linked, since the requirements stem from the need. From this observation, we can easily deduce that validation is more difficult than verification, considering the commonly acknowledged difficulties in eliciting the need and transforming it into requirements. As for verification, the underlying activities seem more easily apprehensible since, on the one hand, systems and software engineering recommend expressing requirements in a verifiable, quantified way and, on the other hand, there exist numerous techniques and tools from the software industry, one of the most promising approaches being based on formal methods. The latter, however, require large resources and computation time, and are still reserved for critical systems in which test coverage must be maximal. In most systems, residual errors can therefore remain after the V&V process.

Since the total eradication of such errors, even when it is technically feasible, is not economically viable, the favored solution is to contain them, so as to guarantee fitness for purpose. To ensure such fitness, the results of V&V constitute tangible evidence informing, as much as possible, on the domain of validity of the target system, in terms of capability, performance, constraints and limitations, depending on the purpose and the context of use. It seems natural to think that the wider and deeper the V&V coverage of the possible tests and trials, the better the evaluation of the fitness for purpose.

This perception is not erroneous, but may be misleading if the quality of the tests and trials is not high enough. The perception of quality is associated with the level of trust built up by the beneficiary (user or decision maker), not only from the objective evidence collected by V&V, but also according to the experimental framework and operating mode, the skills and reputation of the agents involved, as well as the processes and organizations implemented to support them. The authors of [SCO 03] have even introduced the idea of evaluating the maturity of the V&V process according to a five-tiered maturity model, akin to the CMMI (capability maturity model integration). The next logical step would be the implementation of an independent certification scheme of the ISO kind, which would probably have a positive impact on the perception of quality. This would add, for systems engineering purposes, a complement to purely software-related standards such as ISO/IEC 9126 (software engineering: product quality), ISO/IEC 15504 (ISO/SPICE process assessment) and ISO/IEC 14598 (software engineering: product evaluation).

The works of the European research and technology project REVVA, taken over by the SISO (Simulation Interoperability Standards Organization) standardization group [SIS 07], introduce the idea of the V&V process’s independence, by proposing to have it conducted by a third party other than the system’s contractor or its project manager. From this independence emerges the importance of distributing responsibility between the various stakeholders. Of course, one must be aware that a better fulfillment of needs and a higher level of trust both require human and financial efforts. Since residual errors introduce uncertainties characterized by risks of use or reuse, a compromise will have to be found between the beneficiary’s tolerance towards risks and the effort put into the V&V process.

From there, the works of the REVVA project have helped clarify a hitherto confusing terminology around acceptance and, most of all, accreditation in the international community of modeling and simulation for defense. The term “acceptance” refers to the outcome of the aforementioned compromise, in the decision to use or reuse the target system for a given purpose and within a particular context; this is indeed the term we are interested in. The term “accreditation” corresponds to the procedure through which an organizational authority recognizes that a legal entity or a natural person is apt to perform specific activities.

The principles and concepts presented must go along with a V&V methodology. It is not uncommon to find V&V processes integrated concurrently with the systems engineering process or the simulation development process: for example, the IEEE 1516.3 standard, relative to the recommended practice for the HLA federation development and execution process (FEDEP), which follows a classic V model of systems engineering, integrates a VV&A Overlay subprocess. This parallel process is suited to systems and simulations under development. For existing products, integrated within new developments, the REVVA project offers a generic post hoc V&V process, similar to the V model (see Figure 11.4). The descending branch of this generic process starts with the formalization of the V&V need, that is to say the analysis of the need and the context. This need is then developed into a structured, tree-like set of acceptance criteria (Target of Acceptance: ToA), which are themselves developed into a set of V&V criteria (Target of V&V: ToV) from which the V&V director can plan all the underlying activities. The satisfaction, or lack thereof, of the V&V criteria by the items of evidence is then analyzed and described within a V&V report. The final stage consists of evaluating the satisfaction, or lack thereof, of the V&V need by aggregating the acceptance criteria. The beneficiary has the final responsibility of accepting or refusing, based on the acceptance report produced during that stage.
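To fix ideas on the descending and ascending branches of this generic process, here is a deliberately simplified sketch of the structures it manipulates: acceptance criteria refined into sub-criteria and V&V criteria, with evidence attached to the leaves and a verdict aggregated back up. The criteria and the “all leaves satisfied” aggregation rule are invented for the illustration and do not come from the REVVA documents.

```python
# Much simplified picture of the generic V&V structures: the acceptance
# need is refined into a tree of acceptance criteria (ToA), each refined
# into V&V criteria (ToV); evidence is attached to the leaves and the
# verdict is aggregated back up.  Criteria and the "all leaves satisfied"
# aggregation rule are purely illustrative.

class Criterion:
    def __init__(self, statement, children=None, satisfied=None):
        self.statement = statement
        self.children = children or []    # refinement into sub-criteria
        self.satisfied = satisfied        # verdict from evidence (leaves only)

    def verdict(self):
        if not self.children:
            return bool(self.satisfied)
        return all(child.verdict() for child in self.children)

toa = Criterion("Simulation acceptable for pre-design trade-off studies", [
    Criterion("Detection model valid in the scenario's envelope", [
        Criterion("Compared against reference trials data", satisfied=True),
        Criterion("Sensitivity to weather parameters documented", satisfied=True),
    ]),
    Criterion("Known limitations documented for the beneficiary", satisfied=False),
])

print("Acceptance recommendation:", toa.verdict())   # False: one criterion unmet
```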

A database must be built up at the heart of this process: it helps capitalize on all the data and the knowledge issued from the V&V activities. It would be useful to link it to a business database featuring the reference business data. These databases constitute a precious source of information for any beneficiary wishing to reuse products that have been verified and validated. Moreover, thanks to these pieces of information, including the knowledge of the products’ field of validity, the interoperability with other products is favored. Without this information and the agents involved in these products’ development, a much longer work of reverse engineering would have to be undertaken, and it would not necessarily produce all the necessary data. Such a situation would be comparable to the works of archeologists, who try to determine the origin of the previously shown cave painting, the true message its author was trying to transmit, its original purpose, etc.

This process is sufficiently generic to be adapted to the various V&V objectives, the nature of the investigated products, the available human and financial resources, and the risks tolerated by the beneficiary. Within the REVVA project, case studies conducted on acquisition and training simulations have shown how this process can be applied to such types of application. This generic process is currently being analyzed for standardization within the SISO, which has become a chapter of the IEEE dedicated to the field of modeling and simulation. On the other hand, this generic process is not necessarily adapted to the V&V of technical-operational simulations which implement performance or behavioral models, helping assess the operational effectiveness of a future system used within a given scenario on the basis of its unit performance (see [RAB 06, RAB 07], in which a process is based on the standardization of the notions of trust and credibility of simulations, which include the relative skill and trust levels of all stakeholders, on top of strictly technical criteria). Keeping that in mind, the reader must be aware that, as long as no international V&V standard exists and we lack sufficient hindsight on these methodologies, we will have to choose among the various existing V&V methodologies and adapt them to our needs.

Figure 11.4. The GM V&V Generic Process (quoted from [SIS 07])


11.3. Model-driven engineering

11.3.1. The MDA conceptual framework

The MDA architectural model is an OMG standard, whose main purpose is to ensure the durability of the enterprise’s business knowledge against the rapid evolution of technologies. Since 2000, the OMG has sponsored the MDA approach as an alternative to the failed attempts, in the 1990s, at standardizing CORBA as a single middleware able to guarantee real interoperability of software components issued from heterogeneous sources. The main reason for such a failure stemmed from the over-dependency of CORBA components on technologies.

The proposed conceptual and methodological framework provides a set of directives to structure specifications expressed as models in UML (unified modeling language); code is then generated for any target platform (see [OFTA-4 04] for a proposed definition of a platform). Separating the specification of the system’s functionalities from any notion of implementation on a technological platform helps implement the paradigm “write once and generate everywhere”. Indeed, the IT press [ITE 04] acknowledges that, with the MDA approach, models went from the contemplative to the productive mode.

The associated software engineering method is derived from the object-oriented method and even complements it, as underlined by [BEZ 04]. On the one hand, the UML language, the basis of MDA, has its roots in object-oriented methods of analysis and design, such as OMT (object modeling technique) or Booch (see Figure 11.5). Moreover, in both cases, a higher level of abstraction is required (classes in object-oriented methods, metamodeling in MDA and MDE). The main difference is that in model-driven engineering the focus is put on models, not objects, and a higher level of abstraction is possible. This helps avoid plunging into development details (and therefore the solution) as early as the upstream stages of specification and design of the system’s life cycle, as is often the case in traditional object-oriented methods, whose adaptability to evolutions is limited.

Let us remark, however, that recent evolutions of object-oriented technologies – design patterns and aspect weaving – try to compensate for this problem of adaptability in a way close to some underlying MDA principles. The idea of building design patterns as architectural components, which can be reused to solve a recurring problem, stemmed from the works of the architect C. Alexander [ALE 77]. For every pattern, there is a purpose (the problem to solve), the problem’s solution, and the limits of its use in a given context. In practice, the construction of a pattern goes through the analysis of the commonality (common features) and of the variability (structures liable to change) of the problem’s field [SHA 02]. The commonality is obtained through abstraction, based on variations of specific concrete cases. This abstraction must integrate as global a vision as possible of the field, in order to ensure the maximum level of interoperability (level 4 of the LCIM model shown above). Thus, the commonality provides a certain structural stability and robustness, while variability characterizes the pattern’s aptitude for being reused for various problems in one single field. This separation between commonality and variability can in fact be compared to the MDA approach, in which the former would correspond to the business processes, and the latter to the technical characteristics of the technological platforms which implement these processes.

Figure 11.5. Relationship between UML and object-oriented methodologies with important temporal milestones (OOPSLA was the main meeting of the IT community around object-oriented languages)


The starting point of the MDA methodology is often presented through two separate models: the PIM (platform independent model) and the PSM (platform specific model). As their names indicate, the former is independent from the platform, unlike the latter. The MDA principle requires modeling components to be woven between the PIM and the PSM, before applying a set of transformations in order to obtain code which can be compiled and run on the target platform. The OMG reference documents mention the necessity of an intermediary model, a so-called PDM (platform dependent model), to go from PIM to PSM, but the definition of such a model is rather unclear. Thus, in [OFTA-4 04], the authors propose to clarify the definition and role of a PDM. Their vision can be summed up by the right-hand diagram of Figure 11.6. Putting it in parallel with the traditional Y-shaped architectural process used in systems engineering, the PIM would then be akin to the functional architecture which characterizes the business aspects; the PDM would correspond to the technical architecture which defines the technical components and their technical characteristics; the weaving stage between the PIM and the PDM would be akin to the stage of allocation of the technical components to the expected functions; and finally, the PSM and the code obtained through transformations can be compared to the physical architecture, specific to the chosen technologies.
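A toy rendition of this Y-shaped flow may help: a PIM describes a business service independently of any platform, a PDM carries the technical traits of a platform, weaving allocates one to the other, and a final vertical transformation produces code. Everything below (names, “platforms”, generated text) is invented for the illustration.

```python
# Toy rendition of the MDA Y cycle: PIM (business) + PDM (platform traits)
# are woven into a PSM, from which code is generated by a vertical
# transformation.  Platform descriptions and generated code are invented.

pim = {"service": "compute_trajectory", "inputs": ["position", "velocity"]}

pdm_corba = {"middleware": "CORBA", "idl_module": "SimServices"}
pdm_hla   = {"middleware": "HLA", "federation": "TrainingFederation"}

def weave(pim, pdm):
    """Horizontal step: allocate the business service to a platform."""
    return {**pim, **pdm}                      # the PSM carries both aspects

def generate_code(psm):
    """Vertical step (PSM -> code), here a trivial text template."""
    return (f"// middleware: {psm['middleware']}\n"
            f"void {psm['service']}({', '.join(psm['inputs'])});")

for pdm in (pdm_corba, pdm_hla):
    print(generate_code(weave(pim, pdm)))      # same PIM, two target platforms
    print()
```

The same business model is thus carried, unchanged, towards two different technical targets; only the PDM and the generation step vary.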

Figure 11.6. Parallel between system engineering and MDA


This separation between PIM and PSM must go along with the appropriate organization of design components. The authors of [SHA 02] recommend encapsulating the objects’ variations and using composition, rather than classifying them via class inheritance relationships, as is done in object-oriented languages. Composition helps achieve a more modular architecture, and encapsulation helps isolate the complexity-generating variations introduced by the successive addition and modification of design details. According to Brooks [BRO 87], this so-called accidental complexity can be reduced, in opposition to so-called essential complexity, which is intrinsic to the real-world problem. Nevertheless, the authors of the introductory chapter [OFTA-1 04] of the collective work [OFTA 04] rightly underline that modeling helps reduce the intrinsic complexity of the real world by ignoring the details which do not pertain to the expected purpose. By controlling the refinement process within the development process, accidental complexity is gradually controlled. The MDA approach actually tries to achieve such control by proposing to automate refinement through vertical transformations leading to the production of compilable and executable code. These transformations should themselves be expressed as models. This is not enough, however: it is most important to verify and validate them, so as to enable reuse with a good knowledge of the associated quality, and therefore of the level of trust which a beneficiary will have when reusing them for his own benefit.

11.3.2. MDA methodological framework

MDA’s main methodological innovation lies at the level of mapping. In systems engineering, the components of the technical architecture are manually assigned to functions, while in the MDA methodology the models’ weaving and transformations can be automated if the transformations and the use of MDA tools have been modeled beforehand (see the OMG’s Internet site http://www.omg.org, which supplies the list of support tools, or [OFTA-8 04], which offers a synthesis of the evaluation of said tools). This is the main advantage of this software engineering approach in which “everything is a model”. Hence, the code becomes disposable, since it can be automatically regenerated, and becomes similar to a commodity without any real added value. The transformations, acting as interfaces between models, take on a new importance, as is the case with the interfaces between components in systemic logic. Better still, the conformity link between two consecutive levels of abstraction up to the highest level (meta-metamodel or conceptual reference model compatible with the MOF) guarantees global coherence and favors interoperability and reuse once all the (meta-)models and the transformation rules have been verified and validated (see [TOL 04]).

We have previously touched on the two categories of transformation: vertical and horizontal. They are both used in the Y cycle of Figure 11.6: the transition from PSM to code relies on vertical transformations (for example TV1,1→0 and TV2,1→0 in Figure 11.7), which add implementation details within models whose level of abstraction is lowered. On the other hand, the weaving between PIM and PDM belongs to the horizontal transformations, in which various specifications or designs are integrated within one single specification/design, without changing the level of abstraction. Vertical transformations may then be necessary to obtain the corresponding PSM. In the case of technological evolutions, instead of adapting or directly porting the existing source code towards a new source code specific to another platform (for example TH0,1→2 in Figure 11.7), we may use alternative solutions, as long as an effort is put into the abstraction of the PSM and PDM platforms, and a composition of transformations is used (the symbol ∘ denotes the composition operator): TH0,1→2 = TV2,1→0 ∘ TH1,1→2 ∘ (TV1,1→0)⁻¹ and TH1,1→2 = Weaving2 ∘ TH*2,1→2 ∘ (Weaving1)⁻¹.

Referring to Figure 11.7 in order to express both these formulas in natural language:

– the transformation between the C1 and C2 codes is achieved by going back up to the PSM1 model, transforming it into PSM2, and generating the C2 code;

– the transformation of the PSM1 model into PSM2 is achieved by going from PSM1 back to PDM1 (therefore, through the “inversion” of the weaving process), then transforming the latter into PDM2, and generating the PSM2 model via the weaving process.

Despite the apparent complexity of these formulas, these transformations are all the easier to achieve as they operate on models with an increasingly high level of abstraction (the transformation between models at level M0 is obtained via the transformation of models at level M1, which in turn is obtained via the transformation of metamodels at level M2), and therefore with minimum focus on implementation details, which are a priori complicating factors and must not take precedence over structuring aspects, lest portability be substantially diminished.

This course of action is preferable in order to facilitate the modeling of horizontal transformations and the maintenance of their evolutions, and therefore to increase the levels of interoperability and reuse thanks to a higher level of abstraction. Indeed, the amount of design and implementation detail diminishes as the level of abstraction rises, which facilitates the control of the modeled system’s complexity. Moreover, the separation between business and technological logic is better controlled at a higher level of abstraction. Should the opposite occur, efforts to perform the separation would be in vain, for transformations would have to integrate both aspects. For example, at level M1, the transformation TH1,1→2 would have to integrate elements from both PIM and PDM to go from one PSM to another (see Figure 11.7).
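The compositions written above can be mimicked with plain function composition: migrating code from platform 1 to platform 2 amounts to inverting the vertical transformation, applying the horizontal transformation at model level, then regenerating code. The sketch below echoes the toy PIM/PDM/PSM structures of the previous example and is, again, purely illustrative (it is not a real transformation engine).

```python
# Function-composition reading of TH0,1->2 = TV2,1->0 o TH1,1->2 o (TV1,1->0)^-1 :
# go back up from code C1 to PSM1, transform PSM1 into PSM2 at level M1
# (by "un-weaving" PDM1 and re-weaving PDM2), then generate C2.
# Structures and platforms are the toy ones used above, not real tools.

def compose(*fs):
    """Right-to-left composition, as with the o operator in the text."""
    def composed(x):
        for f in reversed(fs):
            x = f(x)
        return x
    return composed

def tv_inverse(code):                    # (TV1,1->0)^-1 : recover PSM1 from C1
    return code["psm"]                   # assumes the PSM was kept alongside the code

def th_level1(psm1):                     # TH1,1->2 : swap platform traits (un-weave / re-weave)
    pim_part = {k: v for k, v in psm1.items() if k in ("service", "inputs")}
    return {**pim_part, "middleware": "HLA", "federation": "TrainingFederation"}

def tv_platform2(psm2):                  # TV2,1->0 : generate code for platform 2
    return {"psm": psm2, "text": f"void {psm2['service']}(); // {psm2['middleware']}"}

c1 = {"psm": {"service": "compute_trajectory", "inputs": ["position", "velocity"],
              "middleware": "CORBA", "idl_module": "SimServices"},
      "text": "void compute_trajectory(); // CORBA"}

th_level0 = compose(tv_platform2, th_level1, tv_inverse)   # TH0,1->2
print(th_level0(c1)["text"])                                # code for platform 2
```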

Figure 11.7. MDA transformations on the different levels of the metamodeling stack


Thirdly, even if horizontal transformations may be necessary for the evolution of business aspects (example: TH2,1→2, see Figure 11.7), automating transformations as far upstream as possible in the MDA cycle undoubtedly improves the system’s flexibility towards technological evolutions and reduces production costs and delays.

11.3.3. Another instance of MDE: the DSL tools

Even though the MDA approach is the most advanced specific instance of the general MDE approach, its acceptance by the software community and its industrial deployment are still pending, due to numerous criticisms. First of all, the MDA approach focuses on the separation between the platform-specific and platform-independent aspects, whereas the MDE approach aims at being more general: for example, it includes development methods such as aspect-oriented programming, in which the functional and non-functional aspects (performance, quality of service, reliability, security, etc.) are separated in a modular way, then woven within an object-oriented application.

It is true that the UML language, an MDA medium, has the merit of being generic and has assets to become a universal language unifying both systems and software engineering practitioners. In that way, it helps specify, build, visualize and document the components, statics and dynamics of complex systems through a thorough representation of these systems as models. However, this generic nature leads to imprecise descriptions and hampers the development of the MDA approach. Indeed, practice shows that UML is better suited to the informal graphic documentation of designs [GRE 04]. A more thorough use of UML must go through specialization by business field, which requires confirmed and rare skills, both in UML and in business knowledge. Without this double competence, the recurring problem of dialogue and comprehension between technical and business experts (in effect, an interface problem) is a major obstacle to the adoption of MDA.

Some specialists are even more severe towards the UML language, calling into question the imprecision of the UML metamodeling, and notably the way its semantics are subject to various possible interpretations, but also the limits of UML, notably in the modeling of Java or C# interfaces, and in the reuse of parts of UML models. Even the arrival of UML 2.0 cannot make up for these weaknesses, since this new version does not cover the needs of some aspects of software development, such as the modeling of data and user interfaces (see [GRE 04], Appendix B for more details).

Another difficulty in applying MDA and using the UML language lies in the fact that no method is provided to guide users in their modeling work, and notably in the choice of the proper level of abstraction, relative to the metamodeling stack, to meet their needs.

From the standpoint of model transformation technologies, [KAD 05] underlines the weaknesses of the MDA standards. For example, respecting the MOF standard often leads to overly complex interfaces, by imposing the use of CORBA IDL, while the XMI standard is often judged too complex for programming model exchanges. Besides the necessary simplification of the use of MDA tools, it advocates the use of a model transformation framework (software development infrastructure) which will, at a minimum, allow:

– the application of specific design patterns on the models;

– the fusion of a model’s various views;

– the generation of code specific to certain platforms;

– the execution of the models’ validation tools.

For others [ITE 04], the future of the MDA methodology seems linked to the development of a market of transformation components, where a component bought from one supplier might be run on another supplier’s transformation engine. At this time, such a commercial and industrial development can hardly be imagined, since an MDA tool vendor cannot profit from marketing tools which might interoperate with the competition’s. At least, such a thing will not happen as long as the field’s industry has not reached a higher degree of maturity.

The current trend rather consists of building on DSLs (domain-specific languages), materialized through small specialized metamodels, expressed in a language (textual or graphic notation) close to the end-user’s business language, in order to take the systems’ various aspects into account separately. This approach thus solves the problems stemming from the generic nature and the semantic imprecision of UML that we have previously mentioned. For example, through the use of the XML Schema format (XML: extensible markup language), a DSL becomes portable and independent from the general-purpose programming languages designed for IT specialists. DSL implementation must be performed with tools which meet the aforementioned requirements of [KAD 05]. In that way, the efforts put into the design and development of business-specific software and systems may be capitalized on, through the factoring of repetitive tasks and the reuse of design patterns encapsulated in verified and validated components. Nowadays, two competing vendors offer DSL tools to implement MDE: Microsoft, with Visual Studio, and IBM, with Eclipse.
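To give a flavor of what such a small, business-oriented DSL can look like, here is a sketch in which a few lines written in a vocabulary close to the end-user’s are parsed into a tiny specialized model and used to generate boilerplate code; the syntax, the domain and the generated skeleton are all invented (neither Microsoft’s nor IBM’s tooling is involved).

```python
# Tiny illustrative DSL: a textual notation close to the business
# vocabulary of a (fictitious) sensor-simulation domain, parsed into a
# small specialized model and used to generate boilerplate.
# Keywords, domain concepts and the generated skeleton are all invented.

dsl_source = """
sensor Radar    range 80  update 0.5
sensor Sonar    range 10  update 2.0
"""

def parse(text):
    sensors = []
    for line in text.strip().splitlines():
        kw, name, _, rng, _, period = line.split()
        assert kw == "sensor"                 # only one construct in this toy language
        sensors.append({"name": name, "range": float(rng), "period": float(period)})
    return sensors

def generate(sensors):
    for s in sensors:
        yield (f"class {s['name']}Model:\n"
               f"    RANGE_KM = {s['range']}\n"
               f"    UPDATE_PERIOD_S = {s['period']}\n")

print("\n".join(generate(parse(dsl_source))))
```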

11.4. Feedback

11.4.1. Issue faced by the DGA

Within the French Ministry of Defense, the Délégation générale pour l’armement (DGA, General Armament Delegation) prepares future defense capabilities and conducts armament programs for the French military. It works in close relation with the General Staff, from the identification of future needs to the monitoring of the users’ satisfaction. As in civilian industries, achieving the best product while respecting budget and schedule requirements is one of the main, constant preoccupations of project leaders. The way to achieve this is to take advantage of what already exists without having to redevelop it for a particular system, and also to make functionalities and performances evolve at the lowest possible cost, all the while guaranteeing a continuity of service to satisfy the needs of the client or the end-user.

Nowadays, defense capabilities are no longer defined by following the simple logic of weapon systems (example: fighter plane, tank, frigate, etc.) but rather of force systems which deliver capability effects (for example: engagement and combat, deterrence, etc.) and simultaneously involve the different branches of the armed forces (land, air, sea), whether national or within coalitions. In this way, the number of interacting components increases inexorably and leads to the increasing complexity of defense systems, which have not necessarily been designed to operate together; their life expectancies are heterogeneous, their respective life cycles are not synchronized, and their architectures and interconnections vary. Moreover, the predominance of software in these complex systems only heightens the difficulty of controlling the variability of their components in time and space.

To define future systems and be able to make the proper trade-offs according to the geopolitical context, the analysis of threats and the doctrine of use, one must have the proper simulation tools. Using simulations in the various stages of a defense system’s life cycle is an old practice. However, simulation tools used to be employed in a decentralized, fragmented and uncoordinated way. The evolution towards systems of systems goes along with organizational changes: pooling of resources, creation of multidisciplinary teams and coordination of all actions. In this framework, interoperability, reuse and capitalization are the main focus and constitute privileged research themes. The DGA has launched several research and technology projects on those themes, applied to the modeling and simulation of defense systems through the use of model-driven engineering methodologies and tools. The three following parts each describe one of these projects and the corresponding feedback in terms of model-driven engineering.

11.4.2. Feasibility study of the MDA approach

The various DGA technical centers and the defense industry own an important catalogue of simulation models and keep developing more. We have noticed that, on the one hand, these models’ definition, design, development and operation depend on a great variety of modeling and programming languages (UML, XML, ADA, C, C++, JAVA, etc.); and that, on the other hand, these models have been developed in order to be integrated into specific and/or proprietary simulation platforms, which operate with specific hardware configurations (machines, physical communication networks, etc.) and software (operating system, communication protocols, middleware such as CORBA or a runtime infrastructure compatible with the high level architecture (HLA) interoperability standard, resulting from work led by the American Department of Defense).

These platforms can offer integrated work environments featuring services such as: editing and coupling of models, definition and execution of simulations, definition and implementation of communications, visualization and analysis of results, capitalization, configuration management of information, etc.

A project was therefore launched in 2003 to define a level of modeling abstract enough to be independent of the simulation platforms and to support the design of simulation models. The expected result is a model design chain adapted to these platforms, enabling automated generation of executable code on them from a UML model. The level of abstraction attained must allow the models to interoperate and be reused in various simulation contexts, and therefore make investments durable. Convinced that the business part is the enterprise’s true capital, while the infrastructure part will follow the evolution of IT technologies, the DGA saw a possible solution in the MDA approach.

Instead of working at level M1 of the metamodeling stack, by directly defining a PIM and a set of model transformations to obtain a PSM from which executable application code specific to one of the target platforms is then generated, this study follows another approach: it raises the abstraction to level M2 and defines so-called MOF transformations (see Figure 11.8) which operate on the metamodels to which the PIM and the PSM respectively conform.
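As a purely illustrative sketch, and not the study’s actual tooling (which relied on MIA-Transformation), the idea of working at level M2 can be rendered in a few lines of Python: the transformation is written once against the source and target metamodels, and can then be applied to any M1 model that conforms to the source metamodel. All metaclass and attribute names below are hypothetical.

```python
# Minimal sketch: a transformation defined at level M2 (between metamodels)
# and applied to any conforming M1 model. All names here are hypothetical.

# M2: a tiny platform-independent metamodel and a platform-specific one,
# each given as {metaclass name: list of attribute names}.
PIM_METAMODEL = {"SimulatedEntity": ["name", "update_rate_hz"]}
PSM_METAMODEL = {"HlaObjectClass": ["className", "publishRateHz"]}

# M2-level mapping: metaclass-to-metaclass and attribute-to-attribute rules.
MOF_RULES = {
    "SimulatedEntity": ("HlaObjectClass",
                        {"name": "className", "update_rate_hz": "publishRateHz"}),
}

def conforms(model: list[dict], metamodel: dict) -> bool:
    """Check that every M1 element instantiates a metaclass of the metamodel."""
    return all(e["metaclass"] in metamodel
               and set(e["slots"]) <= set(metamodel[e["metaclass"]])
               for e in model)

def transform(pim: list[dict]) -> list[dict]:
    """Apply the M2-level rules to an M1 model conforming to PIM_METAMODEL."""
    assert conforms(pim, PIM_METAMODEL), "input model does not conform to the PIM metamodel"
    psm = []
    for element in pim:
        target_class, attr_map = MOF_RULES[element["metaclass"]]
        psm.append({"metaclass": target_class,
                    "slots": {attr_map[k]: v for k, v in element["slots"].items()}})
    assert conforms(psm, PSM_METAMODEL)
    return psm

if __name__ == "__main__":
    radar = [{"metaclass": "SimulatedEntity",
              "slots": {"name": "Radar", "update_rate_hz": 10}}]
    print(transform(radar))
```

Because the rules are expressed between metamodels rather than between particular models, they remain valid for every model that conforms to the PIM metamodel, which is precisely what motivates raising the abstraction to M2.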

While this approach favors the interoperability and reuse of models, as set out in the objectives, there are nevertheless non-negligible differences between this solution and the transformations described in Figure 11.7 which deserve some explanation. The PIM is located at level M2 in Figure 11.7, whereas it is located at level M1 in Figure 11.8. This gap in the levels of abstraction is directly linked to the vagueness of the OMG’s specifications on MDA and to the difficulty of positioning oneself on the abstraction scale, since the four levels of the metamodeling stack are expressed in a relative way.

Figure 11.8. Transformations of MOF models (featured in [MIL 03])


Metamodeling involves elaborating a metamodel of the simulation domain, in order to abstract the business concepts and identify what these concepts have in common across the specific variations of the simulation platforms. The metamodels are expressed in UML v1.5 using the Rational Rose modeling tool (version 2.0 of UML was not available at the time of the study).

The exercise shows, however, that because of the aforementioned semantic ambiguity of UML, the resulting metamodel is too generic and elementary from both the business and technological standpoints. For example, the representation of time cannot be taken into account in a satisfactory way. Moreover, the study also shows that in-depth knowledge of the platforms is necessary to master the scope of the concepts in the simulation domain being implemented.

As for the definition of the model transformations, an analysis of the tools available at the time recommended the MIA-Transformation tool for the chosen type of transformation (MOF). With the transformations expressed in the MIA-TL language, 280 transformation rules had to be defined for two of the platforms, which required a considerable design and development effort.

For one of the modeled platforms, the study did not go past the metamodeling stage because it proved impossible to model the time handling and dynamics of its simulation engine. For the other platforms, the automatic generation of executable code was realized with the Basic Script language of Rational Rose.

The feasibility of the MDA approach was therefore only partially demonstrated in this study, which underlined a gap between theory and practice. From this experience, it is clear that the main difficulties encountered when implementing the MDA approach are caused by:

– the need to reuse basic classes in every platform;

– the differences of service levels in the various platforms;

– the differences in the models’ granularity;

– the incompleteness of the tools which instrument the approach;

– the lack of functional and technical skills.

11.4.3. Feasibility study of the MDE approach

In addition to the previous project, in which the models obtained through MDA had to adapt to the existing technological platforms, another project was launched by the DGA in 2005 to study the feasibility of jointly evolving several simulation platforms towards a single platform based on more modern software technologies.

This platform must provide analysts and developers with an environment for developing and exploiting models of systems (weapons and forces), transforming them into executable code, and exploiting and shaping the results.

From a methodological standpoint, the target platform must enable the implementation of an MDE/MDA approach applied to the particular field of technical-operational simulations (which are used during needs analysis, when the operational need is formalized and the functional requirements, or even some technical requirements at the system level, are defined), from the informal description of the simulation through to the generation of executable code, with a clear separation between business logic and technical and technological considerations (see Figure 11.9; an illustrative sketch follows the list below):

– the informal description of the problem is expressed in a free format (text and diagrams), detailing the service functions of the system to be developed. In the case of technical-operational simulations, it describes the scenario representing the operational context to be simulated, and its intended use, that is to say the purpose of exploiting the simulation;

– the analysis model defines how the functional needs will be covered while remaining within business considerations. The lower-level components may be modeled, as well as their behavior and their main operations, both provided and required;

– the untargeted design model is obtained by applying architectural patterns, independent of the features of the target language, based on stereotypes (for example, extensions of UML modeling components) which define a component’s semantics and the way it should be used in a model;

– the targeted design model takes the technical considerations into account (in particular the implementation language) through the application of architectural patterns specific to the target platform. It expresses a more precise modeling (types used, operation signatures, etc.);

– the code itself, once compiled and linked, provides the simulation application. It is represented by text files, which also include free comments as well as complementary structured information (structured comments, attributes, etc.) enabling the automatic generation of documentation, or helping reverse engineering by retaining data about the design model.
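The sketch below, written in Python with purely hypothetical names and structures (the study does not describe the platform at this level of detail), illustrates the spirit of this chain: each stage is a function that consumes the previous model and adds either business or technological detail, down to code carrying structured comments for later reverse engineering.

```python
from dataclasses import dataclass

@dataclass
class AnalysisModel:             # business concepts only
    components: list[str]

@dataclass
class UntargetedDesign:          # architectural patterns applied via stereotypes
    stereotyped: dict[str, str]  # component name -> stereotype

@dataclass
class TargetedDesign:            # bound to a target platform/language
    classes: dict[str, str]      # class name -> implementation language

def analyse(informal: str) -> AnalysisModel:
    """Very crude extraction of business components from the free-format text."""
    return AnalysisModel([w for w in informal.split() if w.istitle()])

def apply_patterns(m: AnalysisModel) -> UntargetedDesign:
    """Untargeted design: stereotypes define semantics, no target language yet."""
    return UntargetedDesign({c: "<<simulated_entity>>" for c in m.components})

def target(d: UntargetedDesign, language: str) -> TargetedDesign:
    """Targeted design: record platform-specific choices (here just the language)."""
    return TargetedDesign({name: language for name in d.stereotyped})

def generate_code(d: TargetedDesign) -> str:
    """Emit skeleton source text with structured comments kept for reverse engineering."""
    return "\n".join(f"// @design:stereotype simulated_entity\nclass {name} {{}};"
                     for name in d.classes)

if __name__ == "__main__":
    informal = "Radar tracks each Aircraft within one Scenario"
    print(generate_code(target(apply_patterns(analyse(informal)), "C++")))
```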

Figure 11.9. Transformations of models within the reference platform


The first stage of the project consists of a definition study, during which the available technologies and tools are evaluated and compared. For the specific field of technical-operational simulations, it turns out that the use of DSL Tools is better adapted than an approach purely based on MDA and UML. Indeed, unlike general-purpose languages such as UML for graphic modeling, Java for programming or XML for sharing structured data, a DSL is, by its very principle, a language adapted to a particular business domain or to specific needs, such as technical-operational simulations. Moreover, a DSL is closer to a business language (thanks to the XML Schema format) and much simpler to use than UML for business experts who do not necessarily master every concept of such a highly abstract language. Finally, a DSL formalism helps capitalize on software development efforts by taking over repetitive tasks while providing application developers with an appropriate space for creativity.

In parallel with this first stage, the government technical experts defined a standard for the capture and analysis of needs within technical-operational simulations, baptized XMS. This new DGA standard relies, for its syntax, on the W3C XML Schema and, for its graphic semantics, on a form akin to UML while respecting the de facto standard format of Rational Rose (.mdl). This approach implies that code is generated from a graphic model (direct, top-down engineering) and that the metamodel’s reverse engineering is done from the code (bottom-up engineering). The goal is to use the XMS standard as a graphical DSL within an MDE approach, leading to the best possible reuse of the model components managed in a framework library dedicated to technical-operational simulations (see the project described in the following section). Since this framework takes care of the common design components in its field of activity, it enables developers to concentrate on the functional added value to bring to their development rather than on its technical realization. The framework meets the requirements defined in [KAD 05]. It implements concepts with a higher level of abstraction, which appear in the model. Thus, in the event of a change in programming technology, one need only reconstitute the framework’s basic components. The evolution effort is limited to the framework’s components, while the essential business expertise remains capitalized in the DSL.
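The actual XMS syntax is not reproduced here. As a hedged illustration only, the sketch below parses a hypothetical XMS-like XML model (invented element and attribute names, not the DGA schema) and generates skeleton classes, in the spirit of the direct, top-down engineering described above.

```python
import xml.etree.ElementTree as ET

# Hypothetical XMS-like model: element and attribute names are invented
# for illustration and do not reproduce the actual DGA XMS schema.
XMS_MODEL = """
<xmsModel name="AirDefenceStudy">
  <component name="Radar" kind="sensor">
    <operation name="detect"/>
  </component>
  <component name="Missile" kind="effector">
    <operation name="launch"/>
  </component>
</xmsModel>
"""

def generate(xms_text: str) -> str:
    """Top-down engineering: emit one skeleton class per XMS component,
    tagging each generated element with metadata for later reverse engineering."""
    root = ET.fromstring(xms_text)
    lines = [f"# generated from XMS model '{root.get('name')}'"]
    for comp in root.findall("component"):
        lines.append(f"# @xms kind={comp.get('kind')}")
        lines.append(f"class {comp.get('name')}:")
        for op in comp.findall("operation"):
            lines.append(f"    def {op.get('name')}(self):")
            lines.append("        raise NotImplementedError  # business code added by hand")
        lines.append("")
    return "\n".join(lines)

if __name__ == "__main__":
    print(generate(XMS_MODEL))
```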

The XMS standard corresponds to an M2 metamodel. It describes the usual concepts of technical-operational simulations and defines the rules governing their use. It constitutes a kind of grammar guaranteeing the syntactic and semantic coherence of the model during its construction, for example by forbidding the association of graphic components that cannot be linked together. Since the generated code can be modified manually by the developer to add business expertise, there is a non-negligible risk that the XMS metadata initially embedded in the code will eventually become incomplete.
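As a small, hypothetical illustration of this “grammar” role (the real XMS rules are naturally richer and formally defined), the coherence check can be pictured as a table of authorized associations consulted whenever two graphic components are linked:

```python
# Hypothetical association rules: which component kinds may be linked.
# The real XMS metamodel defines such rules formally; this is only a sketch.
ALLOWED_LINKS = {
    ("sensor", "tracker"),     # a sensor may feed a tracker
    ("tracker", "effector"),   # a tracker may cue an effector
}

def can_associate(source_kind: str, target_kind: str) -> bool:
    """Return True only if the metamodel authorizes this association."""
    return (source_kind, target_kind) in ALLOWED_LINKS

def associate(model_links: list, source: dict, target: dict) -> None:
    """Add a link to the model, refusing incoherent associations up front."""
    if not can_associate(source["kind"], target["kind"]):
        raise ValueError(f"association {source['kind']} -> {target['kind']} "
                         "is forbidden by the metamodel")
    model_links.append((source["name"], target["name"]))

if __name__ == "__main__":
    radar = {"name": "Radar", "kind": "sensor"}
    missile = {"name": "Missile", "kind": "effector"}
    links = []
    associate(links, radar, {"name": "Tracker", "kind": "tracker"})  # accepted
    try:
        associate(links, radar, missile)                              # rejected
    except ValueError as err:
        print("refused:", err)
```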

To control this risk, the reverse engineering component verifies the XMS metadata, checking that every element carries its associated XMS metadata and flagging anything that might hamper the reverse engineering process. Building on the bootstrap mechanism of .NET and exploiting the XML file that contains the source code comments, reverse engineering can recover the components (classes, methods, attributes, etc.) as well as the associated metadata. A metadata validation function is also provided, to verify that edits to the source code have preserved the coherence of the XMS metadata.
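The check itself can be pictured as follows: a minimal sketch assuming hypothetical “@xms” markers in the source comments, not the actual .NET implementation, which scans the edited source and reports every class left without metadata.

```python
import re

def check_xms_metadata(source_text: str) -> list[str]:
    """Report every class declaration not preceded by an '@xms' comment,
    i.e. everything that would hamper reverse engineering of the metamodel."""
    problems = []
    previous_line = ""
    for number, line in enumerate(source_text.splitlines(), start=1):
        if re.match(r"\s*class\s+\w+", line) and "@xms" not in previous_line:
            problems.append(f"line {number}: class without XMS metadata: {line.strip()}")
        if line.strip():
            previous_line = line
    return problems

if __name__ == "__main__":
    edited_source = """
# @xms kind=sensor
class Radar:
    pass

class Missile:        # metadata lost after a manual edit
    pass
"""
    for problem in check_xms_metadata(edited_source):
        print(problem)
```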

11.4.4. Feasibility study of the models’ capitalization and reuse

The two previous projects focus on the interoperability of models and simulations, as well as on their reuse, thanks to the implementation of a hosting structure and a methodological framework complying with the requirements of model-driven engineering. In terms of reuse, capitalizing the models is crucial to profit fully from this approach. A third project has therefore been launched to create a library of models associated with the single reference platform; these models must be:

– generic: the models do not reference any particular piece of hardware, system or equipment, but rather a class of equipment representing various devices. The input data define which particular device an instance represents. The models are therefore adaptable and reusable in a broad variety of contexts (e.g. technical-operational simulation, training war game) and constitute a basis for building finer or more complex models if necessary;

– available: the models are capitalized within a structure (digital and organizational) that facilitates access and reuse by the broadest possible public, within the Ministry of Defense or among defense industry players;

– durable and evolutionary: the models must be able to evolve according to needs (integration of newly modeled systems, upgrading of the models, etc.) and to adapt to changes in their operating environment (hosting structure, operating system, etc.);

– validated: the models are validated through a standardized process. Precise elements will be needed to evaluate their applicability in each context of use;

– coherent with current needs: the models will enable the modeling of new forms of operation (control of violence, humanitarian aid, counter-terrorism, etc.).

The models capitalized within this library will eventually have to cover three levels of modeling (i.e. of granularity), illustrated by the sketch after this list:

– elementary: the simplest level for describing a physical agent (e.g. a plane represented as a single point);

– intermediary: the elementary level refined with certain parameters useful to the modeling (e.g. taking into account the 3D attitude of a plane);

– evolved: a technical-functional level which models the detail of the modeled system’s operation.
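A minimal sketch of these three granularity levels for an aircraft model might look as follows; the class and attribute names are hypothetical and do not come from the DGA library.

```python
from dataclasses import dataclass, field

@dataclass
class ElementaryAircraft:
    """Elementary level: the plane is a single point with a position."""
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

@dataclass
class IntermediaryAircraft(ElementaryAircraft):
    """Intermediary level: the elementary model refined with 3D attitude."""
    roll: float = 0.0
    pitch: float = 0.0
    yaw: float = 0.0

@dataclass
class EvolvedAircraft(IntermediaryAircraft):
    """Evolved level: technical-functional detail of the modeled system."""
    fuel_kg: float = 5000.0
    sensors: list = field(default_factory=list)   # e.g. ["radar", "IRST"]

if __name__ == "__main__":
    # The simulation architect picks the granularity required by the study.
    print(ElementaryAircraft(x=1.0, y=2.0, z=3000.0))
    print(EvolvedAircraft(sensors=["radar"]))
```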

Each of these models will come with functional, technical, design and validation description documents, the source code, and the associated technological data. This body of knowledge, both business and technical, will be capitalized on to facilitate the reuse of the models. The architect of a simulation will thus be able to choose, from the model library’s database, the models with the appropriate level of granularity, assemble them (easily, thanks to a plug-and-play connection) and create the appropriate simulation.

We previously pointed out that the UML language is well suited to documentation needs. It therefore looks like a natural candidate for producing the functional, technical and design description documents which accompany the models. However, given the difficulty business experts have in apprehending UML, it became obvious that the models should be expressed in XML and follow the XMS metamodel. This is why an XMS-to-UML translation tool has been developed, to help business experts produce the UML documents needed for capitalization within the model library.
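The translation tool itself is not described in detail in the source. As a hedged sketch under the same hypothetical XMS-like structure as above, the translation can be pictured as emitting a textual UML class description (here PlantUML-style output rather than the Rational Rose .mdl format actually targeted):

```python
import xml.etree.ElementTree as ET

def xms_to_uml(xms_text: str) -> str:
    """Translate a hypothetical XMS-like XML model into PlantUML-style class text."""
    root = ET.fromstring(xms_text)
    lines = ["@startuml", f"title {root.get('name')}"]
    for comp in root.findall("component"):
        lines.append(f"class {comp.get('name')} <<{comp.get('kind')}>> {{")
        for op in comp.findall("operation"):
            lines.append(f"  +{op.get('name')}()")
        lines.append("}")
    lines.append("@enduml")
    return "\n".join(lines)

if __name__ == "__main__":
    model = """<xmsModel name="AirDefenceStudy">
                 <component name="Radar" kind="sensor">
                   <operation name="detect"/>
                 </component>
               </xmsModel>"""
    print(xms_to_uml(model))
```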

Both the approach and the tools were tried out on existing models, with the aim of reverse engineering them and capitalizing them according to the MDE methodology. The business experts were involved in the project from the start, which eased the change in their work habits, notably in terms of documentation and traceability of their work. On the other hand, multilevel modeling, intended to allow plug-and-play connections, did not provide sufficiently convincing results for this type of reuse to be considered for the moment.

Once the set of models has been developed and validated, organizing them is imperative to facilitate their exploitation and reuse. A taxonomy was therefore defined to structure the information on the library’s generic and reusable models. This taxonomy is complemented by an ontology based on OWL (Web Ontology Language), specified by the W3C, the international standardization body for web technologies. Such an ontology corresponds to a metamodel dedicated to knowledge engineering, standardizing the semantic relationships between the capitalized information (corresponding, for example, to level 2 of the LCIM model, see section 11.2.5). It must be coherent with the XMS metamodel (for example, level 4 of the LCIM model) in order to guarantee the best possible reuse of the models. When this chapter was written, further work was ongoing to deepen the taxonomic and ontological aspects of the project.
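As a small, hypothetical illustration (the project’s actual taxonomy and ontology are not reproduced here, and the namespace URI is invented), such a structure can be expressed with the rdflib library: OWL classes for the taxonomy’s categories, and a property relating a capitalized model to its granularity level.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS, OWL

# Hypothetical namespace for the model library; not the project's real URIs.
LIB = Namespace("http://example.org/simulation-library#")

g = Graph()
g.bind("lib", LIB)

# Taxonomy: categories of capitalized models, expressed as OWL classes.
for cls in ("SimulationModel", "SensorModel", "PlatformModel"):
    g.add((LIB[cls], RDF.type, OWL.Class))
g.add((LIB.SensorModel, RDFS.subClassOf, LIB.SimulationModel))
g.add((LIB.PlatformModel, RDFS.subClassOf, LIB.SimulationModel))

# Ontology: a semantic relationship between a model and its granularity level.
g.add((LIB.hasGranularity, RDF.type, OWL.DatatypeProperty))
g.add((LIB.hasGranularity, RDFS.domain, LIB.SimulationModel))

# One capitalized model described with the taxonomy and the ontology.
g.add((LIB.GenericRadar, RDF.type, LIB.SensorModel))
g.add((LIB.GenericRadar, LIB.hasGranularity, Literal("intermediary")))

print(g.serialize(format="turtle"))
```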

11.5. Conclusion and perspectives

The complexity of systems of systems must be controlled from the very beginning of their life cycle. Modeling and model-driven engineering, from analysis to design, help reduce this complexity without losing sight of fitness for purpose. The use of models and simulations helps reduce development costs and lead times, while providing early warning of design errors. This is all the more true when the models and simulations are interoperable and can be reused, provided effort is put into their verification, validation and capitalization.

In software engineering as in modeling and simulation, experts agree that raising the level of abstraction, as recommended by the MDA/MDE approach, definitely favors interoperability and reuse, by defining a reference model of the domain (a level M2 metamodel) which guarantees semantic cohesion between models. Since the business and technological aspects are separated, their relationship must be modeled through model transformations which play the role of interface, a crucial point for systems engineering. To ensure the durability of investments, the capitalization of verified and validated data constitutes the third, complementary pillar for achieving interoperability and reuse while controlling complexity. This third pillar requires a significant documentation and meta-information effort, which analysts, designers and developers must provide and keep up to date according to mastered engineering processes reflecting a certain maturity in their methodological approach.

Feedback attests to the feasibility of this methodology and to the expected benefits. The MDA approach is tried and tested for general database applications, in which the implemented technologies and the software and physical architectures are relatively standard. On the other hand, for more specialized applications such as technical-operational simulations, an MDE approach with DSL Tools is more appropriate, given the specificities of the field and the associated simulation technologies. Difficulties remain to be overcome, notably in choosing the level of abstraction of models and metamodels, reducing the intrinsic complexity of systems of systems, and the limited availability of mature MDA/MDE tools. Here are some leads for possible improvements to minimize such difficulties:

– the use of functional analysis might help the designer in specifying his needs in terms of modeling and metamodeling;

– the use of appropriate interview techniques, featuring open questions and a candid attitude, may bring out implicit and unexpected information (unknown unknowns, see [MUL 06]), thus reducing the uncertainty about the modeled complex system.

From an economic standpoint, model-driven engineering modifies the value chain. Business enterprises no longer need to worry about the production stages, since code generation can be automated. They ought to focus on their core business, in which they have a competitive advantage and through which they can bring their clients real added value. If the approach is industrialized, economies of scale may be realized, or at least economies of scope for particular fields (see [GRE 04]). To achieve such gains, people must be trained to acquire the proper skills, and both their culture and their state of mind must evolve. For example, the added value must be precisely analyzed, in terms of know-how, between the physical or phenomenological equations underlying the models and the technological performance data which allow precise numerical values to be calculated from those equations. This added value is directly linked to possible ownership or industrial property rights, which the producers of these technological data may well claim. The scope within which such a claim of ownership applies should be very precisely defined, so as not to place unneeded restraints on the distribution of components through excessive and groundless protection. Model-driven engineering may bring practical solutions to this question insofar as the designer takes these aspects into account upstream in the life cycle: at the level of the exchanged and manipulated data, one ought to distinguish, according to the levels of generalization, what may be freely shared from what may not be without a specific agreement. To caricature, in an infrared camera the general optical model must be generic, and it would be counterproductive to treat it as a black-box model that could not be freely accessed; on the other hand, data specific to the equipment, such as its sensitivity, may fall under industrial protection. The interest of dissociating generic from specific data is that it remains possible to exploit and use these models with standard, openly available data, even if this means ultimately obtaining a numerical performance less precise than would have been achieved with the exact data.

11.6. Bibliography

[ALE 77] ALEXANDER C., ISHIKAWA S., SILVERSTEIN M., A Pattern Language, Oxford University Press, New York, 1977.

[BEZ 01] BÉZIVIN J., GERBÉ O., “Towards a Precise Definition of the OMG/MDA Framework”, Proceedings of Automated Software Engineering, San Diego, United States, November 2001.

[BEZ 04] BÉZIVIN J., “Sur les principes de base de l’ingénierie des modèles”, RSTI – L’objet. Où en sont les objets?, p. 145-156, 2004.

[BRO 87] BROOKS F., “No silver bullet: essence and accidents of software engineering”, Computer Magazine, 1987.

[DGA 97] DÉLÉGATION GÉNÉRALE POUR L’ARMEMENT, Guide dictionnaire de la simulation de défense, April 1997.

[DGE 01] DIRECTION GÉNÉRALE DES ENTREPRISES, définition de la mission à la stratégie de normalisation pour la société de l’information (http://www.telecom.gouv.fr/rubriques-menu/organisation-du-secteur/normalisation/interoperabilite-349.html).

[DOD 94] DoD DIRECTIVE 5000.59-M, DoD M&S glossary, January 1994.

[DOD 98] DoD C4ISR ARCHITECTURES WORKING Group, Levels of information systems interoperability, March 30, 1998 (http://www.c3i.osd.mil/org/cio/i3/).

[GRE 04] GREENFIELD J., SHORT K., Software Factories: Assembling Applications with Patterns, Models, Frameworks, and Tools, Wiley, New York, 2004.

[GRU 93] GRUBER T.R., Towards Principles for the Design of Ontologies Used for Knowledge Sharing in Formal Ontology in Conceptual Analysis and Knowledge Representation, Kluwer Academic Publishers, New York, 1993.

[ITE 04] IT Expert, issue n°51, September/October 2004.

[JOH 98] JOHNSON M.V.R., MCKEON M.F., SZANTO T.R., “Simulation based acquisition: a new approach”, Report of the 1997-1998 DSMC Military Research Fellows, Defense Systems Management College Press, Virginia, United States, December 1998.

[KAD 05] KADIMA H., MDA – Conception orientée objet guidée par les modèles, Dunod, Paris, 2005.

[KAM 02] KAM L., Terminologie M&S dédiée aux tableaux de synthèse des réponses au questionnaire DCE-CAD, Document technique n° CTA/00350109/NTECH/001 du Centre Technique d’Arcueil, 20/03/2002.

[LEM 94] LE MOIGNE J.L., La théorie du système général, PUF, Paris, 1994 (4th edition).

[LEM 95] LE MOIGNE J.L., Modélisation des systèmes complexes, Dunod, Paris, 1995.

[LUZ 03] LUZEAUX D., “Cost-efficiency of simulation-based acquisition”, Proceedings of SPIE Aerosense’03, Conference on Simulation, Orlando, United States, April 2003.

[MCX 05] Glossary of the European program MCX, “Modélisation de la CompleXité”, Définition usuelle de la complexité, 2005 (http://www.mcxapc.org/).

[MEN 02] MEINADIER J.P., Le métier d’intégration de systèmes, Hermès, Paris, 2002.

[MET 05] Workshop A08 FROM THE EUROPEAN PROGRAM MCX, “Modélisation de la CompleXité”, Méta modélisation : de la modélisation conceptuelle aux formalismes de modélisations, 2005 (http://www.mcxapc.org/).

[MIC 07] MICROSOFT CORPORATION, Livre blanc sur l’interopérabilité, April 2007, (further information at http://www.microsoft.com/interop/).

[MIL 03] MILLER J., MUKERJI J., MDA Guide version 1.0.1 of OMG, June 12, 2003, available at: http://www.omg.org/mda.

[MOR 77] MORIN E., La Méthode, t. 1, Le Seuil, Paris, 1977.

[MUL 06] MULLINS J.W., Discovering Unk-Unks: How to Learn What You Don’t Know You Don’t Know, London Business School, London, 2006.

[NON 00] NONGA HONLA J., Les fiches de lecture de la chaire D.S.O. du CNAM, 1999-2000 (http://www.cnam.fr/lipsor/dso/articles/fiche/lemoigne.html).

[OFTA 04] OBSERVATOIRE FRANÇAIS des TECHNIQUES AVANCÉES., Ingénierie des modèles : logiciels et systèmes, series ARAGO 30, Tec & Doc, Paris, May 2004.

[OFTA-1 04] JÉZÉQUEL J.M., BELAUNDE M., BÉZIVIN J., GÉRARD S., MULLER P.A., “Contexte et problématique, chapitre 1”, in Ingénierie des modèles: logiciels et systèmes, Observatoire Français des Techniques Avancées, series ARAGO 30, Tec & Doc, Paris, May 2004.

[OFTA-4 04] BELAUNDE M., BÉZIVIN J., MARVIE R., “Transformations et modèles de plates-formes, chapitre 4”, in Ingénierie des modèles : logiciels et systèmes, Observatoire Français des Techniques Avancées, series ARAGO 30, Tec & Doc, Paris, May 2004.

[OFTA-8 04] KAM L., L’HOSTIS B., LUZEAUX D., “Application aux simulations Défense, chapitre 8”, in Ingénierie des modèles : logiciels et systèmes, Observatoire Français des Techniques Avancées, series ARAGO 30, Tec & Doc, Paris, May 2004.

[OTA 03] NATO ALLIED DATA PUBLICATION 34 (ADatP-34), NATO C3 Technical Architecture (NC3TA), Version 4.0, March 2003 (http://www.nato.int/docu/standard.htm).

[OTA 07] NATO Glossary of standardization terms and definitions, AAP-6, OTAN, 2007.

[PEL 98] PELISSIER C., Unix. Utilisation, administration, réseau Internet, Hermès, Paris, 1998 (3rd edition).

[RAB 06] RABEAU R., Proposition d’une démarche de validation des simulations technico-opérationnelles utilisées comme une aide à la décision, Thèse de doctorat de l’université de Versailles Saint-Quentin – spécialité informatique, July 7, 2006.

[RAB 07] RABEAU R., “Credibility of defense analysis simulations”, 20th International Conference on Software & Systems Engineering and their Applications, ICSSEA07, Paris, December 2007.

[SCO 03] SCOTT H., YOUNGBLOOD S., “A proposed model for simulation validation process maturity”, in Proceedings of 2003 Spring Simulation Interoperability Workshop, Orlando, April 2003.

[SHA 02] SHALLOWAY A., Trott J.R., Design patterns par la pratique, Eyrolles, Paris, 2002.

[SIS 07] SISO GM V&V PRODUCT DEVELOPMENT Group, Guide for Generic Methodology (GM) for Verification and Validation (V&V) and Acceptance of Models, Simulations, and Data – Reference Manual, Version 1.0, 2007.

[TOL 03] TOLK A., MUGUIRA J.A., “The levels of conceptual interoperability model”, in Proceedings of 2003 Fall Simulation Interoperability Workshop, Orlando, September 2003.

[TOL 04] TOLK A., “Composable mission spaces and m&s repositories – applicability of open standard”, in Proceedings of 2004 Spring Simulation Interoperability Workshop, Washington DC, April 2004.

[VOL 04] VOLLE M., A propos de la modélisation, Michel Volle’s website: http://www.volle.com/travaux/modelisation2.htm, March 5, 2004.


1 Chapter written by Lui KAM.
