Chapter 8

Anticipating Integration, Verification and Validation

“Errare humanum est.”
SENECA THE YOUNGER

“To err is human, to forgive is divine.”
A. POPE

Let us jump forward in time, for a brief while, to a point a few months before the start of the operational mission in Antarctica, which will begin in February 20X (see calendar, Figure 3.1). At this point, what observations can we make concerning the progress of the creation of the life support facility?

We are at the KYN Systems integration and verification site near Toulouse, France. The technical integration manager of the Antarctica Life Support Facility has just met with one of the suppliers of components covering one of the main functions of the system: water treatment. The project is running late, and the team is in a hurry to install these components in order to continue with the system integration phase. To save time, and with Roger’s agreement, the manager decides not to spend too long checking the state of these deliveries (just a quick visual inspection to make sure nothing is missing), to install them “as is”, and to move on.

We jump forward a little further to the beginning of February 20X. Nathalie, Jean-François and the three other scientists are now occupying the Antarctica Life Support Facility and have been there for a few days. The operational mission has now begun: they have just completed the facility installation phase and have begun establishing programs of scientific experiments. At this moment, an incident occurs: sensors report that the level of water in the clean water tank is dropping, although no water has been consumed. Based on estimations, if nothing is done the tank will be empty in approximately 18.5 hours.

A last leap forward in time: we are now in March 20X, one month after the incident. In the end, the mission was aborted and the team brought back to Europe. The origin of the incident was identified on-site by the scientific team as a problem linked to the water distribution network: the diameter of the connection points in the tank was a few millimeters smaller than ordered. Unfortunately, although the leak was plugged locally, the risk of other connections also being affected was considered too great, and it was judged prudent to abort the mission.

Yves is furious that his mission has failed in this way. Anne and Yves express their disgust to Roger. Roger does not understand how a problem could have occurred at this level: Marc and his team had analyzed this point of connection between the distribution network and the tank perfectly, accounting for all constraints (pressure, temperature, etc.). Their calculations produced precise values that were communicated to the component supplier, including diameters to be respected. No anomalies were detected during the numerous phases of testing carried out by KYN Systems. After investigation, it transpires that the data were misinterpreted by the supplier, who confused the internal and external diameters of the connections. Moreover, this error could have been detected by Roger’s teams if, during the phase of assembly at the KYN Systems site, precise measuring tools had been used to check the components received. As we know, to save time only a brief visual inspection was carried out.

As we have seen throughout this work, all systems engineering processes include activities carried out by humans. From the phase of requirement definition to the production of the system, via definition and design, humans are at the heart of engineering activities. This is not without (major) problems (see section 7.2.1): humans, like materials, are fallible. Humans may make mistakes, introducing errors into the system that may then result in failures. A failure occurring during the operational life of the system always has its origins in the engineering phase (e.g. a design fault) or in the phase of manufacturing or assembly and integration.

A human may make a calculation error or a copying error or misunderstand something that may have consequences on the final product. In the case of the fictional example presented above, two human errors contributed to the failure:

– supplier confusion regarding the internal and external diameters of connectors;

– the decision by KYN Systems not to carry out the verifications that would have led to detection of this error, with little or no control over the impact of this decision.

To reduce the risk of introducing errors, numerous “V&V” (verification and validation) activities are carried out. Thus, we have four main types of activities (review, analysis, demonstration and test) that allow us to improve the level of trust in the final system. At the end of these activities, residual anomalies are dealt with using dependability techniques (which are based on the hypothesis that no major design or manufacturing errors have been made that might induce common modes of failure).

In complex systems, such as the Antarctica system, these activities are compulsory as the probability of error is high, and the potential consequences of these errors are dramatic. In our fictional example, Roger chose not to carry out certain V&V activities (tests, in this case) and we have seen the results!

Let us return to an earlier phase of the project. Roger is very experienced, and he is well aware of the crucial role V&V tasks play in the development of the Antarctica Life Support Facility, particularly during the system integration, verification and validation phases. The scale of these tasks necessitates constant monitoring, not only during the first stages of system delivery but also earlier in the project. For this reason, he chooses a person, Elisabeth, whose role is exactly that: to direct IVV (integration, verification and validation) activities.

8.1. Positioning integration, verification and validation

Faced with a system on this scale, Elisabeth has a lot of questions concerning the way to approach IVV in relation to the Antarctica Life Support Facility. One of the first, fundamental questions concerns the position she must take in relation to systems engineering and the Antarctica Life Support Facility project, and thus her position in relation to Marc and Roger.

The notion of IVV is subject to a “historical” misunderstanding due to two different viewpoints, considering IVV either as phases in the lifecycle of the system or program or as processes.

Figure 8.1. IVV processes and technical processes in [ISO 08a]


On the one hand, in [ISO 08a], IVV is defined as a set of processes that are applied in a transverse manner to all phases of the system lifecycle, with the aim of ensuring:

for verification: that the system has been created in the “right” way, i.e. that it satisfies the applicable input requirements of the process that is the object of verification, as well as the applicable standards, practices and conventions. Here, these “applicable input requirements” should be taken to be the technical requirements produced by the requirement analysis process [ISO 08a];

for validation: that the “right” system has been created, i.e. that it satisfies all the applicable input requirements of the system, and that it responds to the right problem. In this case, “applicable input requirements” should be understood as the formalization of needs and expectations, produced by the stakeholder requirements definition process [ISO 08a].

In order to avoid confusion, we shall use the terms “system verification” and “system validation” to designate these processes. Figure 8.2 places these IVV processes into perspective in relation to the set of technical processes defined by [ISO 08a]. We shall take this opportunity to go into further detail concerning the interpretation of a well-known development model: the V model. It is interesting to see this model as representing processes (rather than phases, as we sometimes find). The processes are thus seen as non-sequential, taking place in parallel throughout the different phases (which are sequential). Thus, we should not read any temporal indications into this V model. The relationships that link the processes are input/output or producer/consumer relationships. Thus, in the specific case of IVV, the model shows the relationship of these processes to the engineering activities whose statements (their input) they “verify” against concrete, tangible components.

Figure 8.2. V model and processes


On the other hand, IVV may be defined as a succession of phases in the system lifecycle, which are carried out once the components of the system have been developed or acquired.

Figure 8.3. Phases of IVV


The “integration” phase covers the progressive “assembly” of system components: it is the phase in which we check that components integrate correctly (compatibility of physical interfaces, logic, etc.) and collaborate in the way expected by the system design.

The “verification” phase is the phase that allows us to ensure that the system obtained conforms to the technical requirements by which it is defined, while the “validation” phase consists of ensuring that initial user requirements have been successfully taken into account for the operational context.

As validation is carried out in an operational context, this sometimes implies that it must be separated from the verification phase by a phase of deployment into that context. Some groups also talk of operational qualification.

Misunderstandings over what is covered by IVV are sometimes increased by confusion with V&V.

First, let us look at the semantics of the terms “verify” and “validate”:

verify: “seek to know whether something is true or correct, whether it is as it should be”. The word (like “verity”) has its etymological roots in the Late Latin verificare, “present as true”, a word derived from the classical Latin verus (true, genuine, real) and facere (to do, to make);

validate: “show that something fulfills the required conditions to produce the desired effect, that it is valid”. The word (like “valid”) has its origins in the Late Latin validare (fortify, re-establish), from valere in classical Latin (to be strong – also the source of our word “valor”).

V&V is a transverse and permanent activity: everything that is produced must be verified and validated. At the very least, this means that:

– the system under study must be verified and validated (something we have already discussed);

– all of the engineering elements produced by systems engineering processes must be verified and validated: each technical (and non-technical) process in the systems engineering approach must include V&V activities;

– all elements produced by other processes, particularly IVV processes in this case, must also be verified and validated;

– the process itself must be verified and validated (one of the roles of quality control in a business).

We should now consider the layout of processes with the addition of the necessary V&V activities for each process: V&V of the engineering elements produced by each of these activities, as indicated in Figure 8.4.

Figure 8.4. V&V and IVV processes

image

How, then, should we consider IVV?

One solution that allows us to reconcile different viewpoints is to consider IVV as a system, and more specifically as an enabling system of the system under study.

Enabling systems

In the course of its lifecycle, the system under study must make use of a certain number of services. These services will be provided by other systems. These systems are known as enabling systems.


These enabling systems must be operational during the stage of the system lifecycle where their services are required.


A system is defined first and foremost by its purpose and missions (see section 4.1.1). The IVV enabling system is not exempt. We can therefore define it by its purpose: to allow the successful concrete creation of the system.

Systems engineering, which may itself be seen as an enabling system of the system in question, allows us to invent the system rationally, objectively and in an optimal manner, in an abstract and virtual form. IVV then ensures, for the concrete system (made up of tangible components), that what has been “said” of the system during the engineering process is present and true. The main missions of IVV are fairly evident and contained in the name:

integrate the system to ensure that the different components “fit” well together;

verify the system to render it “useable”: ready to be deployed and operated;

validate the system to ensure that it is “useful”: its use corresponds to stakeholder requirements and it allows efficient execution of the missions for which it was developed.

In what follows, we shall use the term IVV to designate “the IVV enabling system”.

From this perspective, Elisabeth is the systems architect (or systems engineer) of the IVV system. Her role is therefore at the same level as Marc’s, and her work depends on his. Marc and Elisabeth must therefore work together under Roger’s direction and arbitration.

Elisabeth begins working on her tasks. She rapidly becomes aware that she will need to take into account a number of very important elements that will, in varying ways, have a considerable impact on IVV activities:

– an operational launch date that cannot be pushed back: all IVV activities must be finished before the date determined for the operational launch of the base;

– the presence of sub-contractors responsible for the creation and provision of facility components. Elisabeth is not sure how to approach these sub-contractors: “Will they be reliable in terms of delivery deadlines? And in terms of quality?” Elisabeth does not know whether she can rely on them for certain IVV activities, or whether she should trust no-one but herself (and her team) to successfully implement the task;

– the critical character of the facility: if failures arise during operational use, human lives would be put at risk. Thus, certain IVV activities should be reinforced when ensuring the correct operation of the system;

– the particularly extreme operating environment of the system makes certain IVV tasks difficult.

From experience, Elisabeth knows that four key viewpoints are involved in designing IVV [POL 01]:

– the lifecycle: activities to carry out (“what?”) and their temporal sequence (“when?”);

– techniques: methods to use (“how?”);

– organization: roles (“with whom?”) and relationships;

– infrastructure: tools, resources, facilities, sites, etc., for use (“with what?”).

From a system design point of view, the first two points correspond to the design of the functional architecture of the IVV system, and the latter two correspond to its physical design. The correspondences are indicated in the following diagram:

Figure 8.5. Correspondence between key viewpoints in IVV and systems design


The functional architecture of the IVV-enabling system therefore sets out “how the IVV system works”:

– What phases are involved in the lifecycle of the IVV system?

– How, and when, do they follow on from each other?

– What activities (functions) are involved in each phase?

– What is the input for these activities, and what is their output?

– How are these activities synchronized with each other?

The physical architecture of the IVV-enabling system identifies and defines the components of the IVV system: necessary products and processes, operator roles, and the links between different aspects (infrastructure and organization).

8.2. Integration, verification and validation in the system’s lifecycle

The dimension of definition and temporal placement of IVV activities responds to the questions “what?” and “when?”: What will the main activities be? At what point in the engineering cycle will they be carried out?

We may distinguish four main stages of IVV: preparation, specification, execution and close-out. These stages are supported by monitoring and direction activities and by infrastructure-management activities.

Figure 8.6. The lifecycle of IVV (the “high” part of the functional architecture)


These are the stages involved in IVV as an enabling system, i.e. its lifecycle (see section 4.2). The execution phase of IVV, as an enabling system, is the operational phase during which services are requested by the main system under study. This latter system is considered to be in an “IVV stage” at this point.

Figure 8.7. Synchronization of the IVV lifecycle with that of the system under consideration


Let us consider the phases of the IVV system lifecycle in greater detail:

Preparation: the first IVV stage, in temporal terms. This occurs during the specification and design phases of the system. If IVV is considered as an enabling (sub-)system, this represents the phase of engineering its specification. At this point, we define:

- the outline of the system, which is the object of IVV;

- the input, which will serve as a reference point for IVV. At this point, work is also carried out on this reference point to ensure that it is relevant, coherent and testable for IVV;

- IVV strategy;

- the tools that will be needed to support IVV activities (i.e. these tools must be specified). Where relevant, we launch projects to create these tools; the same applies to IVV data;

- organization: identification of tasks, planning, identification of roles, responsibilities, human resources, indicators, deliverables, etc.

Specification: this IVV stage covers the design and creation of the IVV-enabling system. Mostly, this consists of:

- the creation of IVV resources (test cases, demonstrations of attainment of coverage objectives for test cases, checklists, etc.);

- the creation of IVV data;

- the creation of IVV tools.

Execution: this IVV stage corresponds to the phase of use. The phase includes:

- effective execution of IVV tasks;

- analysis of results, and modifications where necessary, followed by IVV tasks applied to the modifications.

Close-out: this IVV stage corresponds to the retirement of the IVV-enabling system from service. This phase, at the very end of IVV, includes recording IVV data (for proof and repetition), disassembling IVV tools and the creation of final reports.

Two activities are carried out continuously throughout these four phases:

monitoring and direction: includes tasks for monitoring the correct execution of phases and, where relevant, making the necessary modifications;

infrastructure management: includes tasks for the creation and maintenance of the structure needed for IVV in an operational state.

Of course, in the real-life case of a complex system, this cycle is applied numerous times throughout the engineering process, considering the different levels of granularity of the system described in the SBS (i.e. at the level of the global system, at the level of each sub-system and at component level) and possibly multiple times at any given level.

In this context, it is essential to guarantee the coherence of IVV activities and ensure that they do not overlap, or, on the contrary, that parts have not been forgotten.

8.3. Analyzing input

In accordance with these principles, Elisabeth begins her activities with the “preparation” IVV stage. Thus, after setting out the outlines of the system to be considered, she begins analyzing the input documents to check their level of readiness for the creation of an IVV system.

For this analysis, she plans to use “review”, the V&V tool best suited to this task.

The input details that Elisabeth will verify are as follows:

Consistency: she must ensure that there are no contradictions between requirements. If contradictions were present, Elisabeth would have no point of reference to use in judging the results of tests relating to the point in question.

Completeness and precision, i.e. the absence of “holes” in the specification: parts that are not described, or are insufficiently described. Once again, if problems are encountered at this level, the absence of a strong reference point to use in building and judging tests would have a major impact on IVV: it is difficult to test something that is insufficiently described or open to multiple interpretations.

For example, Elisabeth has noticed that the following requirement poses problems:

“The Antarctica Life Support Facility must maintain a viable internal temperature for external temperatures reaching a minimum of -85°C.”

The notion of “viability” is imprecise and too vague to be the object of testing. Elisabeth proposes rewriting this requirement:

“The facility must maintain a temperature greater than or equal to 15°C for external temperatures greater than or equal to -85°C.”

Testability, i.e. the ability of the system to be tested:

- The absence of liveness properties, i.e. properties that state things the system will always or never do. These properties cannot be verified by testing, because an infinite number of tests would be required. If properties of this kind are found, Elisabeth must either attempt to have them modified (reducing them by adding limits) or plan to use other V&V tools to verify them (modeling and formal proofs, for example).

For example, the following requirement contains a liveness property:

“The Antarctica Life Support Facility guarantees that communications will be established with the OTL laboratory following a request by one of the members of the scientific team.”

Elisabeth decides to add the following performance requirement:

“The facility will establish communications with the OTL laboratory in a time less than or equal to 60 seconds following a request by one of the members of the scientific team.”
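The distinction can be stated in temporal logic (a sketch added here for illustration; this formalism is not used elsewhere in the book, and G reads “always”, F “eventually”):

\[ \mathbf{G}\,(\mathit{request} \rightarrow \mathbf{F}\ \mathit{comms}) \qquad \text{(liveness: no finite test suite can establish it)} \]

\[ \mathbf{G}\,(\mathit{request} \rightarrow \mathbf{F}_{\leq 60\,\mathrm{s}}\ \mathit{comms}) \qquad \text{(bounded response: each test checks a 60 s window)} \]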

- The presence of system stimulation capabilities (giving the tester the possibility of manipulating the system’s internal state and interfaces) and observation capabilities (planning for the possibility of obtaining information on the internal state of the system).

For example, we might wish to cover the following requirement by using a test:

“The internal temperature of the facility should be between 18°C and 24°C.”

To do this, we need to be able to observe the internal temperature of the base, and for this we add the following requirement:

“The facility must allow measurement of the temperature of the laboratory on a scale from 15°C to 30°C that is precise to the nearest 0.5°C”.
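As a brief sketch of how this added observability makes the requirement testable (the facility interface named here is hypothetical, not part of the project’s documented design):

```python
def test_internal_temperature(facility):
    # Relies on the added observation requirement: the laboratory
    # temperature is readable over [15, 30] °C with 0.5 °C precision.
    # `facility.read_lab_temperature()` is a hypothetical interface.
    t = facility.read_lab_temperature()
    assert 18.0 <= t <= 24.0, f"internal temperature out of range: {t} °C"
```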

If, despite Elisabeth’s vigilance and efforts, requirements remain that would be difficult to test, all is not lost. Although testing is the most widely used V&V technique, other approaches are available, such as inspection (based on review techniques) and analysis (based on the construction and use of system models).

These tools allow us to validate certain requirements that would be hard to test, such as “the facility must resist wind at speeds of up to 220 km/h”.

Moreover, unlike tests, these techniques may be applied early in the engineering cycle, long before the system becomes available. This constitutes a potential saving in terms of effort (and, consequently, cost), as errors may be identified early in the project, thus reducing the impact of the necessary correction activities.

8.4. Establishing an integration, verification and validation strategy

Having ensured that the input documents for the IVV stage are at the appropriate level of maturity (or, in any case, have reached a level that allows IVV activities to continue), Elisabeth can now prepare the definition of an IVV strategy. The creation of this strategy is the key to all IVV activities. In this, she has two objectives:

to identify the parts of the system on which to concentrate IVV efforts: the strategy consists first of identifying the most critical parts of the system. Two viewpoints are taken into account: characteristics (functionalities, performances, etc.) and components. Second, differing objectives are attributed based on levels of criticality;

to break down IVV into steps: the strategy must define the main steps of IVV that allow us to attain the objectives identified above and their possible allocation to stakeholders. This work should lead to the identification of stages that are complementary, coherent and optimized.

8.4.1. Identifying integration, verification and validation objectives

The first point is very important, as it responds to the following problem: given that the system must be operational at a certain date that cannot be renegotiated, the time available for the “execution” of IVV activities, between delivery by suppliers and the beginning of operations (the IVV execution window), is limited. Due to this time limit, it is not possible to carry out all IVV activities on the whole system. Choices therefore need to be made: which IVV activities should be carried out on which parts of the system? In passing, note that these choices, once made, are not definitive. Even a little experience is enough to show that delays in delivery are more than likely. These delays reduce the IVV execution window still further, as the operational launch date is non-negotiable.

Thus, at any moment, as the project’s contingencies unfold, the person responsible for IVV must have the means of modifying the defined strategy, controlling all of the impacts of this modification, and being able to justify it (and to respond to the question of why one particular IVV activity was abandoned rather than another).

How, though, can we define this strategy in concrete terms? Essentially, we must identify the parts of the system that present the highest risk and concentrate IVV activities in these areas, to the detriment of other areas that present a lower risk and will be barely covered, if at all.

In order to identify those parts that present the greatest risks, Elisabeth will study the system from two angles: that of characteristics and that of products.

Characteristics are expressed through requirements that define the system. They may be split into six main groups, which themselves may be divided into subgroups. The six basic groups are as follows [ISO 01]:

functionality: the capacity of the system to fulfill certain functions in order to satisfy a certain expressed need;

reliability: the capacity of the system to maintain a certain level of performance under certain conditions;

usability: the capacity of the system to be understood, learnt, used and attractive to the user;

efficiency: the capacity of the system to provide a certain level of performance relative to the resources used;

maintainability: the capacity of the system for modification following the correction of bugs, modifications to its characteristics or to its environment;

portability: the capacity of the system to be transferred from one environment to another.

Elisabeth will seek to identify which characteristics present the greatest risks, or, in other words, the characteristics that, if they are not respected by the system, will generate the greatest losses in terms of money, time or operations. Naturally, Elisabeth will first consider requirements that may pose a risk to human life. Second, she will look at requirements with an impact on the success of the mission.

Figure 8.8. Functional breakdown of what is needed to “provide clean water”

image

Of course, Elisabeth, as IVV manager, cannot always carry out this analysis alone. We may reasonably wonder whether she is the best person to carry it out. Would it be better to delegate it to other stakeholders able to give a more pertinent analysis: for example, those responsible for safety, who would be better able to evaluate potential impacts on human life, or the users or the client when considering mission success? Elisabeth will need to seek out the appropriate resources, if not to carry out these analyses, then at least to obtain validation of the results.

Let us look at this aspect using an example: water treatment. The functional breakdown is shown in Figure 8.8.

The allocation of functions to components is shown in Table 8.1.

Table 8.1. Allocation of water treatment functions to components

Function | Component
Drive black water recycling; Recycle black water | Black water recycling unit
Drive gray water recycling; Recycle gray water | Gray water recycling unit
Store clean water; Measure level of clean water; Melt ice/snow | Clean water tank
Detect low level of clean water; Start water supply; Stop water supply; Detect high level of clean water | Facility supervision unit
Distribute clean water | Water distribution network

Elisabeth has chosen to classify functions using the following criteria:

– functions where failure represents a vital risk are classified as level 1. These include water storage and distribution functions;

– water may be produced by two flows: either by melting snow or by reprocessing dirty water. As these two flows operate simultaneously, Elisabeth considers that neither is vital in character and that they should not therefore be placed at level 1. Nevertheless, she considers water production by melting snow to be more critical than production by recycling waste water. The latter is therefore classified as level 3 (the lowest level), whereas the former is at level 2.

Elisabeth obtains a classification of functions as shown in Table 8.2.

Table 8.2. Evaluation of criticality levels by function

Components are the physical elements that make up the system in its concrete form. Each supports one or more characteristics. Components are identified during the system design phase and are represented in the PBS and the physical architecture.

Elisabeth will attempt to identify those components that represent the greatest risks, as she did with characteristics, independently of the functions they support. For this, she will look at the new products designed specifically for this project.

Elisabeth has decided to classify products based on the criterion of newness. New components, i.e. components designed and produced entirely for this project, are considered to present a higher risk than the others. Elisabeth considers that the recycling units fall into this category, as they are a new development only tested in one similar life support facility (Concordia) for which little feedback is available. The other components, such as the facility supervision unit, are more traditional and present fewer risks. Elisabeth obtains the classification presented in Table 8.3.

Table 8.3. Criticality levels of components

Component | Criticality
Black water recycling unit | 1
Gray water recycling unit | 1
Clean water tank | 2
Facility supervision unit | 2
Water distribution network | 2

To consolidate her analysis, Elisabeth submits her study to Marc, the systems architect, who gives his full approval of the results. At the end of this double analysis Elisabeth knows which elements of the system present the greatest risks. This may be represented as a matrix crossing characteristics and products. For example, for the water treatment function of the facility, Elisabeth obtains the matrix shown in Table 8.4.

This matrix was obtained by multiplying the criticality levels of the functions by those of the components. The blocks that do not correspond to real allocations are shown in gray.

Table 8.4. Components and functions – risk levels


Elisabeth obtains three groups of components:

category 1, the most critical (level 2), corresponds to the following functions: distribute clean water, store clean water;

category 2, intermediate criticality (level 3), corresponds to: drive gray water recycling, drive black water recycling, recycle gray water, recycle black water;

category 3, least critical (level 4), corresponds to: stop water supply, start water supply, detect high level of clean water, detect low level of clean water, melt ice/snow, measure level of clean water.
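These categories follow mechanically from the two classifications. As a minimal sketch (component levels per Table 8.3, function levels inferred from the text since Table 8.2 is not reproduced, allocation per Table 8.1), the computation behind Table 8.4 reads:

```python
from collections import defaultdict

# Criticality levels: 1 = most critical.
function_level = {
    "Distribute clean water": 1, "Store clean water": 1,
    "Measure level of clean water": 2, "Melt ice/snow": 2,
    "Detect low level of clean water": 2, "Detect high level of clean water": 2,
    "Start water supply": 2, "Stop water supply": 2,
    "Drive black water recycling": 3, "Recycle black water": 3,
    "Drive gray water recycling": 3, "Recycle gray water": 3,
}
component_level = {
    "Black water recycling unit": 1, "Gray water recycling unit": 1,
    "Clean water tank": 2, "Facility supervision unit": 2,
    "Water distribution network": 2,
}
allocation = {  # Table 8.1: function -> component
    "Drive black water recycling": "Black water recycling unit",
    "Recycle black water": "Black water recycling unit",
    "Drive gray water recycling": "Gray water recycling unit",
    "Recycle gray water": "Gray water recycling unit",
    "Store clean water": "Clean water tank",
    "Measure level of clean water": "Clean water tank",
    "Melt ice/snow": "Clean water tank",
    "Detect low level of clean water": "Facility supervision unit",
    "Start water supply": "Facility supervision unit",
    "Stop water supply": "Facility supervision unit",
    "Detect high level of clean water": "Facility supervision unit",
    "Distribute clean water": "Water distribution network",
}

# Risk level of each allocated pair = product of the two criticality
# levels; the lowest product marks the most critical pairs.
categories = defaultdict(list)
for function, component in allocation.items():
    risk = function_level[function] * component_level[component]
    categories[risk].append(function)

for risk in sorted(categories):
    # risk 2 -> category 1, risk 3 -> category 2, risk 4 -> category 3
    print(f"risk level {risk}: {sorted(categories[risk])}")
```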

Elisabeth can now associate IVV objectives with each system component. The higher the perceived risk level of the component, the stronger the objective will be.

IVV objectives allow us to define goals to attain during the IVV phase. To be effective, they must be attainable, precisely defined and measurable: it then becomes possible to estimate the distance from a goal, and consequently the remaining effort required to attain it. Furthermore, in order to choose one objective over another (based on the level of risk to cover), a hierarchy must exist between these objectives.

The main objectives used in IVV are presented in detail in section 8.7. For the life support facility, Elisabeth decides to use the following objectives:

for category 3:

- technical requirements list: coverage of 100% of technical requirements,

- user manual: coverage of 100% of procedures,

- installation manual: coverage of 100% of procedures;

for category 2:

- category 3 objectives,

- user manual: coverage of 100% of instructions,

- installation manual: coverage of 100% of instructions,

- for software: coverage of 100% of instructions, coverage of 100% of the equivalence classes of interfaces;

for category 1:

- category 2 objectives,

- user manual: coverage of 100% of decisions,

- installation manual: coverage of 100% of decisions,

- coverage of 100% of mechanical failures,

- for software: coverage of 100% of decisions, 100% pairwise coverage of interfaces.

These objectives are not set in stone. Elisabeth has not yet estimated the effort required to obtain these objectives and does not yet have precise knowledge of the means available to her. As her preparation work advances, and based on the progress of the project, these objectives may be modified.

Continuing in her approach, Elisabeth now looks at questions linked to the attainment of the objectives she has just established. It is not enough simply to define objectives; we also need a means of judging whether or not they have been attained. This is a verification activity applied to the IVV process itself. To do this, Elisabeth may take several complementary approaches:

A priori analysis, which consists of ensuring attainment of the objective by construction, i.e. at the moment of creation of the test repository. This may be based on the use of a tool that automatically generates the test repository, guaranteeing the attainment of certain objectives (this is the case for the coverage of software code, for example, or of interfaces). It may also be based on the creation of traceability matrices between the test repository and the definition repository.

A posteriori analysis consists of installing observation mechanisms during the test phases in order to measure the attainment (or otherwise) of the objective.

At this stage, Elisabeth chooses to use traceability matrices to demonstrate the coverage of documents: when creating tests, she must systematically link each test to the input elements in relation to which the objectives are defined.
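As a minimal sketch of this a priori check (the requirement and test identifiers are illustrative, not taken from the project), demonstrating 100% coverage of technical requirements from a traceability matrix reduces to a set computation:

```python
# Illustrative identifiers; the matrix links each test to the input
# requirements it is traced to.
requirements = {"REQ-001", "REQ-002", "REQ-003"}
traceability = {
    "TEST-A": {"REQ-001"},
    "TEST-B": {"REQ-002", "REQ-003"},
}

covered = set().union(*traceability.values())
uncovered = requirements - covered
print(f"requirement coverage: {len(covered & requirements)}/{len(requirements)}")
assert not uncovered, f"coverage objective not met; uncovered: {uncovered}"
```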

8.4.2. Stages of integration, verification and validation

Once this first stage of work has been carried out, Elisabeth can look at another part of the strategy: the division of IVV into a certain number of distinct steps that may run in parallel or in succession. Each step will be defined by a certain number of IVV objectives to be attained (as defined previously) and by a certain IVV environment, and will be placed under the responsibility of one of the IVV stakeholders.

Elisabeth’s choices will mostly be oriented by:

– the IVV objectives to be attained. Depending on the nature of these objectives, Elisabeth will need to plan specific stages where the necessary conditions for attainment of these objectives will be present: environment, equipment, etc.;

– the different stages of the system’s lifecycle, particularly those concerned with the relationship between acquirers and suppliers: supplier delivery orders and the contents of these deliveries are constraints that must be taken into account;

– the system architecture. Based on this architecture, Elisabeth has a variety of choices: integrate all components together and carry out tests on the unit thus obtained (“big bang” integration); integrate components in several steps, starting with the most basic (those that do not require other components) and moving up to the highest level (bottom-up integration); or the reverse (top-down integration);

– the resources available: techniques, tools, finances and time.

Elisabeth’s main job is to guarantee that the stages are coherent. For example, two stages should not share the same IVV objectives for the same system components, at the risk of generating redundant activities and thus excessive costs. In the same way, every IVV objective should be allocated to a stage, to avoid “holes” in the coverage of objectives.
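Such a coherence check is mechanical. A hedged sketch, assuming illustrative objective identifiers and the four stage names introduced just below:

```python
# Objectives fixed by the IVV strategy (illustrative identifiers).
required_objectives = {
    "component code coverage", "allocated requirement coverage",
    "internal interface coverage", "end-to-end requirement coverage",
    "procedure coverage", "on-site validation",
}
# Candidate allocation of objectives to stages.
stage_objectives = {
    "IVV-Unit-Factory": {"component code coverage",
                         "allocated requirement coverage"},
    "IVV-Integration-Factory": {"internal interface coverage"},
    "IVV-Verification-Factory": {"end-to-end requirement coverage",
                                 "procedure coverage"},
    "IVV-Validation-Target": {"on-site validation"},
}

allocated = [o for objectives in stage_objectives.values() for o in objectives]
duplicated = {o for o in allocated if allocated.count(o) > 1}  # redundant work
orphans = required_objectives - set(allocated)                 # coverage holes
assert not duplicated, f"objectives covered twice: {duplicated}"
assert not orphans, f"objectives allocated to no stage: {orphans}"
```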

At the end of the analytical process, Elisabeth has four main stages for IVV, associated with three different environments.

1) IVV-Unit-Factory stage: at the KYN Systems facilities, development platform

This stage is intended to support IVV activities relating to the attainment of coverage objectives at the level of components, which are considered as black boxes and white boxes. It is during this stage, for example, that objectives of software code coverage and coverage of the technical requirements allocated to components will be attained. These objectives may be attained using specific tools allowing detailed control of the environment of each component (input data) and of its internal elements (states, transitions), and through the use of suitable observation capacities (observation of produced data, internal states, internal transitions, etc.). For example, for the black water and gray water recycling units (intermediate criticality), this stage supports the following objectives:

– coverage of all instructions and control–command decisions;

– coverage of requirements allocated to this equipment.

2) IVV-Integration-Factory stage: at the KYN Systems facilities, integration platform

This stage supports IVV activities linked to the integration of different components and to IVV activities at system level. If successful, this stage is followed by disassembly of the system and its transportation to the final site. The specific environment used for this stage allows us to master the integration of components via various possibilities for simulating components that have yet to be integrated, and for simulating the system environment itself.

This stage thus supports the following IVV operations:

– system assembly;

– testing of the internal interfaces of assembled components.

3) IVV-Verification-Factory stage: at the KYN Systems facilities, integration platform

This involves:

– verification: end-to-end testing of all integrated elements (and coverage of requirements allocated to all components together);

– validation: detailed testing of installation and usage procedures.

4) IVV-Validation-Target stage: in the operating environment

This stage takes place after the transportation of components to the final site. It is, by nature, carried out in an environment where few IVV tools are available. It supports:

– final assembly of components after transportation;

– final on-site validation.

Having made these choices, Elisabeth is able to produce a summary table (Table 8.5) allocating IVV objectives to IVV stages.

Three environments will therefore be used for IVV: the development environment at KYN Systems, the integration environment at KYN Systems and the operational environment. At this stage, one of Elisabeth’s tasks is to define these environments precisely. She must establish a set of specifications containing a precise description of her needs for IVV. We shall return to this point in Chapter 9.

Table 8.5. Objectives of IVV steps


The last task in establishing this strategy is to define the actors involved in the different stages of IVV. Although IVV is entirely the responsibility of Elisabeth and her team, this does not mean that all activities must be carried out by them. Elisabeth may delegate certain IVV activities to certain project actors, for example component suppliers or even the final client (i.e. Anne). This delegation presents the advantage of optimizing the use of available resources. However, the delegating party (i.e. Elisabeth) must guarantee:

– the correct distribution of activities: avoid duplication of the same activity between several stakeholders and avoid “orphan activities”, i.e. unallocated activities;

– the successful completion of activities: here, we encounter the problem of the level of confidence accorded to stakeholders in charge of IVV activities. Can we have total confidence in them, and thus accept all results with no verification? Or, on the contrary, should we systematically doubt them and implement monitoring and verification activities, or even independent duplication of the activity? Everything here depends on our knowledge of the stakeholders, based on a shared history of varying length and quality;

– independence of production and IVV. This independence may operate at technical level (the individuals responsible for IVV are not the same people as those in charge of production), organizational level (the team responsible for IVV and the team in charge of development are different) or economic level (different companies are responsible for development and IVV).

For the Life Support Facility, Elisabeth decides to allocate the different identified activities as follows:

– the IVV-Unit-Factory stage to the development team;

– the IVV-Integration-Factory, IVV-Verification-Factory and IVV-Validation-Target stages remain under Elisabeth’s direction.

8.5. Defining the infrastructure

The infrastructure may be defined as containing all components that contribute to the IVV phase, i.e. those used in executing IVV tasks.

For IVV, the infrastructure is generally considered to contain three broad types of systems: platforms, tools and data.

8.5.1. Platforms

These are the physical locations where parts of IVV occur. We usually find:

– The “laboratory”: situated on the supplier’s premises and under their responsibility. The laboratory contains all the tools needed for detailed control of all the components the supplier develops: environmental control, access to and modification of internal data and of data produced by components. In our case, the development platform at KYN Systems, in charge of supplying the project, will carry out the IVV-Unit-Factory step.

– The “factory” site: situated on the supplier’s premises and under their responsibility. This site supports several stages of integration, where the components of the system are assembled bit by bit, and the first phases of testing are carried out on the system in its entirety. The factory site must therefore offer tools enabling progressive assembly of the system (physical manipulation of components) and tools for handling and observing the interfaces of components. In our case, the factory site corresponds to the KYN Systems integration platform, and will support the IVV-Integration-Factory and IVV-Verification-Factory stages.

– The “operating” site: the final site where the system will be used, generally under the responsibility of the client or user. By nature, a site of this kind will generally contain very few tools for controlling system input. Moreover, the system installed at this site is very close to the final system, i.e. it contains few means of observation: equipment for IVV in this type of environment is minimal, and only the validation test phases may be carried out. In our case, the operating site will, of course, be the target site in Antarctica. The IVV-Validation-Target stage will take place in this location.

To return to our story: after analysis, one of the platforms initially included in the plan was not used. This was a “similar site” platform: Elisabeth had planned to carry out a large part of IVV activities at a site that was not the final site, but a very similar one that avoided certain drawbacks (access conditions, extreme climatic conditions, logistics). Thus, Elisabeth had intended to use a site situated in the Western Pyrenees, not far from KYN Systems’ headquarters, for logistical reasons. In the end, this platform was not used, for reasons of cost and access. For verification, and for what might have represented pre-operational validation, Elisabeth instead used a platform situated within KYN Systems’ facilities (the integration platform).

8.5.2. Tools

These are the software and material components of platforms. They support IVV activities by providing facilities for management, observation and action on the system, and constitute the final products of enabling systems devoted to IVV. These numerous tools follow a classification [POT 95] found in the domain of software, but which has been transposed for use in the systems domain:

– management tools, used for project management aspects relating to IVV, for example planning, progress monitoring, monitoring of identified bugs, etc.;

– “dynamic” tools, used for the execution of technical IVV actions, such as data creation, environment simulators, result analysis, etc.;

– “static” tools for IVV tasks that do not require execution of the system: code analysis tools, optical analysis, review tools, requirement processing tools, etc.

Figure 8.9. Taxonomy of IVV tools, based on [POT 95]


For each of the platforms identified, Elisabeth must identify the IVV tools to use for different phases. For the integration platform, for example, Elisabeth thinks of the following characteristics, which she must then transform into technical requirements for IVV (as an enabling system):

– the integration platform must be able to hold the assembled life support facility. It must therefore have the following minimum dimensions: X m × Y m × Z m;

– the integration platform must allow handling of components in such a way as to allow their assembly;

– the integration platform must have means of simulating the OTL laboratory in terms of communications;

– the integration platform must be able to simulate horizontal vibrations of an amplitude of between 0 and T cm, with a frequency from 0 to F Hz;

– etc.

For a time, Elisabeth – who worked in the domain of space sciences for some years – had planned to use a special test tool: an “ice-field simulator”, a sort of giant freezer able to recreate extreme temperature and wind conditions, large enough to test whole sections of the facility (much like the giant “pressure cookers” used for testing in the aerospace industry, which allow all or part of a satellite to be plunged into a thermal vacuum). Unfortunately, preliminary feasibility studies led Elisabeth to rapidly abandon this idea.

8.5.3. Data

Data are the elements offered as system input in the course of IVV. Depending on the type of objective, these may be real data (in which case we work with operational data, or unmodified copies of these data) or artificial data, created specifically for IVV. The use of artificial data allows us to create situations that are very rarely encountered with real data, thus covering a broader range of IVV objectives.

The infrastructure must be defined as early as possible in the engineering cycle, for the simple reason that the specification, design, creation and IVV of the infrastructure require at least as much effort as the creation of the system for which it is to be used. Once again, we should remember that enabling systems must be operational during the phases of the system’s lifecycle where their services are required. This is particularly true for the IVV infrastructure.

This is therefore something Elisabeth must work on from the IVV preparation phase onwards: she must describe, albeit in a very general manner at this stage, the specification of the platforms that will be used during the different stages of IVV. This description should include requirements in terms of the data to use (real, simulated, etc.) and functionalities devoted to IVV (recording, replay, observation, etc.). It must, evidently, be carried out in coherence with decisions taken when creating an IVV strategy, and particularly when determining stages of IVV and their associated objectives.

8.6. Integration, verification and validation organization

The organization of IVV mostly concerns three broad domains:

– The first domain involves the definition and implementation of the classic activities encountered in the operational management of any project. During the early stages of the IVV lifecycle (preparation), this includes the identification of the tasks to carry out for IVV, the identification of dependencies, the creation of a schedule and a workload plan, and the definition of the indicators to use to monitor the progress of IVV and its results. This allows us, where necessary, to trigger control actions and to identify risks and the associated means of risk reduction. During later phases (creation), this domain includes monitoring the progress of IVV, monitoring risks, choosing and implementing corrective actions, etc. These are classic activities found in all projects, and as such we shall not dwell on them any further here.

– The second domain consists of identifying roles involved in the execution of these tasks, and includes the identification of physical individuals who will fulfill these roles, recruitment and training. The roles usually found in IVV are as follows:

- integrator, verifier, validator: in charge of executing IVV activities (for example creating and running tests, analyzing test results, recording anomalies, etc.);

- process manager: responsible for defining and establishing IVV activities;

- method support: a specialist in IVV techniques providing support to the process manager and who is responsible for training IVV personnel;

- technical support: a specialist in the use of certain tools, in charge of the specification, creation and operational maintenance of the test infrastructure (platform, tools, etc.). Also responsible for training IVV personnel;

- system support: a specialist in the system which is the object of IVV, both in terms of its definition and its architecture. Responsible for providing support during the definition of IVV tasks (tests, among other things) and during the first analysis of any recorded anomalies.

Aside from the role of process manager, which falls to Elisabeth, the other roles need to be filled. One of Elisabeth’s first tasks is thus to recruit staff in order to begin carrying out IVV tasks as soon as possible.

– The third domain concerns identification and response to requirements in terms of logistics, such as physical access to the system: access control, fulfillment of vital requirements during the IVV phase (such as staff catering and accommodation), etc.

Elisabeth is aware that she needs to look at these three domains early on, and begins working on them during the IVV preparation phase. As with infrastructure definition, organization must be handled in accordance with other viewpoints of IVV and with the major orientations chosen during definition of the strategy and the infrastructure.

8.7. Choosing techniques

The field of techniques concerns all methods that may be used to carry out a process: where processes define objectives to attain (responding to the question “what?”), techniques relate to the actions needed to carry out these processes (responding to the question “how?”). Techniques should be seen as methodological tools available to the person responsible for a process in order to attain their objectives. In the context of IVV for complex systems, a number of basic techniques may be used throughout the IVV lifecycle. Examples include:

– testing: for verification and validation;

– integration patterns: to define the stages of the integration phase;

– reviews: for analyzing input data;

– test objectives: to define targets;

– check-lists: to make different IVV actions reproducible;

– load estimation techniques: to estimate the workload (or length) involved in the IVV process; and

– traceability: to demonstrate the attainment of test objectives.

These techniques are considered V&V tools. In creating her IVV plan, Elisabeth chose to use three techniques that appear to be essential for the project: reviews, testing (and test objectives) and traceability matrices.

8.7.1. Review

As we have seen, review is Elisabeth’s preferred technique for analyzing IVV input documents.

Reviewing is a classic V&V technique, and as such may be used throughout the engineering process; it is described by a specific standard [IEE 08]. In its usual form, a review is a process during which an item (a product or a process) is presented to one or more project stakeholders for examination, comments or approval. A review may take various forms depending on the objective and the stakeholders involved. We thus find “audit” type reviews, carried out by third parties outside the engineering process; “walkthrough” reviews, carried out by colleagues working in the same team to share information on the item being analyzed; and technical reviews, involving colleagues, experts and managers, which aim to evaluate the item in relation to reference elements (specifications, standards, etc.).

Inspection is a form of review taken to extremes in terms of organization and formalization. As with a technical review, the objective is to evaluate one or more items in relation to a point of reference, but the approach is particularly rigorous, and inspection may be seen as an extremely powerful IVV tool.

Elisabeth plans to use an inspection technique to carry out the two tasks she identified in her IVV strategy: the analysis of input documents and the implementation of certain tests (“static” tests that do not require system execution). In concrete terms, Elisabeth will ask several members of her team to carry out a detailed examination of all input documents for IVV in order to ensure that they may be used for this purpose in practice.

8.7.2. Testing

Testing is the main V&V technique traditionally used for IVV, for the simple reason that it is the most “natural” approach for use once the engineering process has entered the phase of creation, i.e. once the system takes solid form as an assembly of products (materials, software, etc.).

By definition, a test is “a process which consists of operating a system or a component in a controlled manner in order to evaluate its characteristics” (definition adapted from [IEE 90]). Two points are therefore particularly important: the notion of characteristics, which refers back to the precise identification of objectives (what needs to be tested? with what objective?), and the notion of procedure (and consequently of control and reproducibility of the activity).

Test objectives occupy a central place in IVV strategy. It is from these objectives that Elisabeth selected the tangible elements to attain in IVV, a selection guided by the criticality level of the component in question: the more critical the element, the stronger the test objectives.

Historically, IVV objectives come from the domain of software testing. We may distinguish two types:

– “Black box” test objectives are defined by reference to elements outside of the component being tested: input and output data. These objectives are independent of the architecture and the concrete form of the component, rendering them reusable. The most widely used objectives of this type are requirement coverage, function coverage, use case coverage, interface coverage, pairwise coverage (sketched after this list) and equivalence classes.

– “White box” test objectives include internal elements of the component in their definition: coverage of internal control structures and of data. As these objectives depend on design choices, they are rarely reusable. The most widely used objectives of this type are coverage of instructions, coverage of data, and coverage of conditions and decisions. These objectives are defined formally in relation to representations formalized as directed graphs, and correspond to path-coverage objectives over these graphs.

Besides providing a formal definition of IVV targets, the interest of these objectives lies in the fact that they may be partially ordered: an objective is stronger than another if the set of tests used to attain the first also allows us to attain the second. This order is used in defining an IVV strategy, as it allows us to choose a test objective based on the criticality of the component in question: the more critical the element, the stronger the test objectives. Note, in passing, that this mechanism has long been used for certifying on-board software in avionics systems [RTC 92].
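As an illustration of one widely used black-box objective, the sketch below checks 100% pairwise coverage of interfaces (the parameter names and equivalence classes are assumptions, not taken from the book):

```python
from itertools import combinations, product

# Equivalence classes for three illustrative interface parameters.
interface = {
    "water_level": ["low", "nominal", "high"],
    "supply_state": ["running", "stopped"],
    "comms": ["up", "down"],
}

def pairwise_covered(tests, params):
    """True if every pair of values of every two parameters appears in at
    least one test (the "100% pairwise coverage of interfaces" objective)."""
    for p1, p2 in combinations(params, 2):
        required = set(product(params[p1], params[p2]))
        seen = {(test[p1], test[p2]) for test in tests}
        if required - seen:
            return False
    return True

# The exhaustive cartesian product trivially attains the objective; in
# practice a much smaller suite is sought, which is the point of pairwise.
exhaustive = [dict(zip(interface, values))
              for values in product(*interface.values())]
assert pairwise_covered(exhaustive, interface)
```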

Although historically applied for software testing, coverage objectives may be used for other elements such as specification documents, user manuals, usage procedures, installation procedures, etc. For example, looking at validation objectives based on the coverage of an installation manual, we may “transpose” the objectives presented in Table 8.6:

Table 8.6. Coverage objectives

Software coverage objectives    Installation manual coverage objectives
100% of functions               100% of installation procedures
100% of instructions            100% of actions to carry out
100% of decisions               100% of alternatives between different actions
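
Such a transposed objective can be measured in the same mechanical way as its software counterpart. Below is a minimal sketch with invented data, in which each action of a hypothetical installation manual is ticked off as the test campaign exercises it:

# Hypothetical actions extracted from an installation manual.
manual_actions = {"unpack", "mount_rack", "connect_power", "connect_water",
                  "calibrate_sensors", "run_self_test"}

# Actions actually exercised so far by the test campaign (invented logs).
exercised = {"unpack", "mount_rack", "connect_power", "run_self_test"}

coverage = 100 * len(manual_actions & exercised) / len(manual_actions)
missing = sorted(manual_actions - exercised)
print(f"Action coverage: {coverage:.0f}% (not yet exercised: {missing})")
# -> Action coverage: 67% (not yet exercised: ['calibrate_sensors', 'connect_water'])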

8.7.3. Traceability

A verification tool par excellence, traceability allows us to follow associations between elements. These elements may belong to the same or to different reference levels (requirements, functions, components, test requirements, etc.). In a design process, this allows us, for example, to trace the links between information produced by breakdowns or aggregations and the properties allocated to them. In the context of IVV, this V&V tool mainly serves two objectives:

– demonstrate the attainment of test coverage objectives a priori: during construction of the test repository, the traceability mechanism links input elements (requirements) to the tests themselves, and thus shows whether or not the tests, as specified, cover all requirements on the list;

– carry out an impact analysis on the tests affected by the modification of a requirement: this analysis identifies the tests that are potentially unaffected (reusable as non-regression tests) and those that must be modified in order to validate the change (both uses are illustrated in the sketch after this list).
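
Both uses reduce to simple queries on the traceability matrix. The sketch below, with invented requirement and test identifiers, shows an a priori coverage check and an impact analysis:

# Hypothetical traceability matrix: requirement -> tests that exercise it.
TRACE = {
    "REQ-01": {"T-01", "T-02"},
    "REQ-02": {"T-02"},
    "REQ-03": set(),  # not yet covered by any test
}

def uncovered_requirements(trace: dict) -> list:
    """A priori coverage check: requirements with no associated test."""
    return sorted(req for req, tests in trace.items() if not tests)

def impacted_tests(trace: dict, modified_req: str) -> tuple:
    """Impact analysis: tests to revisit when a requirement changes;
    the remaining tests are candidate non-regression tests."""
    to_modify = trace.get(modified_req, set())
    all_tests = set().union(*trace.values())
    return sorted(to_modify), sorted(all_tests - to_modify)

print(uncovered_requirements(TRACE))   # ['REQ-03']
print(impacted_tests(TRACE, "REQ-02")) # (['T-02'], ['T-01'])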

8.8. Things to remember: integration, verification and validation

At this stage (only the “preparation” IVV stage has been carried out), Elisabeth has a clear vision of the activities she will need to carry out in the course of the following phases. She has taken a global approach that has allowed her to prepare future IVV activities, justifying her choices and ensuring all important points are covered.

Of course, the bulk of the work remains to be done: the team must create tests, produce tools, produce test data, carry out testing and analyze the results, monitor the correction of different bugs that are sure to be found, etc. All of these activities demand that Elisabeth bring in reinforcements for her IVV team. However, Elisabeth is unflustered: based on her preparatory work, she can tackle the rest of the job with maximum confidence in the success of the IVV process.

The main points to remember from Elisabeth’s approach are as follows:

– activities linked to engineering;

– anticipation;

– a multi-faceted approach;

– strategy – a key point;

– the IVV manager – a high-pressure role.

8.8.1. Activities linked to engineering

As we have seen throughout this chapter, IVV makes use of the various results of the early phases of systems engineering as a point of reference. Among other things, these results allow us to judge whether or not observed system behavior is correct. They also play a role in defining an important part of the IVV strategy: the identification of the most critical points in order to concentrate efforts on them, and the definition of “measurable” objectives to obtain in terms of coverage.

8.8.2. Anticipation

Although IVV may be seen as a set of activities that take place during the later phases of the system lifecycle – i.e. when the component elements become available – it must be launched considerably earlier in the project, at the same time as the initial phases of engineering. This anticipation allows us to reduce the risk of failure in the project as a whole, as:

– it contributes to the consolidation of system definition elements: as IVV is the first “consumer” of definition documents, it has a role to play in ensuring their level of quality, completeness and coherence;

– it allows modification of system definition documents (for the main system or an enabling system) from the moment of their creation, incorporating specific elements devoted to IVV (observation capacities, simulation capacities, etc.).

8.8.3. A multi-faceted approach

IVV does not consist simply of carrying out tests. As we have seen, a large number of points of view must be taken into account and mastered throughout the course of the project, including:

– the process dimension (definition, planning, monitoring activities to execute, etc.);

– the dimension of methods (mastery of IVV techniques, or “how” the processes are carried out);

– the equipment dimension (platforms, tools, data).

The IVV manager must possess a number of skills, both technical and organizational. He or she must constantly juggle local (tactical) and global (strategic) views of the system.

8.8.4. Strategy: a key point

Strategy is the central element of IVV, as it is in the strategy that the distribution of IVV efforts across the system is decided.

Schematically, strategy leads us to identify the most critical “parts” of the system and to associate the strongest IVV objectives with these parts. By “parts” of the system, we refer to material elements that contribute to the system alongside immaterial elements (characteristics) that describe the system in terms of objectives to attain and constraints to respect. Moreover, the definition of the strategy takes account of the allocation of characteristics to components.

This association may be modified: based on the time and resources available, objectives may be subject to overall reductions or increases. This mechanism allows us to adapt IVV efforts to the characteristics of the system, but also, and especially, to developments in the course of the project. If the IVV workload must be reduced by 20%, the strategy offers a means of justifying the choice of where cuts will be made.

8.8.5. The IVV manager: a high-pressure role

Finally, we cannot conclude this chapter without highlighting a specificity of IVV linked to its position at the heart of systems-engineering disciplines. IVV is in constant interaction, and often confrontation, with a number of stakeholders in the engineering process:

– with the client: IVV may identify problems, the correction of which leads to time and cost overruns, which are difficult for the client to accept;

– with the user: IVV may highlight the differences between what the user wants and what the product really does, potentially leading to user dissatisfaction;

– with suppliers: if IVV identifies a problem in the final product, the supplier may claim that there was an error in the specifications they received, rejecting any responsibility for the problem;

– with systems architects and engineers: conversely, they may claim that the origin of a problem lies with the supplier, considering the specifications they provided to the supplier to be valid.

We see that, for many, IVV points out inconvenient truths, creating problems. In practice, those involved in IVV phases, and particularly the director of IVV operations, should be well prepared to manage potential conflicts. This social dimension of the IVV role should not be neglected, and is often an important factor in the success (or otherwise) of a project.

8.9. Bibliography

[IEE 90] IEEE, Standard Glossary of Software Engineering Terminology, IEEE 610.12-1990, IEEE, 1990.

[IEE 04] IEEE, Standard for Software Verification and Validation, IEEE 1012-2004, IEEE, 2004.

[IEE 08] IEEE, Standard for Software Reviews and Audits, IEEE 1028-2008, IEEE, 2008.

[ISO 01] ISO, Software engineering – Product quality – Part 1: Quality model, ISO/IEC 9126-1:2001, ISO, 2001.

[ISO 03] ISO, Systems Engineering – A guide for the application of ISO/IEC 15288, ISO/IEC TR 19760:2003, ISO, 2003.

[ISO 08a] ISO, Systems Engineering – System Life Cycle processes, ISO/IEC 15288:2008, ISO, 2008.

[ISO 08b] ISO, Information Technology – Software life cycle processes, ISO/IEC 12207:2008, ISO, 2008.

[POL 01] POL M., TEUNISSEN R., VAN VEENENDAAL E., Software Testing: A Guide to the TMap Approach, Addison-Wesley, London, 2001.

[POT 95] POTTER C., CORY T., CAST Tools: An Evaluation and Comparison, JOWITT T. (ed.), Bloor Research Group, London, 1995.

[RAP 85] RAPPS S., WEYUKER E., “Selecting Software Test Data Using Data Flow Information”, IEEE Transactions on Software Engineering, vol. SE-11, no. 4, April 1985.

[RTC 92] RTCA, DO-178B/ED-12B – Software Considerations in Airborne Systems and Equipment Certification, RTCA, 1992.

[VIL 02] VILKOMIR S., BOWEN J., “Reinforced condition/decision coverage (RC/DC): a new criterion for software testing”, in ZB 2002: Formal Specification and Development in Z and B, LNCS vol. 2272, Springer, 2002.

1 Chapter written by Daniel PRUN and Jean-Luc WIPPLER.

1 Note that similar errors have been made in the past, including the confusion of measurement units between the onboard and ground systems of the Mars Climate Orbiter, which led to the loss of the probe in 1999.

2 The use of the term “ensure” here is an inadvisable shortcut. No certainty can be acquired at the end of IVV activities; at best, we may gain an increased level of trust in the system.

3 Qualification: this term may be used to designate the process by which we ensure that a system meets its requirements and is ready for use in its target environment (either operationally or as part of a super-system) [ISO 08b]. Qualification occurs upon the successful completion of all IVV activities.

4 The “equivalence classes” test objective is a “black box” type objective that covers classes of input elements. Informally, a class is a set of input data values for which the system presents similar behaviors.

5 The “pairwise coverage” objective is a “black box” type objective that consists of covering all possible combinations of input elements for a system, considered two by two.

6 Numerous tools for testing software using source code instrumentation techniques permit the production and use of execution graphs. This software technique may be extended to broader systems (including materials, or even humans).
