Chapter 9

Static Testing the Physical Design

The logical design phase translates the business requirements into system specifications that can be used by programmers during physical design and coding. The physical design phase determines how the requirements can be automated. During this phase a high-level design is created in which the basic procedural components and their interrelationships and major data representations are defined.

The physical design phase develops the architecture, or structural aspects, of the system. Logical design testing is functional, whereas physical design testing is structural. This phase verifies that the design is structurally sound and accomplishes the intent of the documented requirements. It assumes that the requirements and logical design are correct and concentrates on the integrity of the design itself.

Testing the Physical Design with Technical Reviews

The physical design phase is verified with static techniques, that is, without executing the application. As with the requirements and logical design phases, the static techniques check adherence to specification conventions and completeness, with a focus on the architectural design. The basis for physical design verification is the design representation schemes used to specify the design. Example design representation schemes include structure charts, Warnier–Orr diagrams, Jackson diagrams, data navigation diagrams, and relational database diagrams, which have been mapped from the logical design phase.

Design representation schemes provide mechanisms for specifying algorithms and their inputs and outputs to software modules. Various inconsistencies are possible in specifying the control flow of data objects through the modules. For example, a module may require a data item that another module is supposed to create but that is never provided, or is provided incorrectly. Static analysis can be applied to detect these types of control flow errors.
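The control-flow check described above can be sketched as a simple static analysis over the design representation. This is a hypothetical illustration, not a real tool; the module names and data items below are invented, and each module is reduced to the data items it produces and consumes.

```python
# Hypothetical design representation: each module maps to the data
# items it produces and consumes. All names are invented examples.
design = {
    "validate_order": {"produces": {"validated_order"}, "consumes": {"raw_order"}},
    "price_order":    {"produces": {"priced_order"},    "consumes": {"validated_order"}},
    "bill_customer":  {"produces": {"invoice"},         "consumes": {"priced_order", "customer_record"}},
}

# Data items supplied from outside the design (external inputs/files).
external_inputs = {"raw_order", "customer_record"}

def find_unresolved_inputs(design, external_inputs):
    """Return (module, item) pairs where a consumed data item is never
    produced by any module or supplied externally."""
    produced = external_inputs | {
        item for module in design.values() for item in module["produces"]
    }
    return [
        (name, item)
        for name, module in design.items()
        for item in module["consumes"] - produced
    ]

print(find_unresolved_inputs(design, external_inputs))  # [] -> no control flow errors
```

If a module's needed input is dropped from the design, the check reports it: removing `customer_record` from the external inputs would flag `("bill_customer", "customer_record")` as an unresolved interface.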

Other errors made during the physical design can also be detected. Design specifications are created by iteratively supplying detail. Although a hierarchical specification structure is an excellent vehicle for expressing the design, it does allow inconsistencies between successive levels of detail. For example, coupling measures the degree of interdependence between modules. When there is little interaction between two modules, the modules are described as loosely coupled. When there is a great deal of interaction, they are tightly coupled. Loose coupling is considered a good design practice.

Examples of coupling include content, common, control, stamp, and data coupling. Content coupling occurs when one module refers to or changes the internals of another module. Data coupling occurs when two modules communicate via a variable or array (table) that is passed directly as a parameter between the two modules. Static analysis techniques can determine the presence or absence of coupling.
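The difference between loose and tight coupling can be made concrete with a small sketch. The functions and variables below are invented for illustration: the first pair of modules communicates only through parameters (data coupling), while the second shares a global data area (common coupling), hiding a dependency that static analysis of the design should surface.

```python
# Data coupling (loose, preferred): modules communicate via a value
# passed directly as a parameter. The dependency is explicit.
def compute_tax(amount, rate):
    return amount * rate

# Common coupling (tighter): modules share a global data area, so any
# module can change state that others silently depend on.
TAX_RATE = 0.07  # shared global data area

def compute_tax_common(amount):
    return amount * TAX_RATE  # hidden dependency on the global

# Both calls compute the same result, but only the first makes its
# inputs visible at the interface.
print(compute_tax(100.0, 0.07))
print(compute_tax_common(100.0))
```

A static check over the design can flag the second form simply by noting that `compute_tax_common` reads a name defined outside its own parameter list.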

Static analysis of the design representations detects syntactic errors and semantic errors. Semantic errors involve information or data decomposition, functional decomposition, and control flow. Each defect uncovered during the physical design review should be documented, categorized, recorded, presented to the design team for correction, and referenced to the specific document in which the defect was noted. Table 9.1 shows a sample physical design phase defect recording form (see Appendix F3, “Physical Design Phase Defect Checklist,” for more details).

Creating Integration Test Cases

Integration testing is designed to test the structure and the architecture of the software and determine whether all software components interface properly. It does not verify that the system is functionally correct, only that it performs as designed.

Integration testing is the process of identifying errors introduced by combining individual, unit-tested program modules. It should not begin until all units are known to perform according to the unit specifications. Integration testing can start with testing several logical units or can incorporate all units in a single integration test.

Because the primary concern in integration testing is that the units interface properly, the objective of this test is to ensure that they integrate, that parameters are passed, and the file processing is correct. Integration testing techniques include top-down, bottom-up, sandwich testing, and thread testing (see Appendix G, “Software Testing Techniques,” for more details).
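Of the techniques listed above, top-down integration can be sketched briefly. In this hypothetical example (all module names are invented), the top-level unit is exercised first, with a not-yet-integrated lower-level unit replaced by a stub that returns a fixed value, so the interface and parameter passing can be verified before the real unit is plugged in.

```python
def price_order_stub(order):
    """Stub standing in for the not-yet-integrated pricing unit.
    Returns a fixed value so the caller's interface can be exercised."""
    return 100.0

def process_order(order, price_fn):
    """Top-level unit under test; its pricing dependency is injected,
    so the stub can later be replaced by the real unit unchanged."""
    total = price_fn(order)
    return {"order": order, "total": total, "status": "processed"}

# Integrate the top module against the stub first.
result = process_order({"id": 42}, price_order_stub)
print(result["total"])   # 100.0
print(result["status"])  # processed
```

Bottom-up integration reverses the direction, testing the lowest-level units first under small driver programs; sandwich testing combines both approaches.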

Table 9.1   Physical Design Phase Defect Recording


Methodology for Integration Testing

The following describes a methodology for creating integration test cases.

Step 1: Identify Unit Interfaces

The developer of each program unit identifies and documents the unit’s interfaces for the following unit operations:

  ■ External inquiry (responding to queries from terminals for information)

  ■ External input (managing transaction data entered for processing)

  ■ External filing (obtaining, updating, or creating transactions on computer files)

  ■ Internal filing (passing or receiving information from other logical processing units)

  ■ External display (sending messages to terminals)

  ■ External output (providing the results of processing to some output device or unit)

Step 2: Reconcile Interfaces for Completeness

The information needed for the integration test template is collected for all program units in the software being tested. Whenever one unit interfaces with another, those interfaces are reconciled. For example, if program unit A transmits data to program unit B, program unit B should indicate that it has received that input from program unit A. Interfaces not reconciled are examined before integration tests are executed.
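The reconciliation in this step can be sketched as a comparison of the two sides of every documented interface. This is a hypothetical illustration (the unit names and data items are invented): each transmission claimed by a sending unit should have a matching receipt claimed by the receiving unit, and any one-sided entry is flagged for examination before integration tests are executed.

```python
# Hypothetical interface documentation collected in Step 1.
# Each entry: (sending unit, receiving unit, data passed).
sends    = {("A", "B", "order_total"), ("A", "C", "customer_id")}
receives = {("A", "B", "order_total")}

def reconcile(sends, receives):
    """Return interfaces documented on only one side."""
    return {
        "sent_but_not_received": sends - receives,
        "received_but_not_sent": receives - sends,
    }

mismatches = reconcile(sends, receives)
print(mismatches["sent_but_not_received"])  # {('A', 'C', 'customer_id')}
```

Here unit C never documented receiving `customer_id` from unit A, so that interface would be examined and resolved before the integration tests run.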

Step 3: Create Integration Test Conditions

One or more test conditions are prepared for integrating each program unit. After the condition is created, the number of the test condition is documented in the test template.
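A minimal sketch of the test template described in this step follows. The field names and condition numbers are invented for illustration: each reconciled interface gets a row, test condition numbers are recorded against it as they are created, and rows left without a condition show where coverage is still missing (which feeds directly into the completeness evaluation in Step 4).

```python
# Hypothetical integration test template: one row per interface.
template = [
    {"interface": ("A", "B", "order_total"), "condition_ids": []},
    {"interface": ("B", "C", "invoice"),     "condition_ids": []},
]

def record_condition(template, interface, condition_id):
    """Document a test condition number against its interface."""
    for row in template:
        if row["interface"] == interface:
            row["condition_ids"].append(condition_id)
            return
    raise KeyError(f"interface {interface} not in template")

def uncovered(template):
    """Interfaces with no test condition recorded yet."""
    return [row["interface"] for row in template if not row["condition_ids"]]

record_condition(template, ("A", "B", "order_total"), "IT-001")
print(uncovered(template))  # [('B', 'C', 'invoice')]
```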

Step 4: Evaluate the Completeness of Integration Test Conditions

The following list of questions will help guide evaluation of the completeness of integration test conditions recorded on the integration testing template. This list can also help determine whether test conditions created for the integration process are complete.

  ■ Is an integration test developed for each of the following external inquiries?

    –   Record test

    –   File test

    –   Search test

    –   Match/merge test

    –   Attributes test

    –   Stress test

    –   Control test

  ■ Are all interfaces between modules validated so that the output of one is recorded as input to another?

  ■ If file test transactions are developed, do the modules interface correctly with all the indicated files?

  ■ Is the processing of each unit validated before integration testing?

  ■ Do all unit developers agree that the integration test conditions are adequate to test each unit’s interfaces?

  ■ Are all software units included in integration testing?

  ■ Are all files used by the software being tested included in integration testing?

  ■ Are all business transactions associated with the software being tested included in integration testing?

  ■ Are all terminal functions incorporated in the software being tested included in integration testing?

The documentation of integration tests is started in the Test Specifications section (see Appendix E2, “System/Acceptance Test Plan”). Also in this section, the functional decomposition continues to be refined, but the system-level test cases should be completed during this phase.

Test items in the Introduction section are completed during this phase. Items in the Test Approach and Strategy, Test Execution Setup, Test Procedures, Test Tools, Personnel Requirements, and Test Schedule sections continue to be refined.
