Chapter 18

Test Execution/Evaluation (Do/Check)

You will recall that in the spiral development environment, software testing is described as a continuous improvement process that must be integrated into a rapid application development methodology. Deming’s continuous improvement process using the PDCA model was applied to the software testing process. We are now in the Do/Check part of the spiral model (see Figure 18.1).

Figure 18.2 outlines the steps and tasks associated with the Do/Check part of spiral testing. Each step and task is described, along with valuable tips and techniques.

Step 1: Setup and Testing

Task 1: Regression Test the Manual/Automated Spiral Fixes

The purpose of this task is to rerun the tests that discovered defects in the previous spiral. The technique used is regression testing, which detects spurious errors caused by software modifications or corrections. (See Appendix G27, “Regression Testing,” for more details.)

A set of test cases must be maintained and made available throughout the entire life of the software. The test cases should be complete enough that all of the software’s functional capabilities are thoroughly tested. The question then arises of how to locate the test cases that exercise the defects discovered during the previous test spiral. An excellent mechanism is the retest matrix.


Figure 18.1   Spiral testing and continuous improvement.


Figure 18.2   Test execution/evaluation (steps/tasks).

As described earlier, a retest matrix relates test cases to functions (or program units). A check entry in the matrix indicates that the test case is to be retested when the function (or program unit) has been modified due to enhancements or corrections. No entry means that the test case does not need to be retested. The retest matrix can be built before the first testing spiral, but it needs to be maintained during subsequent spirals. As functions (or program units) are modified during a development spiral, existing test cases need to be checked, or new ones created and checked, in the retest matrix in preparation for the next test spiral. Over time, with subsequent spirals, some functions (or program units) may become stable with no recent modifications, and consideration should be given to selectively removing their check entries between testing spirals.
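To make the mechanics concrete, the following is a minimal sketch of a retest matrix in Python; the function and test case names are hypothetical. A check entry is modeled as set membership, so selecting the tests to rerun for the next spiral is a simple lookup over the functions modified during the current spiral.

```python
# A minimal retest matrix: each function (or program unit) maps to the
# set of test cases that must be rerun when that function is modified.
# Function and test case names are illustrative only.
retest_matrix = {
    "calculate_invoice": {"TC-101", "TC-102", "TC-110"},
    "print_invoice":     {"TC-103"},
    "customer_lookup":   {"TC-104", "TC-105"},
}

def tests_to_rerun(modified_functions):
    """Collect every checked test case for the functions changed this spiral."""
    selected = set()
    for function in modified_functions:
        selected |= retest_matrix.get(function, set())
    return sorted(selected)

def retire_stable_entries(stable_functions):
    """Between spirals, selectively remove check entries for stable functions."""
    for function in stable_functions:
        retest_matrix.pop(function, None)

# Example: two functions were corrected during this development spiral.
print(tests_to_rerun(["calculate_invoice", "customer_lookup"]))
# -> ['TC-101', 'TC-102', 'TC-104', 'TC-105', 'TC-110']
```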

If a regression test passes, the status of the defect report should be changed to “closed.”

Task 2: Execute the Manual/Automated New Spiral Tests

The purpose of this task is to execute the new tests that were created at the end of the previous testing spiral. In the previous spiral, the testing team updated the test plan, the GUI-based function test matrix, the test scripts, the GUI and system fragment tests, and the acceptance tests in preparation for the current testing spiral. During this task, those tests are executed.

Task 3: Document the Spiral Test Defects

During spiral test execution, the results of the testing must be reported in the defect-tracking database. These defects are typically related to individual tests that have been conducted; however, variations on the formal test cases often uncover other defects. The objective of this task is to produce a complete record of the defects. If the execution step has been recorded properly, the defects have already been entered in the defect-tracking database, and the objective of this step becomes collecting and consolidating the defect information.

Tools can be used to consolidate and record defects depending on the test execution methods. If the defects are recorded on paper, the consolidation involves collecting and organizing the papers. If the defects are recorded electronically, search features can easily locate duplicate defects. A sample defect report is given in Appendix E27, “Defect Report,” which can be used to report the details of a specific defect.
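As an illustration of electronic consolidation, the following sketch groups defect reports that share a module and symptom, treating later reports as duplicates of the earliest; the field names are invented for the example, not those of any particular defect-tracking tool.

```python
# Consolidate defect reports: group electronically recorded defects by a
# simple duplicate key (module + symptom) and keep the earliest report.
# Record fields are illustrative assumptions.
from collections import defaultdict

defects = [
    {"id": "D-001", "module": "login",  "symptom": "crash on empty password", "status": "open"},
    {"id": "D-002", "module": "report", "symptom": "totals off by one",       "status": "open"},
    {"id": "D-003", "module": "login",  "symptom": "crash on empty password", "status": "open"},
]

groups = defaultdict(list)
for defect in defects:
    groups[(defect["module"], defect["symptom"])].append(defect)

consolidated = []
for key, reports in groups.items():
    # Keep the earliest report as the primary; record the rest as duplicates.
    primary, *duplicates = sorted(reports, key=lambda d: d["id"])
    primary["duplicates"] = [d["id"] for d in duplicates]
    consolidated.append(primary)

for defect in consolidated:
    print(defect["id"], defect["duplicates"])
# -> D-001 ['D-003']
#    D-002 []
```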

Step 2: Evaluation

Task 1: Analyze the Metrics

Metrics help us make decisions more effectively and support the development process. The objective of this task is to apply the principles of metrics to control the testing process.

In a previous task, the metrics and metric points were defined for each spiral to be measured. During the present task, the metrics that were measured are analyzed. This involves quantifying the metrics and putting them into a graphical format.

The following is the key information a test manager needs to know at the end of a spiral (a sketch of how these measures might be computed follows the list):

  1. Test case execution status—How many test cases were executed, how many were not executed, and how many discovered defects? This provides an indication of the tester’s productivity. If the test cases are not being executed in a timely manner, more personnel may need to be assigned to the project.

  2. Defect gap analysis—What is the gap between the number of defects that have been uncovered and the number that have been corrected? This provides an indication of development’s ability to correct defects in a timely manner. If there is a relatively large gap, perhaps more developers need to be assigned to the project.

  3. Defect severity status—The distribution of defect severity (e.g., critical, major, and minor) provides an indication of the quality of the system. If a large percentage of the defects falls in the critical category, a considerable number of design and architecture issues probably exist.

  4. Test burnout tracking—Shows the cumulative and periodic numbers of defects being discovered. The cumulative number (that is, the running total of defects) and the defects per time period help predict when fewer and fewer defects will be discovered. That point is indicated when the cumulative curve “bends” and the defects per time period approach zero. If the cumulative curve shows no sign of bending, the implication is that defect discovery is still very robust and that many more defects remain to be discovered in subsequent spirals.
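The following is a minimal sketch of how these four measures might be derived; the test-case and defect record layouts are illustrative assumptions, not a prescribed schema.

```python
# A sketch of the four spiral-end metrics, computed from simple test-case
# and defect logs. All records below are invented for illustration.
from collections import Counter

test_cases = [
    {"id": "TC-101", "executed": True,  "found_defect": True},
    {"id": "TC-102", "executed": True,  "found_defect": False},
    {"id": "TC-103", "executed": False, "found_defect": False},
]
defects = [
    {"id": "D-001", "severity": "critical", "corrected": True,  "period": 1},
    {"id": "D-002", "severity": "major",    "corrected": False, "period": 1},
    {"id": "D-003", "severity": "minor",    "corrected": False, "period": 2},
]

# 1. Test case execution status: executed vs. not executed vs. found defects.
executed = sum(tc["executed"] for tc in test_cases)
found = sum(tc["found_defect"] for tc in test_cases)
print(f"executed {executed} of {len(test_cases)}, {found} discovered defects")

# 2. Defect gap analysis: uncovered defects minus corrected defects.
gap = len(defects) - sum(d["corrected"] for d in defects)
print(f"defect gap: {gap}")

# 3. Defect severity status: distribution across severity categories.
print("severity distribution:", Counter(d["severity"] for d in defects))

# 4. Test burnout tracking: defects per period and the running total.
per_period = Counter(d["period"] for d in defects)
cumulative = 0
for period in sorted(per_period):
    cumulative += per_period[period]
    print(f"period {period}: {per_period[period]} new, {cumulative} cumulative")
```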

Graphical examples of the foregoing metrics can be seen in Chapter 19, “Prepare for the Next Spiral (or Agile Iteration).”

Step 3: Publish Interim Report

See the following appendixes:

■ Appendix E25, “Project Status Report,” which can be used to report the status of the testing project for all key process areas.

■ Appendix E26, “Test Defect Details Report,” which can be used to report the detailed defect status of the testing project for all key process areas.

■ Appendix E28, “Test Execution Tracking Manager,” an Excel spreadsheet that provides a comprehensive, test-cycle view of the number of test cases that passed or failed, the number of defects discovered by application area, the status of those defects, the percentage completed, and the defect severities by defect type. The template is located on the CD at the back of the book.
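As an illustration of the kind of rollup the Test Execution Tracking Manager presents, the following minimal sketch (using invented result records and application area names) tallies passes, failures, and percentage complete by application area.

```python
# Tally test execution results by application area, the kind of summary an
# interim report presents. Result records are illustrative only.
from collections import defaultdict

results = [
    {"area": "Billing",  "status": "pass"},
    {"area": "Billing",  "status": "fail"},
    {"area": "Billing",  "status": "not run"},
    {"area": "Shipping", "status": "pass"},
]

summary = defaultdict(lambda: {"pass": 0, "fail": 0, "not run": 0})
for result in results:
    summary[result["area"]][result["status"]] += 1

for area, counts in summary.items():
    total = sum(counts.values())
    done = counts["pass"] + counts["fail"]
    print(f"{area}: {counts['pass']} passed, {counts['fail']} failed, "
          f"{100 * done // total}% complete")
# -> Billing: 1 passed, 1 failed, 66% complete
#    Shipping: 1 passed, 0 failed, 100% complete
```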

Task 1: Refine the Test Schedule

In a previous task, a test schedule was produced that includes the testing steps (and perhaps tasks), target start dates and end dates, and responsibilities. During the course of development, the testing schedule needs to be continually monitored. The objective of the current task is to update the test schedule to reflect the latest status. It is the responsibility of the test manager to:

■ Compare the actual progress to the planned progress.

■ Evaluate the results to determine the testing status.

■ Take appropriate action based on the evaluation.

If the testing progress is behind schedule, the test manager needs to determine the factors causing the slip. A typical cause is an underestimate of the test effort. Another factor could be that an inordinate number of defects is being discovered, forcing much of the testing effort to be devoted to retesting previously corrected defects. In either case, more testers may be needed, or overtime may be required to compensate for the slippage.
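As a small illustration of this monitoring loop, the sketch below compares actual end dates to planned end dates and flags any slip; the step names, dates, and layout are hypothetical.

```python
# Compare planned versus actual test progress and flag schedule slippage.
# The steps and dates below are invented for illustration.
from datetime import date

steps = [
    {"step": "Regression tests", "planned_end": date(2024, 3, 1), "actual_end": date(2024, 3, 1)},
    {"step": "New spiral tests", "planned_end": date(2024, 3, 8), "actual_end": date(2024, 3, 12)},
]

for step in steps:
    slip = (step["actual_end"] - step["planned_end"]).days
    if slip > 0:
        print(f"{step['step']}: {slip} day(s) behind schedule -- determine the cause")
    else:
        print(f"{step['step']}: on schedule")
```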

Task 2: Identify Requirement Changes

In a previous task, the functional requirements were initially analyzed by testing function, which consisted of hierarchical functional decomposition, functional window structure, window standards, and minimum system requirements.

Between spirals, new requirements may be introduced into the development process. They can consist of the following:

■ New GUI interfaces or components

■ New functions

■ Modified functions

■ Eliminated functions

■ New system requirements, for example, hardware

■ Additional system requirements

■ Additional acceptance requirements

Each new requirement needs to be identified, recorded, and analyzed, and the test plan, test design, and test scripts updated accordingly.
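One lightweight way to keep that traceability is a change record that names the affected test artifacts. The following sketch uses invented requirement changes, with categories drawn from the list above.

```python
# A minimal requirement-change record: each change names its category and
# the test artifacts that must be updated before the next spiral.
# The entries below are illustrative assumptions.
requirement_changes = [
    {"id": "RC-01", "category": "modified function",
     "description": "discount rule is now tiered",
     "artifacts_to_update": ["test plan", "test design", "test scripts"]},
    {"id": "RC-02", "category": "new GUI component",
     "description": "added an export dialog",
     "artifacts_to_update": ["test design", "test scripts"]},
]

for change in requirement_changes:
    print(f"{change['id']} ({change['category']}): "
          f"update {', '.join(change['artifacts_to_update'])}")
```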
