Chapter 22

Summarize/Report Test Results

Appendix F23, “Project Completion Checklist,” can be used to confirm that all the key activities have been completed for the project.

Step 1: Perform Data Reduction

Task 1: Ensure All Tests Were Executed/Resolved

During this task, the test team examines the test plans and logs to verify that all tests were executed (see Figure 22.1). The team can usually do this by ensuring that all the tests are recorded on the activity log and then examining the log to confirm that the tests have been completed. Any defects that remain open and unresolved must be prioritized, and deployment workarounds must be established.

Task 2: Consolidate Test Defects by Test Number

During this task, the team examines the recorded test defects. If the tests have been properly performed, it is logical to assume that, unless a test defect was reported, the correct or expected result was received. If a defect was not corrected, it will have been posted to the test defect log. The team can therefore assume that all items are working except those recorded on the log as having no corrective action or an unsatisfactory corrective action. These defects should be consolidated by test number so that they can be posted to the appropriate matrix.
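To make the consolidation step concrete, here is a minimal sketch in Python. The record fields and status values ("none", "unsatisfactory", "corrected") are illustrative assumptions, not prescribed by the methodology; any defect-tracking export with a test number and a corrective-action status would serve.

```python
from collections import defaultdict

# Illustrative defect records exported from the test defect log; the field
# names and status values are assumptions for this sketch.
defects = [
    {"test": "T-101", "desc": "Report total mismatch",    "action": "none"},
    {"test": "T-101", "desc": "Wrong column heading",     "action": "corrected"},
    {"test": "T-205", "desc": "Login rejects valid user", "action": "unsatisfactory"},
]

# Keep only defects with no corrective action or an unsatisfactory one,
# then consolidate them by test number for posting to the matrix.
outstanding = defaultdict(list)
for d in defects:
    if d["action"] in ("none", "unsatisfactory"):
        outstanding[d["test"]].append(d["desc"])

for test, descriptions in sorted(outstanding.items()):
    print(test, "->", descriptions)
```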

Figure 22.1   Summarize/report spiral test results.

Task 3: Post Remaining Defects to a Matrix

During this task, the uncorrected or unsatisfactorily corrected defects are posted to a special function/test matrix. The matrix indicates, by test number, which test exercised which function. Each defect is recorded at the intersection of the test that uncovered it and the function in which it occurred. All uncorrected defects should be posted to the appropriate function/test matrix intersection.
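A hedged sketch of the posting step follows, with invented test numbers, defect IDs, and a test-to-function coverage mapping; the point is only that each outstanding defect lands at a (function, test) intersection.

```python
# Which functions each test exercised (illustrative coverage mapping).
test_covers = {
    "T-101": ["Reports"],
    "T-205": ["Login", "Security"],
}

# Defects left uncorrected after data reduction (illustrative IDs).
uncorrected = [
    {"test": "T-101", "id": "D-17"},
    {"test": "T-205", "id": "D-22"},
]

# Post each defect to the (function, test) intersection of the matrix;
# empty intersections mean no outstanding defect was found there.
matrix = {}
for d in uncorrected:
    for func in test_covers[d["test"]]:
        matrix.setdefault((func, d["test"]), []).append(d["id"])

for (func, test), ids in sorted(matrix.items()):
    print(f"{func:10s} x {test}: {', '.join(ids)}")
```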

Step 2: Prepare Final Test Report

The objective of the final spiral test report is to describe the results of the testing, including not only what works and what does not (from the data reduction above), but also the test team's evaluation of how the application will perform when it is placed into production.

For some projects, informal reports are the practice, whereas for others very formal reports are required. The following is a compromise between the two extremes, providing the essential information without requiring an inordinate amount of preparation (see Appendix E15, “Spiral Testing Summary Report”; also see Appendix E29, “Final Test Summary Report,” which can be used as a final report of the test project with key findings).

Task 1: Prepare the Project Overview

The objective of this task is to document an overview of the project in paragraph format. Pertinent information in the introduction includes the project name, project objectives, the type of system, the target audience, the organizational units that participated in the project, why the system was developed, the subsystems involved, the major functions and subfunctions of the system, and the functions that are out of scope and will not be implemented.

Task 2: Summarize the Test Activities

The objective of this task is to describe the test activities for the project including such information as the following:

  1. Test team—The composition of the test team, for example, test manager, test leader, and testers, and the contribution of each, such as test planning, test design, test development, and test execution.

  2. Test environment—Physical test facility, technology, testing tools, software, hardware, networks, testing libraries, and support software.

  3. Types of tests—Spiral (how many spirals), system testing (types of tests and how many), and acceptance testing (types of tests and how many).

  4. Test schedule (major milestones)—External and internal. External milestones are those events external to the project but that may have a direct impact on it. Internal milestones are the events within the project that can be controlled to some extent.

  5. Test tools—The testing tools used and their purpose, for example, path analysis, regression testing, load testing, and so on.

Task 3: Analyze/Create Metric Graphics

During this task, the defect and test management metrics measured during the project are gathered and analyzed. Defect tracking should be automated for greater productivity. Reports are run, and metric totals and trends are analyzed. This analysis will be instrumental in determining the quality of the system and its acceptability for use, and also will be useful for future testing endeavors. The final test report should include a series of metric graphics. The suggested graphics follow.

Defects by Function

Table 22.1 shows the number and percentage of defects discovered for each function or group. This analysis flags the functions that have the most defects; typically, such functions had poor requirements or design. In the following example, the Reports function accounted for 43 percent of the total defects, which suggests an area that should be examined for maintainability after it is released for production.
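As a rough illustration of how such a distribution might be tallied, the sketch below uses invented per-function counts chosen so that one function carries roughly 43 percent of the total, mirroring the example; the function names are hypothetical.

```python
from collections import Counter

# Invented defect-to-function assignments (78 + 52 + 51 = 181 defects),
# chosen so one function carries about 43 percent of the total.
defect_functions = ["Reports"] * 78 + ["Order Entry"] * 52 + ["Billing"] * 51

counts = Counter(defect_functions)
total = sum(counts.values())
for func, n in counts.most_common():
    print(f"{func:12s} {n:4d}  {100 * n / total:5.1f}%")
```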

Defects by Tester

Table 22.2 shows the number and percentage of defects discovered for each tester during the project. This analysis flags those testers who documented fewer than the expected number of defects. These statistics, however, should be used with care. A tester may have recorded fewer defects because the functional area tested may have relatively fewer defects, for example, tester Baker in Table 22.2. On the other hand, a tester who records a higher percentage of defects could be more productive, for example, tester Brown.

Defect Gap Analysis

Figure 22.2 shows the gap between the number of defects that have been uncovered and the number that have been corrected during the entire project. At project completion, these curves should coincide, indicating that the majority of the defects uncovered have been corrected and the system is ready for production.
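A small sketch of the gap computation follows, assuming weekly counts of uncovered and corrected defects (the numbers are invented); the gap column is the open-defect backlog that should approach zero by project completion.

```python
from itertools import accumulate

# Invented weekly counts of defects uncovered and defects corrected.
uncovered_per_week = [12, 18, 15, 9, 5, 2]
corrected_per_week = [4, 10, 16, 13, 9, 8]

cum_uncovered = list(accumulate(uncovered_per_week))
cum_corrected = list(accumulate(corrected_per_week))

# The gap is the open-defect backlog; at project completion the two
# cumulative curves should coincide, driving the gap toward zero.
for week, (u, c) in enumerate(zip(cum_uncovered, cum_corrected), start=1):
    print(f"week {week}: uncovered={u:3d}  corrected={c:3d}  gap={u - c:3d}")
```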

Defect Severity Status

Figure 22.3 shows the distribution of defects across the three severity categories (critical, major, and minor) for the entire project. A large percentage of defects in the critical category indicates a problem with the design or architecture of the application, which should be examined for maintainability after it is released for production.

Test Burnout Tracking

Figure 22.4 indicates the rate at which defects are uncovered over the entire project and is a valuable test completion indicator. The cumulative (i.e., running total) number of defects and the number of defects per time period help predict when fewer and fewer defects are being discovered. This point is indicated when the cumulative curve “bends” and the defects per time period approach zero.
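One possible way to compute the burnout indicator is sketched below, with invented per-period counts; the threshold used to declare the curve “bent” is an assumption for illustration, not a rule from the text.

```python
from itertools import accumulate

# Invented defects discovered per test period.
per_period = [14, 19, 16, 10, 6, 3, 1]
cumulative = list(accumulate(per_period))

# Burnout is signaled when per-period discoveries approach zero and the
# cumulative curve flattens; the two-period threshold below is an assumption.
THRESHOLD = 3
burned_out = all(n <= THRESHOLD for n in per_period[-2:])

print("cumulative:", cumulative)
print("burnout reached:", burned_out)
```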

Table 22.1   Defects Documented by Function

Table 22.2   Defects Documented by Tester

Tester          Number of Defects    Percent of Total
Jones                  51                   28
Baker                  19                   11
Brown                 112                   61
Grand totals          182                  100

Root Cause Analysis

Figure 22.5 shows the source of the defects, for example, architectural, functional, usability, and so on. If the majority of the defects are architectural, the entire system will be affected, and a great deal of redesign and rework will be required. High-percentage categories should be examined for maintainability after they are released for production.

Defects by How Found

Figure 22.6 shows how the defects were discovered, for example, by external customers, manual testing, and the like. If a very low percentage of defects were discovered through inspections, walkthroughs, or JADs, this would indicate that there may be too much emphasis on testing and too little on the review process. The percentage differences between manual and automated testing also illustrate the contribution of automated testing to the process.

Figure 22.2   Defect gap analysis.

Defects by Who Found

Figure 22.7 shows who discovered the defects, for example, external customers, development, quality assurance testing, and so on. For most projects, quality assurance testing will discover most of the defects. However, if external or internal customers discovered the majority of the defects, this would indicate that quality assurance testing was lacking.

Functions Tested and Not Tested

Figure 22.8 shows the final status of testing and verifies that all or most defects have been corrected and the system is ready for production. At the end of the project, all test cases should have been completed and the percentage of test cases run with errors and not run should be zero. Exceptions should be evaluated by management and documented.
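A minimal sketch of the end-of-project status check implied here, with invented counts; at completion, the “run with errors” and “not run” buckets should both be zero.

```python
# Invented final test-case status counts at project completion.
status = {"run without errors": 482, "run with errors": 0, "not run": 0}

total = sum(status.values())
for name, n in status.items():
    print(f"{name:20s} {n:4d}  {100 * n / total:5.1f}%")

# Any nonzero count in the last two buckets is an exception that management
# must evaluate and document.
assert status["run with errors"] == 0 and status["not run"] == 0
```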

Figure 22.3   Defect severity status.

System Testing Defect Types

System testing consists of one or more tests that are based on the original objectives of the system. Figure 22.9 shows a distribution of defects by system testing type. In the example, performance testing had the most defects, followed by compatibility and usability. An unusually high percentage of performance defects indicates a poorly designed system.

Acceptance Testing Defect Types

Acceptance testing is an optional user-run test that demonstrates the ability of the application to meet the user’s requirements. The motivation for this test is positive rather than negative; that is, it aims to show that the system works. Less emphasis is placed on technical issues, and more is placed on the question of whether the system is a good business fit for the end user.

Figure 22.4   Test burnout tracking.

There should not be many defects discovered during acceptance testing, as most of them should have been corrected during system testing. In Figure 22.10, performance testing still had the most defects, followed by stress and volume testing.

Task 4: Develop Findings/Recommendations

A finding is a discrepancy between what is and what should be. A recommendation is a suggestion on how to correct a problem or improve a system. Findings and recommendations from the test team constitute most of the test report.

The objective of this task is to develop the findings and recommendations from the testing process and to document “lessons learned.” The data reduction performed earlier identified the findings, but they must be put into a format suitable for use by the project team and management.

The test team should make recommendations for correcting each situation. The project team should also confirm that the findings are correct and the recommendations reasonable. Each finding and recommendation can be documented in the finding/recommendations matrix depicted in Table 22.3.
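A lightweight sketch of how such a matrix might be kept as structured records follows; the field names are illustrative, not the actual columns of Table 22.3.

```python
# Each record pairs a finding (what is vs. what should be) with a
# recommendation; the fields here are illustrative.
findings = [
    {
        "finding": "Reports function accounted for 43 percent of all defects",
        "recommendation": "Review the Reports requirements and design "
                          "for maintainability after release",
        "confirmed_by_project_team": True,
    },
]

for i, f in enumerate(findings, start=1):
    print(f"{i}. Finding: {f['finding']}")
    print(f"   Recommendation: {f['recommendation']}")
```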

Figure 22.5   Root cause analysis.

Figure 22.6   Defects by how found.

Figure 22.7   Defects by who found.

Figure 22.8   Functions tested/not tested.

Figure 22.9   System testing by root cause.

Step 3: Review/Approve the Final Test Report

Task 1: Schedule/Conduct the Review

The test summary report review should be scheduled well in advance of the actual review, and the participants should have the latest copy of the test report.

As with any interview or review, there are certain common elements. The first is defining what will be discussed; the second is discussing the details; the third is summarization; and the final element is timeliness. The reviewer should state up front the estimated duration of the review and set the ground rule that if time expires before completing all items on the agenda, a follow-on review will be scheduled.

The purpose of this task is for development and the project sponsor to agree and accept the test report. If there are any suggested changes to the report during the review, they should be incorporated.

Figure 22.10   Acceptance testing by root cause.

Task 2: Obtain Approvals

Approval is critical in a testing effort because it helps provide the necessary agreement among testing, development, and the sponsor. The best approach is a formal sign-off procedure for the test report. If this is the case, use the management approval sign-off forms. However, if a formal approval procedure is not in place, send a memo to each key participant, including at least the project manager, development manager, and sponsor. In the document, attach the latest test report, point out that all of their feedback comments have been incorporated, and note that if you do not hear from them, it will be assumed that they agree with the report. Finally, indicate that in a spiral development environment the report will evolve with each iteration, but that you will include them in any modification.

Task 3: Publish the Final Test Report

The test report is finalized with the suggestions from the review and distributed to the appropriate parties. The report serves both short- and long-term objectives.

Table 22.3   Finding/Recommendations Matrix

The short-term objective is to provide information to the software user for determining whether the system is ready for production. The report also documents outstanding issues, such as tests not completed and unresolved problems, together with recommendations.

The long-term objectives are to provide information to the project regarding how it was managed and developed from a quality point of view. The project can use the report to trace problems if the system malfunctions in production, for example, defect-prone functions that had the most errors and the ones that were not corrected. The project and organization also have the opportunity to learn from the current project. A determination of which development, project management, and testing procedures worked, and which did not work or need improvement, can be invaluable for future projects.
