Chapter 25

The Defect Management Process

Quality Control and Defect Management

The Quality Control process is the third phase of Project Quality Management. Defect management, a key element in controlling quality, establishes the method for recording and organizing the defects that are discovered during test execution. The output of the process gives project stakeholders a way to judge the progress the test team makes as it executes the test plan, and it gives the end user visibility into how well the product conforms to the requirements.

This section breaks the defect management process into the following essential functions:

  ■ Defect discovery and classification

  ■ Defect tracking

  ■ Defect reporting

Defect Discovery and Classification

A defect is a deviation from either business or technical requirements. Testers generally find and log defects as they execute test cases, but end users also find defects as they use the business application or system.

Figure 25.1   Defect life cycle.

Defects are classified into categories to facilitate change management and to help plan and prioritize the rework that is required to fix the defect. Classifications vary from organization to organization. The following are sample classifications:

  1. Showstopper (X): The impact of the defect is severe, and the system cannot be tested without resolving the defect because an interim solution (work-around) is not available.

  2. Critical (C): The impact of the defect is severe; however, an interim solution is available. The defect should not hinder the test process in any way.

  3. Noncritical (N): All defects that are not in the X or C category are in the N category. These are defects that could potentially be resolved through documentation and user training, such as GUI defects or minor field-level observations.

Figure 25.1 depicts the life-cycle flow of a defect. A defect has the initial state of “New” and eventually reaches a “Closed” state.
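These classifications and life-cycle states translate naturally into code. The following is a minimal Python sketch; the enum names are assumptions, and because the text names only the “New” and “Closed” states explicitly, the intermediate states shown are hypothetical:

```python
from enum import Enum

class Severity(Enum):
    """Sample defect classifications described above."""
    SHOWSTOPPER = "X"   # severe, no work-around; blocks testing
    CRITICAL = "C"      # severe, but an interim work-around exists
    NONCRITICAL = "N"   # everything else (GUI issues, minor observations)

class DefectState(Enum):
    """Life-cycle states (see Figure 25.1). Only NEW and CLOSED are
    named in the text; the intermediate states are assumed here."""
    NEW = "New"
    ASSIGNED = "Assigned"   # hypothetical intermediate state
    FIXED = "Fixed"         # hypothetical intermediate state
    CLOSED = "Closed"
```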

Defect Priority

During the test activities, testers assign a priority to each defect as they log the defects into the defect-tracking system. The priority assigned to a defect might change as a result of discussions in the defect meetings, because priority determines the order in which the development team fixes defects. The number and sequence of the fixes have a direct impact on both the development and test schedules.

These are examples of common priority designations:

  1. High: Further development and testing cannot occur until the defect has been repaired. The software system cannot be used until the repair is done.

  2. Medium: The defect must be resolved as soon as possible because it is hindering development and testing activities. Software system use will be severely affected until the defect is fixed.

  3. Low: The defect is an irritant that should be repaired, but the repair can wait until more serious defects have been fixed.
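One way to make the relationship between severity and priority concrete is a default triage rule. The mapping below (reusing the Severity enum from the earlier sketch) is purely an illustrative assumption; as noted above, the defect meetings adjust priorities case by case:

```python
def default_priority(severity: Severity) -> str:
    """Suggest an initial priority from the severity classification.

    Hypothetical mapping for illustration: a showstopper blocks all
    work (High), a critical defect hinders it (Medium), and anything
    else can wait (Low).
    """
    return {
        Severity.SHOWSTOPPER: "High",
        Severity.CRITICAL: "Medium",
        Severity.NONCRITICAL: "Low",
    }[severity]
```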

Defect Category

Defects are grouped into categories according to the testing strategy. The following are the major categories of defects normally identified in a testing project:

  1. Works as Intended (WAI): The test case, not the code, must be modified. This arises when the tester’s understanding of the requirement is incorrect.

  2. Discussion Items: Arises when there is a difference of opinion between the test team and the development team. The defect is referred to the domain consultant for the final verdict.

  3. Code Change: Arises when the development team has to fix the bug.

  4. Data Related: Arises when the defect is due to data and not coding.

  5. User Training: Arises when the defect is not severe, or is technically infeasible to fix, and the decision is made to train the user to work around it. Such a defect should ideally not be critical.

  6. New Requirement: Arises when new functionality is included after discussion.

  7. User Maintenance: Arises when master-data or parameter maintenance performed by the user causes the defect.

  8. Observation: Any other observation not classified in the foregoing categories, such as a user-perspective GUI defect.
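Putting classification, priority, and category together, a defect record might look like the following sketch. It continues the enums from the earlier sketch; the Category values paraphrase the list above, and the field names are assumptions:

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum
from typing import Optional

class Category(Enum):
    """Defect categories paraphrased from the list above."""
    WAI = "Works as Intended"
    DISCUSSION = "Discussion Item"
    CODE_CHANGE = "Code Change"
    DATA_RELATED = "Data Related"
    USER_TRAINING = "User Training"
    NEW_REQUIREMENT = "New Requirement"
    USER_MAINTENANCE = "User Maintenance"
    OBSERVATION = "Observation"

@dataclass
class Defect:
    """Minimal defect record; field names are illustrative assumptions."""
    defect_id: str
    description: str
    severity: Severity            # enum from the earlier sketch
    priority: str                 # "High", "Medium", or "Low"
    category: Category
    state: DefectState = DefectState.NEW   # enum from the earlier sketch
    opened: datetime = field(default_factory=datetime.now)
    closed: Optional[datetime] = None
```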

Defect Tracking

The test strategy document (see Appendix E21, “Test Strategy”) specifies the defect management process for the project (see Figure 25.2). It spells out the test engineer’s actions when a defect is found that needs to be reported to the developers and the owners of the system.

Test engineers enter each defect in the defect log (see Appendix E9, “Test Case Log”) and note when they discovered it. The defect log can also be a database that includes the results of the test along with descriptions of the discrepancies between the expected and actual results.

Figure 25.2   Defect tracking.

Numerous defect management tools are available for logging and monitoring defects. Some of the popular defect management tools are described in Section 6, “Modern Software Testing Tools.”
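For projects without a dedicated tool, a defect log can be as simple as one table in a lightweight database. The sketch below uses Python's built-in sqlite3 module; the schema and column names are illustrative assumptions, not the format of any particular tool or of the Appendix E9 log:

```python
import sqlite3

def open_defect_log(path: str = "defects.db") -> sqlite3.Connection:
    """Create (if needed) and open a minimal defect log."""
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS defect_log (
               defect_id   TEXT PRIMARY KEY,
               found_on    TEXT,     -- when the tester discovered it
               test_case   TEXT,     -- test case being executed
               expected    TEXT,     -- expected result
               actual      TEXT,     -- actual result
               severity    TEXT,     -- X, C, or N
               state       TEXT DEFAULT 'New'
           )"""
    )
    return conn

# Example: log a discrepancy between expected and actual results.
conn = open_defect_log()
conn.execute(
    "INSERT OR REPLACE INTO defect_log VALUES (?, ?, ?, ?, ?, ?, ?)",
    ("DEF-001", "2015-03-02", "TC-17",
     "Order total includes tax", "Tax omitted", "C", "New"),
)
conn.commit()
```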

Defect Reporting

Testers use the defect report (also called a problem report) to capture the details of a problem so it can be evaluated and prioritized into a list of product defects. The report is important to the project management team, to the developers who are assigned to recreate and fix the defect, and to the testers who verify that the defect was fixed. The defect report does not include detailed descriptions of the expected and actual test results, but it does require a detailed problem description. Defects are reported using a standard format that collects the information shown in Appendix E12, “Defect Report.”

Defect Summary

Trend curves are based on the collective information from the defect reports and are published to graphically illustrate these types of trends:

  ■ Total errors found over time.

  ■ Errors by cause. Example: Operator versus program error.

  ■ Errors by how they were found. Example: Errors discovered by the user.

  ■ Errors by system. Example: Errors found in the order entry system.

  ■ Errors found by organization. Example: Support group or operations.

Figure 25.2 shows a graph of time versus the number of defects found during testing. The predicted error rate is an estimate of progress toward completing the test effort. When the rate of correction becomes a bottleneck in the test process, additional development resources should be assigned. Figure 25.2 also shows the difference between the predicted and actual error rates relative to the total number of projected errors.
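The trend data behind such curves can be tallied directly from the defect reports. The following sketch computes cumulative errors found over time and errors by cause from a list of (week, cause) records; the sample records are hypothetical:

```python
from collections import Counter
from itertools import accumulate

# Hypothetical defect reports: (week found, cause).
reports = [
    (1, "program"), (1, "program"), (2, "operator"),
    (2, "program"), (3, "program"), (3, "operator"), (4, "program"),
]

# Total errors found over time (cumulative by week).
per_week = Counter(week for week, _ in reports)
weeks = sorted(per_week)
cumulative = list(accumulate(per_week[w] for w in weeks))
for week, total in zip(weeks, cumulative):
    print(f"week {week}: {total} defects found to date")

# Errors by cause (e.g., operator versus program error).
print(Counter(cause for _, cause in reports))
```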

Defect Meetings

Defect meetings are the best way to disseminate information among the testers, the analysts, the developers, and the business.

Daily meetings are conducted at the end of the day between the test team and development team to discuss test execution and defects. This is when the defects are formally categorized in terms of the defect type and severity.

Before the defect meetings with the development team, the test team should have internal discussions with the test project manager on the defects reported. This process ensures that all defects are accurate and authentic to the best knowledge of the test team.

Defect Metrics

Defects can be analyzed on the basis of their severity, occurrence, and category. As an example, defect density is a metric that gives the ratio of defects in specific modules to the total defects in the application. Further metrics can be derived from the various components of defect management.

  1. Defect age: Defect age is the elapsed time between the identification of a defect and its closure. It gives a fair idea of which defects to include in the smoke test during regression (see the calculation sketch following the defect density formula below).

  2. Defect density: Defect density is usually calculated per thousand source lines of code (KSLOC), as shown in the following formula. A measure of defect density can be used to (1) predict the remaining defects when compared to the expected defect density, (2) determine whether the amount of testing is sufficient, and (3) establish a database of standard defect densities.

Dd = D/KSLOC

where

D = the number of defects,
KSLOC = the number of noncommented lines of source code (in thousands), and
Dd = the actual defect density.
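A direct translation of the formula, together with the defect-age measure defined above, might look like the following sketch; the function names and the day-based age unit are assumptions:

```python
from datetime import date

def defect_density(defects: int, ncsloc: int) -> float:
    """Dd = D / KSLOC, where KSLOC = noncommented source lines / 1000."""
    return defects / (ncsloc / 1000)

def defect_age(identified: date, closed: date) -> int:
    """Defect age in days, from identification to closure."""
    return (closed - identified).days

# Example: 45 defects in 30,000 noncommented lines -> 1.5 defects/KSLOC.
print(defect_density(45, 30_000))
print(defect_age(date(2015, 3, 2), date(2015, 3, 9)))  # 7 days
```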

Plotting defect density versus module size typically produces a U-shaped curve that is concave upward (see Figure 25.3): very small and very large modules show a higher defect density than modules of intermediate size. The increased incidence of bugs in small modules holds across a wide variety of systems and has been demonstrated by several studies.

A different way of viewing the same data is to plot lines of code per module against total bugs. The curve looks roughly logarithmic and then flattens, corresponding to the minimum of the defect density curve, after which it rises as the square of the number of lines of code.

Quality Standards

Managing the cycle of finding and fixing defects is an integral activity in the quality control process. The purpose of the work that goes into overall defect management is to compare the quality of the product to the planned quality standards. If the quality standards are not well established by the project manager and the test manager, the cost of quality will reach a point of diminishing returns: the point at which the cost of finding and fixing more defects outweighs the financial benefit of the project.

Figure 25.3   Defect count and density versus module size.

Enforcing quality standards means delivering a product that the customer will accept. Beyond the acceptable level of quality is a point of diminishing returns at which the cost of quality exceeds the financial benefit of the project.
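That break-even point can be framed as a marginal comparison: keep testing while finding and fixing the next defect costs less than the defect would cost in the field. The numbers in the sketch below are entirely hypothetical and serve only to illustrate diminishing returns:

```python
def worth_continuing(marginal_find_fix_cost: float,
                     expected_field_cost: float) -> bool:
    """Continue testing only while finding the next defect
    costs less than letting it escape to production."""
    return marginal_find_fix_cost < expected_field_cost

# As testing proceeds, each remaining defect gets harder (costlier)
# to find, while the expected field cost per defect stays constant.
expected_field_cost = 5_000.0            # hypothetical figure
for defect_number, find_cost in enumerate(
        [500.0, 1_200.0, 3_000.0, 8_000.0], start=1):
    print(defect_number, worth_continuing(find_cost, expected_field_cost))
# Prints True, True, True, False: the fourth defect costs more to
# find and fix than it would cost in the field, so that is the
# point of diminishing returns.
```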
