Chapter 14

Test Planning (Plan)

The purpose of the test project plan is to provide the basis for accomplishing testing in an organized manner. From a managerial point of view, it is the most important document, because it helps manage the test project. If a test plan is comprehensive and carefully thought out, test execution and analysis should proceed smoothly. (See Appendix E1 for a sample unit test plan, Appendix E4 for a sample system test plan, and Appendix F24 for a unit testing checklist, which can be used to verify that unit testing has been thorough and comprehensive.)

The test project plan is an ongoing document, particularly in the spiral environment, because the system is constantly changing. As the system changes, so does the test plan. A good test plan is one that:

  ■ Has a good chance of detecting a majority of the defects

  ■ Provides test coverage for most of the code

  ■ Is flexible

  ■ Is executed easily and automatically, and is repeatable

  ■ Defines the types of tests to be performed

  ■ Clearly documents the expected results

  ■ Provides for defect reconciliation when a defect is discovered

  ■ Clearly defines the test objectives

  ■ Clarifies the test strategy

  ■ Clearly defines the test exit criteria

  ■ Is not redundant

  ■ Identifies the risks

  ■ Documents the test requirements

  ■ Defines the test deliverables

Although there are many ways a test plan can be created, Figure 14.1 provides a framework that includes most of the essential planning considerations. It can be treated as a checklist of test items to consider. Some of the items, such as defining the test requirements and test team, are obviously required; however, others may not be. It depends on the nature of the project and the time constraints.

The test planning methodology includes three steps: building the test project plan, defining the metrics, and reviewing/approving the test project plan. Each of these steps is then broken down into its respective tasks, as shown in Figure 14.1.

Step 1: Build a Test Plan

Task 1: Prepare an Introduction

The first part of the test plan is a description of the problems to be solved by the application and the associated opportunities. This defines the summary background, describing the events or current status leading up to the decision to develop the application. Also, the application’s risks, purpose, objectives, and benefits, and the organization’s critical success factors should be documented in the introduction. A critical success factor is a measurable item that will have a major influence on whether a key function meets its objectives. An objective is a measurable end state that the organization strives to achieve. Examples of objectives include the following:

  ■ New product opportunity

  ■ Improved efficiency (internal and external)

  ■ Organizational image

  ■ Growth (internal and external)

  ■ Financial (revenue, cost profitability, etc.)

  ■ Competitive position

  ■ Market leadership

The introduction should also include an executive summary description. The executive sponsor (often called the project sponsor) is the individual who has ultimate authority over the project. This individual has a vested interest in the project in terms of funding, project results, and resolving project conflicts, and is responsible for the success of the project. An executive summary describes the proposed application from an executive’s point of view. It should describe the problems to be solved, the application goals, and the business opportunities. The objectives should indicate whether the application is a replacement of an old system and document the impact the application will have, if any, on the organization in terms of management, technology, and so on.

Any available documentation should be listed and its status described. Examples include requirements specifications, functional specifications, project plan, design specification, prototypes, user manual, business model/flow diagrams, data models, and project risk assessments. In addition to project risks, which are the potential adverse effects on the development project, the risks relating to the testing effort should be documented. Examples include the lack of testing skills, scope of the testing effort, lack of automated testing tools, and the like. See Appendix E4, “Test Plan (Client/Server and Internet Spiral Testing),” for more details.

Figure 14.1   Test planning (steps/tasks).

Task 2: Define the High-Level Functional Requirements (in Scope)

A functional specification consists of the hierarchical functional decomposition, the functional window structure, the window standards, and the minimum system requirements of the system to be developed. An example of window standards is the Windows 95 GUI Standards. An example of a minimum system requirement could be Windows 95, a Pentium II microprocessor, 24 MB RAM, 3 GB disk space, and a modem. At this point in development, a full functional specification may not have been defined. However, a list of at least the major business functions and the basic window structure should be available.

A basic functional list contains the main functions of the system with each function named and described with a verb–object paradigm. This list serves as the basis for structuring functional testing (see Figure 14.2).

A functional window structure describes how the functions will be implemented in the windows environment. At this point, a full functional window structure may not be available, but a list of the major windows should be (see Figure 14.3).

Figure 14.2   High-level business functions.

Figure 14.3   Functional window structure.

Task 3: Identify Manual/Automated Test Types

The types of tests that need to be designed and executed depend on the objectives of the application, that is, the measurable end state the organization is striving to achieve. For example, if the application is a financial application used by a large number of individuals, special security and usability tests need to be performed. However, three types of tests that are nearly always required are function, user interface, and regression testing. Function testing comprises the majority of the testing effort and is concerned with verifying that the functions work properly. It is a black-box-oriented activity in which the tester is completely unconcerned with the internal behavior and structure of the application. User interface testing, or GUI testing, checks the user’s interaction with the functional window structure. It ensures that object state dependencies work properly and provide useful navigation through the functions. Regression testing tests the application in light of changes made during debugging, maintenance, or the development of a new release.

Other types of tests that need to be considered include system and acceptance testing. System testing is the highest level of testing and evaluates functionality as a total system, its performance, and overall fitness of use. Acceptance testing is an optional user-run test that demonstrates the ability of the application to meet the user’s requirements. This test may or may not be performed, depending on the formality of the project. Sometimes the system test suffices.

Finally, the tests that can be automated with a testing tool need to be identified. Automated tests provide three benefits: repeatability, leverage, and increased functionality. Repeatability enables automated tests to be executed more than once, consistently. Leverage comes from repeatability: tests captured earlier can be rerun at little cost, and tests can be programmed with the tool that might not have been practical without automation. As the application evolves and more functionality is added, the test library maintains the functional coverage.
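
To illustrate the repeatability benefit, the following is a minimal sketch of an automated test written with Python’s standard unittest framework. The application function, its behavior, and the test data are hypothetical; they simply stand in for whatever capability the tool-driven scripts would actually exercise.

import unittest

def calculate_invoice_total(line_items, tax_rate):
    """Hypothetical application function under test."""
    subtotal = sum(quantity * price for quantity, price in line_items)
    return round(subtotal * (1 + tax_rate), 2)

class InvoiceRegressionTest(unittest.TestCase):
    """Repeatable functional test; rerunning it in every spiral gives the
    regression coverage described above."""

    def test_total_with_tax(self):
        items = [(2, 9.99), (1, 5.00)]
        self.assertEqual(calculate_invoice_total(items, 0.08), 26.98)

    def test_empty_invoice(self):
        self.assertEqual(calculate_invoice_total([], 0.08), 0.0)

if __name__ == "__main__":
    unittest.main()

Because the expected results are encoded in the script, the same checks can be rerun unchanged in every spiral, which is exactly what manual testing struggles to sustain.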

Task 4: Identify the Test Exit Criteria

One of the most difficult and political problems is deciding when to stop testing, because it is impossible to know when all the defects have been detected. There are at least four criteria for exiting testing:

  1. Scheduled testing time has expired—This criterion is very weak, inasmuch as it has nothing to do with verifying the quality of the application. This does not take into account that there may be an inadequate number of test cases or the fact that there may not be any more defects that are easily detectable.

  2. Some predefined number of defects discovered—The problems with this are knowing how many defects to expect and the risk of underestimating or overestimating that number. If the number of defects is underestimated, testing will be incomplete. Potential solutions include experience with similar applications developed by the same development team, predictive models, and industrywide averages. If the number of defects is overestimated, the test may never be completed within a reasonable time frame. A possible solution is to estimate completion time by plotting defects detected per unit of time (see the sketch following this list). If the rate of defect detection decreases dramatically, there may be “burnout,” an indication that a majority of the defects have been discovered.

  3. All the formal tests execute without detecting any defects—A major problem with this is that the tester is not motivated to design destructive test cases that force the tested program to its design limits; after all, the tester’s job is completed when the test program yields no more errors. The tester is motivated not to find errors and may subconsciously write test cases that show the program is error free. This criterion is valid only if there is a rigorous and totally comprehensive test case suite that approaches 100 percent coverage. The problem then is determining when the suite of test cases is comprehensive. If it is felt that this is the case, a good strategy at this point is to continue with ad hoc testing. Ad hoc testing is a black-box testing technique in which the tester lets his or her mind run freely to enumerate as many test conditions as possible. Experience has shown that this technique can be a very powerful supplemental or add-on technique.

  4. Combination of the foregoing criteria—Most testing projects utilize a combination of the foregoing exit criteria. It is recommended that all the tests be executed, but any further ad hoc testing will be constrained by time.
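
The following is a minimal sketch of the defect “burnout” check mentioned under criterion 2, written in Python. The weekly defect counts and the 50 percent threshold are hypothetical illustration values, not prescribed figures.

# If the rate of newly discovered defects falls sharply from one test
# period to the next, most easily detectable defects may already have
# been found. The weekly counts below are hypothetical illustration data.

weekly_defects = [34, 29, 21, 12, 5, 2]  # defects found per week of testing

def detection_rate_dropping(counts, threshold=0.5):
    """Return True if the latest period found fewer than `threshold`
    (e.g., 50 percent) of the defects found in the previous period."""
    if len(counts) < 2 or counts[-2] == 0:
        return False
    return counts[-1] / counts[-2] < threshold

if detection_rate_dropping(weekly_defects):
    print("Defect discovery is tapering off; consider the exit criteria.")
else:
    print("Defects are still being found at a steady rate; keep testing.")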

Task 5: Establish Regression Test Strategy

Regression testing tests the application in light of changes made during a development spiral, debugging, maintenance, or the development of a new release. This test must be performed after functional improvements or repairs have been made to a system to confirm that the changes have no unintended side effects. Corrections of logic and control-flow errors, computational errors, and interface errors are examples of changes that necessitate regression testing. Cosmetic errors generally do not affect other capabilities and do not require regression testing.

It would be ideal if all the tests in the test suite were rerun for each new spiral; however, owing to time constraints, this is probably not realistic. A good regression strategy during spiral development is for some regression testing to be performed during each spiral to ensure that previously demonstrated capabilities are not adversely affected by later development spirals or error corrections. During system testing, after the system is stable and the functionality has been verified, regression testing should consist of a subset of the system tests. Policies need to be created to decide which tests to include. (See Appendix E21, “Test Strategy.”)

A retest matrix is an excellent tool that relates test cases to functions (or program units), as shown in Table 14.1. A check entry in the matrix indicates that the test case is to be retested when the function (or program unit) has been modified due to enhancements or corrections. An empty cell means that the test does not need to be retested. The retest matrix can be built before the first testing spiral but needs to be maintained during subsequent spirals. As functions (or program units) are modified during a development spiral, existing test cases need to be checked, and new ones created and checked, in the retest matrix in preparation for the next test spiral. Over time, some functions (or program units) may remain stable with no recent modifications; between testing spirals, consider selectively removing their check entries. (Also see Appendix E14, “Retest Matrix.”)

Table 14.1   Retest Matrix

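The following is a minimal sketch of how a retest matrix such as Table 14.1 might be represented and used to select the tests to rerun in the next spiral. The function names and test-case identifiers are hypothetical.

# A True entry plays the role of a check mark: the test case must be
# rerun when that function changes. Names below are hypothetical.

retest_matrix = {
    "Order Entry":   {"TC-01": True,  "TC-02": True,  "TC-03": False},
    "Order Billing": {"TC-01": False, "TC-02": True,  "TC-03": True},
    "Shipping":      {"TC-01": False, "TC-02": False, "TC-03": True},
}

def tests_to_rerun(modified_functions, matrix):
    """Collect every checked test case for the functions modified
    in the current spiral."""
    selected = set()
    for function in modified_functions:
        for test_case, checked in matrix.get(function, {}).items():
            if checked:
                selected.add(test_case)
    return sorted(selected)

# Example: billing logic changed during this spiral.
print(tests_to_rerun(["Order Billing"], retest_matrix))  # ['TC-02', 'TC-03']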

Other considerations of regression testing are as follows:

  ■ Regression tests are potential candidates for test automation when they are repeated over and over in every testing spiral.

  ■ Regression testing needs to occur between releases after the initial release of the system.

  ■ A test that uncovers an original defect should be rerun after the defect has been corrected.

  ■ An in-depth effort should be made to ensure that the original defect was corrected, and not just the symptoms.

  ■ Regression tests that repeat other tests should be removed.

  ■ Other test cases in the functional (or program unit) area where a defect is uncovered should be included in the regression test suite.

  ■ Client-reported defects should have high priority and should be regression-tested thoroughly.

Task 6: Define the Test Deliverables

Test deliverables result from test planning, test design, test development, and test defect documentation. Some spiral test deliverables from which you can choose include the following:

  ■ Test plan: Defines the objectives, scope, strategy, types of tests, test environment, test procedures, exit criteria, and so on (see Appendix E4, “Sample Template”).

  ■ Test design: Tests for the application’s functionality, performance, and appropriateness for use. The tests demonstrate that the original test objectives are satisfied.

  ■ Change request: A documented request to modify the current software system, usually supplied by the user (see Appendix D, “Change Request Form,” for more details). It is typically different from a defect report, which reports an anomaly in the system.

  ■ Metrics: The measurable indication of some quantitative aspect of a system. Examples include the number of severe defects, and the number of defects discovered as a function of the number of testers.

  ■ Test case: A specific set of test data and associated procedures developed for a particular objective. It provides a detailed blueprint for conducting individual tests and includes specific input data values and the corresponding expected results (see Appendix E8, “Test Case,” for more details).

  ■ Test log summary report: Specifies the test cases from the tester’s individual test logs that are in progress or completed for status reporting and metric collection (see Appendix E10, “Test Log Summary Report”).

  ■ Test case log: Specifies the test cases for a particular testing event to be executed during testing. It is also used to record the results of the test performed, to provide the detailed evidence for the summary of test results, and to provide a basis for reconstructing the testing event if necessary (see Appendix E9, “Test Case Log”).

  ■ Interim test report: A report published between testing spirals, indicating the status of the testing effort (see Part 18, Step 3, Publish Interim Report).

  ■ System summary report: A comprehensive test report after all spiral testing has been completed (see Appendix E11, “System Summary Report”).

  ■ Defect report: Documents defects discovered during spiral testing (see Appendix E12, “Defect Report”).

Task 7: Organize the Test Team

The people component includes human resource allocations and the required skill sets. The test team should comprise the highest-caliber personnel possible. Such individuals are usually extremely busy and in great demand because of their talents, so it is vital to build the best case possible for using them for test purposes. A test team leader and test team need to have the right skills and experience, and be motivated to work on the project. Ideally, they should be professional quality assurance specialists, but they can also represent the executive sponsor, users, technical operations, database administration, the computer center, independent parties, and so on. In any event, they should not come from the development team, because developers may not be as unbiased as an outside party. This is not to say that developers should not test; they should unit and function test their code extensively before handing it over to the test team.

There are two areas of responsibility in testing: testing the application, which is the responsibility of the test team, and the overall testing processes, which is handled by the test manager. The test manager directs one or more testers, is the interface between quality assurance and the development organization, and manages the overall testing effort. Responsibilities include the following:

  ■ Setting up the test objectives

  ■ Defining test resources

  ■ Creating test procedures

  ■ Developing and maintaining the test plan

  ■ Designing test cases

  ■ Designing and executing automated testing tool scripts

  ■ Test case development

  ■ Providing test status

  ■ Writing reports

  ■ Defining the roles of the team members

  ■ Managing the test resources

  ■ Defining standards and procedures

  ■ Ensuring quality of the test process

  ■ Training the team members

  ■ Maintaining test statistics and metrics

The test team members must be team players and have these responsibilities:

  ■ Executing test cases according to the plan

  ■ Evaluating the test results

  ■ Reporting errors

  ■ Designing and executing automated testing tool scripts

  ■ Recommending application improvements

  ■ Recording defects

The main function of a team member is to test the application and report defects to the development team by documenting them in a defect-tracking system. Once the development team corrects the defects, the test team reexecutes the tests that discovered the original defects.

It should be pointed out that the roles of the test manager and team members are not mutually exclusive. Some of the team leader’s responsibilities are shared with the team members, and vice versa.

The basis for allocating dedicated testing resources is the scope of the functionality and the development time frame; for example, a medium development project will require more testing resources than a small one. If project A of medium complexity requires a testing team of five, project B with twice the scope would require ten testers (given the same resources).

Another rule of thumb is that the testing costs approach 25 percent of the total budget. Because the total project cost is known, the testing effort can be calculated and translated to tester headcount.

The best estimate is a combination of the project scope, test team skill levels, and project history. A good measure of required testing resources for a particular project is the histories of multiple projects, that is, testing resource levels and performance compared to similar projects.
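
The following is a small worked sketch of the two rules of thumb above; the budget and cost-per-tester figures are hypothetical and would be replaced with the project’s own numbers.

# Rule 1: scale the tester headcount by relative scope.
project_a_testers = 5
scope_ratio = 2.0                      # project B has twice the scope of A
project_b_testers = project_a_testers * scope_ratio
print(f"Estimated testers for project B: {project_b_testers:.0f}")   # 10

# Rule 2: testing effort approaches 25 percent of the total project budget.
total_project_cost = 800_000           # hypothetical total budget
testing_budget = total_project_cost * 0.25
cost_per_tester = 50_000               # hypothetical fully loaded cost per tester
print(f"Testing budget: {testing_budget:,.0f}")                      # 200,000
print(f"Tester headcount: {testing_budget / cost_per_tester:.0f}")   # 4

Either estimate should then be adjusted for test team skill levels and the history of similar projects, as noted above.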

Task 8: Establish a Test Environment

The purpose of the test environment is to provide a physical framework necessary for the testing activity. For this task, the test environment needs are established and reviewed before implementation.

The main components of the test environment include the physical test facility, technologies, and tools. The test facility component includes the physical setup. The technologies component includes the hardware platforms, physical network and all its components, operating system software, and other software such as utility software. The tools component includes any specialized testing software such as automated test tools, testing libraries, and support software.

The testing facility and workplace need to be established. This may range from an individual workplace configuration to a formal testing laboratory. In any event, it is important that the testers be together and in close proximity to the development team. This facilitates communication and the sense of a common goal. The testing tools that were acquired need to be installed.

The hardware and software technologies need to be set up. This includes the installation of test hardware and software, and coordination with vendors, users, and information technology personnel. It may be necessary to test the hardware and coordinate with hardware vendors. Communication networks need to be installed and tested.

Task 9: Define the Dependencies

A good source of information is previously produced test plans on other projects. If available, the sequence of tasks in the project work plans can be analyzed for activity and task dependencies that apply to this project.

Examples of test dependencies include the following:

  ■ Code availability

  ■ Tester availability (in a timely fashion)

  ■ Test requirements (reasonably defined)

  ■ Test tools availability

  ■ Test group training

  ■ Technical support

  ■ Defects fixed in a timely manner

  ■ Adequate testing time

  ■ Computers and other hardware

  ■ Software and associated documentation

  ■ System documentation (if available)

  ■ Defined development methodology

  ■ Test laboratory space availability

  ■ Agreement with development (procedures and processes)

The support personnel need to be defined and committed to the project. This includes members of the development group, technical support staff, network support staff, and database administrator support staff.

Task 10: Create a Test Schedule

A test schedule should be produced that includes the testing steps (and perhaps tasks), target start and end dates, and responsibilities. It should also describe how it will be reviewed, tracked, and approved. A simple test schedule format, as shown in Table 14.2, follows the spiral methodology.

Also, a project management tool such as Microsoft Project can format a Gantt chart to emphasize the tests and group them into test steps. A Gantt chart consists of a table of task information and a bar chart that graphically displays the test schedule. It also shows task durations and links the task dependency relationships graphically. People resources can also be assigned to tasks for workload balancing. See Appendix E13, “Test Schedule,” and the accompanying Gantt spiral testing methodology template file.

Another way to schedule testing activities is with “relative scheduling,” in which testing steps or tasks are defined by their sequence or precedence. Each step has no specific start or end date but does have a duration, expressed in units such as days. (Also see Appendix E18, “Test Execution Plan,” which can be used to plan the activities for the Execution phase, and Appendix E20, “PDCA Test Schedule,” which can be used to plan and track the Plan–Do–Check–Act test phases.)
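
The following is a minimal sketch of relative scheduling: each step carries only a duration and its predecessors, and start and end offsets are derived from them. The step names, durations, and dependencies are hypothetical.

steps = {
    "Build test plan":      {"duration": 5,  "depends_on": []},
    "Design test cases":    {"duration": 10, "depends_on": ["Build test plan"]},
    "Develop test scripts": {"duration": 8,  "depends_on": ["Design test cases"]},
    "Execute spiral 1":     {"duration": 7,  "depends_on": ["Develop test scripts"]},
}

def relative_schedule(steps):
    """Return {step: (start_day, end_day)} relative to day 0."""
    finish = {}
    schedule = {}
    for name, info in steps.items():   # assumes steps are listed in precedence order
        start = max((finish[d] for d in info["depends_on"]), default=0)
        finish[name] = start + info["duration"]
        schedule[name] = (start, finish[name])
    return schedule

for step, (start, end) in relative_schedule(steps).items():
    print(f"{step}: day {start} to day {end}")

Once calendar dates are known, day 0 is simply mapped to the actual start date.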

It is also important to define major external and internal milestones. External milestones are events that are external to the project but may have a direct impact on the project. Examples include project sponsorship approval, corporate funding, and legal authorization. Internal milestones are derived from the schedule work plan and typically correspond to key deliverables that need to be reviewed and approved. Examples include test plan, design, and development completion approval by the project sponsor and the final spiral test summary report. Milestones can be documented in the test plan in table format as shown in Table 14.3. (Also see Appendix E19, “Test Project Milestones,” which can be used to identify and track the key test milestones.)

Task 11: Select the Test Tools

Test tools range from relatively simple to sophisticated software. New tools are being developed to help provide the high-quality software needed for today’s applications.

Because test tools are critical to effective testing, those responsible for testing should be proficient in using them. The tools selected should be most effective for the environment in which the tester operates and the specific types of software being tested. The test plan needs to name specific test tools and their vendors. The individual who selects the test tool should also conduct the test and be familiar enough with the tool to use it effectively. The test team should review and approve the use of each test tool, because the tool selected must be consistent with the objectives of the test plan.

Table 14.2   Test Schedule


Table 14.3   Project Milestones


The selection of testing tools may be based on intuition or judgment. However, a more systematic approach should be taken. Section 6, “Modern Software Testing Tools,” provides a comprehensive methodology for acquiring testing tools. It also provides an overview of the types of modern testing tools available.

Task 12: Establish Defect Recording/Tracking Procedures

During the testing process, when a defect is discovered, it needs to be recorded. A defect is related to individual tests that have been conducted, and the objective is to produce a complete record of those defects. The overall motivation for recording defects is to correct them and record metric information about the application. Development should have access to the defect reports, which they can use to evaluate whether there is a defect and how to reconcile it. The defect form can either be manual or electronic, with the latter being preferred. Metric information such as the number of defects by type or open time for defects can be very useful in understanding the status of the system.

Defect control procedures need to be established to control this process from initial identification to reconciliation. Table 14.4 shows some possible defect states, from open to closed with intermediate states. The testing department initially opens a defect report and also closes it. A “Yes” in a cell indicates a possible transition from one state to another. For example, an “Open” state can change to “Under Review,” “Returned by Development,” or “Deferred by Development.” The transitions are initiated by either the testing department or by development.
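
The following is a minimal sketch of how such defect-state control might be enforced programmatically. The transition set shown is an illustrative subset in the spirit of Table 14.4, not the complete matrix, and the defect record fields are hypothetical.

# Each set lists the states reachable from the key state, playing the role
# of the "Yes" cells in the transition table.
ALLOWED_TRANSITIONS = {
    "Open":                    {"Under Review", "Returned by Development",
                                "Deferred by Development"},
    "Under Review":            {"Returned by Development", "Closed"},
    "Returned by Development": {"Open", "Closed"},
    "Deferred by Development": {"Open"},
    "Closed":                  set(),
}

def change_state(defect, new_state):
    """Apply a state change only if the transition table permits it."""
    current = defect["state"]
    if new_state not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"Illegal transition: {current} -> {new_state}")
    defect["state"] = new_state
    defect.setdefault("history", []).append(new_state)
    return defect

defect = {"id": "DR-042", "state": "Open", "history": ["Open"]}
change_state(defect, "Under Review")
change_state(defect, "Closed")
print(defect["history"])   # ['Open', 'Under Review', 'Closed']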

A defect report form also needs to be designed. The major fields of a defect form include the following (see Appendices E12 and E27, “Defect Report,” for more details):

  ■ Identification of the problem, for example, functional area, problem type, and so on

  ■ Nature of the problem, for example, behavior

  ■ Circumstances that led to the problem, for example, inputs and steps

  ■ Environment in which the problem occurred, for example, platform, and so on

  ■ Diagnostic information, for example, error code, and so on

  ■ Effect of the problem, for example, consequence

It is quite possible that a defect report and a change request form are the same. The advantage of this approach is that it is not always clear whether a change request is a defect or an enhancement request. The differentiation can be made with a form field that indicates whether it is a defect or enhancement request. On the other hand, a separate defect report can be very useful during the maintenance phase when the expected behavior of the software is well known and it is easier to distinguish between a defect and an enhancement.

Table 14.4   Defect States


Task 13: Establish Change Request Procedures

If it were a perfect world, a system would be built and there would be no future changes. Unfortunately, it is not a perfect world and after a system is deployed, there are change requests.

Some of the reasons for change are the following:

  ■ The requirements change.

  ■ The design changes.

  ■ The specification is incomplete or ambiguous.

  ■ A defect is discovered that was not caught during reviews.

  ■ The software environment changes, for example, platform, hardware, and so on.

Change control is the process by which a modification to a software component is proposed, evaluated, approved or rejected, scheduled, and tracked. It is a decision process used in controlling the changes made to software. Some proposed changes are accepted and implemented during this process. Others are rejected or postponed, and are not implemented. Change control also provides for impact analysis to determine the dependencies (see Appendix D, “Change Request Form,” for more details).

Each software component has a life cycle. A life cycle consists of states and allowable transitions between those states. Any time a software component is changed, it should always be reviewed. During the review, it is frozen from further modifications and the only way to change it is to create a new version. The reviewing authority must approve the modified software component or reject it. A software library should hold all components as soon as they are frozen and also act as a repository for approved components.

The formal title of the organization that manages changes is a configuration control board, or CCB. The CCB is responsible for the approval of changes and for judging whether a proposed change is desirable. For a small project, the CCB can consist of a single person, such as a project manager. For a more formal development environment, it can consist of several members from development, users, quality assurance, management, and the like.

All components controlled by software configuration management are stored in a software configuration library, including work products such as business data and process models, architecture groups, design units, tested application software, reusable software, and special test software. When a component is to be modified, it is checked out of the repository into a private workspace. It evolves through many states that are temporarily outside the scope of configuration management control.

When a change is completed, the component is checked into the library and becomes a new component version. The previous component version is also retained.

Change control is based on the following major functions of a development process: requirements analysis, system design, program design, testing, and implementation. At least six control procedures are associated with these functions and need to be established for a change control system (see Appendix B, “Software Quality Assurance Plan,” for more details):

  1. Initiation procedures—This includes procedures for initiating a change request through a change request form, which serves as a communication vehicle. The objective is to gain consistency in documenting the change request document and routing it for approval.

  2. Technical assessment procedures—This includes procedures for assessing the technical feasibility and technical risks, and scheduling a technical evaluation of a proposed change. The objectives are to ensure integration of the proposed change, the testing requirements, and the ability to install the change request.

  3. Business assessment procedures—This includes procedures for assessing the business risk, effect, and installation requirements of the proposed change. The objectives are to ensure that the timing of the proposed change is not disruptive to the business goals.

  4. Management review procedures—This includes procedures for evaluating the technical and business assessments through management review meetings. The objectives are to ensure that changes meet technical and business requirements and that adequate resources are allocated for testing and installation.

  5. Test tracking procedures—This includes procedures for tracking and documenting test progress and communication, including steps for scheduling tests, documenting the test results, deferring change requests based on test results, and updating test logs. The objectives are to ensure that testing standards are utilized to verify the change, including test plans and test design, and that test results are communicated to all parties.

  6. Installation tracking procedures—This includes procedures for tracking and documenting the installation progress of changes. It ensures that proper approvals have been completed, adequate time and skills have been allocated, installation and backup instructions have been defined, and proper communication has occurred. The objectives are to ensure that all approved changes have been made, including scheduled dates, test durations, and reports.

Task 14: Establish Version Control Procedures

A method for uniquely identifying each software component needs to be established via a labeling scheme. Every software component must have a unique name. Software components evolve through successive revisions, and each needs to be distinguished. A simple way to distinguish component revisions is with a pair of integers, 1.1, 1.2, …, that define the release number and level number. When a software component is first identified, it is release 1, and subsequent major revisions are releases 2, 3, and so on.
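
The following is a minimal sketch of the release.level labeling scheme just described; the helper names and revision strings are hypothetical.

def parse_label(label):
    """Split a 'release.level' label such as '2.3' into integers."""
    release, level = label.split(".")
    return int(release), int(level)

def next_level(label):
    """Minor change: bump the level number (1.2 -> 1.3)."""
    release, level = parse_label(label)
    return f"{release}.{level + 1}"

def next_release(label):
    """Major revision: bump the release number and reset the level (1.3 -> 2.1)."""
    release, _ = parse_label(label)
    return f"{release + 1}.1"

print(next_level("1.2"))    # 1.3
print(next_release("1.3"))  # 2.1
print(parse_label("2.1") > parse_label("1.3"))  # True: tuple comparison orders revisions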

In a client/server environment, it is highly recommended that the development environment be separate from the test environment. This requires the application software components to be transferred from the development environment to the test environment, and procedures need to be set up to manage this transfer.

Software needs to be placed under configuration control so that no changes are being made to the software while testing is being conducted. This includes source and executable components. Application software can be periodically migrated into the test environment. This process must be controlled to ensure that the latest version of software is tested. Versions will also help control the repetition of tests to ensure that previously discovered defects have been resolved.

For each release or interim change between versions of a system configuration, a version description document should be prepared to identify the software components.

Task 15: Define Configuration Build Procedures

Assembling a software system involves tools to transform the source components, or source code, into executable programs. Examples of tools are compilers and linkage editors.

Configuration build procedures need to be defined to identify the correct component versions and to execute the component build procedures.

A configuration typically consists of a set of derived software components. An example of derived software components is executable object programs derived from source programs. Derived components must be correctly associated with each source component to obtain an accurate derivation. The configuration build model addresses the crucial question of how to control the way derived components are built.

The inputs and outputs required for a configuration build model include primary inputs and primary outputs. The primary inputs are the source components, which are the raw materials from which the configuration is built; the version selection procedures; and the system model, which describes the relationship between the components. The primary outputs are the target configuration and derived software components.

Different software configuration management environments use different approaches for selecting versions. The simplest approach to version selection is to maintain a list of component versions. Other automated approaches allow for the most recently tested component versions to be selected, or those updated on a specific date. Operating system facilities can be used to define and build configurations, including the directories and command files.
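
The following is a minimal sketch of a configuration build model: a system model relates derived components to their sources, a selection rule picks component versions, and the build associates them. The component names and versions are hypothetical, and the actual compile/link step is omitted.

system_model = {
    "billing.exe": ["billing.c", "tax.c"],      # derived component -> sources
    "reports.exe": ["reports.c"],
}

library = {                                      # versions held in the library
    "billing.c": ["1.1", "1.2", "1.3"],
    "tax.c":     ["1.1", "1.2"],
    "reports.c": ["1.1"],
}

def select_latest(component):
    """Simplest selection rule: take the most recent version in the library."""
    return library[component][-1]

def build_configuration(model):
    """Associate every derived component with the selected source versions."""
    configuration = {}
    for derived, sources in model.items():
        configuration[derived] = {src: select_latest(src) for src in sources}
    return configuration

print(build_configuration(system_model))
# {'billing.exe': {'billing.c': '1.3', 'tax.c': '1.2'},
#  'reports.exe': {'reports.c': '1.1'}}

A different selection rule, for example "most recently tested" or "as of a given date," can be substituted without changing the rest of the model.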

Task 16: Define Project Issue Resolution Procedures

Testing issues can arise at any point in the development process and must be resolved successfully. The primary responsibility for issue resolution rests with the project manager, who should work with the project sponsor to resolve those issues. Typically, the testing manager documents test issues that arise during the testing process. The project manager or project sponsor should screen every issue that arises. An issue can be rejected or deferred for further investigation, but should be considered relative to its impact on the project. In any case, a form should be created that contains the essential information. Examples of testing issues include lack of testing tools, lack of adequate time to test, inadequate knowledge of the requirements, and so on.

Issue management procedures need to be defined before the project starts. The procedures should address how to:

  ■ Submit an issue

  ■ Report an issue

  ■ Screen an issue (rejected, deferred, merged, or accepted)

  ■ Investigate an issue

  ■ Approve an issue

  ■ Postpone an issue

  ■ Reject an issue

  ■ Close an issue

Task 17: Establish Reporting Procedures

Test reporting procedures are critical for managing testing progress and the expectations of the project team members. They keep the project manager and sponsor informed of the testing project’s progress and minimize the chance of surprises. The testing manager needs to define who needs the test information, what information they need, and how often the information is to be provided. The objectives of test status reporting are to report the progress of the testing toward its objectives and to report test issues, problems, and concerns.

Two key reports that need to be published are:

  1. Interim Test Report—An interim test report is a report published between testing spirals indicating the status of the testing effort.

  2. System Summary Report—A test summary report is a comprehensive test report after all spiral testing has been completed.

Task 18: Define Approval Procedures

Approval procedures are critical in a testing project. They help provide the necessary agreement between members of the project team. The testing manager should define who needs to approve a test deliverable, when it will be approved, and what the backup plan is if an approval cannot be obtained. The approval procedure can vary from a formal sign-off of a test document to an informal review with comments. Table 14.5 shows test deliverables for which approvals are required or recommended, and by whom. (Also see Appendix E17, “Test Approvals,” for a matrix that can be used to formally document management approvals for test deliverables.)

Table 14.5   Deliverable Approvals


Step 2: Define the Metric Objectives

“You can’t control what you can’t measure.” This is a quote from Tom DeMarco’s book, Controlling Software Projects, in which he describes how to organize and control a software project so that it is measurable in the context of time and cost projections. Control is the extent to which a manager can ensure minimum surprises. Deviations from the plan should be signaled as early as possible for timely corrective action. Another quote from DeMarco’s book, “The only unforgivable failure is the failure to learn from past failure,” stresses the importance of estimating and measurement. Measurement is a recording of past effects to quantitatively predict future effects.

Task 1: Define the Metrics

Software testing, treated as a development project in its own right, has deliverables such as test plans, test designs, test development, and test execution. The objective of this task is to apply the principles of metrics to control the testing process. A metric is a measurable indication of some quantitative aspect of a system and has the following characteristics:

  1. Measurability—A metric point must be measurable for it to be a metric, by definition. If the phenomenon cannot be measured, there is no way to apply management methods to control it.

  2. Independence—Metrics need to be independent of human influence. There should be no way of changing the measurement other than by changing the phenomenon that produced the metric.

  3. Accountability—Any analytical interpretation of the raw metric data rests on the data itself; it is therefore necessary to save the raw data and maintain a methodical audit trail of the analytical process.

  4. Precision—Precision is a function of accuracy. The key to precision is, therefore, that a metric be explicitly documented as part of the data collection process. If a metric varies, it can be measured as a range or tolerance.

A metric can be a “result” or a “predictor.” A result metric measures a completed event or process. Examples include the actual total elapsed time to process a business transaction or the total test costs of a project. A predictor metric is an early-warning metric that has a strong correlation to some later result. An example is using statistical regression analysis to predict the response time when more terminals are added to a system, before that configuration has been measured. A result or predictor metric can also be a derived metric, that is, one obtained from a calculation or graphical technique involving one or more other metrics.
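
The following is a minimal sketch of such a predictor metric: an ordinary least-squares line is fitted to measured response times at known terminal counts and then used to predict the response time at an unmeasured terminal count. The measurements are hypothetical illustration data.

terminals =     [10, 20, 30, 40]            # observed number of terminals
response_time = [1.2, 1.9, 2.7, 3.4]        # observed seconds per transaction

def fit_line(xs, ys):
    """Ordinary least-squares fit returning (slope, intercept)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

slope, intercept = fit_line(terminals, response_time)
predicted = slope * 60 + intercept          # predictor metric for 60 terminals
print(f"Predicted response time at 60 terminals: {predicted:.2f} seconds")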

The motivation for collecting test metrics is to make the testing process more effective. This is achieved by carefully analyzing the metric data and taking the appropriate action to correct problems. The starting point is to define the metric objectives of interest. Some examples include the following:

  1. Defect analysis—Every defect must be analyzed to answer such questions as the root causes, how it was detected, when it was detected, who detected it, and so on (a small sketch of this kind of analysis appears after this list).

  2. Test effectiveness—How well is testing doing, for example, return on investment?

  3. Development effectiveness—How well is development fixing defects?

  4. Test automation—How much effort is expended on test automation?

  5. Test cost—What are the resources and time spent on testing?

  6. Test status—Another important metric is status tracking, or where are we in the testing process?

  7. User involvement—How much is the user involved in testing?
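
The following is a minimal sketch of the defect analysis and test status objectives above, computing defects by type and average time to close from a hypothetical export of the defect-tracking system.

from datetime import date

# Hypothetical defect records; a real export would come from the tracking tool.
defects = [
    {"id": "DR-01", "type": "logic",     "opened": date(2024, 3, 1), "closed": date(2024, 3, 4)},
    {"id": "DR-02", "type": "interface", "opened": date(2024, 3, 2), "closed": date(2024, 3, 9)},
    {"id": "DR-03", "type": "logic",     "opened": date(2024, 3, 5), "closed": None},
]

# Defect analysis: how many defects of each type have been recorded?
by_type = {}
for d in defects:
    by_type[d["type"]] = by_type.get(d["type"], 0) + 1
print(by_type)                               # {'logic': 2, 'interface': 1}

# Test status: average open time of the defects that have been closed.
closed = [d for d in defects if d["closed"] is not None]
average_open_days = sum((d["closed"] - d["opened"]).days for d in closed) / len(closed)
print(f"Average days to close: {average_open_days:.1f}")   # 5.0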

Task 2: Define the Metric Points

Table 14.6 lists some metric points associated with the general metrics selected in the previous task and the corresponding actions to improve the testing process. Also shown is the source, or derivation, of the metric point.

Table 14.6   Metric Points


Step 3: Review/Approve the Plan

Task 1: Schedule/Conduct the Review

The test plan review should be scheduled well in advance of the actual review, and the participants should have the latest copy of the test plan.

As with any interview or review, it should contain certain elements. The first is defining what will be discussed, or “talking about what we are going to talk about.” The second is discussing the details, or “talking about it.” The third is summarization, or “talking about what we talked about.” The final element is timeliness. The reviewer should state up front the estimated duration of the review and set the ground rule that if time expires before completing all items on the agenda, a follow-on review will be scheduled.

The purpose of this task is for development and the project sponsor to agree and accept the test plan. If there are any suggested changes to the test plan during the review, they should be incorporated into the test plan.

Task 2: Obtain Approvals

Approval is critical in a testing effort, for it helps provide the necessary agreements between testing, development, and the sponsor. The best approach is a formal sign-off procedure for the test plan. If this is the case, use the management approval sign-off forms. However, if a formal agreement procedure is not in place, send a memo to each key participant, including at least the project manager, development manager, and sponsor. In the document, attach the latest test plan, point out that all their feedback has been incorporated, and note that if you do not hear from them, it is assumed that they agree with the plan. Finally, indicate that in a spiral development environment, the test plan will evolve with each iteration but that you will include them in any modification.
