Chapter 6

Overview

The following provides an overview of the waterfall life-cycle development methodology and the associated testing activities. Deming’s continuous quality improvement is applied to the life cycle through technical reviews and testing techniques.

Waterfall Development Methodology

The life-cycle development, or waterfall, approach breaks the development cycle down into discrete phases, each with a rigid sequential beginning and end (see Figure 6.1). Each phase is fully completed before the next is started; once a phase is completed, in theory, one never goes back to change it during development.

In Figure 6.1 you can see that the first phase in the waterfall is user requirements. In this phase, the users are interviewed, their requirements are analyzed, and a document is produced detailing the users’ requirements. Any reengineering or process redesign is incorporated into this phase.

In the next phase, logical design, entity-relationship diagrams, process decomposition diagrams, and data flow diagrams are created to break the system down into manageable components from a data and functional point of view. The outputs from the logical design phase are used to develop the physical design of the system. During the physical and program unit design phases, various structured design techniques, such as database schemas, Yourdon structure charts, and Warnier–Orr diagrams, are used to produce a design specification that will be used in the next phase.

Figure 6.1   Waterfall development methodology.

In the program unit design phase, the physical design is broken down into specifications for the individual program units. Once this is complete, the system enters the coding phase, where it is written in a programming language and then unit (component) tested, integration tested, system tested, and finally user tested (often called acceptance testing).

Now the application is delivered to the users for the operation and maintenance phase (not shown in Figure 6.1). Defects introduced during the life-cycle phases are detected and corrected, and new enhancements are incorporated into the application.

Continuous Improvement “Phased” Approach

Deming’s continuous improvement process, which was discussed in the previous section, is effectively applied to the waterfall development cycle using the Deming quality cycle, or PDCA; that is, Plan, Do, Check, and Act. It is applied from two points of view: software testing, and quality control or technical reviews.

As defined in Section 1, “Software Quality in Perspective,” the three major components of quality assurance are software testing, quality control, and software configuration management. The purpose of software testing is to verify and validate the life-cycle activities to ensure that the software design, code, and documentation meet all the requirements imposed on them. Software testing focuses on test planning, test design, test development, and test execution. Quality control is the process and methods used to monitor work and observe whether requirements are met; it focuses on structured walkthroughs and inspections to remove defects introduced during the software development life cycle.

Psychology of Life-Cycle Testing

In the waterfall development life cycle, a concerted effort is typically made to keep the testing organization separate from the development organization, with a different reporting structure. The basis for this is that because requirements and design documents have been created at specific phases in the development life cycle, a separate quality assurance organization should be able to translate these documents into test plans, test cases, and test specifications. Underlying assumptions include the belief that (1) programmers should not test their own programs and (2) programming organizations should not test their own programs.

It is thought that software testing is a destructive process and that it would be very difficult for a programmer to suddenly change his perspective from developing a software product to trying to find defects, or breaking the software. It is believed that programmers cannot effectively test their own programs because they cannot bring themselves to attempt to expose errors.

Part of this argument is that there will be errors due to the programmer’s misunderstanding of the requirements of the programs. Thus, a programmer testing his own code would have the same bias, and would not be as effective testing it as someone else.

It is not impossible for a programmer to test her own programs, but testing is more effective when performed by someone who does not have a stake in it, as a programmer does. Because the development deliverables have been documented, why not let another individual verify them?

It is thought that a programming organization is measured by its ability to produce a program or system on time and economically. As with an individual programmer, it is difficult for the programming organization to be objective. From the programming organization’s point of view, if a concerted effort were made to find as many defects as possible, the project would probably be late and not cost-effective. The result is lower quality.

From a practical point of view, an independent organization should be responsible for the quality of the software products. Product test or quality assurance organizations were created to serve as independent parties.

Software Testing as a Continuous Improvement Process

Software life-cycle testing means that testing occurs in parallel with the development cycle and is a continuous process (see Figure 6.2). The software testing process should start early in the application life cycle, not just in the traditional validation testing phase after the coding phase has been completed. Testing should be integrated into application development. For this, there needs to be a commitment on the part of the development organization and close communication with the quality assurance function.

Figure 6.2   Development phases versus testing types.

A test plan is initiated during the requirements phase. It organizes the testing work: it is a document describing the approach to be taken for the intended testing activities, and it includes the items to be tested, the types of tests to be performed, test schedules, human resources, reporting procedures, evaluation criteria, and so on.

During logical, physical, and program unit design, the test plan is refined with more details. Test cases are also created. A test case is a specific set of test data and test scripts. A test script guides the tester through a test and ensures consistency among separate executions of the test. A test case also includes the expected results, so that the actual results can be verified against them. During the coding phase, test scripts and test data are generated. During application testing, the test scripts are executed and the results are analyzed.
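
As an illustration, here is a minimal sketch of a test script held as data, so that every execution follows identical steps and is checked against the same expected result. The steps, test data, and stand-in step executor are all invented for illustration.

```python
# Sketch: a test script as data, so separate executions of the test
# follow the same steps and are verified against the same expected
# result. Steps, data, and the step executor are invented examples.

script = {
    "steps": [
        "Open the login screen",
        "Enter user ID and password",
        "Press Enter",
    ],
    "test_data": {"user": "clerk01", "password": "secret"},
    "expected": "Main menu is displayed",
}

def run(script, perform_step):
    """Execute each scripted step and compare the final outcome
    with the expected result recorded in the script."""
    outcome = None
    for step in script["steps"]:
        outcome = perform_step(step, script["test_data"])
    return outcome == script["expected"]

# Stand-in step executor that always ends at the main menu.
print("Test passed:", run(script, lambda step, data: "Main menu is displayed"))
```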

Figure 6.2 shows a correspondence between application development and the testing activities. The application development cycle proceeds from user requirements and design until the code is completed. During test design and development, the acceptance test criteria are established in a test plan. As more details are refined, the system, integration, and unit testing requirements are established. There may be a separate test plan for each test type, or one plan may be used.

During test execution, the process is reversed. Test execution starts with unit testing. Integration tests are performed that combine individual unit-tested pieces of code. Once this is completed, the system is tested from a total system point of view. This is known as system testing. System testing is a multifaceted test to evaluate the functionality, performance, and usability of the system. The final test is the acceptance test, which is a user-run test that verifies the ability of the system to meet the original user objectives and requirements. In some cases the system test serves as the acceptance test.
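
To make the execution order concrete, here is a minimal sketch, using Python’s unittest, of the kind of test that runs first, at the unit level; the function under test is a stand-in invented for illustration.

```python
import unittest

def apply_discount(price, percent):
    """Stand-in unit under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    """Unit test: exercises one component in isolation, before
    integration, system, and acceptance testing take place."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.00, 15), 85.00)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.00, 150)

if __name__ == "__main__":
    unittest.main()
```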

If you will recall, the PDCA approach (i.e., Plan, Do, Check, and Act) is a control mechanism used to control, supervise, govern, regulate, or restrain a system. The approach first defines the objectives of a process, develops and carries out the plan to meet those objectives, and checks to determine if the anticipated results are achieved. If they are not achieved, the plan is modified to fulfill the objectives. The PDCA quality cycle can be applied to software testing.
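
The control-loop character of PDCA can be sketched in a few lines. This is a deliberately simplified illustration, with the four steps supplied as placeholder callables, not an implementation of any particular process.

```python
# Deliberately simplified sketch of the PDCA control loop: the four
# steps are supplied by the caller as functions (placeholders).

def pdca(objectives, plan, do, check, act):
    """Repeat Plan-Do-Check-Act until results meet the objectives."""
    current_plan = plan(objectives)          # Plan: define how to meet objectives
    while True:
        results = do(current_plan)           # Do: carry out the plan
        if check(results, objectives):       # Check: anticipated results achieved?
            return results
        current_plan = act(current_plan, results)  # Act: modify the plan, repeat
```

In testing terms, the Plan step corresponds to the test plan, Do to test design and execution, Check to evaluating results and metrics, and Act to updating the plan and test suites, as the following paragraphs describe.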

The Plan step of the continuous improvement process, when applied to software testing, starts with a definition of the test objectives, that is, what is to be accomplished as a result of testing. Test objectives do more than simply ensure that the software performs according to specifications; they ensure that all responsible individuals contribute to the definition of the test criteria, to maximize quality.

A major deliverable of this step is a software test plan. A test plan is the basis for accomplishing testing. The test plan should be considered an ongoing document. As the system changes, so does the plan. The test plan also becomes part of the system maintenance documentation after the application is delivered to the user. The outline of a good test plan includes an introduction, the overall plan, and testing requirements. As more detail is available, the business functions, test logs, problem and summary reports, test software, hardware, data, personnel requirements, test schedule, test entry criteria, and exit criteria are added.

The Do step of the continuous improvement process, when applied to software testing, describes how to design and execute the tests included in the test plan. The test design includes test cases, test procedures and scripts, expected results, the function/test case matrix, test logs, and so on. The more definitive a test plan is, the easier the test design will be. If the system changes between development of the test plan and the time the tests are executed, the test plan should be updated accordingly; that is, whenever the system changes, so should the test plan.
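
One of the design artifacts named above, the function/test case matrix, can be sketched as follows; the business functions and test IDs are invented for illustration.

```python
# Sketch of a function/test case matrix: each business function maps
# to the set of test cases that exercise it. Names are invented.

matrix = {
    "Create order":  {"TC-001", "TC-002"},
    "Cancel order":  {"TC-003"},
    "Print invoice": set(),   # a coverage gap
}

def uncovered_functions(matrix):
    """Return functions with no test case, i.e., coverage gaps."""
    return [func for func, tests in matrix.items() if not tests]

print(uncovered_functions(matrix))  # ['Print invoice']
```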

The test team is responsible for the execution of the tests and must ensure that the test is executed according to the plan. Elements of the Do step include selecting test tools; defining the resource requirements; and defining the test setup conditions and environment, test requirements, and the actual testing of the application.

The Check step of the continuous improvement process, when applied to software testing, includes the evaluation of how the testing process is progressing. Again, the credo for statisticians, “In God we trust. All others must use data,” is crucial to the Deming method. It is important to base decisions as much as possible on accurate and timely data. Testing metrics such as the number and types of defects, the workload effort, and the schedule status are key.
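
As a sketch of the data-driven checking described here, the following tallies a few of the metrics named above from a hypothetical defect log; the log format and its contents are assumed for illustration.

```python
from collections import Counter

# Hypothetical defect log: each record notes the defect type and the
# phase in which it was found. Format and data are invented examples.
defects = [
    {"type": "logic",        "found_in": "unit"},
    {"type": "interface",    "found_in": "integration"},
    {"type": "logic",        "found_in": "system"},
    {"type": "requirements", "found_in": "system"},
]

print(Counter(d["type"] for d in defects))      # defects by type
print(Counter(d["found_in"] for d in defects))  # defects by phase found
```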

It is also important to create test reports. Testing began with setting objectives, identifying functions, selecting tests to validate the test functions, creating test conditions, and executing the tests. To construct test reports, the test team must formally record the results and relate them to the test plan and system objectives; in this sense, the test report retraces all the previous testing tasks.

Summary and interim test reports should be written at the end of testing and at key testing checkpoints. The process used for report writing is the same whether it is an interim or a summary report, and, similar to other tasks in testing, report writing is also subject to quality control; that is, it should be reviewed. A test report should at least include a record of defects discovered, data reduction techniques, root cause analysis, the development of findings, and recommendations to management to improve the testing process.

The Act step of the continuous improvement process, when applied to software testing, includes devising measures for appropriate actions relating to work that was not performed according to the plan or results that were not anticipated in the plan. This analysis is fed back to the plan. Examples include updating the test suites, test cases, and test scripts, and reevaluating the people, process, and technology dimensions of testing.

The Testing Bible: Software Test Plan

A test plan is a document describing the approach to be taken for intended testing activities and serves as a service-level agreement between the quality assurance testing function and other interested parties, such as development. A test plan should be developed early in the development cycle, and will help improve the interactions of the analysis, design, and coding activities. A test plan defines the test objectives, scope, strategy and approach, test procedures, test environment, test completion criteria, test cases, items to be tested, the tests to be performed, the test schedules, personnel requirements, reporting procedures, assumptions, risks, and contingency planning.

When developing a test plan, one should be sure that it is simple, complete, current, and accessible by the appropriate individuals for feedback and approval. A good test plan flows logically; minimizes redundant testing; demonstrates full functional coverage; provides workable procedures for monitoring, tracking, and reporting test status; contains a clear definition of the roles and responsibilities of the parties involved; has target delivery dates; and clearly documents the test results.

There are two ways of building a test plan. The first approach is a master test plan that provides an overview of each detailed test plan, that is, a test plan of test plans. A detailed test plan verifies a particular phase in the waterfall development life cycle; examples include unit, integration, system, and acceptance test plans. Other detailed test plans cover application enhancements, regression testing, and package installation. Unit test plans are code oriented and very detailed, but short because of their limited scope. System or acceptance test plans focus on the functional, or black-box, view of the entire system, not just a software unit. (See Appendix E1, “Unit Test Plan,” and Appendix E2, “System/Acceptance Test Plan,” for more details.)

The second approach is a single test plan, often called the acceptance/system test plan, that covers unit, integration, system, and acceptance testing and all the planning considerations needed to complete the tests.

A major component of a test plan, often in the “Test Procedure” section, is a test case, as shown in Figure 6.3. (Also see Appendix E8, “Test Case.”) A test case defines the step-by-step process whereby a test is executed. It includes the objectives and conditions of the test, the steps needed to set up the test, the data inputs, the expected results, and the actual results. Other information such as the software, environment, version, test ID, screen, and test type is also provided.
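
The fields just listed suggest a simple record. Here is a sketch using a Python dataclass, where the field names follow the text but the representation itself is an assumption.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TestCase:
    """Sketch of the test case form described above. Fields mirror the
    items named in the text; the layout of Figure 6.3 may differ."""
    test_id: str
    objective: str
    conditions: str
    setup_steps: List[str] = field(default_factory=list)
    data_inputs: dict = field(default_factory=dict)
    expected_results: str = ""
    actual_results: Optional[str] = None  # recorded during execution
    software: str = ""
    environment: str = ""
    version: str = ""
    screen: str = ""
    test_type: str = ""  # e.g., unit, integration, system, acceptance
```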

Major Steps in Developing a Test Plan

A test plan is the basis for accomplishing testing and should be considered a living document; that is, as the application changes, the test plan should change.

A good test plan encourages the attitude of “quality before design and coding.” It is able to demonstrate that it contains full functional coverage, and the test cases trace back to the functions being tested. It also contains workable mechanisms for monitoring and tracking discovered defects and report status. Appendix E2 is a System/Acceptance Test Plan template that combines unit, integration, and system test plans into one. It is also used in this section to describe how a test plan is built during the waterfall life-cycle development methodology.

The following are the major steps that need to be completed to build a good test plan.

Step 1: Define the Test Objectives

The first step in planning any test is to establish what is to be accomplished as a result of the testing. This step ensures that all responsible individuals contribute to the definition of the test criteria that will be used. The developer of a test plan determines what is going to be accomplished with the test, the specific tests to be performed, the test expectations, the critical success factors of the test, constraints, scope of the tests to be performed, the expected end products of the test, a final system summary report (see Appendix E11, “System Summary Report”), and the final signatures and approvals. The test objectives are reviewed and approval for the objectives is obtained.

Step 2: Develop the Test Approach

The test plan developer outlines the overall approach, or how each test will be performed. This includes the testing techniques that will be used; test entry and exit criteria; procedures to coordinate testing activities with development; the test management approach, such as defect reporting and tracking, test progress tracking, and status reporting; test resources and skills; risks; and a definition of the test basis (functional requirement specifications, etc.).
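
Entry and exit criteria lend themselves to a simple checklist check; a sketch with invented criteria follows.

```python
# Sketch of test entry/exit criteria as checklists. The criteria
# themselves are invented examples, not a standard list.

entry_criteria = {
    "test environment ready": True,
    "test cases reviewed and approved": True,
    "code delivered to test": False,
}

exit_criteria = {
    "all planned test cases executed": False,
    "no open severity-1 defects": True,
}

def unmet(criteria):
    """Return the criteria that are not yet satisfied."""
    return [name for name, met in criteria.items() if not met]

print("Blocking entry:", unmet(entry_criteria))  # ['code delivered to test']
print("Blocking exit:", unmet(exit_criteria))    # ['all planned test cases executed']
```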

Figure 6.3   Test case form.

Step 3: Define the Test Environment

The test plan developer examines the physical test facilities, defines the hardware, software, and networks, determines which automated test tools and support tools are required, defines the help desk support required, builds special software required for the test effort, and develops a plan to support the foregoing.

Step 4: Develop the Test Specifications

The developer of the test plan forms the test team to write the test specifications, develops test specification format standards, divides up the work tasks and work breakdown, assigns team members to tasks, and identifies features to be tested. The test team documents the test specifications for each feature and cross-references them to the functional specifications. It also identifies the interdependencies and work flow of the test specifications and reviews the test specifications.

Step 5: Schedule the Test

The test plan developer develops a test schedule based on the resource availability and development schedule, compares the schedule with deadlines, balances resources and workload demands, defines major checkpoints, and develops contingency plans.

Step 6: Review and Approve the Test Plan

The test plan developer or manager schedules a review meeting with the major players, reviews the plan in detail to ensure it is complete and workable, and obtains approval to proceed.

Components of a Test Plan

A system or acceptance test plan is based on the requirement specifications and is required in a very structured development and test environment. System testing evaluates the functionality and performance of the whole application and consists of a variety of tests, including performance, usability, stress, documentation, security, volume, recovery, and so on. Acceptance testing is a user-run test that demonstrates the application’s ability to meet the original business objectives and system requirements, and usually consists of a subset of system tests.

Table 6.1 cross-references the sections of Appendix E2, “System/Acceptance Test Plan,” against the waterfall life-cycle development phases. “Start” in an intersection indicates the recommended start time, or first cut, of a test activity. “Refine” indicates a refinement of a test activity started in a previous life-cycle phase. “Complete” indicates the life-cycle phase in which the test activity is completed.

Technical Reviews as a Continuous Improvement Process

Quality control is a key preventive component of quality assurance. Defect removal via technical reviews during the development life cycle is an example of a quality control technique. The purpose of technical reviews is to increase the efficiency of the development life cycle and provide a method to measure the quality of the products. Technical reviews reduce the amount of rework, testing, and “quality escapes,” that is, undetected defects. They are the missing links to removing defects and can also be viewed as a testing technique, even though we have categorized testing as a separate quality assurance component.

Originally developed by Michael Fagan of IBM in the 1970s, inspections have several aliases; they are often referred to interchangeably as “peer reviews,” “inspections,” or “structured walkthroughs.” Inspections are performed at each phase of the development life cycle, from user requirements through coding. In the coding phase, code walkthroughs are performed in which the developer walks through the code for the reviewers.

Research demonstrates that technical reviews can be far more productive than automated testing techniques in which the application is executed and tested. A technical review is a form of testing, that is, manual testing not involving program execution on the computer. Structured walkthroughs and inspections are a more efficient means of removing defects than software testing alone. They also remove defects earlier in the life cycle, thereby reducing defect-removal costs significantly. They represent a highly efficient, low-cost technique of defect removal and can potentially reduce defect-removal costs by more than two thirds compared to dynamic software testing. A side benefit of inspections is the ability to periodically analyze the defects recorded and remove their root causes early in the software development life cycle.

The purpose of the following section is to provide a framework for implementing software reviews. Discussed are the rationale for reviews, the roles of the participants, planning steps for effective reviews, scheduling, allocation, agenda definition, and review reports.

Table 6.1   System/Acceptance Test Plan versus Phase

Motivation for Technical Reviews

The motivation for a review is that it is impossible to test all software. Clearly, exhaustive testing of code is impractical. Technology also does not exist for testing a specification or high-level design. The idea of testing a software test plan is also bewildering. Testing also does not address quality issues or adherence to standards, which are possible with review processes.

There are a variety of software technical reviews available for a project, depending on the type of software product and the standards that affect the review processes. The types of reviews depend on the deliverables to be produced. For example, a Department of Defense contract imposes certain stringent standards for reviews that must be followed. These standards may not be required for in-house application development.

A review increases the quality of the software product, reduces rework and ambiguous effort, reduces testing, defines test parameters, and is a repeatable and predictable process. It is an effective method for finding defects and discrepancies; it increases the reliability of the delivered product, has a positive impact on the schedule, and reduces development costs.

Early detection of errors reduces rework at later development stages, clarifies requirements and design, and identifies interfaces. It reduces the number of failures during testing, reduces the number of retests, identifies requirements testability, and helps identify missing or ambiguous requirements.

Types of Reviews

There are formal and informal reviews. Informal reviews occur spontaneously among peers; the reviewers do not necessarily have any responsibility and do not have to produce a review report. Formal reviews are carefully planned meetings in which reviewers are held responsible for their participation, and a review report is generated that contains action items.

The spectrum of reviews ranges from very informal peer reviews to extremely formal and structured inspections. The complexity of a review is usually correlated with the complexity of the project; as the complexity of a project increases, so does the need for more formal reviews.

Structured Walkthroughs

A structured walkthrough is a presentation review in which a review participant, usually the developer of the software being reviewed, narrates a description of the software, and the remainder of the group provides feedback throughout the presentation. Testing deliverables such as test plans, test cases, and test scripts can also be reviewed using the walkthrough technique. These are referred to as presentation reviews because the bulk of the feedback usually occurs only for the material actually presented.

Advance preparation of the reviewers is not necessarily required. One potential disadvantage of a structured walkthrough is that, because of its informal structure, disorganized and uncontrolled reviews may result. Walkthroughs may also be stressful if the developer is conducting the walkthrough.

Inspections

The inspection technique is a formally defined process for verification of the software product throughout its development. All software deliverables are examined at defined phases, from requirements through coding, to assess their current status and quality effectiveness. One of the major decisions within an inspection is whether a software deliverable can proceed to the next development phase.

Software quality is achieved during the early stages of a product, when the cost to remedy defects is 10 to 100 times less than it would be during testing or maintenance. It is, therefore, advantageous to find and correct defects as near to their point of origin as possible. Exit criteria are the standard against which inspections measure completion of the product at the end of a phase.
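
To make the economics concrete, here is a small worked example under assumed, purely illustrative costs, using the 10-to-100-times range quoted above.

```python
# Illustrative arithmetic only: assume a defect fixed at its point of
# origin (e.g., during a design inspection) costs 1 unit. The quoted
# 10-to-100-times range then implies these later-phase costs.

cost_at_origin = 1.0
late_multipliers = {"testing": 10, "maintenance": 100}

for phase, mult in late_multipliers.items():
    late_cost = cost_at_origin * mult
    print(f"Fixed in {phase}: {late_cost:.0f} units; early removal "
          f"saves {late_cost - cost_at_origin:.0f} units per defect")
```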

The advantages of inspections are that they are very systematic, controlled, and less stressful. The inspection process promotes the concept of egoless programming. If managed properly, it is a forum in which developers need not become emotionally protective of the work produced. An inspection requires an agenda to guide the review preparation and the meeting itself. Inspections have rigorous entry and exit requirements for the project work deliverables.

A major difference between structured walkthroughs and inspections is that inspections collect information to improve the development and review processes themselves. In this sense, an inspection is more of a quality assurance technique than a walkthrough is.

Phased inspections apply the PDCA (Plan, Do, Check, and Act) quality model. Each development phase has entrance requirements, that is, how a deliverable qualifies to enter an inspection, and exit criteria, that is, how to know when to exit the inspection. Between entry and exit, the project deliverables are inspected. Table 6.2 shows the steps of a phased inspection and the corresponding PDCA steps.

The Plan step of the continuous improvement process consists of inspection planning and preparing an education overview. The strategy of an inspection is to design and implement a review process that is timely, efficient, and effective. Specific products are designated, as are acceptable criteria, and meaningful metrics are defined to measure and maximize the efficiency of the process. Inspection materials must meet inspection entry criteria. The right participants are identified and scheduled. In addition, a suitable meeting place and time are decided. The group of participants is educated on what is to be inspected and their roles.

The Do step includes individual preparation for the inspections and the inspection itself. Participants learn the material and prepare for their assigned roles, and the inspection proceeds. Each reviewer is assigned one or more specific aspects of the product to be reviewed in terms of technical accuracy, standards and conventions, quality assurance, and readability.

Table 6.2   PDCA Process and Inspections

PDCA Step      Inspection Steps
Plan           Inspection planning; education overview
Do             Individual preparation; inspection meeting
Check          Identification and documentation of defects
Act            Rework; follow-up

The Check step includes the identification and documentation of the defects uncovered. Defects are discovered during the inspection, but solution hunting and the discussion of design alternatives are discouraged. Inspections are a review process, not a solution session.

The Act step includes the rework and follow-up required to correct any defects. The author reworks all discovered defects. The team ensures that all the potential corrective actions are effective and no secondary defects are inadvertently introduced.

By going around the PDCA cycle for each development phase using inspections, we verify and improve each phase deliverable at its origin and stop it dead in its tracks when defects are discovered (see Figure 6.4). The next phase cannot start until the discovered defects are corrected. The reason is that it is advantageous to find and correct defects as near to their point of origin as possible. Repeated application of the PDCA results in an ascending spiral, facilitating quality improvement at each phase. The end product is dramatically improved, and the bewildering task of the software testing process will be minimized; for example, a lot of the defects will have been identified and corrected by the time the testing team receives the code.

Participant Roles

Roles will depend on the specific review methodology being followed, that is, structured walkthroughs or inspections. These roles are functional, which implies that it is possible in some reviews for a participant to execute more than one role. The role of the review participants after the review is especially important because many errors identified during a review may not be fixed correctly by the developer. This raises the issue of who should follow up on a review and whether another review is necessary.

Figure 6.4   Phased inspections as an ascending spiral.

The review leader is responsible for the review. This role requires scheduling the review, conducting an orderly review meeting, and preparing the review report. The review leader may also be responsible for ensuring that action items are properly handled after the review process. Review leaders must possess both technical and interpersonal management characteristics. The interpersonal management qualities include leadership ability, mediator skills, and organizational talents. The review leader must keep the review group focused at all times and prevent the meeting from becoming a problem-solving session. Material presented for review should not require the review leader to spend more than two hours for preparation.

The recorder role in the review process guarantees that all information necessary for an accurate review report is preserved. The recorder must understand complicated discussions and capture their essence in action items. The role of the recorder is clearly a technical function and one that cannot be performed by a nontechnical individual.

The reviewer role is to objectively analyze the software and be accountable for the review. An important guideline is that the reviewer must keep in mind that it is the software that is being reviewed and not the producer of the software. This cannot be overemphasized. Also, the number of reviewers should be limited to six. If too many reviewers are involved, productivity will decrease.

In a technical review, the producer may actually lead the meeting in an organized discussion of the software. A degree of preparation and planning is needed in a technical review to present material at the proper level and pace. The attitude of the producer is also important, and it is essential that he or she does not become defensive. This can be facilitated by the group leader’s emphasizing that the purpose of the inspection is to uncover defects and produce the best product possible.

Steps for an Effective Review

Step 1: Plan for the Review Process

Planning can be described at both the organizational level and the specific review level. Considerations at the organizational level include the number and types of reviews that are to be performed for the project. Project resources must be allocated for accomplishing these reviews.

At the specific review level, planning considerations include selecting participants and defining their respective roles, scheduling the review, and developing a review agenda. There are many issues involved in selecting the review participants. It is a complex task normally performed by management, with technical input. When selecting review participants, care must be exercised to ensure that each aspect of the software under review can be addressed by at least some subset of the review team.

To minimize the stress and possible conflicts in the review processes, it is important to discuss the role that a reviewer plays in the organization and the objectives of the review. Focusing on the review objectives will lessen personality conflicts.

Step 2: Schedule the Review

A review should ideally take place soon after a producer has completed the software but before additional effort is expended on work dependent on it. The review leader must set the agenda based on a well-thought-out schedule. If all the inspection items have not been completed, another inspection should be scheduled.

The problem of allocating sufficient time to a review stems from the difficulty in estimating the time needed to perform the review. The approach that must be taken is the same as that for estimating the time to be allocated for any meeting; that is, an agenda must be formulated and time estimated for each agenda item. An effective technique is to estimate the time for each inspection item on a time line.
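
A sketch of the time-line technique just mentioned, with invented agenda items and estimates:

```python
# Sketch: estimate total review time by laying per-item estimates on a
# time line. Agenda items and minutes are invented for illustration.

agenda = [
    ("Overview of the deliverable", 10),
    ("Walk through section 1", 25),
    ("Walk through section 2", 25),
    ("Record action items and wrap up", 15),
]

elapsed = 0
for item, minutes in agenda:
    print(f"{elapsed:3d}-{elapsed + minutes:3d} min  {item}")
    elapsed += minutes
print(f"Total estimated review time: {elapsed} minutes")
```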

Another scheduling problem arises when a review runs too long. This requires that review processes be focused in terms of their objectives. Review participants must understand these review objectives and their implications in terms of actual review time, as well as preparation time, before committing to the review. The deliverable to be reviewed should meet a certain set of entry requirements before the review is scheduled. Exit requirements must also be defined.

Step 3: Develop the Review Agenda

A review agenda must be developed by the review leader and the producer prior to the review. Although review agendas are specific to any particular product and the objective of its review, generic agendas should be produced for related types of products. These agendas may take the form of checklists (see Appendix F, “Checklists,” for more details).

Step 4: Create a Review Report

The output of a review is a report. The format of the report is less important than its contents, which should address the management, user, developer, and quality assurance perspectives.

From a management perspective, the review report serves as a summary of the review that highlights what was reviewed, who did the reviewing, and their assessment. Management needs an estimate of when all action items will be resolved to successfully track the project.

The user may be interested in analyzing review reports for some of the same reasons as the manager. The user may also want to examine the quality of intermediate work products in an effort to monitor the development organization’s progress.

From a developer’s perspective, the critical information is contained in the action items. These may correspond to actual errors, possible problems, inconsistencies, or other considerations that the developer must address.

The quality assurance perspective of the review report is twofold: quality assurance must ensure that all action items in the review report are addressed, and it should also be concerned with analyzing the data on the review forms and classifying defects to improve the software development and review process. For example, a large number of specification errors might suggest a lack of rigor or time in the requirements specifications phase of the project. Another example is a large number of defects reported, suggesting that the software has not been adequately unit tested.
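
The kind of analysis described here can be sketched by tallying recorded defects by their phase of origin; the data below are invented for illustration.

```python
from collections import Counter

# Invented review data: each defect record carries the phase in which
# the defect originated. A skewed tally points at the step to improve.
defect_origins = [
    "requirements", "requirements", "requirements", "requirements",
    "design", "design",
    "coding",
]

tally = Counter(defect_origins)
worst_phase, count = tally.most_common(1)[0]
print(tally)
print(f"Most defects originate in {worst_phase} ({count}): "
      f"consider more rigor or time in that phase.")
```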
