Chapter 12

Development Methodology Overview

Limitations of Life-Cycle Development

In Section 2, “Waterfall Testing Review,” the waterfall development methodology was reviewed along with its associated testing activities. The life-cycle development methodology consists of distinct, sequential phases from requirements through coding. Life-cycle testing means that testing occurs in parallel with the development life cycle and is a continuous process. Although life-cycle (waterfall) development is effective for many large applications requiring substantial computing power (for example, DOD, financial, and security-based systems), it has a number of shortcomings:

  1. ■ The end users of the system are involved only at the very beginning and the very end of the process. As a result, the system they are given at the end of the development cycle is often not what they originally envisioned or thought they requested.

  2. ■ Long development cycles, combined with ever-shorter business cycles, create a gap between what is really needed and what is delivered.

  3. ■ End users are expected to describe in detail what they want from a system before the coding phase. This may seem logical to developers; however, some end users have never used a computer system before and are not certain of its capabilities.

  4. ■ When the end of a development phase is reached, the work is often not quite complete, but the methodology and project plans require that development press on regardless. In fact, a phase is rarely complete, and there is always more work that could be done. This results in the “rippling effect”: sooner or later, the team must return to a phase to complete the work.

  5. ■ Often, the waterfall development methodology is not strictly followed. In the haste to produce something quickly, critical parts of the methodology are not followed. The worst case is ad hoc development, in which the analysis and design phases are bypassed and the coding phase is the first major activity. This is an example of an unstructured development environment.

  6. ■ Software testing is often treated as a separate phase, beginning during coding as a validation technique, rather than being integrated into the whole development life cycle.

  7. ■ The waterfall development approach can be woefully inadequate for many development projects, even if it is followed. An implemented software system is not worth very much if it is not the system the user wanted. If the requirements are incompletely documented, the system will not survive user validation procedures; that is, it is the wrong system. Another variation is when the requirements are correct, but the design is inconsistent with the requirements. Once again, the completed product will probably fail the system validation procedures.

  8. ■ Because of these shortcomings, experts began to publish methodologies based on other approaches, such as prototyping.

The Client/Server Challenge

The client/server architecture for application development divides functionality between a client and server so that each performs its task independently. The client cooperates with the server to produce the required results.

The client is an intelligent workstation used by a single user, and because it has its own operating system, it can run other applications such as spreadsheets, word processors, and file processors. The client and the server process client/server application functions cooperatively. The server can be a PC, minicomputer, local area network, or even a mainframe. The server receives requests from the clients and processes them. The hardware configuration is determined by the application’s functional requirements.
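
To make this division of labor concrete, the following minimal sketch (in Python, purely for illustration) shows a client that formats a request and presents the result while the server performs the processing. The port, the request format, and the SUM operation are illustrative assumptions, not part of any particular product.

```python
# Minimal client/server sketch: the server owns the processing, the client
# formats the request and presents the result. Port and request format are
# illustrative assumptions.
import socket
import threading

HOST, PORT = "127.0.0.1", 5055
ready = threading.Event()

def server() -> None:
    """Accept one request, perform the processing, and send back the result."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        ready.set()                                   # server is now listening
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024).decode()        # e.g. "SUM 2 3 4"
            numbers = [int(n) for n in request.split()[1:]]
            conn.sendall(str(sum(numbers)).encode())  # the server does the work

def client() -> str:
    """Format a request, send it to the server, and present the answer."""
    ready.wait()
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"SUM 2 3 4")
        return cli.recv(1024).decode()

if __name__ == "__main__":
    threading.Thread(target=server, daemon=True).start()
    print("Server replied:", client())                # prints: Server replied: 9
```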

Some advantages of client/server applications include reduced costs, improved accessibility of data, and flexibility. However, justifying a client/server approach and ensuring its quality present additional difficulties not necessarily found in mainframe applications. Some of these problems include the following:

  1. ■ The typical graphical user interface has many more possible logic paths, which compounds the already large number of test cases found in the mainframe environment.

  2. ■ Client/server technology is complicated and, often, new to the organization. Furthermore, this technology often comes from multiple vendors and is used in multiple configurations and in multiple versions.

  3. ■ The fact that client/server applications are highly distributed results in a large number of failure sources and hardware/software configuration control problems.

  4. ■ A short- and long-term cost–benefit analysis must be performed to justify client/server technology in terms of the overall organizational costs and benefits.

  5. ■ Successful migration to client/server technology depends on matching migration plans to the organization’s readiness for that technology.

  6. ■ The effect of client/server technology on the user’s business may be substantial.

  7. ■ Choosing which applications will be the best candidates for a client/server implementation is not straightforward.

  8. ■ An analysis needs to be performed of which development technologies and tools enable a client/server solution.

  9. ■ The availability of client/server skills and resources, which are expensive, needs to be considered.

  10. ■ Although client/server technology is more expensive than mainframe computing, cost is not the only issue; function, business benefit, and pressure from end users must also be balanced.

Integration testing in a client/server environment can be challenging. Client and server applications are built separately, and when they are brought together, conflicts can arise no matter how clearly the interfaces are defined. When integrating applications, a defect may have one or several candidate resolutions, and there must be open communication between quality assurance and development.
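
One way to reduce such conflicts is for the client and server teams to agree on a message contract and check both sides against it before the pieces are brought together. The sketch below is a hypothetical illustration of such a check; the field names and the validate_reply helper are assumptions, not a prescribed interface.

```python
# Hypothetical interface contract shared by the client and server teams.
# Both sides validate their messages against it before integration, so
# mismatches surface as test failures rather than integration conflicts.
REPLY_CONTRACT = {
    "customer_id": int,    # required field and its expected type
    "balance": float,
    "status": str,
}

def validate_reply(reply: dict) -> list[str]:
    """Return a list of contract violations (an empty list means conformant)."""
    problems = []
    for field, expected_type in REPLY_CONTRACT.items():
        if field not in reply:
            problems.append(f"missing field: {field}")
        elif not isinstance(reply[field], expected_type):
            problems.append(f"{field}: expected {expected_type.__name__}, "
                            f"got {type(reply[field]).__name__}")
    return problems

# Example: the server team returns the balance as a string; the mismatch is
# caught before the client and server are integrated.
server_reply = {"customer_id": 42, "balance": "103.50", "status": "OK"}
print(validate_reply(server_reply))   # ["balance: expected float, got str"]
```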

In some circles there is a belief that the mainframe is dead and that client/server prevails. The truth of the matter is that applications using mainframe architecture are not dead, and client/server technology is not necessarily the panacea for all applications. The two will continue to coexist and complement each other in the future, and mainframes will certainly be part of any client/server strategy.

Psychology of Client/Server Spiral Testing

The New School of Thought

The psychology of life-cycle testing encourages testing by individuals outside the development organization. The motivation for this is that with the life-cycle approach, there typically exist clearly defined requirements, and it is more efficient for a third party to verify these. Testing is often viewed as a destructive process designed to break development’s work.

The psychology of spiral testing, on the other hand, encourages cooperation between quality assurance and the development organization. The basis of this argument is that, in a rapid application development environment, requirements may be incomplete or available only to varying degrees. Without cooperation, the testing function would have a difficult time defining the test criteria. The only practical alternative is for testing and development to work together.

Testers can be powerful allies to development and, with a little effort, they can be transformed from adversaries into partners. This is possible because most testers want to be helpful; they just need a little consideration and support. To achieve this, however, an environment needs to be created to bring out the best of a tester’s abilities. The tester and development manager must set the stage for cooperation early in the development cycle and communicate throughout the cycle.

Tester/Developer Perceptions

To understand some of the inhibitors to a good relationship between the testing function and development, it is helpful to understand how each views his or her role and responsibilities.

Testing is a difficult effort. It is a task that is both infinite and indefinite. No matter what testers do, they cannot be sure they will find all the problems, or even all the important ones.

Many testers are not really interested in testing and do not have the proper training in basic testing principles and techniques. Testing books or conferences typically treat the testing subject too rigorously and employ deep mathematical analysis. The insistence on formal requirement specifications as a prerequisite to effective testing is not realistic in the real world of a software development project.

It is hard to find individuals who are good at testing. It takes someone who is a critical thinker motivated to produce a quality software product, likes to evaluate software deliverables, and is not caught up in the assumption held by many developers that testing has a lesser job status than development. A good tester is a quick learner and eager to learn, is a good team player, and can effectively communicate both verbally and in writing.

The output from development is something that is real and tangible. A programmer can write code and display it to admiring customers, who assume it is correct. From a developer’s point of view, testing results in nothing more tangible than an accurate, useful, and all-too-fleeting perspective on quality. Given these perspectives, many developers and testers often work together in an uncooperative, if not hostile, manner.

In many ways the tester and developer roles are in conflict. A developer is committed to building something successful. A tester tries to minimize the risk of failure and tries to improve the software by detecting defects. Developers focus on technology, which takes a lot of time and energy when producing software. A good tester, on the other hand, is motivated to provide the user with the best software to solve a problem.

Testers are typically ignored until the end of the development cycle when the application is “completed.” Testers are always interested in the progress of development and realize that quality is only achievable when they take a broad point of view and consider software quality from multiple dimensions.

Project Goal: Integrate QA and Development

The key to integrating the testing and developing activities is for testers to avoid giving the impression that they are out to “break the code” or destroy development’s work. Ideally, testers are human meters of product quality and should examine a software product, evaluate it, and discover if the product satisfies the customer’s requirements. They should not be out to embarrass or complain, but inform development how to make their product even better. The impression they should foster is that they are the “developer’s eyes to improved quality.”

Development needs to be truly dedicated to quality and view the test team as an integral player on the development team. They need to realize that no matter how much work and effort has been expended by development, if the software does not have the correct level of quality, it is destined to fail. The testing manager needs to remind the project manager of this throughout the development cycle. The project manager needs to instill this perception in the development team.

Testers must coordinate with the project schedule and work in parallel with development. They need to be informed about what is going on in development, and so should be included in all planning and status meetings. This lessens the risk of introducing new bugs, known as “side effects,” near the end of the development cycle and also reduces the need for time-consuming regression testing.

Testers must be encouraged to communicate effectively with everyone on the development team. They should establish a good relationship with the software users, who can help them better understand acceptable standards of quality. In this way, testers can provide valuable feedback directly to development.

Testers should intensively review online help and printed manuals whenever they are available. Having technical writers and testers share notes, rather than both burdening development with requests for the same information, relieves some of the communication burden.

Testers need to know the objectives of the software product, how it is intended to work, how it actually works, the development schedule, any proposed changes, and the status of reported problems.

Developers need to know what problems were discovered, what part of the software is or is not working, how users perceive the software, what will be tested, the testing schedule, the testing resources available, what the testers need to know to test the system, and the current status of the testing effort.

When quality assurance starts working with a development team, the testing manager needs to interview the project manager and show an interest in working in a cooperative manner to produce the best software product possible. The next section describes how to accomplish this.

Iterative/Spiral Development Methodology

Spiral methodologies are a reaction to the traditional waterfall methodology of systems development, a sequential solution development approach. A common problem with the waterfall model is that the elapsed time for delivering the product can be excessive.

By contrast, spiral development expedites product delivery. A small but functioning initial system is built and quickly delivered, and then enhanced in a series of iterations. One advantage is that the clients receive at least some functionality quickly. Another is that the product can be shaped by iterative feedback; for example, users do not have to define every feature correctly and in full detail at the beginning of the development cycle, but can react to each iteration.

With the spiral approach, the product evolves continually over time; it is not static and may never be completed in the traditional sense. The term spiral refers to the fact that the traditional sequence of analysis–design–code–test phases is performed on a microscale within each spiral or cycle, in a short period of time, and then the phases are repeated within each subsequent cycle. The spiral approach is often associated with prototyping and rapid application development.

Traditional requirements-based testing expects that the product definition will be finalized and even frozen prior to detailed test planning. With spiral development, the product definition and specifications continue to evolve indefinitely; that is, there is no such thing as a frozen specification. A comprehensive requirements definition and system design probably never will be documented.

The only practical way to test in the spiral environment, therefore, is to “get inside the spiral.” Quality assurance must have a good working relationship with development. The testers must be very close to the development effort, and test each new version as it becomes available. Each iteration of testing must be brief, in order not to disrupt the frequent delivery of the product iterations. The focus of each iterative test must be first to test only the enhanced and changed features. If time within the spiral allows, an automated regression test also should be performed; this requires sufficient time and resources to update the automated regression tests within each spiral.
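
This prioritization can be expressed directly in the test suite. The sketch below assumes a pytest-style project in which tests are tagged by spiral; the marker names, test names, and the two stand-in functions are illustrative assumptions, not part of any particular product.

```python
# Sketch: tag tests by spiral so the enhanced and changed features are tested
# first, and the regression suite runs only if time within the spiral allows.
import pytest

# Stand-ins for the application code under test (illustrative only).
def apply_discount(amount: float, tier: str) -> float:
    return amount * 0.9 if tier == "GOLD" else amount

def add_tax(amount: float, rate: float) -> float:
    return amount * (1 + rate)

@pytest.mark.spiral3          # new or changed in the current spiral
def test_new_discount_rule():
    assert apply_discount(100.0, "GOLD") == pytest.approx(90.0)

@pytest.mark.regression       # stable behavior carried over from earlier spirals
def test_existing_tax_calculation():
    assert add_tax(100.0, 0.07) == pytest.approx(107.0)

# Typical use within a spiral (markers would be registered in pytest.ini):
#   pytest -m spiral3        # test only the enhanced and changed features
#   pytest -m regression     # run the regression suite if time allows
```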

Clients typically demand very fast turnaround on change requests; there may be neither a formal release schedule nor a willingness to wait for the next release to obtain a new system feature. Ideally, there should be an efficient, automated regression test facility for the product, which can be used for at least a brief test prior to the release of each new product version (see Section 6, “Modern Software Testing Tools,” for more details).

Spiral testing is a process of working from a base and building a system incrementally. Upon reaching the end of each phase, developers reexamine the entire structure and revise it. Drawing the four major phases of system development—planning/analysis, design, coding, and test/deliver—into quadrants, as shown in Figure 12.1, represents the spiral approach. The respective testing phases are test planning, test case design, test development, and test execution/evaluation.

The spiral process begins with planning and requirements analysis to determine the functionality. Then a design is made for the base components of the system and the functionality determined in the first step. Next, the functionality is constructed and tested. This represents a complete iteration of the spiral.

Having completed this first spiral, users are given the opportunity to examine the system and enhance its functionality. This begins the second iteration of the spiral. The process continues, looping around and around the spiral until the users and developers agree the system is complete; the process then proceeds to implementation.

The spiral approach, if followed systematically, can be effective in ensuring that the users’ requirements are being adequately addressed and that the users are closely involved with the project. It can allow for the system to adapt to any changes in business requirements that occurred after the system development began. However, there is one major flaw with this methodology: there may never be any firm commitment to implement a working system. One can go around and around the quadrants, never actually bringing a system into production. This is often referred to as “spiral death.”

Figure 12.1   Spiral testing process.

Although waterfall development has often proved too inflexible, the spiral approach can produce the opposite problem. Unfortunately, the flexibility of the spiral methodology can lead the development team to lose sight of what the user really wants, so that the product fails user verification. This is where quality assurance becomes a key component of a spiral approach: it ensures that user requirements are being satisfied.

A variation to the spiral methodology is the iterative methodology, in which the development team is forced to reach a point where the system will be implemented. The iterative methodology recognizes that the system is never truly complete, but is evolutionary. However, it also realizes that there is a point at which the system is close enough to completion to be of value to the end user.

The point of implementation is decided upon before development starts, and a certain number of iterations is specified, with goals identified for each iteration. Upon completion of the final iteration, the system is implemented in whatever state it may be.

Role of JADs

During the first spiral, the major deliverables are the objectives, an initial functional decomposition diagram, and a functional specification. The functional specification also includes an external (user) design of the system. It has been shown that errors defining the requirements and external design are the most expensive to fix later in development. It is, therefore, imperative to get the design as correct as possible the first time.

A technique that helps accomplish this is the joint application design (JAD) session (see Appendix G19, “JADs,” for more details). Studies show that JADs increase productivity over traditional design techniques. In JADs, users and IT professionals jointly design systems in facilitated group sessions. JADs go beyond one-on-one interviews to collect information. They promote communication, cooperation, and teamwork among the participants by placing the users in the driver’s seat.

JADs are logically divided into phases: customization, session, and wrap-up. Regardless of what activity one is pursuing in development, these components will always exist. Each phase has its own objectives.

Role of Prototyping

Prototyping is an iterative approach often used to build systems that users initially are unable to describe precisely (see Appendix G24, “Prototyping,” for more details). The concept is made possible largely through the power of fourth-generation languages (4GLs) and application generators.

Prototyping is, however, as prone to defects as any other development effort, maybe more so if not performed in a systematic manner. Prototypes need to be tested as thoroughly as any other system. Testing can be difficult unless a systematic process has been established for developing prototypes.

There are various types of software prototypes, ranging from simple printed descriptions of input, processes, and output to completely automated versions. An exact definition of a software prototype is impossible to find; the concept is made up of various components. Among the many characteristics identified by MIS professionals are the following:

  1. ■ Comparatively inexpensive to build (i.e., less than 10 percent of the full system’s development cost).

  2. ■ Relatively quick development so that it can be evaluated early in the life cycle.

  3. ■ Provides users with a physical representation of key parts of the system before implementation.

  4. ■ Prototypes:

    1. Do not eliminate or reduce the need for comprehensive analysis and specification of user requirements.

    2. Do not necessarily represent the complete system.

    3. Perform only a subset of the functions of the final product.

    4. Lack the speed, geographical placement, or other physical characteristics of the final system.

Basically, prototyping is the building of trial versions of a system. These early versions can be used as the basis for assessing ideas and making decisions about the complete and final system. Prototyping is based on the premise that, in certain problem domains (particularly in online interactive systems), users of the proposed application do not have a clear and comprehensive idea of what the application should do or how it should operate.

Often, errors or shortcomings overlooked during development appear after a system becomes operational. Application prototyping seeks to overcome these problems by providing users and developers with an effective means of communicating ideas and requirements before a significant amount of development effort has been expended. The prototyping process results in a functional set of specifications that can be fully analyzed, understood, and used by users, developers, and management to decide whether an application is feasible and how it should be developed.

Fourth-generation languages have enabled many organizations to undertake projects based on prototyping techniques. They provide many of the capabilities necessary for prototype development, including user functions for defining and managing the user–system interface, data management functions for organizing and controlling access, and system functions for defining execution control and interfaces between the application and its physical environment.

In recent years, the benefits of prototyping have become increasingly recognized. Some include the following:

  1. ■ Prototyping emphasizes active physical models. The prototype looks, feels, and acts like a real system.

  2. ■ Prototyping is highly visible and accountable.

  3. ■ The burden of attaining performance, optimum access strategies, and complete functioning is eliminated in prototyping.

  4. ■ Issues of data, functions, and user–system interfaces can be readily addressed.

  5. ■ Users are usually satisfied, because they get what they see.

  6. ■ Many design considerations are highlighted, and a high degree of design flexibility becomes apparent.

  7. ■ Information requirements are easily validated.

  8. ■ Changes and error corrections can be anticipated and, in many cases, made on the spur of the moment.

  9. ■ Ambiguities and inconsistencies in requirements become visible and correctable.

  10. ■ Useless functions and requirements can be quickly eliminated.

Methodology for Developing Prototypes

The following describes a methodology for reducing development time through reuse of the prototype and of the knowledge gained in developing and using it. It does not cover how to test the prototype within spiral development; that is included in the next part.

Step 1: Develop the Prototype

In the construction phase of spiral development, the external design and screen design are translated into real-world windows using a 4GL tool such as Visual Basic or PowerBuilder. The detailed business functionality is not built into the screen prototypes, but a “look and feel” of the user interface is produced so the user can see how the application will look.

Using a 4GL, the team constructs a prototype system consisting of data entry screens, printed reports, external file routines, specialized procedures, and procedure selection menus. These are based on the logical database structure developed in the JAD data modeling sessions. The sequence of events for performing the task of developing the prototype in a 4GL is iterative and is described as follows.

Define the basic database structures derived from logical data modeling. The data structures will be populated periodically with test data as required for specific tests.
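
As a minimal sketch, the base structures might be created and loaded as follows. Python’s built-in sqlite3 module stands in here for whatever data management facilities the chosen 4GL provides, and the table and column names are illustrative assumptions.

```python
# Sketch: create the base database structures from the logical data model and
# populate them with test data as required for specific tests.
import sqlite3

def create_structures(conn: sqlite3.Connection) -> None:
    """Define the base tables derived from logical data modeling."""
    conn.executescript("""
        CREATE TABLE customer (
            customer_id INTEGER PRIMARY KEY,
            name        TEXT NOT NULL,
            region      TEXT
        );
        CREATE TABLE orders (
            order_id    INTEGER PRIMARY KEY,
            customer_id INTEGER REFERENCES customer(customer_id),
            amount      REAL NOT NULL
        );
    """)

def load_test_data(conn: sqlite3.Connection) -> None:
    """Populate the structures with the data required for a specific test."""
    conn.executemany("INSERT INTO customer VALUES (?, ?, ?)",
                     [(1, "Acme Ltd", "EAST"), (2, "Globex", "WEST")])
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                     [(10, 1, 250.0), (11, 2, 99.5)])

if __name__ == "__main__":
    with sqlite3.connect(":memory:") as conn:
        create_structures(conn)
        load_test_data(conn)
        print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # 2
```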

Define printed report formats. These may initially consist of query commands saved in an executable procedure file on disk. The benefit of a query language is that most of the report formatting can be done automatically by the 4GL. The prototyping team needs only to define what data elements to print and what selection and ordering criteria to use for individual reports.

Define interactive data entry screens. Whether each screen is well designed is immaterial at this point. Obtaining the right information in the form of prompts, labels, help messages, and validation of input is more important. Initially, defaults should be used as often as possible.
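
A minimal sketch of the validation behind one such entry screen follows; the field names, prompts, defaults, and rules are illustrative assumptions.

```python
# Sketch of the validation behind a prototype data entry screen: prompts,
# defaults, and validation of input matter more than layout at this stage.
FIELDS = {
    # field        (prompt/label,       default,  validation rule)
    "region":   ("Sales region",     "EAST",   lambda v: v in {"EAST", "WEST"}),
    "quantity": ("Quantity ordered", "1",      lambda v: v.isdigit() and int(v) > 0),
}

def validate_entry(entry: dict) -> list:
    """Apply defaults, then return a list of validation error messages."""
    errors = []
    for name, (label, default, rule) in FIELDS.items():
        value = entry.get(name, "") or default   # use the default when blank
        if not rule(value):
            errors.append(f"{label}: invalid value {value!r}")
    return errors

# "NORTH" fails the region rule; the blank quantity falls back to its default.
print(validate_entry({"region": "NORTH", "quantity": ""}))
```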

Define external file routines to process data that is to be submitted in batches to the prototype or created by the prototype for processing by other systems. This can be done in parallel with other tasks.

Define algorithms and procedures to be implemented by the prototype and the finished system. These may include support routines solely for the use of the prototype.

Define procedure selection menus. The developers should concentrate on the functions as the user would see them. This may entail combining seemingly disparate procedures into single functions that can be executed with a single command from the user.

Define test cases (a minimal sketch follows this list) to ascertain that:

  1. ■ Data entry validation is correct.

  2. ■ Procedures and algorithms produce expected results.

  3. ■ System execution is clearly defined throughout a complete cycle of operation.
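
A minimal sketch of such test cases is shown below; the validate_quantity and order_total routines are hypothetical stand-ins for the prototype's own entry validation and procedures.

```python
# Sketch of prototype test cases covering the three checks above (runnable
# with pytest). The two routines are illustrative stand-ins.
import pytest

def validate_quantity(value: str) -> bool:
    return value.isdigit() and int(value) > 0

def order_total(quantity: int, unit_price: float, discount: float = 0.0) -> float:
    return quantity * unit_price * (1.0 - discount)

def test_data_entry_validation_is_correct():
    assert validate_quantity("3")
    assert not validate_quantity("0")
    assert not validate_quantity("three")

def test_procedures_produce_expected_results():
    assert order_total(4, 25.0) == pytest.approx(100.0)
    assert order_total(4, 25.0, discount=0.1) == pytest.approx(90.0)

def test_complete_cycle_of_operation():
    # Enter data, run the procedure, and check the result end to end.
    quantity = "4"
    assert validate_quantity(quantity)
    assert order_total(int(quantity), 25.0) == pytest.approx(100.0)
```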

Repeat this process, adding report and screen formatting options, corrections of errors discovered in testing, and instructions for the intended users. This process should end after the second or third iteration or when changes become predominantly cosmetic rather than functional.

At this point, the prototyping team should have a good understanding of the overall operation of the proposed system. If time permits, the team should now describe the operation and underlying structure of the prototype. This is most easily accomplished through the development of a draft user manual. A printed copy of each screen, report, query, database structure, selection menu, and catalogued procedure or algorithm should be included. Instructions for executing each procedure should include an illustration of the actual dialogue.

Step 2: Demonstrate Prototypes to Management

The purpose of this demonstration is to give management the option of making strategic decisions about the application on the basis of the prototype’s appearance and objectives. The demonstration consists primarily of a short description of each prototype component and its effects, and a walkthrough of the typical use of each component. Every person in attendance at the demonstration should receive a copy of the draft user manual, if one is available.

The team should emphasize the results of the prototype and its impact on development tasks still to be performed. At this stage, the prototype is not necessarily a functioning system, and management must be made aware of its limitations.

Step 3: Demonstrate Prototype to Users

There are arguments for and against letting the prospective users actually use the prototype system. There is a risk that users’ expectations will be raised to an unrealistic level with regard to delivery of the production system and that the prototype will be placed in production before it is ready. Some users have actually refused to give up the prototype when the production system was ready for delivery. This may not be a problem if the prototype meets the users’ expectations and the environment can absorb the load of processing without affecting others. On the other hand, when users exercise the prototype, they can discover the problems in procedures and unacceptable system behavior very quickly.

The prototype should be demonstrated before a representative group of users. This demonstration should consist of a detailed description of the system operation, structure, data entry, report generation, and procedure execution. Above all, users must be made to understand that the prototype is not the final product, that it is flexible, and that it is being demonstrated to find errors from the users’ perspective.

The results of the demonstration include requests for changes, correction of errors, and overall suggestions for enhancing the system. Once the demonstration has been held, the prototyping team cycles through the steps in the prototype process to make the changes, corrections, and enhancements deemed necessary through consensus of the prototyping team, the end users, and management.

For each iteration through prototype development, demonstrations should be held to show how the system has changed as a result of feedback from users and management. The demonstrations increase the users’ sense of ownership, especially when they can see the results of their suggestions. The changes should therefore be developed and demonstrated quickly.

Requirements uncovered in the demonstration and use of the prototype may cause profound changes in the system scope and purpose, the conceptual model of the system, or the logical data model. Because these modifications occur in the requirements specification phase rather than in the design, code, or operational phases, they are much less expensive to implement.

Step 4: Revise and Finalize Specifications

At this point, the prototype consists of data entry formats, report formats, file formats, a logical database structure, algorithms and procedures, selection menus, system operational flow, and possibly a draft user manual.

The deliverables from this phase consist of formal descriptions of the system requirements, listings of the 4GL command files for each object programmed (i.e., screens, reports, and database structures), sample reports, sample data entry screens, the logical database structure, data dictionary listings, and a risk analysis. The risk analysis should include the problems and changes that could not be incorporated into the prototype and the probable impact that they would have on development of the full system and subsequent operation.

The prototyping team reviews each component for inconsistencies, ambiguities, and omissions. Corrections are made, and the specifications are formally documented.

Step 5: Develop the Production System

At this point, development can proceed in one of three directions:

  1. The project is suspended or canceled because the prototype has uncovered insurmountable problems or the environment is not ready to mesh with the proposed system.

  2. The prototype is discarded because it is no longer needed or because it is too inefficient for production or maintenance.

  3. Iterations of prototype development are continued, with each iteration adding more system functions and optimizing performance until the prototype evolves into the production system.

The decision on how to proceed is generally based on such factors as:

  1. ■ The actual cost of the prototype

  2. ■ Problems uncovered during prototype development

  3. ■ The availability of maintenance resources

  4. ■ The availability of software technology in the organization

  5. ■ Political and organizational pressures

  6. ■ The amount of satisfaction with the prototype

  7. ■ The difficulty in changing the prototype into a production system

  8. ■ Hardware requirements

Continuous Improvement “Spiral” Testing Approach

The purpose of software testing is to identify the differences between existing and expected conditions, that is, to detect software defects. Testing identifies the requirements that have not been satisfied and the functions that do not work properly. The most commonly recognized test objective is to identify bugs, but this is a limited definition of the aim of testing. Not only must bugs be identified, but they must be put into a framework that enables testers to predict how the software will perform.

In the spiral and rapid application development testing environment, there may be no final functional requirements for the system; those that exist are probably informal and evolutionary. Also, the test plan may not be completed until the system is released for production. The relatively long lead time needed to create test plans from a good set of requirement specifications may not be available. Testing is an ongoing improvement process that occurs frequently as the system changes. The product evolves over time and is not static.

Figure 12.2   Spiral testing and continuous improvement.

The testing organization needs to get inside the development effort and work closely with development. Each new version needs to be tested as it becomes available. The approach is to first test the new enhancements or modified software to resolve defects reported in the previous spiral. If time permits, regression testing is then performed to ensure that the rest of the system has not regressed.

In the spiral development environment, software testing is again described as a continuous improvement process that must be integrated into a rapid application development methodology. Testing as an integrated function prevents development from proceeding without testing. Deming’s continuous improvement process using the PDCA model (see Figure 12.2) will again be applied to the software testing process.

Before the continuous improvement process begins, the testing function needs to perform a series of information-gathering planning steps to understand the development project objectives, current status, project plans, function specification, and risks.

Once this is completed, the formal Plan step of the continuous improvement process commences. A major step is to develop a software test plan. The test plan is the basis for accomplishing testing and should be considered an ongoing document; that is, as the system changes, so does the plan. The outline of a good test plan includes an introduction, the overall plan, testing requirements, test procedures, and test plan details. These are further broken down into business functions, test scenarios and scripts, function/test matrix, expected results, test case checklists, discrepancy reports, required software, hardware, data, personnel, test schedule, test entry criteria, exit criteria, and summary reports.
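
Because the plan is an ongoing document, it can help to keep its skeleton as structured data that is easy to revise as the system changes. The sketch below is one hypothetical arrangement; the business functions, scenarios, and criteria shown are illustrative assumptions, not a prescribed format.

```python
# Sketch of a spiral test plan kept as structured data so it can be revised
# as the system changes. All entries are illustrative assumptions.
test_plan = {
    "introduction": "Spiral 3 test plan for the order-entry prototype.",
    "business_functions": ["enter order", "apply discount", "print invoice"],
    "test_scenarios": {
        "TC-01": "Enter a valid order and verify the confirmation screen.",
        "TC-02": "Apply a GOLD discount and verify the recalculated total.",
        "TC-03": "Print an invoice and verify layout and totals.",
    },
    # Function/test matrix: which test cases cover which business function.
    "function_test_matrix": {
        "enter order":    ["TC-01"],
        "apply discount": ["TC-02"],
        "print invoice":  ["TC-03"],
    },
    "entry_criteria": "Build delivered and smoke test passed.",
    "exit_criteria": "All planned tests executed; no open severity-1 defects.",
}

# A quick completeness check: every business function has at least one test.
uncovered = [f for f in test_plan["business_functions"]
             if not test_plan["function_test_matrix"].get(f)]
print("Functions without coverage:", uncovered or "none")
```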

The more definitive a test plan is, the easier the Plan step will be. If the system changes between development of the test plan and when the tests are to be executed, the test plan should be updated accordingly.

The Do step of the continuous improvement process consists of test case design, test development, and test execution. This step describes how to design test cases and execute the tests included in the test plan. Design includes the functional tests, GUI tests, and fragment system and acceptance tests. Once an overall test design is completed, test development starts. This includes building test scripts and procedures to provide test case details.

The test team is responsible for executing the tests and must ensure that they are executed according to the test design. The Do step also includes test setup, regression testing of old and new tests, and recording any defects discovered.
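
A minimal sketch of recording a discovered defect so that the Check step has data to measure; the report fields are illustrative assumptions modeled on a typical discrepancy report.

```python
# Sketch of a discrepancy report recorded during test execution in the Do step.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DiscrepancyReport:
    test_id: str
    description: str
    severity: int                 # 1 = most severe
    spiral: int                   # iteration in which the defect was found
    status: str = "open"
    found_on: date = field(default_factory=date.today)

defect_log: list[DiscrepancyReport] = []
defect_log.append(DiscrepancyReport(
    test_id="TC-02",
    description="GOLD discount not applied on repeat orders",
    severity=2,
    spiral=3,
))
print(len(defect_log), "defect(s) recorded this spiral")
```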

The Check step of the continuous improvement process includes metric measurements and analysis. As discussed in Section 1, Chapter 5, “Quality through Continuous Improvement Process,” crucial to the Deming method is the need to base decisions as much as possible on accurate and timely data. Metrics are key to verifying whether the work effort and test schedule are on track and to identifying any new resource requirements.
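
A minimal sketch of the kind of Check-step metrics that support these decisions, computed from assumed test execution records; the records and figures are illustrative assumptions.

```python
# Sketch of Check-step metrics computed from test execution records, so the
# schedule and resource decisions rest on measured data.
executions = [
    {"test": "TC-01", "passed": True,  "defects": 0},
    {"test": "TC-02", "passed": False, "defects": 2},
    {"test": "TC-03", "passed": True,  "defects": 0},
    {"test": "TC-04", "passed": False, "defects": 1},
]
planned_tests = 6   # from the test plan for this spiral

executed = len(executions)
pass_rate = sum(e["passed"] for e in executions) / executed
defects_found = sum(e["defects"] for e in executions)
progress = executed / planned_tests

print(f"Progress against plan: {progress:.0%}")   # 67%
print(f"Pass rate:             {pass_rate:.0%}")  # 50%
print(f"Defects found:         {defects_found}")  # 3
```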

During the Check step, it is important to publish intermediate test reports. This includes recording of the test results and relating them to the test plan and test objectives.

The Act step of the continuous improvement process involves preparation for the next spiral iteration. It entails refining the function/GUI tests, test suites, test cases, test scripts, and fragment system and acceptance tests, and modifying the defect-tracking system and the version control system, if necessary. It also includes devising appropriate corrective actions for work that was not performed according to plan or for unanticipated results. Examples include a reevaluation of the test team, test procedures, and the technology dimensions of testing. All these are fed back into the test plan, which is updated.

Once several testing spirals have been completed and the application has been verified as functionally stable, full system and acceptance testing starts. These tests are often optional. Respective system and acceptance test plans are developed, defining the test objects and the specific tests to be completed.

The final activity in the continuous improvement process is summarizing and reporting the spiral test results. A major test report should be written at the end of all testing. The process used for report writing is the same whether it is an interim or a final report, and, similar to other tasks in testing, report writing is also subject to quality control. However, the final test report should be much more comprehensive than interim test reports. For each type of test, it should describe a record of defects discovered, data reduction techniques, root cause analysis, the development of findings, and follow-on recommendations for current and/or future projects.

Figure 12.3 provides an overview of the spiral testing methodology by relating each step to the PDCA quality model. Appendix A, “Spiral Testing Methodology,” provides a detailed representation of each part of the methodology. The methodology provides a framework for testing in this environment. The major steps include information gathering, test planning, test design, test development, test execution/evaluation, and preparing for the next spiral. It includes a set of tasks associated with each step or a checklist from which the testing organization can choose based on its needs. The spiral approach flushes out the system functionality. When this has been completed, it also provides for classical system testing, acceptance testing, and summary reports.

Figure 12.3   Spiral testing methodology.
