Chapter 20

Conduct the System Test (Act)

System testing evaluates the functionality and performance of the whole application and consists of a variety of tests, including performance, usability, stress, documentation, security, volume, and recovery testing. Figure 20.1 describes how to extend the fragment system testing performed during the spirals into full system testing. It includes discussions of how to prepare for the system tests, design and script them, execute them, and report anomalies discovered during the test.

Step 1: Complete System Test Plan

Task 1: Finalize the System Test Types

In a previous task, a set of system fragment tests was selected and executed during each spiral. The purpose of the current task is to finalize the system test types that will be performed during system testing.

You will recall that system testing consists of one or more tests that are based on the original objectives of the system, which were defined during the project interview. The purpose of this task is to select the system tests to be performed, not to implement them. Our initial list consisted of the following system test types:

  1. ■ Performance

  2. ■ Security

  3. ■ Volume

  4. ■ Stress

  5. ■ Compatibility

  6. ■ Conversion

  7. ■ Usability

  8. ■ Documentation

  9. ■ Backup

  10. ■ Recovery

  11. ■ Installation

Figure 20.1   Conduct system test (steps/tasks).

The sequence of system test-type execution should also be defined in this task. For example, related tests such as performance, stress, and volume might be clustered together and performed early during system testing. Security, backup, and recovery are also logical groupings, and so on.

Finally, the system tests that can be automated with a testing tool need to be finalized. Automated tests provide three benefits: repeatability, leverage, and increased functional coverage. Repeatability means that automated tests can be executed more than once, consistently. Leverage comes from repeatability: tests captured previously, and tests that could not have been programmed without the tool, can be rerun with little additional effort. As applications evolve, more and more functionality is added; with automation, the functional coverage of the test library is maintained.

Task 2: Finalize System Test Schedule

In this task, the system test schedule should be finalized; this includes the testing steps (and perhaps tasks), target start and target end dates, and responsibilities. It should also describe how it will be reviewed, tracked, and approved. A sample system test schedule is shown in Table 20.1.

Task 3: Organize the System Test Team

With all testing types, the system test team needs to be organized. The system test team is responsible for designing and executing the tests, evaluating the results and reporting any defects to development, and using the defect-tracking system. When development corrects defects, the test team retests the defects to verify the correction.

The system test team is led by a test manager whose responsibilities include the following:

  1. ■ Organizing the test team

  2. ■ Establishing the test environment

  3. ■ Organizing the testing policies, procedures, and standards

  4. ■ Assuring test readiness

  5. ■ Working the test plan and controlling the project

  6. ■ Tracking test costs

  7. ■ Ensuring test documentation is accurate and timely

  8. ■ Managing the team members

Table 20.1   Final System Test Schedule

Task 4: Establish the System Test Environment

During this task, the system test environment is also finalized. The purpose of the test environment is to provide a physical framework for the testing activity. The test environment needs are established and reviewed before implementation.

The main components of the test environment include the physical test facility, technologies, and tools. The test facility component includes the physical setup. The technologies component includes the hardware platforms, physical network and all its components, operating system software, and other software. The tools component includes any specialized testing software, such as automated test tools, testing libraries, and support software.

The testing facility and workplace need to be established. These may range from an individual workplace configuration to a formal testing laboratory. In any event, it is important that the testers be together and near the development team. This facilitates communication and the sense of a common goal. The system testing tools need to be installed.

The hardware and software technologies need to be set up. This includes the installation of test hardware and software and coordination with vendors, users, and information technology personnel. It may be necessary to test the hardware and coordinate with hardware vendors. Communication networks need to be installed and tested.

Task 5: Install the System Test Tools

During this task, the system test tools are installed and verified for readiness. A trial run of tool test cases and scripts should be performed to verify that the test tools are ready for the actual system test. Some other tool readiness considerations include the following:

  1. ■ Test team tool training

  2. ■ Tool compatibility with operating environment

  3. ■ Ample disk space for the tools

  4. ■ Maximizing the tools’ potential

  5. ■ Vendor tool help hotline

  6. ■ Test procedures modified to accommodate tools

  7. ■ Installing the latest tool changes

  8. ■ Verifying the vendor contractual provisions

Step 2: Complete System Test Cases

During this step, the system test cases are designed and scripted. The conceptual system test cases are transformed into reusable test scripts with test data created.

To aid in developing the script test cases, the GUI-based Function Test Matrix template in Appendix E7 can be used to document system-level test cases, with the “function” heading replaced with the system test name.

Task 1: Design/Script the Performance Tests

The objective of performance testing is to measure the system against predefined objectives. The required performance levels are compared against the actual performance levels and discrepancies are documented.

Performance testing is a combination of black-box and white-box testing. From a black-box point of view, the performance analyst does not have to know the internal workings of the system. Real workloads or benchmarks are used to compare one system version with another for performance improvements or degradation. From a white-box point of view, the performance analyst needs to know the internal workings of the system and define specific system resources to investigate, such as instructions, modules, and tasks.

Some of the performance information of interest includes the following:

  1. ■ CPU utilization

  2. ■ IO utilization

  3. ■ Number of IOs per instruction

  4. ■ Channel utilization

  5. ■ Main storage memory utilization

  6. ■ Secondary storage memory utilization

  7. ■ Percentage of execution time per module

  8. ■ Percentage of time a module is waiting for IO completion

  9. ■ Percentage of time module spent in main storage

  10. ■ Instruction trace paths over time

  11. ■ Number of times control is passed from one module to another

  12. ■ Number of waits encountered for each group of instructions

  13. ■ Number of pages-in and pages-out for each group of instructions

  14. ■ System response time, for example, last key until first key time

  15. ■ System throughput, that is, number of transactions per time unit

  16. ■ Unit performance timings for all major functions

Baseline performance measurements should first be taken on all major functions in a noncontention mode, that is, unit measurements of functions when a single task is in operation. This can easily be done with a simple stopwatch, as was done earlier for each spiral. The next set of measurements should be made in a system-contended mode, in which multiple tasks are operating and queuing occurs because of demands on common resources such as CPU, memory, storage, channel, network, and so on. Contended system execution time and resource utilization measurements are performed by monitoring the system to identify potential areas of inefficiency.
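
As a minimal illustration of the noncontention baseline measurements described above, the following Python sketch times a single function in isolation using only the standard library. The process_order stand-in, the number of runs, and the reported statistics are illustrative assumptions, not part of any particular system.

    # Baseline (noncontention) timing sketch -- times one major system
    # function with no competing workload.
    import statistics
    import time

    def process_order():
        # Stand-in for one major system function under measurement.
        time.sleep(0.01)

    def baseline_timing(func, runs=30):
        """Time a single function repeatedly and summarize the samples."""
        samples = []
        for _ in range(runs):
            start = time.perf_counter()
            func()
            samples.append(time.perf_counter() - start)
        samples.sort()
        return {
            "mean_s": statistics.mean(samples),
            "p95_s": samples[int(0.95 * len(samples)) - 1],
            "max_s": samples[-1],
        }

    if __name__ == "__main__":
        print(baseline_timing(process_order))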

There are two approaches to gathering system execution time and resource utilization. With the first approach, samples are taken while the system is executing in its typical environment with the use of external probes, performance monitors, or a stopwatch. With the other approach, probes are inserted into the system code, for example, calls to a performance monitor program that gathers the performance information. The following is a discussion of each approach, followed by a discussion of test drivers, which are support techniques used to generate data for the performance study.

Monitoring Approach

This approach involves monitoring a system by determining its status at periodic time intervals, and is controlled by an elapsed time facility in the testing tool or operating system. Samples taken during each time interval indicate the status of the performance criteria during the interval. The smaller the time interval, the more precise the sampling accuracy.

Statistics gathered by the monitor are collected and summarized in performance reports.
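
A minimal sketch of the sampling idea, assuming the third-party psutil package is available for reading CPU and memory utilization; the sampling interval and duration are arbitrary illustration values.

    # Periodic-sampling monitor sketch (assumes the third-party
    # "psutil" package is installed: pip install psutil).
    import time
    import psutil

    def sample_system(duration_s=10, interval_s=1.0):
        """Record CPU and memory utilization once per interval."""
        samples = []
        end = time.time() + duration_s
        while time.time() < end:
            samples.append({
                # cpu_percent(interval=None) reports usage since the last call.
                "cpu_pct": psutil.cpu_percent(interval=None),
                "mem_pct": psutil.virtual_memory().percent,
            })
            time.sleep(interval_s)
        return samples

    if __name__ == "__main__":
        for sample in sample_system(duration_s=5):
            print(sample)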

Probe Approach

This approach involves inserting probes or program instructions into the system programs at various locations. To determine, for example, the CPU time necessary to execute a sequence of statements, a probe execution results in a call to the data collection routine, which records the CPU clock at that instant. A second probe execution results in a second call to the data collection routine. Subtracting the first CPU time from the second yields the net CPU time used. Reports can be produced showing execution time breakdowns by statement, module, and statement type.
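
The probe idea can be sketched with a pair of calls to a simple data collection routine; the probe function, the labels, and the loop under measurement are illustrative assumptions rather than part of any real monitoring product.

    # Probe sketch -- records process CPU time at two points and reports
    # the difference, mirroring the probe-pair idea described above.
    import time

    _probe_log = []

    def probe(label):
        """Data collection routine: record the CPU clock at this instant."""
        _probe_log.append((label, time.process_time()))

    def report():
        for (label1, t1), (label2, t2) in zip(_probe_log, _probe_log[1:]):
            print(f"{label1} -> {label2}: {t2 - t1:.6f} s CPU")

    if __name__ == "__main__":
        probe("start")
        total = sum(i * i for i in range(200_000))   # code under measurement
        probe("after loop")
        report()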

The value of these approaches is their use as performance requirements validation tools. However, formally defined performance requirements must be stated, and the system should be designed so that the performance requirements can be traced to specific system modules.

Test Drivers

In many cases test drivers and test harnesses are required to make system performance measurements. A test driver provides the facilities needed to execute a system, for example, inputs. The input data files for the system are loaded with data values representing the test situation to yield recorded data to evaluate against the expected results. Data are generated in an external form and presented to the system.
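
A minimal sketch of a test driver, assuming the system under test reads a CSV input file; the record layout and the command name are illustrative assumptions.

    # Test driver sketch -- generates an input file in an external form
    # for the system under test.
    import csv
    import random

    def generate_input(path, records=1000):
        """Write an external-form input file representing the test situation."""
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["account_id", "amount"])
            for i in range(records):
                writer.writerow([i, round(random.uniform(1.0, 500.0), 2)])

    if __name__ == "__main__":
        generate_input("perf_input.csv")
        # The driver would then start the system under test against the
        # generated file, for example (hypothetical command name):
        #   subprocess.run(["system_under_test", "--input", "perf_input.csv"])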

Performance test cases need to be defined, using one or more of the test templates located in the appendices, and test scripts need to be built. Before any performance test is conducted, however, the performance analyst must make sure that the target system is relatively bug-free. Otherwise, a lot of time will be spent documenting and fixing defects rather than analyzing the performance.

The following are the five recommended steps for any performance study:

  1. Document the performance objectives, that is, exactly which measurable performance criteria must be verified.

  2. Define the test driver or source of inputs to drive the system.

  3. Define the performance methods or tools that will be used.

  4. Define how the performance study will be conducted; for example, what is the baseline, what are the variations, how can it be verified as repeatable, and how does one know when the study is complete?

  5. Define the reporting process, for example, techniques and tools.

Task 2: Design/Script the Security Tests

The objective of security testing is to evaluate the presence and appropriate functioning of the security of the application to ensure the integrity and confidentiality of the data. Security tests should be designed to demonstrate how resources are protected.

A Security Design Strategy

A security strategy for designing security test cases is to focus on the following four security components: the assets, threats, exposures, and controls. In this manner, matrices and checklists will suggest ideas for security test cases.

Assets are the tangible and intangible resources of an entity. The evaluation approach is to list what should be protected. It is also useful to examine the attributes of assets, such as amount, value, use, and characteristics. Two useful analysis techniques are asset value and exploitation analysis. Asset value analysis determines how the value differs among users and potential attackers. Asset exploitation analysis examines different ways to use an asset for illicit gain.

Threats are events with the potential to cause loss or harm. The evaluation approach is to list the sources of potential threats. It is important to distinguish among accidental, intentional, and natural threats, and threat frequencies.

Exposures are forms of possible loss or harm. The evaluation approach is to list what might happen to assets if a threat is realized. Exposures include disclosure violations, erroneous decisions, and fraud. Exposure analysis focuses on identifying the areas in which exposure is the greatest.

Security functions or controls are measures that protect against loss or harm. The evaluation approach is to list the security functions and tasks, and focus on controls embodied in specific system functions or procedures. Security functions assess the protection against human errors and casual attempts to misuse the system. Some functional security questions include the following (a scripted sketch of the password-related checks appears after this list):

  1. ■ Do the control features work properly?

  2. ■ Are invalid and improbable parameters detected and properly handled?

  3. ■ Are invalid or out-of-sequence commands detected and properly handled?

  4. ■ Are errors and file accesses properly recorded?

  5. ■ Do procedures for changing security tables work?

  6. ■ Is it possible to log in without a password?

  7. ■ Are valid passwords accepted and invalid passwords rejected?

  8. ■ Does the system respond properly to multiple invalid passwords?

  9. ■ Does the system-initiated authentication function properly?

  10. ■ Are there security features for remote access?
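
The password-related questions above can be turned into scripted checks. The following sketch assumes a hypothetical login() interface standing in for the real authentication function; the account name, password, and lockout expectations are illustrative assumptions.

    # Security test sketch -- exercises password handling through a
    # hypothetical login() function; replace it with the real system's
    # authentication interface.
    def login(user, password):
        # Stand-in for the system under test; assumed behavior.
        return user == "alice" and password == "s3cret"

    def test_blank_password_rejected():
        assert not login("alice", "")

    def test_invalid_password_rejected():
        assert not login("alice", "wrong")

    def test_valid_password_accepted():
        assert login("alice", "s3cret")

    def test_repeated_invalid_attempts():
        # The real check would also confirm lockout or alerting after
        # several failures, per the security requirements.
        for _ in range(5):
            assert not login("alice", "guess")

    if __name__ == "__main__":
        for check in (test_blank_password_rejected, test_invalid_password_rejected,
                      test_valid_password_accepted, test_repeated_invalid_attempts):
            check()
        print("password checks passed")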

It is important to assess the performance of the security mechanisms as well as the functions themselves. Some questions and issues concerning security performance include the following:

  1. Availability—What portion of time is the application or control available to perform critical security functions? Security controls usually require higher availability than other portions of the system.

  2. Survivability—How well does the system withstand major failures or natural disasters? This includes the support of emergency operations during failure, backup operations afterward, and recovery actions to return to regular operation.

  3. Accuracy—How accurate is the security control? Accuracy encompasses the number, frequency, and significance of errors.

  4. Response time—Are response times acceptable? Slow response times can tempt users to bypass security controls. Response time can also be critical for control management, for example, the dynamic modification of security tables.

  5. Throughput—Does the security control support required use capacities? Capacity includes the peak and average loading of users and service requests.

A useful performance test is stress testing, which involves large numbers of users and requests to attain operational stress conditions. Stress testing is used to attempt to exhaust limits for such resources as buffers, queues, tables, and ports. This form of testing is useful in evaluating protection against service denial threats.

Task 3: Design/Script the Volume Tests

The objective of volume testing is to subject the system to heavy volumes of data to find out if it can handle the volume. This test is often confused with stress testing. Stress testing subjects the system to heavy loads or stresses in terms of rates, such as throughputs over a short time period. Volume testing is data oriented, and its purpose is to show that the system can handle the volume of data specified in its objectives.
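
A minimal volume-test sketch along these lines, assuming the component under test consumes a CSV file; the row count, the file name, and the stand-in process() function are illustrative assumptions.

    # Volume test sketch -- generates a deliberately large input file and
    # confirms the (stand-in) processing step handles every record.
    import csv
    import os

    def generate_volume_file(path, rows=1_000_000):
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            for i in range(rows):
                writer.writerow([i, f"customer-{i}", i % 100])

    def process(path):
        # Stand-in for the system component under volume test.
        with open(path, newline="") as f:
            return sum(1 for _ in csv.reader(f))

    if __name__ == "__main__":
        generate_volume_file("volume.csv")
        assert process("volume.csv") == 1_000_000
        os.remove("volume.csv")
        print("volume test passed")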

Some examples of volume testing are as follows:

  1. ■ Relative data comparison is made when processing date-sensitive transactions.

  2. ■ A compiler is fed an extremely large source program to compile.

  3. ■ A linkage editor is fed a program containing thousands of modules.

  4. ■ An electronic-circuit simulator is given a circuit containing thousands of components.

  5. ■ An operating system’s job queue is filled to maximum capacity.

  6. ■ Enough data is created to cause a system to span files.

  7. ■ A text-formatting system is fed a massive document to format.

  8. ■ The Internet is flooded with huge e-mail messages and files.

Task 4: Design/Script the Stress Tests

The objective of stress testing is to investigate the behavior of the system under conditions that overload its resources. Of particular interest is the impact that this has on the system processing time. Stress testing is boundary testing. For example, test with the maximum number of terminals active and then add more terminals than specified in the requirements under different limit combinations. Some of the resources subjected to heavy loads by stress testing include the following:

  1. ■ Buffers

  2. ■ Controllers

  3. ■ Display terminals

  4. ■ Interrupt handlers

  5. ■ Memory

  6. ■ Networks

  7. ■ Printers

  8. ■ Spoolers

  9. ■ Storage devices

  10. ■ Transaction queues

  11. ■ Transaction schedulers

  12. ■ Users of the system

Stress testing studies the system’s response to peak bursts of activity in short periods of time and attempts to find defects in a system. It is often confused with volume testing, in which the system’s capability of handling large amounts of data is the objective.
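
A minimal sketch of a peak-burst stress test using standard library threading; the handle_transaction stand-in, the burst size, and the timeout are illustrative assumptions, not the behavior of any particular system.

    # Stress test sketch -- fires a burst of concurrent transactions at a
    # stand-in handler and reports failures and elapsed time.
    import time
    from concurrent.futures import ThreadPoolExecutor

    def handle_transaction(n):
        # Stand-in for the system entry point under stress.
        time.sleep(0.005)
        return n

    def stress_burst(clients=200):
        start = time.perf_counter()
        failures = 0
        with ThreadPoolExecutor(max_workers=clients) as pool:
            futures = [pool.submit(handle_transaction, i) for i in range(clients)]
            for future in futures:
                try:
                    future.result(timeout=2.0)
                except Exception:
                    failures += 1
        elapsed = time.perf_counter() - start
        print(f"{clients} concurrent transactions, {failures} failures, {elapsed:.2f} s")

    if __name__ == "__main__":
        stress_burst()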

Stress testing should be performed early in development because it often uncovers major design flaws that can have an impact on many areas. If stress testing is not performed early, subtle defects, which might have been more apparent earlier in development, may be difficult to uncover.

The following are the suggested steps for stress testing:

  1. Perform simple multitask tests.

  2. After the simple stress defects are corrected, stress the system to the breaking point.

  3. Perform the stress tests repeatedly for every spiral.

Some stress-testing examples include the following:

  1. ■ Word-processing response time for a fixed entry rate, such as 120 words per minute

  2. ■ Introducing a heavy volume of data in a very short period of time

  3. ■ Varying loads for interactive, real-time process control

  4. ■ Simultaneous introduction of a large number of transactions

  5. ■ Thousands of users signing on to the Internet within a minute

Task 5: Design/Script the Compatibility Tests

The objective of compatibility testing (sometimes called cohabitation testing) is to test the compatibility of the application with other applications or systems. This is a test that is often overlooked until the system is put into production. Defects are often subtle and difficult to uncover in this test. An example is when the system works perfectly in the testing laboratory in a controlled environment, but does not work when it coexists with other applications. An example of compatibility is when two systems share the same data or data files or reside in the same memory at the same time. The system may satisfy the system requirements, but not work in a shared environment; it may also interfere with other systems.

The following is a compatibility (cohabitation) testing strategy:

  1. Update the compatibility objectives to note how the application has actually been developed and the actual environments in which it is to perform. Modify the objectives for any changes in the cohabiting systems or the configuration resources.

  2. Update the compatibility test cases to make sure they are comprehensive. Make sure that the test cases in the other systems that can affect the target system are comprehensive. And ensure maximum coverage of instances in which one system could affect another.

  3. Perform the compatibility tests and carefully monitor the results to verify that they match expectations. Use a baseline approach: record the system’s operating characteristics before the target system is incorporated into the shared environment. The baseline needs to be accurate and cover not only functional behavior but also operational performance, to ensure that neither is degraded in the cohabitation setting.

  4. Document the results of the compatibility tests and note any deviations in the target system or the other cohabitation systems.

  5. Regression test the compatibility tests after the defects have been resolved, and record the tests in the retest matrix.

Task 6: Design/Script the Conversion Tests

The objective of conversion testing is to verify the conversion of existing data and the loading of the new database. The most common conversion problem arises between two versions of the same system: a new version may have a different data format but must still include the data from the old system. Ample time needs to be set aside to think through all the conversion issues that may arise.

Some key factors that need to be considered when designing conversion tests include the following:

  1. Auditability—There needs to be a plan to perform before-and-after comparisons and analysis of the converted data to ensure it was converted successfully. Techniques to ensure auditability include file reports, comparison programs, and regression testing. Regression testing verifies that the converted data does not change the business requirements or cause the system to behave differently. (A minimal before-and-after comparison is sketched after this list.)

  2. Database verification—Prior to conversion, the new database needs to be reviewed to verify that it is designed properly, satisfies the business needs, and that the support center and database administrators are trained to support it.

  3. Data cleanup—Before the data is converted to the new system, the old data needs to be examined to verify that inaccuracies or discrepancies in the data are removed.

  4. Recovery plan—Roll-back procedures need to be in place before any conversion is attempted to restore the system to its previous state and undo the conversions.

  5. Synchronization—It must be verified that the conversion process does not interfere with normal operations. Sensitive data, such as customer data, may be changing dynamically during conversions. One way to achieve this is to perform conversions during nonoperational hours.
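
As referenced under auditability, the following is a minimal before-and-after comparison sketch. It assumes the old and converted data can each be exported to CSV with the same column layout; the file names and the key column are illustrative assumptions.

    # Conversion audit sketch -- compares an export of the old data with an
    # export of the converted data by record count and per-key hash.
    import csv
    import hashlib

    def fingerprint(path, key_field="customer_id"):
        """Map each key to a hash of its full record (same column layout assumed)."""
        prints = {}
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                digest = hashlib.sha256(
                    "|".join(value.strip() for value in row.values()).encode()
                ).hexdigest()
                prints[row[key_field]] = digest
        return prints

    def audit(old_path, new_path):
        old, new = fingerprint(old_path), fingerprint(new_path)
        missing = old.keys() - new.keys()
        changed = {k for k in old.keys() & new.keys() if old[k] != new[k]}
        return missing, changed

    if __name__ == "__main__":
        missing, changed = audit("old_export.csv", "new_export.csv")
        print(f"missing records: {len(missing)}, changed records: {len(changed)}")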

Task 7: Design/Script the Usability Tests

The objective of usability testing is to determine how well the user will be able to use and understand the application. This includes the system functions, publications, help text, and procedures, to ensure that the user comfortably interacts with the system. Usability should be designed into the system, and usability testing should be performed as early as possible during development. Correcting serious usability problems late may be impractical, because by then the design is locked in and a major redesign of the system is often required, which may be economically infeasible.

Some of the usability problems the tester should look for include the following:

  1. ■ Overly complex functions or instructions

  2. ■ Difficult installation procedures

  3. ■ Poor error messages, for example, “syntax error”

  4. ■ Syntax difficult to understand and use

  5. ■ Nonstandardized GUI interfaces

  6. ■ User forced to remember too much information

  7. ■ Difficult log-in procedures

  8. ■ Help text not context sensitive or not detailed enough

  9. ■ Poor linkage to other systems

  10. ■ Unclear defaults

  11. ■ Interface too simple or too complex

  12. ■ Inconsistency of syntax, format, and definitions

  13. ■ User not provided with clear acknowledgment of all inputs

Task 8: Design/Script the Documentation Tests

The objective of documentation testing is to verify that the user documentation is accurate and to ensure that the manual procedures work correctly. Documentation testing has several advantages, including improving the usability, reliability, maintainability, and installability of the system. In each of these areas, testing the documentation helps uncover deficiencies in the system or makes the system more usable.

Documentation testing also reduces customer support costs; when customers can figure out answers to their questions by reading the documentation, they are not forced to call the help desk.

The tester verifies the technical accuracy of the documentation to ensure that it agrees with and describes the system accurately. He or she needs to assume the user’s point of view and carry out the steps described in the documentation.

Some tips and suggestions for the documentation tester include the following:

  1. ■ Use documentation as a source of many test cases.

  2. ■ Use the system exactly as the documentation describes it should be used.

  3. ■ Test every hint or suggestion.

  4. ■ Incorporate defects into the defect-tracking database.

  5. ■ Test every online help hypertext link (see the link-check sketch after this list).

  6. ■ Test every statement of fact, and do not take anything for granted.

  7. ■ Work like a technical editor rather than a passive reviewer.

  8. ■ Perform a general review of the whole document first and then a detailed review.

  9. ■ Check all the error messages.

  10. ■ Test every example provided in the document.

  11. ■ Make sure all index entries have documentation text.

  12. ■ Make sure documentation covers all key user functions.

  13. ■ Make sure the reading style is not too technical.

  14. ■ Look for areas that are weaker than others and need more explanation.
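
As referenced in the hypertext-link item above, the following is a minimal link-check sketch for locally stored HTML help pages. It checks only relative file links, and the help directory name is an illustrative assumption.

    # Documentation link-check sketch -- scans local HTML help files and
    # reports hyperlinks whose target files do not exist.
    import pathlib
    from html.parser import HTMLParser

    class LinkCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value and not value.startswith(("http", "#", "mailto:")):
                        self.links.append(value)

    def check_help_links(help_dir="help"):
        broken = []
        for page in pathlib.Path(help_dir).glob("*.html"):
            collector = LinkCollector()
            collector.feed(page.read_text(encoding="utf-8"))
            for link in collector.links:
                target = (page.parent / link.split("#")[0]).resolve()
                if not target.exists():
                    broken.append((page.name, link))
        return broken

    if __name__ == "__main__":
        for page, link in check_help_links():
            print(f"broken link in {page}: {link}")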

Task 9: Design/Script the Backup Tests

The objective of backup testing is to verify the ability of the system to back up its data in the event of a software or hardware failure. This test is complementary to recovery testing and should be part of recovery test planning.

Some backup testing considerations include the following:

  1. ■ Backing up files and comparing the backup with the original (see the sketch after this list)

  2. ■ Archiving files and data

  3. ■ Complete system backup procedures

  4. ■ Checkpoint backups

  5. ■ System performance degradation during backup

  6. ■ Effect of backup on manual processes

  7. ■ Detection of “triggers” to back up the system

  8. ■ Security procedures during backup

  9. ■ Maintaining transaction logs during backup procedures
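
As referenced in the first item above, the following is a minimal sketch for comparing a backup with the original by file checksum; the directory names are illustrative assumptions.

    # Backup verification sketch -- hashes every file in the original
    # directory and confirms an identical copy exists in the backup.
    import hashlib
    import pathlib

    def digest(path):
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def verify_backup(original_dir, backup_dir):
        original, backup = pathlib.Path(original_dir), pathlib.Path(backup_dir)
        mismatches = []
        for src in original.rglob("*"):
            if src.is_file():
                copy = backup / src.relative_to(original)
                if not copy.exists() or digest(copy) != digest(src):
                    mismatches.append(str(src))
        return mismatches

    if __name__ == "__main__":
        bad = verify_backup("data", "backup/data")
        print("backup verified" if not bad else f"mismatched files: {bad}")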

Task 10: Design/Script the Recovery Tests

The objective of recovery testing is to verify the system’s ability to recover from a software or hardware failure. This test verifies the contingency features of the system for handling interruptions and returning to specific points in the application’s processing cycle. The key questions for designing recovery tests are as follows:

  1. ■ Have the potential disasters and system failures, and their respective damages, been identified? Fire-drill brainstorming sessions can be an effective method of defining disaster scenarios.

  2. ■ Do the prevention and recovery procedures provide for adequate responses to failures? The plan procedures should be tested with technical reviews by subject matter experts and the system users.

  3. ■ Will the recovery procedures work properly when really needed? Simulated disasters need to be created with the actual system verifying the recovery procedures. This should involve the system users, the support organization, vendors, and so on.

Some recovery testing examples include the following:

  1. ■ Complete restoration of files that were backed up either during routine maintenance or error recovery

  2. ■ Partial restoration of file backup to the last checkpoint

  3. ■ Execution of recovery programs

  4. ■ Archive retrieval of selected files and data

  5. ■ Restoration when power supply is the problem

  6. ■ Verification of manual recovery procedures

  7. ■ Recovery by switching to parallel systems

  8. ■ System performance degradation during restoration

  9. ■ Security procedures during recovery

  10. ■ Ability to recover transaction logs

Task 11: Design/Script the Installation Tests

The objective of installation testing is to verify the ability to install the system successfully. Customers have to install the product on their systems. Installation is often the developers’ last activity and often receives the least amount of attention during development. Yet, it is the first activity that the customer performs when using the new system. Therefore, clear and concise installation procedures are among the most important parts of the system documentation.

Reinstallation procedures need to be included so that the installation process can be reversed and the previous environmental condition restored and validated. Also, the installation procedures need to document how the user can tune the system options and upgrade from a previous version.

Some key installation questions the tester needs to consider include the following:

  1. ■ Who is the user installer? For example, what technical capabilities are assumed?

  2. ■ Is the installation process documented thoroughly with specific and concise installation steps?

  3. ■ For which environments are the installation procedures supposed to work, for example, platforms, software, hardware, networks, or versions?

  4. ■ Will the installation change the user’s current environmental setup, for example, config.sys, and so on?

  5. ■ How does the installer know the system has been installed correctly? For example, is there an installation test procedure in place? (A minimal post-installation check is sketched after this list.)
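
A minimal post-installation check sketch, under the assumption that the installed product exposes a known file layout and a --version option; the paths, command name, and version string are illustrative assumptions.

    # Post-installation check sketch -- verifies that expected files are
    # present and that the installed command reports the expected version.
    import pathlib
    import subprocess

    EXPECTED_FILES = ["bin/app", "conf/app.ini", "docs/readme.txt"]
    EXPECTED_VERSION = "2.1"

    def check_installation(install_root):
        root = pathlib.Path(install_root)
        problems = [f for f in EXPECTED_FILES if not (root / f).exists()]
        try:
            result = subprocess.run([str(root / "bin/app"), "--version"],
                                    capture_output=True, text=True, timeout=30)
            if EXPECTED_VERSION not in result.stdout:
                problems.append(f"unexpected version output: {result.stdout.strip()}")
        except (OSError, subprocess.TimeoutExpired) as exc:
            problems.append(f"could not launch installed command: {exc}")
        return problems

    if __name__ == "__main__":
        issues = check_installation("/opt/app")
        print("installation OK" if not issues else issues)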

Task 12: Design/Script Other System Test Types

In addition to the foregoing system tests, the following system tests may also be required:

  1. API testing—Verify the system uses APIs correctly, for example, operating system calls.

  2. Communication testing—Verify the system’s communications and networks.

  3. Configuration testing—Verify that the system works correctly in different system configurations, for example, software, hardware, and networks.

  4. Database testing—Verify the database integrity, business rules, access, and refresh capabilities.

  5. Degraded system testing—Verify that the system performs properly under less than optimum conditions, for example, line connections down, and the like.

  6. Disaster recovery testing—Verify that the system recovery processes work correctly.

  7. Embedded system test—Verify systems that operate on low-level devices, such as video chips.

  8. Facility testing—Verify that each stated requirement facility is met.

  9. Field testing—Verify that the system works correctly in the real environment.

  10. Middleware testing—Verify that the middleware software works correctly, for example, the common interfaces and accessibility among clients and servers.

  11. Multimedia testing—Verify the multimedia system features, which use video, graphics, and sound.

  12. Online help testing—Verify that the system’s online help features work properly.

  13. Operability testing—Verify system will work correctly in the actual business environment.

  14. Package testing—Verify that the installed software package works correctly.

  15. Parallel testing—Verify that the system behaves the same in the old and new versions.

  16. Port testing—Verify that the system works correctly on different operating systems and computers.

  17. Procedure testing—Verify that nonautomated procedures work properly, for example, operation, DBA, and the like.

  18. Production testing—Verify that the system will work correctly during actual ongoing production and not just in the test laboratory environment.

  19. Real-time testing—Verify systems in which time issues are critical and there are response time requirements.

  20. Reliability testing—Verify that the system meets its predefined reliability expectations, for example, mean time to failure (MTTF).

  21. Serviceability testing—Verify that service facilities of the system work properly, for example, mean time to debug a defect and maintenance procedures.

  22. SQL testing—Verify the queries, data retrievals, and updates.

  23. Storage testing—Verify that the system storage requirements are met, for example, sizes of spill files and amount of main or secondary storage used.

Step 3: Review/Approve System Tests

Task 1: Schedule/Conduct the Review

The system test plan review should be scheduled well in advance of the actual review, and the participants should have the latest copy of the test plan.

As with any interview or review, certain elements must be present. The first is defining what will be discussed; the second is discussing the details; and the third is summarization. The final element is timeliness. The reviewer should state up front the estimated duration of the review and set the ground rule that if time expires before completing all items on the agenda, a follow-on review will be scheduled.

The purpose of this task is for development and the project sponsor to agree and accept the system test plan. If there are any suggested changes to the test plan during the review, they should be incorporated into the test plan.

Task 2: Obtain Approvals

Approval is critical in a testing effort because it ensures that testing, development, and the sponsor agree on the plan. The best approach is a formal sign-off procedure for the system test plan; if one is in place, use the management approval sign-off forms. However, if a formal agreement procedure is not in place, send a memo to each key participant, including at least the project manager, development manager, and sponsor. Attach the latest test plan, point out that all their feedback comments have been incorporated, and state that if you do not hear from them, it is assumed that they agree with the plan. Finally, indicate that in a spiral development environment the system test plan will evolve with each iteration but that you will include them in any modification.

Step 4: Execute the System Tests

Task 1: Regression Test the System Fixes

The purpose of this task is to retest the system tests that discovered defects in the previous system test cycle for this build. The technique used is regression testing. Regression testing is a technique that detects spurious errors caused by software modifications or corrections.

A set of test cases must be maintained and available throughout the entire life of the software. The test cases should be complete enough that all the software’s functional capabilities are thoroughly tested. The question then arises of how to locate the test cases needed to retest defects discovered during the previous test spiral. An excellent mechanism is the retest matrix.

As described earlier, a retest matrix relates test cases to functions (or program units). A check entry in the matrix indicates that the test case is to be retested when the function (or program unit) has been modified due to enhancements or corrections. The absence of an entry indicates that the test does not need to be retested. The retest matrix can be built before the first testing spiral, but needs to be maintained during subsequent spirals. As functions (or program units) are modified during a development spiral, existing or new test cases need to be created and checked in the retest matrix in preparation for the next test spiral. Over time with subsequent spirals, some functions (or program units) may be stable, with no recent modifications. Selective removal of check entries should be considered, and undertaken between testing spirals.
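
A retest matrix can also be kept in a simple machine-readable form so that the tests to rerun can be selected automatically. The following sketch is illustrative only; the test case identifiers and function names are assumptions.

    # Retest matrix sketch -- relates test cases to the functions they
    # cover; given the functions modified in this spiral, it selects the
    # test cases to re-execute.
    RETEST_MATRIX = {
        "TC-101": {"create_order", "price_order"},
        "TC-102": {"price_order", "apply_discount"},
        "TC-103": {"ship_order"},
        "TC-104": {"create_order", "ship_order"},
    }

    def select_retests(modified_functions):
        """Return test cases with a check entry for any modified function."""
        return sorted(tc for tc, funcs in RETEST_MATRIX.items()
                      if funcs & set(modified_functions))

    if __name__ == "__main__":
        # Functions changed by the latest corrections.
        print(select_retests(["price_order"]))   # -> ['TC-101', 'TC-102']

Removing a check entry for a stabilized function then amounts to deleting the function name from the matrix between spirals.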

Task 2: Execute the New System Tests

The purpose of this task is to execute new system tests that were created at the end of the previous system test cycle. In the previous spiral, the testing team updated the function/GUI, system fragment, and acceptance tests in preparation for the current testing spiral. During this task, those tests are executed.

Task 3: Document the System Defects

During system test execution, the results of the testing must be reported in the defect-tracking database. These defects are typically related to individual tests that have been conducted. However, variations to the formal test cases often uncover other defects. The objective of this task is to produce a complete record of the defects. If the execution step has been recorded properly, the defects have already been recorded on the defect-tracking database. If the defects are already recorded, the objective of this step becomes to collect and consolidate the defect information.

Tools can be used to consolidate and record defects depending on the test execution methods. If the defects are recorded on paper, the consolidation involves collecting and organizing the papers. If the defects are recorded electronically, search features can easily locate duplicate defects.
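
If defect records are available electronically, duplicate candidates can be flagged with a simple grouping pass such as the following sketch; the record fields and the matching rule are illustrative assumptions, not the behavior of any particular defect-tracking tool.

    # Defect consolidation sketch -- groups defect records whose normalized
    # summaries match, flagging likely duplicates for review.
    from collections import defaultdict

    defects = [
        {"id": 101, "summary": "Crash when saving empty order"},
        {"id": 102, "summary": "crash when saving EMPTY order "},
        {"id": 103, "summary": "Wrong total on discounted invoice"},
    ]

    def likely_duplicates(records):
        groups = defaultdict(list)
        for record in records:
            key = " ".join(record["summary"].lower().split())
            groups[key].append(record["id"])
        return {key: ids for key, ids in groups.items() if len(ids) > 1}

    if __name__ == "__main__":
        print(likely_duplicates(defects))   # -> {'crash when saving empty order': [101, 102]}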
