Chapter 11

Static Testing and Dynamic Testing the Code

The program unit design is the detailed design in which specific algorithmic and data structure choices are made. Specifying the detailed flow of control makes the design straightforward to translate into program code. The coding phase is that translation of the detailed design into executable code in a programming language.

Testing Coding with Technical Reviews

The coding phase produces executable source modules. Good programming rests on well-defined programming standards, which should address commenting, unsafe programming constructs, program layout, defensive programming, and so on. Commenting standards define how a program should be documented and to what level or degree. Unsafe programming constructs are practices that can make a program hard to maintain; an example is the goto statement. Program layout standards define how a program should be laid out on a page, how control constructs are indented, and how variables are initialized. Defensive programming standards describe the mandatory components of the defensive programming strategy; an example is error condition handling that transfers control to a common error routine.
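
As a concrete illustration, the sketch below shows what such a defensive programming standard might look like in practice. It is a minimal sketch in Python; the function names are hypothetical, and every error condition transfers control to a single common error routine, as the standard above mandates.

import logging

logging.basicConfig(level=logging.ERROR)


def common_error_routine(module: str, message: str) -> None:
    """Single point of error handling, as a coding standard might mandate."""
    logging.error("[%s] %s", module, message)
    raise RuntimeError(f"{module}: {message}")


def compute_average(values: list[float]) -> float:
    """A defensively coded unit: it checks its preconditions before computing."""
    if not values:
        common_error_routine("compute_average", "empty input list")
    if not all(isinstance(v, (int, float)) for v in values):
        common_error_routine("compute_average", "non-numeric value in input")
    return sum(values) / len(values)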

Table 11.1   Coding Phase Defect Recording


Static analysis techniques, such as structured walkthroughs and inspections, are used to ensure the proper form of the program code and documentation. This is accomplished by checking adherence to coding and documentation conventions and type checking.

Each defect uncovered during the coding phase review should be documented, categorized, recorded, presented to the design team for correction, and referenced to the specific document in which the defect was noted. Table 11.1 shows a sample coding phase defect recording form (see Appendix F5, “Coding Phase Defect Checklist,” for more details).

Executing the Test Plan

By the end of this phase, all the items in each section of the test plan should have been completed. The actual testing of software is accomplished through the test data in the test plan developed during the requirements, logical design, physical design, and program unit design phases. Because results have been specified in the test cases and test procedures, the correctness of the executions is ensured from a static test point of view; that is, the tests have been reviewed manually.


Figure 11.1   Executing the tests.

Dynamic testing, or time-dependent techniques, involves executing a specific sequence of instructions with the computer. These techniques are used to study the functional and computational correctness of the code.

Dynamic testing proceeds in the opposite order of the development life cycle. It starts with unit testing to verify each program unit independently and then proceeds to integration, system, and acceptance testing. After acceptance testing has been completed, the system is ready for operation and maintenance. Figure 11.1 briefly describes each testing type.

Unit Testing

Unit testing is the most basic level of testing. It focuses separately on the smaller building blocks of a program or system and is the process of executing each module to confirm that it performs its assigned function. The advantage of unit testing is that it permits the testing and debugging of small units, providing a better way to manage the integration of the units into larger units. In addition, testing a smaller unit of code keeps the number of test cases needed to exercise the code's logic thoroughly within practical limits. Unit testing also facilitates automated testing, because the behavior of small units can be captured and played back with maximum reusability. A unit can be one of several types of application software: the module itself, GUI components such as windows, menus, and functions, batch programs, online programs, or stored procedures.
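
For example, the following is a minimal unit test sketch using Python's standard unittest module; compute_average is a hypothetical unit under test, and the three tests confirm that it performs its assigned function.

import unittest


def compute_average(values):
    """Hypothetical unit under test."""
    if not values:
        raise ValueError("empty input list")
    return sum(values) / len(values)


class TestComputeAverage(unittest.TestCase):
    def test_typical_values(self):
        self.assertEqual(compute_average([2, 4, 6]), 4)

    def test_single_value(self):
        self.assertEqual(compute_average([5]), 5)

    def test_empty_input_raises(self):
        with self.assertRaises(ValueError):
            compute_average([])


if __name__ == "__main__":
    unittest.main()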

Integration Testing

After unit testing is completed, all modules must be integration-tested. During integration testing, the system is built up gradually by adding one or more modules at a time to the core of already-integrated modules. Groups of units are fully tested before system testing occurs. Because the modules have been unit-tested beforehand, they can be treated as black boxes, allowing integration testing to concentrate on the module interfaces. The goals of integration testing are to verify that each module performs correctly within the control structure and that the module interfaces are correct.

Incremental testing is performed by combining modules in steps. At each step one module is added to the program structure, and testing concentrates on exercising this newly added module. When it has been demonstrated that a module performs properly with the program structure, another module is added, and testing continues. This process is repeated until all modules have been integrated and tested.
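
The sketch below illustrates one such incremental step, assuming Python and hypothetical module names: a newly added order-processing function is exercised against the already-integrated inventory core, while a module that has not yet been integrated (shipping) is replaced by a stub, so the test concentrates on the interfaces.

import unittest
from unittest import mock


class InventoryCore:
    """Stand-in for the core of already-integrated modules."""

    def __init__(self):
        self.stock = {"widget": 10}

    def reserve(self, item: str, qty: int) -> bool:
        if self.stock.get(item, 0) >= qty:
            self.stock[item] -= qty
            return True
        return False


def process_order(core: InventoryCore, shipper, item: str, qty: int) -> str:
    """The newly added module being exercised by this integration step."""
    if not core.reserve(item, qty):
        return "rejected"
    shipper.ship(item, qty)  # interface to the not-yet-integrated module
    return "shipped"


class TestOrderIntegration(unittest.TestCase):
    def test_new_module_within_control_structure(self):
        core = InventoryCore()
        shipper = mock.Mock()  # stub for the unintegrated shipping module
        self.assertEqual(process_order(core, shipper, "widget", 3), "shipped")
        shipper.ship.assert_called_once_with("widget", 3)
        self.assertEqual(core.stock["widget"], 7)


if __name__ == "__main__":
    unittest.main()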

System Testing

After integration testing, the system is tested as a whole for functionality and fitness for use, based on the System/Acceptance Test Plan. Systems are fully tested in the computer operating environment before acceptance testing occurs. The sources of the system tests are the quality attributes specified in the Software Quality Assurance Plan. System testing is a set of tests that verifies these quality attributes and helps ensure that acceptance testing proceeds in a relatively trouble-free manner. System testing verifies that functions are carried out correctly and that certain nonfunctional characteristics are present. Examples include usability testing, performance testing, stress testing, compatibility testing, conversion testing, and document testing.
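
As one example, the sketch below shows a minimal performance check of a nonfunctional attribute (response time). The operation, the load loop, and the threshold are all hypothetical stand-ins for limits that would actually come from the Software Quality Assurance Plan.

import time


def handle_request(payload: str) -> str:
    """Stand-in for an end-to-end system operation."""
    return payload.upper()


def test_response_time_under_threshold(max_seconds: float = 0.1) -> None:
    """Verify a hypothetical response-time quality attribute under light load."""
    start = time.perf_counter()
    for _ in range(1000):  # a simple load loop as the stress element
        handle_request("order-123")
    elapsed = time.perf_counter() - start
    assert elapsed < max_seconds, f"took {elapsed:.3f}s, limit {max_seconds}s"


if __name__ == "__main__":
    test_response_time_under_threshold()
    print("performance check passed")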

Black-box testing is a technique that tests a program's functionality against its specifications. White-box testing is a technique in which paths of logic are tested to determine how well they produce predictable results. Gray-box testing combines the two approaches; it is a well-balanced compromise that is widely applied during system testing.
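
The contrast can be made concrete with one hypothetical function, as in the sketch below: the black-box test is derived purely from the specification ("shipping is free for orders of 100 or more"), while the white-box test is chosen after inspecting the code, so that both branches of the logic are exercised.

def shipping_fee(total: float) -> float:
    if total >= 100:  # the branch the white-box test targets
        return 0.0
    return 5.0


def test_black_box_specification():
    # Derived from the specification alone, without reading the code.
    assert shipping_fee(100) == 0.0
    assert shipping_fee(99.99) == 5.0


def test_white_box_branch_coverage():
    # Chosen after inspecting the code, to exercise both branches.
    assert shipping_fee(150.0) == 0.0  # true branch
    assert shipping_fee(10.0) == 5.0   # false branch


if __name__ == "__main__":
    test_black_box_specification()
    test_white_box_branch_coverage()
    print("both tests passed")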

Acceptance Testing

After system testing, acceptance testing certifies that the software system satisfies the original requirements. This test should not be performed until the software has successfully completed system testing. Acceptance testing is a user-run test that uses black-box techniques to test the system against its specifications. The end users are responsible for ensuring that all relevant functionality has been tested.

The acceptance test plan defines the procedures for executing the acceptance tests and should be followed as closely as possible. Acceptance testing continues even when errors are found, unless an error itself prevents continuation. Some projects do not require formal acceptance testing. This is true when the customer or user is satisfied with the other system tests, when timing requirements demand it, or when end users have been involved continuously throughout the development cycle and have been implicitly applying acceptance testing as the system is developed.

Acceptance tests are often a subset of one or more system tests. Two other ways to measure acceptance testing are as follows:

  1. Parallel Testing—A business-transaction-level comparison with the existing system to ensure that adequate results are produced by the new system.

  2. Benchmarks—A static set of results, produced either manually or from an existing system, is used as the expected results for the new system (a minimal comparison sketch follows this list).
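
The benchmark approach can be sketched as follows, assuming Python and illustrative transaction data: expected results captured from the existing system are held as a static set and compared, transaction by transaction, with the new system's output.

# Static benchmark results, e.g., captured from the existing system.
BENCHMARK = {
    "TXN-001": {"status": "approved", "amount": 150.00},
    "TXN-002": {"status": "declined", "amount": 75.50},
}


def new_system_process(txn_id: str) -> dict:
    """Stand-in for the new system's transaction processing."""
    results = {
        "TXN-001": {"status": "approved", "amount": 150.00},
        "TXN-002": {"status": "declined", "amount": 75.50},
    }
    return results[txn_id]


def run_benchmark_comparison() -> list:
    """Return the mismatches between the new system and the benchmark."""
    mismatches = []
    for txn_id, expected in BENCHMARK.items():
        actual = new_system_process(txn_id)
        if actual != expected:
            mismatches.append(f"{txn_id}: expected {expected}, got {actual}")
    return mismatches


if __name__ == "__main__":
    failures = run_benchmark_comparison()
    print("benchmark comparison:", "passed" if not failures else failures)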

Defect Recording

Each defect discovered during the foregoing tests should be documented so that it is properly recorded. A problem report is generated when a test procedure gives rise to an event that cannot be explained by the tester. The problem report documents the details of the event and includes at least the following items (see Appendix E12, “Defect Report,” for more details; a minimal record sketch follows the list):

  ■ Problem identification

  ■ Author

  ■ Release/build number

  ■ Open date

  ■ Close date

  ■ Problem area

  ■ Defect or enhancement

  ■ Test environment

  ■ Defect type

  ■ Who detected

  ■ How detected

  ■ Assigned to

  ■ Priority

  ■ Severity

  ■ Status
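
As a sketch only, the fields above might be captured in a record like the following; the field names, types, and defaults are illustrative, not a mandated schema.

from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class ProblemReport:
    problem_id: str                   # problem identification
    author: str
    release_build: str                # release/build number
    open_date: date
    close_date: Optional[date] = None
    problem_area: str = ""
    kind: str = "defect"              # defect or enhancement
    test_environment: str = ""
    defect_type: str = ""
    detected_by: str = ""             # who detected
    detection_method: str = ""        # how detected
    assigned_to: str = ""
    priority: str = "medium"
    severity: str = "minor"
    status: str = "open"


# Example usage with illustrative values:
report = ProblemReport(
    problem_id="PR-0042",
    author="tester1",
    release_build="2.3.1-b17",
    open_date=date(2024, 1, 15),
    problem_area="billing",
    detection_method="system test",
)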

Other test reports that communicate testing progress and results include the test case log, the test log summary report, and the system summary report.

A test case log documents the test cases for a test type to be executed. It also records the results of the tests, which provides the detailed evidence for the test log summary report and makes it possible to reconstruct the testing, if necessary. (See Appendix E9, “Test Case Log,” for more information.)
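
A test case log could be kept as simply as the following sketch, which appends one row per executed test case to a CSV file; the file name and column names are illustrative.

import csv
import os
from datetime import datetime

LOG_COLUMNS = ["test_case_id", "description", "executed_at", "result", "notes"]


def append_log_entry(path: str, case_id: str, description: str,
                     result: str, notes: str = "") -> None:
    """Append one executed test case to the log, creating the file if needed."""
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(LOG_COLUMNS)
        writer.writerow([case_id, description,
                         datetime.now().isoformat(timespec="seconds"),
                         result, notes])


append_log_entry("test_case_log.csv", "TC-101",
                 "login with valid credentials", "pass")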

A test log summary report documents the test cases from the testers' logs, whether in progress or completed, for status reporting and metrics collection. (See Appendix E10, “Test Log Summary Report.”)

A system summary report should be prepared for every major testing event. Sometimes it summarizes all the tests. It typically includes the following major sections: general information (describing the test objectives, test environment, references, etc.), test results and findings (describing each test), software functions and findings, and analysis and test summary. (See Appendix E11, “System Summary Report,” for more details.)
