Chapter 15

Test Case Design (Do)

You will recall that in the spiral development environment, software testing is described as a continuous improvement process that must be integrated into a rapid application development methodology. Deming’s continuous improvement process using the PDCA model is applied to the software testing process. We are now in the Do part of the spiral model (see Figure 15.1).

Figure 15.2 outlines the steps and tasks associated with the Do part of spiral testing. Each step and task is described, and valuable tips and techniques are provided.

Step 1: Design Function Tests

Task 1: Refine the Functional Test Requirements

At this point, the functional specification should have been completed. It consists of the hierarchical functional decomposition, the functional window structure, the window standards, and the minimum system requirements of the system to be developed. An example of windows standards is the Windows 2000 GUI Standards. A minimum system requirement could be the following: Windows 2000, a Pentium IV microprocessor, 1 GB RAM, 40 GB disk space, and a 56 kbps modem.

A functional breakdown consists of a list of business functions, hierarchical listing, group of activities, or set of user profiles defining the basic functions of the system and how the user will use it. A business function is a discrete controllable aspect of the business and the smallest component of a system. Each should be named and described with a verb–object paradigm. The criteria used to determine the successful execution of each function should be stated. The functional hierarchy serves as the basis for function testing, in which there will be at least one test case for each lowest-level function. Examples of functions include the following: approve customer credit, handle order, create invoice, order components, receive revenue, pay bill, purchase items, and so on. Taken together, the business functions constitute the total application including any interfaces. A good source of these functions (in addition to the interview itself) is a process decomposition or data flow diagram, or CRUD matrix, which should be requested during the information-gathering interview.
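The rule that every lowest-level function in the hierarchy must have at least one test case can be checked mechanically. The following is a minimal sketch of that idea; the hierarchy, function names, and test-case format are illustrative assumptions, not taken from any specification:

```python
# Hypothetical functional decomposition: nested dicts, where an empty dict
# marks a lowest-level (leaf) business function. Names use the verb-object
# paradigm described in the text and are purely illustrative.
FUNCTIONAL_HIERARCHY = {
    "Order Processing": {
        "Handle Order": {},
        "Approve Customer Credit": {},
    },
    "Billing": {
        "Create Invoice": {},
        "Receive Revenue": {},
    },
}

def leaf_functions(tree):
    """Yield every lowest-level function in the hierarchy."""
    for name, children in tree.items():
        if children:
            yield from leaf_functions(children)
        else:
            yield name

def untested_functions(tree, test_cases):
    """Return leaf functions with no test case mapped to them."""
    covered = {tc["function"] for tc in test_cases}
    return [f for f in leaf_functions(tree) if f not in covered]

test_cases = [{"id": "TC-01", "function": "Handle Order"}]
print(untested_functions(FUNCTIONAL_HIERARCHY, test_cases))
```

A check like this can be rerun each spiral as the decomposition grows, flagging any newly added leaf function that has not yet acquired a test case.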


Figure 15.1   Spiral testing and continuous improvement.

The requirements serve as the basis for creating test cases. The following quality assurance test checklists can be used to ensure that the requirements are clear and comprehensive:

  1. Appendix E22: Clarification Request, which can be used to document questions that may arise while the tester analyzes the requirements.

  2. Appendix F25: Ambiguity Review Checklist, which can be used to assist in the review of a functional specification for structural ambiguity (not to be confused with content reviews).

  3. Appendix F26: Architecture Review Checklist, which can be used to review the architecture for completeness and clarity.

  4. Appendix F27: Data Design Review Checklist, which can be used to review the logical and physical design for clarity and completeness.

  5. Appendix F28: Functional Specification Review Checklist, which can be used to review a functional specification for content completeness and clarity (not to be confused with ambiguity reviews).

  6. Appendix F29: Prototype Review Checklist, which can be used to review a prototype for content completeness and clarity.

  7. Appendix F30: Requirements Review Checklist, which can be used to verify that the testing project requirements are comprehensive and complete.

  8. Appendix F31: Technical Design Review Checklist, which can be used to review the technical design for clarity and completeness.

A functional breakdown is used to illustrate the processes in a hierarchical structure showing successive levels of detail. It is built iteratively as processes and nonelementary processes are decomposed (see Figure 15.3).


Figure 15.2   Test design (steps/tasks).


Figure 15.3   Functional breakdown.

A data flow diagram shows processes and the flow of data among these processes. It is used to define the overall data flow through a system and consists of external agents that interface with the system, processes, data flow, and stores depicting where the data is stored or retrieved. A data flow diagram should be reviewed, and each major and leveled function should be listed and organized into a hierarchical list.

A CRUD matrix, or association matrix, links data and process models. It identifies and resolves matrix omissions and conflicts and helps refine the data and process models, as necessary.
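Identifying omissions in a CRUD matrix can be illustrated with a small sketch. The entities, processes, and cell values below are illustrative assumptions; the check simply finds entities on which no process ever performs a given operation:

```python
# Hypothetical CRUD (association) matrix: each (process, entity) cell lists
# the operations performed (Create, Read, Update, Delete). All names and
# cell values are illustrative.
CRUD = {
    ("Handle Order", "Order"): "CR",
    ("Create Invoice", "Invoice"): "C",
    ("Create Invoice", "Order"): "R",
    ("Pay Bill", "Invoice"): "RU",
}

def missing_operation(crud, op):
    """Entities on which no process performs the given operation,
    a typical matrix omission to resolve with the data/process models."""
    entities = {entity for _, entity in crud}
    having = {entity for (_, entity), ops in crud.items() if op in ops}
    return sorted(entities - having)

print(missing_operation(CRUD, "D"))  # → ['Invoice', 'Order']
```

Here no process deletes either entity, which may be a genuine model omission or a deliberate retention rule; the matrix review is what decides which.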

A functional window structure describes how the functions will be implemented in the windows environment. Figure 15.4 shows a sample functional window structure for order processing.


Figure 15.4   Functional window structure.

Task 2: Build a Function/Test Matrix

The function/test matrix cross-references the tests to the functions. This matrix provides proof of the completeness of the test strategies, illustrating in graphic format which tests exercise which functions. (See Table 15.1 and Appendix E5, “Function/Test Matrix,” for more details.)

The matrix is used as a control sheet during testing and can also be used during maintenance. For example, if a function is to be changed, the maintenance team can refer to the function/test matrix to determine which tests need to be run or changed. The business functions are listed vertically, and the test cases are listed horizontally. The test case name is recorded on the matrix along with the number. (Also see Appendix E24, “Test Condition versus Test Case,” Matrix I, which can be used to associate a requirement with each condition that is mapped to one or more test cases.)

It is also important to differentiate those test cases that are manual from those that are automated. One way to accomplish this is to come up with a naming standard that will highlight an automated test case; for example, the first character of the name is “A.”

Table 15.1 shows an example of a function/test matrix.

Step 2: Design GUI Tests

The goal of a good graphical user interface (GUI) design should be consistency in “look and feel” for the users of the application. Good GUI design has two key components: interaction and appearance. Interaction relates to how the user interacts with the application. Appearance relates to how the interface looks to the user.

GUI testing involves confirming that the navigation is correct; for example, when an icon, menu choice, or radio button is clicked, the desired response occurs. The following are some good GUI design principles the tester should look for while testing the application.

Ten Guidelines for Good GUI Design

  1. Involve users.

  2. Understand the user’s culture and experience.

  3. Prototype continuously to validate the requirements.

  4. Let the user’s business workflow drive the design.

  5. Do not overuse or underuse GUI features.

  6. Create the GUI, help files, and training concurrently.

  7. Do not expect users to remember secret commands or functions.

  8. Anticipate mistakes, and do not penalize the user for making them.

  9. Continually remind the user of the application status.

  10. Keep it simple.

Table 15.1   Function/Test Matrix


Task 1: Identify the Application GUI Components

GUI provides multiple channels of communication using words, pictures, animation, sound, and video. Five key foundation components of the user interface are windows, menus, forms, icons, and controls.

  1. Windows—In a windowed environment, all user interaction with the application occurs through the windows. These include a primary window, along with any number of secondary windows generated from the primary one.

  2. Menus—Menus come in a variety of styles and forms. Examples include action menus (push button, radio button), pull-down menus, pop-up menus, option menus, and cascading menus.

  3. Forms—Forms are windows or screens into which the user can add information.

  4. Icons—Icons, or “visual push buttons,” are valuable for instant recognition, ease of learning, and ease of navigation through the application.

  5. Controls—A control is a component that appears on a screen and allows the user to interact with the application; each control is associated with a corresponding action. Controls include menu bars, pull-down menus, cascading menus, pop-up menus, push buttons, check boxes, radio buttons, list boxes, and drop-down list boxes.

A design approach to GUI test design is to first define and name each GUI component by name within the application, as shown in Table 15.2. In the next step, a GUI component checklist is developed that can be used to verify each component in this table. (Also see Appendix E6, “GUI Component Test Matrix.”)
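An inventory of this kind can be kept as simple structured data and validated against the five foundation component types. The window and component names below are illustrative assumptions:

```python
# Hypothetical GUI component inventory, in the spirit of a GUI component
# test matrix: each component is named, categorized into one of the five
# foundation types, and flagged manual vs. automated. Names are illustrative.
GUI_COMPONENT_TYPES = {"window", "menu", "form", "icon", "control"}

components = [
    {"name": "Order Entry", "type": "window", "automated": True},
    {"name": "File Menu", "type": "menu", "automated": False},
    {"name": "Customer Form", "type": "form", "automated": True},
]

def invalid_components(items):
    """Components whose category is not one of the five foundation types."""
    return [c["name"] for c in items if c["type"] not in GUI_COMPONENT_TYPES]

print(invalid_components(components))  # → []
```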

Task 2: Define the GUI Tests

In the previous task, the application GUI components were defined, named, and categorized in the GUI component test matrix. In the present task, a checklist is developed against which each GUI component is verified. The list should cover all possible interactions and may or may not apply to a particular component. Table 15.3 is a partial list of the items to check. (See Appendix E23, “Screen Data Mapping,” which can be used to document the properties of the screen data, and Appendix F32, “Test Case Preparation Review Checklist,” which can be used to ensure that test cases have been prepared as per specifications.)

In addition to the GUI component checks, if there is a GUI design standard, it should be verified as well. GUI standards are essential to ensure that the internal rules of construction are followed to achieve the desired level of consistency. Some of the typical GUI standards that should be verified include the following:

  1. Forms “enterable” and display-only formats

  2. Wording of prompts, error messages, and help features

  3. Use of color, highlight, and cursors

    Table 15.2   GUI Component Test Matrix

  4. Screen layouts

  5. Function and shortcut keys, or “hot keys”

  6. Consistent placement of elements on the screen

  7. Logical sequence of objects

  8. Consistent font usage

  9. Consistent color usage

It is also important to differentiate manual from automated GUI test cases. One way to accomplish this is to use an additional column in the GUI component matrix that indicates if the GUI test is manual or automated.
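Because a checklist item "may or may not apply to a particular component," it helps to record which component types each check covers. The checklist items and applicability rules below are illustrative assumptions:

```python
# Hypothetical GUI checklist with per-item applicability: each check lists
# the component types it applies to. Items and rules are illustrative.
CHECKLIST = {
    "tab order correct": {"form", "window"},
    "default button highlighted": {"window"},
    "mnemonic unique within menu": {"menu"},
}

def applicable_checks(component_type):
    """Checklist items that apply to the given component type."""
    return sorted(item for item, types in CHECKLIST.items()
                  if component_type in types)

print(applicable_checks("window"))  # → ['default button highlighted', 'tab order correct']
print(applicable_checks("icon"))   # → []
```

Generating the per-component worklist this way keeps the GUI component test matrix honest: a component is only "verified" once every applicable item has been exercised against it.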

Step 3: Define the System/Acceptance Tests

Task 1: Identify Potential System Tests

System testing is the highest level of testing; it evaluates the functionality as a total system, its performance, and its overall fitness for use. This test is usually performed by the internal organization and is oriented to the system’s technical issues, whereas acceptance testing is a more user-oriented test.

Systems testing consists of one or more tests that are based on the original objectives of the system that were defined during the project interview. The purpose of this task is to select the system tests that will be performed, not how to implement the tests. Some common system test types include the following:

  1. Performance testing—Verifies and validates that the performance requirements have been met; measures response times, transaction rates, and other time-sensitive requirements.

    Table 15.3   GUI Component Checklist


  2. Security testing—Evaluates the presence and appropriate functioning of the security of the application to ensure the integrity and confidentiality of the data.

  3. Volume testing—Subjects the application to heavy volumes of data to determine whether it can handle them.

  4. Stress testing—Investigates the behavior of the system under conditions that overload its resources. Of particular interest is the impact that this has on system processing time.

  5. Compatibility testing—Tests the compatibility of the application with other applications or systems.

  6. Conversion testing—Verifies the conversion of existing data and the loading of a new database.

  7. Usability testing—Determines how well the user will be able to use and understand the application.

  8. Documentation testing—Verifies that the user documentation is accurate and ensures that the manual procedures work correctly.

  9. Backup testing—Verifies the ability of the system to back up its data in the event of a software or hardware failure.

  10. Recovery testing—Verifies the system’s ability to recover from a software or hardware failure.

  11. Installation testing—Verifies the ability to install the system successfully.

Task 2: Design System Fragment Tests

System fragment tests are sample subsets of full system tests that can be performed during each spiral loop. The objective of doing a fragment test is to provide early warning of pending problems that would arise in the full system test. Candidate fragment system tests include function, performance, security, usability, documentation, and procedure. Some of these fragment tests should have formal tests performed during each spiral, whereas others should be part of the overall testing strategy. Nonfragment system tests include installation, recovery, conversion, and the like, which probably will not be performed until the formal system test.

Function testing on a system level occurs during each spiral as the system is integrated. As new functionality is added, test cases need to be designed, implemented, and tested during each spiral.

Typically, security mechanisms are introduced fairly early in the development. Therefore, a set of security tests should be designed, implemented, and tested during each spiral as more features are added.

Usability is an ongoing informal test during each spiral and should always be part of the test strategy. When a usability issue arises, the tester should document it in the defect-tracking system. A formal type of usability test is the end user’s review of the prototype, which should occur during each spiral.

Documentation (such as online help) and procedures are also ongoing informal tests. These should be developed in parallel with formal system development during each spiral and not be put off until a formal system test. This will avoid last-minute surprises. As new features are added, documentation and procedure tests should be designed, implemented, and tested during each spiral.

Some performance testing should occur during each spiral at a noncontended unit level, that is, one user. Baseline measurements should be performed on all key functions as they are added to the system. A baseline measurement is a measurement taken for the specific purpose of determining the initial value of the state or performance measurement. During subsequent spirals, the performance measurements can be repeated and compared to the baseline. Table 15.4 provides an example of baseline performance measurements.
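Comparing later spirals against the baseline can be sketched as a simple per-function check. The function names, timings, and the 20 percent regression threshold below are illustrative assumptions, not from the text:

```python
# Hypothetical baseline performance comparison: single-user (noncontended)
# response times per key function, in seconds. The baseline values, current
# measurements, and 20% threshold are all illustrative.
BASELINE = {"Handle Order": 1.2, "Create Invoice": 0.8}

def regressions(baseline, current, threshold=0.20):
    """Functions whose response time degraded beyond the threshold
    relative to the baseline measurement."""
    return [f for f, t in current.items()
            if f in baseline and t > baseline[f] * (1 + threshold)]

current = {"Handle Order": 1.3, "Create Invoice": 1.1}
print(regressions(BASELINE, current))  # → ['Create Invoice']
```

Here Handle Order slowed but stayed within the threshold, while Create Invoice degraded enough to warrant investigation before the formal system-level performance test.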

Task 3: Identify Potential Acceptance Tests

Acceptance testing is an optional user-run test that demonstrates the ability of the application to meet the user’s requirements. The motivation for this test is to demonstrate rather than to be destructive, that is, to show that the system works. Less emphasis is placed on technical issues, and more is placed on the question of whether the system is a good business fit for the end user. The test is usually performed by users, if performed at all. About 20 percent of the time, this test is rolled into the system test. If performed, acceptance tests typically are a subset of the system tests. However, the users sometimes define “special tests,” such as intensive stress or volume tests, to stretch the limits of the system even beyond what was tested during the system test.

Step 4: Review/Approve Design

Task 1: Schedule/Prepare for Review

The test design review should be scheduled well in advance of the actual review, and the participants should have the latest copy of the test design.

As with any interview or review, it should contain certain elements. The first is defining what will be discussed, or “talking about what we are going to talk about.” The second is discussing the details, or “talking about it.” The third is summarization, or “talking about what we talked about.” The final element is timeliness. The reviewer should state up front the estimated duration of the review and set the ground rule that if time expires before completing all items on the agenda, a follow-on review will be scheduled.

The purpose of this task is for development and the project sponsor to agree and accept the test design. If there are any suggested changes to the test design during the review, they should be incorporated into the design.

Task 2: Obtain Approvals

Approval is critical in a testing effort, because it helps provide the necessary agreements among testing, development, and the sponsor. The best approach is a formal sign-off procedure for the test design. If this is the case, use the management approval sign-off forms. However, if a formal agreement procedure is not in place, send a memo to each key participant, including at least the project manager, development manager, and sponsor. In the document, attach the latest test design and point out that all their feedback comments have been incorporated and that if you do not hear from them, it is assumed that they agree with the design. Finally, indicate that in a spiral development environment, the test design will evolve with each iteration but that you will include them in any modification.

Table 15.4   Baseline Performance Measurements

