Chapter 4

Transforming Requirements to Testable Test Cases

Introduction

Quality assurance (QA) is a holistic activity that spans the entire development and production process: monitoring and improving the process and ensuring that issues and bugs are found and fixed.

Software testing is a major component of the software development life cycle. Some organizations assign responsibility for testing to their test programmers or the QA department. Others outsource testing (see Section 5, Chapter 33, “On-Site/Offshore Model”). During the software testing process, QA project teams are typically a mix of developers, testers, and the business community who work closely together, sharing information and assigning tasks to one another.

The following section provides an overview of how to create test cases when “good” requirements do exist.

Software Requirements as the Basis of Testing

Would you build a house without an architecture and specific requirements? The answer is no, because reworking materials and labor is costly. Somehow, there is a prevalent notion that software development efforts are different: put something together, declare victory, and then spend a great deal of time fixing and reengineering the software. This is called “maintenance.” According to Standish Group statistics, American companies spend $84 billion annually on failed software projects and $138 billion on projects that significantly exceed their time and budget estimates or deliver reduced functionality.


Figure 4.1   Importance of good requirements. (Reference: Ivy Hooks.)

Figure 4.1 shows that the probability of project success (as measured by meeting its target cost) is greatest when 8 to 14 percent of the total project cost is invested in requirements activities.

Requirement Quality Factors

If software testing depends on good requirements, it is important to understand some of the key elements of quality requirements.

Understandable

Requirements must be understandable. Understandable requirements are organized in a manner that facilitates reviews. Some techniques to improve understandability include the following:

  1. ■ Organize requirements by their object, for example, customer, order, invoice.

  2. ■ User requirements should be organized by business process or scenario. This allows the subject matter expert to see if there is a gap in the requirements.

  3. ■ Separate functional from nonfunctional requirements, for example, functional versus performance.

  4. ■ Organize requirements by level of detail. This determines their impact on the system, for example, “the system shall be able to take an order” versus “the system shall be able to take a retail order from the point of sale.”

  5. ■ Write requirements in grammatically correct language and in a style that facilitates reviews. If the requirements are written in Microsoft Word, use the spell check option but beware of the context; that is, spell check may pass a word or phrase that is contextually inappropriate.

  6. ■ Use “shall” for requirements. Do not use “will” or “should.” These are goals, not requirements. Using nonimperative words such as these makes the implementation of the requirement optional, potentially increasing cost and schedule, reducing quality, and creating contractual misunderstandings.

Necessary

The requirement must also be necessary. Suppose the following requirement is included in a requirement specification: “The system shall be acceptable if it passes 100 test cases.” This describes a project process, not a requirement of the product, and does not belong in a requirement specification. A requirement must relate to the target application or system being built.

Modifiable

It must be possible to change requirements and associated information. The technique used to store requirements affects modifiability. For example, requirements kept in a word processor are much more difficult to modify than requirements kept in a requirements management tool such as CaliberRM or DOORS. However, for a very small project, the cost and learning curve of a requirements management tool may make the word processor the better option.

Consistency affects modifiability. Templates and glossaries for requirements make global changes possible, and templates should be structured to make the requirements visible, thus facilitating modifiability. A good practice is to label each requirement with a unique identifier. Requirements should also be stored in a central location where they can be found with ease. Any requirement dependencies should be noted as well; for example, requirement “Y” may depend on requirement “X.”

Nonredundant

There should be no duplicate requirements. Duplicates increase maintenance: every time a requirement changes, its duplicates must also be updated, and every such update is another opportunity to inject requirement errors.
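One lightweight way to hunt for duplicates is to normalize each requirement’s text and compare the results. The following Python sketch is illustrative only; the requirement IDs and wording are hypothetical, and a human reviewer must still judge near-duplicates that differ in meaning.

```python
# A minimal sketch for flagging requirements that differ only in
# punctuation, case, or spacing. IDs and texts are hypothetical.

def normalize(text):
    """Lowercase the text, drop punctuation, and collapse whitespace."""
    cleaned = "".join(c for c in text.lower() if c.isalnum() or c.isspace())
    return " ".join(cleaned.split())

def find_duplicates(requirements):
    """Return pairs of requirement IDs whose normalized text matches."""
    seen, duplicates = {}, []
    for req_id, text in requirements.items():
        key = normalize(text)
        if key in seen:
            duplicates.append((seen[key], req_id))
        else:
            seen[key] = req_id
    return duplicates

requirements = {
    "REQ-101": "The system shall validate the customer's e-mail address.",
    "REQ-117": "The system shall validate the customers e-mail address.",
    "REQ-120": "The system shall archive orders older than seven years.",
}
print(find_duplicates(requirements))  # [('REQ-101', 'REQ-117')]
```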

Terse

A good requirement contains no unnecessary verbiage or information. A tersely worded requirement gets right to the point; filler phrases such as “On the other hand,” “However,” and “In retrospect” add words without adding meaning.

Testable

It should be possible to verify or validate a testable requirement; that is, it should be possible to prove the intent of the requirement. Untestable requirements lend themselves to subjective interpretations by the tester. A best practice is to pretend that computers do not exist and ask yourself, could I test this requirement and know that it either works or does not?

Traceable

A requirement must also be traceable. Traceability is key to verifying that requirements have been met. Compound requirements are difficult to trace and may cause the product to fail testing. For example, “the system shall calculate retirement and survivor benefits” is a compound requirement; splitting it into a list of single requirements avoids misunderstanding and allows each requirement to be traced individually.

Within Scope

All requirements must fall within the area under consideration. The scope of a project is determined by all the requirements established for the project and is defined and refined as requirements are identified, analyzed, and baselined. A traceability matrix will assist in keeping requirements within scope.

Numerical Method for Evaluating Requirement Quality

A best practice to ensure quality requirements is to use a numerical measure rather than subjective qualifiers such as “poor, acceptable, good, and excellent.”

The first step of this technique is to create a checklist of the requirements quality factors that will be used in your requirements review. The second step is to weight each quality factor according to its importance. The total weight of all the factors will be 100. For example:

  1. ■ Quality factor 1 = 10

  2. ■ Quality factor 2 = 5

  3. ■ Quality factor 3 = 10

  4. ■ Quality factor 4 = 5

  5. ■ Quality factor 5 = 20

  6. ■ Quality factor 6 = 15

  7. ■ Quality factor 7 = 10

  8. ■ Quality factor 8 = 25

The total score for quality starts at 100. The amount for an unmet quality factor is subtracted from the total. For example, if all quality factors are met except Quality factor 5, 20 is subtracted from 100, resulting in a final score of 80%.
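The scheme above is simple enough to automate. The following Python sketch assumes the eight numbered factors map, in order, onto the eight quality factors named earlier in this chapter; that mapping is illustrative.

```python
# A minimal sketch of the weighted requirement-quality score described above.
# Mapping factor numbers to the named quality factors is an assumption.

weights = {
    "Understandable": 10,  # Quality factor 1
    "Necessary": 5,        # Quality factor 2
    "Modifiable": 10,      # Quality factor 3
    "Nonredundant": 5,     # Quality factor 4
    "Terse": 20,           # Quality factor 5
    "Testable": 15,        # Quality factor 6
    "Traceable": 10,       # Quality factor 7
    "Within scope": 25,    # Quality factor 8
}
assert sum(weights.values()) == 100  # the weights must total 100

def quality_score(unmet_factors):
    """Start at 100 and subtract the weight of each unmet quality factor."""
    return 100 - sum(weights[factor] for factor in unmet_factors)

print(quality_score(["Terse"]))  # 80, matching the example above
```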

Process for Creating Test Cases from Good Requirements

A technique is a process, style, and method of doing something. Appendix G describes 39 software testing techniques. Examples include black box, white box, equivalence class partitioning, etc. Techniques are used within a methodology.

A methodology or process is a philosophy, guide, or blueprint that provides methods and principles for the field employing it. In the context of information systems, methodologies are strategies with a strong focus on gathering information, planning, and design elements.

The following sections outline a useful methodology for extrapolating test cases from good requirements.

Step 1: Review the Requirements

Before writing test cases, the requirements need to be reviewed to ensure that they reflect the requirements’ quality factors.

An inspection is a formal, rigorous, manual peer review performed by a team, and it can discover many problems that individual reviewers cannot find on their own. Informal manual peer reviews are also useful, depending on the situation. Unfortunately, reviews of requirements are not always productive (see Section 2, “Waterfall Testing Review,” Chapter 6 for more details about inspections and other types of reviews).

Two popular tools that automate the requirements process include the following:

  1. ■ SmartCheck is commercially offered by Smartware Technologies, Inc. This automated document review tool locates anomalies and ambiguities within requirements or technical specifications based on words, word phrases, word categories, and complexity level. The tool has a glossary of words that research has shown to cause ambiguities and structural deficiencies. SmartCheck also allows users to edit and add their own words, phrases, and categories to the dictionary. Reports illustrate the frequency distribution for the 18 potential anomaly types, or by word or phrase. The tool is not intended to evaluate the correctness of the specified requirements; it is an aid to writing the requirements right, not to writing the right requirements.

    Images

    Figure 4.2   SmartCheck word/phrase distribution report.

    The following is an example of the results obtained by running SmartCheck. The quote is actually an excerpt from the U.S. Declaration of Independence. Although this example is not a software requirements specification, it does illustrate the point.

    In every stage of these Oppressions We have Petitioned for Redress in the most humble terms: Our repeated Petitions have been answered only by repeated injury. A Prince, whose character is thus [<-- a subordinate conjunction to connect ideas - consider rewording] marked by every act which may [<-- a potentially ambiguous condition - consider rewording] define a Tyrant, is unfit to be the rule [<-- a potentially ambiguous noun or variable] of a free people.

    The SmartCheck report in Figure 4.2 illustrates the distribution of words or phrases located on the basis of the 18 anomaly categories. The SmartCheck report in Figure 4.3 illustrates the distribution of the types of 18 anomaly categories. (Refer to http://www.smartwaretechnologies.com/ for more details).

  2. ■ The ARM (Automated Requirement Measurement) tool was developed by the Software Assurance Technology Center (SATC) at the NASA Goddard Space Flight Center as an early life-cycle tool for assessing requirements that are specified in natural language. The objective of the ARM tool is to provide measures that project managers can use to assess the quality of a requirements specification document. The tool scans the document for key words and phrases and generates a report file summarizing the specific quality indicators. (See http://sw-assurance.gsfc.nasa.gov/disciplines/quality/index.php for more information.) A sketch of the general scanning idea behind both tools appears after Figure 4.3.


Figure 4.3   SmartCheck anomaly-type report.
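Neither tool is required to apply the underlying idea: scan requirement text for words and phrases that research associates with ambiguity, and report their frequency by category. The following Python sketch illustrates that general approach only; the word lists and category names are assumptions, not the actual dictionaries used by SmartCheck or ARM.

```python
# An illustrative ambiguity scan in the spirit of SmartCheck and ARM.
# The indicator lists below are examples, not the tools' real glossaries.
import re
from collections import Counter

INDICATORS = {
    "weak verb": ["will", "should", "may", "might", "could"],
    "vague term": ["user-friendly", "fast", "flexible", "as appropriate"],
    "open-ended": ["etc", "and/or", "tbd", "to be determined"],
}

def scan(text):
    """Return per-category counts and the individual words/phrases found."""
    counts, findings = Counter(), []
    lowered = text.lower()
    for category, phrases in INDICATORS.items():
        for phrase in phrases:
            for _ in re.finditer(r"\b" + re.escape(phrase) + r"\b", lowered):
                counts[category] += 1
                findings.append((category, phrase))
    return counts, findings

requirement = "The system should respond fast and support exports as appropriate."
counts, findings = scan(requirement)
print(counts)    # Counter({'vague term': 2, 'weak verb': 1})
print(findings)  # [('weak verb', 'should'), ('vague term', 'fast'), ...]
```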

The following are some requirements review tips to improve the process:

  1. Prepare the reviewers—Provide the reviewers with the requirements before the actual review, and tell them what kind of input you are seeking. Give them guidance on how to study and analyze a requirements specification; for example, point to the sections that you want them to review.

    Give the reviewers a checklist of typical requirements errors so that they can focus their examination on those points (see several checklists in the appendices and on the CD provided with the book).

    Tell the reviewers how to behave during the review. Make sure the participants understand how to collaborate effectively and constructively. Tell them that there is no such thing as a stupid question.

  2. Invite the right reviewers—Determine the type of reviewers you need represented in your requirements reviews. Examples include developers, subject matter experts (SMEs), business analysts, and testers.

  3. Emphasize finding major problems—The real leverage from a review comes from finding major errors of commission and omission. Finding such errors can help you avoid extensive—and expensive—rework much later in the project.

  4. Ask the right questions—The following is a list of useful questions during the reviews:

    1. –   Does the software product have a clearly defined purpose and objectives?

    2. –   Are the characteristics of users (or user groups) of the product identified?

    3. –   Are all external interfaces of the software stated?

    4. –   Does each requirement have a unique identifier or label?

    5. –   Is each requirement simply stated and can it stand on its own?

    6. –   Are all the conditions identified?

    7. –   Are multiple actions identified?

    8. –   Are requirements organized into logical groupings?

    9. –   Are the requirements hierarchically organized?

    10. –   Are the requirements prioritized (see “Requirements Prioritization Model” on the CD that came with the book)?

    11. –   Are the types of requirements defined, for example, functional, performance, etc.?

    12. –   Are the requirements consistent and nonconflicting?

    13. –   Are the requirements written in an active voice?

    14. –   Are the requirements unambiguous?

    15. –   Are there references to unknown terms, for example, acronyms, abbreviations?

    16. –   Are the input and outputs correct and detailed?

    17. –   Do the requirements express what the customer really needs?

  5. Send out the revised requirements document—After the requirements errors have been corrected, send out the requirements document to the same participants for them to review individually or as a group.

Step 2: Write a Test Plan

A software test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help the whole team understand the “why” and “how” of product validation. It should be thorough enough to be useful but not so thorough that no one outside the test group will read it.

The task of test planning consists of the following:

  1. ■ Prioritizing quality goals for the release

  2. ■ Defining the testing activities to achieve those goals

  3. ■ Evaluating how well the activities support the goals

  4. ■ Planning the actions needed to carry out the activities

(See Appendix E and the CD that comes with this book for examples of test plans.)

Step 3: Identify the Test Suite

After the test plan has been completed and the requirements are “testable,” an effective way of transforming the requirements to test cases is to first design the test suites. A test suite, also known as a validation suite, is a collection of test cases that are intended to be used as input to a software program to show that it has some specified set of behaviors. Test suites are used to group similar test cases together, for example, Handle Orders.

A test suite often contains detailed instructions or goals for each collection of test cases and information on the system configuration to be used during testing. A group of test cases may also contain prerequisite states or steps, and descriptions of the following tests.

A test suite document is an organized table of contents for the test cases: it lists the names of all test cases. The suite can be organized by functionality, listing the major product features and then the test cases for each of those features, as shown in Table 4.1 (also see Appendix E5).

Another way is to build a table in which the rows are types of business objects and the columns are types of operations (see Table 4.2). Each cell in the grid lists test cases that test one type of operation for one type of object. For example, an Order System object is “Orders.” The Orders business object would have test cases for each of the following CRUD-type operations: adding an order, list all orders, editing orders, deleting orders, searching for orders, etc. The next row might contain the “Customer” business object and have test cases for almost all the same operations.

The advantage of using an organized list or grid is that it gives the big picture and helps identify any area that needs more work. Less visible business objects and operations, for example, “Create Coupons,” are easy to forget: it is obvious that shoppers use coupons, but it is easy to overlook testing the ability to create them. If such an operation is overlooked, there will be a clearly visible blank space in the test suite document. These clear indications of missing test cases allow one to improve the test suite sooner, make more realistic estimates of the testing time needed, and find more defects, which in turn allows the discovered defects to be fixed sooner and helps keep management expectations in sync with reality.
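A grid like Table 4.2 can also be kept as a simple data structure, which makes the blank cells impossible to miss. The sketch below is illustrative; the object names, operations, and test case names are hypothetical.

```python
# A minimal object-versus-operation test suite grid; empty cells are gaps.

objects = ["Order", "Customer", "Coupon"]
operations = ["Create", "List", "Edit", "Delete", "Search"]

# Each cell maps (object, operation) to the test cases covering it.
suite = {(obj, op): [] for obj in objects for op in operations}
suite[("Order", "Create")] = ["Add a retail order", "Add a wholesale order"]
suite[("Order", "List")] = ["List all orders"]
suite[("Customer", "Create")] = ["Add a customer"]

gaps = [cell for cell, cases in suite.items() if not cases]
print(f"{len(gaps)} of {len(suite)} cells have no test cases yet")
for obj, op in sorted(gaps):
    print(f"  missing: {op} {obj}")
```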

Step 4: Name the Test Cases

Having an organized system test suite makes it easier to list test cases because the task is broken down into many small, specific subtasks.

There may be some list items or grid cells that really should be empty. If you cannot think of any test cases for a part of the suite that logically should have some test cases, explicitly mark it as “TBD.”

The name of each test case should be a short phrase describing a general test situation. Use distinct test cases when different steps are needed to test each situation; a single test case can be used when the steps are the same and only the input values differ.

As you fill in the test suite outline, think of features or use cases that should be in the software requirements specification but are not there yet. Note any missing requirements in the requirements document as you go along.

At this point, you can already get a better feeling for the scope of the testing effort. You can already roughly prioritize the test cases. You are already starting to look at your requirements critically, and you may have identified missing or unclear requirements. Also, you can already estimate the level of specification-based test coverage that you will achieve (see “Test Case Prioritization Model” on the CD that came with the book).

Table 4.1   Function versus Test Cases


Table 4.2   Test Suite Identification Matrix


Step 5: Write Test Case Descriptions and Objectives

In Step 4, you may have generated approximately one dozen test case names on your first pass. That number will go up as you continue to make your testing more systematic. The advantage of having a large number of tests is that it usually increases the coverage.

The disadvantage of creating a big test suite is its sheer size: it could take a long time to fully specify every test case that you have mapped out, and the resulting document could become too large to maintain.

For each test case, write one or two sentences describing its purpose and objectives. The description should provide enough information so that you could come back to it after several weeks and recall the same ad hoc testing steps that you have in mind now. Later, when you actually write detailed steps in the test case, any team member can carry out the test the same way that you intended.

The act of writing the descriptions forces you to think a bit more about each test case. When describing a test case, you may realize that it should actually be split into two test cases, or merged with another test case. Again, make sure to note any requirements problems or questions that you uncover.

Step 6: Create the Test Cases

The next step is to write the test case steps and specify test data. This is where the testing techniques can help you define the test data and conditions. A rule of thumb is to create approximately ten test cases per day.

Focus on the test cases that seem most in need of additional detail. For example, select system test cases that cover the following:

  1. ■ High-priority-use cases or features

  2. ■ Software components that are currently available for testing

  3. ■ Features that must work properly before other features can be exercised

  4. ■ Features that are needed for product demos or screenshots

  5. ■ Requirements that need to be clearer

Each test case should be simple enough to clearly succeed or fail. Ideally, the steps of a test case are a simple sequence: set up the test situation, exercise the system with specific test inputs, and verify the correctness of the system outputs.

Systems that are highly testable tend to have a large number of simple test cases that follow the set up–exercise–verify pattern. For those test cases, a one-column format can clearly express the needed steps. However, not all test cases are simple. Sometimes it is impractical to test one requirement at a time. Instead, some system test cases may be longer scenarios that exercise several requirements and verify correctness at each step. For those test cases, a two-column format may be useful.

In the one-column format, each step is a brief verb phrase that describes the action the tester should take, for example, “Enter Username,” “Enter Password,” “Select Login,” and “See Home Page.” Verification of expected outputs is written using the verbs “observe” and “verify.” If multiple inputs are supplied, each of the corresponding outputs must be verified.

In the two-column format, each test case step has two parts. The test input is a verb phrase describing what the tester should do in that step. The expected output is a noun phrase describing all the output that the tester should observe at that step. (See Appendix E, “Test Templates,” and the templates in the CD that came with the book.)
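Although the steps above are written for manual execution, the same set up–exercise–verify pattern maps directly onto automated tests. The following sketch renders the one-column login example in Python’s unittest; the application object and its methods are hypothetical stand-ins for whatever interface actually drives the system under test.

```python
# The one-column login steps as an automated set up-exercise-verify test.
# FakeApp is a trivial stand-in so the example runs on its own.
import unittest

class FakeApp:
    def __init__(self):
        self.page = "login"
        self.fields = {}

    def enter(self, field, value):
        self.fields[field] = value

    def select(self, button):
        if button == "Login" and self.fields.get("username") and self.fields.get("password"):
            self.page = "home"

class LoginTest(unittest.TestCase):
    def test_valid_login_shows_home_page(self):
        app = FakeApp()                     # set up the test situation
        app.enter("username", "jsmith")     # Enter Username
        app.enter("password", "secret")     # Enter Password
        app.select("Login")                 # Select Login
        self.assertEqual(app.page, "home")  # verify: See Home Page

if __name__ == "__main__":
    unittest.main()
```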

If you only have one test input value for a given test case, then you could write that test data value directly into the step where it is used. However, many test cases will have a set of test data values that must all be used to adequately cover all possible inputs. Define and use test input variables. Each variable is defined with a set of its selected values, and then it is used in test case steps just as you would use a variable in a programming language. When carrying out the tests, the tester should repeat each test case with each possible combination of test variable values, or as many as are practical.
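As a sketch of how test input variables and their combinations might be enumerated (the variable names and values below are hypothetical):

```python
# Each test input variable is defined with a set of selected values; the
# test case is repeated for each combination, or for as many as practical.
from itertools import product

variables = {
    "payment_method": ["credit card", "check", "gift card"],
    "shipping": ["standard", "overnight"],
    "quantity": [1, 100],
}

names = list(variables)
for combination in product(*(variables[name] for name in names)):
    inputs = dict(zip(names, combination))
    print(inputs)  # each line would drive one repetition of the test steps

# 3 x 2 x 2 = 12 combinations; pairwise selection can trim the total when
# the full cross product is impractical.
```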

Carefully selecting test data is as important as defining the steps of the test case. The concepts of boundary conditions and equivalence partitions are key to good test data selection. Try these steps to select test data (a sketch follows the list):

  1. ■ Determine the set of all input values that can possibly be entered for a given input parameter.

  2. ■ Define the boundary between valid and invalid input values. For example, negative ages are impossible. You might also check for clearly unreasonable inputs. For example, an age entered as 200 is unrealistic.
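The following sketch applies both steps to the age example, assuming a valid range of 0 to 120; in practice the range would come from the requirement itself.

```python
# Equivalence class partitioning and boundary value analysis for an age field.
# The valid range 0-120 is an assumption for illustration.

VALID_MIN, VALID_MAX = 0, 120

# One representative value per equivalence class is usually enough.
equivalence_classes = {
    "invalid: below range": -5,   # negative ages are impossible
    "valid": 35,
    "invalid: above range": 200,  # clearly unreasonable input
}

# Boundary value analysis: test on and just either side of each boundary.
boundary_values = [VALID_MIN - 1, VALID_MIN, VALID_MIN + 1,
                   VALID_MAX - 1, VALID_MAX, VALID_MAX + 1]

for age in list(equivalence_classes.values()) + boundary_values:
    expected = "accept" if VALID_MIN <= age <= VALID_MAX else "reject"
    print(f"age={age:4d} -> expect {expected}")
```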

(See Appendix G, “Software Testing Techniques,” for more information on how to write test cases. Thirty-nine testing techniques are included.)

Step 7: Review the Test Cases

A suite of system test cases can find many defects, but still leave many other critical defects undetected. One clear way to guard against undetected defects is to increase the coverage of your test suite.

Although a suite of unit tests might be evaluated in terms of its implementation coverage, a suite of system test cases should instead be evaluated in terms of specification coverage. Implementation coverage measures the percentage of lines of code that are executed by the unit test cases. If there is a line of code that is never executed, then there could be an undetected defect on that line.

Specification coverage measures the percentage of written requirements that the system test suite covers. If there is a requirement that is not tested by any system test case, then you are not assured that the requirement has been satisfied.

You can evaluate the coverage of your system tests at two levels: (1) the test suite itself is an organized table of contents for the test cases, which makes it easy to notice parts of the system that are not being tested; and (2) within an individual test case, the selected test data should cover the possible input values. (See the “Test Case Review Checklist” located on the CD that came with the book.)
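Specification coverage is easy to compute once each test case is traced to the requirements it exercises. The following sketch assumes a simple traceability mapping with hypothetical requirement and test case IDs.

```python
# Specification coverage: the percentage of requirements exercised by at
# least one system test case. All IDs are illustrative.

requirements = ["REQ-1", "REQ-2", "REQ-3", "REQ-4"]

test_cases = {  # each test case lists the requirements it covers
    "TC-01": ["REQ-1"],
    "TC-02": ["REQ-1", "REQ-3"],
}

covered = {req for reqs in test_cases.values() for req in reqs}
untested = [req for req in requirements if req not in covered]

print(f"Specification coverage: {100 * len(covered) / len(requirements):.0f}%")
print("Untested requirements:", untested)  # ['REQ-2', 'REQ-4'] -> 50%
```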

Transforming Use Cases to Test Cases

The use case, created by Ivar Jacobson, is a scenario that describes the use of a system by an actor to accomplish work.

The following are the steps the tester can follow to create effective test cases from use cases.

Step 1: Draw a Use Case Diagram

Use cases can be represented visually with use case diagrams as shown in Figure 4.4.

The ovals represent use cases, and the stick figures represent “actors,” which can be either humans or other systems. The lines represent communication between an actor and a use case. Use cases provide the “big picture.” Each use case represents functionality that will be implemented, and each actor represents someone or something outside our system that interacts with it.

Step 2: Write the Detailed Use Case Text

The details of each use case are then documented in text format. Table 4.3 illustrates the “Enroll” use case details consisting of the normal and alternative flows.


Figure 4.4   Use case diagram.

Table 4.3   Format for the “Enroll” Use Case Textual Description


Table 4.4   Use Case Scenarios

Scenario 1: Basic flow
Scenario 2: Basic flow, Alternate flow 1
Scenario 3: Basic flow, Alternate flow 2
Scenario 4: Basic flow, Alternate flow 2, Alternate flow 3

Step 3: Identify Use Case Scenarios

A use case scenario is an instance of a use case, or a complete “path” through the use case. End users of a system can go down many paths as they execute the functionality specified in the use case. To illustrate this, Figure 4.5 is a flowchart of the enrollment process. The basic (or normal) path is illustrated by the dotted lines.

The alternate paths (or exceptions) are depicted as A1 and A2. A1 is the case in which an error occurs while the student is entering his or her information into the system. A2 depicts the case in which the student has selected a particular course but then chooses not to accept it.

Table 4.4 lists some possible combinations of scenarios for Figure 4.5. Starting with the basic flow combinations, alternative flows are added to define the scenarios. These scenarios will be used as the basis for creating test cases.
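The scenario list in Table 4.4 can be represented directly as combinations of the basic flow and alternate flows. The sketch below mirrors the table; the correspondence of the alternate flows to A1 and A2 in Figure 4.5 is illustrative.

```python
# Use case scenarios as paths: the basic flow plus zero or more alternates.

alternate_combinations = [
    [],                                        # Scenario 1: basic flow only
    ["Alternate flow 1"],                      # Scenario 2: data-entry error (A1)
    ["Alternate flow 2"],                      # Scenario 3: course not accepted (A2)
    ["Alternate flow 2", "Alternate flow 3"],  # Scenario 4
]

for number, alternates in enumerate(alternate_combinations, start=1):
    path = ["Basic flow"] + alternates
    print(f"Scenario {number}: " + ", ".join(path))
```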

Step 4: Generate the Test Cases

A test case is a set of test inputs, execution conditions, and expected results developed for a particular objective.


Figure 4.5   Enrollment flowchart.

Once the set of scenarios has been identified, the next step is to identify the test cases. This is accomplished by analyzing the scenarios and reviewing the use case textual descriptions. There should be at least one test case for each scenario. Each invalid test case should contain only one invalid input, so that a failure can be attributed to it unambiguously.

To document the test cases, a matrix format can be used, as illustrated in Table 4.5. The first column of the first row contains the test case ID, and the second column has a brief description of the test case and the scenario being tested. All the other columns except the last one contain data elements that will be used to implement the tests. The last column contains a description of the test case’s expected output. The “V” depicts a valid test input, and an “I” depicts an invalid test input.

Step 5: Generate Test Data

Once all of the test cases have been identified, they should be reviewed and validated to ensure accuracy and to identify redundant or missing test cases. Then, once they are approved, the final step is to substitute actual data values for the I’s and V’s. Table 4.6 shows a test case matrix with values substituted for the I’s and V’s in the previous matrix. A number of techniques can be used for identifying data values.

Two valuable techniques are Equivalence Class Partitioning and Boundary Value Analysis (see Appendix G, “Software Testing Techniques,” for more details).
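The substitution itself is mechanical once the valid and invalid values have been chosen with those techniques. The sketch below is illustrative; the column names, data values, scenarios, and expected outputs are hypothetical.

```python
# Substituting concrete data for the V's and I's in the test case matrix.
# Per the guideline above, each invalid test case has exactly one "I".

valid = {"student_id": "S-1001", "course": "MATH-101", "semester": "Fall"}
invalid = {"student_id": "???", "course": "NO-SUCH-COURSE", "semester": "13"}

matrix = [  # (test case ID, scenario, V/I flag per column, expected output)
    ("TC-1", "Scenario 1", {"student_id": "V", "course": "V", "semester": "V"},
     "Student is enrolled"),
    ("TC-2", "Scenario 2", {"student_id": "I", "course": "V", "semester": "V"},
     "Error: invalid student ID"),
]

for tc_id, scenario, flags, expected in matrix:
    data = {col: (valid[col] if flag == "V" else invalid[col])
            for col, flag in flags.items()}
    print(tc_id, scenario, data, "->", expected)
```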

Summary

Use cases are useful in the front end of the software development life cycle, and test cases are typically associated with the latter part of the life cycle. By leveraging use cases to generate test cases, testing teams can get started much earlier in the life cycle.

What to Do When Requirements Are Nonexistent or Poor?

The following section provides an overview of how to create test cases when “good” requirements do not exist.

Depending on the project and organization, requirements may be very well written and satisfy the requirements quality factors described earlier. Often, however, requirements are unclear, ambiguous, or missing altogether. In that case, other alternatives need to be considered.

Ad Hoc Testing

The Art of Ad Hoc Testing

Ad hoc testing is the least formal of test techniques and has been criticized for its lack of structure. It is most often used as a complement to other types of testing, yet it has a place during the entire testing cycle. Early in the project, ad hoc testing gives testers breadth of understanding of the program, thus aiding in discovery. In the middle of a project, the data obtained helps set priorities and schedules. As a project nears the ship date, ad hoc testing can be used to examine defect fixes more rigorously.

Table 4.5   Enrollment Test Case Matrix


Table 4.6   Enrollment Test Case Details


The lack of structure is, however, also a strength: important problems can be found quickly. Ad hoc testing is performed with improvisation, in which the tester seeks to find defects by any means that seem appropriate. It differs from regression testing, which looks for a specific issue with detailed reproducible steps and a clear expected result.

Ad hoc testing is in many ways similar to jazz improvisation. Jazz musicians sometimes use a fake book consisting of lead sheets for the songs on which they will improvise. After playing the recognizable melody once, the musicians take turns playing extemporaneous solos. Sometimes they will also vary the rhythm of the piece while improvising; for example, by playing behind the beat. These improvisational solos may be recognizable as related to the original tune, or they may not. However, toward the end of the song, the players typically return to the original melody.

There is a parallel in software testing. Testers often start with a documented test design that systematically describes all the cases to be covered and then improvise around it. One of the more productive ways to perform improvisational testing is to gather a group of two or more skilled testers in the same room and ask them to collaborate on extemporaneous testing. The defect-finding power of testers collaborating in this way is very similar to the power of collaboration exhibited in jazz sessions.

One approach to improvisational testing is to use existing documented tests as the basis, and then invent variations on that theme.

Advantages and Disadvantages of Ad Hoc Testing

One of the best uses of ad hoc testing is for discovery. Reading the requirements or specifications (if they exist) often does not provide a good sense of how a program behaves. Ad hoc testing can find holes in your test strategy, and can expose relationships between subsystems that would otherwise not be apparent. In this way, it serves as a tool for checking the completeness of your testing.

Ad hoc testing may reveal missing cases that are not apparent from the formal test cases, which tend to be set in concrete. Defects found while doing ad hoc testing are often examples of entire classes of forgotten test cases.

Another use for ad hoc testing is to determine the priorities for your other testing activities. Low-level housekeeping functions and basic features often do not make it into the requirements and thus have no associated test cases.

A disadvantage of ad hoc testing is that these tests are not documented and, therefore, not repeatable. This excludes ad hoc tests from the regression testing suite.

Exploratory Testing

The Art of Exploratory Testing

Exploratory testing is especially suitable when requirements and specifications are incomplete, or when time is short. The approach can also be used to verify that previous testing has found the most important defects. It is common to perform a combination of exploratory and scripted testing (i.e., testing from a written set of test steps), with the balance between the two chosen on the basis of risk.

Exploratory testing as a technique for testing computer software does not require significant advance planning and is tolerant of limited documentation. It relies on the skill and knowledge of the tester to guide the testing, and uses an active feedback loop to guide and calibrate the effort. According to James Bach, “The classical approach to test design, i.e., scripted testing, is like playing ‘20 Questions’ by writing out all the questions in advance.”

Exploratory testing is the tactical pursuit of software faults and defects, driven by challenging assumptions. It is an approach to software testing with simultaneous learning, test design, and test execution. While the software is being tested, the tester learns things that, together with experience and creativity, generate good new tests to run.

Exploratory testing has been performed for a long time and has similarities to ad hoc testing. In the early 1990s, however, “ad hoc” was too often synonymous with sloppy and careless work. The newer terminology was first published by Cem Kaner in his book Testing Computer Software. Exploratory testing is more structured than classical ad hoc testing and can be as disciplined as any other intellectual activity.

Exploratory testing seeks to find out how the software actually works and to ask questions about how it will handle difficult and easy cases. The testing depends on the tester’s skill in inventing test cases and finding defects. The more the tester knows about the product and different test methods, the better the testing will be.

When performing exploratory testing, there are no exact expected results; the tester decides what will be verified, critically investigating the correctness of the result.

In reality, testing is almost always a combination of exploratory and scripted testing, with a tendency toward one or the other, depending on the context.

According to Cem Kaner and James Bach, exploratory testing is more a mindset, or “a way of thinking about testing,” than a methodology. The documentation of exploratory testing ranges from documenting all tests performed to documenting just the bugs.

Exploratory Testing Process

The basic steps of exploratory testing are as follows:

  1. Identify the purpose of the product.

  2. Identify functions.

  3. Identify areas of potential instability.

  4. Test each function and record problems.

  5. Design and record a consistency verification test.

According to James Bach, “Exploratory Testing, as I practice it, usually proceeds according to a conscious plan. But not a rigorous plan ... it is not scripted in detail. To the extent that the next test we do is influenced by the result of the last test we did, we are doing exploratory testing. We become more exploratory when we can’t tell what tests should be run in advance of the test cycle.”

Test cases themselves are not preplanned:

  1. ■ Exploratory testing can be concurrent with product development and test execution.

  2. ■ Such testing is based on implicit and explicit (if they exist) specifications as well as the “as-built” product.

  3. ■ Exploratory testing starts with a conjecture as to correct behavior, followed by exploration for evidence that it works/does not work.

  4. ■ It is based on some kind of mental model.

  5. ■ “Try it and see if it works.”

Advantages and Disadvantages of Exploratory Testing

The main advantages of exploratory testing are that less preparation is needed, important bugs are found fast, and the approach is more intellectually stimulating than scripted testing.

Another major benefit is that testers can use deductive reasoning based on the results of previous tests to guide their future testing on the fly. They do not have to complete a current series of scripted tests before moving on to explore a more target-rich environment. This also accelerates bug detection when used intelligently.

Another benefit is that, after initial testing, most bugs are discovered by some kind of exploratory testing. This can be demonstrated logically by stating, “Programs that pass certain tests tend to continue to pass the same tests and are more likely to fail other tests or scenarios that are yet to be explored.”

Disadvantages are that the tests cannot be reviewed in advance (and thus cannot prevent errors in code and test cases), and that it can be difficult to show exactly which tests have been run.

When exploratory tests are repeated, they are not performed in precisely the same manner, which can be a disadvantage when it is important to know exactly which functionality was tested.
