Chapter 35

Taxonomy of Software Testing Tools

Testing Tool Selection Checklist

Finding the appropriate tool can be difficult. Several questions need to be answered before selecting a tool. Appendix F19, “Testing Tool Selection Checklist,” lists questions to help the QA team evaluate and select an automated testing tool.

The following list categorizes currently available tool types on the basis of their tool objectives and features. In the section “Commercial Vendor Tool Descriptions” later in this chapter, popular vendors supporting each tool category are discussed.

  1. Function/regression tools—These tools help you test software through a native graphical user interface (GUI) to ensure the functionality of the system.

  2. Bug management tools—These tools help you track software product defects and manage product enhancement requests. They manage defect states from defect discovery to closure.

  3. Test process/management tools—These tools help organize and execute suites of test cases at the command line, API, or protocol level. Some tools have GUIs, but they do not have any special support for testing a product that has a native GUI.

  4. Requirements analysis tools—These tools help you verify the completeness of requirements and locate ambiguities and conflicts among them.

  5. Unit testing tools—These tools help you unit test software. Unit testing is usually performed by the developer, typically through interfaces below the public interfaces of the software under test (see the short sketch that follows this list).

  6. Load/performance testing tools—These tools help you analyze the performance of the system under test under varying loads and stress.

  7. Test data generation tools—These tools help you create test data and test cases.

  8. Site monitoring tools—These tools help you measure and maximize value across the IT service delivery life cycle to ensure applications meet quality, performance, and availability goals.

  9. Java testing tools—These tools help you test Java Web site applets.

  10. Embedded testing tools—These tools help you verify systems that operate on low-level devices, such as video chips.

  11. Database testing tools—These tools help you verify database integrity, business rules, access, and refresh capabilities.

  12. Web testing tools—These tools help you locate broken Web links and evaluate the performance of Web-based systems under heavy loads.

  13. Security testing tools—These tools help you evaluate the ability of the system to ensure system integrity and protect resources.
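
To make item 5 above concrete, the following is a minimal sketch of the kind of test a unit testing tool runs, written with Python's standard unittest module; the discount_price() function is a hypothetical piece of code under test, not part of any tool listed in this chapter.

    import unittest

    def discount_price(price, percent):
        """Hypothetical code under test: apply a percentage discount."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100.0), 2)

    class DiscountPriceTest(unittest.TestCase):
        def test_applies_discount(self):
            # Exercise the code directly, below any GUI or public interface.
            self.assertEqual(discount_price(100.00, 25), 75.00)

        def test_rejects_invalid_percent(self):
            with self.assertRaises(ValueError):
                discount_price(100.00, 150)

    if __name__ == "__main__":
        unittest.main()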

Commercial Vendor Tool Descriptions

Table 35.1 provides an overview of some commercial testing tools. The tools are listed alphabetically, and each tool name is cross-referenced to the type of software testing it supports.

Open-Source Freeware Vendor Tools

Table 35.2 provides an overview of some open-source software testing tools. The tools are listed alphabetically, and each tool name is cross-referenced to the type of software testing it supports.

When You Should Consider Test Automation

A testing tool should be considered on the basis of the test objectives. As a general guideline, one should investigate the appropriateness of a testing tool when the manual testing process is inadequate. For example, if a system needs to be stress-tested, a group of testers could log on to the system simultaneously and attempt to simulate peak loads using stopwatches. However, this approach has limitations: the performance cannot be measured precisely or repeated systematically. In this case, a load-testing tool can simulate several virtual users under controlled stress conditions.
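
The following is a minimal sketch of the virtual-user idea using only the Python standard library; the URL is a placeholder, and a commercial load-testing tool adds ramp-up control, think times, protocol support, and detailed reporting on top of this basic pattern.

    import threading
    import time
    import urllib.request

    RESULTS = []

    def virtual_user(user_id, url="http://www.example.com/"):
        """One simulated user: issue a request and record its response time."""
        start = time.time()
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                response.read()
            RESULTS.append((user_id, time.time() - start, "ok"))
        except Exception as exc:
            RESULTS.append((user_id, time.time() - start, f"error: {exc}"))

    # Launch 25 virtual users at once to approximate a peak load.
    threads = [threading.Thread(target=virtual_user, args=(i,)) for i in range(25)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    times = [elapsed for _, elapsed, status in RESULTS if status == "ok"]
    if times:
        print(f"{len(times)} successful requests, "
              f"average response time {sum(times) / len(times):.3f} seconds")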

Table 35.1   Vendor Testing Tool versus Tool Category

Table 35.2   Open-Source Testing Tool versus Tool Category

A regression testing tool might be needed under the following circumstances:

  ■ Tests need to be run at every build of an application; running them manually each time is time-consuming, unreliable, and an inconsistent use of human resources.

  ■ Tests are required using multiple data values for the same actions (a data-driven sketch follows this list).

  ■ Tests require detailed information from system internals, such as SQL and GUI attributes.

  ■ There is a need to stress a system to see how it performs.
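
As noted in the second item above, the following is a minimal sketch of a data-driven test in which the same action is exercised against multiple data values; it assumes pytest is available, and validate_zip_code() is a hypothetical function under test.

    import pytest

    def validate_zip_code(value):
        """Hypothetical function under test: accept only five-digit ZIP codes."""
        return value.isdigit() and len(value) == 5

    # The same action runs against every row; adding a case means adding a
    # row of data, not writing a new test.
    @pytest.mark.parametrize(
        "zip_code, expected",
        [
            ("12345", True),    # typical valid value
            ("1234", False),    # too short
            ("ABCDE", False),   # non-numeric
            ("123456", False),  # too long
        ],
    )
    def test_validate_zip_code(zip_code, expected):
        assert validate_zip_code(zip_code) == expected

Run with the pytest command; each row is reported as its own passing or failing case.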

Testing tools have the following benefits:

  ■ Much faster than their human counterparts

  ■ Run without human intervention

  ■ Provide code coverage analysis after a test run

  ■ Precisely repeatable

  ■ Reusable, like programming subroutines

  ■ Detailed test cases (including predictable “expected results”) that have been developed from functional specifications or technical design documentation

  ■ Stable testing environment with a test database that can be restored to a known constant, so that the test cases can be repeated each time modifications are made to the application

When You Should NOT Consider Test Automation

In spite of the compelling business case for test automation, and despite the significant money, time, and effort invested in test automation tools and projects, the majority of testing is still performed manually. Why? There are three primary reasons why test automation fails: the steep learning curve, the development effort required, and the maintenance overhead.

The learning curve is an issue for the simple reason that traditional test scripting tools are basically specialized programming languages, but the best testers are application experts, not programmers.

This creates a skills disconnect that requires an unreasonable learning curve. Application experts, who make ideal testers because of their business knowledge, are unlikely to have programming skills. Gaining these skills takes months if not years, and without these skills the script libraries are usually not well designed for maintainability.

Most test tool vendors are aware of this shortcoming and attempt to address it through a capture/replay facility. This is an approach that ostensibly allows a tester to perform the test manually while it is automatically “recorded” into a test script that can later be replayed. Although this approach appears to address the learning curve, in reality it often causes more problems than it solves.

First, a recorded test script is fragile and easily subject to failure. Because it has no error handling or logic, the smallest deviation in the application behavior or data will cause the script to either abort or make errors. Furthermore, it combines both script and data into a single program, which yields no reusability or modularity. The end result is essentially unstructured, poorly designed code.
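
The following sketch illustrates the difference in plain Python; submit_login() is a hypothetical stand-in for whatever driver call the chosen GUI tool provides, not a real vendor API. The point is the structure: data separated from logic, with error handling so that one failing row does not abort the run.

    # Hypothetical driver call standing in for the GUI tool's login action.
    def submit_login(username, password):
        """Pretend to drive the login screen; returns True when accepted."""
        return username == "qa_user" and password == "secret"

    # Capture/replay style (for contrast): data hard-coded into the steps and
    # no error handling, so one unexpected response aborts the whole run.
    #   submit_login("qa_user", "secret")

    # Modular style: data lives in one table, logic in one routine, and a
    # failure in one row is recorded while the remaining rows still run.
    LOGIN_CASES = [
        ("qa_user", "secret", True),
        ("qa_user", "wrong-password", False),
        ("", "", False),
    ]

    def run_login_cases(cases):
        failures = []
        for username, password, expected in cases:
            try:
                if submit_login(username, password) != expected:
                    failures.append((username, "unexpected result"))
            except Exception as exc:  # record the error and keep going
                failures.append((username, f"error: {exc}"))
        return failures

    if __name__ == "__main__":
        print(run_login_cases(LOGIN_CASES) or "all login cases behaved as expected")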

Also, although it may appear easy to record a script, it is not as easy to modify or maintain it. Software is retested because something has changed, which means the scripts must also be modified. Making extensive script changes and debugging script errors is time-consuming and complex.

Once companies discover that capture/replay is not a viable long-term solution, they either give up or begin a development effort.

Contrary to popular belief, it is not always wise to purchase a testing tool. Some factors that limit a testing tool include the following:

  1. Unrealistic expectations—The IT industry is notorious for latching onto any new technology solution thinking that it will be a panacea. It is human nature to be optimistic about any new technology. The vendor salespeople present the rosiest picture of their tool offerings. The result is expectations that are often unrealistic.

  2. Lack of a testing process—A prerequisite for test automation is that a sound manual testing process exist. The lack of good testing practices and standards will be detrimental to test automation. Automated testing tools will not automatically find defects unless well-defined test plans are in place.

  3. False sense of security—Even though a set of automated test scripts runs successfully, this does not guarantee that the automated testing tool has located all the defects. This assumption leads to a false sense of security. Automation is only as good as the test cases and the test input data.

  4. Technical difficulties—Automated testing tools themselves unfortunately have defects. Changes in the technical environment, such as an operating system upgrade, can also severely limit automated testing tools.

  5. Organizational issues—Test automation will have an impact on the organization, for it transcends projects and departments. For example, the use of data-driven test scripts requires test input data, typically in the form of rows in an Excel spreadsheet. This data will probably be supplied by another group, such as the business system analysts, not the testing organization (a minimal sketch of consuming such data follows this list).

  6. Cost—A testing tool may not be affordable to the organization; the cost/performance trade-off may not justify the investment.

  7. Culture—The development culture may not be ready for a testing tool, because it lacks the proper skills and commitment to long-term quality.

  8. Usability testing—There are no automated testing tools that can test usability.

  9. One-time testing—If the test is going to be performed only once, a testing tool may not be worth the required time and expense.

  10. Time crunch—If there is pressure to complete testing within a fixed time frame, a testing tool may not be feasible, because it takes time to learn, set up, and integrate into the development methodology.

  11. Ad hoc testing—If there are no formal test design and test cases, a regression testing tool will be useless.

  12. Predictable results—If tests do not have predictable results, a regression testing tool will be useless.

  13. Instability—If the system is changing rapidly during each testing spiral, more time will be spent maintaining a regression testing tool than it is worth.
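
As mentioned in item 5 above, the following is a minimal sketch of consuming externally supplied test data; it assumes the analysts' spreadsheet has been exported to a CSV file named login_data.csv with username, password, and expected columns, all of which are illustrative.

    import csv

    def load_test_rows(path="login_data.csv"):
        """Read spreadsheet rows exported as CSV into a list of dictionaries."""
        with open(path, newline="") as handle:
            return list(csv.DictReader(handle))

    if __name__ == "__main__":
        # Each row becomes one data-driven test case for the scripts to execute.
        for row in load_test_rows():
            print(f"run login test with user={row['username']!r}, "
                  f"expecting {row['expected']}")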
