Chapter 28

Test Process and Automation Assessment

Companies that invest in information technology do so to enhance their competitive position, reduce overhead, or comply with regulatory demands. Business initiatives elaborate these business purposes, but rarely measure the impact of change on existing IT infrastructure and IT business systems.

Conversely, IT software development and implementation often ignore the business justification in favor of focusing on the technical solutions (more details are discussed in Chapter 30, “SOA Testing”).

This chapter deals with the approach and methodology for conducting a test process assessment in a business environment.

Test Process Assessment

There is often a perception that software testing does not add direct value to the business. Over time, however, businesses have realized that software testing is a must to avoid catastrophic bugs in the software that would have adverse effects on the business.

Software testing processes not only detect whether the product meets design requirements, but also validate that the business objectives are being met. If quality assurance and quality control are not aligned with the business objectives, functional defects (or bugs) put the expected business at risk.

Concentrating on IT system design to the exclusion of functional business testing often results in business applications being unavailable in production environments.


Figure 28.1   Process evaluation methodology.

When a critical business system fails, many midsize and large companies launch corrective efforts to measure the financial loss to the business and trace the cause of failure to its origin. It is not unusual to trace the error back through quality assurance to untested code that was not included in regression testing.

Y2K fears caused companies to realize the importance of integrating testing processes into the software development life cycle. Operational realities are helping the software test process assessment to gain attention as a business enabler.

A good starting point is an assessment of the current software engineering and management practices. The output of this study is a detailed Gap Analysis Report, which captures the present strengths and weaknesses of the company's software testing practices. The analysis is carried out via:

■ Management discussions
■ Questionnaires
■ Responsive feedback
■ Well-structured interviews
■ Analysis
■ Action plans

The analysis includes the applications and test artifacts prepared at the various test phases. It is followed by detailed action planning with the client’s management to arrive at a road map for improving the test software processes. The gap analysis activity focuses predominantly on the key areas that contribute to improving the test process.

To perform a comprehensive analysis, the process analyst should not only learn how the business functions; he or she must also understand the expectations that are unique to each level of management.

Process Evaluation Methodology

Figure 28.1 shows the steps for the test assessment process.

Step 1: Identify the Key Elements

The process analyst assesses the technology used to develop the application, studies the software development methodology adopted by the client, determines the company’s maturity level, and examines the current level of existing testing processes.

The test process assessment ascertains the maturity level of the testing processes and the quality coverage of applications and products.

The following are the areas studied during a test process assessment:

■ Scope of test methodology
■ Test process management
■ Testing functions and training
■ Life-cycle review methodologies
■ Estimating and planning the test cycles
■ Test strategy
■ Test coverage
■ Test design techniques
■ Test metrics
■ Test data
■ Testware management
■ Test tools
■ Test environment
■ Defect management
■ Change and configuration management
■ Communication and reporting
■ Commitment and motivation
■ Static test techniques

The foregoing elements are extensive, but may not be relevant in all process improvement opportunities.

Step 2: Gather and Analyze the Information

The process analyst assesses the current test process by gathering relevant information from the following business representatives (see Figure 28.2):

■ Heads of business units
■ Project managers
■ Program managers
■ QA managers
■ Product managers
■ Test engineers
■ Test leads
■ SQA leads
■ Test tool specialists
■ Test environment specialists


Figure 28.2   Test assessment inputs and outputs.

The information is gathered by creating a questionnaire for each business representative on the basis of his or her roles and responsibilities. Interviews are conducted, and the results should be validated against the existing development and test documents to detect areas requiring further clarification.

Step 3: Analyze Test Maturity

After validating and collating the information, the process analyst defines the gaps in the current processes against the standard set of processes defined for that technology or business group:

■ IEEE 829 test documentation
■ American National Standards Institute (ANSI)
■ Sarbanes–Oxley (ISACA subset of COBIT)
■ Software Engineering Institute Capability Maturity Model (SEI-CMM)

The following are key areas of the testing process and the indicators that will help the assessor draw conclusions on the maturity level.

The Requirements Definition Maturity

The basis for successful testing is testable requirements. Requirements developed early in the software development life cycle are rarely complete and unambiguous. Sources of requirements vary greatly, but common starting points are e-mails, verbal descriptions, unwritten customer expectations, and “tribal gossip.” Ultimately, the requirements definition process must refine all forms of ambiguous information into concise statements from which test engineers can develop test cases.

The process analyst must examine the existing requirements definition and verification/validation process. The quality of the requirements process is determined across the following steps:

■ Gathering, or elicitation
■ Analysis and prioritization
■ Documentation
■ Review for completeness
■ Incorporation of changes back into the baseline document

The analyst must also assess whether adequate impact analysis is performed for changes to requirements (including scope creep), and how those changes are managed for all components of the testing process.

The following organizational evaluations are important inputs for the gap analysis of the existing requirements process to best practices:

■ Analyze e-mail communication between the development group and business teams.
■ Analyze project documents to ascertain the traceability of requirements to test cases.
■ Verify that the final requirements document is reviewed and signed off by the business owners and Quality Assurance.

After observing the existing requirements methodology and maturity, the process analyst should recommend the actions that will address the gaps to improve the existing requirement definition and verification/validation process.
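
One concrete check from the list above, requirements-to-test-case traceability, can be sketched in a few lines of Python. The requirement and test-case identifiers below are hypothetical illustrations, not data from any real project:

    # Sketch: detect requirements with no covering test case.
    # Requirement IDs and the mapping are hypothetical illustrations.
    requirements = {"REQ-001", "REQ-002", "REQ-003", "REQ-004"}

    # Each test case lists the requirements it exercises.
    test_cases = {
        "TC-101": {"REQ-001"},
        "TC-102": {"REQ-001", "REQ-003"},
    }

    covered = set().union(*test_cases.values())
    uncovered = requirements - covered
    print("Requirements lacking test coverage:", sorted(uncovered))
    # -> Requirements lacking test coverage: ['REQ-002', 'REQ-004']

Any requirement reported here signals a traceability gap the analyst should raise with the project team.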

Test Strategy Maturity

The test strategy is an area where insufficient planning leads to multiple issues in the test life cycle, for example, testing resources and test environment availability.

The test strategy should be comprehensive and include the following sections:

■ Scope of testing
■ Types of testing
■ Traceability methodology
■ Effort estimation
■ Test case preparation
■ Test execution methodology
■ Defect management process
■ Resource allocation
■ Test closure process

The process analyst should also verify that the test strategy defines the test entry (what needs to be ready to start testing) and exit (what needs to be completed to stop testing) criteria as well as the type of metrics to be collected at the various stages of testing.

The process analyst should also verify that test process audits measure compliance with quality standards and continual process improvement. The test artifacts will reveal the cost, quality, and schedule metrics that are captured.

Other important areas of process validation include the following:

■ A formalized configuration management process.
■ Documented end-to-end testing processes and procedures for all key test process areas (including guidelines, templates, and checklists).
■ Measurement of defect turnaround time (or aging), that is, the time it takes to correct a defect, by severity.
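
As an illustration of the defect-aging metric, the following minimal Python sketch computes the average turnaround time per severity; the severity labels and dates are assumed values for illustration only:

    # Sketch: average defect turnaround (aging) by severity.
    # Severity labels and dates are illustrative assumptions.
    from collections import defaultdict
    from datetime import date

    defects = [
        ("critical", date(2010, 3, 1), date(2010, 3, 3)),
        ("critical", date(2010, 3, 5), date(2010, 3, 6)),
        ("minor",    date(2010, 3, 1), date(2010, 3, 15)),
    ]

    aging = defaultdict(list)
    for severity, opened, closed in defects:
        aging[severity].append((closed - opened).days)

    for severity, days in aging.items():
        print(f"{severity}: average turnaround {sum(days) / len(days):.1f} days")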

Test Effort Estimation Maturity

Test effort estimation methodology is evaluated. Inaccurate effort estimation not only delays the test cycle, but puts the project schedule at risk.

Calculations for the testing effort are performed by adopting estimation methods such as the SMC model (Simple, Medium, and Complex test cases), the Work Breakdown Structure (WBS) model, a Test Case Points Model, and some form of Function Point Analysis.

Regardless of the model, the analyst should verify that the estimation method is documented and that the estimations are compared to the actual test efforts to verify the level of accuracy.
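
As a concrete illustration of the SMC model, the following sketch computes a test effort estimate. The hours assigned to each complexity class are assumptions; a real project would calibrate them from historical actuals:

    # Sketch: SMC (Simple/Medium/Complex) test effort estimation.
    # Hours per complexity class are assumptions, not standard figures.
    HOURS_PER_CASE = {"simple": 0.5, "medium": 1.5, "complex": 4.0}

    test_case_counts = {"simple": 120, "medium": 60, "complex": 15}

    total_hours = sum(
        HOURS_PER_CASE[c] * n for c, n in test_case_counts.items()
    )
    print(f"Estimated test execution effort: {total_hours} hours")
    # Compare this estimate against actuals after the cycle to
    # refine the per-class hours.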

The process analyst should consider the following:

1. The definition of simple, medium, and complex test cases
2. The availability of test data and techniques used to generate data
3. Effort required for the defect management process
4. Number of test iterations considered for the release and methodology adopted for regression and retesting

Test Design and Execution Maturity

The test design methodology defines how the test cases are defined, how the traceability matrix is established, and how the test data is linked to the test cases. With the goal of optimizing the test execution, a gap analysis reveals where missing processes and missing links prevent efficient test execution.

The following are the parameters of a mature test design and execution process:

■ Testing is a measured and quantified process.
■ Products are tested for quality, for example, reliability, usability, and maintainability.
■ Test cases are collected and recorded for reuse.
■ Defects are logged, and severity is assigned.
■ Testing is defined and managed.
■ Testing costs and effectiveness are measured.
■ Testing processes are fine-tuned and continuously improved.
■ Defect prevention and quality control are enforced.
■ Automated testing has a significant role in the quality control process.
■ Tools support test case design and metric collection.
■ Process reuse is practiced.
■ Defect root cause analysis is practiced as a defect prevention technique.

The test process assessment defines all of the foregoing indicators and collects the test execution metrics to quantify continual quality improvement.

Regression Testing Maturity

A regression-testing strategy must also be defined. Regression testing is the selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still complies with its specified requirements (IEEE, 1990). Testers must determine the degree of regression testing to minimize the risk. A software change may have an unexpected effect on a seemingly unrelated part of the software.
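
One simple way to bound that risk is to map changed components to the test cases that exercise them and select the union for the regression run. The following Python sketch shows the idea; the component names and the component-to-test mapping are hypothetical:

    # Sketch: selective regression test selection.
    # The component-to-test mapping is a hypothetical illustration.
    tests_by_component = {
        "billing": {"TC-201", "TC-202", "TC-210"},
        "login":   {"TC-101", "TC-102"},
        "reports": {"TC-301"},
    }

    changed_components = ["billing", "login"]

    regression_set = set()
    for component in changed_components:
        regression_set |= tests_by_component.get(component, set())

    print("Regression tests to run:", sorted(regression_set))
    # Because changes can ripple into seemingly unrelated areas,
    # a periodic full regression run remains prudent.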

Test Automation Maturity

Efforts have been made to reduce the software testing life cycle and its cost, and test automation has emerged as a viable alternative to manual testing for doing so. However, the initial investment in testing tools and in script development remains substantial, and the return on investment (ROI) is not realized quickly by the business.

A structured approach to test automation should ensure that businesses get the benefit of complete, thorough testing of the code developed, so that software testing does not consume a major portion of the SDLC. Unplanned approaches to testing have resulted in companies spending more than 30 percent of their development life cycle on various forms of testing.

The Test Strategy identifies the scope of automation, functionalities to be automated, methodology, and approach toward automation. The strategy defines roles and responsibilities, project test schedule, test planning, design activities, test environment preparation, test risks and contingencies, and an acceptable level of thoroughness. The Test Strategy includes the test procedures, naming conventions, test procedure format standards, and the test procedure traceability matrix.

The following is the standard outline of a Test Automation Strategy that can be customized depending on the test requirements (see Appendix E30, “Test Automation Strategy”):

■ Overview of the project
■ Automation purpose and objectives
■ Scope of automation: inclusions and exclusions
■ Automation approach
■ Test environment
■ Tools used: scripting and test management
■ Script naming conventions
■ Resources and scheduling
■ Training requirements
■ Risk and mitigation
■ Assumptions and constraints
■ Entry and exit criteria
■ Acceptance criteria
■ Deliverables

Step 4: Document and Present Findings

The final gap report, the test process findings, is a critical deliverable that identifies the candidates for test process improvement. While identifying the gaps, the report also documents the best practices that exist in the current environment. The report baselines the current processes and serves as the starting point for future continuous improvement initiatives.

Test Automation Assessment

The test automation approach determines how to ensure that business requirements and end goals of the application are achieved. The approach helps plan and identify software components to be tested using test automation. It will also determine the context and approach to automated testing for different project life cycles.

The best automation testing strategy must balance the cost/risk of defects against the overall costs of extensive testing. The goals are to maximize the value from the testing done, and to minimize the testing effort and duration, to an acceptable risk level.

The following are the major factors to be considered for test automation:

1. Identification of the correct application and the correct percentage of the application that can be automated
2. Identification of the testing tools that should be considered, including compatibility, cost, ease of use, reusability, framework considerations, and training
3. Identification and creation of the test framework, for example, data-centric, business-function-centric, and hybrid approaches
4. Identification of the various levels of reusable test components, that is, functions that can be reused across the application under test
5. Creation of test automation scripts adhering to the standards and guidelines
6. Required validations, checkpoints, error-handling mechanisms, and result reporting
7. Creation of the relevant test data for running the scripts
8. Dry run of the scripts to ensure they perform the required business function validation as expected by the business
9. Creation of the necessary documentation for maintaining the scripts developed for enhancements, new releases, tool guide manuals, and so on

Figure 28.3 shows the test automation approach in the context of the Plan–Do–Check–Act (PDCA) model (see Section 1, “Software Quality in Perspective,” Chapter 5). As with any other continuous quality improvement initiative, the test automation effort must be planned, executed, checked to verify that it is on track, and acted upon to adjust the plan.

The following sections describe planning considerations for the test automation strategy.


Figure 28.3   PDCA applied to test automation.

Identify the Applications to Automate

In their eagerness to expedite structured testing, companies have invested in various testing tools. Over time, they have not always realized the benefits expected from their investment. The primary reason for this failure is an unplanned, nonsystematic approach to automation.

The following are the major decision points that need to be evaluated to identify the correct application for test automation:

■ Applications that are business critical, have high-frequency usage, and have a long life span
■ Applications that are localized/globalized across multiple platforms
■ Multiple releases requiring complete regression testing each time
■ Minimal external interfaces requiring manual intervention
■ Applications in which the impact of releases does not negatively affect the entire regression-testing effort
■ GUI-based applications involving bitmaps
■ Applications with objects/functions that are used multiple times across applications
■ Applications that are stable, without many changes to the front-end GUI or back end

Identify the Best Test Automation Tool

Tool evaluation is critical to test automation. One needs to collect details on the technology with which the application is developed, the technology of the interfacing applications, and the third-party tools used to develop the applications.

The vendors for each automation tool describe the type of technology supported by their tools and the required add-ins for additional interfacing and underlying application technologies. On the basis of an understanding of the product, an evaluation of various test automation tools should be undertaken.

Some general factors to consider before choosing the automation tool include the following:

■ Technical capabilities of the application under automation
■ Compatibility with the application environment, components, and interfaces
■ Ease of test development
■ Test maintainability
■ Reliability and market confidence
■ Custom objects used; third-party tools used
■ Vendor tool references
■ Ease of use for the testers
■ Scripting languages used and ease of adoption
■ Amenability to easier modification of the scripts
■ Cost of the automation tool and annual maintenance cost
■ Support from the vendor for newer technologies and issues encountered (see Section 6, “Modern Software Testing Tools,” for both informal and formal methodologies for selecting an automation tool).
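
One informal way to compare candidate tools against factors like those above is a weighted scoring matrix. The following Python sketch computes a weighted total per tool; the tool names, weights, and scores are invented for illustration:

    # Sketch: weighted scoring matrix for tool selection.
    # Weights and 1-5 scores are illustrative assumptions.
    weights = {"compatibility": 0.3, "ease_of_use": 0.2,
               "maintainability": 0.2, "cost": 0.3}

    scores = {
        "Tool A": {"compatibility": 4, "ease_of_use": 3,
                   "maintainability": 4, "cost": 2},
        "Tool B": {"compatibility": 3, "ease_of_use": 5,
                   "maintainability": 3, "cost": 4},
    }

    for tool, s in scores.items():
        total = sum(weights[k] * s[k] for k in weights)
        print(f"{tool}: weighted score {total:.2f}")

The weights should reflect the organization's priorities; a tool that scores highest overall may still be rejected on a single disqualifying factor, such as incompatibility with a key interface.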

Test Scripting Approach

The following activities are normally involved in test automation scripting and need to be considered in the automation strategy:

■ Test case selection: Review all the test cases and align the related business areas appropriately so that the number of test cases is reduced. Segregate the test cases that can be automated from those that cannot.
■ Capture the base flow scripts: Capture the script for the basic business flow, capture the GUI or bitmap, follow the scripting standards and guidelines, and use the available functions in the library.
■ Verification and validation: Realign the scripts; add required checkpoints, breakpoints, functions, and synchronization.
■ Create the data tables: Create all possible test data combinations to ensure coverage and prepare for traceability.
■ Dry run: Run the script, validate the results, and follow the defect management process.

Test Execution Approach

Normally, test management tools such as HP’s Quality Center are used to store the automation scripts that are created. These scripts are triggered for execution using the functionality available within the test management tools.

The advantage of automation scripts is execution without human intervention. The scripts can be scheduled to be triggered when the environment is available, even in the middle of the night. When the automation analyst returns the next day, the results of the test execution are stored in the defined files, which help him or her analyze the results and raise exceptions.

Each test team needs to perform problem-reporting operations in compliance with a defined defect management process. The documentation and tracking of software problem reports are greatly facilitated by an automated defect-tracking tool. The same defect management process that is adopted for functional and integration testing is adopted for the defects arising out of the test execution of automated test scripts.

The test team manager is responsible for ensuring that tests are executed according to schedule. Test personnel are allocated and redirected when necessary to handle problems that arise during the test effort. To perform this oversight function effectively, the test manager needs to perform test program status tracking and management reporting.

Test metrics provide the test manager with key indicators of the test coverage, progress, and the quality of the test effort. The metrics collection focuses on the breadth of testing to include the amount of demonstrated functionality and the amount of testing that has been performed (see Chapter 22, “Summarize/Report Test Results,” for more information relating to test metrics).

Test Script Maintenance

Following test execution, the test team needs to review the performance of the test scripts to determine where improvements can be made for the next iteration. The test scripts are upgraded on the basis of the test execution results, modifications in the business flow, and enhancements to the base functionalities.

Whenever new enhancements are introduced in the application, the test manager needs to perform an impact analysis of the new or changed functionality: how it will affect the existing regression suite and how many new scripts need to be added to the regression set.

Throughout the test execution cycle, the test team needs to collect various test metrics. The focus of the test review includes an assessment of whether the application satisfies acceptance criteria and is ready to go into production. The review also includes an evaluation of achieved progress measurements and other metrics collected.

Throughout the entire test life cycle, it is a good practice to document and begin to evaluate lessons learned at each milestone. The metrics that are collected throughout the test life cycle, and especially during the test execution phase, help pinpoint problems that need to be addressed.

Lessons learned, metrics evaluations, and the corresponding improvement activities or corrective actions need to be documented throughout the entire test process in a central repository that is easily accessible.

Test Automation Framework

Software testing experts have accepted test automation as an effective way to improve quality and reduce cost and life-cycle time. Many companies acquired testing tools hoping to optimize their testing effort and quality. Over time, however, these companies realized that the tools had not benefited them as expected. Root-cause analysis of this failure showed that the absence of a structured test automation approach, and of an overall framework for that approach, was the basic reason. This led to the introduction of various test automation frameworks, depending on the application technologies and the methodologies adopted for testing the relevant applications.

Some popular test automation frameworks that are in place in the test automation arena are as follows:

■ Data-driven framework
■ Modular framework
■ Keyword-driven framework
■ Hybrid framework

This section provides an overview of the features of the foregoing frameworks and the approach to building them.

Automation frameworks emerged as a concept: a set of rules, assumptions, standards and guidelines, and generic reusable components and practices that support automated software testing. A framework also defines the directory storage structure for effective use and maintenance of automation scripts, and the way in which test automation results are documented and published.

Basic Features of an Automation Framework

As an automation expert, one should understand that no standard business-oriented application can be 100 percent automated. There are various dependent factors, such as the interfaces involved and their technologies, the third-party tools used and their compatibility with testing tools, real-time application complexities, and various other factors. One more important aspect of successful automation is combining all related functional test cases and optimizing the reusable components; one should not create one automation script for every test case identified for automation. Because of reusability, the number of automation scripts should be a smaller percentage of the identified test cases.

The following are some best practices for a test automation framework.

Define the Folder Structure

The basic success of an automation project lies in uniformity and reusability. One should define the folder structure for the automation project in such a way that everyone in the organization will be able to understand and access the structure. A sample format is shown in the following text; it can be customized depending on the complexity of the project.

■ Project Automation
  – Repository
    • Driver scripts
    • Reusable window functions
    • Reusable business functions
    • Error-handling functions
  – Test scripts
    • Common (modulewise)
      · Test scripts
    • Login (example)
      · Test scripts
  – Test data files
  – Log
    • Test report
    • Error log
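
If desired, the skeleton can be scaffolded with a short script. The following Python sketch creates a directory tree matching the sample format above; the folder names follow that sample and can be adjusted to the project:

    # Sketch: scaffold the sample automation folder structure.
    from pathlib import Path

    FOLDERS = [
        "ProjectAutomation/Repository/DriverScripts",
        "ProjectAutomation/Repository/ReusableWindowFunctions",
        "ProjectAutomation/Repository/ReusableBusinessFunctions",
        "ProjectAutomation/Repository/ErrorHandlingFunctions",
        "ProjectAutomation/TestScripts/Common",
        "ProjectAutomation/TestScripts/Login",
        "ProjectAutomation/TestDataFiles",
        "ProjectAutomation/Log/TestReport",
        "ProjectAutomation/Log/ErrorLog",
    ]

    for folder in FOLDERS:
        Path(folder).mkdir(parents=True, exist_ok=True)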

Modularize Scripts/Test Data to Increase Robustness

The best approach for effective test automation is to introduce modularity so that reuse can be ensured at different levels. Combined with test data tables for repeated testing, this enables more data combinations to be tested, thereby increasing coverage and reducing the chance of missed defects.

Reuse Generic Functions and Application-Specific Function Libraries

Another advantage of the Test Automation Framework-based approach is the introduction of different levels of reusable functions. These functions can be at the OS/window level or at the application level. They reduce the amount of coding each automation tester needs to introduce in his or her test scripts and improve productivity. The following are some of the generic functions that can be developed (a sketch of two such utilities follows the list):

■ File handling
■ String handling
■ Buffer handling
■ Variable handling
■ Database access
■ Logging utilities
■ System/environment handling
■ Application mapping functions
■ System messaging or system API enhancements and wrappers
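
To give a flavor of what such library routines look like, here is a minimal Python sketch of two of them, a file-handling helper and a logging utility. The names and behavior are illustrative, not taken from any particular framework:

    # Sketch: two generic library functions for an automation framework.
    import logging

    def read_lines(path):
        """File-handling helper: return stripped, non-empty lines."""
        with open(path, encoding="utf-8") as f:
            return [line.strip() for line in f if line.strip()]

    def get_logger(name, log_file="automation.log"):
        """Logging utility: console plus file output, reusable by scripts."""
        logger = logging.getLogger(name)
        if not logger.handlers:  # avoid duplicate handlers on reuse
            logger.setLevel(logging.INFO)
            logger.addHandler(logging.StreamHandler())
            logger.addHandler(logging.FileHandler(log_file))
        return logger

Each test script then calls the shared routines instead of reimplementing them, which is precisely the productivity gain described above.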

Develop Scripting Guidelines and Review Checklists

Defined guidelines and standards for writing the automated scripts are essential to enforce uniformity, reusability, and ease of maintenance. Some of the standards that should be documented include the following:

■ Variables
■ Connecting to databases
■ Calling the reusable functions

These need to be customized for the test tool being used with the application being automated.

Define Error Handling and Recovery Functions

The Test Automation Framework should have common error-handling techniques for the expected and unexpected behavior of the application at different levels. These error-handling scripts may be kept in the common library folder for effective reuse by the various automation testers.

Basic failures. These correspond to failures at the system level (e.g., data table, GUI map not loaded, file not found, out of memory, etc.). The script halts the execution, logging the error message.

Application failures. These correspond to failures of the application, such as unexpected pop-up window, page not found, button not found, link not found, server time-out, and so on.

Every function starts by checking for the expected window. Utilities are developed for basic functionalities such as filling the text box, selecting the radio button, and selection from a list box. These utilities will check for existence and enablement of controls before performing operations. Each function ends by checking whether any error message appeared on the screen.

If such failures occur, the script logs the appropriate message and continues execution with the next test case/scenario.
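
A minimal sketch of such a guarded utility follows; the `driver` object and its exists/enabled/set_text methods stand in for whatever UI automation tool is in use and are assumptions for illustration:

    # Sketch: a guarded "fill text box" utility with error handling.
    # `driver` is a hypothetical UI automation interface; its
    # exists/enabled/set_text methods are illustrative assumptions.
    def fill_text_box(driver, logger, window, control, value):
        if not driver.exists(window):
            # Basic failure at the system level: halt execution.
            logger.error("Basic failure: window '%s' not found; halting.", window)
            raise SystemExit(1)
        if not (driver.exists(control) and driver.enabled(control)):
            # Application failure: log and continue with the next case.
            logger.warning("Application failure: control '%s' unavailable; "
                           "skipping to next test case.", control)
            return False
        driver.set_text(control, value)
        return True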

Define the Maintenance Process

The Test Automation Framework should also define ways and means of incorporating future enhancements to the application. The framework should be scalable to accommodate them: it should define how to identify the impact of changed functionality on the existing test automation suite and how to modify the suite accordingly.

Standard Automation Frameworks

Test automation frameworks have evolved over time, depending on the maturity level of the automation testing organization.

The Data-Driven Framework, Modular Framework, Keyword-Driven Framework, and Hybrid Framework are some of the popular framework models that are being used across the test automation areas.

Table 28.1   Test Data

Account Number   Credit Card Number   Validity Date   Auto Debit   Remarks
313 254 2288     2222 3333 4444       08/04/21        Y
567 298 9988     9923 8769 8742       08/03/12        N
987 765 9843     8769 6754 4397       09/02/23        Y
769 457 5544     6549 7692 4214       09/01/23        Y

Data-Driven Framework

Data-driven testing is a framework in which test input and output values are read from data files (such as CSV files, Excel files, text files, etc.) to drive the tests. Navigation through the different application screens, reading of the data files, and logging of test status and information are all coded in the test script.

The Data-Driven test framework is very useful for carrying out tests on an application screen using different combinations of test data (see Table 28.1). In this case, only one script can handle the various combinations of tests, depending on the different combinations of test data as specified in the data files. Each row is a test case.

The Data-Driven Framework is especially useful when one needs to validate a business function against a host of relevant data in different combinations. It replaces mundane manual testing work, where human error is bound to occur, saves significant time in the test life cycle, and improves productivity.

The key aspect that needs to be considered for this framework is aligning the test data to ensure maximum coverage to unearth the hidden bugs in the system (see Chapter 34, “Software Testing Trends,” for a description of the SmartTest tool from Smartware Technologies, Inc., which automatically generates the test data).
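
A minimal data-driven loop in Python follows. The CSV layout mirrors Table 28.1, and `validate_account` is a hypothetical stand-in for the scripted navigation and verification logic:

    # Sketch: data-driven test loop reading rows from a CSV file.
    # testdata.csv mirrors Table 28.1; validate_account is a
    # hypothetical placeholder for the real screen logic.
    import csv

    def validate_account(account, card, validity, auto_debit):
        # Placeholder for screen navigation and verification logic.
        return True

    with open("testdata.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):   # each row is a test case
            ok = validate_account(row["Account Number"],
                                  row["Credit Card Number"],
                                  row["Validity Date"],
                                  row["Auto Debit"])
            print(f"Row {row['Account Number']}: {'PASS' if ok else 'FAIL'}")

Adding a test case then means adding a data row, not writing a new script.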

Modular Framework

The Modular Framework approach (illustrated in Figure 28.4) requires the creation of small, independent automation scripts and functions that represent modules, sections, and functions of the application under test. These small scripts are then used in a hierarchical method to construct larger tests, realizing a particular test case.

The following modularity format will explain how this framework is constructed using the different levels and features available in the application.


Figure 28.4   The Modular Framework.

Driver scripts, main scripts, business function scripts, validation scripts, subroutine scripts, and reporting scripts are some of the components of this modularity (for example, scripts for individual retail banking functions).
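
For example, small scripts for individual retail banking functions might be composed into a larger scenario, as in this Python sketch (the function names are hypothetical placeholders):

    # Sketch: small modular scripts combined into a higher-level test.
    # The banking functions are hypothetical placeholders.
    def login(user, password):
        print(f"login as {user}")

    def open_account(account_type):
        print(f"open {account_type} account")

    def deposit(amount):
        print(f"deposit {amount}")

    def test_new_customer_deposit():
        """Higher-level test case built from reusable modules."""
        login("jdoe", "secret")
        open_account("savings")
        deposit(100)

    test_new_customer_deposit()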

The following are the advantages of the Modular Framework:

■ Because scripts are written to perform and test individual business functions, they can easily be combined in “higher-level” test scripts to accommodate complex test scenarios.
■ Reduces redundancy and effort in creating automated test scripts.
■ Scripts can be developed even when application development is in progress.
■ Script reusability is very high in this framework.
■ Maintaining the expected results for such scripts is very easy.
■ Error handling is much more robust, which allows unattended execution of the test scripts.
■ Because such scripts have very little interdependency, they can be used in a plug-and-play manner.

Keyword-Driven Framework

A Keyword-Driven Framework is one of the popular models of business automation. With this framework, the different screens, functions, and business components are specified as keywords in a data table. The test data and the actions to be performed are scripted with the test automation tool. Testing is driven completely by the different keywords specified in the data table. This is also called a Table-Driven Framework as the keywords are mapped to the relevant automation scripts in a table. A sample format is given in Table 28.2.

Table 28.2   Keyword Test Data

Window        Control       Action   Arguments
Window Name   Menu          Click    Open
Window Name   Push Button   Click    Folder Name
Window Name                 Verify   Results
Window Name   Menu          Click    Close

The test suite consists of all the test case files (see Figure 28.5). The user is able to select a specific test suite with a list of test cases to execute based on a flag that is turned on or off in the test suite file.

The test suite will be in the form of an Excel worksheet that contains columns for TestCaseID, Description, To be Executed (Y/N), Object Repository Path, Test Case File Path, and so on.


Figure 28.5   Keyword-Driven Framework.

The test case file contains the detailed steps to be carried out for the execution of a test case. It is in the form of an Excel sheet and contains columns for Keywords, Object Names, and Parameters.

The driver script reads the test case files from the test suite, checks the keyword in each step of the test case, and executes the steps one after the other, depending on the keyword contained in the action field. The keywords are handled by a processing engine, which in turn calls the appropriate library function based on the keyword. The keyword action is implemented in the library function. Before executing each keyword, the driver script performs error checking and logs any relevant information.
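
The following Python sketch shows the dispatch idea at the heart of such an engine. The keyword rows mirror Table 28.2, and the handler functions are hypothetical stand-ins for the tool's library functions:

    # Sketch: keyword-driven dispatch engine.
    # Rows mirror Table 28.2; handlers are hypothetical placeholders.
    def do_click(window, control, argument):
        print(f"{window}: click {control} -> {argument}")

    def do_verify(window, control, argument):
        print(f"{window}: verify {argument}")

    KEYWORDS = {"Click": do_click, "Verify": do_verify}

    test_steps = [
        ("Window Name", "Menu", "Click", "Open"),
        ("Window Name", "", "Verify", "Results"),
        ("Window Name", "Menu", "Click", "Close"),
    ]

    for window, control, action, argument in test_steps:
        handler = KEYWORDS.get(action)
        if handler is None:           # error checking before execution
            print(f"Unknown keyword '{action}'; step skipped.")
            continue
        handler(window, control, argument)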

One can also extend this framework with the help of startup scripts.

The startup script performs the initialization of test settings and reads the test suite. It will then call the driver script to execute all the test cases marked for execution in the test suite file.

The advantage of keyword-driven testing is that the tester need not be code-savvy to execute the scripts. He or she only needs to be comfortable with the various keywords and related functions that need to be validated to execute these scripts. The automation specialist will create the functions through the code for the required keyword.

The Keyword-Driven Framework requires individuals with good scripting skills (depending on the testing tool) to create the keyword functions.

Hybrid Framework

The most commonly implemented framework (see Figure 28.6) is a combination of all of the aforementioned techniques, pulling from their strengths and trying to mitigate their weaknesses. This framework is what most frameworks evolve into over time and multiple projects. It is defined by the core data engine, the generic component functions, and the function libraries. Whereas the function libraries provide generic routines useful even outside the context of a Keyword-Driven Framework, the core engine and component functions are highly dependent on the existence of all three elements.

The test execution starts with the driver script, which invokes the core data engine; the engine processes the test tables, invoking the appropriate lower-level script for each level.

The core data engine can be implemented with the following, depending on the test requirements:

1. Release driver
2. Test suite driver
3. Test script driver


Figure 28.6   Hybrid Test Framework.

A release driver consists of multiple test suites. A test suite consists of multiple test scripts. A test script consists of multiple sets of test data.

The driver script will first call the release driver, which in turn calls the corresponding suite driver. The suite driver then will call the respective test scripts. The called test scripts will be executed by taking the corresponding test data for each script.
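
That nesting can be sketched as plain data structures with one driver per level; the suite, script, and data names below are illustrative assumptions:

    # Sketch: release -> suite -> script -> data driver chain.
    # The release layout is a hypothetical illustration.
    release = {
        "smoke_suite": {
            "login_script": [{"user": "jdoe"}, {"user": "asmith"}],
        },
        "regression_suite": {
            "billing_script": [{"invoice": 1001}],
        },
    }

    def run_script(name, data_sets):
        for data in data_sets:          # data-driven at the lowest level
            print(f"executing {name} with {data}")

    def run_suite(name, scripts):
        print(f"suite: {name}")
        for script, data_sets in scripts.items():
            run_script(script, data_sets)

    for suite, scripts in release.items():  # release driver
        run_suite(suite, scripts)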

Building a hybrid test automation framework requires the architect to understand the application technology, interfaces and third-party components interacting with the applications, and the business flow of the application. The test cases should be analyzed thoroughly to understand and identify the reusable business components. The understanding of the application will help the architect identify the type of framework required to automate the application.

The automation analyst should identify the framework that is applicable to the application under automation and design it accordingly. The basic design should be flexible and scalable: the framework should take future enhancements and releases into account and should be designed to increase modularity and reusability.
