Chapter 29

Nonfunctional Testing

Projects tend to focus primarily on functional testing, that is, on the functions that a system or component must be able to perform, and to neglect nonfunctional testing. Nonfunctional requirements specify the criteria by which the operation of the system is judged.

Nonfunctional testing verifies how a system must behave, that is, the constraints on the system’s behavior. It covers the forms of testing not addressed by the functional requirements.

This chapter deals with performance, security, usability, and compliance testing.

Performance Testing

Today’s complicated business environment necessitates the integration of multiple applications developed and maintained on different architectures, and enterprise application integration has therefore gained much importance. Business demands for scalability, reliability, and performance of enterprise applications have increased the need for performance testing and performance management. This section illustrates the various types of performance testing that are performed.

The performance of an application is measured from different perspectives to improve scalability and performance of the application. Load testing, stress testing, and volume testing are some of the types of performance testing that are normally done during the application development stage to ensure that the application performs at the expected level in production. Even when the application is live in production, performance is continuously monitored through performance monitoring tools to understand the current levels of performance and to understand the factors that affect the performance, so that they can be addressed.

Load Testing

Load testing is defined as the practice of modeling the expected usage of the application software by simulating multiple users concurrently. The system response under this condition is observed for various factors such as memory utilization, hardware capacity utilization, throughput, and so on. The source of any irrational behavior is observed and rectified in the system so that the application behaves in a better manner in production. There are a number of vendor-based and freeware tools that can simulate thousands of virtual users in the system for facilitating load testing (see Chapter 35, “Taxonomy of Testing Tools,” for more information).
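Tool-based load generators such as those mentioned above do this at scale; as a minimal illustration of the same idea, the following Python sketch (the transaction body and timings are invented stand-ins, not from the text) drives many concurrent simulated users and summarizes response times and throughput:

```python
import random
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def transaction():
    """Stand-in for one business transaction; replace with a real request."""
    start = time.perf_counter()
    time.sleep(random.uniform(0.005, 0.02))  # simulated server work
    return time.perf_counter() - start

def load_test(virtual_users=50, iterations_per_user=10):
    """Run concurrent virtual users and collect response-time statistics."""
    wall_start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        futures = [pool.submit(transaction)
                   for _ in range(virtual_users * iterations_per_user)]
        times = [f.result() for f in futures]
    wall = time.perf_counter() - wall_start
    return {
        "transactions": len(times),
        "avg_response_s": statistics.mean(times),
        "max_response_s": max(times),
        "throughput_tps": len(times) / wall,
    }
```

A real load test would replace `transaction()` with an HTTP request or protocol call and record server-side metrics (memory, CPU) alongside these client-side numbers.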

Stress Testing

With stress testing, the load placed on the system is increased beyond the normal expected usage to test the application’s response. Either the user load may be increased or the system may be run continuously for a lengthy period of time (hours or days) to test the robustness of the hardware and software under stress.
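A stress test typically steps the load well past the expected level. A small sketch of such a step-load schedule (the step count and overload factor are illustrative assumptions):

```python
def stress_profile(expected_users, steps=5, overload_factor=2.0):
    """Return a step-load schedule climbing from the normal expected
    load up to overload_factor times that load."""
    top = int(expected_users * overload_factor)
    stride = max(1, (top - expected_users) // steps)
    return list(range(expected_users, top + 1, stride))
```

Each value in the returned schedule would be held for a fixed interval while response times and error rates are observed.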

Volume Testing

Volume testing is a form of performance testing in which the data volume is increased to abnormal levels to observe the response of the system. Volume testing verifies the physical and logical limits of the system’s capacity.

Performance Monitoring

When an application is deployed, it is the responsibility of the system owners to monitor the application continuously for any performance degradation, as this will impact the business. There are multiple instances in the Web business of companies losing millions of dollars because of the nonavailability of their system online. Continuous monitoring of the various factors that affect performance is accomplished by performance monitoring. When symptoms of degradation or slowness appear, appropriate remedial measures are initiated so that the system does not go down abruptly, thereby affecting the business.

Performance Testing Approach

Performance testing has come a long way in the application testing life cycle. It requires specialized skills with application technologies, tools, languages, system configuration, and capacity details.

The application performance test architect analyzes the application architecture and determines the type of performance testing required for the application. This is performed in consultation with the business users on the performance expectations.

The following are the key activities carried out during this phase:

  1. ■ Identification of critical and noncritical business transactions

  2. ■ Determination of the expected application response time

  3. ■ Determination of throughput for business transactions

  4. ■ Determination of peak-hour performance

The normal and peak hour load expected in a multi-user application is tested to detect real-time issues before the application goes into production.

Knowledge Acquisition Process

In this phase, the performance team will understand the application functionality, the user characteristics, and the system architecture, as well as the application design. The team will interact with the various stakeholders such as business users, application developers, and the system maintenance team to understand the business requirements, capacity of the planned system as perceived by the developers, and the expected number of users in the system when the system goes live. The team will understand the production environment in which the application needs to be deployed in terms of hardware, software, and network connectivity. In some situations this will be determined on the basis of results of the performance testing activity.

The following are the planning steps in performance analysis:

  1. Define the scope: This involves knowledge of multiple user groups, the number of concurrent users, frequency of access to different functionalities, simulated random think times between access to various screens, transaction duration, and so on. The team will determine whether databases need refreshing between tests, and to what extent. Database refreshes between tests can be time consuming, especially with large databases; often they take more time than the actual test. The team will define the parameters that will characterize the performance of the system, for example, transaction response time and transaction throughput (pages or transactions per second), as well as the parameters that need to be monitored to identify bottlenecks.

  2. Plan the performance test: The performance test team should study the test environment to ensure that it mimics the real production environment. They have to identify the transactions and application scenarios that need to be tested in consultation with the business users. They should also identify the common Windows/OS-related processes that will lower the performance of the application.

Based on the inputs collected, the performance test team will plan the combination of various input parameters to execute multiple test scenarios. This can be added or modified during test execution time depending on the response of the system. They should also plan the database-loading patterns.
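The scope and load parameters gathered in these planning steps are often captured as a structured scenario definition before scripting begins. A hypothetical example (all group names, counts, and durations are invented):

```python
scenario = {
    "user_groups": [
        # names and numbers are illustrative, not from the text
        {"name": "browsers", "concurrent_users": 80, "think_time_s": (2, 8)},
        {"name": "buyers",   "concurrent_users": 20, "think_time_s": (5, 15)},
    ],
    "ramp_up_s": 300,          # time taken to reach full load
    "steady_state_s": 1800,    # measurement window at full load
    "refresh_db_between_runs": True,
    "monitored_metrics": ["response_time", "throughput", "cpu", "memory"],
}

total_users = sum(g["concurrent_users"] for g in scenario["user_groups"])
```

Capturing the scenario as data makes it easy to vary one parameter at a time between runs, as the text suggests, without editing the scripts themselves.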

Figure 29.1 illustrates a typical performance testing environment.

Test Development

The following are the planning steps in test development, that is, test script development, test execution, and test analysis:

  1. Develop Test Scripts—Test script development involves the following activities:

    1. –   Configure the performance testing tool, for example, HP’s Performance Test Centre 8.1, in the test environment.

    2. –   Use LoadRunner’s VuGen (Virtual User generator) to record scripts.

    3. –   After recording, scripts need to be modified to emulate complex environments.

    Some examples include the following:

    1. –   Loop to make a single captured activity act like many activities.

    2. –   Parameterize the variables, and supply data from an external source. Example sources of data include text files and capturing data returned from the application under test.

    3. –   Prepare data files for data inputs through the tool.

    4. –   Insert rendezvous points so that all virtual users attempt a particular transaction at the same time.

  2. Test Execution—Test execution involves the following activities:

    1. –   Test data setup.

    2. –   For large databases, this will be a time-consuming activity. If scripts had been developed earlier for this purpose, run the scripts and load data.

    3. –   Set up the test scenario in the testing tool.

    4. –   Turn on the server monitors for monitoring CPU, memory, and so on.

    5. –   Replay the scripts with user loads by generating the virtual users using LoadRunner’s Controller. The tool records the test results. The results are exported for further analysis.

    6. –   Execute the test scripts under varying user loads. Refresh the database by running the database-loading scripts between executions.

    7. –   Collect data for analysis, including data from multiple sources such as Web server logs, application server logs, performance statistics from servers such as performance monitor logs, and so on.

  3. Analysis—LoadRunner has standard reports that can be used for analysis and reporting purposes. Some of the reports that are generated are as follows:


    Figure 29.1   Performance testing environment.

    1. –   Transaction Performance Summary Report

    2. –   Detail Transaction Report By vuser (virtual user)

    3. –   Transaction Performance By vuser Report

    4. –   Scenario Execution Report

    5. –   Failed Transaction Report

    6. –   Database Server Report Monitors

    7. –   Network Delay Monitors

    8. –   System Resource Monitors

The team analyzes the data collected from the report generated to identify the bottlenecks.
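The looping and parameterization steps described under test development can be sketched tool-agnostically; in this sketch the external data file is inlined as a string and the replayed "transaction" is just a placeholder label:

```python
import csv
import io
import itertools

# Stand-in for an external data file prepared for the tool
CSV_DATA = "username,item\nalice,book\nbob,pen\ncarol,lamp\n"

def run_parameterized(iterations=5):
    """Replay one recorded action in a loop, substituting a fresh data
    row on each iteration (cycling when the file is exhausted)."""
    rows = list(csv.DictReader(io.StringIO(CSV_DATA)))
    return [f"place_order({row['username']}, {row['item']})"
            for row in itertools.islice(itertools.cycle(rows), iterations)]
```

This mirrors what a tool like VuGen does when a captured value is replaced by a parameter bound to a data file: each virtual-user iteration consumes the next row.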

The following are some typical outputs from a performance test tool that help the performance test analyst to understand and analyze the performance requirements:

Test Summary Report: A test summary report, such as that shown in Figure 29.2, gives the overall result of the performance testing conducted in terms of the number of virtual users pumped into the system, total throughput in bytes, average throughput per second, total hits into the system, and average hits per second for each transaction identified for performance testing. This gives the performance test analyst an initial indication of the performance parameters.

Average Transaction Response Time: Figure 29.3 gives the average response time for the identified scenario at various points of time. This may change, depending on the number of users in the system and system throughput.

Figure 29.2   Test summary report.

Figure 29.3   Average transaction response time.

Average Transaction Response Time under Load: Figure 29.4 shows how the average transaction response time changes as the load on the system increases. From this the analyst learns how response time is affected by increasing load and what the tolerable limit for the live system is.

CPU Utilization: Figure 29.5 shows the analyst the CPU utilization at various points of the identified transaction. This helps in setting the ideal utilization level for the CPU.

Page Component Breakdown: The pie chart in Figure 29.6 shows each page component’s share, as a percentage, of the total average download time (in seconds).

Network Delay Time: Network delay is composed of network propagation, serialization, and queuing delay. Propagation delay is the time it takes the physical signal to traverse the path. Serialization delay is the time it takes to actually transmit the packet. Queuing delay is the time a packet spends in router queues. Figure 29.7 is a network delay time graph.
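The three delay components can be computed directly from packet size, link rate, and path length. A sketch (the 2×10⁸ m/s signal speed is a common approximation for fiber and copper, not a figure from the text):

```python
def network_delay_s(packet_bytes, link_bps, distance_m,
                    propagation_mps=2.0e8, queuing_s=0.0):
    """Total one-way delay = serialization + propagation + queuing."""
    serialization = packet_bytes * 8 / link_bps   # time to clock the bits onto the link
    propagation = distance_m / propagation_mps    # time for the signal to travel the path
    return serialization + propagation + queuing_s
```

For example, a 1500-byte packet on a 10 Mb/s link over a 2000 km path gives 1.2 ms of serialization delay plus 10 ms of propagation delay; queuing delay must be measured, since it varies with router load.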

Figure 29.4   Average transaction response time under load.

Figure 29.5   CPU utilization.

Figure 29.6   Page component breakdown.

Performance Deliverables

The performance test team is responsible for the following deliverables during the performance testing sessions:

Figure 29.7   Network delay time graph.

  1. ■ Performance testing strategy

  2. ■ Performance test plan

  3. ■ Identified performance test scenarios

  4. ■ Vuser (virtual user) scripts for the identified scenarios

  5. ■ Vuser scripts documentation

  6. ■ Test execution plan

  7. ■ Test data information report

  8. ■ Test run report (daily report)

  9. ■ Analysis findings

  10. ■ Performance test report

Recommendations will be provided as part of the performance test report. For example, if the performance test team finds the tested architecture inadequate, they can recommend an improved architecture and indicate how many users the new architecture will support.

Security Testing

Security testing was once considered a technical assignment performed by network administrators or system developers. In those days, application security was not given much importance during the test phase of the software development life cycle. An increasing number of security incidents and a growing awareness among business owners of applications compromised by security issues have moved security testing into the software tester’s world. Gartner reports that three out of four Web sites are vulnerable to attack and that 75% of hacks occur at the application level. More and more clients across the globe have started including application security testing as part of software testing.

The cornerstone of security rests on confidentiality, integrity, and availability. For critical applications, there is a need to provide different levels of access to different users. Security of transactions ensures customer confidence, which is a key factor for successful implementation of applications. As per Section 404 of the Sarbanes-Oxley Act (SOX), organizations have to maintain internal control over financial reporting, which involves testing the integrity of the applications.

The following are the steps for a successful security initiative.

Step 1: Identifying the Scope of Security Testing

The main objectives of security testing are the following:

  1. ■ Verify and validate that the applications meet the security requirements.

  2. ■ Identify security vulnerabilities of applications in the given environment.

Performing a thorough security assessment of a Web application is a complex task that should be approached like any other software analysis task—with a methodology, testing procedures, set of helpful tools, skills, and knowledge. Manual penetration testing as well as automated tools can be used to uncover critical security vulnerabilities in Web applications. The technology used for development and the vulnerability of the applications determine the correct balance of automated scanning and manual penetration testing to provide the best possible Web application security coverage.

Security testing starts with vulnerability assessment. Vulnerability scanning examines a network for security holes in the network segments for IP-enabled devices and enumerates systems, operating systems, and applications. Apart from identifying the operating system version, IP protocols, and TCP/UDP ports that are listening, vulnerability scanning also identifies the common security threats, such as weak passwords, files with liberal permissions, security configuration problems, and so on.

Security testing strategy for an application or product should be developed for each phase, such as development, implementation, deployment, operation, and maintenance. Security testing should preferably be performed by an independent testing team. The test target should be identified using a threat model, and all interfaces, such as the user interface (UI), sockets, file input, APIs, mail configuration, and devices, should be included in the scope. Resources that can become performance bottlenecks, such as network bandwidth, memory, disk space, files, and sockets, should also be subjected to security testing.

Step 2: Test Case Generation and Execution

The security of an application is tested by attempting to violate the built-in security controls. This technique ensures that the protection mechanisms in the system are adequate to secure the application from improper and unauthorized access. The tester may overload the system with continuous requests, thereby denying service to others, may deliberately cause system errors to violate security during recovery, or may browse through insecure data to find the key to system entry. The following areas need to be tested for security:

  1. ■ User authentication

  2. ■ Password management

  3. ■ Access controls

  4. ■ Input validation

  5. ■ Exception handling

  6. ■ Secure data storage and transmission

  7. ■ Logging

  8. ■ Monitoring and alerting

  9. ■ Change management

  10. ■ Application development

  11. ■ Periodic security assessments and audits

Buffer overflow, SQL injection, cross-site scripting, parameter tampering, cookie poisoning, hidden fields, debug options, unvalidated input, broken authorization, broken authentication, and session management are some of the areas around which the test cases should be generated for security testing. Ideally, security testing should be performed at the end of functional integration testing and performance testing. This helps to detect hidden security threats in the application.
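As a toy illustration of generating such test cases, the sketch below runs a few classic payloads against a hypothetical allow-list input validator; any payload the validator accepts counts as a finding (the payloads and the validator are invented for illustration):

```python
import re

# A few classic attack payloads: SQL injection, cross-site scripting, path traversal
PAYLOADS = ["' OR '1'='1", "<script>alert(1)</script>", "../../etc/passwd"]

def is_safe_input(value):
    """Toy allow-list validator: letters, digits, spaces, and a few marks only."""
    return bool(re.fullmatch(r"[A-Za-z0-9 .,_-]{1,64}", value))

def fuzz_validator():
    """Map each payload to the validator's verdict; any True is a failure."""
    return {payload: is_safe_input(payload) for payload in PAYLOADS}
```

Real security test suites drive payloads like these through the application's actual entry points (form fields, URLs, headers) rather than a local function, but the pass/fail logic is the same.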

After completing security testing, the findings should be summarized in a report. The summary report should contain details such as the types of testing conducted and the security risks identified, with ratings, which helps the business take a decision on deployment of the application.

Types of Security Testing

The following are the types of security testing along with the purpose, tools, and approach.

Network Scanning

Network scanning involves using a port scanner to identify all hosts potentially connected to the organization’s network. It identifies all active hosts and open ports, and some scanners give additional information on the scanned hosts and on the applications running on a particular port. Scanning should be executed on the system regularly.

Purpose
  1. ■ Check for unauthorized hosts connected to the organization’s network

  2. ■ Identify vulnerable services

  3. ■ Identify deviations from the permitted services as per the security policy

  4. ■ Help in penetration testing

  5. ■ Assist in configuration of intrusion detection systems (IDSs)

Tools
  1. Fscan—A command line port scanner that scans both TCP and UDP ports

  2. LANguard network scanner—Freeware security and port scanner

  3. DUMPSec—Security auditing program for Microsoft Windows

Approach

A high level of human expertise is required for interpreting the results. Scanning may disrupt network operations by consuming bandwidth and slowing response times. The results should be documented and analyzed, and corrective steps should be initiated. The following are some possible measures:

  1. ■ Investigate and disconnect unauthorized hosts

  2. ■ Disable or remove unnecessary and vulnerable services

  3. ■ Modify firewall to restrict outside access

  4. ■ Modify vulnerable hosts to restrict access to vulnerable services

The speed and efficiency of network scanning depend on the number of hosts in the system, and many automated freeware tools are available. The disadvantage of network-scanning tools is that they do not directly identify vulnerabilities.
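A minimal TCP port scanner of the kind these tools build on can be sketched with standard sockets (real scanners add service fingerprinting, timing controls, and UDP support):

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` accepting TCP connections on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports
```

Note that scanning hosts you do not own or have permission to test is generally prohibited; a sketch like this belongs only in an authorized test environment.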

Vulnerability Scanning

Apart from scanning ports, these tools also report on the associated vulnerabilities. Outdated software versions, unapplied patches and system upgrades, deviations from the organization’s security policy, and so on are identified. The downside of vulnerability scanning is that these tools tend to load the system, and their vulnerability databases must be updated continuously to capture new threats.

Purpose
  1. ■ Identify the active hosts (computers connected to the network)

  2. ■ Identify the active and vulnerable services (e.g., e-mail) running on hosts

  3. ■ Identify the applications, misconfigured settings, and operating systems

  4. ■ Verify compliance with the host and application security policies

Tools
  1. Cybercop Scanner—A network-based vulnerability-testing tool

  2. ISS Internet Scanner—A vulnerability-scanning tool that identifies security issues

  3. SecureScan NX, SAINT, and SARA—Some other vulnerability-scanning tools

Approach

Vulnerability scanning is required to validate that operating systems and major applications are up-to-date on security patches and software versions. The results of the testing should be documented and analyzed.

The following are the recommended corrective measures:

  1. ■ Upgrade or patch vulnerable systems.

  2. ■ Improve configuration management.

  3. ■ Dedicate resources to monitor vulnerabilities.

  4. ■ Implement continuous improvement in the organization’s security policies and architecture.

Vulnerability scanning can be fast, depending on the number of hosts scanned, and automated freeware tools are available. These scanners are easy to run on a regular basis. Sometimes there is a chance of false positives, which have to be identified by analysis of the results.

Password Cracking

Password cracking is a process that verifies whether users are employing strong passwords, by intercepting the password hashes in the network.

Password crackers should be run on the system on a monthly basis, or even continuously, to ensure compliance with the password policy throughout the organization.

Tools
  1. Crack 5—UNIX password cracker

  2. John the Ripper—Windows and UNIX password cracker

  3. L0phtCrack—Windows password cracker

If the cracked passwords were selected according to policy, the policy should be modified to reduce the percentage of crackable passwords. If the cracked passwords were not selected according to the policy, then users should be educated to choose passwords as per the policy.
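The core of a dictionary-based cracker can be sketched as follows (unsalted MD5 is used here purely for brevity; the tools above handle salted and platform-specific hash formats):

```python
import hashlib

def dictionary_attack(captured_hashes, wordlist):
    """Hash each candidate word and look for it among the captured hashes.
    Returns a mapping of cracked hash -> recovered password."""
    return {hashlib.md5(word.encode()).hexdigest(): word
            for word in wordlist
            if hashlib.md5(word.encode()).hexdigest() in captured_hashes}
```

Any password recovered this way is by definition too weak; the percentage of crackable passwords is the metric the policy review acts on.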

Log Reviews

Various system logs can be used to identify deviations from the organization’s security policy, including firewall logs, intrusion detection system (IDS) logs, server logs, and any other logs collecting audit data on systems and networks. Audit logs can be used to validate that the system is operating according to policies.

Manual audit log review is extremely cumbersome and time consuming. Automated audit tools significantly reduce the required review time and generate reports (predefined and customized) that summarize the log contents into a set of specific activities.

Approach

For example, if an IDS sensor is placed behind the firewall (within the enclave), its logs can be used to examine the service requests and communications that are allowed into the network by the firewall. If this sensor registers unauthorized activities beyond the firewall, it indicates that the firewall is no longer configured securely and a backdoor exists on the network.
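An automated roll-up of such logs can be sketched as follows (the log lines and their format are invented for illustration; real firewall log formats vary by vendor):

```python
import re
from collections import Counter

# Invented firewall log excerpt in a simple "ACTION proto src -> dst" format
LOG = """\
2024-05-01 10:02:11 DENY tcp 203.0.113.7 -> 10.0.0.5:22
2024-05-01 10:02:12 ALLOW tcp 198.51.100.3 -> 10.0.0.5:443
2024-05-01 10:02:14 DENY tcp 203.0.113.7 -> 10.0.0.5:3389
"""

def summarize_denies(log_text):
    """Count DENY entries per source address: the kind of roll-up an auditor wants."""
    pat = re.compile(r"DENY \w+ (\S+) ->")
    return Counter(pat.findall(log_text))
```

A source address with many denied connection attempts across different ports, as here, is exactly the pattern that merits investigation.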

File Integrity Checkers

A file integrity checker is a tool that recognizes changes to files, particularly unauthorized changes. It computes and stores a checksum for every guarded file, establishing a database of file checksums. Checksums should be recomputed regularly and tested against the stored values to identify any file modifications. The reference database should be stored off-line so that attackers cannot hide their tracks by modifying the database.

Purpose
  1. ■ To recognize unauthorized changes to files

  2. ■ To determine the extent of possible damage when a compromise is suspected

Tools
  1. LANguard

  2. Tripwire
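The checksum-database scheme described above can be sketched as follows (SHA-256 is a reasonable modern choice; tools such as Tripwire use their own formats and storage):

```python
import hashlib
from pathlib import Path

def checksum(path):
    """SHA-256 digest of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def build_baseline(paths):
    """Record one checksum per guarded file; store this database off-line."""
    return {str(p): checksum(p) for p in paths}

def verify(baseline):
    """Return the files whose current checksum differs from the stored one."""
    return [p for p, h in baseline.items() if checksum(p) != h]
```

`verify` reports modified files but not what changed; a real deployment also guards file permissions and ownership, and alerts on files that disappear entirely.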

Virus Detectors

All organizations are at risk of “contracting” computer viruses, Trojans, and worms if they are connected to the Internet, use removable media (e.g., floppy disks and CD-ROMs), or use shareware/freeware software. With any malicious code, there is also the risk of compromising or losing sensitive or confidential information. To detect viruses, anti-virus software needs to be installed on the network and on individual machines. This software should have an up-to-date virus identification database (sometimes called virus signatures) that allows it to recognize known viruses. The software compares file contents with the known virus signatures, identifies infected files, quarantines and repairs them if possible, or deletes them if not. More sophisticated programs also look for viruslike activity in an attempt to identify new or mutated viruses that would not be recognized by the current virus detection database.

Tools
  1. McAfee

  2. Symantec

  3. Trend Micro

Approach

There are two primary types of anti-virus programs available: those that are installed on the network infrastructure and those that are installed on end-user machines.

The virus detector installed on the network infrastructure is usually installed on mail servers or in conjunction with firewalls at the network border of an organization. Server-based virus detection programs can detect viruses before they enter the network or before users download their e-mail.

The other type of virus detection software is installed on end-user machines. This software detects malicious code in e-mails, floppies, hard disks, documents, and the like but only for the local host. The software also sometimes detects malicious code from Web sites.

Penetration Testing

Penetration testing is security testing in which evaluators attempt to circumvent the security features of a system on the basis of their understanding of the system design and implementation. It is important to determine how vulnerable an organization’s network is and the level of damage that can occur if the network is compromised. A penetration test can be designed to simulate an inside or an outside attack. If both internal and external testing is to be performed, the external testing usually occurs first. With external penetration testing, firewalls usually limit the amount and types of traffic that are allowed into the internal network from external sources. Depending on what protocols are allowed through, initial attacks are generally focused on commonly used and allowed application protocols such as FTP, HTTP, or SMTP and POP.

Purpose

The purpose of penetration testing is to identify methods of gaining access to a system by using common tools and techniques used by attackers. These types of testing expose vulnerabilities in kernel code, buffer overflow, symbolic link, file descriptors, race conditions, file and directory permissions, Trojans, and so on.

Approach

Penetration testing can be either overt or covert. These two types of penetration testing are commonly referred to as Blue Teaming and Red Teaming. Blue Teaming involves performing a penetration test with the knowledge and consent of the organization’s IT staff. Red Teaming involves performing a penetration test without the knowledge of the organization’s IT staff but with full knowledge and permission of the upper management. This type of test is useful for testing not only network security but also the IT staff’s response to perceived security incidents and their knowledge and implementation of the organization’s security policy. In Red Teaming, penetration testing may be conducted with or without warning.

To simulate an actual external attack, the testers are not provided with any real information about the target environment other than targeted IP address/ranges, and they must covertly collect information before the attack. They collect information on the target from public Web pages, newsgroups, and similar sites. They then use port scanners and vulnerability scanners to identify target hosts. Because they are, most likely, going through a firewall, the amount of information is far less than they would get if operating internally. After identifying hosts on the network that can be reached from the outside, they attempt to compromise one of the hosts. If successful, they then leverage this access to compromise other hosts not generally accessible from outside. (Reference: Guidelines on Security Testing by NIST, special publication 800-42.)

Usability Testing

As the number of users of Web applications in business grows, the applications and the users’ usage patterns are affected. When more than the estimated number of users log in, application performance suffers; we have seen how performance can be improved by acting on the results of the performance testing techniques explained earlier. Another problem that crops up with the mushrooming growth of Web applications is usability. Usability testing evaluates the ease with which the end users of the system can use the applications.

According to ISO 9241-11, usability is the “extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use.” Usability is a combination of factors that influence the user’s experience with a product or a system. There are many variations on Web site usability testing, but a simple way to picture it is to imagine a real user sitting in front of a PC, working through a short list of tasks on a Web site, and recording the findings. The process is repeated with a handful of different users, and the identified weaknesses are rectified.

The following are the three key tenets of usability:

  1. ■ Communicate clearly so that users understand you. Users allocate minimal time to initial Web site visits, so you must quickly convince them that the site is worthwhile.

  2. ■ Provide information users want. Users must be able to easily determine whether your services meet their needs and why they should do business with you.

  3. ■ Offer simple, consistent page design, clear navigation, and an information architecture that puts information where users expect to find it.

Usability ought not to be confused with “functionality,” however, as the latter is purely concerned with the functions and features of the product and has no bearing on how easily they can be used.

Goals of Usability Testing

The goal of usability testing is to discover the needs and expectations of users. Its purpose is to examine the proposed AUT (application under test) to find out how well the intended users can meet their goals using the system being tested.

The following are some critical tenets of usability testing:

  1. Visibility of system status: The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.

  2. Match between system and the real world: The system should speak the user’s language, with words, phrases, and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.

  3. Ease of learning: How fast can a user learn to use a system that he has never seen before, to accomplish basic tasks?

  4. Flexibility and efficiency of use: The ability to use the system in different ways in an efficient manner is very important.

  5. Accelerators: Unseen by the novice user, accelerators may speed up the interaction for the expert user so that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.

  6. User control and freedom: Users often choose system functions by mistake and will need a clearly marked “emergency exit” to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.

  7. Consistency: The same action should cause the same reaction in similar situations; for example, clicking on a hyperlink always opens a pop-up window, whereas clicking on a button always takes you to a new screen.

  8. Error frequency and severity: How frequent are errors in the system? How severe are they? How do users recover from errors? Even better than good error messages is a careful design that prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.

  9. Aesthetic and minimalist design: Dialogues should not contain information that is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.

  10. Graphical User Interface: The front end, that is, the part of a software application or Web site that the users see and work with.

  11. Orientation: How the user knows his or her location within the application or Web site. The user’s orientation is critical for future navigation, for a feeling of “understanding the application,” and for easily correcting navigation mistakes.
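The tenets above are often applied through heuristic evaluation, in which reviewers log each usability problem against the tenet it violates, along with a severity rating. A minimal sketch of such a findings log is shown below; the class, function, and field names (`Finding`, `summarize`, `severity`) are illustrative, not from any standard tool.

```python
# Hypothetical sketch of recording and summarizing heuristic-evaluation
# findings. Severity runs 0 (cosmetic) to 4 (usability catastrophe).
from dataclasses import dataclass

@dataclass
class Finding:
    heuristic: str   # which usability tenet was violated
    screen: str      # where in the application it was observed
    severity: int    # 0 = cosmetic .. 4 = usability catastrophe
    note: str = ""

def summarize(findings):
    """Return {heuristic: (worst_severity, count)}, worst problems first."""
    summary = {}
    for f in findings:
        worst, count = summary.get(f.heuristic, (0, 0))
        summary[f.heuristic] = (max(worst, f.severity), count + 1)
    return dict(sorted(summary.items(), key=lambda kv: -kv[1][0]))

findings = [
    Finding("Consistency", "checkout", 3, "button vs. link behavior differs"),
    Finding("Visibility of system status", "upload", 2, "no progress bar"),
    Finding("Consistency", "search", 1),
]
print(summarize(findings))
```

Sorting by worst severity lets the team address the most damaging heuristic violations first, rather than the most numerous ones.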

Approach and Execution

The usability specialist should identify the transactions that affect, or are expected to affect, users of the system. The test cases should be written for the following areas:

  ■ Site design and page design

  ■ Navigation aids and common look and feel

  ■ Page size, file size, and making pages resize

  ■ Effects of fonts on legibility

  ■ Use of textual elements and formatting of lists, block text, and tables

  ■ Improving Web page accessibility

  ■ When to use images and how to make images more efficient

  ■ Appearance of links and where and how to use links

  ■ Improving user efficiency

The usability specialist can write the test cases in a format similar to that of the functional test cases. The usability experts who will execute these test cases should have some basic knowledge of the usage patterns of Web applications, and the expected results should be documented.

Normally, users from different walks of life who will have access to the system should be chosen to execute the tests and document their user experience, so that the test closely reflects the real-world situation. Usability testing can be carried out on a live system, on a paper prototype, or on a demo application. One of the most effective forms of inspection-based user testing involves the use of a “usability checklist.” Checklist-based user testing is extremely inexpensive to implement and requires a surprisingly small number of testers to be effective.
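Checklist-based testing can be as simple as having each tester answer yes or no for every checklist item and then tallying the pass rate per item. The sketch below illustrates this; the checklist items and the `tally` function are invented for illustration, not taken from any published checklist.

```python
# Toy sketch of checklist-based inspection testing: each tester records a
# yes/no answer per checklist item; we compute the pass rate per item.
# Checklist wording here is hypothetical.
CHECKLIST = [
    "Every page shows the site navigation bar",
    "Error messages suggest a corrective action",
    "Users can undo the last destructive action",
]

def tally(responses):
    """responses: one dict per tester, mapping item -> bool.
    Returns {item: fraction of testers who answered yes}."""
    rates = {}
    for item in CHECKLIST:
        passed = sum(1 for r in responses if r.get(item, False))
        rates[item] = passed / len(responses)
    return rates

responses = [
    {CHECKLIST[0]: True, CHECKLIST[1]: False, CHECKLIST[2]: True},
    {CHECKLIST[0]: True, CHECKLIST[1]: True,  CHECKLIST[2]: False},
]
print(tally(responses))
```

Items with a low pass rate point to areas of the interface that deserve a closer look, even with only a handful of testers.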

The usability testers can be volunteers, and they should be free to stop the testing at any time. The testers should feel free to speak their minds without fear of hurting the feelings of the product developer, even if their mistakes mean that the developer will have to do more work. You may think the test is a simple matter, and you may even be bored with it, but the testers might take it very seriously.

Guidelines for Usability Testing

The usability specialist should clearly document the guidelines for preparation of usability test cases, definition of outside user for testing, and test execution guidelines for usability testing. The following are some standard guidelines:

  ■ For all but the simplest and most informal tests, run a pilot test first.

  ■ Ensure that testers are made to feel at ease and are fully informed of any observation taking place. Attend at least one test as a participant to appreciate the stress that the testers/participants undergo.

  ■ Ensure that participants/testers have the option to abandon any tasks that they are unable to complete.

  ■ Do not prompt participants unless it is clearly necessary to do so.

  ■ Record the events in as much detail as possible, to the level of keystrokes and mouse clicks if necessary.

  ■ If there are observers, ensure that they do not interrupt in any way.

  ■ Be sensitive to the fact that developers may be upset by what they observe or what you report.

Accessibility Testing and Section 508

In 1998, Congress amended the Rehabilitation Act to require federal agencies to make their electronic and information technology accessible to people with disabilities. Inaccessible technology interferes with an individual’s ability to obtain and use information quickly and easily. Section 508 was enacted to eliminate barriers in information technology, to make available new opportunities for people with disabilities, and to encourage development of technologies that will help achieve these goals. The law applies to all federal agencies when they develop, procure, maintain, or use electronic and information technology. Under Section 508 (29 U.S.C. § 794d), agencies must give disabled employees and members of the public access to information that is comparable to the access available to others. Web accessibility means that people with disabilities should be able to use the Web. More specifically, Web accessibility means that people with disabilities should be able to perceive, understand, navigate, and interact with the Web, and contribute to the Web. Web accessibility also benefits others, including older people with changing abilities due to aging.

The standards define the types of technology covered and set forth provisions that establish a minimum level of accessibility. The application section (1194.2) outlines the scope and coverage of the standards. The standards cover the full range of electronic and information technologies in the federal sector, including those used for communication, duplication, computing, storage, presentation, control, transport, and production. This includes computers, software, networks, peripherals, and other types of electronic office equipment. The standards define electronic and information technology, in part, as “any equipment or interconnected system or subsystem of equipment, that is used in the creation, conversion, or duplication of data or information.”

The standards provide criteria specific to various types of technologies, including the following:

  ■ Software applications and operating systems

  ■ Web-based information or applications

  ■ Telecommunication products

  ■ Video and multimedia products

  ■ Self-contained, closed products (e.g., information kiosks, calculators, and fax machines)

  ■ Desktop and portable computers

This section provides technical specifications and performance-based requirements that focus on the functional capabilities of covered technologies. This dual approach recognizes the dynamic and continually evolving nature of the technology involved as well as the need for clear and specific standards to facilitate compliance. Certain provisions are designed to ensure compatibility with adaptive equipment that people with disabilities commonly use for information and communication access, such as screen readers, Braille displays, and TTYs.

Most of the specifications for software pertain to usability for people with vision impairments. For example, one provision requires alternative keyboard navigation, which is essential for people with vision impairments who cannot rely on pointing devices, such as a mouse. Other provisions address animated displays, color and contrast settings, flash rate, and electronic forms, among others.

The criteria for Web-based technology and information are based on access guidelines developed by the Web Accessibility Initiative of the World Wide Web Consortium. Many of these provisions ensure access for people with vision impairments who rely on various assistive products to access computer-based information, such as screen readers, which translate what’s on a computer screen into automated audible output, and refreshable Braille displays. Certain conventions, such as verbal tags or identification of graphics and format devices, such as frames, are necessary so that these devices can “read” them for the user in a sensible way. The standards do not prohibit the use of Web site graphics or animation. Instead, the standards aim to ensure that such information is also available in an accessible format. Generally, this means use of text labels or descriptors for graphics and certain format elements. (HTML code already provides an “Alt Text” tag for graphics that can serve as a verbal descriptor for graphics.) This section also addresses the usability of multimedia presentations, image maps, style sheets, scripting languages, applets and plug-ins, and electronic forms.
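The “Alt Text” requirement mentioned above is one of the easiest Section 508 checks to automate: scan a page and flag every `<img>` element that lacks an `alt` attribute. A minimal sketch using only the Python standard library follows; a real audit would use a full accessibility tool, and the sample page markup is invented.

```python
# Minimal sketch of one Web accessibility check: flag <img> tags that have
# no (or an empty) alt attribute, since screen readers cannot describe them.
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []  # src values of images that fail the check

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):
                self.missing.append(attrs.get("src", "<unknown>"))

page = '<p><img src="logo.png" alt="Company logo"><img src="chart.png"></p>'
checker = AltTextChecker()
checker.feed(page)
print(checker.missing)  # → ['chart.png']
```

Such a check catches only missing descriptors, not unhelpful ones; judging whether the alt text actually conveys the graphic’s meaning still requires human review.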

The standards apply to federal Web sites but not to private sector Web sites (unless a site is provided under contract to a federal agency, in which case only that Web site or portion covered by the contract would have to comply). Accessible sites offer significant advantages that go beyond access. For example, those with “text-only” options provide a faster downloading alternative and can facilitate transmission of Web-based data to cell phones and personal digital assistants.

The criteria of this section are designed primarily to ensure access to people who are deaf or hard of hearing. This includes compatibility with hearing aids, cochlear implants, assistive listening devices, and TTYs. TTYs are devices that enable people with hearing or speech impairments to communicate over the telephone; they typically include an acoustic coupler for the telephone handset, a simplified keyboard, and a visible message display. One requirement calls for a standard nonacoustic TTY connection point for telecommunication products that allow voice communication but also provide TTY functionality. Other specifications address adjustable volume controls for output, product interface with hearing technologies, and the usability of keys and controls by people who may have impaired vision or limited dexterity or motor control.

Multimedia products involve more than one media and include, but are not limited to, video programs, narrated slide production, and computer-generated presentations. Provisions address caption decoder circuitry (for any system with a screen larger than 13 inches) and secondary audio channels for television tuners, including tuner cards for use in computers. The standards also require captioning and audio description for certain training and informational multimedia productions developed or procured by federal agencies. The standards also provide that viewers be able to turn captioning or video description features on or off.

Section 508 covers products that generally have embedded software but are often designed in such a way that a user cannot easily attach or install assistive technology. Examples include information kiosks, information transaction machines, copiers, printers, calculators, fax machines, and similar types of products. The standards require that access features be built into the system so that users do not have to attach an assistive device to it. Other specifications address mechanisms for private listening (handset or a standard headphone jack), touchscreens, auditory output and adjustable volume controls, and location of controls in accessible reach ranges.

Section 508 also focuses on keyboards and other mechanically operated controls, touch screens, the use of biometric forms of identification, and ports and connectors.

The performance requirements mentioned in Section 508 are intended for overall product evaluation and for technologies or components for which there is no specific requirement under the technical standards in Subpart B. These criteria are designed to ensure that the individual accessible components work together to create an accessible product. They cover operation, including input and control functions, operation of mechanical mechanisms, and access to visual and audible information. These provisions are structured to allow people with sensory or physical disabilities to locate, identify, and operate input, control, and mechanical functions and to access the information provided, including text, static, or dynamic images, icons, labels, sounds, or incidental operating cues. For example, one provision requires that at least one mode allow operation by people with low vision (visual acuity between 20/70 and 20/200) without relying on audio input because many people with low vision may also have a hearing loss.

The standards also address access to all information, documentation, and support provided to end users (e.g., federal employees) of covered technologies. This includes user guides, installation guides for end-user installable devices, and customer support and technical support communications. Such information must be available in alternate formats upon request at no additional charge. Alternate formats or methods of communication can include Braille, cassette recordings, large print, electronic text, Internet postings, TTY access, and captioning and audio description for video materials.

A standard set of test cases is given in the government Web site that can be used to guide accessibility testing. (Reference: http://www.section508.gov.)

Compliance Testing

Compliance testing determines whether a product’s implementation of a particular specification fulfills all mandatory elements as specified and whether those elements are operable.

Compliance testing may become more stringent over time, especially as a particular implementation specification matures. Regardless of how a software audit is initiated, the process is rarely anticipated and often results in a valuable loss of resources. Beyond the resource strain, software audits require additional expenses to deploy asset management services to prevent future compliance breaches. This chapter presents a risk assessment survey to help determine whether your organization can withstand a software compliance audit and what level of risk it faces.

The following are six basic steps for enabling software management to ensure it has the documentation to satisfy an audit:

  1. Review existing software licensing agreements.

  2. Take an inventory of existing IT assets.

  3. Compare inventory to purchasing records to determine problematic areas.

  4. Uninstall noncompliant software.

  5. Implement management policies for use and license compliance.

  6. Maintain new standards and processes.
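Step 3 above, comparing the installed inventory against purchasing records, reduces to a simple reconciliation: for each product, how many installations exceed the licensed seat count? The sketch below illustrates this; the product names, counts, and the `compliance_gaps` function are hypothetical.

```python
# Illustrative sketch of step 3: reconcile installed software against
# purchased license seats to flag potential compliance gaps.
inventory = {"PhotoTool": 25, "OfficeSuite": 120, "DiagramApp": 10}  # installs found
licenses  = {"PhotoTool": 30, "OfficeSuite": 100}                    # seats owned

def compliance_gaps(inventory, licenses):
    """Return {product: shortfall} for every product installed beyond
    (or without) its licensed seat count."""
    gaps = {}
    for product, installed in inventory.items():
        owned = licenses.get(product, 0)
        if installed > owned:
            gaps[product] = installed - owned
    return gaps

print(compliance_gaps(inventory, licenses))
# OfficeSuite is 20 seats over; DiagramApp has no license record at all
```

Products that appear in the gap report feed directly into steps 4 and 5: uninstall the noncompliant copies or purchase the missing seats, then put policies in place so the gap does not reopen.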
