Glossary

Acceptance Testing: A form of testing performed to assure the user that a system performs as expected.

Ad Hoc: Testing without formalized test cases, i.e., trial and error.

Adaptive Maintenance: Modifications made to a system to accommodate changes in the processing environment.

Agile Methodology: A collection of values, principles, and practices that incorporates iterative development, test, and feedback.

Algorithm: A set of well-defined rules or steps intended to produce the correct solution to a particular problem.

ANSI: Acronym for the American National Standards Institute, an institute that creates standards for a wide variety of industries, including computer programming languages.

Architecture: Similar to the architecture of a building, the architecture of a computer refers to the design structure of the computer and all its details.

Archive: To store or back up information with the intent of preserving it for a long time.

ASCII: Stands for the American Standard Code for Information Interchange, which is a standardized coding system used by almost all computers and printers.

Assumption: A proposition accepted as true in order to reduce the relevant variables of a problem to a manageable number.

Attribute: The descriptive characteristic of something.

Backup: The process of making copies of files to enable recovery.

Baseline: (1) A defined set of executables or documents of a specific product, put into a state in which all development and change activity is closely managed to support a defined activity at a set time. Examples: integration test, pilots, system test, reviews. (2) A product, document, or deliverable that has been formally reviewed, approved, and agreed upon, and that thereafter serves as a basis for further development and can be changed only through formal change control procedures. Examples: initial deployment of a product, evolution of existing products.

Baseline Measurement: A measurement taken for the specific purpose of determining the initial value of a state.

Benchmark: A test used to measure the relative performance of hardware or software products.

Button: On a computer screen, it is the visual equivalent of a button on a machine.

Capture/Replay: Automated regression testing tools that record and replay software functionality to verify that software changes do not adversely affect any portion of the application already tested.

Capture/Replay Testing: Testing using a capture/replay tool to record interaction scenarios.

Cascade: A command in applications that automatically organizes all the windows on the screen in a tidy stack.

Cause–Effect Diagram: A tool used to identify possible causes of a problem by representing the relationship between an effect and its potential causes.

Client/Server: A system architecture in which a client computer cooperates with a server over a network.

COE: Center of excellence, through which IT organizations improve their testing practices by centralizing some or all test-related activities.

Compliance Testing: Testing that determines whether a product's implementation of a particular specification fulfills all mandatory elements as specified and that these elements are operable.

Control Chart: A statistical method for differentiating between the common-cause and special-cause variation exhibited by a process.

Corrective Action: The practice and procedure for reporting, tracking, and resolving identified problems both in the software product and the development process. The resolution provides a final solution to the identified problem.

Corrective Maintenance: The identification and removal of code defects.

CPU: The central processing unit, the brain of the computer.

CRUD: Create, read, update, and delete.

Customer: An individual or organization that receives a product.

Database: A collection of information stored in computerized form.

Data-Driven Testing: Framework in which test input and output values are read from data files.
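
A minimal illustrative sketch (not from this glossary's source) of a data-driven test in Python, assuming pytest is available; the data file add_cases.csv and the function add are hypothetical:

    # data_driven_test.py -- illustrative sketch of data-driven testing
    import csv
    import pytest

    def add(a, b):  # hypothetical function under test
        return a + b

    def load_cases(path="add_cases.csv"):
        # Each row of the data file holds: input_a, input_b, expected_sum
        with open(path, newline="") as f:
            return [(int(a), int(b), int(c)) for a, b, c in csv.reader(f)]

    @pytest.mark.parametrize("a,b,expected", load_cases())
    def test_add(a, b, expected):
        assert add(a, b) == expected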

Defect: A deviation from either business or technical requirements.

Download: To receive information, typically a file, from another computer.

Drag-and-Drop: To perform a task by using the mouse to drag an icon onto another icon.

Dynamic Testing: Testing a program or system through executing one or more tests.

Emergency Repair: Software repair required immediately.

Entrance Criteria: Quantitative and qualitative measures used to evaluate a product’s readiness to enter the next phase or stage of development.

Error: A discrepancy between actual values or conditions and those expected.

Exit Criteria: Quantitative and qualitative measures used to evaluate a product’s acceptance for that specific stage or phase of development.

Exploratory Testing: The tactical pursuit of software faults and defects driven by challenging assumptions.

Flowchart: A diagram that shows the sequence of steps of a process.

Formal Review: A type of review typically scheduled at the end of each activity or stage of development to review a component of a deliverable, or in some cases a complete deliverable, or the software product and its supporting documentation.

GIGO: Stands for “garbage in, garbage out.” Computers, unlike humans, will unquestioningly process the most nonsensical input data and produce nonsensical output.

GUI: Graphical user interface — a user interface in which graphics and characters are used on screens to communicate with the user.

Histogram: A graphical description of measured values organized according to the frequency of occurrence.

Hybrid Framework: A test automation framework defined by a core data engine, generic component functions, and function libraries.

Icon: A miniature picture used to represent a function.

Impact Analysis: The process of determining which system components are affected by a change to software or hardware.

Incident Report: A report to document an issue or error arising from the execution of a test.

Inputs: Products, services, or information needed to make a process work.

Integration Testing: (1) The testing of combinations of individual, unit-tested pieces of code as they are combined into a complete unit. (2) A testing event driven by temporal cycles determined before the start of the testing phase. This test phase is conducted to identify functional problems with the software product. This is a verification activity.

Intermediate Repair: Software repair before the next formal release, but not immediately (e.g., in a week or so).

ISO9000: A quality series that comprises a set of five documents developed in 1987 by the International Organization for Standardization (ISO).

Keyword-Driven Framework: A test automation framework in which an application's screens, functions, and business components are specified as keywords in a data table that drives the test execution.
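
As a hedged illustration of the idea, the following Python sketch dispatches rows of a data table to keyword functions; all keyword names and arguments are hypothetical:

    # keyword_driven.py -- illustrative sketch of a keyword-driven framework
    # Each row of the data table names a keyword (action) and its arguments.
    test_table = [
        ("open_screen", "login"),
        ("enter_text", "username", "jdoe"),
        ("enter_text", "password", "secret"),
        ("click", "submit"),
    ]

    # Keyword implementations; a real framework would drive the application UI.
    def open_screen(name):
        print("opening screen:", name)

    def enter_text(field, value):
        print("typing", value, "into", field)

    def click(control):
        print("clicking", control)

    keywords = {"open_screen": open_screen, "enter_text": enter_text, "click": click}

    for action, *args in test_table:
        keywords[action](*args)  # dispatch each table row to its keyword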

Legacy System: Previous application system in production.

Load Testing: The practice of modeling the expected usage of application software by simulating multiple users accessing it concurrently.

Maintenance: Tasks associated with the modification or enhancement of production software.

Management: A team or individual who manages resources.

Management Review and Approval: A management review is the final review of a deliverable. It is conducted by the project manager with the project sponsor to verify the quality of the business aspects of a work product.

Mean: A value derived by adding several items and dividing the sum by the number of items.

Modifiable Requirements: Requirements and associated information must be changeable.

Modular Framework: An approach requiring the creation of small, independent automation scripts and functions that represent modules, sections, and functions of the application under test.
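
A minimal Python sketch of the modular idea, with hypothetical module scripts composed into a higher-level test:

    # modular_framework.py -- illustrative sketch of a modular framework
    # Each small, independent function scripts one module of the application
    # under test; higher-level tests compose them. All names are hypothetical.

    def login(user, password):
        # Drive the login module; return True on success.
        return user == "jdoe" and password == "secret"

    def create_order(item, quantity):
        # Drive the order-entry module; return an order record.
        return {"item": item, "quantity": quantity, "id": 1001}

    def logout():
        return True

    def test_place_order():
        # Compose the module scripts into an end-to-end test.
        assert login("jdoe", "secret")
        order = create_order("widget", 2)
        assert order["id"] is not None
        assert logout()

    if __name__ == "__main__":
        test_place_order()
        print("test_place_order passed")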

Necessary Requirements: Requirements that are truly necessary, as opposed to merely wanted.

Network: A system that connects computers and enables them to share resources.

Nonredundant Requirements: Requirements should not be duplicated, because duplication causes inconsistency and maintenance problems.

PDCA: Plan, Do, Check, and Act.

Perfective Maintenance: Enhancement to software performance, maintainability, or understandability.

Performance Testing: Testing that measures an application from different perspectives in order to improve its scalability and performance.

PMBOK: Project Management Institute’s Project Management Body of Knowledge.

Policy: Managerial intents and goals regarding a process or products.

Problem: Any deviation from predefined standards.

Problem Reporting: The method of identifying, tracking, and assigning attributes to problems detected within the software product, deliverables, or within the development processes.

Procedure: A step-by-step method that is followed to ensure that a standard is met.

Process: Specific activities that must be performed to accomplish a function.

Process Improvement: To change a process to make it develop a product faster, more economically, or with better quality.

Productivity: Ratio of output to the input of a process using the same unit of measure.

Project Charter: A living business document that officially recognizes the funding of a project.

Project Framework: A useful way to unite quality processes with project phases and to synchronize project quality management with the system or software development approach.

Project Management: The application of knowledge, skills, tools, and techniques to meet the requirements of a project.

Quality: The totality of features and characteristics of a product or service that bears on its ability to meet stated or implied needs.

Quality Assurance: Defining the level of compliance with requirements and incorporating continuous quality improvement into the test processes.

Quality Assurance Evaluation: A type of review performed by the QA organization to ensure that a project is following good quality management practices.

Quality Assurance Organization: A permanently established organization or unit whose primary goal is to review the project and products at various points to ensure that good quality management practices are being followed, and to provide the testing effort and all associated testing deliverables on supported projects. The QA organization must be independent of the project team.

Quality Control: Process by which product quality is compared with standards.

Quality Improvement: Changing a process so that the rate of defects is reduced.

Quality Management: The execution of processes and procedures that ensures quality as an output from the development process.

Quality Planning: Planning the quality approach.

Quality Standards: The standards established as part of planning the quality management approach for a project.

Regression Testing: Tests used to verify a previously tested system whenever it is modified.

Release Management: A formal release process for nonemergency corrective, perfective, and adaptive projects.

Requirement: A performance standard for an attribute or a function, or the process used to verify that a standard has been met.

Reviews: A process or meeting during which a work product, or a set of work products, is presented to project personnel, project and program managers, users, customers, sponsors, or other interested parties for comment or approval.

ROI: Return on investment.

Root Cause Analysis: A methodical process, based on quantitative data, to identify the primary cause by which a defect was introduced into the product. It typically goes beyond repairing the affected product and establishes how the process or method allowed the defect to be introduced in the first place.

Run Chart: A graph of data points in chronological order used to detect trends of a characteristic being measured.

Scatter Plot: A graph that shows whether there is a relationship between two factors.

Scope Statement: A document that contains early estimates of the project resources and costs.

Security Testing: Testing that verifies the cornerstones of security: confidentiality, integrity, and availability.

SMC: Simple, medium, and complex test cases.

SOA Testing: Testing that views the whole business process and ensures that the pieces of that process interact properly.

Software Maintenance: All changes, corrections, and enhancements that occur after an application has been placed into production.

Standard: A measure used to evaluate products or processes and identify nonconformance.

Static Testing: Testing an artifact through a review process.

Statistical Process Control: The use of statistics and tools to measure a process.

Stress Testing: Testing in which the load placed on the system is increased beyond normal expected usage in order to observe the application’s response.

System Testing: The functional testing of a system to verify that it performs within the limits of the system requirements and is fit for use.

Terse Requirement: A good requirement must be free of unnecessary verbiage or information.

Test Coverage: A measure of the portion of a system under test that is actually tested.

Test Cycle: A set of ordered test conditions that will test a logical and complete portion of a system.

Test Data Generator: A testing tool that creates data that is then read by an automated test tool and entered into the application.
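
As an illustrative sketch only, a simple test data generator in Python might write random records to a CSV file for an automated tool to consume; all field names are hypothetical:

    # test_data_generator.py -- illustrative sketch of a test data generator
    import csv
    import random
    import string

    def random_name(length=8):
        return "".join(random.choices(string.ascii_lowercase, k=length)).capitalize()

    def generate_customers(path, count=10):
        # Write random customer records that an automated test tool could
        # later read and enter into the application under test.
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["name", "age", "balance"])
            for _ in range(count):
                writer.writerow([random_name(),
                                 random.randint(18, 90),
                                 round(random.uniform(0, 10000), 2)])

    if __name__ == "__main__":
        generate_customers("customers.csv", count=5)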

Test Event: A generic term used to describe one of many levels of a test. Examples: unit test, integration test, system test.

Test Maturity: An assessment of the gaps between the current test processes and a standard set of processes.

Test Readiness Review: A formal review conducted primarily to evaluate that all preliminary and entrance criteria have been satisfied and are verifiable before proceeding into a formal test event.

Test Suite: A collection of test cases that are intended to be used as input to a software program to show that it has some specified set of behaviors.
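
A minimal sketch of a test suite using Python's standard unittest module; the test cases shown are illustrative only:

    # test_suite_example.py -- illustrative sketch of a test suite
    import unittest

    class TestArithmetic(unittest.TestCase):
        def test_addition(self):
            self.assertEqual(2 + 3, 5)

        def test_division(self):
            self.assertAlmostEqual(10 / 4, 2.5)

    def suite():
        # Collect related test cases into a single suite.
        s = unittest.TestSuite()
        s.addTest(TestArithmetic("test_addition"))
        s.addTest(TestArithmetic("test_division"))
        return s

    if __name__ == "__main__":
        unittest.TextTestRunner().run(suite())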

Testable Requirement: A testable requirement must be able to be verified or validated; that is, it should be possible to prove the intent of the requirement.

Testing Estimation: An estimate that takes into consideration the types and costs of the resources required to complete the planned test.

Testing Tool: A manual or automated procedure or software used to test a system.

Traceability Requirement: A requirement must also be traceable to test cases. Traceability is key to verifying that requirements have been met.

Understandability Requirement: Understandable requirements are organized in a manner that facilitates reviews.

Unit Testing: Testing performed on individual programs to verify that they perform according to their requirements.

Usability Testing: Testing of the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use.

Use Case: A scenario that describes the use of a system by an actor to accomplish work.

User: The customer who uses a product or process.

User Story: An informal statement of the requirement in simple sentence formats typically written on 3 × 5 cards.

Validation: A type of evaluation conducted at the end of the development process to assess the software product’s ability to meet the specified requirements.

Values: The ideals and customs for which individuals have a positive regard.

Verification: A type of evaluation to determine if the software products at a given development phase satisfy the stipulated conditions, which were determined at the start of that phase.

Vision: A statement that describes the desired future state of something.

Volume Testing: A form of performance testing in which the data volume is increased to an abnormal quantity to observe the behavior of the system.

Walkthrough: A static review technique used to analyze a technical work product.

Window: A rectangle on a screen that represents information.
