Appendix E

Test Templates

E1: Unit Test Plan

The unit test plan is based on the program or design specification and is required for a formal test environment. The following is a sample unit test plan table of contents:

1. Introduction Section

a.   Test Strategy and Approach

b.   Test Scope

c.   Test Assumptions

2. Walkthrough (Static Testing)

a.   Defects Discovered and Corrected

b.   Improvement Ideas

c.   Structured Programming Compliance

d.   Language Standards

e.   Development Documentation Standards

3. Test Cases (Dynamic Testing)

a.   Input Test Data

b.   Initial Conditions

c.   Expected Results

d.   Test Log Status

4. Environment Requirements

a.   Test Strategy and Approach

b.   Platform

c.   Libraries

d.   Tools

e.   Test Procedures

f.   Status Reporting

E2: System/Acceptance Test Plan

The system or acceptance test plan is based on the requirements specifications and is required for a formal test environment. System testing evaluates the functionality and performance of the whole application and consists of a variety of tests including performance, usability, stress, documentation, security, volume, recovery, and so on. Acceptance testing is a user-run test that demonstrates the application’s ability to meet the original business objectives and system requirements, and usually consists of a subset of system tests.

The following is a sample test plan table of contents:

1. Introduction

a.   System Description (i.e., brief description of system)

b.   Objective (i.e., objectives of the test plan)

c.   Assumptions (e.g., computers available for all working hours, etc.)

d.   Risks (i.e., risks if unit testing is not completed)

e.   Contingencies (e.g., backup procedures, etc.)

f.   Constraints (e.g., limited resources)

g.   Approval Signatures (e.g., authority to sign off on document)

2. Test Approach and Strategy

a.   Scope of Testing (i.e., tests to be performed)

b.   Test Approach (e.g., test tools, black box)

c.   Types of Tests (e.g., unit, system, static, dynamic, manual)

d.   Logistics (e.g., location, site needs, etc.)

e.   Regression Policy (e.g., between each build)

f.   Test Facility (i.e., general description of where test will occur)

g.   Test Procedures (e.g., defect fix acceptance, defect priorities, etc.)

h.   Test Organization (e.g., description of QA/test team)

i.   Test Libraries (i.e., location and description)

j.   Test Tools (e.g., capture/playback regression testing tools)

k.   Version Control (i.e., procedures to control different versions)

l.   Configuration Building (i.e., how to build the system)

m.   Change Control (i.e., procedures to manage change requests)

3. Test Execution Setup

a.   System Test Process (e.g., entrance criteria, readiness, etc.)

b.   Facility (e.g., details of test environment, laboratory)

c.   Resources (e.g., staffing, training, timeline)

d.   Tool Plan (e.g., specific tools, packages, special software)

e.   Test Organization (e.g., details of personnel, roles, responsibilities)

4. Test Specifications

a.   Functional Decomposition (e.g., what functions to test from functional specification)

b.   Functions Not to Be Tested (e.g., out of scope)

c.   Unit Test Cases (i.e., specific unit test cases)

d.   Integration Test Cases (i.e., specific integration test cases)

e.   System Test Cases (i.e., specific system test cases)

f.   Acceptance Test Cases (i.e., specific acceptance test cases)

5. Test Procedures

a.   Test Case, Script, Data Development (e.g., procedures to develop and maintain)

b.   Test Execution (i.e., procedures to execute the tests)

c.   Correction (i.e., procedures to correct discovered defects)

d.   Version Control (i.e., procedures to control software component versions)

e.   Maintaining Test Libraries

f.   Automated Test Tool Usage (i.e., tool standards)

g.   Project Management (i.e., issue and defect management)

h.   Monitoring and Status Reporting (i.e., interim versus summary reports)

6. Test Tools

a.   Tools to Use (i.e., specific tools and features)

b.   Installation and Setup (i.e., instructions)

c.   Support and Help (e.g., vendor help line)

7. Personnel Resources

a.   Required Skills (i.e., manual/automated testing skills)

b.   Roles and Responsibilities (i.e., who does what when)

c.   Numbers and Time Required (e.g., resource balancing)

d.   Training Needs (e.g., send staff for tool training)

8. Test Schedule

a.   Development of Test Plan (e.g., start and end dates)

b.   Design of Test Cases (e.g., start and end dates by test type)

c.   Development of Test Cases (e.g., start and end dates by test type)

d.   Execution of Test Cases (e.g., start and end date by test type)

e.   Reporting of Problems (e.g., start and end dates)

f.   Developing Test Summary Report (e.g., start and end dates)

g.   Documenting Test Summary Report (e.g., start and end dates)

E3: Requirements Traceability Matrix

The following requirements traceability matrix is a document that traces user requirements from analysis through implementation. It can be used as a completeness check to verify that all requirements are implemented and that no unnecessary or extra features have been introduced, and as a maintenance guide for new personnel. At each step in the development cycle, the requirements, code, and associated test cases are recorded to ensure that each user requirement is addressed in the final system. Both the user and the developer can easily cross-reference the requirements to the design specifications, the programming, and the test cases.

[Requirements traceability matrix template]
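The matrix itself is not reproduced here. Purely as an illustrative sketch (the requirement IDs, artifact names, and field names below are invented, not part of the template), a trace like this can be audited mechanically in both directions:

```python
# Hypothetical traceability records: each requirement traces to design,
# code, and test artifacts (IDs and names are invented for illustration).
trace = {
    "REQ-001": {"design": "DS-3.1", "code": "order_entry.py", "tests": ["TC-101", "TC-102"]},
    "REQ-002": {"design": "DS-3.2", "code": "order_update.py", "tests": []},
}

# Forward completeness check: every requirement must trace to design, code,
# and at least one test case.
for req_id, links in trace.items():
    missing = [item for item in ("design", "code", "tests") if not links.get(item)]
    if missing:
        print(f"{req_id}: missing {', '.join(missing)}")

# Reverse check: flag test cases that trace back to no requirement
# (a hint of unnecessary or extra features).
traced_tests = {tc for links in trace.values() for tc in links["tests"]}
all_tests = {"TC-101", "TC-102", "TC-999"}   # hypothetical full test inventory
print("Untraced tests:", all_tests - traced_tests)
```

A spreadsheet column per artifact serves the same purpose; the point is simply that every requirement row and every test column can be checked in both directions.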

E4: Test Plan (Client/Server and Internet Spiral Testing)

The client/server test plan is based on the information gathered during the initial interviews with development and any other information that becomes available during the course of the project. Because requirements specifications are probably not available in the spiral development environment, this test plan is a “living document.” Through every spiral, new information is added, and old information is updated as circumstances change. The major testing activities are the function, GUI, system, acceptance, and regression testing. These tests, however, are not necessarily performed in a specific order and may be concurrent.

The cover page of the test plan includes the title of the testing project, author, current revision number, and date last changed. The next page includes an optional section for sign-offs by the executive sponsor, development manager, testing manager, quality assurance manager, and others as appropriate.

The following is a sample test plan table of contents:

1. Introduction

1.1    Purpose

1.2    Executive Summary

1.3    Project Documentation

1.4    Risks

2. Scope

2.1    In Scope

2.2    Test Requirements

2.2.1    High-Level Functional Requirements

2.2.2    User Business/Interface Rules

2.3    GUI Testing

2.4    Critical System/Acceptance Testing

2.4.1    Performance Testing

2.4.2    Security Testing

2.4.3    Volume Testing

2.4.4    Stress Testing

2.4.5    Compatibility Testing

2.4.6    Conversion Testing

2.4.7    Usability Testing

2.4.8    Documentation Testing

2.4.9    Backup Testing

2.4.10  Recovery Testing

2.4.11  Installation Testing

2.5    Regression Testing

2.6    Out of Scope

3. Test Approach

3.1    General Test Structure

3.2    Data

3.3    Interfaces

3.4    Environmental/System Requirements

3.5    Dependencies

3.6    Regression Test Strategy

3.7    Defect Tracking and Resolution

3.8    Issue Resolution

3.9    Change Requests

3.10  Resource Requirements

3.10.1  People

3.10.2  Hardware

3.10.3  Test Environment

3.11  Milestones/Schedule

3.12  Software Configuration Management

3.13  Test Deliverables

3.14  Test Tools

3.15  Metrics

3.16  Test Entrance/Exit Criteria

3.17  Interim and Summary Status Reports

3.18  Approvals

E5: Function/Test Matrix

The following function/test matrix cross-references the tests to the functions. This matrix provides proof of the completeness of the test strategies and illustrates in graphic format which tests exercise which functions.

[Function/test matrix template]
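As a hedged illustration only (the function names and test IDs below are invented), a function/test matrix can be checked mechanically so that no function is left without a covering test:

```python
# Hypothetical function/test matrix: a True entry means the test case
# exercises the function (names and IDs are invented).
functions = ["Add Order", "Change Order", "Delete Order"]
tests = ["TC-01", "TC-02", "TC-03"]
matrix = {
    ("Add Order", "TC-01"): True,
    ("Change Order", "TC-02"): True,
    ("Change Order", "TC-03"): True,
}

# Completeness check: every function should be exercised by at least one test.
for function in functions:
    covering = [t for t in tests if matrix.get((function, t))]
    print(f"{function:15s} {', '.join(covering) if covering else 'NOT COVERED'}")
```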

E6: GUI Component Test Matrix (Client/Server and Internet Spiral Testing)

With the following GUI component test matrix, each GUI component is defined and documented by name and GUI type. During GUI testing, each component is tested against a predefined set of GUI tests.

[GUI component test matrix template]

E7: GUI-Based Functional Test Matrix (Client/Server and Internet Spiral Testing)

Below is a GUI-based functional test matrix template that can be used to document GUI-based test cases. It includes functions and associated GUI objects or foundation components (windows, menus, forms, icons, and controls). Each test includes a requirements number, test objective, test procedure (step or script), expected results, whether the test passed or failed, the tester, and the date of the test. It thus also serves as a test case log.

Function (Enter the Name)

[GUI-based functional test matrix template]

E8: Test Case

The following test case defines the step-by-step process whereby a test is executed. It includes the objectives and conditions of the test, the steps needed to set up the test, the data inputs, and the expected and actual results. Other information, such as the software, environment, version, test ID, screen, and test type, is also provided.

Date: ____________ Tested by: _______________________________________________

System: ____________ Environment:___________________________________________

Objective: ________________ Test ID ________________ Req. ID __________________

Function: _______________________________ Screen:___________________________

Version: ________________________________ Test Type:_________________________

(Unit, Integ., System, Accept.)

Condition to Test:

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

Data/Steps to Perform:

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

Expected Results:

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

Actual Results:    Passed ☐    Failed ☐

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

E9: Test Case Log

The following test case log documents the test cases for a test type to be executed during testing. It also records the results of the tests, which provide the detailed evidence for the test log summary report, and enables one to reconstruct the test, if necessary.

Test Name:

Enter name of test

Test Case Author:

Enter test case author name

Tester Name:

Enter tester name

Project ID/Name:

Enter name of project

Test Cycle ID:

Enter test cycle ID

Date Tested:

Enter date test case was completed

[Test case log template]

E10: Test Log Summary Report

The following test log summary report documents the test cases from the tester’s test logs, either in progress or completed, for status reporting and metric collection.

Completed By:

Enter the name of the tester completing the report

Report Date:

Enter date of the report

Project ID/Name:

Enter project identifier/name

Testing Name/Event:

Enter the name of the type of test (unit, integration, system, acceptance)

Total Number of Test Cases:

Enter total number of test cases

Testing Subtype:

Enter name of testing subtype (interface, volume, stress, user, parallel testing)

[Test log summary report template]
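As a hedged sketch (the statuses and IDs below are invented), the summary counts in this report can be rolled up directly from the individual test case log entries:

```python
# Hypothetical test case log entries for one test type (IDs and statuses invented).
test_log = [
    {"id": "TC-101", "status": "Passed"},
    {"id": "TC-102", "status": "Failed"},
    {"id": "TC-103", "status": "In Progress"},
]

total = len(test_log)
passed = sum(1 for tc in test_log if tc["status"] == "Passed")
failed = sum(1 for tc in test_log if tc["status"] == "Failed")
completed = passed + failed

print(f"Total test cases: {total}")
print(f"Passed/Failed:    {passed}/{failed}")
print(f"Percent complete: {100 * completed / total:.0f}%")
```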

E11: System Summary Report

A system summary report should be prepared for every major testing event. Sometimes it summarizes all the tests. The following is an outline of the information that should be provided.

1. General Information

1.1 Test Objectives. The objectives of the test, including the general functions of the software tested and the test analysis performed, should be summarized. Objectives include functionality, performance, etc.

1.2 Environment. The software sponsor, the development manager, the user organization, and the computer center at which the software is to be installed should be identified. Any ways in which the test environment differs from the operational environment should be noted, and the effects of this difference assessed.

1.3 References. Applicable references should be listed, including the following:

a.   Project authorization

b.   Previously published documents on the project

c.   Documentation concerning related projects

d.   Standards and other reference documents

2. Test Results and Findings

The results and findings of each test should be presented separately.

2.1 Test (Identify)

2.1.1 Validation Tests. Data input and output results of this test, including the output of internally generated data, should be compared with the data input and output requirements. Findings should be included.

2.1.2 Verification Tests. Variances with expected results should be listed.

2.2 Test (Identify). The results and findings of the second and succeeding tests should be presented in a manner similar to the previous paragraphs.

3. Software Function and Findings

3.1 Function (Identify)

3.1.1 Performance. The function should be briefly described. The software capabilities that were designed to satisfy this function should be described. The findings on the demonstrated capabilities from one or more tests should be included.

3.1.2 Limits. The range of data values tested should be identified. The deficiencies, limitations, and constraints detected in the software during the testing with respect to this function should be noted.

3.2 Function (Identify). The findings on the second and succeeding functions should be presented in a manner similar to Paragraph 3.1.

4. Analysis Summary

4.1 Capabilities. The capabilities of the software as demonstrated by the tests should be described. When tests were to demonstrate fulfillment of one or more specific performance requirements, findings showing the comparison of the results with these requirements should be prepared. The effects of any differences in the test environment compared with the operational environment should be assessed.

4.2 Deficiencies. Software deficiencies as demonstrated by the tests should be listed, and their impact on the performance of the software should be assessed. The cumulative or overall impact on performance of all detected deficiencies should be summarized.

4.3 Graphical Analysis. Graphs can be used to demonstrate the history of the development project, including defect trend analysis, root-cause analysis, and so on. (Project wrap-up graphs are recommended as illustrations.)

4.4 Risks. The business risks faced if the software is placed in production should be listed.

4.5 Recommendations and Estimates. For each deficiency, estimates of time and effort required for its correction should be provided along with recommendations on the following:

a.   Urgency of each correction

b.   Parties responsible for corrections

c.   How the corrections should be made

4.6 Opinion. The readiness of the software for production should be assessed.

E12: Defect Report

The following defect report documents an anomaly discovered during testing. It includes all the information needed to reproduce the problem, including the author, release/build number, open/close dates, problem area, problem description, test environment, defect type, how it was detected, who detected it, priority, severity, status, and so on.

Software Problem Report

Defect ID: (Required)

Computer-generated

Author: (Required)

Computer-generated

Release/Build#: (Required)

Build where issue was discovered

Open Date: (Required)

Computer-generated

Close Date: (Required)

Computer-generated when QA closes

Problem Area: (Required)

e.g., add order, etc.

Defect or Enhancement: (Required)

Defect (default)

Enhancement

Problem Title: (Required)

Brief one-line description

Problem Description:

A precise problem description with screen captures, if possible

Current Environment: (Required)

e.g., Win95/Oracle 4.0 NT

Other Environments:

e.g., WinNT/Oracle 4.0 NT

Defect Type: (Required)

Architectural

Connectivity

Consistency

Database integrity

Documentation

Functionality (default)

GUI

Installation

Memory

Performance

Security and controls

Standards and conventions

Stress

Usability

Who Detected: (Required)

External customer

Internal customer

Development

Quality assurance (default)

How Detected: (Required)

Review

Walkthrough

JAD

Testing (default)

Assigned To: (Required)

Individual assigned to investigate problem

Priority: (Required)

Critical

High (default)

Medium

Low

Severity: (Required)

Critical

High (default)

Medium

Low

Status: (Required)

Open (default)

Being reviewed by development

Returned by development

Ready for testing in the next build

Closed (QA)

Returned by (QA)

Deferred to the next release

Status Description:

(Required when status = “returned by development” or “ready for testing in the next build”)

Fixed by:

(Required when status = “ready for testing in the next build”)

Planned Fix Build#:

(Required when status = “ready for testing in the next build”)
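Purely as an illustrative sketch (the class name and typing choices are assumptions, not part of the template), the required fields above might be captured as a structured record so every report carries the same information:

```python
from dataclasses import dataclass
from typing import Optional

# Minimal sketch of a defect record mirroring the fields listed above.
# Defaults follow the template's stated defaults (Defect, Functionality,
# Quality assurance, Testing, High, High, Open).
@dataclass
class SoftwareProblemReport:
    defect_id: str                        # computer-generated
    author: str                           # computer-generated
    release_build: str                    # build where the issue was discovered
    open_date: str                        # computer-generated
    problem_area: str                     # e.g., add order
    problem_title: str                    # brief one-line description
    assigned_to: str                      # individual assigned to investigate
    defect_or_enhancement: str = "Defect"
    defect_type: str = "Functionality"
    who_detected: str = "Quality assurance"
    how_detected: str = "Testing"
    priority: str = "High"
    severity: str = "High"
    status: str = "Open"
    problem_description: str = ""
    current_environment: str = ""
    close_date: Optional[str] = None      # computer-generated when QA closes
```

Conditional fields such as Status Description or Planned Fix Build# would be added in the same way, populated only when the status requires them.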

E13: Test Schedule

The following test schedule includes the testing steps (and perhaps tasks), the target begin and end dates, and responsibilities. It should also describe how the test will be reviewed, tracked, and approved.

[Test schedule template]

E14: Retest Matrix

A retest matrix is a tool that relates test cases to functions (or program units), as shown in the following table. A check entry in the matrix indicates that the test case is to be retested when the function (or program unit) has been modified due to enhancements or corrections; the absence of an entry indicates that the test case does not need to be retested. The retest matrix can be built before the first testing spiral but needs to be maintained during subsequent spirals. As functions (or program units) are modified during a development spiral, existing test cases are checked, or new ones created and checked, in the retest matrix in preparation for the next test spiral. Over subsequent spirals, some functions (or program units) may stabilize, with no recent modifications; selective removal of their check entries should then be considered and undertaken between testing spirals.

[Retest matrix template]
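As an assumed illustration of the selection mechanism (function names and test IDs are invented), the check entries translate directly into a retest selection for the next spiral:

```python
# Hypothetical retest matrix: a check (True) means the test case must be rerun
# whenever the corresponding function is modified.
retest_matrix = {
    "Add Order":    {"TC-01": True, "TC-02": True},
    "Change Order": {"TC-02": True, "TC-03": True},
    "Delete Order": {"TC-04": True},
}

def select_retests(modified_functions):
    """Return the set of test cases to rerun in the next spiral."""
    selected = set()
    for function in modified_functions:
        selected.update(tc for tc, checked in retest_matrix.get(function, {}).items() if checked)
    return selected

# Example: only "Change Order" was modified in this spiral.
print(sorted(select_retests(["Change Order"])))   # ['TC-02', 'TC-03']
```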

E15: Spiral Testing Summary Report (Client/Server and Internet Spiral Testing)

The objective of the final spiral test report is to describe the results of the testing, including not only what works and what does not, but also the test team’s evaluation of how the application will perform when it is placed into production.

For some projects informal reports are the practice, whereas for others very formal reports are required. The following is a compromise between the two extremes, providing the essential information without requiring an inordinate amount of preparation:

  1. Project Overview

  2. Test Activities

    a.   Test Team

    b.   Test Environment

    c.   Types of Tests

    d.   Test Schedule

    e.   Test Tools

  3. Metric Graphics

  4. Findings/Recommendations

E16: Minutes of the Meeting

The following Minutes of the Meeting template is used to document the results and follow-up actions of the project information-gathering session. This sample is also included in the CD at the back of the book.

Meeting Purpose: ___________________________________________________________

Meeting Date: ______________________________________________________________

Start Time: ________________________________________________________________

End Time: __________________________________________________________________

Attended By: _______________________________________________________________

Distribution List: _________________________________________________________

Important Discussions

[Important discussions table]

Action Items

a.

b.

c.

d.

e.

f.

g.

h.

i.

j.

k.

E17: Test Approvals

The Test Approvals matrix is used to formally document management approvals for test deliverables. The following is a sample that is also included in the CD at the back of the book.

Deliverable Approvals

 

[Deliverable approvals matrix]

E18: Test Execution Plan

The following Test Execution Plan is used to plan the activities for the execution phase. This sample is also included in the CD at the back of the book.

Project Name: _____________________________________________________________

Project Code: ______________________________________________________________

[Test execution plan template]

Date: ____________________________________________________________________

E19: Test Project Milestones

The following Test Project Milestones matrix is used to identify and track the key test milestones. This sample is also included in the CD at the back of the book.

[Test project milestones matrix]

E20: PDCA Test Schedule

The following PDCA Test Schedule matrix is used to plan and track the Plan–Do–Check–Act test phases. This sample is also included in the CD at the back of the book.

[PDCA test schedule matrix]

E21: Test Strategy

The following Test Strategy is used to document the overall testing approach for the project. This sample table of contents is also included in the CD in the back of the book.

1. Introduction

1.1 Project Overview

<An introduction to the project, including an outline of the project scope>

1.2 About the Client

<Client’s business in association with the project>

1.3 Application/System Overview

<A concise and high-level explanation of our understanding of the functionality of the application and the breakup of business functions>

2. Testing Scope

<General application scope should be provided in this section>

2.1 Testing Objectives

<Test objectives as they relate to specific requirements>

2.2 Types of Tests

<Types of testing, such as functional testing, nonfunctional testing, operational acceptance testing, regression testing, performance testing, and so on, should be mentioned here>

2.3 Within Scope

<Transactions, reports, interfaces, business functions, and so on>

2.4 Out of Scope

<Define what is not specifically covered in testing>

2.5 Assumptions

<Test assumptions in conjunction with the test scope>

2.6 Baseline Documents

<The list of baseline documents, prototype with version numbers>

3. Testing Approach

3.1 Testing Methodology

3.1.1 Entry Criteria

<List of criteria that need to be fulfilled before test planning can begin>

3.1.2 Test Planning Approach

<The approach to be adopted in preparing necessary testware, for example, manual test cases or automated test scripts, approach for creating test data, and so on>

3.1.3 Test Documents

<List of test documents, their definition and purpose>

3.1.4 Test Execution Methodology

<A description of how the tests will be executed>

3.1.5 Test Execution Checklist

<List of items that need to be available to the test team prior to the start of test execution>

3.1.6 Test Iterations

<Number of iterations of testing planned for execution, the entry and exit criteria, and the scope of each test iteration>

3.1.7 Defect Management

<Entire defect management process. It includes defect meeting, defect resolution, and so on>

3.1.8 Defect Logging and Defect Reporting

<A note on the defect-logging process and a sample defect log template that will be used during test execution should be mentioned here>

3.1.9 Defect Classification and Defect Life Cycle

<A detailed note on the life cycle of a defect, the different defect severity levels, and defect categories>

3.1.10 Defect Meetings

<A detailed defect meeting procedure indicating the parties to the defect meeting, their responsibilities, and the frequency of defect meetings>

3.1.11 Exit Criteria

<Exit criteria for test execution>

4. Resources

4.1 Skills Required for the Project

<An analysis of the skills required for executing the project>

4.2 Training Schedule

<Project-specific training needs with a timetable>

4.3 Offshore

4.3.1 Test Personnel

<List of test team personnel and their roles in the project along with date of inclusion in the project>

4.3.2 Roles and Responsibilities

4.4 On-Site

4.4.1 Test Personnel

<List of test team personnel and their roles in the project along with date of inclusion in the project>

4.4.2 Roles and Responsibilities

4.5 Client

<Roles and responsibilities of client or client’s representative>

4.6 Test Infrastructure

4.6.1 Hardware

<List of hardware requirements for test execution>

4.6.2 Software

<List of software requirements for test execution>

5. Project Organization and Communication

<Project organization chart, the turnaround time for the review, and sign-off for the documents submitted to the clients>

5.1 Escalation Model

<In case of issues and concerns, the escalation procedure and timelines to escalate>

5.2 Suspension and Resumption Criteria

<List of circumstances under which test activities will be suspended or resumed should be mentioned here>

5.3 Risk, Contingency, and Mitigation Plan

<Risks of the project, contingency, and mitigation plan for the risks identified>

5.4 Schedule

5.4.1 Milestones

<A high-level schedule for the different stages of the project with clear indication of milestones planned with a list of activities>

5.4.2 Detailed Plan

<A detailed project plan using MS-Project with all identified tasks and subtasks, resources to be used with dates fitting into the milestones as mentioned in the high-level schedule>

5.4.3 Deliverables

<A list of deliverables associated with the project as mentioned in the test documents, the mechanism for obtaining client acceptance for the deliverables>

6. Appendix

<Appendices referred to in any of the foregoing sections should be included here>

E22: Clarification Request

The following Clarification Request matrix is used to document questions that may arise while the tester analyzes the requirements. This sample is also included in the CD at the back of the book.

Project Name: _____________________________________________________________

Project Code: ______________________________________________________________

[Clarification request matrix]

E23: Screen Data Mapping

The following Screen Data Mapping matrix is used to document the properties of the screen data. This sample is also included in the CD at the back of the book.

[Screen data mapping matrix]

E24: Test Condition versus Test Case

The following Test Condition versus Test Case matrix is used to associate a requirement with each condition that is mapped to one or more test cases. This sample is also included in the CD at the back of the book.

[Test condition versus test case matrix]
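Purely as an assumed illustration (the requirement, condition, and test case IDs below are invented), the association can be represented as a nested mapping and checked so that every condition is covered by at least one test case:

```python
# Hypothetical requirement -> test condition -> test case associations
# (all IDs and condition names are invented).
conditions = {
    "REQ-001": {
        "COND-01 valid order quantity": ["TC-101", "TC-102"],
        "COND-02 invalid order quantity": ["TC-103"],
    },
    "REQ-002": {
        "COND-03 duplicate order number": [],   # condition not yet covered
    },
}

# Check that every condition is mapped to at least one test case.
for req, cond_map in conditions.items():
    for cond, cases in cond_map.items():
        if not cases:
            print(f"{req} / {cond}: no test case assigned")
```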

E25: Project Status Report

The following Project Status Report is used to report the status of the testing project for all key process areas. This sample is also included in the CD at the back of the book.

Purpose: This template consolidates the QA project-related activities in all key process areas. It is published to all project stakeholders weekly.

Project Name _____________________ Project Code ____________________________

Project Start Date ______________________Project Manager ______________________

Project Phase ______________________ Week No. & Date ________________________

Distribution _______________________________________________________________

Key Activities          Details                       Remarks

Deliverables            ____________________          ____________________

Decisions               ____________________          ____________________

Weekly Progress for This Week

[Weekly progress table]

Unplanned Activities

[Unplanned activities table]

Activities Planned for Next Week

[Activities planned for next week table]

E26: Test Defect Details Report

The following Test Defect Details Report is used to report the detailed defect status of the testing project for all key process areas. This sample is also included in the CD at the back of the book.

[Test defect details report template]

E27: Defect Report

The following Defect Report is used to report the details of a specific defect. This sample is also included in the CD at the back of the book.

[Defect report template]

E28: Test Execution Tracking Manager

The Test Execution Tracking Manager is an Excel spreadsheet that provides a comprehensive, test-cycle-level view of the number of test cases that passed or failed, the number of defects discovered by application area, the status of those defects, the percentage completed, and the defect severities by defect type. The template is located in the CD at the back of the book.
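The spreadsheet itself is on the CD. As an assumed illustration only (the field names and counts below are invented), the same rollups can be computed from raw defect and execution records:

```python
from collections import Counter

# Hypothetical defect records from one test cycle (areas, severities, statuses invented).
defects = [
    {"area": "Order Entry", "severity": "High",   "status": "Open"},
    {"area": "Order Entry", "severity": "Medium", "status": "Closed"},
    {"area": "Reporting",   "severity": "Low",    "status": "Open"},
]

print("Defects by application area:", Counter(d["area"] for d in defects))
print("Defects by severity:        ", Counter(d["severity"] for d in defects))
print("Defects by status:          ", Counter(d["status"] for d in defects))

# Test-cycle completion, assuming hypothetical execution counts.
executed, passed, total = 180, 165, 200
print(f"Percent executed: {100 * executed / total:.0f}%  Pass rate: {100 * passed / executed:.0f}%")
```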

E29: Final Test Summary Report

The following Final Test Summary Report is used as the final report of the test project, presenting its key findings. A sample table of contents follows; it is also included in the CD at the back of the book.

1. Introduction

1.1 Executive Summary

<Highlights of the project in terms of schedule, size, and defect counts, as well as important events that occurred during the life of the project, which would be of interest to the management>

1.2 Project Overview

<This section covers the business of the client and an overview of the project>

1.3 Scope of Testing

<A note on the scope of testing and its details>

2. Test Methodology

2.1 Test Documents

<A brief note on the test documents>

2.2 Test Iterations

<The details of test iterations carried out>

2.3 Defect Management

<A brief note explaining the Defect Management process followed during execution>

3. Measurements

3.1 Traceability Matrix

<The details of the trace from the requirements through to the scripts>

3.2 Planned versus Actual

<Details of Planned versus Actual schedule with reasons for variation>

3.3 Test Scripts Summary

<The Final Test Scripts summary at the end of Test Execution>

3.4 Features Untested/Invalid

<Details pertaining to the scripts that were untested, invalid, or not delivered and the reasons>

4. Findings

4.1 Final Defect Summary

<Summary of Defects at the end of test execution>

4.2 Deferred Defects

<Details of test cases that failed and are in deferred status with reasons for deferring the defect>

5. Analysis

5.1 Categorywise Defects

<A chart should be generated to display the count of defects categorywise>

5.2 Statuswise Defects

<A chart should be generated to display the count of defects statuswise>

5.3 Severitywise Defects

<A chart should be generated to display the count of defects severitywise>

5.4 Issues

<Details of issues encountered during the course of the project that were documented and brought to the attention of management>

5.5 Risks

<An analysis of the defects reported and of any foreseeable risks that could affect the business>

5.6 Observations

<Any other critical events that cannot be classified under issues and risks>

6. Test Team

<Names and roles of personnel from all parties involved during the project>

7. Appendices

<Appendices, as referred to in any of the foregoing sections, should be mentioned here>

E30: Test Automation Strategy

The following is the standard format of a Test Automation Strategy that will be customized depending upon the test requirements.

  • Overview of the Project

  • Automation Purpose and Objectives

  • Scope of Automation: Inclusions and Exclusions

  • Automation Approach

  • Test Environment

  • Tools Used: Scripting and Test Management

  • Script Naming Conventions

  • Resources and Scheduling

  • Training Requirements

  • Risk and Mitigation

  • Assumptions and Constraints

  • Entry and Exit Criteria

  • Acceptance Criteria

  • Deliverables
