Chapter 13

Information Gathering (Plan)

You will recall that, in the spiral development environment, software testing is described as a continuous improvement process that must be integrated into a rapid application development methodology. Deming’s continuous improvement process using the PDCA model (see Figure 13.1) is applied to the software testing process. We are now in the Plan part of the spiral model.

Figure 13.2 outlines the steps and tasks associated with information gathering within the Plan part of spiral testing. Each step and task is described along with valuable tips and techniques.

The purpose of information gathering is to obtain and organize information relevant to the software development project, in order to understand the scope of the development effort and start building a test plan. Other interviews may occur during the development project, as necessary.

Proper preparation is critical to the success of the interview. Before the interview, it is important to clearly identify the objectives of the interview and communicate them to all parties; identify the quality assurance representative who will lead the interview and the scribe who will record it; schedule a time and place; prepare any required handouts; and communicate what is required from development.

Although many interviews are unstructured, the interviewing steps and tasks shown in Figure 13.2 will be helpful.


Figure 13.1   Spiral testing and continuous improvement.

Step 1: Prepare for the Interview

Task 1: Identify the Participants

It is recommended that there be no more than two interviewers representing quality assurance. It is helpful for one to assume the role of questioner while the other takes detailed notes, which allows the interviewer to focus on soliciting information. Ideally, the interviewer should be the manager responsible for the project-testing activities. The scribe, or note taker, should be a test engineer or lead tester assigned to the project; the scribe supports the interviewer by recording each pertinent piece of information and listing the issues, assumptions, and open questions.

The recommended development participants attending include the project sponsor, development manager, or a senior development team member. Although members of the development team can take notes, this is the responsibility of the scribe. Having more than one scribe can result in confusion, because multiple sets of notes will eventually have to be consolidated. The most efficient approach is for the scribe to take notes, and summarize at the end of the interview. (See Appendix F20, “Project Information Gathering Checklist,” which can be used to verify the information available and required at the beginning of the project.)

Task 2: Define the Agenda

The key factor for a successful interview is a well-thought-out agenda. It should be prepared by the interviewer ahead of time and agreed upon by the development leader. The agenda should include an introduction, specific points to cover, and a summary section. The main purpose of an agenda is to enable the testing manager to gather enough information to scope out the quality assurance activities and begin a test plan. Table 13.1 depicts a sample agenda (details are described in “Step 2: Conduct the Interview”).

Step 2: Conduct the Interview

A good interview contains certain elements. The first is defining what will be discussed, or “talking about what we are going to talk about.” The second is discussing the details, or “talking about it.” The third is summarizing, or “talking about what we talked about.” The final element is timeliness. The interviewer should state up front the estimated duration of the interview and set the ground rule that if the allotted time expires before all agenda items are covered, a follow-on interview will be scheduled. This rule is difficult to enforce, particularly once the interview is deep in the details, but it should nonetheless be followed.


Figure 13.2   Information gathering (steps/tasks).

Table 13.1   Interview Agenda


Task 1: Understand the Project

Before getting into the project details, the interviewer should state the objectives of the interview and present the agenda. As with any type of interview, he or she should indicate that only one individual should speak, no interruptions should occur until the speaker acknowledges a request, and the focus should be on the material being presented.

The interviewer should then introduce himself or herself, introduce the scribe, and ask the members of the development team to introduce themselves. Each should indicate name, title, specific roles and job responsibilities, as well as expectations of the interview. The interviewer should point out that the purpose of this task is to obtain general project background information.

The following general questions should be asked to solicit basic information:

  ■ What is the name of the project?

  ■ What are the high-level project objectives?

  ■ Who is the audience (users) of the system to be developed?

  ■ When was the project started?

  ■ When is it anticipated to be complete?

  ■ What is the status of the project?

  ■ What is the projected effort in person-months?

  ■ Is this a new, maintenance, or package development project?

  ■ What are the major problems, issues, and concerns?

  ■ Are there plans to address problems and issues?

  ■ Is the budget on track?

  ■ Is the budget too tight, too loose, or about right?

  ■ What organizational units are participating in the project?

  ■ Is there an established organization chart?

  ■ What resources are assigned to each unit?

  ■ What is the decision-making structure; that is, who makes the decisions?

  ■ What are the project roles and the responsibilities associated with each role?

  ■ Who is the resource with whom the test team will communicate on a daily basis?

  ■ Has a quality management plan been developed?

  ■ Has a periodic review process been set up?

  ■ Has a representative from the user community been appointed to represent quality?

Task 2: Understand the Project Objectives

To develop a test plan for a development project, it is important to understand the objectives of the project. The purpose of this task is to understand the scope, needs, and high-level requirements of this project.

The following questions should be asked to solicit basic information:

  ■ Purpose:

    –   What type of system is being developed, for example, payroll, order entry, inventory, or accounts receivable/payable?

    –   Why is the system being developed?

    –   What subsystems are involved?

    –   What are the subjective requirements, for example, ease of use, efficiency, morale, flexibility?

  ■ Scope:

    –   Who are the users of the system?

    –   What are the users’ job titles and roles?

    –   What are the major functions and subfunctions of the system?

    –   What functions will not be implemented?

    –   What business procedures are within the scope of the system?

    –   Are there analysis diagrams, such as business flow diagrams, data flow diagrams, or data models, to describe the system?

    –   Have project deliverables been defined along with completeness criteria?

  ■ Benefits:

    –   What anticipated benefits will this system provide to the user?

      • Increased productivity

      • Improved quality

      • Cost savings

      • Increased revenue

      • Greater competitive advantage

  ■ Strategic:

    –   What are the strategic or competitive advantages?

    –   What impact will the system have on the organization, its customers, legal or governmental obligations, and so on?

  ■ Constraints:

    –   What are the financial, organizational, personnel, or technological constraints or limitations of the system?

    –   What business functions and procedures are out of the scope of the system?

Task 3: Understand the Project Status

The purpose of this task is to understand where the project is at this point, which will help define how to plan the testing effort. For example, if this is the first interview and the project is at the stage of coding the application, the testing effort is already behind schedule. The following questions should be asked to solicit basic information:

  ■ Has a detailed project work plan, including activities, tasks, dependencies, resource assignments, work-effort estimates, and milestones, been developed?

  ■ Is the project on schedule?

  ■ Is the completion time too tight, too loose, or about right?

  ■ Have there been any major slips in the schedule that will have an impact on the critical path?

  ■ How far is the project from meeting its objectives?

  ■ Are the user functionality and quality expectations realistic and being met?

  ■ Are the project work-effort trends on schedule?

  ■ Are the project cost trends within the budget?

  ■ What development deliverables have been delivered?

Task 4: Understand the Project Plans

Because the testing effort needs to track development, it is important to understand the project work plans. The following questions should be asked to solicit basic information:

  ■ Work breakdown:

    –   Has a project plan been developed with Microsoft Project (or another tool)?

    –   How detailed is the plan; for example, how many major and bottom-level tasks have been identified?

    –   What are the major project milestones (internal and external)?

  ■ Assignments:

    –   Have appropriate resources been assigned to each work plan task?

    –   Is the work plan well balanced?

    –   What is the plan for staging resources?

  ■ Schedule:

    –   Is the project plan on schedule or behind schedule?

    –   Is the plan updated periodically?

Task 5: Understand the Project Development Methodology

The testing effort must integrate with the development methodology. If testing is treated as a separate function, it may not receive the appropriate resources and commitment. When testing is integrated with development, the latter should not proceed without the former. Testing steps and tasks need to be integrated into the systems development methodology through the addition or modification of tasks. Specifically, the testing function needs to know when in the development methodology test design can start, and when the system will be available for test execution and for the recording and correction of defects.

The following questions should be asked to solicit basic information:

  ■ Methodology:

    –   What development and project management methodology does the development organization use?

    –   How well does the development organization follow the methodology?

    –   Is there room for interpretation or flexibility?

  ■ Standards:

    –   Are standards and practices documented?

    –   Are the standards useful, or do they hinder productivity?

    –   How well does the development organization enforce standards?

Task 6: Identify the High-Level Business Requirements

A software requirements specification defines the functions of a particular software product in a specific environment. Depending on the development organization, it may vary from a loosely defined document with a generalized definition of what the application will do to a very detailed specification, as shown in Appendix C, “Requirements Specification.” In either case, the testing manager must assess the scope of the development project to start a test plan.

The following questions should be asked to solicit basic information:

  1. What are the high-level functions? These should be enumerated at a high level. Examples include order processing, financial processing, reporting capability, financial planning, purchasing, inventory control, sales administration, shipping, cash flow analysis, payroll, cost accounting, and recruiting. This list defines what the application is supposed to do and gives the testing manager an idea of the level of test design and implementation required. The interviewer should solicit as much detail as possible, including a detailed breakdown of each function. If this detail is not available during the interview, a detailed functional decomposition should be requested, and it should be noted that this information is essential for preparing a test plan.

  2. What are the system (minimum) requirements? A description of the operating system version (Windows, etc.) and minimum microprocessor, disk space, RAM, and communications hardware should be provided.

  3. What are the Windows or external interfaces? The specification should define how the application should behave from an external viewpoint, usually by defining the inputs and outputs. It also includes a description of any interfaces to other applications or subsystems.

  4. What are the performance requirements? This includes a description of the speed, availability, data volume, throughput rate, response time, and recovery time of various functions, as well as stress conditions. This serves as a basis for understanding the level of performance and stress testing that may be required.

  5. What other testing attributes are required? This includes such attributes as portability, maintainability, security, and usability. This serves as a basis for understanding the level of other system-level testing that may be required.

  6. Are there any design constraints? This includes a description of any limitation on the operating environments, database integrity, resource limits, implementation language standards, and so on.

Task 7: Perform Risk Analysis

The purpose of this task is to measure the degree of business risk in an application system to improve testing. This is accomplished in two ways: high-risk applications can be identified and subjected to more extensive testing, and risk analysis can help identify the error-prone components of an individual application so that testing can be directed at those components. This task describes how to use risk assessment techniques to measure the risk of an application under test.

Computer Risk Analysis

Risk analysis is a formal method for identifying vulnerabilities (i.e., areas of potential loss). Any weakness that could be misused, intentionally or accidentally, and result in a loss to the organization is a vulnerability. Identification of risks allows the testing process to measure the potential effect of those vulnerabilities (e.g., the maximum loss that could occur if the risk or vulnerability were exploited).

Risk has always been a testing consideration. Individuals naturally try to anticipate problems and then test to determine whether additional resources and attention need to be directed at those problems. Often, however, risk analysis methods are both informal and ineffective.

Through proper analysis, the test manager should be able to predict the probability of such unfavorable consequences as the following:

  ■ Failure to obtain all, or even any, of the expected benefits

  ■ Cost and schedule overruns

  ■ An inadequate system of internal control

  ■ Technical performance of the resulting system significantly below the estimate

  ■ Incompatibility of the system with the selected hardware and software

The following reviews the various methods used for risk analysis and the dimensions of computer risk, and then describes the various approaches to assigning risk priorities. There are three methods of performing risk analysis.

Method 1: Judgment and Instinct

This method of determining how much testing to perform enables the tester to compare the project with past projects to estimate the magnitude of the risk. Although this method can be effective, the knowledge and experience it relies on are not transferable but must be learned over time.

Method 2: Dollar Estimation

Risk is the probability of incurring loss. That probability is expressed through this formula:

(Frequency of occurrence) × (loss per occurrence) = (annual loss expectation)

Business risk based on this formula can be quantified in dollars. Often, however, the concept, not the formula, is used to estimate how many dollars might be involved if problems were to occur. The disadvantages of projecting risks in dollars are that such numbers (i.e., frequency of occurrence and loss per occurrence) are difficult to estimate and the method implies a greater degree of precision than may be realistic.
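The concept can be sketched in a few lines of Python; the frequency and loss figures below are hypothetical, chosen only to illustrate the arithmetic:

```python
def annual_loss_expectation(frequency_per_year, loss_per_occurrence):
    """(Frequency of occurrence) x (loss per occurrence) = annual loss expectation."""
    return frequency_per_year * loss_per_occurrence

# Hypothetical example: a failure class expected 4 times a year,
# costing $25,000 per occurrence.
ale = annual_loss_expectation(4, 25_000)
print(ale)  # 100000
```

As the text notes, the difficulty lies in estimating the two inputs, not in performing the multiplication.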

Method 3: Identifying and Weighting Risk Attributes

Experience has demonstrated that the major attributes causing potential risks are the project size, experience with the technology, and project structure. The larger the project is in dollar expense, staffing levels, elapsed time, and number of departments affected, the greater the risk.

Because of the greater likelihood of unexpected technical problems, project risk increases as the project team’s familiarity with the hardware, operating systems, database, and application languages decreases. A project that has a slight risk for a leading-edge, large systems development department may carry a very high risk for a smaller, less technically advanced group. The latter group, however, can reduce its risk by purchasing outside skills for an undertaking that involves a technology in general commercial use.

In highly structured projects, the nature of the task defines the output completely, from the beginning. Such output is fixed during the life of the project. These projects carry much less risk than those whose output is more subject to the manager’s judgment and changes.

The relationship among these attributes can be determined through weighting, and the testing manager can use weighted scores to rank application systems according to their risk. For example, this method can show that application A is a higher risk than application B.

Risk assessment is applied by first weighting the individual risk attributes. For example, if an attribute is twice as important as another, it can be multiplied by a weight of two. The resulting score is compared with other scores developed for the same development organization and is used to determine a relative risk measurement among applications, not an absolute measure.

Table 13.2 compares three projects using the weighted risk attribute method. Project size has a weight factor of 2, experience with technology has a weight factor of 3, and project structure has a weight factor of 1. When each project’s raw scores are multiplied by the corresponding weight factors, it becomes clear that project A has the highest risk.

Table 13.2   Identifying and Weighting Risk Attributes

                                   Project A          Project B          Project C
Weighting Factor                   (Score × Weight)   (Score × Weight)   (Score × Weight)

Project size (2)                   5 × 2 = 10         3 × 2 = 6          2 × 2 = 4
Experience with technology (3)     7 × 3 = 21         1 × 3 = 3          5 × 3 = 15
Project structure (1)              4 × 1 = 4          6 × 1 = 6          3 × 1 = 3
Total score                        35                 15                 22
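The scoring in Table 13.2 can be reproduced with a short Python sketch; the attribute names, weights, and raw scores come from the table, while representing projects as dictionaries is simply one convenient encoding:

```python
# Attribute weights from Table 13.2: size = 2, technology experience = 3, structure = 1.
WEIGHTS = {"project size": 2, "experience with technology": 3, "project structure": 1}

# Raw scores for each project, from Table 13.2.
projects = {
    "A": {"project size": 5, "experience with technology": 7, "project structure": 4},
    "B": {"project size": 3, "experience with technology": 1, "project structure": 6},
    "C": {"project size": 2, "experience with technology": 5, "project structure": 3},
}

def weighted_score(scores):
    """Total risk score: sum of (raw score x weight) over all attributes."""
    return sum(scores[attr] * weight for attr, weight in WEIGHTS.items())

totals = {name: weighted_score(scores) for name, scores in projects.items()}
print(totals)  # {'A': 35, 'B': 15, 'C': 22}

# Rank applications by relative risk, highest total first.
ranking = sorted(totals, key=totals.get, reverse=True)
print(ranking)  # ['A', 'C', 'B']
```

As the text cautions, the totals are meaningful only relative to one another within the same development organization, not as absolute risk measures.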

Information gathered during risk analysis can be used to allocate test resources to test application systems. For example, high-risk applications should receive extensive testing; medium-risk systems, less testing; and low-risk systems, minimal testing. The area of testing can be selected on the basis of high-risk characteristics. For example, if computer technology is a high-risk characteristic, the testing manager may want to spend more time testing how effectively the development team is using that technology.

Step 3: Summarize the Findings

Task 1: Summarize the Interview

After the interview is completed, the interviewer should review the agenda and outline the main conclusions. If a follow-up session is needed, one should be scheduled at this point while the members are present.

Typically, the notes taken during the interview are unstructured and hard to follow for anyone except the note taker; at a minimum, however, they should follow the agenda. After the interview concludes, the scribe should formalize the notes into a summary report. The goal is to make the results of the session as clear as possible for quality assurance and the development organization, although the interview leader may have to expand on the material or elaborate on certain areas. (See Appendix E16, “Minutes of the Meeting,” which can be used to document the results and follow-up actions for the project information-gathering session.)

Task 2: Confirm the Interview Findings

The purpose of this task is to bring about agreement between the interviewer and the development organization and to ensure a common understanding of the project. After the interview notes are formalized, the summary report should be distributed to the other members who attended the interview, with a sincere invitation for their comments or suggestions. The interviewer should then actively follow up on agreements and disagreements, and any resulting changes should be implemented. Once there is full agreement, the interviewer should provide a final copy of the summary report to all participants.
