6 Test Project Evaluation and Reporting

Keywords: None

6.1 Introduction

Learning objectives

No learning objectives for this section.

In Chapter 5, we used a scenario of planning a summer vacation as a way to introduce the project management topics of planning, risk management, and retrospectives. Following with this analogy, once the vacation has begun, it’s important to track and evaluate several areas to help ensure that our vacation is successful. For example, from a project perspective, we’d need to determine our budget beforehand and, throughout the trip (without getting too exacting) we could do periodic variance analysis. This analysis could compare our actual expenses to date against both our overall budget and budget to date. We could report to our family where we are from a vacation expense perspective and, if we were way over budget, we could discuss ways to help us get back on budget (e.g., we might need to rely on tuna sandwiches over filet mignon for dinner for the rest of the trip). Also, to get to our vacation spot, we could consider comparing our current traveling time against our overall travel plans. If it takes, for instance, six hours of driving time, we could take checkpoints to see if we’re on schedule. If not, our family could determine whether corrective action needed to be taken . . . or we’ve just encountered too much traffic on the road and will arrive at our destination late.

This vacation analogy is one of several examples in our everyday lives of tracking a project or any major activity or initiative, taking periodic checkpoints to assess variance from a plan and overall performance. Truly, students are concerned with determining how well they are performing in class via their graded assignments and exams throughout the course and will look to take different tactics to raise their level if below expectations (e.g., studying harder, forming a study group, getting advice from their instructor, etc.). Similarly, employees are encouraged to have periodic performance reviews with their managers to ensure that they are meeting expectations concerning their goals; if not, corrective action can be taken early to get back on course. Lastly, similar to the information a project manager uses to assess project performance, a test manager needs to objectively track a project’s test components in progress, evaluate the data, and take any necessary corrective actions to bring back performance to an acceptable level. This also includes reporting the appropriate data at the correct level to the correct internal and external stakeholders.

In this chapter, we’ll explore some specific testing information that a test manager would be interested in tracking; delve into internal and external reporting to meet a variety of constituents’ needs; discuss test reporting considerations at each of the primary testing stages within a project; and close with an overview of key quality control techniques essential for a test manager. While some people think of test metrics as purely a Waterfall project consideration, interspersed throughout this chapter, we’ll see some Agile-specific metrics and ways to report on progress.

6.2 Tracking Information

Learning objectives

LO 7.2.1

(K4) For a given level of testing determine the most effective metrics to be used to control the project.

As a project progresses through its lifecycle and approaches its testing phase, project stakeholders—in particular the project sponsor, project manager, and test manager—will need to know test progress to gauge the quality of the project. In order to do this, test managers develop metrics and report on such.

The ISTQB defines key reporting terms as below. In Table 6-1, I’ve included two realistic examples for each of the terms defined.

What are the overall purpose and true value of the metrics that you and your test team will develop for projects? In general, your metrics focus on three major areas: the quality of the product being developed; the progress of the project in meeting its goals; and the quality of the test process. Your various testing metrics will help determine the level of quality in the product as it develops in terms of overall and point-in-time defects discovered/fixed/retested/confirmed. Additionally, your metrics will yield information indicating how well the project will meet its key performance indicators, such as completion on schedule, in part by showing the test team’s progress in executing planned test cases. Lastly, metrics can show high or low levels of quality in the test process through lessons learned sessions, held at the end of major milestones, phases, or the project itself in Waterfall projects, or during sprint retrospectives at the end of each sprint in Scrum projects.

Table 6-1 Basic reporting terms and definitions

image

As a test manager, how do you best determine the metrics on which to periodically report? To begin, your metrics should align with and support the overall project objectives as well as specific testing objectives defined in the test policy. These objectives can include proper exit criteria from the test plan. Therefore, each metric should correlate to a key project objective, and you should be able to explain that correlation. For instance, at one company where Jim worked in the project management office (PMO), he was responsible, from a project governance perspective, for generating and distributing reports to project managers and their managers regarding conformance to the standard process and templates. These reports aligned with the overall management objective that all projects needed to follow the PMO-defined project process and use the prescribed templates.

Bear in mind that all metrics should include the following considerations:

  • Objective. Metrics should not be unduly biased to favor one position over another. That is, metrics should never be used to project a false sense of progress when there are clearly issues nor a sense of undue concern when there is none.
  • Informative. In one place Jim worked, senior leadership described reporting on a project as a way of telling a story. In this story, a project is like a journey with a purpose and a well-defined beginning and ending; along the journey, periodic checkpoints help determine how well the project is progressing. These periodic checkpoints are metrics reports and should contain sufficient information (not too little and not too much) to inform and, as necessary, guide the audience in making any necessary decisions on course correction so the project can stay on its journey.
  • Understandable. Although your metrics should be clear and as simple as necessary, given both the information presented and the target audience, each metric should be explained to ensure proper understanding and remove any ambiguities in interpretation. As best as possible, educate your audience in each metric and show the clear alignment of the metric to overall project and/or test objectives. In particular, explain where the metrics could indicate warning signs that either quality or project progress may be jeopardized.
  • Ease of gathering data. Metrics that are extremely difficult or time-consuming to obtain from team members may be inaccurate or simply not provided at all. Such metrics that are cumbersome and arduous to produce on a regular basis may have diminishing returns; surely, if a tester spends even 10 percent of her time (e.g., 4 hours per week) gathering data and producing and distributing management reports, the team could decide on a more efficient (albeit skinnier) set of metrics to better utilize her time by running test cases.

Keeping these considerations in mind will go a long way toward making metrics useful. If appropriate project decisions cannot be made based on reported metrics, the metrics lose their value and usefulness and become an exercise in futility.

One useful approach to consider when developing a small set of metrics is the Goal-Question-Metric (GQM) approach. GQM begins with a set of goals defined by the organization, department, or project; derives questions that characterize how achievement of those goals can be assessed; and then defines metrics, the data collected for each question, so that each question can be answered quantitatively to assess performance.1
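As a rough illustration, the GQM hierarchy can be captured in a simple data structure. The sketch below is in Python; the goal, questions, and metrics shown are invented for illustration and are not prescribed by GQM itself.

```python
# A minimal Goal-Question-Metric sketch; goal, questions, and metrics are hypothetical.
gqm = {
    "goal": "Ensure the release meets its quality targets",
    "questions": [
        {
            "question": "Are we finding and fixing defects fast enough?",
            "metrics": ["defects found per week", "mean defect resolution time (days)"],
        },
        {
            "question": "Is test execution on schedule?",
            "metrics": ["planned vs. actual test cases executed", "blocked test cases"],
        },
    ],
}

for q in gqm["questions"]:
    print(f"{q['question']} -> {', '.join(q['metrics'])}")
```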

Since metrics are generally shared on a periodic (usually weekly) basis, it is highly recommended to consider any tools that can both generate and distribute reports, relieving managers and/or staff of these tedious and repetitive tasks.

Typical testing data that could be tracked for a project include the following (a brief computation sketch follows the list):

  • Test cases designed. Similar to a variance analysis that a project manager performs on projects, including a comparison of budgeted or planned costs versus actual costs, the test manager would track the overall planned test cases designed for a project versus the actual test cases developed.
  • Defect metrics. There are a variety of metrics involving defects, including these:
    • A breakdown by status (e.g., open, fixed, retested, confirmed, closed)
    • A categorization by severity or priority (e.g., number of defects related to high-, medium-, and low-risk tests based on a risk-based testing strategy)
    • A breakdown by area (e.g., number of defects discovered in different functional areas of the product)
    • Trending (e.g., increased number of daily defects discovered over time could indicate increasingly poor product quality and should be investigated)
    • Convergence (e.g., convergence or moving toward a desired goal or criterion, such as the total number of tests executed to the total number of tests planned for execution)
    • Resolution turnaround (e.g., how quickly it takes the development team to resolve defects and the test team to retest and confirm these fixes)
    • Reopen rate (e.g., if the rate that defects are closed and then reopened after testers retest increases over time, this could be indicative of poor code quality perhaps due to misunderstood requirements, design flaws, unskilled/untrained developers, etc.)
    • Rejection rate (e.g., defects rejected because they were raised in error or due to misunderstanding; these represent wasted effort and an opportunity for improvement)
  • Test execution metrics. Similar to defect metrics just addressed, test execution metrics include these:
    • A breakdown by status (e.g., number of tests run, passed, failed, blocked, etc.)
    • Automated versus manual (e.g., number of automated tests run, number of manual tests run)
    • Execution rates (e.g., how quickly tests are run)
    • Convergence (e.g., convergence or moving toward a desired goal or criterion, such as the total number of tests executed to the total number of tests planned for execution)
    • Planned versus actual (e.g., the number of actual tests run compared to the number of planned tests to be run at a point in time to see if test execution is on, ahead of, or behind plan)
    • Number of test cycles (e.g., how many test cycles or passes through the test set)
  • Coverage metrics. The following metrics pertain to understanding the degree to which the test basis has been tested and, if possible, more than one type of coverage metric should be used:
    • Code (e.g., an indication of how many parts of the software have been tested or covered by tests within the test suite, including statement, decision, and condition coverage)
    • Requirements (e.g., the number of requirements that have been tested and, of those, how many are known to have problems)
    • Risks (e.g., the amount of test coverage depending on the level of risk, such as the amount of coverage on high-risk, medium-risk, and low-risk items)
  • Management metrics. The following metrics focus more on test management-related items as opposed to test basis concerns:
    • Resource hours (e.g., the number of planned versus actual resource hours devoted to testing)
    • Setup time (e.g., the planned versus actual time to set up the testing environment, create test data, user accounts, etc.)
    • Downtime incurred (e.g., the expected and unexpected downtime incurred on a project due to issues related to the test environment, test infrastructure (such as servers), etc.)
    • Planned versus actual (e.g., while planned versus actual metrics are covered under defects and test cases execution categories, at the management level, this can include schedule and cost metrics by comparing the plans against the actual either at a point in time or in a cumulative sense at the project level or at the testing stage)
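As a minimal sketch of the arithmetic behind a few of these measures, the following Python snippet derives a defect breakdown by status, a test execution convergence percentage, and a planned-versus-actual gap; the counts are hypothetical.

```python
from collections import Counter

# Hypothetical tracking data for one reporting period.
defect_statuses = ["open", "fixed", "retested", "closed", "open", "closed", "fixed"]
tests_planned_to_date, tests_executed_to_date = 400, 310

status_breakdown = Counter(defect_statuses)                          # breakdown by status
convergence = tests_executed_to_date / tests_planned_to_date * 100   # progress toward the execution goal
execution_gap = tests_planned_to_date - tests_executed_to_date       # planned vs. actual shortfall

print(dict(status_breakdown))
print(f"Convergence: {convergence:.1f}%  Gap: {execution_gap} test cases")
```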

6.2.1 Burn Baby Burn

The Agile methodology often uses burn charts to represent project status over time. For instance, burndown charts are common and typically show story point movement over the course of a sprint or iteration. Similar to the risk burndown chart in Figure 6-4, addressed later, the sprint burndown chart in Figure 6-1 shows the amount of work remaining at the beginning of the sprint and the downward trend over the course of the sprint, with the goal of zero story points remaining by the end of the sprint. Here, we can see that actual progress in completing story points during the sprint varies, with some days falling below plan (e.g., days 2, 3, 7, 8, 9, 10, and 11) and other days progressing above plan (e.g., days 5, 6, 12, and 13); overall, all story points planned for the sprint have been completed. At the end of each sprint, the Scrum master updates the release burndown chart, which, similar to the sprint burndown chart, reflects the progress the team has made in completing story points; the difference is one of scope, as the sprint burndown chart focuses on progress within the sprint while the release burndown chart reflects overall progress at the release level.

image

Figure 6-1 Sprint burndown chart

Burnup charts are also somewhat common on Agile projects. These typically depict functionality, as represented by completed story points, over time (see Figure 6-2). From this perspective, the trend increases over time as more and more functionality (user stories, via story points) is completed. Assuming a straight-line plan in which completed story points are allocated evenly across sprints, we can see that actual progress, although never exceeding the plan within any one sprint, still achieves the goal of 200 story points completed by the final, tenth sprint.

image

Figure 6-2 Functionality burnup chart
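To make the data behind charts like Figures 6-1 and 6-2 concrete, here is a small sketch that derives a burndown series (points remaining) and a burnup series (points completed) from hypothetical daily completions in a 100-story-point sprint.

```python
# Hypothetical sprint: 100 story points planned, with points completed per day below.
total_points = 100
completed_per_day = [8, 6, 7, 12, 14, 9, 5, 11, 13, 15]   # sums to 100

burndown, burnup = [], []
remaining, done = total_points, 0
for completed in completed_per_day:
    done += completed
    remaining -= completed
    burnup.append(done)        # cumulative story points completed (burnup)
    burndown.append(remaining) # story points remaining (burndown)

print("Burndown:", burndown)   # trends toward 0 by the last day
print("Burnup:  ", burnup)     # trends toward total_points
```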

The lists above include metrics that capture information either at a point in time, as a snapshot view, or as trends that show how the information changes over time. Often, point-in-time information can be adequately represented in tabular form or as a histogram, such as the number of defects per status as of yesterday. To effectively see emerging trends over time, such as the number of defects per status per day over a 20-day period, a time-based graph such as a line chart comes in handy. Trends help uncover important information concerning schedule, resource, or quality issues. Obviously, trending information relies on and is derived from detailed information that is tracked accordingly.

In order to properly track this data, the test manager must ensure that the data being reported is correct. Incorrect or invalid data can make the status and interpretation of the data unrealistically positive (resulting in a false sense of security, with no action taken when consideration and a planned course of action may indeed be needed) or unrealistically negative (causing inappropriate and unnecessary action to take place). Either situation would reflect poorly on the test team and leave egg on the face of the test manager.

6.3 Evaluating and Using Information — Internal Reporting

Learning objectives

LO 7.3.1

(K6) Create an effective test results report set for a given project for internal use.

As mentioned at the end of the previous subsection, the accuracy and validity of the reported metric data are critical. When actual values are compared to expected outcomes and variances arise, there will undoubtedly be further investigation and corrective action taken to reduce the variances and bring future actual results in line with planned results. For example, if the plan is to find 100 defects per week but the project is trending at an actual rate of 50 defects per week, management will want to know why. One possible cause is better-than-expected software quality, resulting in fewer defects discovered. Alternatively, the shortfall could occur because testers are unexpectedly out sick or because a downed test environment prevents testers from executing test cases according to plan.

As previously mentioned, it is important to educate your audience in the metrics that you present. That assumes, of course, that you, as test manager, properly understand the information yourself. Understanding this information requires expertise built over time, including the ability to effectively extract or derive the information, interpret it, and research any variances. Again, reporting raw data that is flawed not only is misleading but also will seriously undermine your credibility as a test manager.

After you have verified the reporting information for accuracy, the information is ready to be distributed internally to your project team for review and consideration of any necessary action plans. Accompanying the reporting information should be a clear explanation of what the metrics represent and how to interpret them; this preferably should be done via meetings or conference calls to allow dialogue rather than through a less interactive means such as email. Different project team members will need to see different reports. For instance, management may wish to see high-level testing efficiency reporting of actual versus planned test-case execution. The development team may be interested to see more detailed defect metrics as defects affect the development team; such reporting may be shared internally before releasing to project managers or external team members in order to better understand the reason for the information reflected in this metric report.

The test manager uses internal reports to manage and control the testing aspects of the project. Some of these internal reports include:

  • Daily test design/execution status. This report lists the number of test cases executed on a daily basis and can show meaningful trends over time. The planned test-case execution on a daily basis can likewise be plotted, and the gap, if any, between the planned and actual daily test case execution can be displayed (Figure 6-3).

    The test manager can look at other reports or question her team to help understand any significant gaps between actual and plan. For example, assume that the test manager has determined that the number of test cases to be executed by her team is 50 per day. The team performs as expected on the first of eight days. However, productivity dips a bit on days 2 and 4, and it takes a nosedive beginning on day 6 and lasting through day 8. What could account for this? After researching the possible causes of this dip in productivity, the test manager discovered an inordinate number of defects detected by the test team after the first few days of testing. The testers were then refocused on retesting the test cases related to those defects and thus were not executing their planned, previously unexecuted test cases for the day, which is reflected in their poor performance on this internal report (a small gap-calculation sketch follows Figure 6-3).

image

Figure 6-3 Test execution status report
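The sketch below mirrors the 50-test-cases-per-day example above with hypothetical daily counts, showing how the planned-versus-actual gap behind a report like Figure 6-3 might be computed and flagged; the 20 percent flag threshold is an assumption.

```python
# Hypothetical counts against a plan of 50 executed test cases per day over eight days.
planned_per_day = 50
actual_per_day = [50, 42, 50, 44, 50, 20, 18, 15]   # dips on days 2 and 4, nosedive on days 6-8

for day, actual in enumerate(actual_per_day, start=1):
    gap = planned_per_day - actual
    flag = "  <-- investigate" if gap > 0.2 * planned_per_day else ""
    print(f"Day {day}: planned {planned_per_day}, actual {actual}, gap {gap}{flag}")
```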

6.3.1 The Earlier the Better

One particular reality of many software development projects is that testing, positioned at the tail end of traditional, sequential models, often gets squeezed in terms of time in order to maintain the project schedule. This results in either stressed testers working extra hard and long hours to complete their testing tasks or some testing tasks being sacrificed and not getting done, compromising the quality of the product. If testing could somehow occur earlier than planned on projects, this concern may be reduced or even eliminated. One approach is to intentionally shift test design and execution as early as possible. The Agile practice of test-driven development, in which requirements are turned into test cases first and the code is then written and refactored until the test cases pass, moves testing much earlier into the process with the aim of producing better-quality products.

Additionally, given the general rule that a defect is easier and cheaper to fix the earlier it is detected, the goal is to move test preparation and design, test execution, and overall risk mitigation as close as possible to the start of sprint cycles on a Scrum project. Tests for the user stories planned in a sprint are often not designed and executed on day 1. There is sometimes a lag, and proper tests in support of stories contained within the sprint do not start for one or even several days into the sprint, causing a rush to test toward the end of the sprint cycle. Ideally, the following would happen:

  • Tests for the user stories planned for an upcoming sprint would be designed at the start of, or even prior to, the sprint.
  • Execution of all planned test cases would begin at the start of the sprint in order to obtain the maximum time to test within the timeboxed sprint.
  • Appropriately assessed higher risk user stories are addressed at the start of the sprint.
  • All user stories and risks are covered by the end of the sprint.

In theory, this all makes sense. But how can the team do this in practice, especially when the team may be too occupied with closing activities of one sprint to expend time and energy designing test cases for the next sprint(s)? How can all planned test cases in support of the user stories (especially more important stories from a risk perspective) be executed on day 1 of the sprint before developers have coded to support the stories?

Test-driven development, noted above, can help, as user-story test cases can be executed on day 1 of the sprint since testing precedes actual development with this approach. Additionally, just as some time is allocated for future sprint planning, some time prior to the actual sprint can be used to design the test cases for user stories so, when the actual sprint begins, appropriate test cases are already predesigned before entering the sprint. Lastly, outside of the sprint cycle, the team can assess each user story from a risk perspective. This means that each story would be considered from a risk likelihood and risk impact perspective. Given a numerical assessment scale (e.g., 1 = low likelihood/impact, 2 = average likelihood/impact, 3 = high likelihood/impact), each story is assessed and assigned a risk priority number (RPN). For example, a story with a relatively high-risk likelihood of occurring and high impact if the risk does occur would be assigned an RPN of 9 (3 = high likelihood × 3 = high impact). Those stories with higher RPNs should be planned to be written and tested earlier in the sprint than later since these stories are more important to the product from a risk perspective and the overall mitigation of risks occurs by addressing the largest risks in the sprint first.
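As a small sketch of the RPN calculation and ordering just described, the snippet below rates a few hypothetical user stories on the 1-to-3 likelihood and impact scales and sorts them so the riskiest stories are addressed first.

```python
# Hypothetical user stories rated on a 1 (low) to 3 (high) scale for likelihood and impact.
stories = [
    {"id": "US-101", "likelihood": 3, "impact": 3},
    {"id": "US-102", "likelihood": 1, "impact": 2},
    {"id": "US-103", "likelihood": 2, "impact": 3},
]

for story in stories:
    story["rpn"] = story["likelihood"] * story["impact"]   # risk priority number

# Highest-RPN stories first, so they are written and tested earliest in the sprint.
for story in sorted(stories, key=lambda s: s["rpn"], reverse=True):
    print(story["id"], "RPN =", story["rpn"])
```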

We can measure the effectiveness of this approach of test design, test execution, and risk mitigation as early as possible using metrics.

For example, as the risk burndown chart in Figure 6-4 shows, over the life of the sprint, as each user story is completed (the definition of done being that the test cases associated with the story have been executed and passed), there is a decrease in the overall risk associated with the sprint. For example, assume we begin, for simplicity, with 20 user stories assigned to the sprint, each with an RPN of 9 (high likelihood of occurrence times high impact, or 3 × 3 = 9), giving a total sprint RPN of 180. Day 1 of the sprint reflects this starting point. The second-day value of 162 is the starting point for that day, and the difference from the previous day (180 – 162 = 18) reflects completion of two user stories, since each has an RPN of 9. It helps to note that the sprint begins on a Wednesday (3/1), so the plateau from 3/4 to 3/6 occurs because 3/4 and 3/5 are weekend days with no testing work accomplished, and we begin the week on Monday, 3/6, with a sprint backlog of 126 RPNs. This plateauing also occurs on the second weekend of the two-week sprint (3/11 and 3/12). Stories with the larger RPNs should be planned to be completed toward the start of the sprint to best mitigate risk in case not all stories are completed within the sprint as planned; obviously, in this example, each story has the same RPN, so this prioritization would not make as much sense here, since all 20 user stories were assessed as high-risk items.

image

Figure 6-4 Risk burndown chart
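Here is a minimal sketch of the arithmetic behind a risk burndown like Figure 6-4, based on the worked example above (20 stories, RPN 9 each, a two-week sprint starting Wednesday 3/1). A constant pace of two completed stories per working day and the year 2023 (chosen only so that 3/1 falls on a Wednesday) are assumptions for illustration.

```python
from datetime import date, timedelta

# Worked example: 20 stories, each with an RPN of 9, so the sprint starts at 180.
story_rpn = 9
remaining = 20 * story_rpn
stories_completed_per_workday = 2        # assumed constant pace

start = date(2023, 3, 1)                 # a Wednesday; the year is assumed for illustration
for offset in range(14):                 # two-week sprint
    day = start + timedelta(days=offset)
    print(day.strftime("%m/%d"), "remaining RPN at start of day:", remaining)
    if day.weekday() < 5:                # Monday through Friday only; weekends plateau
        remaining = max(0, remaining - stories_completed_per_workday * story_rpn)
```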

  • Daily defect status. This report shows the various statuses of defects on a daily basis. Figure 6-5 shows the number of defects per status at the end of each day over an eight-day testing period. For example, at the end of day 1, five new defects were created. During day 2, ten defects were retested and the associated test cases were then passed. Since four statuses are being reported, this report can get a little difficult to read as it is, and including an accompanying data table may help with interpretation.

image

Figure 6-5 Daily defect status report

  • Weekly defect trending. As Figure 6-6 shows, the arrival rate of defects per week, with the exception of week 4, shows a trend toward declining defects over time. If an exit criterion for the project were to have at least one week without any new defects identified, then we can see that this criterion was satisfied in week 8 (assuming everything else remained the same and the team executed their planned number of test cases that week). The test manager may question why the number of defects discovered increased in week 4.

image

Figure 6-6 Weekly defect trending report

  • Daily test environment reports and status. This type of report can show a trend over time regarding the availability of the test environment. If, for example, test execution progress is down for a period of time, the test manager can consult this test environment report to see if there were any test environment downtime during the period of slow execution progress. The point is that these internal reports can work in tandem to better understand and interpret information that singular internal reports alone may not reveal.
  • Resource availability. As its name implies, this type of report shows the availability of resources over a period of time. Let’s assume that we have an eight-week testing cycle and we plan on a full staff of 10 testers dedicated to the entire project, including this eight-week cycle. The upper limit shows a target of 10 testing resources; if the full set of 10 testers is available each week, then we have achieved 100 percent testing resource availability. In interpreting the information in Figure 6-7, the project is at full capacity during weeks 1 and 2, but two testers are unexpectedly out sick in week 3. While one of the sick testers returns to work by week 4, the other is out sick for that week as well. By week 5, we are at full capacity again, only to dip to 80 percent capacity in the final week (week 8). This simple metric can be helpful if, for instance, our actual test case execution drops below plan in weeks 3 and 4, as consulting this report can help explain lower productivity due to tester illness. This metric can also apply to the Scrum approach and may shed light on why burndown charts are misaligned with a team’s historical velocity; the team could very well experience less-than-planned velocity in a sprint if team members are unexpectedly out sick.

image

Figure 6-7 Testing resource availability report

While some may think that internal reporting does not need to be as formal as that intended for external consumers, the quality and accuracy of the information is just as essential, especially if management and the project team will make decisions based on the information in these reports. A word to the wise then is to invest the proper amount of time to ensure that the information is sound and to use good reporting tools and methods to make this reporting as easy and as accurate as possible.

6.4 Sharing Information — External Reporting

Learning objectives

LO 7.4.1

(K6) Create an accurate, comprehensible, and audience-appropriate test status dashboard.

LO 7.4.2

(K4) Given a specific project and project team, determine the proper level and media for providing weekly test reports.

Similar to internal reporting, the purpose of external reporting is to inform project stakeholders of the testing status of the project. The audience for external reporting often involves senior leadership or executives far removed from the day-to-day, operational work that the test team and other team members perform. At times, testing status may be a bit different from the status reported by other aspects of the project. This doesn’t mean that one report is more correct than another, but rather that the reporting often comes from different perspectives.

As with any form of communication, external reporting must take into account its audience and that audience’s needs and communicate status appropriately. For instance, the development team may require a detailed defect report to help better understand the level of quality of the software they deliver to the testing team. However, an executive may best be served by a high-level dashboard of testing status on the project. Generally, senior levels in a company require higher-level, trending information rather than the information found in specific detailed reports. The quick visuals and trending information inherent in dashboards allow busy executives to easily interpret the metric data, informing them quickly and accurately of project and, specifically, test status.

Aside from tailoring the appropriate message to the target audience, another factor to consider is the amount and level of information to provide. For example, if the presenter will be available to answer specific questions, then this level of detail does not need to be in the material presented. However, if the presenter does not accompany the material, and reports appear for instance as a dashboard on the company intranet, it is advisable to create a facility to both display summary information and provide access to associated detailed information if the audience wishes to see specific details. Providing both the summary along with the capability to drill down for details adds credibility to the presenter.

External reporting will vary depending on the importance and criticality of the project and the expertise of the audience. However, some common examples of external reports for testing include:

  • Dashboard showing the current testing and defect status. Probably the first items of inquiry for a senior leader consuming an external report are the current testing status (where are we versus where should we be?; are we on target?) and the current defect status (how many defects are there and what are their statuses?). A dashboard that reflects the number of planned test cases to be executed and the actual number of cases executed, as well as a breakdown of defects by status and trends over time, will help the audience gauge the quality of the product.
  • Resource and budget reports. These reports help shed light on the planned and actual test resources assigned to the project and can assist if other reports indicate problem areas. For instance, if the actual test execution status is behind the plan, the resource report could explain this as several testers may have been unexpectedly out sick, setting the test team behind in its planned test execution. Regarding budget reports, again variance analysis can be used to show any major deviations of actual financial expenditures beyond the budgeted and planned expenditures, especially in terms of testing assets, environments and infrastructure, etc.
  • Risk mitigation chart. Senior leaders often wish to see how the project mitigates risk. One simple way is to chart a grid of risks considering each risk’s likelihood of occurring as well as overall impact if it does occur. Given a risk-based testing strategy, test cases in the high category (Figure 6-8), which are considered riskier test cases in terms of both high likelihood of occurring (“likely” or “near certain”) with a corresponding high impact (“major” or “critical”) if they do occur, should be planned to be tested first to allow sufficient time earlier in the cycle to fix potential defects. This planning and early execution of priority test cases help mitigate the risk of uncorrected defects adversely impacting product quality.

image

Figure 6-8 Risk mitigation chart

  • Test coverage chart (requirements- or risk-based). This chart provides a way to see which requirements have been tested via appropriate test cases and can be based on requirements ranking or risk ranking (addressed in the previous reporting example of the risk mitigation chart). Just as a traceability matrix ensures that test cases are built to satisfy each requirement, this chart shows the progress in executing test cases that relate to the requirements, indicating how much testing of the requirements has been achieved along with any defects related to those executed test cases.

  • Defect trends. As previously shown in Figure 6-6, metric reporting can include the trends in new defects discovered per day or per week, the daily or weekly number of defects in each status, and other defect information.
  • Milestone achievement. Meeting key project and specifically test milestones is important to communicate to project stakeholders. Milestones denote important points of progress in a project and achieving such gives greater confidence to stakeholders that the project is on track. From a testing perspective, key milestones could include completion of the test suite, completion of necessary test reviews, and completion of testing phases.
  • Project health indicators (red, yellow, green). Project managers generally report the health of a project based on key performance indicators (KPIs). These indicators often relate to meeting defined objectives around cost, schedule, scope, quality, and customer satisfaction. Each KPI’s actual information can be assessed against its plan and, given any threshold allowance, a health indicator can be determined. Figure 6-9 reflects an example of this: the project is doing fine in terms of project scope, product quality, and overall customer satisfaction but is falling short on project cost and, even more so, on project schedule. This concept certainly applies to testing, and health can be assessed on achievement of milestones (although this might be packaged along with the overall project schedule KPI) or variance from specific testing targets and objectives. For instance, if the actual defects detected exceed the planned defects by more than 25 percent, the test manager can provide a yellow health assessment (a small sketch of such a variance-to-color mapping follows this list). Generally, a green assessment indicates that variance analyses show actuals performing against plans within a given threshold; a yellow assessment means that, although actuals are beyond plans, the team is confident that they will meet the established plans through some recovery mechanism; and a red assessment shows trouble in meeting plans such that no current recovery plans are in place and additional help is needed.

image

Figure 6-9 Key performance indicators

  • Predictive metrics. Predictive metrics look at the past as a way to consider what may happen in the future. For example, based on the weekly number of defects and their levels of criticality, it may be possible to predict the proportion of high-criticality defects among total defects in future weeks.
  • “Go/No go” recommendations. Generally, just before release of a product to the production environment, a project manager conducts a meeting with the necessary project stakeholders to assess whether the product is ready to be released. Various questions need to be addressed at this meeting, the answers to which will influence the overall recommendation to move forward with the production deployment. In line with this, the team may address the number of any outstanding defects and their level of criticality to the product’s quality. The team may also consider the number and relative importance of any test cases that have not yet been executed. The test manager and applicable test metrics play an extremely important part in determining whether the product is ready to be deployed, influencing whether the final “go” (release to production) or “no go” (defer the release until some steps are taken to ensure readiness) recommendation is made.
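As an illustration of deriving a red/yellow/green indicator from planned-versus-actual variance, the sketch below uses hypothetical thresholds (10 and 25 percent) and invented KPI values; in practice, each organization defines its own thresholds.

```python
def health_indicator(planned: float, actual: float,
                     yellow_threshold: float = 0.10,
                     red_threshold: float = 0.25) -> str:
    """Map planned-versus-actual variance to a green/yellow/red status.

    Thresholds are illustrative; each organization defines its own allowances.
    """
    variance = abs(actual - planned) / planned
    if variance <= yellow_threshold:
        return "green"
    if variance <= red_threshold:
        return "yellow"
    return "red"

# Hypothetical KPIs as (planned, actual) pairs.
kpis = {"cost ($K)": (100, 112), "schedule (days)": (40, 52), "defects found": (80, 85)}
for name, (planned, actual) in kpis.items():
    print(name, "->", health_indicator(planned, actual))
```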

Similar to internal reporting, external reporting should be done on a regular basis, usually weekly. This helps set audience expectations in terms of anticipating what metric reports they will receive and when, so they can best use that information for their own planning and necessary decision making. As a project manager, Jim has developed and used project-level templates such as a communication plan. Although the communication plan is intended to set expectations between the project manager and the project stakeholders in terms of what, when, and how project status and other communications will be handled, the concept applies here as well. If the test manager develops a similar communication plan (or is involved in influencing the project-level communication plan specifically for test status), this helps set the expectation for all project stakeholders.

Once the frequency and level of communication have been established based on the test manager’s target audience, the test manager may have several options in terms of communication vehicles used to publish and distribute metrics reports. These include:

  • Real-time dashboard. Software tools can display a dashboard of metrics at the appropriate level (usually high level with a capacity to drill down to specifics) and offer real-time updates if the metrics are linked to source testing information, such as defect statuses, test execution statuses, etc.

6.4.1 Kanban Boards

The Agile methodology offers some interesting and unique metrics related to test performance and progress.

One type of performance monitor is a Kanban board. Having its roots in manufacturing and engineering in the 1940s, the Kanban technique is a way to visually see team progress; in fact, the Japanese word “Kanban” means “visual signal” or “card.” Jim was a product owner proxy on a Scrum project and worked with the project’s Scrum master to develop a Kanban board that was used at each daily sprint stand-up meeting. At this meeting, aside from reporting on work completed the previous day, work planned for the day ahead, and any obstacles, each team member also showed movement of user stories across the board from left to right as each story progressed from “to do” through “to be tested” to “done” (see Figure 6-10). Since Scrum methods were new for us at that time, this manual method of recording progress in a public way for the team to see served our needs. (This Kanban board served as an information radiator, which is a publicly posted dashboard for others to quickly see and assess progress and can include information such as user-story progress within a sprint.) The Scrum master would then record the changes in status from the physical Kanban board to a tool so overall status and progress at the project level (including completed, current, and future planned sprints) could be reported or analyzed.

image

Figure 6-10 Kanban board
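A Kanban board can be modeled very simply in code. The sketch below represents the columns described above and shown in Figure 6-10 as lists and moves a hypothetical story card across them; real tools add features such as work-in-progress limits and history, which are omitted here.

```python
# Columns modeled as ordered lists of story cards (story ids are hypothetical).
board = {"To Do": ["US-201", "US-202", "US-203"], "To Be Tested": [], "Done": []}

def move(story: str, source: str, target: str) -> None:
    """Move a story card from one column to the next as work progresses."""
    board[source].remove(story)
    board[target].append(story)

move("US-201", "To Do", "To Be Tested")   # coding finished; ready for testing
move("US-201", "To Be Tested", "Done")    # tests passed; story meets its definition of done
print(board)
```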

Jim has broadened the use of this simple but effective means of reporting progress to both present and manage his annual performance goals. Very simply, he wrote his goals on sticky notes and, over time, moved the notes from the “planned” column to the “in-progress” column to the “completed” column. This acted as a way for him to easily show anyone the current progress of his planned goals. Since most goals were due on a quarterly or even annual basis, he didn’t complicate matters by maintaining history or tracking any trends in his progress or lack thereof. However, if necessary, he could have taken snapshots in time (e.g., monthly) to see if there were issues of concern with blocked goals over time. The advantage of this method is that it gave management, fellow colleagues, and (of course) Jim himself a simple way to see his work and his progress toward completing goals.

  • Publishing to a known site on the intranet. One task Jim had was to publish weekly reporting updates to a project collaboration site for other team members to see. While this was certainly not in real time, once the appropriate metrics were developed, the weekly updates did not require too much effort.
  • Emailed out to a distribution list. Another task of Jim’s involved generating monthly and weekly reports, sourced from a project management tool and copied via a nightly job to the company’s data warehouse. The report generation effort was not difficult or time-consuming; however, the cut-and-paste work to extract the information from the report and place it in an email to a specified distribution list was tedious and error-prone. In terms of push versus pull communication strategies, this would be considered a push form of communication, as he sent information to the report consumers; they did not access a website or intranet in order to get status updates whenever they wanted such updates.
  • Included in regular project status meetings. Here, the test manager can periodically supply the project manager with test metrics to be included in the larger project status report. Typically, project managers will include schedule highlights, discuss risks and action items, and entertain updates from each functional area, including testing.
  • Physically posted in an office window. While this may seem a low-tech method of communicating testing metrics, Jim has seen instances where project schedules were posted outside of the project manager’s office or cubicle. Additionally, at one company where the Agile methodology was very popular and well supported, one conference room was dedicated to an Agile project team; during story time, the Scrum master would post user stories along the glass walls of the room, so those outside could easily see many small slips of paper adorning the room. The point is that metrics reports in the form of graphs and charts can certainly be posted outside of the test manager’s office. However, this should be only one of several ways to periodically communicate test status to the project team.

Communication involves not only conveying information from the sender but also understanding of the information by the receiver. External reporting therefore is only effective when the target audience receives and understands the communication well enough to make plans and, if necessary, take action. The reporting must include the proper level of detail to suit the needs of the intended audience. Providing too much information and detail, when not necessary or warranted, can overwhelm the target audience, hampering their overall understanding and possibly discrediting not only the report but also its provider, you.

So, in order to be a successful test manager, one of the primary tasks is to adequately communicate test status to the intended audience at the appropriate level of detail so the audience can make informed decisions.

6.5 Test Results Reporting and Interpretation

Learning objectives

LO 7.5.1

(K6) Given a particular lifecycle, project, and product, design appropriate test results reporting processes and dashboards, including metrics, taking into account the information needs and level of sophistication of the various stakeholders.

LO 7.5.2

(K4) Analyze a given test situation during any part of the test process, including appropriate qualitative interpretation, based on a given test dashboard and set of metrics.

In addition to including different test metrics reports for internal and external reporting, different reports make sense at various phases of the software development lifecycle, specifically in the testing phases.

Planning and Control

Test planning involves identifying and implementing all activities and resources needed to meet both the mission and objectives noted in the test strategy. This could include identifying the features to be tested, the necessary testing tasks, who is assigned to perform each testing task, information on the test environment, entry and exit criteria, and any risks involved along with any mitigation plans.

Test control, an ongoing activity, is similar to performing the variance analyses mentioned previously. Although variance analyses exist at the project management level, comparing budgeted or planned schedule, cost, and scope against reality or actuals at the time, they also can be extended to the testing realm by comparing planned test-case execution or expected defect discovery against actuals. While the test schedule and other monitoring activities and metrics are defined during the planning process, comparison of actual information against these plans occurs during test control in order to measure success. During test planning, traceability is established between the test basis, the test conditions, and other test work products that can be monitored during test control and monitoring. Traceability will help determine the extent to which quality risks are mitigated, requirements are met, and supported configurations work properly; this reporting goes beyond mere test-case status and defect counts.

Typical metrics in these phases include the following, which allow the report consumer to quickly determine significant variance analyses, where actual status deviates from planned status:

  • Requirements or risk coverage. Coverage helps indicate how well the test manager is managing the test effort. This includes comparing actual test execution status against planned status in terms of requirements or risk coverage, with test cases aligned to either requirements (given a requirements-based strategy) or risks (given a risk-based strategy), and with the test cases associated with the most important requirements or highest-risk items to be tested earliest. If less important requirements or lower-risk items are addressed first, the testing approach should be reviewed and corrected. If inadequate risk coverage is occurring, the test manager should meet with the test team, development manager, and project manager to not only understand why but also take measures to bring risk coverage back in line with planned expectations (a coverage calculation sketch follows this list).

    Note that, if formal documentation of the system isn’t available, coverage must still be established and should be based on targets set in collaboration with stakeholders. Given a risk-based testing strategy, this is done in terms of risks. Coverage metrics are still necessary and helpful even if system documentation is absent.

  • Test case development and execution status. As with all other tasks in a project, the development and execution of test cases follow a plan. As the plan is carried out, the team can easily see whether the development of the test cases is behind, on, or ahead of the planned development. Similarly, after the test cases have been crafted, the rate of test case execution can be compared with the planned rate, and any significant variances, as leadership and the management team define significant, should be reported and corrective actions taken to bring the actual in line with the planned.
  • Defect discovery trends. The quality of the software can be gauged depending on the planned versus actual defect discovery trends. If the trend shows discovery of a great number of defects, or a larger proportion of critical defects, at a higher rate than anticipated, this could be indicative of significant quality issues or even a poor estimation job by the testers in not anticipating as many defects as occurred. Defect discovery can also be used as a test management metric. For example, in risk-based testing, the goal is to find most of the high-severity defects early in the test execution period. Then, during test control, you can monitor defect discovery to see if that pattern held. If the pattern does not hold, you can take steps to rectify, as generally there was probably a failure to properly assess quality risks.
  • Planned versus actual hours for testing activities. If not carefully watched by the test manager, the actual hours can far exceed the planned hours for testing activities. The test manager should check her budget and work breakdown structure weekly to see if variations or deviations from plan are beginning to occur. The sooner this is detected and fixed, the better.

    There can also be cases where actual hours are less than planned hours because the test team is not as productive as planned. This could occur for several reasons, including unavailability of the test environment and/or systems, poor quality or missing test data, automation tool anomalies, unavailability of human resources, ill-prepared documentation, or poor quality of incoming code. Regardless of the reason(s), the test manager must devise a plan and take action to rectify the lack of productivity and insufficient test coverage.
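As a sketch of how requirements and risk coverage might be calculated from traceability data, the snippet below computes a simple requirements coverage percentage and one possible risk-weighted coverage figure; the requirements, risk weights, and counts are hypothetical.

```python
# Hypothetical traceability: requirement -> risk weight (RPN), planned tests, executed tests.
requirements = {
    "REQ-1": {"risk": 9, "planned": 10, "executed": 10},
    "REQ-2": {"risk": 6, "planned": 8,  "executed": 4},
    "REQ-3": {"risk": 2, "planned": 5,  "executed": 0},
}

covered = sum(1 for r in requirements.values() if r["executed"] > 0)
requirements_coverage = covered / len(requirements) * 100

total_risk = sum(r["risk"] for r in requirements.values())
addressed_risk = sum(r["risk"] * r["executed"] / r["planned"] for r in requirements.values())
risk_coverage = addressed_risk / total_risk * 100   # one possible risk-weighted view

print(f"Requirements coverage: {requirements_coverage:.0f}%")
print(f"Risk-weighted coverage: {risk_coverage:.0f}%")
```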

Analysis and Design

During the test analysis phase, the team determines what to test based on the test objectives defined in the test policy, strategy, and plans, taking into account stakeholders’ perspectives on project and product success, including factoring in product quality risk analysis. The ISTQB calls “what to test” the test condition. Test analysis sets out to create a set of test conditions for a project using test basis documents and work products such as requirements specifications and, on Agile projects, user stories.

Typical metrics for the test analysis phase include:

  • Number of identified test conditions. The actual number of identified test conditions can be compared to an expected number of conditions based on historical data. For example, prior data may indicate that, for each test basis element such as an individual requirement or quality risk, there should be five test conditions. If, for a particular project, there are fewer test conditions, is this a problem? If you are discovering more than five test conditions, is that necessarily a problem? The answer, of course, is that it depends. The ratio of test conditions per test basis element may differ from the historical data; the important point is that the test manager can substantiate the fewer or additional test conditions based on the characteristics of the product and the needs of the project (a small ratio-check sketch follows this list).
  • Defects found. With an analytical test strategy, where the test team analyzes the test basis to identify test conditions to cover, there is an emphasis on reviewing requirements and analyzing risks. This analysis often uncovers defects in work products such as requirements specifications, design specifications, product plans, project plans, marketing documents, etc. Defects found in these various documents and work products against expectations based on historical data can be reported to management.
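The following sketch shows one way to compare the actual ratio of test conditions per test basis element against a historical expectation; the counts and the historical ratio of five are hypothetical.

```python
# Hypothetical counts for a project versus a historical expectation.
test_basis_elements = 40          # e.g., individual requirements or quality risks
identified_test_conditions = 150
historical_ratio = 5.0            # test conditions per basis element, from past projects

actual_ratio = identified_test_conditions / test_basis_elements
deviation_pct = (actual_ratio - historical_ratio) / historical_ratio * 100

# A deviation is a prompt for investigation and explanation, not automatically a problem.
print(f"Actual ratio: {actual_ratio:.2f} vs. historical {historical_ratio:.2f} ({deviation_pct:+.0f}%)")
```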

The test design phase then determines how to test those test conditions developed during test analysis.

Typical metrics for this test design phase include:

  • Coverage metrics. While coverage was addressed primarily in terms of risk in the planning and control phases, it takes on a much broader context in the test design phase. Here, coverage metrics include coverage of the risk items by the test cases but also coverage of the requirements by those test cases. Other coverage types include quality characteristics as well, such as functional interoperability, security, compliance, and reliability, ensuring that these and other characteristics pertaining to the quality of the product are covered via appropriate test cases. Any area not satisfactorily covered by one or more test cases exposes a coverage gap and therefore a risk, which in and of itself must be assessed according to potential impact and likelihood. An analysis of these gaps or deviations from the plan should be addressed with appropriate corrective actions before moving out of the design phase.
  • Defects found during test design. Defects can be discovered in the various test basis documents such as the requirements document, functional specifications documents, design documents, risk analyses, configuration document, and so on. These defects should then be tracked and reported against their sources and amendments and repairs made to those sources appropriately.
  • Defects found during review cycles. Phase containment indicates the percentage of defects resolved in the same phase in which those defects were introduced. Thus, defects discovered during various reviews, such as requirements reviews, design reviews, etc., and that are fixed before moving on to the next phase increase the quality of the product. If the defect cannot be fixed in the current phase and does escape to the next phase, it certainly will incur additional work in terms of documentation, status reporting, investigating and applying possible workarounds, and so on. However, the time, effort, and cost in managing these defects are still less than that spent detecting a previously unfound defect. The clear indication here is find early, fix early. Generally, defect trending may uncover quality problems if a higher than expected number of defects are found in the review cycle. This may require multiple review cycles in order to achieve an acceptable level of quality before the work products and the project in general can move to the next lifecycle phase.

    On the flip side, if defect trending shows fewer and fewer new defects being discovered and reported each day, assuming there is sufficient test execution and test coverage, this may indicate sufficient quality in the product.

    Lastly, if defect trending shows the largest number of defects found early in the lifecycle, this could very well indicate an efficient test execution process.

Implementation and Execution

The test implementation phase organizes the test cases, finalizes the test data and associated test environments, and creates a test execution schedule. This includes assessing risks, prioritization, the test environment, data dependencies, and constraints in order to devise a workable test execution schedule. This schedule must align with the test objectives, test strategy, and overall test plan.

Typical metrics at this test implementation phase include:

  • Percentages of environments, systems, and data configured and ready for testing. We don’t live in a perfect world and we need to make tradeoffs. Often, we must begin testing without all test environments, systems, and configured data ready to go. This may not necessarily be a bad strategy. However, if the highest-risk items that should be tested earliest require setup, configuration, and data that are just not available, this could hurt and hamper the overall testing effort. Assessing readiness percentages alone isn’t sufficient, so the test manager must determine the impact of less than 100 percent availability along with any schedule impacts or other changes, balancing all variables in order to produce an acceptable testing effort and quality product.

    An example of data configuration readiness is an application requiring 100 configurations (to use round, easy numbers). As each configuration is completed, it contributes 1 percent to the total percentage of data configuration completeness (a readiness-percentage sketch follows this list).

  • Percentage of test data created and used. The availability and accuracy of test data can also impact the order of test execution, resulting in a less-than-optimal testing schedule and test team efficiency. This could result in repeated reuse of the test data that are available, as well as changes to automated tests to reset the data to its initial state after each test case run.

    An example of test data readiness is whether source data can be directly loaded into the test environment. If so, after the data load, there is 100 percent test data availability. However, if the data needs to be massaged, changed, or transformed in any way, the test data is not available until this transformation is complete. Just loading the raw or source data alone does not achieve 100 percent test data availability, since the data is not yet ready for use.

  • Percentage of test cases that are automated and results of automation execution. The test manager must ensure that the test case automation effort is achieving its goals as originally planned and documented. Additionally, the results of the automation effort must be integrated into the overall testing results. While it may seem great that, for instance, the 10 percent of the testing effort covered by test automation detects 50 percent of the defects, the test manager should be skeptical: question the types of defects that automation is finding, check that the automation is indeed performing the correct validation, and determine why manual testing is not uncovering a sufficient number of defects. (The sketch after this list also tallies the percentage of test cases automated.)

    One caveat is not to expect that 100 percent of the regression test suite can or should be automated (for example, some tests are too complex, take too long to automate, or are not run enough to justify automating them).
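As a small illustration of the readiness and automation percentages referenced in the bullets above, the following sketch uses invented counts; the category names and totals are assumptions made only for this example.

```python
# Minimal sketch: implementation-phase readiness and automation percentages.
# Category names, totals, and counts are invented for illustration.

readiness = {
    "test environments": {"ready": 3, "total": 4},
    "test systems": {"ready": 8, "total": 10},
    "data configurations": {"ready": 72, "total": 100},  # each of 100 configs adds 1%
}

for category, counts in readiness.items():
    percent = 100.0 * counts["ready"] / counts["total"]
    print(f"{category}: {percent:.0f}% ready")

total_test_cases = 400
automated_test_cases = 40
print(f"automated test cases: "
      f"{100.0 * automated_test_cases / total_test_cases:.0f}% of the suite")
```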

In the test execution phase, as its name states, the actual execution of tests occurs with appropriate results recorded.

Test execution phase metrics include:

  • Percentage of test conditions and cases covered by test execution and the results of that execution. This is basically coverage again. Obviously, if coverage is not progressing according to plan, unexpected risks may show themselves. For example, a blocked test can prevent bringing to light defects that would result in architectural changes to the system and could adversely affect the performance testing schedule. Additionally, the test manager must monitor test case pass/fail information to determine whether the expected pass rate is being achieved (a simple check of this kind is sketched after this list). If too many test cases are failing, indicating poor product quality, additional test cycles may be needed. Conversely, if too many test cases are passing, the test manager should question whether adequate testing has really occurred, where focus should be applied given the extra time in the schedule, and whether plans need to be modified for future testing cycles.
  • Percentage of test cases that are marked for inclusion in the regression test suite. The number of test cases that should be included in the regression test suite varies from product to product. In general, test cases that are expected to remain constant are good candidates for the regression test suite. These test cases can begin to be evaluated, if not at the design or implementation phase, then definitely during the execution phase.
  • Time spent maintaining automated and manual test cases. The time devoted to maintaining the set of automated and manual test cases may not be trivial on a project. Changing requirements will undoubtedly contribute to the maintenance time beyond the scheduled time to maintain the documentation appropriately. There can be additional maintenance time if the delivered code doesn’t match requirements, the data or environment has changed, the interfacing systems have changed, or due to several other factors. While this maintenance time will be assigned to the project and isn’t really considered productive test time, it may have a sizeable impact on the testing project.
  • Breakdown between types of testing—e.g., black box, white box, experience-based. The test manager needs to determine the risk coverage, defect detection, and test skill usage for each type of testing. After analyzing the information, the test manager may redirect testing resources toward a specific type of testing if that type is uncovering an inordinate number of defects. If, for instance, scripted testing is finding fewer defects, this could be because the software has stabilized. Or, if white box testing is finding fewer defects, the test manager can investigate what code coverage levels are being achieved.
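The pass/fail monitoring mentioned in the first bullet of this list reduces to a few lines of arithmetic. The sketch below is illustrative only; the planned figures, expected pass rate, and results are invented.

```python
# Minimal sketch: compare execution coverage and pass rate against plan.
# Planned figures, expected pass rate, and results are invented.

planned_cases = 200        # test cases planned for this cycle
expected_pass_rate = 0.90  # pass rate expected at this point in the cycle

executed, passed, failed, blocked = 150, 120, 25, 5

coverage = executed / planned_cases
pass_rate = passed / executed

print(f"Execution coverage: {coverage:.0%} of planned test cases")
print(f"Pass rate: {pass_rate:.0%} (expected {expected_pass_rate:.0%})")
print(f"Blocked test cases: {blocked}")

if pass_rate < expected_pass_rate:
    print("Pass rate below expectation: consider whether additional "
          "test cycles or a closer look at product quality is needed.")
```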

Evaluating Exit Criteria and Reporting

At the evaluating exit criteria phase, the test manager relates the test results to the exit criteria. In fact, throughout the project, the test manager is checking test results to ensure steady progress toward meeting the exit criteria. The test manager must consider removing any obstacles that could prevent the project from meeting its exit criteria. For Scrum projects, this includes examining the product, user story completion, and feature sets against their definitions of done.

Although the test manager monitors this progress on a detailed level, she reports to testing stakeholders at a summary level from a total project perspective. At this point, no new metrics are developed or introduced in the testing project.

Metrics that are finalized at this stage, unless the testing phase has been extended, help with process improvement on future projects. These metrics include:

  • Test conditions, test cases, or test specifications executed versus planned, and the final status of each. The test manager can compare actual versus planned test cases executed against assumed exit criteria of 100 percent of test cases executed, 100 percent passing, and 0 percent failed. Some projects do allow a certain number of failed test cases and unresolved defects as appropriate exit criteria, for example permitting only low-severity defects to remain open. Although the stricter criteria may seem stringent, the test team and management can review the results against them and grant a waiver if necessary. (A simple check against such criteria, combined with the variance analysis described below, is sketched after this list.)
  • Number of testing cycles used versus planned. The test manager can compare the number of test cycles used versus the number planned and determine if additional testing cycles could have improved the results.
  • Number and complexity of changes to the software that were introduced, implemented, and tested. Typically, there is an agreement to freeze the code and not introduce any changes to the testing environment in order to offer stability. If too many complex changes have been allowed, this could seriously affect the test results and outcomes.
  • Schedule and budget adherence and variances. These summary metrics are usually considered project management metrics. However, the test manager is responsible for the testing tasks in the schedule as well as the testing budget. Throughout the project, the prudent test manager keeps a careful eye on progress against the schedule as well as on cost via budget-versus-actual expenditure variance analyses. Nonetheless, the test manager must pay particular attention to the variance analyses at test entry, at major testing milestones during test execution, and at test exit. The earlier variances are detected, the more time the test manager and other project stakeholders have to take corrective action and rectify issues. At this point, the test manager provides overall variance analyses for summary reporting.
  • Risk mitigation. The test manager can report any mitigating actions taken during the course of the project to lessen the impact or likelihood of test-related risks. Please reference the Risk section in Chapter 5.
  • Planned versus actual effective testing time. This metric shows productive testing time and may exclude documentation updates, attending project meetings, automating tests, and supplementing the regression test suite. The report can show effective test time for individual testers or the entire test team.
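As a hedged illustration of the exit criteria check and the variance analyses above, the sketch below uses invented criteria, thresholds, and figures; a real project would draw these from its test plan, schedule, and budget.

```python
# Minimal sketch: evaluate simple exit criteria and compute schedule and
# budget variances. All criteria, thresholds, and figures are invented.

results = {
    "test_cases_planned": 200,
    "test_cases_executed": 200,
    "unresolved_high_defects": 0,
    "unresolved_low_defects": 3,
}

exit_criteria_met = (
    results["test_cases_executed"] == results["test_cases_planned"]
    and results["unresolved_high_defects"] == 0   # only low defects may stay open
)

planned_budget, actual_cost = 50_000, 54_000
planned_days, actual_days = 30, 33

budget_variance = actual_cost - planned_budget    # positive means over budget
schedule_variance = actual_days - planned_days    # positive means behind schedule

print(f"Exit criteria met: {exit_criteria_met}")
print(f"Budget variance: {budget_variance:+d} "
      f"({budget_variance / planned_budget:+.0%})")
print(f"Schedule variance: {schedule_variance:+d} days")
```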

Test Closure Activities

Similar to the closure of the entire project, the testing cycle includes a test closure phase that follows the completion of the test execution phase. Not unlike project closure, where the project manager ensures that all tasks on the project schedule are complete, the product is deployed to the customer in the production environment, a retrospective or lessons learned session is conducted, and all necessary project documentation is archived, the test manager and test team embark upon similar activities. From the test manager’s perspective, she would ensure that all test-related tasks are complete and the final work products have been delivered, actively participate in retrospective meetings, and archive the necessary test data, test cases, system configurations, and the like for the project.

Typical metrics in this phase and its associated closure activities include:

  • Percentage of test cases that can be reused on future projects. Reuse is a strong benefit to help other, similar future projects. The test manager and test team can help determine which test work products can be reused not only on future internal projects but also to help the internal support team or external customers. Test cases that can be reused are a better investment in time compared with those used once and then discarded. For example, the test manager can transfer tests and test environments to the team who will handle maintenance testing. Also, the test manager can deliver automated or manual regression test suites to customers if they are integrating her system into a larger system. The extra work to hand off these work products should be factored into the overall project schedule as real work.
  • Percentage of test cases that should be moved to the regression test suite. As discussed during the test execution phase, test case candidates for the regression test suite should be identified as early as possible and then finalized and added to the test suite during the test closure phase. Test cases that are expected to remain constant are good candidates for review for possible automation.
  • Percentage of the defects that have reached the end of their lifecycle. This includes defects that will not be resolved in this project, such as those the team collectively agrees to defer to another project, to accept as a permanent product restriction, or to reclassify as an enhancement request rather than a defect or bug.

As part of test closure activities, the test manager may also look for process improvements. In fact, during retrospective sessions, the facilitator not only asks the team what went well and what didn’t, she also probes for any process improvements, usually born from items that didn’t go as well as planned or expected. The purpose of the retrospective is to help future, similar projects be more successful, since the “boat has already left the dock,” so to speak, for the current project. Additionally, if team members will work on future projects together, retrospectives help uncover gaps in working relationships and can help foster better teamwork on future projects.

For example, if on the current project defects seemed to be clustered around a certain area or functionality, future projects may consider adding rigor to their quality risk analysis, in part by including additional participants. Or, if there is an inordinate number of defects, the team could decide to do additional static testing, such as reviewing code through code inspections, or additional reviews of requirements, specifications, and designs. Also, if overall quality issues were evident, perhaps through a large number of defects, the team could brainstorm on ways to improve quality on future projects via process or tool improvements, or even training or certification to raise the skill level of project stakeholders, including the test team. Additionally, if the team seriously underestimated the amount of productive time to test, the number of test cases, or the number of defects discovered, this could indicate issues in estimating, and management may seek training or other improvement measures. Lastly, if not all test cases have been successfully executed or defects successfully resolved by the project’s end, a retrospective can help determine the reason(s) why, often leading to efficiency suggestions for improvement, a reevaluation of the test strategy, and possibly a reexamination of the exit criteria.

Jim does note that, as a project manager in several different organizations, as solid as intentions are for not repeating prior mistakes or inefficiencies, most projects do not perform the due diligence of reviewing retrospective results from prior projects during the planning stage of the current project. Aside from the overall project perspective, this lack of applying lessons learned can also affect testing specifically, especially during test planning. Sometimes this failure to consult lessons from prior projects is due simply to a lack of discipline; the PMO, or project management office, can help here by intentionally adding lesson reviews as a task during the planning stage of a project. Another reason for neglecting lessons learned is that there simply have not been solid knowledge management tools to capture and classify this information and make it easily accessible. As knowledge bases and technology continue to improve, this should become less of an excuse, making it easier to plan accordingly and prevent history from repeating itself.

Also during the test closure phase, the test manager and test team must decide which test cases and other work products should be retained and reused for efficiency on other projects. The test manager should consider the percentage of overall test cases that can be reused and then register them appropriately in a repository. As with the discussion of project-level retrospectives, there should be an easy way to find the reusable components for future projects; otherwise, the effort to catalog and add work products to a repository is wasted if those artifacts are never found and used.

Additionally, the test team should be considering the number of test cases that should be added to the regression test suite. There should be a careful strategy in managing the size and growth of this test suite as the addition of too many tests could lead to very large testing cycles on projects. One way to tackle this problem is to ease the burden of regression test suite execution by automating as many of these tests as possible.

Lastly, aside from saving valuable test cases for later reuse, the test artifacts should also be archived. These artifacts include test cases, test procedures, test data, final test results, test log files, test status reports, and other documents and work products. These should be placed in a configuration management system for easy access.

6.6Statistical Quality Control Techniques

Learning objectives

LO 7.6.1

(K2) Explain why a test manager would need to understand basic quality control techniques.

Since testing is closely linked to quality control, the test manager should have a firm grasp of the basic statistical quality control techniques, charts, and graphs used to provide indications of testing progress for overall success. Quality control is aligned with what’s been termed the Deming Cycle or Plan, Do, Check, Act (PDCA) Cycle (Figure 6-11). Although introduced by Walter A. Shewhart, it was W. Edwards Deming who popularized this process improvement model. The model is based on the following four iterative steps:

  • Plan – Consider improvements to a process.
  • Do – Implement those process improvements.
  • Check – Evaluate the improvements.
  • Act – Where the improvements fall short of expectations, take corrective action to further improve, which could include appropriate planning for the next iteration.

Figure 6-11 PDCA Cycle

Two things are relevant concerning this model. First, based on its simplicity, it has widespread process improvement application in terms of change management, project management, employee performance management, and, obviously, quality management. Second, the steps in the PDCA Cycle, taken together, have an overarching theme of continuous improvement: a mind-set of ongoing efforts to evaluate processes with the goal of making them more efficient and effective.

Walter Shewhart also contributed one of the seven basic tools of quality: the control chart, which is used to determine whether a process is in statistical control (Figure 6-12).

Figure 6-12 Control limits report

Specific limits, called upper control limits (UCL) and lower control limits (LCL), define the maximum and minimum thresholds. If a process contains measured points beyond these boundary control limits, those points are considered unacceptable variations. This can best be used by the test manager to help determine whether defects discovered during a project’s test phase are within proper control limits or are indeed beyond acceptable thresholds and require additional attention to determine the root cause.
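To make the control limit idea concrete, here is a minimal sketch that assumes the common Shewhart convention of placing the limits three standard deviations from the mean of a stable baseline period; the daily defect counts are invented.

```python
# Minimal sketch: Shewhart-style control limits for daily defect counts.
# Limits use the common 3-sigma convention, derived from a baseline period,
# and later observations are checked against them. All counts are invented.
from statistics import mean, pstdev

baseline = [4, 6, 5, 7, 5, 6, 4, 6, 5, 6]         # defects per day, stable period
recent = {"day 11": 5, "day 12": 14, "day 13": 6}

center = mean(baseline)
sigma = pstdev(baseline)
ucl = center + 3 * sigma               # upper control limit
lcl = max(center - 3 * sigma, 0)       # defect counts cannot fall below zero

print(f"Center line: {center:.1f}, UCL: {ucl:.1f}, LCL: {lcl:.1f}")
for day, count in recent.items():
    if count > ucl or count < lcl:
        print(f"{day}: {count} defects is outside the control limits "
              "and warrants root cause analysis.")
```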

To determine the root cause of issues, another primary quality tool from the set of seven basic quality tools is simply called the cause-and-effect diagram. Also known as the Ishikawa diagram after its creator, Kaoru Ishikawa, as well as the fishbone diagram since completed diagrams resemble the bones of a fish, this tool identifies possible root causes for problems in order to help the team focus on problem resolution. For instance, Jim has used root cause analysis and incorporated the technique in a course on quality and project management. For a typical root cause analysis (RCA) session, the facilitator assembles key people who were involved in or are knowledgeable about the issue, preferably a work group rather than management. After stating the problem, the facilitator highlights a few probable areas, such as people/personnel, machinery/tools, process, and so on. Of course, given the specific situation, more meaningful areas can be included. Then, the facilitator brainstorms with the team, looking to identify possible causes of the issue. A simple technique, called 5 Whys, can be used to generate more information or to get the team thinking about additional possible causes.

For example, if the issue is the reliability of automated testing, one possible area to investigate is the reliability of the automation tool in use (see Figure 6-13). Narrowing in on tool-related issues, the facilitator can ask the team why the tool is unreliable; one answer could be that it runs on an old server that is known to be unreliable. When asked further why the tool is on an old server, one response could be that management would not authorize the tool to be installed on a more reliable server due to cost issues. If it is determined that running the tool on an old, unreliable server is the primary or root cause of the automated test reliability issues, a proposal can be made to migrate the tool to a more reliable server in order to help the success of this and potentially other projects that use the tool.

Figure 6-13 Cause-and-effect diagram
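One lightweight way to capture the result of such a session is to record the fishbone categories and the 5 Whys chain as simple data structures, as in the illustrative sketch below; the content restates the automated-testing example above, and the additional entries are invented placeholders rather than findings from a real session.

```python
# Minimal sketch: recording a cause-and-effect (fishbone) analysis and a
# 5 Whys chain as plain data structures. Content restates the
# automated-testing example in the text; entries marked "invented" are
# illustrative additions only.

problem = "Automated test runs are unreliable"

fishbone = {
    "people/personnel": ["limited experience with the automation tool"],  # invented
    "machinery/tools": ["tool runs on an old server known to be unreliable"],
    "process": ["automation results are not reviewed before reporting"],  # invented
}

five_whys = [
    "Why are the automated runs unreliable? The tool itself behaves unreliably.",
    "Why is the tool unreliable? It is installed on an old, unreliable server.",
    "Why is it on an old server? Management did not authorize a more reliable "
    "server due to cost.",
]

print(problem)
for category, causes in fishbone.items():
    print(f"  {category}: {'; '.join(causes)}")
print("\n".join(five_whys))
```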

Successful test managers use quality tools to help uncover problems as well as make process improvements as part of a continuous improvement mind-set to make future projects more and more successful.

6.7Sample Exam Questions

In the following section, you will find sample questions that cover the learning objectives for this chapter. All K5 and K6 learning objectives are covered with one or more essay questions, while each K2, K3, and K4 learning objective is covered with a single multiple choice question. This mirrors the organization of the actual ISTQB exam. The number of the covered learning objective(s) is provided for each question, to aid in traceability. The learning objective number will not be provided on the actual exam.

Criteria for marking essay questions: The content of all of your responses to essay questions will be marked in terms of the accuracy, completeness, and relevance of the ideas expressed. The form of your answer will be evaluated in terms of clarity, organization, correct mechanics (spelling, punctuation, grammar, capitalization), and legibility.

Question 1

LO 7.2.1

In order to properly track and control a project, you need to determine the right metrics to collect and present. Decide which grouping would help you most achieve the goal of proper project tracking.

  1. Test environment, test systems, and test data readiness
  2. Test cases, defects, and coverage metrics
  3. Test resource availability, percentage of test cases that are automated, time spent maintaining automated and manual test cases
  4. The number of areas for improvement discovered during testing lessons learned sessions, the efficiency of archiving testing work products, and the percentage completion of tasks at the end of the project

Question 2

LO 7.4.2

Your project sponsor informs you that, as test manager, you must begin preparing and presenting weekly reports to senior leadership concerning the test progress of your team on this project. Determine which of the report choices is best for this audience.

  1. Prepare a detailed defect report showing lots of information and trending over time since senior leaders like to analyze trends.
  2. For the first few meetings, conduct information sessions with the executive leadership to best understand what information they are looking for.
  3. Present one simple, very high-level metric to indicate overall project test status.
  4. Provide one slide containing a dashboard of no more than four high-level reports that show trends over time.

Question 3

LO 7.5.2

During the test implementation phase, your test lead presents you with the following readiness for testing report (see next page). With the plan to formally begin testing in two days, what would your recommendation be?

  1. Formally begin testing as planned since the percentage readiness components are never 100% complete prior to the start of test.
  2. Accept the release because you have some test resources dedicated to nothing other than environment and data readiness and maintenance.
  3. Since those test cases assessed as the riskiest will be executed earliest, you can begin as planned and worry about stabilization and readiness later.
  4. Unless each of the components reaches close to 90% readiness in two days, do not accept the release into test and continue to monitor readiness progress on a day-by-day basis.

[Figure: Readiness for testing report]

Question 4

LO 7.6.1

A test manager needs to understand basic quality control techniques because

  1. every manager on a project team needs to understand both quality control techniques and project management fundamentals in order to effectively contribute to project success.
  2. testing is closely related to quality control techniques that can help the test manager understand testing progress for overall project success.
  3. test managers may be asked by senior executives about quality control techniques during project metrics presentations.
  4. doing so will make her more credible with the management team.

Question 5

LO 7.3.1

Scenario 4: Set of Core Metrics

As test manager on a project to develop a new website for your global company, you need to create a set of core metrics for internal use by your project stakeholders.

Please list and describe the primary reports you would include, noting the main data points, reporting frequency, and as much information as would be helpful for your test team to gather the data and prepare the reports for you.

Question 6

LO 7.4.1

Scenario 5: Dashboard of Test Status Metrics

You are the test manager on a project to launch a new companywide website.

To meet senior leadership’s reporting expectations concerning the test status of the project, you must develop a dashboard of test status metrics that is accurate, comprehensible, and appropriate for your audience.

Please describe what would be included in this dashboard in terms of level of detail, taking into account your target audience, access methods for the report, frequency of report updates, etc.

Question 7

LO 7.5.1

Scenario 6: Processes and Dashboards

You are a test manager and you and your team have been assigned to a software development project to create a companywide portal.

Senior leadership expects you to design the necessary reporting processes and build dashboards with the appropriate test metrics for them.

For each of the five major testing phases of:

– planning and control,

– analysis and design,

– implementation and execution,

– evaluating exit criteria and reporting, and

– closure

select two metrics and explain why you feel they are important.

For each report, be sure to take into account the specific information needs and level of sophistication of your reports’ target audience.

1. www.cs.umd.edu/~mvz/handouts/gqm.pdf
