Keywords: None
Learning objectives
No learning objectives for this section.
In Chapter 5, we used a scenario of planning a summer vacation as a way to introduce the project management topics of planning, risk management, and retrospectives. Continuing with this analogy, once the vacation has begun, it’s important to track and evaluate several areas to help ensure that our vacation is successful. For example, from a project perspective, we’d need to determine our budget beforehand and, throughout the trip (without getting too exacting), we could do periodic variance analysis. This analysis could compare our actual expenses to date against both our overall budget and our budget to date. We could report to our family where we are from a vacation expense perspective and, if we were way over budget, we could discuss ways to help us get back on budget (e.g., we might need to rely on tuna sandwiches over filet mignon for dinner for the rest of the trip). Also, while traveling to our vacation spot, we could compare our actual travel time against our overall travel plan. If the drive takes, for instance, six hours, we could take checkpoints along the way to see if we’re on schedule. If not, our family could determine whether corrective action needs to be taken . . . or whether we’ve simply encountered too much traffic on the road and will arrive at our destination late.
This vacation analogy is one of several examples in our everyday lives of tracking a project or any major activity or initiative, taking periodic checkpoints to assess variance from a plan and overall performance. For instance, students are concerned with determining how well they are performing in class via their graded assignments and exams throughout the course, and they will try different tactics to raise their level if it falls below expectations (e.g., studying harder, forming a study group, getting advice from their instructor, etc.). Similarly, employees are encouraged to have periodic performance reviews with their managers to ensure that they are meeting expectations concerning their goals; if not, corrective action can be taken early to get back on course. Lastly, similar to the information a project manager uses to assess project performance, a test manager needs to objectively track a project’s test components in progress, evaluate the data, and take any necessary corrective actions to bring performance back to an acceptable level. This also includes reporting the appropriate data at the correct level to the correct internal and external stakeholders.
In this chapter, we’ll explore some specific testing information that a test manager would be interested in tracking; delve into internal and external reporting to meet a variety of constituents’ needs; discuss test reporting considerations at each of the primary testing stages within a project; and close with an overview of key quality control techniques essential for a test manager. While some people think of test metrics as purely a Waterfall project consideration, interspersed throughout this chapter, we’ll see some Agile-specific metrics and ways to report on progress.
Learning objectives
LO 7.2.1 (K4) For a given level of testing, determine the most effective metrics to be used to control the project.
As a project progresses through its lifecycle and approaches its testing phase, project stakeholders—in particular the project sponsor, project manager, and test manager—will need to know test progress to gauge the quality of the project. To provide this visibility, test managers develop metrics and report on them.
The ISTQB defines key reporting terms as below. In Table 6-1, I’ve included two realistic examples for each of the terms defined.
What are the overall purpose and true value of the metrics that you and your test team will develop for projects? In general, your metrics focus on three major areas: the quality of the product being developed; the progress of the project in meeting its goals; and the quality of the test process. Your various testing metrics will help determine the level of quality in the product as it develops in terms of overall and point-in-time defects discovered/fixed/retested/confirmed. Additionally, your metrics will yield information indicating how well the project will meet its key performance indicators, such as completion on schedule, in part by showing the test team’s progress in executing planned test cases. Lastly, metrics can show high or low levels of quality in the test process through lessons learned sessions, offered at the end of major milestones or phases or the completion of the project in Waterfall projects or during sprint retrospectives at the end of each sprint in Scrum projects.
As a test manager, how do you best determine the metrics on which to periodically report? To begin, your metrics should align with and support the overall project objectives as well as specific testing objectives defined in the test policy. The objectives can include proper exit criteria from the test plan. Therefore, each metric should correlate to a key project objective, and you should be able to explain that correlation. For instance, at one place Jim worked in the project management office (PMO), from a project governance perspective, he was responsible for generating and distributing reports to project managers and their managers regarding conformance to the standard process and templates. These reports aligned with the overall management objective that all projects needed to follow the PMO-defined project process and use the prescribed templates.
Bear in mind that all metrics should include the following considerations:
Keeping these considerations in mind will go a long way to making metrics useful. If appropriate project decisions cannot be made based on reported metrics, they lose their application and usefulness and become an exercise in futility.
One useful approach to consider when developing a small set of metrics is the Goal-Question-Metric (GQM) approach. GQM begins with a set of goals defined by the organization, department, or project; derives questions that characterize how the achievement of each goal can be assessed; and then identifies the data (metrics) needed to answer each question in a quantitative manner.1
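To make GQM concrete, here is a minimal sketch in Python (the goal, questions, and metrics below are hypothetical examples, not prescribed by GQM itself) showing how each goal fans out into questions and each question into the metrics that answer it quantitatively:

```python
# A minimal, hypothetical GQM breakdown: one goal, the questions that
# characterize its achievement, and the metrics that answer each question.
gqm = {
    "goal": "Ensure the release meets its quality targets",
    "questions": [
        {
            "question": "Are defects being resolved fast enough?",
            "metrics": ["mean days to resolve a defect",
                        "open vs. closed defects per week"],
        },
        {
            "question": "Is test execution on schedule?",
            "metrics": ["planned vs. actual test cases executed per day"],
        },
    ],
}

# Walk the hierarchy goal -> question -> metric.
print(gqm["goal"])
for q in gqm["questions"]:
    print(" ", q["question"])
    for m in q["metrics"]:
        print("   -", m)
```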
Since metrics are generally shared on a periodic (usually weekly) basis, it is highly recommended that you consider tools that can both generate and distribute reports, relieving managers and staff of these tedious and repetitive tasks.
Typical testing data that could be tracked for a project include the following:
The Agile methodology often uses burn charts to represent project status over time. For instance, burndown charts are common and typically show story point movement over the course of a sprint or iteration. Similar to the risk burndown chart in Figure 6-4, addressed later, the sprint burndown chart in Figure 6-1 shows the amount of work remaining at the beginning of the sprint and the downward trend over the course of the sprint, with the goal of zero story points remaining by the end of the sprint. Here, we can see that actual progress in completing story points during the sprint varies, with some days falling behind plan (e.g., days 2, 3, 7, 8, 9, 10, and 11) and other days running ahead of plan (e.g., days 5, 6, 12, and 13); overall, all story points planned for the sprint have been completed. At the end of each sprint, the Scrum master updates the release burndown chart, which, similar to the sprint burndown chart, reflects the progress the team has made in completing story points; the difference is one of scope, as the sprint burndown chart focuses on progress within the sprint while the release burndown chart reflects overall progress at the release level.
Burnup charts are also somewhat common with Agile projects. These typically depict functionality, as represented by completed story points, over time (see Figure 6-2). From this perspective, the trend increases over time as more and more functionality, via the story points of completed user stories, is delivered. Assuming a straight-line plan, with story points allocated evenly across sprints, we can see that actual progress, although never exceeding the plan within any one sprint, achieves the goal of completing 200 story points by the final, tenth sprint.
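To show the arithmetic behind both kinds of burn chart, here is a minimal sketch with made-up daily numbers (not the values from Figures 6-1 and 6-2). Both the burndown series (points remaining) and the burnup series (points completed) are derived from the same record of daily completions:

```python
# Hypothetical sprint: 40 planned story points over 10 working days.
total_points = 40
completed_per_day = [4, 2, 3, 6, 5, 4, 2, 3, 6, 5]  # made-up actuals
days = len(completed_per_day)

done = 0
for day, completed in enumerate(completed_per_day, start=1):
    done += completed
    burnup = done                    # burnup: cumulative points completed
    burndown = total_points - done   # burndown: points remaining
    # The planned line assumes an even allocation of points per day.
    planned_remaining = total_points * (days - day) / days
    status = "ahead of" if burndown < planned_remaining else "behind/on"
    print(f"Day {day:2}: {burnup:2} done, {burndown:2} remaining "
          f"({status} plan)")
```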
The above lists include metrics that capture information both point-in-time, as a snapshot view, and as trends showing how the information changes over time. Often, point-in-time information can be adequately represented in tabular form or as a histogram, such as the number of defects per status as of yesterday. In order to effectively see emerging trends over time, such as the number of defects per status per day over a 20-day period, a time-based graph such as a line or trend chart comes in handy. Trends help uncover important information concerning schedule, resource, or quality issues. Obviously, trending information relies on and is derived from detailed information that is tracked accordingly.
In order to properly track this data, the test manager must ensure that the data being reported is correct. Incorrect or invalid data can make the status and interpretation of the data unrealistically positive (resulting in a false sense of security, with no action taken when consideration and a planned course of action may indeed be needed) or unrealistically negative (causing inappropriate and unnecessary action to take place). Either situation would poorly reflect on the test team and leave egg on the face of the test manager.
Learning objectives
LO 7.3.1 (K6) Create an effective test results report set for a given project for internal use.
As mentioned at the end of the previous subsection, the accuracy and validity of the reported metric data are critical. When actual values are compared to expected outcomes and variances arise, there will undoubtedly be further investigation, and corrective action will be taken to reduce the variances and bring future actual results in line with planned results. For example, if the plan is to find 100 defects per week but the project is trending at an actual rate of 50 defects per week, management will want to know why. One possible cause could be better-than-expected software quality, resulting in fewer defects to discover. The result could also be due to testers being unexpectedly out sick or to a downed test environment, preventing testers from executing test cases according to plan.
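A minimal sketch of this kind of variance analysis (the weekly figures and the 25 percent investigation threshold below are hypothetical) might look like this:

```python
# Hypothetical plan: find 100 defects per week; actuals trend lower.
planned_per_week = 100
actual_per_week = [95, 80, 50, 48]  # made-up weekly actuals

for week, actual in enumerate(actual_per_week, start=1):
    variance = actual - planned_per_week
    pct = variance / planned_per_week * 100
    # Flag large variances for investigation (threshold is arbitrary here).
    flag = "  <-- investigate why" if abs(pct) > 25 else ""
    print(f"Week {week}: planned {planned_per_week}, actual {actual}, "
          f"variance {variance:+d} ({pct:+.0f}%){flag}")
```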
As previously mentioned, it is important to educate your audience on the metrics that you present. That assumes, of course, that you, as test manager, properly understand the information yourself. Understanding this information requires expertise built over time, including the ability to effectively extract or derive the information, interpret it, and research any variances. Again, reporting raw data that is flawed not only is misleading but also will seriously undermine your credibility as a test manager.
After you have verified the reporting information for accuracy, the information is ready to be distributed internally to your project team for review and consideration of any necessary action plans. Accompanying the reporting information should be a clear explanation of what the metrics represent and how to interpret them; this preferably should be done via meetings or conference calls to allow dialogue rather than through a less interactive means such as email. Different project team members will need to see different reports. For instance, management may wish to see high-level testing efficiency reporting of actual versus planned test-case execution. The development team may be interested in more detailed defect metrics, since defects directly affect their work; such reporting may be shared internally before releasing it to project managers or external team members in order to better understand the reason for the information reflected in the metric report.
The test manager uses internal reports to manage and control the testing aspects of the project. Some of these internal reports include:
The test manager can look at other reports or question her team to help understand any significant gaps between actual and planned results. For example, assume that the test manager has determined that the number of test cases to be executed by her team is 50 per day. The team performs as expected on the first of eight days. However, productivity dips a bit on days 2 and 4; productivity takes a nosedive beginning on day 6 and lasting through day 8. What could account for this? After researching the possible causes of this dip in productivity, it was discovered that an inordinate number of defects was detected by this test team after the first few days of testing. The testers then were refocused on retesting the test cases related to the defects and thus were not focused on testing their planned, previously unexecuted test cases for the day, which was reflected in their poor performance on this internal report.
One particular reality of many software development projects is that testing, positioned at the tail end of traditional, sequential models, often gets squeezed in terms of time in order to maintain the project schedule. This results in either stressed testers working extra hard and long hours to complete their testing tasks or some testing tasks being sacrificed and not getting done, compromising the quality of the product. If testing could somehow occur earlier than planned on projects, this concern could be reduced or even eliminated. One approach is to intentionally shift test design and execution as early as possible. Test-driven development, an Agile practice in which requirements are turned into test cases first and the code is then written and refactored until the test cases pass, moves testing much earlier into the process with the aim of producing better-quality products.
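As a minimal illustration of that test-first rhythm (the requirement and function names here are hypothetical), the test case below would be written first and run to fail, and only then would the function be written and refactored until the test passes:

```python
import unittest

# Written second: just enough code to make the tests below pass,
# then refactored as needed while the tests stay green.
def shipping_cost(order_total):
    """Requirement: orders of $50 or more ship free; otherwise $5."""
    return 0.0 if order_total >= 50 else 5.0

# Written first: the requirement turned into executable test cases.
class TestShippingCost(unittest.TestCase):
    def test_free_shipping_at_threshold(self):
        self.assertEqual(shipping_cost(50), 0.0)

    def test_flat_fee_below_threshold(self):
        self.assertEqual(shipping_cost(49.99), 5.0)

if __name__ == "__main__":
    unittest.main()
```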
Additionally, given the general rule that it is easier and cheaper to fix a defect the earlier it is detected, the goal is to move test preparation and design, test execution, and overall risk mitigation as close as possible to the start of sprint cycles on a Scrum project. For the user stories planned in a sprint, tests are not always designed and executed on day 1. There is sometimes a lag, and proper tests in support of stories contained within the sprint do not start for one or even several days into the sprint, causing a rush to test toward the end of the sprint cycle. Ideally, the following would happen:
In theory, this all makes sense. But how can the team do this in practice, especially when the team may be too occupied with closing activities of one sprint to expend time and energy designing test cases for the next sprint(s)? How can all planned test cases in support of the user stories (especially more important stories from a risk perspective) be executed on day 1 of the sprint before developers have coded to support the stories?
Test-driven development, noted above, can help, as user-story test cases can be executed on day 1 of the sprint since testing precedes actual development with this approach. Additionally, just as some time is allocated for future sprint planning, some time prior to the actual sprint can be used to design the test cases for user stories so that, when the sprint begins, appropriate test cases are already in place. Lastly, outside of the sprint cycle, the team can assess each user story from a risk perspective. This means that each story would be considered in terms of risk likelihood and risk impact. Given a numerical assessment scale (e.g., 1 = low likelihood/impact, 2 = average likelihood/impact, 3 = high likelihood/impact), each story is assessed and assigned a risk priority number (RPN), the product of the two ratings. For example, a story with a relatively high risk likelihood of occurring and a high impact if the risk does occur would be assigned an RPN of 9 (3 for high likelihood × 3 for high impact). Those stories with higher RPNs should be planned to be written and tested earlier in the sprint rather than later, since these stories are more important to the product from a risk perspective and overall risk mitigation is best served by addressing the largest risks in the sprint first.
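A minimal sketch of this assessment (the stories and ratings are hypothetical, using the 1-3 scale described above) shows how RPNs determine the order of work within the sprint:

```python
# Hypothetical user stories rated on the 1-3 likelihood/impact scale.
stories = [
    {"story": "Process credit card payment", "likelihood": 3, "impact": 3},
    {"story": "Update profile picture",      "likelihood": 1, "impact": 1},
    {"story": "Search product catalog",      "likelihood": 2, "impact": 3},
]

# RPN = risk likelihood x risk impact, so values range from 1 to 9.
for s in stories:
    s["rpn"] = s["likelihood"] * s["impact"]

# Higher-RPN stories should be written and tested earliest in the sprint.
for s in sorted(stories, key=lambda s: s["rpn"], reverse=True):
    print(f"RPN {s['rpn']}: {s['story']}")
```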
We can measure the effectiveness of this approach of test design, test execution, and risk mitigation as early as possible using metrics.
For example, as the risk burndown chart in Figure 6-4 shows, over the life of the sprint, as each user story is completed (the definition of done being executed and passed test cases associated with the story), there is a decrease in the overall risk associated with the sprint. For example, assume we begin with, for simplicity, 20 user stories assigned to the sprint, each with an RPN of 9 (high likelihood of occurrence times high impact, or 3 × 3 = 9), for a total sprint RPN of 180. Day 1 of the sprint reflects this starting point. The second-day value of 162 is the starting point for that day, and the difference from the previous day (180 – 162 = 18) reflects completion of two user stories, since each has an RPN of 9. It helps to note that the sprint begins on a Wednesday (3/1), so we can see that the plateau from 3/4 to 3/6 occurs because 3/4 and 3/5 are weekend days with no testing work accomplished, and we begin the week on Monday, 3/6, with a sprint backlog of 126 RPNs. This plateauing also occurs on the second weekend of the two-week sprint (3/11 and 3/12). Those stories with the larger RPNs should be planned to be completed toward the start of the sprint to best mitigate risk in case not all stories are completed within the sprint as planned; obviously, in this example, this prioritization would not make as much sense, since all 20 user stories have the same RPN and were assessed as high-risk items.
While some may think that internal reporting does not need to be as formal as that intended for external consumers, the quality and accuracy of the information are just as essential, especially if management and the project team will make decisions based on the information in these reports. A word to the wise, then, is to invest the proper amount of time to ensure that the information is sound and to use good reporting tools and methods to make this reporting as easy and as accurate as possible.
Learning objectives
LO 7.4.1 (K6) Create an accurate, comprehensible, and audience-appropriate test status dashboard.
LO 7.4.2 (K4) Given a specific project and project team, determine the proper level and media for providing weekly test reports.
Similar to internal reporting, the purpose of external reporting is to inform project stakeholders of the testing status of the project. The audience for external reporting often involves senior leadership or executives far removed from the day-to-day, operational work that the test team and other team members perform. At times, testing status may be a bit different from the status reported by other aspects of the project. This doesn’t mean that one report is more correct than another, but rather that the reporting often comes from different perspectives.
As with any form of communication, external reporting must take into account its audience and that audience’s needs, and communicate status appropriately. For instance, the development team may require a detailed defect report to help better understand the level of quality of the software they deliver to the testing team. However, an executive may best be served by a high-level dashboard of testing status on the project. Generally, senior levels in a company require higher-level, trending information rather than the information found in specific, detailed reports. The quick visuals and trending information inherent in dashboards allow busy executives to easily interpret the metric data and stay quickly and accurately informed of project, and specifically test, status.
Aside from tailoring the appropriate message to the target audience, another factor to consider is the amount and level of information to provide. For example, if the presenter will be available to answer specific questions, then this level of detail does not need to be in the material presented. However, if the presenter does not accompany the material, and reports appear, for instance, as a dashboard on the company intranet, it is advisable to create a facility that both displays summary information and provides access to associated detailed information if the audience wishes to see specific details. Providing the summary along with the capability to drill down for details adds credibility to the presenter.
External reporting will vary depending on the importance and criticality of the project and the expertise of the audience. However, some common examples of external reports for testing include:
Test coverage chart (requirements- or risk-based). This chart provides a way to see which requirements have been tested via appropriate test cases and can be based on requirements ranking or risk ranking (addressed in the previous reporting example of the risk mitigation chart). Just as a traceability matrix ensures that test cases are built to satisfy each requirement, this chart shows the progress in executing test cases that relate to the requirements, indicating how much testing of the requirements has been achieved along with any defects related to those executed test cases.
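Behind such a chart sits the traceability data itself; a minimal sketch (with hypothetical requirements and test-case counts) might compute the coverage figures like this:

```python
# Hypothetical traceability data: requirement -> executed/total test cases.
traceability = {
    "REQ-001 Login":    {"executed": 4, "total": 4},
    "REQ-002 Checkout": {"executed": 2, "total": 6},
    "REQ-003 Reports":  {"executed": 0, "total": 3},
}

fully_covered = 0
for req, t in traceability.items():
    pct = t["executed"] / t["total"] * 100
    if t["executed"] == t["total"]:
        fully_covered += 1
    print(f"{req}: {t['executed']}/{t['total']} test cases executed "
          f"({pct:.0f}%)")
print(f"Requirements fully exercised: {fully_covered}/{len(traceability)}")
```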
Similar to internal reporting, external reporting should be done on a regular basis, usually weekly. This helps set audience expectations about what metric reports they will receive and when, so they can best use that information for their own planning and necessary decision making. As a project manager, Jim has developed and used project-level templates such as a communication plan. Although the communication plan is intended to set expectations from the project manager to the project stakeholders in terms of what, when, and how project status and other communications will be handled, the concept applies here as well. If the test manager develops a similar communication plan (or is involved in influencing the project-level communication plan specifically for test status), this helps set the expectation for all project stakeholders.
Once the frequency and level of communication have been established based on the test manager’s target audience, the test manager may have several options in terms of communication vehicles used to publish and distribute metrics reports. These include:
The Agile methodology offers some interesting and unique metrics related to test performance and progress.
One type of performance monitor is a Kanban board. Having its roots in manufacturing and engineering from the 1940s, the Kanban technique is a way to visually see team progress; in fact, the Japanese word “Kanban” means “visual signal” or “card.” Jim was a product owner proxy on a Scrum project and worked with the project’s Scrum master to develop a Kanban board that was used at each daily sprint stand-up meeting. At this meeting, aside from reporting on work completed the previous day, work planned for the day ahead, and any obstacles, each team member also showed movement of user stories across the board from left to right as each story progressed from “to do” through “to be tested” to “done” (see Figure 6-10). Since Scrum methods were new for us at that time, this manual method of recording progress in a public way for the team to see served our needs. (This Kanban board served as an information radiator, which is a publicly posted dashboard for others to quickly see and assess progress and can include information such as user-story progress within a sprint.) The Scrum master would then record the changes in status from the physical Kanban board to a tool so that overall status and progress at the project level (including completed, current, and future planned sprints) could be reported or analyzed.
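A minimal model of such a board (the column names mirror those described above; the stories are hypothetical) is simply an ordered set of columns with story cards moving left to right:

```python
# A minimal Kanban board: ordered columns; stories move left to right.
columns = ["To do", "In progress", "To be tested", "Done"]
board = {col: [] for col in columns}
board["To do"] = ["Story A", "Story B", "Story C"]

def move(story, src, dst):
    """Relocate a story card from one column to another."""
    board[src].remove(story)
    board[dst].append(story)

move("Story A", "To do", "In progress")
move("Story A", "In progress", "To be tested")

# Print the board as a simple, text-only information radiator.
for col in columns:
    print(f"{col:12}: {', '.join(board[col]) or '-'}")
```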
Jim has broadened the use of this simple but effective means of reporting progress to both present and manage his annual performance goals. Very simply, he wrote his goals on sticky notes and, over time, moved the notes from the “planned” column to the “in-progress” column to the “completed” column. This acted as a way for him to easily show anyone the current progress of his planned goals. Since most goals were due on a quarterly or even annual basis, he didn’t complicate matters by maintaining history or tracking trends in his progress or lack thereof. However, if necessary, he could have taken snapshots in time (e.g., monthly) to see if there were issues of concern with blocked goals over time. The advantage of this method is that it gives management, colleagues, and (of course!) Jim himself a simple way to see his work and progress toward completing goals.
Communication involves not only conveying information from the sender but also understanding of the information by the receiver. External reporting therefore is only effective when the target audience receives and understands the communication well enough to make plans and take the necessary action (if any). The reporting must include the proper level of detail to suit the needs of the intended audience. Providing too much information and detail, when not necessary or warranted, can overwhelm the target audience, hampering their overall understanding and possibly discrediting not only the report but also its provider, you.
So, in order to be a successful test manager, one of the primary tasks is to adequately communicate test status to the intended audience at the appropriate level of detail so the audience can make informed decisions.
Learning objectives
LO 7.5.1 (K6) Given a particular lifecycle, project, and product, design appropriate test results reporting processes and dashboards, including metrics, taking into account the information needs and level of sophistication of the various stakeholders.
LO 7.5.2 (K4) Analyze a given test situation during any part of the test process, including appropriate qualitative interpretation, based on a given test dashboard and set of metrics.
In addition to including different test metrics reports for internal and external reporting, different reports make sense at various phases of the software development lifecycle, specifically in the testing phases.
Test planning involves identifying and implementing all activities and resources needed to meet both the mission and objectives noted in the test strategy. This could include identifying the features to be tested, the necessary testing tasks, who is assigned to perform each testing task, information on the test environment, entry and exit criteria, and any risks involved along with any mitigation plans.
Test control, an ongoing activity, is similar to performing the variance analyses mentioned previously. Although variance analyses exist at the project management level, comparing budgeted or planned schedule, cost, and scope against reality or actuals at the time, they can also be extended to the testing realm by comparing planned test-case execution or expected defect discovery against actuals. While the test schedule and other monitoring activities and metrics are defined during the planning process, comparison of actual information against these plans occurs during test control in order to measure success. During test planning, traceability is established between the test basis, the test conditions, and other test work products so that it can be monitored during test control and monitoring. Traceability will help determine the extent to which quality risks are mitigated, requirements are met, and supported configurations work properly; this reporting goes beyond mere test-case status and defect counts.
Typical metrics in these phases include the following, which allow the report consumer to quickly determine significant variance analyses, where actual status deviates from planned status:
Note that, if formal documentation of the system isn’t available, coverage must still be established and should be based on targets set in collaboration with stakeholders. Given a risk-based testing strategy, this is done in terms of risks. Coverage metrics are still necessary and helpful even if system documentation is absent.
There can also be cases where actual hours are less than planned hours because the test team is not as productive as planned. This could occur for several reasons, including unavailability of the test environment and/or systems, poor-quality or missing test data, automation tool anomalies, unavailability of human resources, ill-prepared documentation, or poor quality of incoming code. Regardless of the reason(s), the test manager must devise a plan and take action to rectify the lack of productivity and insufficient test coverage.
During the test analysis phase, the team determines what to test based on the test objectives defined in the test policy, strategy, and plans, taking into account stakeholders’ perspectives on project and product success, including factoring in product quality risk analysis. The ISTQB calls “what to test” the test condition. Test analysis sets out to create a set of test conditions for a project using test basis documents and work products such as requirements specifications and, on Agile projects, user stories.
Typical metrics for the test analysis phase include:
The test design phase then determines how to test those test conditions developed during test analysis.
Typical metrics for this test design phase include:
On the flip side, if defect trending shows fewer and fewer new defects being discovered and reported each day, assuming there is sufficient test execution and test coverage, this may indicate sufficient quality in the product.
Lastly, if defect trending shows the largest number of defects found early in the lifecycle, this could very well indicate an efficient test execution process.
The test implementation phase organizes the test cases, finalizes the test data and associated test environments, and creates a test execution schedule. This includes assessing risks, prioritization, the test environment, data dependencies, and constraints in order to devise a workable test execution schedule. This schedule must align with the test objectives, test strategy, and overall test plan.
Typical metrics at this test implementation phase include:
An example of data configuration readiness is an application requiring 100 configurations (to use round, easy numbers). As each configuration is completed, this contributes 1 percent to the total percentage of data configuration completeness.
An example of test data readiness is if source data can be directly loaded into the test environment. If so, after the data load, there is 100 percent test data availability. However, if the data needs to be massaged, changed, or transformed in any way, test data availability is not achieved until this transformation is complete. Just loading the raw or source data alone does not achieve 100 percent test data availability since the data is not yet ready for use.
One caveat is not to expect that 100 percent of the regression test suite can or should be automated (for example, some tests are too complex, take too long to automate, or are not run enough to justify automating them).
In the test execution phase, as its name states, the actual execution of tests occurs with appropriate results recorded.
Test execution phase metrics include:
At the evaluating exit criteria phase, the test manager relates the test results to the exit criteria. In fact, throughout the project, the test manager is checking test results to ensure steady progress toward meeting the exit criteria. The test manager must consider removing any obstacles that could prevent the project from meeting its exit criteria. For Scrum projects, there is consideration here of examining the product, user story completion, and feature sets against their definitions of done.
Although the test manager monitors this progress on a detailed level, she reports to testing stakeholders at a summary level from a total project perspective. At this point, no new metrics are developed or introduced in the testing project.
Metrics that are finalized at this stage, unless the testing phase has been extended, help with process improvement on future projects. These metrics include:
Similar to the closure of the entire project, the testing cycle includes a test closure phase that follows the completion of the test execution phase. Not unlike project closure, where the project manager ensures that all tasks on the project schedule are complete, the product is deployed to the customer in the production environment, a retrospective or lessons learned session is conducted, and all necessary project documentation is archived, the test manager and test team embark upon similar activities. From the test manager’s perspective, she would ensure that all test-related tasks are complete and the final work products have been delivered, actively participate in retrospective meetings, and archive necessary test data, test cases, system configurations, and the like for the project.
Typical metrics in this phase and its associated closure activities include:
As part of test closure activities, the test manager may also look for process improvements. In fact, during retrospective sessions, the facilitator not only asks the team what went well and what didn’t, she also probes for any process improvements, born usually from items that didn’t go as well as planned or expected. The purpose of the retrospective is to help future, similar projects to be more successful since the “boat has already left the dock,” so to speak, for the current project. Additionally, if team members will work on future projects together, retrospectives help uncover gaps in working relationships and can help foster better teamwork on future projects. For example, if on the current project defects seemed to be clustered around a certain area or functionality, future projects may consider including additional rigor in their quality risk analysis, in part by including additional participants. Or, if there is an inordinate number of defects, the team could decide to do additional static testing, such as code inspections or additional reviews of requirements, specifications, and designs. Also, if there were overall quality issues, shown perhaps through a large number of defects, the team could brainstorm ways to improve quality on future projects via process or tool improvements, or even training or certification to raise the skill level of project stakeholders, including the test team. Additionally, if the team seriously underestimated the amount of productive time to test, the number of test cases, or the number of defects discovered, this could indicate issues in estimating, and management may seek training or other improvement measures. Lastly, if not all test cases have been successfully executed or defects successfully resolved by the project’s end, a retrospective can help determine the reason(s) why, often leading to efficiency suggestions for improvement, a reevaluation of the test strategy, and possibly a reexamination of exit criteria.
Jim does note that, as a project manager in several different organizations, as solid as intentions are for not repeating prior mistakes or inefficiencies, most projects do not perform the due diligence of reviewing retrospective results from prior projects during the planning stage of the current project. Aside from the overall project perspective, this lack of applying lessons learned can also affect testing specifically, especially during test planning. Sometimes this lack of consulting lessons from prior projects is due simply to a lack of discipline; the PMO can help here by intentionally adding lesson reviews as a task during the planning stage of a project. Another reason for neglecting lessons learned is that there just have not been solid knowledge management tools to capture and classify this information and allow easy access. As knowledge bases and technology continue to improve, this should be less of a reason not to plan accordingly and prevent history from repeating itself.
Also during the testing closure phase, the test manager and test team must decide which test cases and other work products should be retained and reused for efficiency on other projects. The test manager should consider the percentage of overall test cases that can be reused and then appropriately register them in a repository. Similar to the discussion on project-level retrospectives, there should be an easy way to find the reusable components for future projects; otherwise, the effort to catalog and add work products to a repository will be useless unless those artifacts are found and used.
Additionally, the test team should consider the number of test cases that should be added to the regression test suite. There should be a careful strategy for managing the size and growth of this test suite, as the addition of too many tests could lead to very long testing cycles on projects. One way to tackle this problem is to ease the burden of regression test suite execution by automating as many of these tests as possible.
Lastly, aside from saving valuable test cases for later reuse, the test artifacts should also be archived. These artifacts include test cases, test procedures, test data, final test results, test log files, test status reports, and other documents and work products. These should be placed in a configuration management system for easy access.
Learning objectives
LO 7.6.1 (K2) Explain why a test manager would need to understand basic quality control techniques.
Since testing is closely linked to quality control, the test manager should have a firm grasp of the basic statistical quality control techniques, charts, and graphs used to provide indications of testing progress for overall success. Quality control is aligned with what’s been termed the Deming Cycle or Plan, Do, Check, Act (PDCA) Cycle (Figure 6-11). Although introduced by Walter A. Shewhart, it was W. Edwards Deming who popularized this process improvement model. The model is based on the following four iterative steps:
Plan – Consider improvements to a process.
Do – Implement those process improvements.
Check – Evaluate the improvements.
Act – Where the improvements fall short of expectations, take corrective action to further improve, which could include appropriate planning for the next iteration.
Two things are relevant concerning this model. First, based on its simplicity, it has widespread process improvement application in terms of change management, project management, employee performance management, and, obviously, quality management. Second, the steps in the PDCA Cycle, taken together, have an overarching theme of continuous improvement. Continuous improvement involves a mind-set of ongoing efforts to evaluate processes with the goal of making them more efficient and effective.
Walter Shewhart also contributed one of the seven basic tools of quality: the control chart, which is used to determine whether a process is in statistical control (Figure 6-12).
Specific limits, called upper control limits (UCL) and lower control limits (LCL), define the maximum and minimum thresholds. If a process contains measured points beyond these boundary control limits, those points are considered unacceptable variations. This can best be used by the test manager to help determine whether defects discovered during a project’s test phase are within proper control limits or are indeed beyond acceptable thresholds and require additional attention to determine the root cause.
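The text above doesn’t prescribe how the limits are computed, but on a classic Shewhart chart they are conventionally set three standard deviations above and below the process mean. A minimal sketch with hypothetical weekly defect counts:

```python
import statistics

# Hypothetical baseline: weekly defect counts from a period when the
# process was known to be stable; the limits are derived from it.
baseline = [42, 38, 45, 40, 44, 39, 41, 43, 40, 42]
mean = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
ucl = mean + 3 * sigma              # upper control limit (3-sigma)
lcl = max(mean - 3 * sigma, 0.0)    # lower control limit, floored at zero
print(f"mean={mean:.1f}, LCL={lcl:.1f}, UCL={ucl:.1f}")

# New weekly counts are then judged against those limits.
for week, count in enumerate([44, 39, 67, 41], start=1):
    if not lcl <= count <= ucl:
        print(f"Week {week}: {count} defects falls outside the control "
              f"limits - investigate the root cause")
```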
To determine the root cause of issues, another primary quality tool from the set of seven basic quality tools is simply called the cause-and-effect diagram. Also known as the Ishikawa diagram after its creator, Kaoru Ishikawa, as well as the fishbone diagram since completed diagrams resemble the bones of a fish, this tool identifies possible root causes for problems in order to help the team focus on problem resolution. For instance, Jim has used root cause analysis and incorporated the technique in a course on quality and project management. For a typical root cause analysis (RCA) session, the facilitator assembles key people who were involved in or are knowledgeable about the issue, preferably a work group rather than management. After stating the problem, the facilitator highlights a few probable areas, such as people/personnel, machinery/tools, process, and so on. Of course, given the specific situation, more meaningful areas can be included. Then, the facilitator brainstorms with the team, looking to identify possible causes of the issue. A simple technique, called 5 Whys, can be used to generate more information or to get the team thinking about additional possible causes.
For example, if the issue is the unreliability of automated testing, one possible area to investigate is the automated tool in use (see Figure 6-13). Narrowing in on tool-related issues, the facilitator can ask the team why the tool is unreliable. One answer could be that it is installed on an old server that is known to be unreliable. When asked further why the tool is on an old server, one response could be that management would not authorize the tool to be installed on a more reliable server due to cost issues. If it is determined that running the tool on an old, unreliable server is the primary or root cause of the automated test reliability issues, a proposal can be made to migrate the tool to a more reliable server in order to help the success of this and potentially other projects that use the tool.
Successful test managers use quality tools to help uncover problems as well as make process improvements as part of a continuous improvement mind-set to make future projects more and more successful.
In the following section, you will find sample questions that cover the learning objectives for this chapter. All K5 and K6 learning objectives are covered with one or more essay questions, while each K2, K3, and K4 learning objective is covered with a single multiple choice question. This mirrors the organization of the actual ISTQB exam. The number of the covered learning objective(s) is provided for each question, to aid in traceability. The learning objective number will not be provided on the actual exam.
Criteria for marking essay questions: The content of all of your responses to essay questions will be marked in terms of the accuracy, completeness, and relevance of the ideas expressed. The form of your answer will be evaluated in terms of clarity, organization, correct mechanics (spelling, punctuation, grammar, capitalization), and legibility.
LO 7.2.1
In order to properly track and control a project, you need to determine the right metrics to collect and present. Decide which grouping would help you most achieve the goal of proper project tracking.
LO 7.4.2
Your project sponsor informs you that, as test manager, you must begin preparing and presenting weekly reports to senior leadership concerning the test progress of your team on this project. Determine which of the report choices is best for this audience.
LO 7.5.2
During the test implementation phase, your test lead presents you with the following readiness for testing report (see next page). With the plan to formally begin testing in two days, what would your recommendation be?
LO 7.6.1
A test manager needs to understand basic quality control techniques because
LO 7.3.1
Scenario 4: Set of Core Metrics
As test manager on a project to develop a new website for your global company, you need to create a set of core metrics for internal use by your project stakeholders.
Please list and describe the primary reports you would include, noting the main data points, reporting frequency, and as much information as would be helpful for your test team to gather the data and prepare the reports for you.
LO 7.4.1
Scenario 5: Dashboard of Test Status Metrics
You are the test manager on a project to launch a new companywide website.
To meet senior leadership’s reporting expectations concerning the test status of the project, you must develop a dashboard of test status metrics that is accurate, comprehensible, and appropriate for your audience.
Please describe what would be included in this dashboard in terms of level of detail, taking into account your target audience, access methods for the report, frequency of report updates, etc.
LO 7.5.1
Scenario 6: Processes and Dashboards
You are a test manager and you and your team have been assigned to a software development project to create a companywide portal.
Senior leadership expects you to design the necessary reporting processes and build dashboards with the appropriate test metrics for them.
For each of the five major testing phases of:
– planning and control,
– analysis and design,
– implementation and execution,
– evaluating exit criteria and reporting, and
– closure
select two metrics and explain why you feel they are important.
For each report, be sure to take into account the specific information needs and level of sophistication of your reports’ target audience.