Keywords: confidence intervals, planning poker
Learning objectives
No learning objectives for this section.
Project management is present in practically everything we do. We like to use the simple example of summer vacations to illustrate this point. First, we decide to take a much-needed vacation. We may consider the feasibility of taking a trip at a very high level, considering time (is time off from work available?), cost (the vacation must be within budget), scope (what do we enjoy doing that fits in the budget and time away from work?), benefits (spending uninterrupted fun time with our families, recharging our batteries), and even some idea of risk (bungee jumping is not our idea of a fun vacation). After these high-level considerations, we begin planning the vacation, including where we want to go, what we want to do, how long we can afford to be away, and so on. After answering these general planning questions, we need to dig into the details and do more specific, in-depth vacation planning of what we need to do to get ready. When the big day arrives, we get into the car and start the vacation, doing all the things and seeing all the sights we so eagerly planned some time before. Finally, we return home exhausted yet refreshed, but the vacation doesn’t truly end, as the memories remain. This is also the time to reflect and consider:
consultative test strategy: Testing driven by the advice and guidance of appropriate experts from outside the test team (e.g., technology experts and/or business domain experts).
planning poker: A consensus-based estimation technique, mostly used to estimate effort or relative size of user stories in Agile software development. It is a variation of the Wideband Delphi method using a deck of cards with values representing the units in which the team estimates.
There is considerable overlap and involvement in what test managers and project managers do on projects. While the project manager has direct ownership, accountability, and responsibility in many areas, the successful test manager will be actively involved in many project management tasks, including the development and overall management of test-related tasks on the schedule, risks, reviews, assessments, and proper documentation.
In this chapter, we’ll cover common project topics of estimation, scheduling, budgeting, risk management, and quality management, distinguishing between areas and topics within the domain of the project manager and the test manager.
Learning objectives
LO 6.2.1 (K6) For a given project, estimate the test effort using at least two of the prescribed estimation methods.
LO 6.2.2 (K6) Use historical data from similar projects to create a model for estimating the number of defects that will be discovered, resolved, and delivered on the current project.
LO 6.2.3 (K5) During the project, evaluate current conditions as part of test control to manage, track, and adjust the test effort over time, including identifying any deviations from the plan and proposing effective measures to resolve those deviations.
LO 6.2.4 (K5) Evaluate the impact of project-wide changes (e.g., in scope, budget, goals, or schedule), and identify the effect of those changes on the test estimate.
LO 6.2.5 (K6) Using historical information from past projects and priorities communicated by project stakeholders, determine the appropriate trade-offs between quality, schedule, budget, and features available on a project.
LO 6.2.6 (K2) Define the role of the test manager in the change management process.
While the test manager primarily manages the testing phase, testing activities, and test team on a project, she must also participate in the overall project management of the project. This includes task estimation, scheduling, budgeting, resource allocation and management, dealing effectively with project trade-offs, change management, risk management, and overall quality management. Each of these areas is considered in greater depth below. It is key that the test manager, like other functional area managers (the development manager, business/systems analysis manager, training manager, etc.), not treat her specific area separately from the overall project; doing so would be greatly detrimental to the success of the project and its outcomes. Rather, the test manager should collaborate and work closely with the project manager and the other functional area managers, as all areas are interdependent and rely on each other to build a quality product and influence the success of the project.
Each functional area must consider the work its team members must do to contribute to the overall success of the project. Since the testing team is one of several functional areas responsible for deliverables, the team must estimate the time and effort involved to complete all tasks related both to the test process and in support of other functional areas. One company Jim worked for required that each functional area lead review, contribute to, and sign off on all applicable project and software development lifecycle (SDLC) documentation. This meant that he, as the project manager on the team, needed to review, comment on, and approve the requirements, design, test, and supporting documentation such as user manuals and training material. While to some this may seem overly rigorous, the benefits extended to the product, the project, and the team members. This collaboration not only promoted an understanding of the overall product and familiarized team members with the requisite format and content of the project deliverables, but also helped build strong working relationships among the team members. On one specific team that used this rigorous approach, after they delivered a successful project, management rewarded them with a ferry ride across the Hudson River where Jim played the role of Mr. Rock and Roll in the on-boat fun and festivities (he has the pictures somewhere to prove it!).
So, estimation of the necessary tasks by each functional area is required to develop an overall project schedule. In particular, the test team needs to assess all of its main test tasks and determine the time and effort necessary to complete these tasks properly. The complexity of the software, along with the quality of the software and documentation delivered to the test team, will influence the test task estimates.
Estimation, like predicting the weather, can sometimes be more of an art than a science (no offense to meteorologists intended). However, here is a list of techniques that can be used to determine the time and effort requirements especially relevant for test implementation and execution efforts:
Brainstorming. This is a popular technique to help a team collaborate, generate ideas, and build on others’ ideas. There are various ways to implement brainstorming, each with pros and cons. These vary from freeform, where participants freely express ideas when they think of them, to round robin, where the facilitator calls on each person in line to contribute ideas. Brainstorming can be extended beyond idea generation to a collaborative session on developing task estimates using an Agile technique known as planning poker.
Planning Using Planning Poker
Jim had the opportunity to work on an Agile project where Planning Poker was used to estimate user stories.
For the history buffs, Planning Poker has its roots in the Delphi family of estimation methods. More specifically, the original Delphi method (named for the oracle at Delphi in ancient Greece, known for giving advice and prophecy, similar to forecasting the future) debuted in the 1960s. It consisted of asking experts in a particular field to individually and privately (no sharing allowed) develop estimates. One drawback to this approach is that, since the experts could not communicate or collaborate on their estimates, they were free to make whatever assumptions were necessary to develop their estimates. Thus, as assumptions varied, the estimates lacked a common foundation and could not always be relied upon. The Wideband Delphi approach improved upon its predecessor by (1) defining a repeatable and consistent series of estimation process steps and, perhaps even more importantly, (2) allowing collaboration among the estimators to discuss and modify the estimates they originally developed independently. Enter Planning Poker, which is founded upon this tradition of Delphi estimation techniques.1
The Planning Poker estimation technique is consensus-based: those with the most experience in the specific area covered by the user story/requirements, or those with the most compelling case, can influence the overall team estimates to reach agreement. Each team member, such as developers, testers, and support staff, has a deck of playing cards with one of the following numbers appearing on the card face: 0, 1, 2, 3, 5, 8, 13, 20, 40, and 100. Several of the initial numbers follow a Fibonacci sequence (for example, the second and third items sum to the fourth item (1 + 2 = 3), and the third and fourth items sum to the fifth item (2 + 3 = 5)). For simplicity, the higher numbers, such as 20, 40, and 100, break this sequence and are used to represent relative sizes (akin to medium, big, and very big effort). The product owner or user representative reads each user story, which is a short statement of the requirement. The team may ask clarifying questions of the product owner to better understand the effort involved. Then each team member makes an individual judgment of the effort, represented by a numbered card, throws her chosen card on the table, and the team notes the similarities of and differences between the choices. If all estimates are the same, that is the estimate for that work item. If, however, the estimates differ, the team members discuss their reasoning and try to convince the others in order to reach a consensus estimate. Those who selected very high or very low numbers are particularly encouraged to share their reasoning. The team then re-estimates, and this process continues until consensus is reached or the item is deferred until more information is obtained to help build consensus.
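As an illustration of the consensus rule just described, here is a minimal sketch of how one Planning Poker round could be scored; the member names, thrown cards, and function name are invented for the example, not taken from any real tool:

```python
# Hypothetical sketch of scoring one Planning Poker round; the member
# names and card values thrown below are invented for illustration.

CARD_VALUES = [0, 1, 2, 3, 5, 8, 13, 20, 40, 100]

def poker_round(estimates):
    """estimates: {team member: card thrown}. Returns ('consensus', value)
    when all cards match, or ('discuss', (low, high)) naming the extreme
    estimates whose owners should explain their reasoning before the team
    re-estimates."""
    for card in estimates.values():
        if card not in CARD_VALUES:
            raise ValueError(f"{card} is not a valid card")
    values = set(estimates.values())
    if len(values) == 1:
        return ("consensus", values.pop())
    return ("discuss", (min(values), max(values)))

# First throw: the estimates differ, so the low and high estimators
# explain their reasoning and the team re-estimates.
print(poker_round({"dev": 5, "tester": 8, "analyst": 5}))   # ('discuss', (5, 8))
# After discussion the team converges on a single value.
print(poker_round({"dev": 5, "tester": 5, "analyst": 5}))   # ('consensus', 5)
```

Real sessions, of course, rely on the discussion between rounds rather than any mechanical rule; the code only captures the stopping condition.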
Once one or more techniques are selected and used, it’s a good idea to ensure that all test estimates include the following:
Aside from general, productive time devoted to test activities, the test manager and test team should be aware of other project time considerations, which in fact affect every functional area. These include time allocated to nonproductive work, such as administrative overhead (e.g., completing time sheets), planned vacations and holidays, and training (although training contributes to the skill set, and thus the value, of testers and their contributions to future projects). Additionally, time devoted to team meetings, although productive, can be classified here as not contributing to actual test task completion. These considerations must be taken into account when test team estimates are developed and, from the broader project perspective, when the project manager considers this input from all functional areas. One problem Jim has personally witnessed involved project leaders noting that work didn’t get done as planned because key resources were on vacation. The point here is that, when the team estimates were developed, neither the project leader nor the individual team members thought to adjust the schedule for planned time off.
After factoring in these various time considerations, it may be helpful to clearly state the number of test hours available to the team. This helps provide a straightforward variance analysis over time, where estimates can easily be compared to the actual time spent completing test tasks.
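The "hours available" arithmetic described above can be sketched as follows; the overhead rate, meeting hours, and team figures are invented assumptions for illustration, not standards:

```python
# Minimal sketch of net available test hours: gross capacity minus planned
# time off, administrative overhead, and recurring meetings. All percentages
# and figures below are invented assumptions for the example.

def available_test_hours(testers, weeks, hours_per_week=40,
                         vacation_hours=0, overhead_rate=0.10,
                         meeting_hours_per_week=4):
    """Return the net hours the team can devote to real test tasks."""
    gross = testers * weeks * hours_per_week
    overhead = gross * overhead_rate                  # time sheets, admin
    meetings = testers * weeks * meeting_hours_per_week
    return gross - vacation_hours - overhead - meetings

# 3 testers over a 4-week period, with 40 hours of planned vacation:
print(available_test_hours(3, 4, vacation_hours=40))   # 344.0
```

Stating this net figure up front makes the later variance analysis straightforward: actual task hours can be compared against a capacity that already excludes nonproductive time.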
There are even more factors to consider when estimating test activities. These include:
Aside from estimates concerning the effort to conduct or execute test cases, the test manager must also consider the time involved in identifying defects; retesting, in terms of regression testing and confirmation testing, used to ensure that defects have been properly fixed by the development team; documenting defect information; and tracking defects for reporting purposes. Depending on the complexity of the functionality, the risk ratings of the requirements, or the relative importance of the functionality, the test manager may assign varying estimates to ensure proper coverage, anticipating defect work. The test manager can also anticipate defects based on the size of the software being created, considering the number of developer hours, lines of code, or a function point analysis, where, for example, a 1:5 defect-to-function-point ratio predicts one defect for every five function points of code. These numbers can be derived from ratio analysis of similar past projects, prior working team relationships (developers and testers working together), and comparable industry averages. Estimating defect work separately from test development and execution tasks allows a more flexible approach and can show whether defects are running higher or lower than anticipated. If a standard effort estimate per defect is used, the actual number of defects can be compared against the anticipated number, indicating the quality of the software and/or the accuracy of the defect prediction for the given functionality and complexity. Keeping defect estimates separate also provides a more reliable gauge of software quality, since defect estimates and actual defect information are not buried in the overall testing effort estimates.
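As a sketch of the ratio-based defect model just described: the one-defect-per-five-function-points ratio comes from the text's example, while the per-defect fix-and-retest effort figure is an invented assumption:

```python
# Hedged sketch of a ratio-based defect prediction. The 1:5 defect-to-
# function-point ratio is the example ratio from the text; the 4 hours of
# retest/confirmation effort per defect is an invented assumption that a
# real team would replace with its own historical figure.

def estimate_defect_effort(function_points, defects_per_fp=1/5,
                           hours_per_defect=4.0):
    """Predict the defect count and the retest/confirmation effort from
    the size of the software in function points."""
    expected_defects = function_points * defects_per_fp
    return expected_defects, expected_defects * hours_per_defect

defects, hours = estimate_defect_effort(250)
print(defects, hours)   # 50.0 predicted defects, 200.0 hours of defect work
```

Tracking this estimate separately from test design and execution lets the team see at a glance when actual defect counts diverge from the model.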
Although Planning Poker (previously explained) is often associated with Agile, from a purist perspective the two are separate and unrelated. In practice, however, Agile approaches often do use Planning Poker and other estimating techniques with a focus on estimating only a smaller effort, specifically the user stories associated with a recent story workshop. It is generally more difficult to estimate and assess risk on a full set of project requirements; Agile focuses the team’s estimation and risk assessment efforts on only those user stories planned for the next few sprints, an effort that is more manageable for the team. Additionally, the risks on the scope within a short iteration will either be realized or discarded by the end of the sprint, and the next sprint can be planned accordingly.
After the various testing estimates are derived, it is important that, as the project moves forward, the project manager track actual performance against the planned estimates; the test manager likewise performs a variance analysis comparing actual time and effort with the estimates. Depending on thresholds set by the organization (that is, what are and are not acceptable variances), if testing or any other functional area within the project is beyond the set boundaries, measures need to be taken to help bring the project back on track, including adjusting resources, negotiating changes to scope and functionality, and/or adjusting timeframes in order to produce a quality product and a successful project.
Generally, once all task estimates are known, the project schedule can be built, as a schedule is nothing more than a way to track who does what, when, and in what order. While there is an overall project schedule defining the necessary tasks according to the SDLC for software projects, the test manager can work with her team to develop the schedule for testing tasks during the various testing phases of the project. Since the testing team, like any functional area within the project, depends on other functional areas to deliver work products and meet milestones, it is important that the testing schedule include these various touch points and highlight deliverables from other areas. The clearer the expectations for the deliverables handed to the testing team, with objectively verifiable criteria to ensure that there is no ambiguity concerning the quality of those deliverables, the smoother the hand-off will be. With clear expectations, the testing team can compare the quality of the deliverables against the objective standards set, thereby either rejecting the deliverables if the quality is not there or assessing the impact of poor-quality deliverables on both the testing team and the overall project. For example, if not all unit tests have been satisfactorily performed by the development team, the decision can be made, given enough schedule and resource availability, to have the development team invest additional time to complete the unit testing before handing the code to the testing team. Alternatively, the testing team can accept the incompletely unit-tested code and conduct additional tests, or at least be aware that there will invariably be a higher number of defects discovered since the quality of the code was not at the level expected at the time of hand-off.
Additionally, as schedules permit, testers can help developers with unit testing, affording testers greater knowledge of the software while allowing developers insight into test design techniques. Obviously, this also builds stronger working relationships and helps to break down functional area barriers that could otherwise be divisive.
There are many commercial project management tools that can be used to develop project schedules, build tasks with assigned resources, track task completion, identify the critical path (the sequence of tasks that must each finish on time for the project to complete on time), and display Gantt charts that illustrate task start and end dates across time. These tools also clearly show task dependencies and the overall effect of a delay in an independent task on the tasks dependent on it. In fact, at one place where Jim worked, the director of project management ensured that all project schedules developed by her project managers had each and every task, other than the lead task, dependent on another task in the schedule; there were no orphaned tasks, but each was interconnected. This ensured that any change in an early task would have the necessary ripple effect on subsequent, dependent tasks. Jim has carried this practice with him and considers it a best practice.
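The dependency "ripple effect" described above can be sketched with a toy scheduler; the task names and durations are invented, and the single-predecessor rule is a simplifying assumption (real tools support multiple predecessors and several dependency types):

```python
# Toy sketch of dependency ripple: every task except the lead task has a
# predecessor, so slipping one task pushes out everything downstream.
# Task names and durations are invented for illustration; real schedules
# allow multiple predecessors per task.

def schedule(tasks):
    """tasks: {name: (duration_days, predecessor_or_None)} -> finish days.
    Assumes each task starts the day its single predecessor finishes."""
    finish = {}
    def finish_day(name):
        if name not in finish:
            duration, pred = tasks[name]
            start = finish_day(pred) if pred else 0
            finish[name] = start + duration
        return finish[name]
    for name in tasks:
        finish_day(name)
    return finish

plan = {"design": (5, None), "code": (10, "design"), "test": (8, "code")}
print(schedule(plan)["test"])   # baseline: testing finishes on day 23
plan["code"] = (13, "design")   # coding slips three days...
print(schedule(plan)["test"])   # ...and testing now finishes on day 26
```

Because each task is chained to a predecessor, the three-day slip in coding propagates automatically, which is exactly the behavior the fully interconnected schedules above were designed to guarantee.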
Additionally, dependencies between functional areas may exist where there aren’t necessarily any formal deliveries. For example, there may be expectations and deliverables to the usability team such as functional software with complete features along with a usability analysis and then deliverables from the usability team such as defect reports and usability suggestions. This should all be clearly documented in the project schedule.
It is almost a given, especially on large projects, that the schedule will change. Normally, project managers take a baseline, similar to software developers freezing code to prevent changes during software builds. A baselined schedule allows the project manager and project team to easily assess changes against the baseline as the project moves forward, as tasks complete and unfinished tasks move out (or even move in) with respect to their planned end dates. A baseline acts as a reference point, a fixed schedule, against which deviations and changes from the planned schedule can be measured. Each organization differs in how much deviation is allowed. At one place Jim worked, a 15 percent variance of the actual end date, either later or (less likely) earlier than the planned end date, would still be considered a successful project, on which bonuses depended. Of course, if the project manager believes that the schedule variance will exceed accepted thresholds, she should consult the project sponsor and perhaps petition for additional project time if there are valid reasons for the delay (e.g., increased scope, key resource unavailability, and so on).
The Agile methodology welcomes change, especially to the test schedule based on what is planned for each iteration. It is good practice to freeze the requirements or user stories or items introduced into a timeboxed iteration or sprint based on the velocity of the team, which is a measure of how much work (often measured in story points or person-hours) or how many user stories the team can complete in each sprint. Testing work estimates of course factor into the work planned for each sprint. This ensures that, at least within the sprint, the work is planned, understood, implemented, and tested. Any items not completely working (that is, tested satisfactorily) in the current sprint are deferred to a future sprint and the team does not receive credit for that work item. Of course, especially based on the software functionality demonstrated to the product owner (user representative) at the end of a sprint, new user stories for new or changed functionality can be added to the backlog or list of work to do and the team, especially the test manager and testing team, need to remain flexible to change. However, as previously mentioned, it is a good practice to freeze the scope of work to which the team commits at the beginning of a sprint.
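The velocity arithmetic described above can be sketched as follows; the sprint history, candidate stories, and function names are invented for the example:

```python
# Sketch of velocity-based sprint planning: average the story points the
# team completed in recent sprints, then pull candidate stories (in
# priority order) until that capacity is reached. All figures below are
# invented for illustration.

def velocity(completed_points_per_sprint):
    """Average story points completed per sprint over recent history."""
    return sum(completed_points_per_sprint) / len(completed_points_per_sprint)

def fits_in_sprint(candidate_story_points, history):
    """Greedily pull stories, in priority order, up to the team's velocity."""
    capacity = velocity(history)
    planned, total = [], 0
    for points in candidate_story_points:
        if total + points <= capacity:
            planned.append(points)
            total += points
    return planned

history = [21, 19, 23]          # points completed in the last three sprints
print(velocity(history))        # 21.0
print(fits_in_sprint([8, 5, 13, 3], history))   # [8, 5, 3]
```

Note how the 13-point story is skipped: it would push the sprint past the team's demonstrated velocity, so it stays on the backlog for a later sprint, consistent with freezing the committed scope at sprint start.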
Project budgets and the assignment and allocation of resources to projects vary between projects and between organizations. A test manager is typically constrained by the number of test team resources at her disposal. With this constraint in mind, her project budgeting exercises are limited to the scope of work assigned to her testing resources. Thus, the scope of work that her team can manage will significantly affect project test schedules. She must adequately plan and manage her test resources across all planned and current projects in order to contribute to the success of each project without overwhelming her team.
Regarding specific budget needs, the test manager must consider:
Expanding on the first two staffing items above, the test manager must consider both the composition (including part-time and full-time permanent internal staff and contractors, as well as the mix of on-premises, offshore, and outsourced team members) and capability (skill levels and experience from junior to senior) of the team. In one organization where Jim worked, when performing pre-project estimates of work, we used a simple template with each functional area’s specialty noted as columns with corresponding rows denoting seniority level, with each level assigned a fully loaded rate (standard rate of pay with overhead costs and allocations included). When a high-level estimate of work was developed, each functional area manager noted the number of estimated hours of a resource at each specific seniority level. This provided a view as to the number of hours and dollars associated with each functional area for the work involved. This served as a first cut when developing project estimates, of course, until a full work breakdown structure was developed where refinements would undoubtedly be made. The test manager should know her team and be aware of which resources can best meet the needs of each feature or each project. The feature estimates can be taken in isolation but, if combined into a project, each functional area manager must consider the resource capacities of their team members to ensure adequate coverage for the features in the project in addition to other concurrent projects and non-project (e.g., ongoing, maintenance) work. This can be a trivial or involved process depending on the dynamics and size of the test organization as well as the number and size of concurrent projects. At times, resources (human and otherwise) may be shared from other functional areas. The team should assess whether this sharing is beneficial or detrimental to the project. 
Jim was a software developer shared for a time by a test team, and at the time this arrangement helped the overall project. However, there are times where the sharing of the same system or environment between the development and test teams should not be leveraged, as this shared environment could adversely affect the quality of the overall system.
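The rate-card template described above (estimated hours per seniority level multiplied by a fully loaded rate) can be sketched as follows; the rates and hours are invented, since real rate cards are organization-specific:

```python
# Sketch of a pre-project cost estimate using fully loaded rates (pay plus
# overhead and allocations) per seniority level. The rates and hour figures
# below are invented assumptions; a real organization supplies its own.

loaded_rates = {"junior": 60, "mid": 90, "senior": 130}   # dollars per hour

def area_cost(estimated_hours):
    """estimated_hours: {seniority level: hours} for one functional area.
    Returns the dollar cost of that area's estimated work."""
    return sum(hours * loaded_rates[level]
               for level, hours in estimated_hours.items())

# High-level estimate for the test team's share of a feature:
test_team = {"junior": 120, "mid": 200, "senior": 80}
print(area_cost(test_team))   # 35600
```

Summing `area_cost` across all functional areas gives the first-cut project figure described above, to be refined once a full work breakdown structure exists.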
One thing that the test manager, and in fact all functional area managers, should remember is that budgeting and resource allocation is not a static exercise but an ongoing endeavor. Shifting project priorities that require reallocating resources, staff resignations, and late-added requirements necessitating additional external resources all contribute to the flexibility, adaptability, and dynamism required of test managers. Test managers must remain vigilant in tracking their budgetary expenditures so that any significant variances can be reported and resolved immediately.
The project manager must work closely with each functional area’s manager in monitoring the project budget. While the budgetary estimates provided by the test manager and the quality of the test work provided by the test team are invaluable, the test team, similar to other functional areas such as software development, systems engineering, business/systems analysis, systems architecture, customer help desk support, documentation, and training, provides initial time-and-effort estimates to the project manager, who then combines, assesses, challenges, but ultimately manages the budget for the project from the project’s beginning to end. While the managers of each functional area, including the test manager, are responsible for their respective area’s budget, the project manager is responsible for overseeing the overall project budget to reduce or eliminate variances from plan as much as possible.
“Failing to plan is planning to fail,” as the old adage goes. In order for a project manager to manage and track a project, there obviously needs to be a plan developed and put into place with the proper metrics and mechanisms to assess at various points whether the project is on or off course. Without a schedule and a focus on the test team’s tasks, how would the test manager know if her team is on track, ahead, or behind? If your family is taking a trip within driving distance of your home, would you plan the trip from home and not consult a map (either hardcopy or electronic)? Even with a map, as you inevitably make a wrong turn, you need the map to note the variance to help bring you back on course. If there is no map for your journey or schedule to guide the project, how would you know if you are on track or derailed?
To Jim, a schedule or plan is perhaps the most important deliverable on a project. Although definition of requirements (so the team knows what is expected at the beginning) and success criteria (so the team knows if the project was successful at the end) are extremely important, it is the plan that describes who does what when for how long in what order and how the team will know when it is done.
A well-managed project then has a well-developed schedule with relevant tasks and clear task ownership. The test team is no exception to this and must have its tasks clearly defined and assigned so the team knows what is expected of them and when it should be done. Specifically for testers, this could include knowing exactly which tests to perform, when to start and complete each test, and the applicable metrics to be produced as a result of that testing. Typically, after the project manager works with each functional area to determine applicable tasks and durations (or estimates), he conducts a kick-off meeting with the project stakeholders. Among other things, this is an opportunity for the highlights and expectations of the project to be communicated to the entire team. After the kick-off, the project manager monitors the project schedule and works closely with the team members through the end of the project, updating and adjusting the schedule of tasks to project (and thus task) completion. The test manager works with the project manager to define the necessary quality goals and to track progress toward those goals. Some trade-offs inevitably occur, such as accepting a feature later than planned, risking incomplete or inadequate test coverage. The test manager needs to understand the risks in this late delivery and adequately communicate that to the project team, especially the project sponsor, so the correct decision can be made.
It is important not only to establish a workable project schedule built from estimates, but likewise to actively manage and monitor it in order to note any variances or deviations from the plan. Variance analysis can apply to budget and cost as well as schedule and timeline variance, and even to scope variance as a measure of scope creep. For example, standard PC tools can be used to establish a baseline schedule, track actual task completion with completion dates, and then compare the actual results against the baselined, planned results. In the case of test tasks, if completion of test cases is taking longer than anticipated, the test manager must determine why. There are various reasons, including unexpected or poorly estimated test case execution duration and tester inexperience. There also may be a greater number of defects discovered than planned in a particular area or module. This could indicate that the requirements were not clearly understood by the development team, the code wasn’t properly unit tested before delivery to the test team, etc. Even if the test team wasn’t responsible for the variance, bringing the variance to the team’s attention can help uncover process issues and the need to create new or improve existing processes. The point is that without a scheduled plan and periodic checks of actual information against that plan, the team would be unable to determine success or would realize issues later than necessary, when it may be too late to course-correct.
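The variance analysis described here can be sketched as a simple baseline-versus-actual comparison; the 15 percent threshold echoes the earlier anecdote, and the task names and hours are invented:

```python
# Minimal sketch of variance analysis: compare actual effort against the
# baselined estimate and flag any task beyond the organization's threshold.
# The 15 percent default and all task data are illustrative assumptions.

def variance_report(baseline, actual, threshold=0.15):
    """baseline/actual: {task: hours}. Returns tasks whose variance, as a
    fraction of the baseline, exceeds the threshold (over or under)."""
    flagged = {}
    for task, planned in baseline.items():
        variance = (actual[task] - planned) / planned
        if abs(variance) > threshold:
            flagged[task] = round(variance, 2)
    return flagged

baseline = {"write cases": 40, "execute cases": 80, "retest": 20}
actual   = {"write cases": 42, "execute cases": 100, "retest": 21}
print(variance_report(baseline, actual))   # {'execute cases': 0.25}
```

Here only test execution is flagged (25 percent over plan), prompting the test manager to ask why: poor estimates, tester inexperience, or more defects than expected in a particular module.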
In Scrum projects, where many variances are due to circumstances within the team, the Scrum master is tasked with helping to remove obstacles that prevent the team from moving forward. Often, these obstacles are raised during the course of daily Scrum meetings, in which each team member briefly discusses the daily accomplishments, plans for the next day, and current obstacles or impediments. The Scrum master then works outside of these meetings to help resolve these obstacles.
There are various test metrics that can be useful for project tracking:
Perfect projects are rare; actually, they don’t exist. Most projects require a tradeoff between quality, schedule, budget, and features. This is best depicted in the project management triangle noted below.
Although variations of this triangle exist, let’s go with this basic model. Every project has constraints, such as limited time, limited cost, and limited scope. The project management triangle depicts this, with time or schedule on one vertex, cost or budget on another vertex, and scope or features on the final vertex. We like one variant of the triangle, as shown in Figure 5-1, which includes quality within the triangle, showing that quality is dependent on the three constraints as well as depicting the effect on quality based on changes to the constraints.
All three project constraints influence the overall quality of the software product. A change to any one constraint affects the other two dependent constraints and can affect the overall quality of the product. For example:
The beauty and simplicity of this model is that the mix of constraints, and any changes to the constraints, affect the overall quality of the software solution. The perennial question is how an acceptable level of quality can be maintained given changes to the schedule, budget, and/or functionality. This challenge always awaits the project team, especially the project manager, who is responsible for managing the project given the three constraints, and the test manager, who must ensure that high quality is preserved despite changes in the constraints. This is why it is imperative that any constraint changes mandated by the project sponsor be immediately discussed with the project manager, who in turn will rely on both the development manager and the test manager for consultation on the true effect of the constraint changes on the project and its goal of meeting its objectives (which often are scope, cost, time, and quality, or producing the functionality within the agreed-upon budget and timeframe while meeting quality expectations).
While the above holds true for projects following a more traditional approach, on Scrum projects time and quality are generally fixed: the schedule is not elastic and does not allow for additional (or fewer) sprints, and quality considerations cannot be compromised. Therefore, the only variable on Scrum projects that can change is scope, or user stories, as stories can be added to or removed from the sprint backlog to properly meet expectations.
Ideally, at the start of a project, the team would learn from the project sponsor which constraint is the most important. Is the sponsor interested in a well-defined set of functionality and features at the expense of a slight variation in schedule and budget? Or is a particular deployment date most important, perhaps to gain the advantage of getting to market before the competition, given minimal changes to functionality and budget? Or is the budget cast in stone, more important than the full set of desired functionality and the overall timeframe? Knowing this important information up front helps the project manager and functional areas better plan the project, understanding what is really of paramount importance to the sponsor.
Unfortunately, this information is not always known at the start of the project, but may eventually come to light somewhere during the course of the project. In fact, the sponsor herself may not know the proper mix of constraints until the project is well under way. This requires the team to be as flexible (dare I say “Agile”?) as possible. This means that, from a testing and overall quality perspective, the test manager must be aware of the interdependencies between project components in order to make decisions on the impact of trade-offs. It is an unfortunate reality that, at times, quality may suffer due to the trade-offs in constraints that are required. For example, it may be determined during the testing phase that key functionality, which is essential to the overall product, was missed at the requirements stage. This may result in a decrease in overall quality, since the test team could not have planned for this additional functionality and may not be able to adequately test given the existing time and cost constraints. The test manager must clearly explain the impact of the reduction in testing time and associated risk to the product quality, noting that this new functionality may result in other areas not receiving adequate or even any testing time.
Change on a project is a given, a constant, something that inevitably will occur if not once then several times on typical projects. Change is so prevalent that the Agile development methodology acknowledges change as one of the key principles in its Agile Manifesto: “Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.”4 The test manager must therefore have a flexible way to quickly understand the change, assess the impact of the change, and adapt to the change accordingly.
Change can occur in areas such as requirements, timeline, and budget (the project management triangle’s triumvirate of scope, schedule, and cost), and overall quality. Risks to projects that can affect the test team include unanticipated issues with test environments, shortening the overall testing time; unavailability of hardware infrastructure, such as a necessary server, hampering the development team’s compatibility testing and resulting in additional testing by the test team; and the like. (Risks are covered more extensively below.) It is important for the test manager to be able to perform an impact analysis in order to truly determine the ramifications of changes to the testing aspects of the project. Ideally, impact analysis is performed more broadly by the entire team and covers all changes, not only those that impact testing. If the project team does not have a process for impact analysis, it behooves the test manager to establish one for her test team, benefiting the overall quality of the product and helping meet project objectives.
The individual impact analysis for each change is part of a larger change management process that tracks, schedules, and assesses each change regarding impact. Given proper impact analysis, non-mandatory changes (such as in what-if scenarios) can be discussed by the team before acceptance, again with overall consideration of the effect the change will have on the project’s outcome. The information that the change management process captures can also be used toward the end of the project (or at each Agile Scrum sprint closeout) during project retrospectives and can serve as useful information for future projects.
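As a rough illustration, the kind of information such a change management process captures can be sketched as a simple change log. This is only a sketch under stated assumptions; the record fields and status values below are hypothetical, not a prescribed format:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ChangeRecord:
    """One entry in a simple change log (all field names are hypothetical)."""
    change_id: str
    description: str
    requested: date
    affected_areas: list          # e.g., ["development", "testing"]
    schedule_impact_days: int = 0
    cost_impact: float = 0.0
    status: str = "proposed"      # proposed -> assessed -> accepted or rejected

def total_schedule_impact(log):
    """Sum the schedule impact (in days) of all accepted changes."""
    return sum(c.schedule_impact_days for c in log if c.status == "accepted")

change_log = [
    ChangeRecord("CR-1", "Add export report", date(2024, 3, 1),
                 ["development", "testing"], schedule_impact_days=5,
                 cost_impact=4000.0, status="accepted"),
    ChangeRecord("CR-2", "Rebrand UI colors", date(2024, 3, 8),
                 ["development"], schedule_impact_days=2, status="rejected"),
]
print(total_schedule_impact(change_log))  # only accepted changes count
```

Keeping impact fields on each record is what makes the later retrospective use of the log possible: the team can see at a glance how much accepted change cost the project.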
Experienced test managers and test teams, those who have invested years working both together as a team and within the testing field, will discover ways to make best use of everyone’s time on projects. A good test manager, besides protecting, supporting, and growing her test team, will ensure that the test team is making the most efficient use of its time. This in part includes the test manager attending meetings, including project status meetings with the project manager and functional area managers and leads, providing her team’s overall status and challenges, and then briefly sharing project highlights from the status meetings with her team. This ensures that the team can focus on their primary tasks and does not need to attend meetings that the test manager can and should attend as the representative of the test team.
Aside from insulating the team from additional meetings, the test manager can help the team make most efficient use of their time in the following ways:
Learning objectives
LO 6.3.1 |
(K4) Conduct a risk assessment workshop to identify project risks that could affect the testing effort and implement appropriate controls and reporting mechanisms for these test-related project risks. |
It is said that the only things absolutely certain are death and taxes. Therefore, life itself involves risks or uncertainties. You’ve probably heard of the mythical bus that wipes out employees; in our careers we’ve heard managers stating that it always made sense to mitigate the risk of key employees getting “hit by a bus” (variations include cars, trucks, or trains, but never boats for some reason) by providing training and hands-on experience to other employees to ensure coverage and maintain continuity in operational tasks (basically providing backups). Projects are no different in that they too contain risks or uncertainties. The Project Management Body of Knowledge (PMBOK) defines project risk as “. . . an uncertain event or condition that, if it occurs, has a positive or negative effect on one or more project objectives such as scope, schedule, cost, and quality.”5 Since project success is usually assessed by the sound balancing of the constraints within the Project Management Triangle (see Figure 5-1), consideration and management of uncertain events or conditions are key to a successful delivery of a product, service, or result (that is, a project). When we consider risks, we tend to think of only the unplanned negative things that can happen to a project, such as a vendor going out of business or a cyber-security breach of a key integration system, or, in the testing realm, sudden unavailability of a test tool that hampers progress in executing test cases. For purposes of our discussion, we’ll only consider negative risks (threats) or risks that, if they do occur, will have negative consequences on the project. This definition of risk is absolutely aligned with that found in the ISTQB syllabus, which defines risk as “a factor that could result in future negative consequences” with the focus on the negative impacts of risk. 
In fact, as risks are assigned a risk level either quantitatively (for example, 1 (low) through 5 (high)) or qualitatively (“low,” “medium,” “high”) based on their potential impact and likelihood of occurring, appropriate emphasis and mitigation strategies and actions can be taken. Positive risks, outside of the scope of our discussion, are often considered opportunities, such as a project completing substantially under budget. While on the surface this may seem like a wonderful project outcome, a project that comes under budget is often a reflection on the team (and especially the project manager) not properly estimating project effort. As credibility issues come into play, sponsors may be reluctant to fund future projects if the project manager and team have previously done a poor job of estimating a project’s efforts.
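The scoring itself can be sketched in a few lines. The 1 (low) through 5 (high) scales come from the discussion above, while the band thresholds below are illustrative assumptions, not a standard:

```python
def risk_score(likelihood, impact):
    """Quantitative risk level: product of 1-5 likelihood and 1-5 impact."""
    return likelihood * impact

def risk_band(score):
    """Map a quantitative score onto the qualitative low/medium/high scale.
    The cutoffs here are illustrative assumptions."""
    if score <= 6:
        return "low"
    if score <= 14:
        return "medium"
    return "high"

# A risk that is unlikely (2) but severe (5) lands in the medium band,
# which is why both factors must be assessed, not impact alone.
print(risk_band(risk_score(2, 5)))
```

Whether a team uses the numeric product or maps it back to low/medium/high, the point is the same: emphasis and mitigation effort follow the combined likelihood and impact, not either factor on its own.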
Using a SWOT Analysis
On a side note, it is interesting that risk as both positive opportunity and negative threat appears in two of the four quadrants included in a standard strengths, weaknesses, opportunities, and threats (SWOT) analysis, which is a simple yet helpful method to determine factors that can affect not only a project’s outcomes, but also individual career assessments and planning.
The project manager and project stakeholders, including the test manager, can collaborate to develop and maintain this grid pertaining to a project. Notice that the grid groups these aspects of the analysis in positive/negative as well as internal/external categories.
For example, a project strength such as an experienced team who has worked well on prior projects is both positive and internal.
After the SWOT analysis has been completed, the project manager and team should look to:
The beauty of the SWOT analysis lies in both its simplicity and application. SWOT is a simple tool to develop, apply, and maintain, requiring no training or special skills. It is also broadly applicable: it can be used at the project level, tied directly to risk identification and management; by a specific team, such as the test team evaluating the effectiveness of its members; and even in individual career management. In fact, Jim has developed his own professional career SWOT analysis and has taught it to employees to help them manage their own careers.
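The two-by-two structure of the grid (positive/negative crossed with internal/external) can be captured in a tiny sketch; the quadrant mapping simply restates the standard SWOT layout described above:

```python
# Quadrants keyed by (positive?, internal?), following the standard SWOT grid.
SWOT_QUADRANTS = {
    (True, True): "strength",       # positive and internal
    (True, False): "opportunity",   # positive and external
    (False, True): "weakness",      # negative and internal
    (False, False): "threat",       # negative and external
}

def classify(positive, internal):
    """Place a factor into its SWOT quadrant."""
    return SWOT_QUADRANTS[(positive, internal)]

# An experienced, cohesive team is positive and internal: a strength.
print(classify(positive=True, internal=True))
# A vendor that may go out of business is negative and external: a threat.
print(classify(positive=False, internal=False))
```

The two negative quadrants, weaknesses and threats, are exactly where SWOT overlaps with the risk identification discussed in this section.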
While managing project risks is ultimately the responsibility of project managers, they are heavily dependent on their functional area colleagues, such as the test manager, to help identify risks, assess their impact and likelihood, and work through viable mitigation plans for each risk.
Project risk management is all about identifying, assessing, and controlling potential project risks that could have a negative impact on a project and its overall goals and objectives.
Although each company/department/shop may use slightly different names to identify the stages or phases a project undergoes in its life from beginning to end, the standard project lifecycle includes the following phases as shown in Figure 5-3:
When a project is in its planning phase, one of the deliverables a project manager develops is the project management plan. This plan includes a series of smaller, focused management plans that support the overall project management plan (e.g., communications, stakeholder, scope, etc.), including a risk management plan. The full list of project management plan components includes:
Although in a perfect world, we’d aim to eliminate all risks, in reality there are only a few main ways of dealing with risk, which apply to all risks, including test-related risks, identified in the risk management plan. These strategies include:
The project manager begins building the risk management plan by meeting with the applicable functional area managers/leads, including the test manager, to identify and document the various risks that could negatively impact the project. Often, a project manager may consult lessons-learned or retrospective documents from similar projects to see whether any risks affecting previous projects may rear their ugly heads again on this current project. Additionally, a project management team may maintain a template of common risks and successful mitigation strategies based on prior project experience and, when applicable, common sense. This initial research sets the project manager off to a good start on risk management. In the late 1990s, Jim worked at a company that achieved a Capability Maturity Model (CMM) Level 5 rating, the highest within Carnegie Mellon’s Software Engineering Institute’s maturity framework. As part of this rating, our quality management department developed a risk management template with appropriate job aid documentation that captured key risk identification information, pre-mitigation analysis, mitigation steps, and post-mitigation analysis. Let’s review a similar template in Figure 5-4.
This approach was a practical and honest way of managing project risks. Since the mitigation actions did not always entirely mitigate a risk, the template had provisions to show, from a financial perspective, the remaining impact of risks that could not be fully mitigated; this was the residual risk remaining after all necessary mitigating controls were put in place. Related to the earlier discussion of the risk acceptance strategy, our template and process, although not necessarily reflected in Figure 5-4, included a contingency reserve, a small percentage of the overall project cost set aside to handle risks. The ISTQB defines the confidence interval as the period of time within which a contingency action must be implemented in order to be effective in reducing the impact of the risk. Often, there are trigger dates that act as indicators to both start and stop the contingency plan of action; the trigger dates fall within the confidence interval. For example, the aforementioned scenario of the inoperable test server would pose a risk to the project from just before the start of the test phase, when the server is configured in anticipation of the upcoming testing activities, through the test phase, and possibly for some time after the test phase if post-deployment defects requiring retesting on the test server are found. These timeframes denote when the risk could be realized and when the proper mitigation plans and actions should be taken; outside this window, the risk is not realistic and, if it did occur, would have minimal to no impact on the project.
confidence interval: In managing project risks, the period of time within which a contingency action must be implemented in order to be effective in reducing the impact of the risk.
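A minimal sketch of the pre-/post-mitigation arithmetic behind such a template follows. It assumes risk exposure is computed as probability times financial impact, and uses an illustrative 2 percent contingency reserve; both the formula choice and the figures are assumptions for illustration, not values from the actual template:

```python
def exposure(probability, cost_if_realized):
    """Expected financial exposure of a risk: likelihood times impact cost."""
    return probability * cost_if_realized

# Pre-mitigation analysis: a 40% chance of a $50,000 schedule slip.
pre_mitigation = exposure(0.40, 50_000)   # roughly $20,000 of exposure

# Post-mitigation analysis: controls cut the likelihood to 10%;
# what remains is the residual risk after mitigation.
residual = exposure(0.10, 50_000)         # roughly $5,000 remains

# An illustrative contingency reserve of 2% of overall project cost.
project_cost = 400_000
reserve = 0.02 * project_cost

print(residual <= reserve)  # the residual exposure fits within the reserve
```

Tracking both numbers is what made the template honest: it showed not only what mitigation bought, but also how much exposure the project still carried against its reserve.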
The best and most comprehensive risk management plan does the project absolutely no good if, after it has been developed, it sits on a shelf gathering dust. The key to good risk management is to actively review the plan with the functional areas, including the test team, periodically throughout the life of the project. Risk review includes reevaluating existing risks to assess any changes to their likelihood, impact, and contingency and mitigation plans as the project progresses and new information becomes known. Additionally, during risk review, the team should consider adding any new risks that have surfaced since the previous review. Lastly, any tracked risks that have passed without being properly dispositioned should be closed during the risk review. Aside from the mechanics and discipline of reviewing, updating, and actually using the risk management plan and associated deliverables, there are benefits to project team members meeting periodically, collaborating, and wrestling with current and potential issues that could adversely affect the project. Often, as a good practice, project managers include schedule and action item review with the team periodically, as often as every week. Depending on the dynamics and particulars of the project, the risk management plan may be reviewed at each weekly project status meeting or less frequently, such as every two weeks or monthly. The project manager, with buy-in from the project sponsor and team, determines the review frequency. The key point is that manager/lead representatives from the applicable functional areas on software development projects, such as system architecture/engineering, system/business analysis, software development, testing, deployment, operations, training, help desk support, and documentation, collaborate periodically on project issues and risks, including test risks.
As a key stakeholder on the project team, the test manager must play an active role in the entire risk management process, including initial identification of risks and mitigation plans as well as periodic risk reviews. In fact, Jim has seen places where the key stakeholders on software development projects were the development manager, the test manager, and the project manager. This is why it is so important for a good project manager to include, as mentioned earlier, periodic review with the functional area representatives of not only the schedule and action items, but of the risks documented in the risk management plan or risk register.
While test managers are clearly involved in test-related risks, there are many other types of risks not necessarily tied directly to the testing effort, which will nonetheless impact testing. This makes sense when you think of the broad definition of software quality assurance. We often think of quality assurance as predominantly testing tasks and activities within the context of a project that are undertaken to ensure that the product has high quality. While this is certainly one important aspect of quality assurance, there is much more to quality assurance than testing alone. Other areas affecting overall quality and potentially incurring additional project risk include:
This way the developers know what to design and develop, and the testers know what to test. The beauty of this approach is that a small subset of requirements is defined and the resulting software is developed and tested in a short period of time, usually two to three weeks. This means that the product owner sees tangible, working software early and has time to request changes to help the software evolve to better meet user needs. Many Scrum projects fail due to the product owner’s inability to devote sufficient, focused time to the project; generally, the more user involvement, the better the quality of the software produced.
What Makes for a Good User Story?
A good Scrum user story has several general characteristics. You can think of these characteristics as following the INVEST6 acronym, as solid user stories are:
To help keep the test manager aware of these risks, which are generally outside of his control, the project manager can supplement the risk management plan or risk register by listing the functional areas potentially impacted by each risk, such as software engineering and software development as well as the test organization. Additionally, the test manager or a delegate, such as a test lead, should take a proactive role in the periodic project risk reviews usually conducted by the project manager so that the test manager is aware of the impact on the test team and testing phase of risks and issues identified by other functional areas. Here the test manager can be considered the liaison between the project’s functional area team members and her test team. For example, at the project status meeting, the test manager, representing the test team, testing phase, and associated testing activities of the project, learns of an issue delaying development. The test manager would note at this meeting the potential risk to the testing phase. After the meeting, the test manager shares the issue with the test team and plans a course of action to deal with this real or potential issue.
Although quality is everyone’s responsibility, the buck often stops with the test team. In fact, quality seems so closely linked to the test team that, in some organizations, delegation of the quality responsibility lies with the test team such that the testing senior leader must sign off on the test plan, ensuring that adequate testing has occurred and the test results have met the overall test strategy and plan before the product can be deployed to production. We’ll discuss how quality assurance and testing relate in the next section.
Learning objectives
LO 6.4.1 |
(K4) Define how testing fits into an organization’s overall quality management program. |
At the time of this writing, Jim teaches a college class in systems analysis and design. During a lesson on managing a system implementation, he asked the class three questions:
How would you answer these questions?
The class answered this way:
Let’s look at each of these questions and answers separately.
What is software quality assurance? The very broad answer of anything done to ensure quality in the product (and, by extension, the service or result) is nonetheless accurate. The ISTQB defines “quality assurance” as “part of quality management focused on providing confidence that quality requirements will be fulfilled.”7 “Quality management” itself, according to ISO 9001:2015, is based on a set of principles, including customer focus; leadership; engagement of people at all levels; a process approach, where interrelated processes work together as a coherent system; and an ongoing focus on improvement.8 This has many manifestations and can be seen beyond software projects to include anything from customer surveys on the receipt of takeout purchased items from fast-food restaurants to that very familiar message on the help desk call, “This call may be monitored for quality assurance.” Software quality assurance includes any intentional activities taken to ensure that quality exists in software in particular. The concept of quality assurance can be extended to any of the products we buy and the services we use. But what, really, is “quality”? The ISTQB defines quality as “the degree to which a component, system, or process meets specified requirements and/or user/customer needs and expectations.”9 Noted quality gurus defined quality as fitness for use, meaning a lack of defects or bugs (J. M. Juran10) and conformance to requirements (Philip Crosby11).
Quality is not the run-of-the-mill, the mediocre, the mundane, or the everyday. The pursuit of quality is the pursuit of value, excellence, even superiority. Test managers (and others, as we’ll soon see) are responsible for contributing to excellent software. Test managers do this through building a solid team of test professionals, by not only participating in requirements, design, and test reviews, but also by challenging the concepts, ideas, and reasoning used in these artifacts not for the sake of building solid documentation, but in order to ensure superior quality software.
What one word do you most associate with software quality assurance? Most people in the software industry equate “testing” with “quality assurance.” In some organizations, there are separate departments for the test team and the quality assurance team. In other organizations, the test team is known as the QA team or quality assurance team. This is unfortunate because, cliché though it may be, quality really is everyone’s job and responsibility on the project. There is often a prevailing mind-set that the test team (those poor souls who are the last in line before the product goes out the door) has the sole responsibility for building and ensuring quality in the product. This view is seriously outdated. As mentioned earlier in the section on project risk management, the quality of the product produced as a fully functional, operational system is dependent on the quality of the requirements gathered from the user and documented for the team; on the quality of the designs, which themselves depend on a solid understanding of user requirements; on the quality of the software code that is built and the database architecture that is implemented; on the test strategy, plans, and actual test cases, which use the requirements and other project artifacts to ensure quality in the test phase; and on the ancillary services and documentation developed in support of the product, such as user, help desk, and operational documentation, training material, and so on.
How do you ensure software quality assurance? As the response to the previous question shows, ensuring software quality assurance occurs in each stage of the SDLC. At a CMM Level 5 company, we had much training and documentation concerning our model, processes, and procedures. Every project stakeholder, including the test manager and project manager, was required to review and eventually approve all necessary project documentation from the requirements through design and testing documentation as well as attend and participate in various reviews. This ensured that project team members were committed to the project and all understood what was being built through the requirements, design, development, and testing phases. While there is post-production verification after a product has deployed to production, we often don’t continually test the product, since project team members and other resources are allocated to new projects. However, we should be vigilant in soliciting input from users and customers regarding the continual value (i.e., quality) of the product long after it has been initially deployed.
We previously discussed that, in a project’s planning phase, the project manager develops the project management plan that itself includes several smaller, focused management plans, including the risk management plan. One other focused management plan developed during the planning activities is the quality management plan (see Figure 5-5 for some of the major components used to develop the quality management plan). The essence of the quality management plan is to minimize variation and deliver results that meet requirements. In order to do this, the quality management plan includes various baselines, which act as starting points for comparisons or deviations, such as the scope, schedule, and cost baselines. Since scope, schedule, and cost represent the key indicators of project success and quality (remember Figure 5-1’s project management triangle?), variations from each component’s baseline indicate potential quality issues.
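The baseline comparison at the heart of the quality management plan can be sketched as follows. The baseline and actual values, and the simple actual-minus-baseline variance, are illustrative assumptions, not figures from any real plan:

```python
def variance(baseline, actual):
    """Deviation from baseline; a positive value means over plan."""
    return actual - baseline

# Illustrative scope, schedule, and cost baselines set during planning.
baselines = {"cost": 250_000, "schedule_days": 120, "scope_stories": 80}
# Illustrative actuals measured partway through the project.
actuals   = {"cost": 265_000, "schedule_days": 130, "scope_stories": 78}

deviations = {k: variance(baselines[k], actuals[k]) for k in baselines}

# Components running over their baselines flag potential quality issues.
over_plan = [k for k, v in deviations.items() if v > 0]
print(over_plan)
```

Here cost and schedule both exceed their baselines while scope has slipped slightly under, exactly the kind of variation pattern the quality management plan exists to surface and investigate.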
Since quality management extends beyond testing, it is important to differentiate the responsibilities of each area. If one team handles both areas, it is crucial that testing activities be distinguished from overall quality management activities, and that the test policy and test strategy expand into broader quality policies and quality strategies that encompass much more than testing. The quality management discipline would be responsible for ensuring an integrated and consistent set of quality assurance and quality control processes, activities, and metrics. You can think of quality management (QM) as consisting of both quality assurance (QA) and quality control (QC).
QM = QA + QC
QA establishes the process for managing for quality and includes preventative measures such as establishing appropriate policies and guidelines taken to “assure quality” in software, such as reviewing test plans, selecting defect tracking tools, and training personnel in the various policies and guidelines (one of the companies Jim worked for invested in much training and documentation concerning the overall project and software methodology, helping them to achieve CMM Level 5 certification). The goal of QA is to prevent defects from entering the software. QA is more proactive.
QC on the other hand includes detection measures to determine the level of quality in the software. The goal of QC is to gauge and monitor the level of quality inherent in the software, assessing variation against requirements. QC is more reactive.
Think about quality assurance as defining the necessary quality requirements, such as standards, processes, procedures, and policies, with an eye toward continuous improvement (enhancing and adapting those policies to fit the needs of the organization and business) and overall vision. Quality control then applies the quality assurance standards, processes, procedures, and policies against the product to check the product’s level of quality. Any variations, deviations, or inconsistencies should trigger a resolution plan that, given management approval, is instituted to raise the level of quality.
In the following section, you will find sample questions that cover the learning objectives for this chapter. All K5 and K6 learning objectives are covered with one or more essay questions, while each K2, K3, and K4 learning objective is covered with a single multiple choice question. This mirrors the organization of the actual ISTQB exam. The number of the covered learning objective(s) is provided for each question, to aid in traceability. The learning objective number will not be provided on the actual exam.
Criteria for marking essay questions: The content of all of your responses to essay questions will be marked in terms of the accuracy, completeness, and relevance of the ideas expressed. The form of your answer will be evaluated in terms of clarity, organization, correct mechanics (spelling, punctuation, grammar, capitalization), and legibility.
LO 6.2.6
As test manager on a project, you have been informed that the test environment won’t be ready until three weeks after the original planned date of availability due to infrastructure issues. This could seriously impact both the time to execute and analyze planned test cases as well as the scope of test cases necessary to produce a quality product.
Which of the following statements best describes the role of the test manager in this situation?
LO 6.3.1
As a test manager, you conduct a risk assessment workshop to identify project risks that could affect your team’s testing effort. As your workshop progresses, you and the functional area leads identify several potential risks with controls and monitoring or reporting mechanisms.
Of the scenarios below, decide which identified risk, control, and reporting mechanism directly addresses test-related project risks.
LO 6.4.1
As a test manager, consider how your testing team can play an active role in the overall quality management program of your company.
To that end, select the one scenario below where you and your team are not actively involved in this quality management program.
LO 6.2.1, LO 6.2.2
Scenario 3: Test Estimation
Assume you are a test manager involved in developing and maintaining a suite of products centered on a family of programmable thermostats for home, business, and industrial use to control central heating, ventilation, and air conditioning (HVAC) systems. In addition to the normal HVAC control functions, the thermostat also interacts with applications that run on PCs, tablets, and smartphones. These apps can download data for further analysis as well as actively monitoring and controlling the thermostats.
Three major customer types for this business are schools, hospitals and other health-care facilities, and retirement homes. Therefore, management considers its products as safety critical, though no FDA or other regulations apply.
The organization releases new software, and, when applicable, hardware, quarterly. It follows a Scrum-based Agile lifecycle, with five two-week iterations, followed by a three-week release finalization process. For the upcoming release, assume that there are five teams, each with one tester and four developers. The testers report in a matrix structure to you, the test manager. You also have two small teams within your test organization, one that focuses on test automation and another that creates and maintains test environments.
You follow a blended test strategy that includes requirements-based testing, risk-based testing, reactive testing, and regression-averse testing. During release planning, test estimation is done based on the number of requirements (user stories) and the number of risk items identified across those user stories for the entire release backlog. These estimates are used to avoid overcommitting in terms of overall release content. During iteration planning, test estimation is done on the number of requirements (user stories) and the number of risk items identified across those user stories selected for the iteration backlog.
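The counting-based estimation described above can be sketched as a simple calculation. The hours-per-story and hours-per-risk-item figures below are illustrative assumptions, not values from the scenario; in practice they would come from the team's historical velocity.

```python
# Sketch of counting-based test estimation: effort is driven by the number
# of user stories and the number of risk items identified across them.
# The per-story and per-risk-item hour figures are hypothetical.

def estimate_test_effort(num_stories, num_risk_items,
                         hours_per_story=4.0, hours_per_risk_item=2.0):
    """Estimate test effort (person-hours) from backlog counts."""
    return num_stories * hours_per_story + num_risk_items * hours_per_risk_item

# Release planning: estimate across the entire release backlog,
# to avoid overcommitting on overall release content.
release_estimate = estimate_test_effort(num_stories=120, num_risk_items=80)

# Iteration planning: estimate only the stories and risk items
# selected for the iteration backlog.
iteration_estimate = estimate_test_effort(num_stories=24, num_risk_items=15)

print(release_estimate)    # 120*4.0 + 80*2.0 = 640.0
print(iteration_estimate)  # 24*4.0 + 15*2.0 = 126.0
```

The per-unit hours would be recalibrated each release from actuals, which is why the scenario's historical metrics matter for the estimation questions that follow.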
Consider Scenario 3.
Assume that there have been three releases so far, all following the same Agile lifecycle, with similar (though in some cases differently sized) teams, working on the same product line. The historical defect metrics for each release are as follows:
Part 1: Describe the process for release test estimation.
Part 2: Describe the process for iteration test estimation.
Part 3: Describe the use of the historical information presented above to create a model for estimating the number of defects found, resolved, and delivered in each release.
LO 6.2.3
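One simple form the Part 3 model could take is a historical-average predictor. The sketch below is illustrative only: the per-release numbers are invented placeholders, since the scenario's actual historical table is not reproduced here.

```python
# Hypothetical defect-prediction model built from historical release metrics.
# Each tuple is (defects_found, defects_resolved, defects_delivered) for one
# past release; the values below are invented for illustration.

historical = [
    (200, 180, 20),
    (210, 195, 15),
    (190, 175, 15),
]

def predict_next_release(history):
    """Predict next-release defect counts as the mean of prior releases."""
    n = len(history)
    return {
        "found":     sum(r[0] for r in history) / n,
        "resolved":  sum(r[1] for r in history) / n,
        "delivered": sum(r[2] for r in history) / n,
    }

prediction = predict_next_release(historical)
print(prediction)  # {'found': 200.0, 'resolved': 183.33..., 'delivered': 16.66...}
```

Because the scenario notes that the teams were differently sized across releases, a more defensible model would normalize these counts by team size or by the number of user stories before averaging.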
Continue with scenario 3, as described earlier, and as extended in the previous question.
After the first two iterations, you find that the following defect metrics apply so far.
Part 1: Identify deviations that have occurred from historical metrics.
Part 2: Discuss how these deviations will affect your iteration estimation process.
LO 6.2.4, LO 6.2.5
Continue with scenario 3, as described earlier, and as extended in the previous two questions.
Assume that, at the beginning of the third iteration, three of the senior developers quit and take jobs with one of your company’s main competitors. You are scheduled to have a meeting with management to discuss the impact of these resignations on testing and quality, and to make recommendations. Outline the key points you’ll address in your meeting.
1Thanks to Klaus Nielsen, “Software Estimation using a Combination of Techniques,” PMI Virtual Library, 2013, https://www.projectmanagement.com/articles/283931/Software-Estimation-using-a-Combination-of-Techniques for some brief history and background on the Delphi estimation techniques.
2Doctors Erik P.W.M. van Veenendaal and Ton Dekkers, “Testpointanalysis: a method for test estimation”, published in Project Control for Software Quality, Kusters R., A. Cowderoy, F. Heemstra and E. van Veenendaal (eds), Shaker Publishing BV, Maastricht, The Netherlands, 1999, http://www.erikvanveenendaal.nl/NL/files/Testpointanalysis%20a%20method%20for%20test%20estimation.pdf.
3For a good comparison between the PBS and WBS and how they can both be used successfully on projects, see Patrick Weaver, “Product versus work breakdown structure”, projectmanager.com.au, projectmanager.com.au/product-versus-work-breakdown-structure, August 13, 2015.
4Kent Beck et al., “Twelve Principles of Agile Software,” from the “Manifesto for Agile Software Development,” 2001, www.agilemanifesto.org/principles.html. This is the definitive authority on understanding all things Agile.
5Project Management Institute: A Guide to the Project Management Body of Knowledge (PMBOK Guide) – Fifth Edition, 2013, p. 310. Of course, the Project Management Institute’s PMBOK is the definitive guide on everything related to project management.
6Bill Wake, http://xp123.com/articles/invest-in-good-stories-and-smart-tasks. Although this handy acronym appears in many Agile references, Wake’s explanation is concise and even touches upon the SMART model (also covered in Chapter 2) which helps bring meaning to the development of goals.
7www.astqb.org/glossary/search/quality%20assurance.
8www.iso.org/iso/home/standards/management-standards/iso_9000.htm and www.iso.org/iso/pub100080.pdf. This is a short, focused overview on key quality management principles which, when taken together, form the framework for performance improvement and operational excellence.
9ISTQB Glossary, www.astqb.org/glossary/search/quality.
10ASQ website, asq.org/about-asq/who-we-are/bio_juran.html. The ASQ, or American Society for Quality, distinguishes itself as a global community of people dedicated to quality who share ideas and tools, similar to what communities of practice do.
11ASQ website, asq.org/about-asq/who-we-are/bio_crosby.html.