5 Project Management Essentials

Keywords: confidence intervals, planning poker

5.1 Introduction

Learning objectives

No learning objectives for this section.

Project management is part of practically everything we do. We like to use the simple example of summer vacations to illustrate this point. First, we decide to take a much-needed vacation. We may consider the feasibility of taking a trip, at a very high level considering time (is time off from work available?), cost (the vacation must be within budget), scope (what do we enjoy doing that fits in the budget and time away from work?), benefits (spending uninterrupted fun time with our families, recharging batteries), and even some idea of risk (bungee jumping is not our idea of a fun vacation). After these high-level considerations, we begin planning the vacation, including where we want to go, what we want to do, how long we can afford to be away, and so on. After answering the general planning questions, we need to dig into the details and do more specific, in-depth planning of what we need to do to get ready. When the big day arrives, we get into the car and start the vacation, doing all the things and seeing all the sights we so eagerly planned some time before. Finally, we return home exhausted yet refreshed, but the vacation doesn’t truly end, as memories remain. This is also the time to reflect and consider:

  • Did we overall enjoy our vacation time, achieving the benefits we planned?
  • Did we stay within our planned vacation budget (or will the upcoming bills wipe us out)?
  • Did we spend too long or not enough time on vacation (sometimes, while absence makes the heart grow fonder, familiarity breeds contempt)?
  • Is there anything different we would do if we could do it all again (like bungee jumping was worth the risk and is more fun than we thought)?

ISTQB Glossary

consultative test strategy: Testing driven by the advice and guidance of appropriate experts from outside the test team (e.g., technology experts and/or business domain experts).

planning poker: A consensus-based estimation technique, mostly used to estimate effort or relative size of user stories in Agile software development. It is a variation of the Wideband Delphi method using a deck of cards with values representing the units in which the team estimates.

There is considerable overlap and involvement in what test managers and project managers do on projects. While the project manager has direct ownership, accountability, and responsibility in many areas, the successful test manager will be actively involved in many project management tasks, including the development and overall management of test-related tasks on the schedule, risks, reviews, assessments, and proper documentation.

In this chapter, we’ll cover common project topics of estimation, scheduling, budgeting, risk management, and quality management, distinguishing between areas and topics within the domain of the project manager and the test manager.

5.2 Project Management Tasks

Learning objectives

LO 6.2.1

(K6) For a given project, estimate the test effort using at least two of the prescribed estimation methods.

LO 6.2.2

(K6) Use historical data from similar projects to create a model for estimating the number of defects that will be discovered, resolved, and delivered on the current project.

LO 6.2.3

(K5) During the project, evaluate current conditions as part of test control to manage, track, and adjust the test effort over time, including identifying any deviations from the plan and proposing effective measures to resolve those deviations.

LO 6.2.4

(K5) Evaluate the impact of project-wide changes (e.g., in scope, budget, goals, or schedule), and identify the effect of those changes on the test estimate.

LO 6.2.5

(K6) Using historical information from past projects and priorities communicated by project stakeholders, determine the appropriate trade-offs between quality, schedule, budget, and features available on a project.

LO 6.2.6

(K2) Define the role of the test manager in the change management process.

While the test manager primarily manages the testing phase, testing activities, and test team on a project, she must also participate in the overall project management aspects of the project. These include task estimation, scheduling, budgeting, resource allocation and management, project trade-offs, change management, risk management, and overall quality management. Each of these areas is considered in greater depth below. It is key that the test manager, like other functional area managers such as the development manager, business/systems analysis manager, and training manager, not treat her specific area separately from the overall project; doing so would be greatly detrimental to the success of the project and its outcomes. Rather, the test manager should collaborate closely with the project manager and the other functional area managers, as all areas are interdependent and rely on each other to build a quality product and make the project a success.

5.2.1 Test Estimation

Each functional area must consider the work its team members must do to contribute to the overall success of the project. Since the testing team is one of several functional areas responsible for deliverables, the team must estimate the time and effort involved in completing all tasks related both to the test process and to supporting other functional areas. One company Jim worked for required that each functional area lead review, contribute to, and sign off on all applicable project and software development lifecycle (SDLC) documentation. This meant that he, as the project manager on the team, needed to review, comment on, and approve the requirements, design, test, and supporting documentation such as user manuals and training material. While to some this may seem overly rigorous, the benefits extended to the product, the project, and the team members. This collaboration not only promoted an understanding of the overall product and familiarized the team members with the requisite format and content of the project deliverables, but also contributed to building strong working relationships among the team members. On one team that used this rigorous approach, after they delivered a successful project, management rewarded them with a ferry ride across the Hudson River where Jim played the role of Mr. Rock and Roll in the on-boat fun and festivities (he has the pictures somewhere to prove it!).

Each functional area must therefore estimate its applicable tasks so that an overall project schedule can be developed. In particular, the test team needs to assess all of its main test tasks and determine the time and effort necessary to complete them properly. The complexity of the software, along with the quality of the software and documentation delivered to the test team, will influence the test task estimates.

Estimation, like predicting the weather, can sometimes be more of an art than a science (no offense to meteorologists intended). However, here is a list of techniques that can be used to determine the time and effort requirements especially relevant for test implementation and execution efforts:

Brainstorming. This is a popular technique to help a team collaborate, generate ideas, and build on others’ ideas. There are various ways to implement brainstorming, each with pros and cons. These vary from freeform, where participants freely express ideas when they think of them, to round robin, where the facilitator calls on each person in line to contribute ideas. Brainstorming can be extended beyond idea generation to a collaborative session on developing task estimates using an Agile technique known as planning poker.

Planning Using Planning Poker

Jim had the opportunity to work on an Agile project where Planning Poker was used to estimate user stories.

For the history buffs, Planning Poker has its roots in the Delphic methods of estimation. More specifically, the original Delphi method (the term Delphic refers to the oracles of ancient Greece, meaning to give advice or prophecy, similar to forecasting the future) debuted in the 1960s. It consisted of asking experts in a particular field to individually and privately (no sharing allowed) develop estimates. One drawback to this approach is that, since the experts could not communicate or collaborate on their estimates, each was free to make whatever assumptions were necessary to develop an estimate. Thus, as assumptions varied, the estimates lacked a common foundation and could not always be relied upon. The Wideband Delphi approach improved upon its predecessor by (1) defining a repeatable and consistent series of estimation process steps and, perhaps even more importantly, (2) allowing collaboration among the estimators to discuss and modify the estimates they originally developed independently. Enter Planning Poker, which is founded upon this tradition of Delphic estimation techniques.1

The Planning Poker estimation technique is consensus-based; those with the most experience in the specific area covered by the user story/requirements, or those with the most compelling case, can influence the overall team estimates to reach agreement. Each team member, such as developers, testers, and support staff, has a deck of playing cards with one of the following numbers on each card face: 0, 1, 2, 3, 5, 8, 13, 20, 40, and 100. Several of the initial numbers follow a Fibonacci sequence (for example, the second and third items sum to the fourth item (1 + 2 = 3), and the third and fourth items sum to the fifth item (2 + 3 = 5)). For simplicity, the higher numbers, such as 20, 40, and 100, break this sequence and are used to represent relative sizes (akin to medium, big, and very big effort). The product owner or user representative reads each user story, which is a short statement of the requirement. The team may ask clarifying questions of the product owner to better understand the effort involved. Then each team member makes an individual judgment of effort represented by a numbered card and throws her chosen card on the table, and the team notes the similarities of and differences between the choices. If all estimates are the same, that is the estimate for that work item. If, however, there are differences, the team members discuss their reasoning and try to convince the others in order to reach a consensus estimate; those who selected very high or very low numbers in particular are encouraged to share their reasoning. The team then re-estimates, and this process continues until consensus is reached or the item is deferred until more information is obtained to help build consensus.
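As a rough illustration, one card-throwing round of this process can be sketched in a few lines of Python. The team-member names and card choices below are hypothetical, and in a real session the "discussion" step is a conversation, not code:

```python
# The card deck described above: roughly Fibonacci at the low end,
# coarse relative sizes (20, 40, 100) at the high end.
CARD_DECK = [0, 1, 2, 3, 5, 8, 13, 20, 40, 100]

def poker_round(estimates):
    """Given a dict of member -> card value, return the consensus
    estimate, or the outliers who should explain their reasoning."""
    for member, card in estimates.items():
        if card not in CARD_DECK:
            raise ValueError(f"{member} played an invalid card: {card}")
    values = set(estimates.values())
    if len(values) == 1:                      # unanimous: that's the estimate
        return {"consensus": values.pop(), "discuss": []}
    low, high = min(values), max(values)      # widest disagreement
    discuss = [m for m, c in estimates.items() if c in (low, high)]
    return {"consensus": None, "discuss": sorted(discuss)}

# First throw: no consensus; the holders of 2 and 13 explain themselves.
print(poker_round({"Ana": 5, "Ben": 2, "Cal": 13, "Dee": 5}))
# Second throw, after discussion: unanimous.
print(poker_round({"Ana": 5, "Ben": 5, "Cal": 5, "Dee": 5}))
```

The loop of throw, discuss, re-estimate simply repeats this function until `consensus` is non-empty or the team defers the item.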

  • Test case iterations. The test team can estimate the time required to execute each test case once and then multiply that number by the estimated number of iterations for each test case. This assumes that the time to rerun a test case is the same for each iteration. This technique can take into account the expected number of test case failures; prior history of test failures per test case run on similar projects can help with this estimate.
  • Quality risk analysis. Risk is generally the product of impact if the risk occurs and the likelihood of the risk occurring. Based on a risk analysis, those higher risk items require additional effort in testing. The test team would then estimate the number of tests needed for these high technical and/or business risk items, determine the average time needed to create or maintain each test case, and calculate the time necessary to execute each test case. Building on the previous estimating technique, it may be necessary to factor in several iterations of test case execution where the risk of failure is higher. Obviously, lower risk items require fewer test cases.
  • Function point analysis (FPA). FPA is a way to measure software by quantifying the functionality that is developed and provided to users. For example, the team can decide that 50 lines of code needed to produce some functionality equate to one function point. From the test perspective, based on all of the function points included in the project, the test team can determine the estimated number of test cases necessary to cover the application and then determine the time necessary to create and execute those test cases.
  • Developer-to-tester-hours ratio. This estimation technique, as its name implies, attempts to derive a ratio of the number of developer hours to tester hours required on a project. Using historical data from similar projects in the same or similar organizations can help. However, this technique can be very subjective and depends on factors such as the relative abilities of the developers and testers (e.g., comparing a highly capable developer’s hours against an average tester’s would not be reliable).
  • Test point analysis (TPA). TPA is an estimating technique useful for black box testing, or testing functionality without necessarily knowing the internal specifics of the functionality, and can be used in system and acceptance testing. TPA estimates take into account the following:
    • The size of the system, as determined by function points, including complexity (number of conditions in a function), interfacing (number of data sets used by a function), and uniformity (the extent to which the system contains similarly structured functions)
    • Test strategy (selection of quality characteristics for each function and the degree of coverage)
    • Productivity (the relationship between the number of hours necessary for a task and the number of associated function points)2
  • Historical heuristic. The past is often a good predictor of the future. When project managers conduct a project retrospective at the end of a project (or, ideally, at the end of each Agile sprint or Waterfall stage), the findings can be very helpful in influencing future, similar projects. Since a simple definition of “heuristic” is using experience to learn and improve, lessons learned over time can contribute to providing more solid estimates on future projects. For example, if a common theme is that the test team underestimates the number of iterations of each test case or the number of defects discovered, the test team can perhaps use other estimation techniques based on this information to more accurately predict test case iterations and expected defects on future projects.
  • Project management techniques. Bottom-up estimation techniques begin with detailed information and work up to higher levels of abstraction. The product breakdown structure (PBS) lists the desired outputs or products defined within the project. The requirements document in particular can help with this exercise. Once the products are identified, the work breakdown structure (WBS) is used to identify the tasks and activities necessary to deliver those outputs or products. The PBS defines where you want to go, while the WBS tells you how to get there.3
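Two of the techniques above, test case iterations and quality risk analysis, reduce to simple arithmetic once the inputs are agreed upon. The following sketch combines them; the per-case times, iteration counts, and risk multipliers are illustrative assumptions, not prescribed values:

```python
# Per-test-case execution time multiplied by expected iterations,
# where the risk level drives the iteration count: higher-risk items
# are assumed to fail (and be rerun) more often.
def estimate_hours(test_cases):
    """test_cases: list of (minutes_per_run, expected_iterations)."""
    total_minutes = sum(mins * runs for mins, runs in test_cases)
    return total_minutes / 60

# Illustrative mapping from risk rating to expected iterations.
iterations_by_risk = {"high": 4, "medium": 2, "low": 1}

suite = [
    (30, iterations_by_risk["high"]),    # e.g., a complex payment flow
    (15, iterations_by_risk["medium"]),  # e.g., report generation
    (10, iterations_by_risk["low"]),     # e.g., static help pages
]
# 30*4 + 15*2 + 10*1 = 160 minutes of execution effort.
print(f"Estimated execution effort: {estimate_hours(suite):.1f} hours")
```

Historical failure rates from similar projects would replace the guessed iteration counts in practice.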

Once one or more techniques are selected and used, it’s a good idea to ensure that all test estimates include the following:

  • Planning and preparation work, such as time to develop the test documentation including the test plan, test cases, etc. as well as possibly time to review the test policy and test strategy at the organization level.
  • Time to acquire or create test data, whether manually, through data generators, via the use of production data, or some mix of all three sources. Data security considerations could make it mandatory to create anonymous test data and the effort for that would need to be factored into the test estimates.
  • Time to create and configure the necessary testing environments, test systems, and tooling (when necessary), allowing the proper exposure of defects through normal test conditions, the capability to operate normally when failures are not occurring, and the ability to replicate the appropriate environment (e.g., production) as necessary.
  • Adequate time to prepare test cases (e.g., preconditions, postconditions), execute test cases, analyze results (e.g., pass, fail), and record results.
  • Sufficient time to gather and report test execution information. Note that some test tools can help gather this data for reporting purposes or can be customized to facilitate this.

Aside from general, productive time devoted to test activities, the test manager and test team should be aware of other project time considerations, which in fact affect every functional area. These include time allocated to nonproductive work, such as administrative overhead (e.g., completing time sheets), planned vacations and holidays, and training (although training contributes to both the skill set and thus the value of testers and their contributions to future projects). Additionally, time devoted to team meetings, although productive, can be classified here as not contributing to real test task completion. These considerations must be taken into account when test team estimates are developed and, from the broader project perspective, when the project manager considers this input from all functional areas. One problem Jim has personally witnessed is project leaders noting that work didn’t get done as planned because key resources were on vacation. The point here is that, when the team estimates were developed, neither the project leader nor the individual team members thought to adjust the schedule for planned time off.

After factoring in these various time considerations, it may be helpful to clearly state the number of test hours available to the team. This helps provide a straightforward variance analysis over time, where estimates can easily be compared to the actual time spent completing test tasks.
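The available-hours and variance calculations described above might look like the following sketch; the team size, overhead percentages, and vacation hours are illustrative assumptions, not standard values:

```python
# Deduct nonproductive time (administrative overhead, meetings,
# planned vacation) from gross capacity to get available test hours.
def available_test_hours(testers, weeks, hours_per_week=40,
                         overhead_pct=0.10, meetings_pct=0.05,
                         vacation_hours=0):
    gross = testers * weeks * hours_per_week
    productive = gross * (1 - overhead_pct - meetings_pct)
    return productive - vacation_hours

capacity = available_test_hours(testers=4, weeks=6, vacation_hours=40)
print(f"Available test hours: {capacity:.0f}")

# Straightforward variance analysis: actual hours spent vs. the estimate.
planned, actual = capacity, 900
variance_pct = (actual - planned) / planned * 100
print(f"Variance: {variance_pct:+.1f}%")
```

Stating capacity as a single number up front is what makes the later planned-versus-actual comparison straightforward.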

There are even more factors to consider when estimating test activities. These include:

  • An estimation of the quality of the software delivered to the testing team. This is based on the level of unit testing performed by developers; the quality of reviews, such as requirements or user story reviews, design reviews, code reviews, and pair programming; and the results of static code analysis, that is, analyzing the code without executing it to find potential defects.
  • Change control and configuration management processing. This includes the level of change or churn accompanying the software; more change usually translates to more testing.
  • Testing process maturity. More mature testing processes generally require less testing effort and smaller test estimates due to process efficiencies.
  • Project team maturity. A team of seasoned senior members who have worked together before on other projects may warrant smaller test estimates.
  • Software development methodology. This includes methodology choices such as traditional (e.g., Waterfall), Agile (e.g., Scrum), spiral, prototyping, and so on. If a chosen methodology is new to the team, such as a Waterfall shop using Agile methods on a project, time for learning and making “rookie” mistakes should be factored into the estimates.
  • Subsequent quality of defect fixes. This depends on the reliability and quality of the defect fixes; defects that are not corrected the first time and require one or more additional fix-and-retest iterations will increase the overall test estimate.
  • Test environments. Limited support resources to build and maintain test environments influence testing estimates.
  • Business resources. The cost of business resources and their overall availability to resolve questions on requirements or participate in user acceptance testing can affect testing estimates.
  • Documentation and training. The availability of current, accurate documentation and trained personnel compared with no or poorly written documentation and untrained staff will significantly affect testing estimates. These areas can too often be neglected when developing valid estimates.
  • Reliability. The reliability of the test systems, the test data, and the availability of a test oracle all contribute to the ability of the test team to perform efficiently and should be considered when developing test estimates. For example, if it is suspected that test data from the production environment cannot properly be obtained or will need significant modification for appropriate use in the test environment, testing estimates should reflect this additional work. Likewise, if the test oracle (the source used to determine how the software should function, for instance, requirements or design models such as state transition diagrams or object models) is unavailable, ambiguous, or of poor overall quality, the additional work the test team must do to understand the expected functionality and process flows by whatever means necessary must also be factored into the testing estimates.

Aside from estimates covering the effort to conduct or execute test cases, the test manager must also consider the time involved in identifying defects; retesting, both confirmation testing (to verify that defects have been properly fixed by the development team) and regression testing (to verify that fixes have not broken existing functionality); documenting defect information; and tracking defects for reporting purposes. Depending on the complexity of the functionality, the risk ratings of the requirements, or the relative importance of the functionality, the test manager may assign varying estimates to ensure proper coverage, anticipating defect work. The test manager can also predict defect counts based on the size of the software being created, considering the number of developer hours, lines of code, or function point analysis, where, for example, there is a 1:5 ratio of defects to function points, or one defect for every five function points of code. These ratios can be derived from similar past projects, prior working team relationships (developers and testers working together), and industry averages. Estimating defect effort separately from the overall test development, execution, analysis, and documentation tasks gives a more reliable gauge of software quality: because defect estimates and actual defect data are not buried in the overall testing effort, comparing the actual number of defects against the anticipated number shows whether the software quality, or the prediction model itself, differs from expectations.
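A defect-estimation model of this kind can be sketched as follows, using the 1:5 defects-to-function-points ratio mentioned in the text; the per-defect reporting and retest hours are illustrative assumptions:

```python
# Predict defect counts from software size using a historical ratio
# (here, one defect per five function points, as in the text).
def predict_defects(function_points, defects_per_fp=1 / 5):
    return round(function_points * defects_per_fp)

# Keep defect-handling effort as a separate line item, so actual vs.
# expected defect counts are not buried in the execution estimate.
def defect_effort_hours(expected_defects, hours_to_report=0.5,
                        hours_to_retest=1.5):
    return expected_defects * (hours_to_report + hours_to_retest)

expected = predict_defects(function_points=200)   # 200 FP -> 40 defects
print(f"Expected defects: {expected}")
print(f"Estimated defect-handling effort: {defect_effort_hours(expected)} hours")
```

If actual defect counts run well above or below `expected`, that signals either lower/higher software quality than assumed or a ratio that needs recalibrating from this project's data.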

Although Planning Poker (previously explained) is often associated with Agile, strictly speaking the two are separate and unrelated. In practice, however, Agile teams often do use Planning Poker and other estimating techniques with a focus on estimating only a smaller effort, specifically the user stories associated with a recent story workshop. It is generally more difficult to estimate and assess risk on a full set of project requirements; Agile instead has the team focus its estimation and risk assessment efforts on only those user stories planned for the next few sprints, an effort that is more manageable for the team. Additionally, the risks within the scope of a short iteration will either be realized or discarded by the end of the sprint, and the next sprint can be planned accordingly.

After the various testing estimates are derived, it is important that, as the project moves forward, the project manager track actual performance against the planned estimates; the test manager likewise performs a variance analysis comparing actual time and effort with the estimates. Depending on thresholds set by the organization (that is, what are and are not acceptable variances), if testing or any other functional area within the project falls beyond the set boundaries, measures must be taken to bring the project back on track, including adjustments to resources, negotiated compromises on scope and functionality, and/or adjusted timeframes, in order to produce a quality product and a successful project.
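This threshold check can be sketched as a small function applied across functional areas; the 15 percent threshold and the hour figures are illustrative assumptions:

```python
# Flag any functional area whose actual effort deviates from its
# estimate beyond an organization-set threshold (here, 15%).
def check_variances(areas, threshold_pct=15.0):
    """areas: dict of name -> (estimated_hours, actual_hours).
    Returns the areas needing corrective action, with their variance %."""
    out_of_bounds = {}
    for name, (estimated, actual) in areas.items():
        variance = (actual - estimated) / estimated * 100
        if abs(variance) > threshold_pct:
            out_of_bounds[name] = round(variance, 1)
    return out_of_bounds

print(check_variances({
    "testing":     (800, 1000),   # +25%: beyond threshold, needs action
    "development": (1200, 1260),  # +5%: within bounds
}))
```

Areas returned by the check are candidates for the corrective measures named above: resource adjustments, scope compromises, or revised timeframes.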

5.2.2 Defining the Testing Schedule

Generally, once all task estimates are known, the project schedule can be built; a schedule is nothing more than a way to track who does what, when, and in what order. While there is an overall project schedule defining the necessary tasks according to the SDLC for software projects, the test manager can work with her team to develop the schedule for testing tasks during the various testing phases of the project. Since the testing team, like any functional area within the project, depends on other functional areas for delivering work products and meeting milestones, it is important that the testing schedule include these various touch points and highlight deliverables from other areas. The clearer the expectations for the deliverables handed to the testing team, with objectively verifiable criteria to ensure that there is no ambiguity concerning the quality of those deliverables, the smoother the hand-off may be. With clear expectations, the testing team can compare the quality of the deliverables against the objective standards set, either rejecting the deliverables if the quality is not there or assessing the impact of the poor-quality deliverables on both the testing team and the overall project. For example, if not all unit tests have been satisfactorily performed by the development team, the decision can be made, given enough schedule and resource availability, to have the development team invest additional time to complete the unit testing before handing the code to the testing team. Alternatively, the testing team can accept the incompletely unit-tested code and conduct additional tests, or at least be aware that there will invariably be a higher number of defects discovered since the quality of the code was not at the level expected at the time of hand-off.
Additionally, as schedules permit, testers can help developers with unit testing, affording testers greater knowledge of the software while allowing developers insight into test design techniques. Obviously, this also builds stronger working relationships and helps to break down functional area barriers that could otherwise be divisive.

There are many commercial project management tools that can be used to develop project schedules, build tasks with assigned resources, track task completion, identify the critical path (the sequence of tasks that must be completed in order for the project to finish on time), and display Gantt charts that illustrate task start and end dates across time. These tools also clearly show task dependencies and the overall effect of a delay in an independent task on the tasks that depend on it. In fact, at one place where Jim worked, the director of project management ensured that in every project schedule developed by her project managers, each and every task other than the lead task was dependent on another task in the schedule; there were no orphaned tasks, but each was interconnected. This ensured that any change in an early task would have the necessary ripple effect on subsequent, dependent tasks. Jim has carried this process with him and considers it a best practice.

Additionally, dependencies between functional areas may exist where there aren’t necessarily any formal deliveries. For example, there may be expectations and deliverables to the usability team such as functional software with complete features along with a usability analysis and then deliverables from the usability team such as defect reports and usability suggestions. This should all be clearly documented in the project schedule.

It is almost a given that, especially on large projects, the schedule will change. Normally, project managers take a baseline, similar to software developers freezing code to prevent changes during software builds. A baselined schedule allows the project manager and project team to easily assess changes as the project moves forward, as tasks complete and unfinished tasks move out or even move in with respect to their planned end dates. A baseline acts as a reference point, a fixed schedule against which deviations and changes from the plan can be measured. Each organization differs in how much deviation is allowed. At one place Jim worked, a project whose actual end date varied by up to 15 percent from the planned end date, whether later or, less likely, earlier, was still considered successful, and bonuses depended on that judgment. Of course, if the project manager believes that the schedule variance will exceed accepted thresholds, she should consult the project sponsor and perhaps petition for additional project time if there are valid reasons for the delay (e.g., increased scope, key resource unavailability, and so on).

The Agile methodology welcomes change, including to the test schedule, based on what is planned for each iteration. It is good practice to freeze the requirements, user stories, or other items introduced into a timeboxed iteration or sprint based on the velocity of the team, which is a measure of how much work (often measured in story points or person-hours) or how many user stories the team can complete in each sprint. Testing work estimates of course factor into the work planned for each sprint. This ensures that, at least within the sprint, the work is planned, understood, implemented, and tested. Any item not completely working (that is, tested satisfactorily) in the current sprint is deferred to a future sprint, and the team does not receive credit for that work item. Of course, especially based on the software functionality demonstrated to the product owner (user representative) at the end of a sprint, new user stories for new or changed functionality can be added to the backlog, the list of work to do, and the team, especially the test manager and testers, needs to remain flexible to change. However, as previously mentioned, it is good practice to freeze the scope of work to which the team commits at the beginning of a sprint.
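Committing sprint scope against team velocity can be sketched as follows; the story names, point values, and velocity figure are illustrative assumptions:

```python
# Commit backlog items, in priority order, until the team's velocity
# (story points per sprint) is exhausted; the rest stays on the backlog.
def plan_sprint(backlog, velocity):
    """backlog: ordered list of (story, points)."""
    committed, remaining_capacity = [], velocity
    for story, points in backlog:
        if points <= remaining_capacity:
            committed.append(story)
            remaining_capacity -= points
    return committed

backlog = [("login", 5), ("checkout", 8), ("search", 5), ("reports", 13)]
print(plan_sprint(backlog, velocity=20))   # "reports" (13 pts) doesn't fit
```

This greedy sketch simply skips any story that no longer fits and keeps scanning; a real team would negotiate and reorder scope in the sprint planning meeting rather than drop items mechanically.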

5.2.3 Budgeting and Resource Allocation

Project budgets and the assignment and allocation of resources differ between projects and between organizations. A test manager is typically constrained by the number of test team resources at her disposal. With this constraint in mind, her project budgeting exercises are limited to the scope of work assigned to her testing resources. Thus, the scope of work her team can manage will significantly affect project test schedules. She must adequately plan and manage her test resources across all planned and current projects in order to contribute to the success of each project without overwhelming her team.

Regarding specific budget needs, the test manager must consider:

  • Costs of regular, internal test team staff. This includes the staff’s salaries, periodic salary increases (often driven by annual performance goals and results), benefits, vacation allocations (especially important when planning project resource allocations and availability of staff to work on projects during seasonal vacation times of the year), and investing in the future for current staff, including training costs in line with continuing education, travel and entertainment expenses (e.g., to attend testing symposiums and training), books, trade magazine subscriptions, membership fees, and costs associated with achieving certification (e.g., the ISTQB’s Certified Tester Foundation Level) and/or maintaining certification.
  • Costs of additional test team staff. This could include costs to procure staff through staff augmentation, such as external or contingent staff aimed at project-specific work. This could also include external resources in key areas where the test team may not have sufficient competency (e.g., installing, configuring, and using a test automation tool). External resources could be local or offshore. In the case of offshore workers, specific care must be taken to accommodate differences in work styles, communication styles, and culture, especially across countries and time zones.
  • Costs of facilities. This can include costs associated with building, maintaining, and configuring test labs.
  • Costs of equipment. This can include both the costs of the equipment itself to be tested and the costs associated with test equipment used on projects, such as servers, networks, printers, and the like.
  • Costs of software. This can include the costs of various software used in testing, such as operating systems, interface software, and databases storing test data. This can also include costs of software tools such as database tools, reporting tools, analysis tools, defect tracking tools, test case management tools, and test automation tools. Commercial tools in these categories usually require licensing and maintenance and support costs of which the test manager needs to be aware.
  • Costs in investments in long-term efficiency improvements. One example is the investment in a test case automation tool. The benefits of this investment include conversion of some number of manual test cases (especially those test cases in the regression test suite) to automated test cases, and improving efficiency (quicker execution of routine test cases with results) as well as team satisfaction (test team members do not need to run mundane tests again and again manually but can focus on more interesting and challenging test activities). The costs of test case automation, however, should not be underestimated, as it takes some time and planning to acquire a tool and use it wisely, often allocating development resources, either internally or externally through contractors, to properly program the tool for best use.
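The cost-benefit judgment in the last item can be made concrete with a break-even calculation: one-time tool and scripting costs divided by the savings per regression cycle. The figures below are invented for illustration, not drawn from any real tool or project.

```python
# Hypothetical break-even sketch for a test automation investment: compare
# one-time tooling and scripting costs against per-cycle execution savings.
# All dollar figures are invented for illustration.

import math

def break_even_cycles(tool_cost, scripting_cost,
                      manual_cost_per_cycle, automated_cost_per_cycle):
    """Number of regression cycles before automation pays for itself."""
    saving_per_cycle = manual_cost_per_cycle - automated_cost_per_cycle
    return math.ceil((tool_cost + scripting_cost) / saving_per_cycle)

# $20,000 license plus $30,000 of scripting effort; a manual regression
# cycle costs $4,000 versus $500 once automated.
cycles = break_even_cycles(20_000, 30_000, 4_000, 500)
# (50,000 / 3,500) rounds up to 15 cycles before the investment pays off.
```

A sketch like this ignores ongoing maintenance of the automated suite, which in practice lengthens the payback period and should be added to the per-cycle cost.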

Expanding on the first two staffing items above, the test manager must consider both the composition (including part-time and full-time permanent internal staff and contractors, as well as the mix of on-premises, offshore, and outsourced team members) and capability (skill levels and experience from junior to senior) of the team. In one organization where Jim worked, when performing pre-project estimates of work, we used a simple template with each functional area’s specialty noted as columns with corresponding rows denoting seniority level, with each level assigned a fully loaded rate (standard rate of pay with overhead costs and allocations included). When a high-level estimate of work was developed, each functional area manager noted the number of estimated hours of a resource at each specific seniority level. This provided a view as to the number of hours and dollars associated with each functional area for the work involved. This served as a first cut when developing project estimates, of course, until a full work breakdown structure was developed where refinements would undoubtedly be made. The test manager should know her team and be aware of which resources can best meet the needs of each feature or each project. The feature estimates can be taken in isolation but, if combined into a project, each functional area manager must consider the resource capacities of their team members to ensure adequate coverage for the features in the project in addition to other concurrent projects and non-project (e.g., ongoing, maintenance) work. This can be a trivial or involved process depending on the dynamics and size of the test organization as well as the number and size of concurrent projects. At times, resources (human and otherwise) may be shared from other functional areas. The team should assess whether this sharing is beneficial or detrimental to the project. 
Jim was a software developer shared for a time by a test team, and at the time this arrangement helped the overall project. However, there are times where the sharing of the same system or environment between the development and test teams should not be leveraged, as this shared environment could adversely affect the quality of the overall system.
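The estimating template described above (seniority levels as rows, each with a fully loaded rate, multiplied by each functional area’s estimated hours) amounts to a simple sum of products. The sketch below uses invented rates and hours purely to illustrate the mechanics.

```python
# Illustrative sketch of the pre-project estimating template described above:
# seniority levels carry fully loaded rates (standard pay plus overhead and
# allocations), and each functional area supplies estimated hours per level.
# Rates and hours are invented for the example.

FULLY_LOADED_RATES = {"junior": 60.0, "mid": 90.0, "senior": 130.0}

def area_estimate(hours_by_level, rates=FULLY_LOADED_RATES):
    """Return (total hours, total dollars) for one functional area."""
    total_hours = sum(hours_by_level.values())
    total_cost = sum(hours * rates[level]
                     for level, hours in hours_by_level.items())
    return total_hours, total_cost

# The test team estimates 120 junior, 80 mid, and 40 senior hours:
# 120*60 + 80*90 + 40*130 = 19,600 dollars over 240 hours.
hours, cost = area_estimate({"junior": 120, "mid": 80, "senior": 40})
```

As the text notes, such a first cut is refined once a full work breakdown structure exists; the value of the template is that it yields comparable hour and dollar views across functional areas early on.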

One thing that the test manager, and in fact all functional area managers, should remember is that budgeting and resource allocation is not a static exercise but an ongoing endeavor. Shifting project priorities that require reallocating resources, staff attrition, and late-added requirements necessitating additional external resources all contribute to the flexibility and adaptability required of test managers. Test managers must remain vigilant in tracking their budgetary expenditures so that any significant variances can be reported and resolved immediately.

The project manager must work closely with each functional area’s manager in monitoring the project budget. While the budgetary estimates provided by the test manager and the quality of the test work provided by the test team are invaluable, the test team, similar to other functional areas such as software development, systems engineering, business/systems analysis, systems architecture, customer help desk support, documentation, and training, provides initial time-and-effort estimates to the project manager, who then combines, assesses, challenges, but ultimately manages the budget for the project from the project’s beginning to end. While the managers of each functional area, including the test manager, are responsible for their respective area’s budget, the project manager is responsible for overseeing the overall project budget to reduce or eliminate variances from plan as much as possible.

5.2.4 Managing and Tracking a Project

“Failing to plan is planning to fail,” as the old adage goes. In order for a project manager to manage and track a project, there obviously needs to be a plan developed and put into place with the proper metrics and mechanisms to assess at various points whether the project is on or off course. Without a schedule and a focus on the test team’s tasks, how would the test manager know if her team is on track, ahead, or behind? If your family is taking a trip within driving distance of your home, would you plan the trip from home and not consult a map (either hardcopy or electronic)? Even with a map, as you inevitably make a wrong turn, you need the map to note the variance to help bring you back on course. If there is no map for your journey or schedule to guide the project, how would you know if you are on track or derailed?

To Jim, a schedule or plan is perhaps the most important deliverable on a project. Although definition of requirements (so the team knows what is expected at the beginning) and success criteria (so the team knows if the project was successful at the end) are extremely important, it is the plan that describes who does what when for how long in what order and how the team will know when it is done.

  • Who: The project stakeholders (e.g., sponsor, working team, management) each contribute to some degree to the project work. These human resources should be clearly identified in the schedule so team members understand what they are to do.
  • What: These are the applicable tasks at the appropriate level of detail necessary to complete the project. Tasks (the what) are done by people (the who).
  • When: Each task is assigned a start and end date either based on calendar dates or as offsets from other tasks.
  • How long: Each task has a duration or how long it will take to complete from start to finish. Depending on the project, while it may not add value to track tasks at the individual hourly level, it may be helpful to identify tasks at the daily level (e.g., number of days to complete each task).
  • What order: Tasks are sequenced after the tasks on which they depend.
  • How to know when done: In one sense, the project is done when the last task in the schedule has completed. In the Agile Scrum methodology, the definition of done is an important concept and looks beyond simple task completion to truly determining if all the tasks to satisfy users’ requirements are done and the software is ready to ship.

A well-managed project then has a well-developed schedule with relevant tasks and clear task ownership. The test team is no exception to this and must have its tasks clearly defined and assigned so the team knows what is expected of them and when it should be done. Specifically for testers, this could include knowing exactly which tests to perform, when to start and complete each test, and the applicable metrics to be produced as a result of that testing. Typically, after the project manager works with each functional area to determine applicable tasks and durations (or estimates), he conducts a kick-off meeting with the project stakeholders. Among other things, this is an opportunity for the highlights and expectations of the project to be communicated to the entire team. After the kick-off, the project manager monitors the project schedule and works closely with the team members through the end of the project, updating and adjusting the schedule of tasks to project (and thus task) completion. The test manager works with the project manager to define the necessary quality goals and to track progress toward those goals. Some trade-offs inevitably occur, such as accepting a feature later than planned, risking incomplete or inadequate test coverage. The test manager needs to understand the risks in this late delivery and adequately communicate that to the project team, especially the project sponsor, so the correct decision can be made.

It is important not only to establish a workable project schedule built from estimates, but likewise to actively manage and monitor it in order to note any variances or deviations from the plan. Variance analysis can apply to budget and cost as well as schedule and timeline variance, and even to scope variance as a measure of scope creep. For example, standard PC tools can be used to establish a baseline schedule, track actual task completion with completion dates, and then compare the actual results against the baselined, planned results. In the case of test tasks, if completion of test cases is taking longer than anticipated, the test manager must determine why. There are various reasons, including unexpected or poorly estimated test case execution duration and tester inexperience. There also may be a greater number of defects discovered than planned in a particular area or module. This could indicate that the requirements were not clearly understood by the development team, the code wasn’t properly unit tested before delivery to the test team, etc. Even if the test team wasn’t responsible for the variance, bringing the variance to the team’s attention can help uncover process issues and the need to create new or improve existing processes. The point is that without a scheduled plan and periodic checks of actual information against that plan, the team would be unable to determine success or would realize issues later than necessary, when it may be too late to course-correct.
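The baseline-versus-actual comparison described above can be sketched as a small date calculation: for each completed task, the variance is the number of days between actual and planned completion. Task names and dates below are invented for illustration.

```python
# Minimal sketch of schedule variance tracking for test tasks: compare each
# task's actual completion date against its baselined date. A positive value
# means the task finished late; negative means early. Data is invented.

from datetime import date

def schedule_variances(baseline, actual):
    """Days late (positive) or early (negative) per completed task."""
    return {task: (actual[task] - planned).days
            for task, planned in baseline.items() if task in actual}

baseline = {"design tests": date(2024, 3, 1),
            "execute regression": date(2024, 3, 15)}
actual   = {"design tests": date(2024, 3, 4),
            "execute regression": date(2024, 3, 13)}

variances = schedule_variances(baseline, actual)
# "design tests" slipped 3 days; "execute regression" finished 2 days early.
```

Standard PC tools compute exactly this kind of comparison against a saved baseline; the point of the sketch is that variance only exists relative to a recorded plan.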

In Scrum projects, where many variances are due to circumstances within the team, the Scrum master is tasked with helping to remove obstacles that prevent the team from moving forward. Often, these obstacles are raised during the course of daily Scrum meetings, in which each team member briefly discusses the daily accomplishments, plans for the next day, and current obstacles or impediments. The Scrum master then works outside of these meetings to help resolve these obstacles.

There are various test metrics that can be useful for project tracking:

  • Test case design completion percentage. This is simply the percentage of test case designs that are completed. This is used to help gauge the additional work required to complete designing the entire suite of test cases.
  • Cost of quality. This measures the value and efficiency of testing by classifying project costs as the costs of prevention, detection, internal failure, and external failure.
    • Prevention costs are those that prevent or avoid quality problems and may include creation and maintenance of a quality system, as well as training.
    • Detection costs (also called appraisal costs) are costs involved with measuring and monitoring activities related to quality such as verification testing and performance of quality audits.
    • Internal failure costs are costs incurred before delivery to the customer and may include costs of rework to fix defective material and waste such as performing unnecessary work.
    • External failure costs are those incurred to fix defects that customers have discovered, such as repairs, servicing, and returns.
  • Defect detection percentage. This is the number of defects found by testing divided by the total number of known defects. If this percentage is high, it indicates that the test team found most of the defects, with customers or users finding only the remainder once in production.
  • Defects found versus defects expected. This indicates how many defects have actually been discovered compared with those defects expected to be found. If the ratio is too high, this could indicate quality issues where more defects than planned were actually found or there is an improper estimate of the overall expected defects by the test team.
  • Test case execution results. This reports on the progress of testing by providing the percentage of test cases executed in each of the following statuses: pass, fail, or blocked result.
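Two of the metrics above reduce to simple ratios, sketched below with invented counts: defect detection percentage (defects found by testing over total known defects) and test case execution results by status.

```python
# Hedged sketch of two of the tracking metrics above; all counts are invented.

def defect_detection_percentage(found_by_testing, found_in_production):
    """DDP: defects found by testing as a share of all known defects."""
    total = found_by_testing + found_in_production
    return 100.0 * found_by_testing / total

def execution_status_percentages(results):
    """Percentage of executed test cases in each status."""
    total = sum(results.values())
    return {status: 100.0 * n / total for status, n in results.items()}

# Testing found 90 defects; users found 10 more in production -> DDP of 90%.
ddp = defect_detection_percentage(90, 10)

# 150 passed, 30 failed, 20 blocked -> 75% / 15% / 10%.
statuses = execution_status_percentages({"pass": 150, "fail": 30, "blocked": 20})
```

Note that DDP can only be computed reliably some time after release, once production defect counts have stabilized, so it is a retrospective rather than an in-flight tracking metric.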

Developer metrics include:

  • Defect removal effectiveness. This measures the ability to remove defects in the phase where they are discovered, whether requirements, design, development, or production. It is most effective to remove a defect in the same phase in which it is uncovered; for example, it is far better if a requirements review identifies a defect and the analyst resolves it while still in the requirements phase than if the defect propagates to the design or development phase, where it is eventually resolved. Basically, the earlier a defect is found and fixed, the better.
  • Feature completion. This simply relates the number of features considered completed versus the number of features remaining as a gauge of overall project completion.
  • Unit test coverage. This is an indicator of how much code is covered via developers’ unit tests.

5.2.5 Dealing with Trade-Offs

Perfect projects are rare; actually, they don’t exist. Most projects require trade-offs among quality, schedule, budget, and features. This is best depicted in the project management triangle shown below.

image

Figure 5-1 Project management triangle

Although variations of this triangle exist, let’s go with this basic model. Every project has constraints, such as limited time, limited cost, and limited scope. The project management triangle depicts this, with time or schedule on one vertex, cost or budget on another vertex, and scope or features on the final vertex. We like one variant of the triangle, as shown in Figure 5-1, which includes quality within the triangle, showing that quality is dependent on the three constraints as well as depicting the effect on quality based on changes to the constraints.

  • The time or schedule constraint refers to the amount of time necessary to complete the project based on project members’ estimates of the work required. This, of course, is heavily dependent on the scope of the project.
  • The cost or budget constraint is the budgeted amount for the project. This includes both internal costs, such as labor for existing human resources doing actual work on the project, and external costs, such as staff augmentation (e.g., hiring contractors on a temporary basis to complete tasks associated with the project), hardware and infrastructure costs related to the project, and software costs, such as necessary licenses for tool usage.
  • The scope or feature constraint refers to the actual features and functionality required by the user that must be implemented via the project.

All three project constraints influence the overall quality of the software product. Affecting any one constraint has an effect on the other two dependent constraints and could affect the overall quality of the product. For example:

  • Time. If the project sponsor requires that the project complete earlier than planned, thereby reducing the time constraint, this will invariably affect the cost (increased cost by perhaps paying human resources additional evening and weekend work to complete work given the new, compressed schedule) and the scope (decreased scope as not all originally planned features may be implemented given the reduced time).
  • Cost. If the cost allocated to the project is reduced, this could result in reduced scope, since there is not enough money to pay for the full scope originally planned, and increased time, since there may be fewer resources working on the project, thereby requiring a longer time to delivery.
  • Scope. If the project sponsor (representing the customer and end user) increases the scope of the project by requesting additional functionality or significant changes to planned functionality, this scope creep would typically require additional time and additional cost to design, implement, and test the added changes.

The beauty and simplicity of this model is that the mix of constraints, and any changes to them, affect the overall quality of the software solution. The question is always how an acceptable level of quality can be maintained given changes to the schedule, budget, and/or functionality. This challenge always awaits the project team, especially the project manager, who is responsible for managing the project within the three constraints, and the test manager, who must ensure that high quality is preserved despite changes to those constraints. This is why it is imperative that any constraint changes mandated by the project sponsor be immediately discussed with the project manager, who in turn will rely on both the development manager and test manager for consultation on the true effect the changes will have on the project and its objectives (which often amount to producing the agreed-upon functionality within the agreed-upon budget and timeframe while meeting quality expectations).

While the above holds true for projects following a more traditional approach, on Scrum projects time and quality are generally fixed; the schedule is not elastic (sprints cannot simply be added or removed), and quality considerations cannot be compromised. Therefore, the only variable on Scrum projects that can change is scope, or user stories, as stories can be added to or removed from the sprint backlog to properly meet expectations.

Preferably at the start of a project, it would be ideal to understand from the project sponsor which constraint is the most important. Is the sponsor interested in a well-defined set of functionality and features at the expense of a slight variation in schedule and budget? Or, is a particular deployment date most important, perhaps to gain the advantage in getting to market before the competition, given minimal changes to functionality and budget? Or, is the budget cast in stone and is more important than the full set of desired functionality and overall timeframe in completing the schedule? Knowing this important information up front will help the project manager and functional areas better plan the project, understanding what is really of paramount importance to the sponsor.

Unfortunately, this information is not always known at the start of the project, but may eventually come to light somewhere during the course of the project. In fact, the sponsor herself may not know the proper mix of constraints until the project is well under way. This requires the team to be as flexible (dare I say “Agile”?) as possible. This means that, from a testing and overall quality perspective, the test manager must be aware of the interdependencies between project components in order to make decisions on the impact of trade-offs. It is an unfortunate reality that, at times, quality may suffer due to the trade-offs in constraints that are required. For example, it may be determined during the testing phase that key functionality, which is essential to the overall product, was missed at the requirements stage. This may result in a decrease in overall quality, since the test team could not have planned for this additional functionality and may not be able to adequately test given the existing time and cost constraints. The test manager must clearly explain the impact of the reduction in testing time and associated risk to the product quality, noting that this new functionality may result in other areas not receiving adequate or even any testing time.

5.2.6 Change Management

Change on a project is a given, a constant, something that inevitably will occur if not once then several times on typical projects. Change is so prevalent that the Agile development methodology acknowledges change as one of the key principles in its Agile Manifesto: “Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.”4 The test manager must therefore have a flexible way to quickly understand the change, assess the impact of the change, and adapt to the change accordingly.

Change can occur in areas such as requirements, timeline, and budget (the project management triangle’s triumvirate of scope, schedule, and cost), and overall quality. Risks to projects that can affect the test team include unanticipated issues with test environments, shortening the overall testing time; unavailability of hardware infrastructure, such as a necessary server, hampering the development team’s compatibility testing, resulting in additional testing by the test team; and the like. (Risks are covered more extensively below.) It is important for the test manager to be able to perform an impact analysis in order to truly determine the ramifications of changes to the testing aspects of the project. This should be done more broadly by the entire team and include all changes and not necessarily only those that impact testing. If the project team does not have a process for impact analysis, it behooves the test manager to establish one for her test team to benefit the overall quality of the product to meet project objectives.

The individual impact analysis for each change is part of a larger change management process that tracks, schedules, and assesses each change regarding impact. Given proper impact analysis, non-mandatory changes (such as in what-if scenarios) can be discussed by the team before acceptance, again with overall consideration of the effect the change will have on the project’s outcome. The information that the change management process captures can also be used toward the end of the project (or at each Agile Scrum sprint closeout) during project retrospectives and can serve as useful input for future projects.

5.2.7 Time Management

Experienced test managers and test teams, those who have invested years working both together as a team and within the testing field, will discover ways to make best use of everyone’s time on projects. A good test manager, besides protecting, supporting, and growing her test team, will ensure that the test team is making the most efficient use of its time. This in part includes the test manager attending meetings, including project status meetings with the project manager and functional area managers and leads, providing her team’s overall status and challenges, and then briefly sharing project highlights from the status meetings with her team. This ensures that the team can focus on their primary tasks and does not need to attend meetings that the test manager can and should attend as the representative of the test team.

Aside from insulating the team from additional meetings, the test manager can help the team make most efficient use of their time in the following ways:

  • Communication. The test team should not be isolated (or perceived as isolated) from the rest of the project team, yet attending every meeting to ensure “face time” or representation is neither efficient nor wise. So, while the test manager may attend most meetings, this does not preclude test team members from offline, individual discussions, phone calls, or email exchanges. In fact, a test team that does not communicate with other project team members will prove to be less effective, even if more efficient, jeopardizing the overall quality of the product. If the project uses an offshore test team or testing team members in other time zones, communication and understanding are especially important, and the test manager needs to ensure that this communication works effectively for all. Since communication and collaboration are unofficial ingredients for a successful project, the test team is certainly encouraged to meet and discuss issues with other team members. However, the test manager must find the correct balance between her team’s participation in meeting time versus test activity time. One Agile Scrum technique intended to limit every team member’s time in meetings is the daily Scrum. These meetings, often no more than 15 minutes per day, require each team member to stand (sitting tends to make participants more comfortable and prone to discuss issues beyond the meeting’s time limits) and, one at a time, briefly report what he accomplished the previous day, what he plans to work on that day, and any obstacles he is facing so the Scrum master can help remove project impediments. This is an excellent example of making meetings more productive by intentionally limiting the time the team spends in them.
  • Timeboxed periods. Just as the time allotted to daily Scrum meetings is respected and preserved by the team, the test team (and in fact all team members) on Agile projects should conform to the timebox established for each sprint (iteration), which usually is from two to four weeks. If there are any untested items (user stories) or those that the team believes have not been adequately tested, the sprint should not be extended. Rather, in this case, those items should be moved to a later sprint, and the team should not get credit for those items until they are properly tested.
  • Requirements issues. The test team should factor in adequate time to resolve questions and issues in the requirements with the business analyst, project stakeholder, or product owner (in Agile Scrum, the stakeholder responsible for user stories/requirements) as they affect test design. This additional time should be taken into account when the test team estimates its testing tasks.
  • Test case automation. Executing test cases takes time. Manually executing test cases and recording results, especially repetitive tests as included in a regression test suite, can be a less than optimal use of valuable testers’ time. Test execution automation tools can help relieve the test team from performing repetitive testing, thereby allowing them to do more challenging and interesting work. However, test automation itself comes at a cost, including researching the best tool for the team’s organization, budgeting for software licensing and any vendor services, infrastructure considerations, tool training, and development and configuration of the tool. As with most things in life, the benefits of a test automation tool need to be weighed against the overall costs.
  • Regression testing. Related to test case automation, an effective regression testing strategy is to automate as many regression test cases as feasible, since regression testing, by its very nature, is repeatable. This automation of regression test cases relieves the test team of monotonous, repeated manual testing, which can be error prone and thereby negatively impact quality, allowing them to do more interesting and value-added work.
  • Smoke testing. It’s always a good idea to sample something before accepting it. Some beachgoers sample the ocean by slowly getting accustomed to the temperature of the water, determining how rough or calm it is, and so on. Similarly, testers should smoke test, or exercise basic functionality, before accepting the code received from development and propagating it to multiple testers. If there are issues with the quality of the code and its overall functionality, it makes most sense to catch this early and fix it rather than waste valuable testers’ time testing poor-quality code.
  • Training. Keeping the skill sets of the test team current not only benefits their careers but also increases their value to the department. One way to do this is for the department to invest in continual training. However, this training, such as in Agile testing concepts and practices or learning how to be proficient in a test automation tool, ideally should not occur when key projects require testers to focus on testing tasks and should not be charged to a particular project’s budget unless this training is directly applicable to the success of a project.

5.3 Project Risk Management

Learning objectives

LO 6.3.1

(K4) Conduct a risk assessment workshop to identify project risks that could affect the testing effort and implement appropriate controls and reporting mechanisms for these test-related project risks.

It is said that the only things absolutely certain are death and taxes. Therefore, life itself involves risks or uncertainties. You’ve probably heard of the mythical bus that wipes out employees; in our careers we’ve heard managers stating that it always made sense to mitigate the risk of key employees getting “hit by a bus” (variations include cars, trucks, or trains, but never boats for some reason) by providing training and hands-on experience to other employees to ensure coverage and maintain continuity in operational tasks (basically providing backups). Projects are no different in that they too contain risks or uncertainties. The Project Management Body of Knowledge (PMBOK) defines project risk as “. . . an uncertain event or condition that, if it occurs, has a positive or negative effect on one or more project objectives such as scope, schedule, cost, and quality.”5 Since project success is usually assessed by the sound balancing of the constraints within the Project Management Triangle (see Figure 5-1), consideration and management of uncertain events or conditions are key to a successful delivery of a product, service, or result (that is, a project). When we consider risks, we tend to think of only the unplanned negative things that can happen to a project, such as a vendor going out of business or a cyber-security breach of a key integration system, or, in the testing realm, sudden unavailability of a test tool that hampers progress in executing test cases. For purposes of our discussion, we’ll only consider negative risks (threats) or risks that, if they do occur, will have negative consequences on the project. This definition of risk is absolutely aligned with that found in the ISTQB syllabus, which defines risk as “a factor that could result in future negative consequences” with the focus on the negative impacts of risk. 
In fact, as risks are assigned a risk level either quantitatively (for example, 1 (low) through 5 (high)) or qualitatively (“low,” “medium,” “high”) based on their potential impact and likelihood of occurring, appropriate emphasis and mitigation strategies and actions can be taken. Positive risks, outside of the scope of our discussion, are often considered opportunities, such as a project completing substantially under budget. While on the surface this may seem like a wonderful project outcome, a project that comes under budget is often a reflection on the team (and especially the project manager) not properly estimating project effort. As credibility issues come into play, sponsors may be reluctant to fund future projects if the project manager and team have previously done a poor job of estimating a project’s efforts.
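The quantitative risk levels described above are commonly computed as likelihood multiplied by impact, so that the highest-scoring risks receive mitigation attention first. The sketch below is illustrative only; the example risks and their 1-to-5 ratings are invented.

```python
# Illustrative sketch of quantitative risk scoring: each project risk gets a
# likelihood and an impact on a 1 (low) to 5 (high) scale, and the product
# ranks the risks for mitigation. Risks and ratings are invented.

def risk_level(likelihood, impact):
    """Simple quantitative risk score: likelihood x impact (range 1-25)."""
    assert 1 <= likelihood <= 5 and 1 <= impact <= 5
    return likelihood * impact

risks = {
    "test tool suddenly unavailable": risk_level(2, 4),   # 8
    "key tester leaves mid-project":  risk_level(3, 4),   # 12
    "test environment instability":   risk_level(4, 3),   # 12
}

# Sort highest score first; these risks get mitigation attention earliest.
ranked = sorted(risks.items(), key=lambda kv: kv[1], reverse=True)
```

A qualitative scheme works the same way with a low/medium/high matrix instead of numeric products; either way, the value lies in making the team’s prioritization explicit and revisitable as the project proceeds.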

Using a SWOT Analysis

On a side note, it is interesting that risk as both positive opportunity and negative threat appears in two of the four quadrants of a standard strengths, weaknesses, opportunities, and threats (SWOT) analysis, which is a simple yet helpful method to determine factors that can affect not only a project’s outcomes, but also individual career assessments and planning.

The project manager and project stakeholders, including the test manager, can collaborate to develop and maintain this grid for a project. Notice that the grid groups the aspects of the analysis into positive/negative as well as internal/external categories.

  • Positive: Strengths and opportunities have positive effects on a project.
  • Negative: Identified weaknesses and threats have the potential to harm a project.

image

Figure 5-2 Sample SWOT analysis on a hypothetical project

  • Internal: Strengths and weaknesses are typically considered internal to a project.
  • External: Opportunities and threats pertain more to external factors affecting a project.

For example, a project strength such as an experienced team who has worked well on prior projects is both positive and internal.

After the SWOT analysis has been completed, the project manager and team should look to:

  • Capitalize on project strengths, such as allocating key people resources to a project to make best use of their talent in order to help the project be as successful as possible.
  • Improve upon weaknesses, for example through training or hiring experienced test team members to make them more proficient.
  • Exploit opportunities, such as upgrading a test tool in order to realize much-needed functionality improvements.
  • Eliminate or reduce threats, perhaps by investing in backup plans in case key external resources become unavailable.

The beauty of the SWOT analysis lies in both its simplicity and its broad application. SWOT is a simple tool to develop, apply, and maintain, requiring no training or special skills. SWOT can be used at the project level, tied directly to risk identification and management; by a specific team, such as the test team evaluating the effectiveness of its members; and even in individual career management. In fact, Jim has developed his own professional career SWOT analysis and has taught it to employees to help them manage their own careers.
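
For readers who like to see the grid in structured form, the following sketch represents a SWOT analysis as data so it can be filtered along the two axes described above. The quadrant categories come from the discussion; the example items are hypothetical:

```python
# Each quadrant is tagged with its effect (positive/negative) and
# origin (internal/external), matching the grid described above.
swot = {
    "strengths":     {"effect": "positive", "origin": "internal",
                      "items": ["experienced team from prior projects"]},
    "weaknesses":    {"effect": "negative", "origin": "internal",
                      "items": ["limited test automation skills"]},
    "opportunities": {"effect": "positive", "origin": "external",
                      "items": ["test tool upgrade with needed features"]},
    "threats":       {"effect": "negative", "origin": "external",
                      "items": ["key external resource may become unavailable"]},
}

# Everything that could harm the project (weaknesses plus threats):
negatives = [item
             for quadrant in swot.values() if quadrant["effect"] == "negative"
             for item in quadrant["items"]]
print(negatives)
```

In practice the grid lives on a whiteboard or a slide, of course; the structure above just makes the positive/negative and internal/external groupings explicit.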

While managing project risks is ultimately the responsibility of project managers, they are heavily dependent on their functional area colleagues, such as the test manager, to help identify risks, assess their impact and likelihood, and work through viable mitigation plans for each risk.

5.3.1Managing Project Risks

Project risk management is all about identifying, assessing, and controlling potential project risks that could have a negative impact on a project and its overall goals and objectives.

Although each company/department/shop may use slightly different names to identify the stages or phases a project undergoes in its life from beginning to end, the standard project lifecycle includes the following phases as shown in Figure 5-3:

image

Figure 5-3 A generic project lifecycle

  • Initiation (starting the project)
  • Planning (organizing and preparing)
  • Execution (carrying out the work)
  • Closing (ending the project)

When a project is in its planning phase, one of the deliverables a project manager develops is the project management plan. This plan includes a series of smaller, focused management plans that support the overall project management plan (e.g., communications, stakeholder, scope, etc.), including a risk management plan. The full list of project management plan components includes:

  • Change management plan
  • Communications management plan
  • Configuration management plan
  • Cost baseline
  • Cost management plan
  • Human resource management plan
  • Process improvement plan
  • Procurement management plan
  • Scope baseline
    • Project scope statement
    • Work breakdown structure
    • Work breakdown structure dictionary
  • Quality management plan
  • Requirements management plan
  • Risk management plan
  • Schedule baseline
  • Schedule management plan
  • Scope management plan
  • Stakeholder management plan

Although in a perfect world, we’d aim to eliminate all risks, in reality there are only a few main ways of dealing with risk, which apply to all risks, including test-related risks, identified in the risk management plan. These strategies include:

  • Avoidance. Some risks can be eliminated, but usually not without consequences. For example, if a new risky feature cannot be developed and tested as planned without potentially adversely affecting other features in the product, this risky feature can be deferred to a later release once the product has matured and is more stable. However, this would impact the project objectives of delivering the full scope in the planned timeframe and the project sponsor obviously must agree to this change. This risk strategy in effect avoided the risk and potential negative consequences of introducing poor quality into the product.
  • Mitigation. Risk occurrences and responses can be proactively planned and then monitored to reduce the overall probability of the risk occurring at all. If, for example, the testing effort for a major component (or even the entire testing effort) is outsourced, it is a good idea for the test manager to perform due diligence in maintaining a strong relationship with the outsourced partner to ensure that the testing staff is ready and able to begin testing as planned. Obviously, trust in and the reputation of the outsourced partner are key to managing this relationship in order to help manage the test effort and ultimately drive the success of the overall project and the quality of the product, service, or result.
  • Transference. Some risks can be shifted from one team to another, perhaps even to a third party. An example of this is the test manager insisting that any delay in the start date for test execution will result in an equivalent delay in the end date for test execution. In essence, the test manager has transferred the risk of test execution delay onto the rest of the project team. Of course, on projects where the end date is fixed and cannot be compromised, this squeezes the testing timeframe, raising the potential that fewer than the planned test cases will be executed and adding risk to the quality of the product. To mitigate this, testers may need to work additional hours during the testing phase, additional short-term testing resources (e.g., developers) can be used to run test cases, and developers can be placed on immediate standby to work through production issues as soon as they are discovered.
  • Acceptance. In some cases, it is either impossible or impractical to mitigate a risk, and the project team willingly accepts it. For instance, if there is only one test server available to the project and it is not cost-effective or time-effective to purchase, install, and configure an additional test server to act as a backup, the team accepts the risk that the sole test server may crash, resulting in a potential delay in the test schedule and overall project schedule until the server can be repaired and made operational again.
  • Contingency. This involves having a plan in place to effectively reduce the impact of a risk should it occur. Retaining a team of technical support and help desk personnel acts as a contingency against the risk of software defects delivered to customers and end users.

The project manager begins building the risk management plan by meeting with the applicable functional area managers/leads, including the test manager, to identify and document the various risks that could negatively impact the project. Often, a project manager may consult lessons learned or retrospective documents from similar projects to see whether any risks affecting previous projects may rear their ugly heads again and negatively affect the current project. Additionally, a project management team may maintain a template of common risks and successful mitigation strategies based on prior project experience and, when applicable, common sense. This initial research sets the project manager off to a good start on risk management. In the late 1990s, Jim was involved with a company that achieved a Capability Maturity Model (CMM) Level 5 rating, the highest within Carnegie Mellon’s Software Engineering Institute’s maturity framework. As part of this rating, our quality management department developed a risk management template with appropriate job aid documentation that captured key risk identification information, pre-mitigation analysis, mitigation steps, and post-mitigation analysis. Let’s review a similar template in Figure 5-4.

image

Figure 5-4 Sample risk management template

  • Risk identification. This section includes basic and unique information concerning each risk. For instance, risk number 4 documents the scenario where a key resource may leave the project. Notice that this risk, if it becomes reality, would affect the following risk types: project, the project schedule, and project cost. Each risk can map to one or more risk types.
  • Pre-mitigation analysis. The team estimates a 50% probability that this risk may occur. This percentage may be based on prior experience within the department or company, or on knowledge of the resource’s intentions given his/her current level of job satisfaction and any prior history of job changes. This example assumes a fully loaded cost (including benefits and overhead allocations) of $200,000 per year. So, if absolutely nothing is done to address this risk, its expected impact to the project is $100,000, given the employee’s cost and the probability of occurrence. The project manager and project sponsor determine whether any actions should be taken and/or costs incurred to mitigate this risk.
  • Mitigation steps. The team identifies actionable steps that can be taken to mitigate the risk, either fully or, more usually, partially. In this example, HR and management speak with the person to get a better sense of whether he/she might be looking to leave, based on overall satisfaction or dissatisfaction at work. Although this action isn’t foolproof, the team notes that this action, which is still open (that is, hasn’t begun or been implemented yet), would not cost anything other than internal labor costs, which aren’t generally listed here. While the mitigation is specifically targeted at the risk of the person leaving the company, other actions could be taken if the resource stays within the company but either transfers to another department or is reassigned to another project. The project team would decide whether these other scenarios, with applicable actions, should be defined and managed through this risk management file.
  • Post-mitigation analysis. The team assumes that taking this specific action will result in a reduction in the probability of occurrence from 50 to 30 percent; however, the cost of occurrence remains the same at $200,000. Given the actions and resultant reduced probability of occurrence, the real impact to the project is now estimated at $60,000.
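
The arithmetic behind the pre- and post-mitigation figures above is a simple expected-value calculation (probability times cost), sketched here with the numbers from the walkthrough:

```python
def expected_impact(probability: float, cost: float) -> float:
    """Expected monetary value of a risk: probability of occurrence x cost."""
    return probability * cost

fully_loaded_cost = 200_000  # annual fully loaded cost of the key resource

pre = expected_impact(0.50, fully_loaded_cost)   # before any mitigation
post = expected_impact(0.30, fully_loaded_cost)  # after the mitigation step

print(f"pre-mitigation impact:  ${pre:,.0f}")   # $100,000
print(f"post-mitigation impact: ${post:,.0f}")  # $60,000
print(f"value of mitigating:    ${pre - post:,.0f}")
```

Comparing the $40,000 reduction in expected impact against the cost of the mitigation action itself is what lets the project manager and sponsor decide whether the action is worth taking.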

This approach was a practical and honest way of managing project risks. Since the mitigation actions did not always entirely eliminate a risk, the template had provisions to show, from a financial perspective, the remaining impact of risks that could not be fully mitigated; this was the residual risk remaining after all necessary mitigating controls were put in place. Related to the earlier discussion of the risk acceptance strategy, our template and process, although not necessarily reflected in Figure 5-4, included a contingency reserve, a small percentage of the overall project cost reserved to handle risks.

The ISTQB defines confidence intervals as the period of time within which a contingency action must be implemented in order to be effective in reducing the impact of the risk. Often, there are trigger dates that act as indicators to both start and stop the contingency plan of action; the trigger dates fall within the confidence intervals. For example, the aforementioned scenario of the inoperable test server would pose a risk to the project from shortly before the start of the test phase, when the server is configured in anticipation of the upcoming testing activities, through the test phase, and possibly for some time after the test phase if post-deployment defects are found, requiring retesting on the test server and redeployment. These timeframes denote when the risk could be realized so that the proper mitigation plans and actions can be taken; outside of this window, the risk is not realistic and, if it does occur, would have minimal to no impact on the project.

ISTQB Glossary

confidence interval: In managing project risks, the period of time within which a contingency action must be implemented in order to be effective in reducing the impact of the risk.
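
The trigger-date idea can be sketched as a simple window check; the dates below are invented for illustration:

```python
from datetime import date

def contingency_active(today: date, start_trigger: date, stop_trigger: date) -> bool:
    """True if the contingency plan should currently be in effect."""
    return start_trigger <= today <= stop_trigger

# Hypothetical window for the test-server risk: shortly before the test
# phase begins through shortly after post-deployment verification ends.
start = date(2024, 3, 1)
stop = date(2024, 6, 15)

print(contingency_active(date(2024, 4, 10), start, stop))  # True
print(contingency_active(date(2024, 8, 1), start, stop))   # False
```

Outside the window, spending effort on the contingency would be wasted, which is exactly the point of the confidence interval.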

The best and most comprehensive risk management plan does the project absolutely no good if, after it has been developed, it sits on a shelf and gathers dust. The key to good risk management is to actively review the plan with the functional areas, including the test team, periodically throughout the life of the project. Risk review includes reevaluating existing risks to assess any changes to their likelihood, impact, and contingency and mitigation plans as the project progresses and new information becomes known. Additionally, during risk review, the team should consider adding any new risks that have surfaced since the previous review. Lastly, any tracked risks whose window has passed should be dispositioned as closed during the course of the review.

Aside from the mechanics and discipline of reviewing, updating, and actually using the risk management plan and associated deliverables, there are benefits to project team members meeting periodically, collaborating, and wrestling with current and potential issues that could adversely affect the project. Often, as a good practice, project managers include schedule and action item review with the team periodically, as often as every week. Depending on the dynamics and particulars of the project, the risk management plan may be reviewed at each weekly project status meeting or less frequently, such as every two weeks or monthly. The project manager, with buy-in from the project sponsor and team, determines the review frequency. The key point is that manager/lead representatives from the applicable functional areas on software development projects, such as system architecture/engineering, system/business analysis, software development, testing, deployment, operations, training, help desk support, and documentation, collaborate periodically on project issues and risks, including test risks.

5.3.2Participating in Project-wide Risk Management

As a key stakeholder on the project team, the test manager must play an active role in the entire risk management process, including initial identification of risks and mitigation plans as well as periodic risk reviews. In fact, Jim has seen places where the key stakeholders on software development projects were the development manager, the test manager, and the project manager. This is why it is so important for a good project manager to include, as mentioned earlier, periodic review with the functional area representatives of not only the schedule and action items, but of the risks documented in the risk management plan or risk register.

While test managers are clearly involved in test-related risks, there are many other types of risks not tied directly to the testing effort that will nonetheless impact testing. This makes sense when you think of the broad definition of software quality assurance. We often think of quality assurance as predominantly the testing tasks and activities undertaken within a project to ensure that the product has high quality. While this is certainly one important aspect of quality assurance, there is much more to quality assurance than testing alone. Other areas affecting overall quality and potentially incurring additional project risk include:

  • Requirements stage. Poorly documented requirements, incomplete requirements (e.g., lack of non-functional requirements such as performance, usability, security, etc.), ineffective requirements gathering methods, and unavailability of key customers to help understand requirements can all affect testing, the overall quality of the product, and the overall success of the project, so test managers must be cognizant of poor-quality requirements. While requirements are often a key basis for the development of test cases, requirements are not the only possible test basis document. In fact, quality risk registers, user documentation, defect taxonomies, the current application’s functionality (if the project is intended to replace an existing application), and other documentation all serve as test basis documents. It may sound harsh to apply the old computer adage of garbage in/garbage out here, but poor requirements lead to poor test cases. Even the best level of traceability, which is the matching of test cases to requirements to ensure test coverage, cannot ensure overall quality if requirements and other test basis documentation are sorely lacking. Of course, training business analysts and hiring those with experience and certification help in the requirements gathering and documenting efforts. Additionally, the Scrum mind-set treats requirements gathering as an iterative and interactive process, where the product owner (the face of the customer, representing customer needs) plays a key role throughout the software development project. The product owner defines requirements in the form of user stories. The product team (developers and testers) interacts with the product owner to clarify ambiguities in the user story requirements.

This way the developers know what to design and develop, and the testers know what to test. The beauty of this approach is that a small subset of requirements is defined and the resulting software is developed and tested in a short period of time, usually two to three weeks. This means that the product owner sees tangible, working software early and has time to request changes to help the software evolve to better meet user needs. Many Scrum projects fail due to the inability of the product owner to devote sufficient, focused time to the project; generally, the more user involvement, the better the quality of the software produced.

What Makes for a Good User Story?

A good Scrum user story has several, general characteristics. You can think of these characteristics as following the INVEST6 acronym, as solid user stories are:

  • Independent (stories are separate and can be scheduled and implemented in any order)
  • Negotiable (story details are based on collaboration between the product owner and team members)
  • Valuable (stories have value to the customer)
  • Estimable (stories contain enough information in order for a high-level estimate to be completed)
  • Small (stories are the correct scope and can be implemented in no more than one month’s time)
  • Testable (stories contain clear acceptance criteria)

  • Design stage. Although poor requirements have a ripple effect and can adversely impact testing, inadequate requirements can likewise affect the quality of the system design. Poor design, including incorrect choices in architecture/platform, programming languages, and databases, as well as beginning the design stage before requirements have been completed, can all prove problematic for the test manager and test team. One reality check here is that, in order for a software development project to remain on schedule, it is often necessary to allow the design stage to begin before the requirements stage has been completed. Jim has seen this in practice: the project methodology for software projects included a threshold that a minimum of 85% of the requirements needed to be completed before the project was allowed to move to the design phase. In this case, based on project history, it was determined that the risk of the project running late to some extent outweighed the risk of premature and faulty design.
  • Development stage. Since testing is dependent on the quality of the software delivered from the development stage, poor (and especially no) unit testing, lack of adequate code reviews, absence of static code analysis, poor coding practices, and other risks affect testing in a negative way, often increasing the burden of testing or rejecting the code delivery until the quality of the delivered software is improved.
  • Integration stage. The following are just some items occurring during the integration stage that can adversely affect testing: failure to capture outstanding errors, or capturing them inaccurately; insufficient or unavailable integration tools; an inadequate integration test lab; and a poor integration procedure that is not followed methodically. Continuous integration, where developers integrate their code into a code repository up to several times a day, can help mitigate the risk of integration issues.
  • Other. Of course, there are inherent risks outside of the standard SDLC areas, including project resource risks (e.g., lack of qualified resources, redeployed resources), vendor risks (e.g., a key vendor going out of business), etc. Additionally, there can be risks based on methodology, such as Scrum’s preference for working software over comprehensive documentation. This may pose an issue if the test team struggles with what it sees as insufficient documentation, especially in terms of requirements and design documentation; testers may feel that they just don’t have enough documentation on which to base solid test cases. This risk can be mitigated by the working team setting documentation expectations up front: the team agrees early on what constitutes the correct level of documentation for all stakeholders to do their work adequately; anything less constitutes a valid risk to the overall quality of the product.

To help keep the test manager aware of these risks, which are generally outside of his control, the project manager can supplement the risk management plan or risk register by listing the functional areas, such as software engineering, software development, as well as the test organization, potentially impacted by each risk. Additionally, the test manager or delegate, such as a test lead, should take a proactive role in participating in periodic project risk reviews usually conducted by the project manager so the test manager is aware of the impact to the test team and testing phase of the project given risks and issues identified by other functional areas. Here the test manager can be considered the liaison between the project’s functional area team members and her test team. For example, at the project status meeting, the test manager, representing the test team, testing phase, and associated testing activities of the project, learns of an issue delaying development. The test manager would note at this meeting the potential risk to the testing phase. After this meeting, the test manager shares the issue raised at the project status meeting and plans a course of action with the test team to deal with this real or potential issue.

Although quality is everyone’s responsibility, the buck often stops with the test team. In fact, quality seems so closely linked to the test team that, in some organizations, delegation of the quality responsibility lies with the test team such that the testing senior leader must sign off on the test plan, ensuring that adequate testing has occurred and the test results have met the overall test strategy and plan before the product can be deployed to production. We’ll discuss how quality assurance and testing relate in the next section.

5.4Quality Management and Testing

Learning objectives

LO 6.4.1

(K4) Define how testing fits into an organization’s overall quality management program.

At the time of this writing, Jim teaches a college class in systems analysis and design. During a lesson on managing a system implementation, he asked the class three questions:

  • What is software quality assurance?
  • What one word do you most associate with software quality assurance?
  • How do you ensure software quality assurance?

How would you answer these questions?

The class answered this way:

  • What is software quality assurance? The answers ranged from the broad “Anything that ensures quality in the product” to “When you call a company’s help desk and before the rep starts to speak you get the message, ‘This call may be monitored for quality assurance.’ ”
  • What one word do you most associate with software quality assurance? Not much of a response due to the class’s background and experiences.
  • How do you ensure software quality assurance? One surprising answer was “Continual testing, even after the product is in production.”

Let’s look at each of these questions and answers separately.

What is software quality assurance? The very broad answer of anything done to ensure quality in the product (and, by extension, the service or result) is nonetheless accurate. The ISTQB defines “quality assurance” as “part of quality management focused on providing confidence that quality requirements will be fulfilled.”7 “Quality management” itself, according to ISO 9001:2015, is based on a set of principles, including customer focus; leadership; engagement of people at all levels; a process approach, where interrelated processes work together as a coherent system; and an ongoing focus on improvement.8 This has many manifestations and can be seen beyond software projects to include anything from customer surveys on the receipt of takeout purchased from fast-food restaurants to that very familiar message on the help desk call, “This call may be monitored for quality assurance.” Software quality assurance includes any intentional activities taken to ensure that quality exists in software in particular. The concept of quality assurance can be extended to any of the products we buy and the services we use. But what, really, is “quality”? The ISTQB defines quality as “the degree to which a component, system, or process meets specified requirements and/or user/customer needs and expectations.”9 Noted quality gurus defined quality as fitness for use, meaning a lack of defects or bugs (J. M. Juran10), and as conformance to requirements (Philip Crosby11).

Quality is not the run-of-the-mill, the mediocre, the mundane, or the everyday. The pursuit of quality is the pursuit of value, excellence, even superiority. Test managers (and others, as we’ll soon see) are responsible for contributing to excellent software. Test managers do this through building a solid team of test professionals, by not only participating in requirements, design, and test reviews, but also by challenging the concepts, ideas, and reasoning used in these artifacts not for the sake of building solid documentation, but in order to ensure superior quality software.

What one word do you most associate with software quality assurance? Most people in the software industry equate “testing” with “quality assurance.” In some organizations, there are separate departments for the test team and the quality assurance team. In other organizations, the test team is known as the QA team or quality assurance team. This is unfortunate because, at the risk of overusing a cliché, quality really is everyone’s job and responsibility on the project. There is often a prevailing mind-set that the test team (those poor souls who are the last in line before the product goes out the door) has the sole responsibility for building and ensuring quality in the product. This view is seriously outdated. As mentioned earlier in the section on project risk management, the quality of the product produced as a fully functional, operational system is dependent on the quality of the requirements gathered from the user and documented for the team; dependent on the quality of the designs built, which are themselves dependent on a solid understanding of user requirements; dependent on the quality of the software code that is built and the database architecture that is implemented; dependent on the test strategy, plans, and actual test cases, using the requirements and other project artifacts to ensure quality in the test phase; and dependent on ancillary services and documentation developed in support of the product, such as user, help desk, and operational documentation, training material, and so on.

How do you ensure software quality assurance? As the response to the previous question shows, ensuring software quality assurance occurs in each stage of the SDLC. At a CMM Level 5 company, we had much training and documentation concerning our model, processes, and procedures. Every project stakeholder, including the test manager and project manager, was required to review and eventually approve all necessary project documentation from the requirements through design and testing documentation as well as attend and participate in various reviews. This ensured that project team members were committed to the project and all understood what was being built through the requirements, design, development, and testing phases. While there is post-production verification after a product has deployed to production, we often don’t continually test the product, since project team members and other resources are allocated to new projects. However, we should be vigilant in soliciting input from users and customers regarding the continual value (i.e., quality) of the product long after it has been initially deployed.

We previously discussed that, in a project’s planning phase, the project manager develops the project management plan that itself includes several smaller, focused management plans, including the risk management plan. One other focused management plan developed during the planning activities is the quality management plan (see Figure 5-5 for some of the major components used to develop the quality management plan). The essence of the quality management plan is to minimize variation and deliver results that meet requirements. In order to do this, the quality management plan includes various baselines, which act as starting points for comparisons or deviations, such as the scope, schedule, and cost baselines. Since scope, schedule, and cost represent the key indicators of project success and quality (remember Figure 5-1’s project management triangle?), variations from each component’s baseline indicate potential quality issues.

image

Figure 5-5 Major components of a quality management plan
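
As a rough illustration of using baselines as comparison points, the sketch below flags variance from invented scope, schedule, and cost baselines; the 5 percent threshold is our own assumption for the example, not a PMBOK rule:

```python
# Hypothetical baselines from the quality management plan, with invented
# actuals partway through the project.
baseline = {"scope_items": 120, "schedule_days": 90, "cost": 500_000}
actual   = {"scope_items": 115, "schedule_days": 104, "cost": 540_000}

flags = {}
for key in baseline:
    variance = actual[key] - baseline[key]
    pct = 100 * variance / baseline[key]
    # Any variance beyond a tolerance (here, an assumed 5%) is a signal
    # of a potential quality issue worth investigating.
    flags[key] = "investigate" if abs(pct) > 5 else "ok"
    print(f"{key}: variance {variance:+} ({pct:+.1f}%) -> {flags[key]}")
```

The specific tolerance and the response to a flagged variance would be defined in the quality management plan itself; the mechanics are just baseline-versus-actual comparison.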

Since quality management extends beyond testing, it is important to differentiate the responsibilities of each area; if one team handles both areas, it is crucial that testing activities be distinguished from overall quality management activities, so the test policy and test strategy expand into the broader quality policies and quality strategies that include much more than testing. The quality management discipline would be responsible for ensuring an integrated and consistent set of quality assurance and quality control processes, activities, and metrics. You can think of quality management (QM) as consisting of both quality assurance (QA) and quality control (QC).

QM = QA + QC

QA establishes the process for managing quality. It includes preventive measures taken to “assure quality” in software, such as establishing appropriate policies and guidelines, reviewing test plans, selecting defect tracking tools, and training personnel in those policies and guidelines. (One of the companies Jim worked for invested heavily in training and documentation covering its overall project and software methodology, helping it achieve CMM Level 5 certification.) The goal of QA is to prevent defects from entering the software. QA is more proactive.

QC, on the other hand, includes detection measures to determine the level of quality in the software. The goal of QC is to gauge and monitor the level of quality inherent in the software, assessing variation against requirements. QC is more reactive.

Think of quality assurance as defining the necessary quality requirements, such as standards, processes, procedures, and policies, with an eye toward continuous improvement (enhancing and adapting those policies to fit the needs of the organization and business) and the overall vision. Quality control then applies those quality assurance standards, processes, procedures, and policies against the product to check its level of quality. Any variations, deviations, or inconsistencies should be accompanied by a resolution plan that, given management approval, is put in place to raise the level of quality.

5.5Sample Exam Questions

In the following section, you will find sample questions that cover the learning objectives for this chapter. All K5 and K6 learning objectives are covered with one or more essay questions, while each K2, K3, and K4 learning objective is covered with a single multiple choice question. This mirrors the organization of the actual ISTQB exam. The number of the covered learning objective(s) is provided for each question, to aid in traceability. The learning objective number will not be provided on the actual exam.

Criteria for marking essay questions: The content of all of your responses to essay questions will be marked in terms of the accuracy, completeness, and relevance of the ideas expressed. The form of your answer will be evaluated in terms of clarity, organization, correct mechanics (spelling, punctuation, grammar, capitalization), and legibility.

Question 1

LO 6.2.6

As test manager on a project, you have been informed that the test environment won’t be ready until three weeks after the original planned date of availability due to infrastructure issues. This could seriously impact both the time to execute and analyze planned test cases and the scope of test cases necessary to produce a quality product.

Which of the following statements best describes the role of the test manager in this situation?

  1. Schedule delays are managed by the project manager, so, while you are kept informed, it is the project manager’s responsibility to deal effectively with this situation.
  2. Since this delay will affect the test schedule and the functionality tested by your team, you voice your concern to the project team, including the project manager and project sponsor, explaining the effect this issue will have on your test team’s ability to test the planned functionality, offering documented alternatives with resulting impacts.
  3. The project manager works with the vendor who provides various options to get the necessary infrastructure on time, but the project manager is not very confident of the vendor’s plans. You therefore are at the mercy of the project manager and vendor and take their direction.
  4. The IT team responsible for building and maintaining the environment thinks they have some options to bring the project back on track. You rely on their guidance and hope for the best.

Question 2

LO 6.3.1

As a test manager, you conduct a risk assessment workshop to identify project risks that could affect your team’s testing effort. As your workshop progresses, you and the functional area leads identify several potential risks with controls and monitoring or reporting mechanisms.

Of the scenarios below, decide which identified risk, control, and reporting mechanism directly addresses test-related project risks.

  1. You learn that your department head has nominated your project to be the first to use a new agile scrum methodology. You and your team are very familiar with the conventional Waterfall technique. To mitigate the risk of project failure given a methodology with which the team is unfamiliar, you advocate additional training, hiring an agile coach, and postponing the project for another six months until the team is comfortable with this new approach. You conduct monthly assessments to gauge the team’s comfort level with agile and begin the project using agile when the team is ready.
  2. You learn that the company will be undergoing a major reorganization that could affect your project. Since the reorganization may not affect your department or project for another year, you and the team plan on starting this project quickly with the goal of completing it before the disruptive reorganization occurs. Given your “political connections,” you plan on closely monitoring the progress of the reorganization, aiming to keep your project well ahead of it.
  3. There is a risk that a key vendor will go out of business. The team suggests mitigating this risk by also entering into a contract with a competing vendor; in case the first vendor goes out of business, your company still has a viable contract with the second vendor. Your project team will periodically check on the progress of both vendor contracts through execution.
  4. You have heard that the test lead on your project has been unhappy and may be looking for other employment opportunities. You take steps to train a strong test team member to act as a backup and to work closely with the test lead on this project. Additionally, you work with your HR department to conduct stay interviews with your test lead to help discourage a potential departure. You monitor the progress of these interviews along with the progress of your backup resource plan.

Question 3

LO 6.4.1

As a test manager, consider how your testing team can play an active role in the overall quality management program of your company.

To that end, select the one scenario below where you and your team are not actively involved in this quality management program.

  1. You and your team take the active role in training others on the various testing policies and guidelines concerning quality assurance.
  2. Your test team tests for defects in software as part of quality control detection measures.
  3. You and your test team are asked to take a survey regarding the choice of the best project management methodology, including Waterfall, agile, and other choices.
  4. As part of an overall quality management program, your team contributes to building the test policy and strategy as well as working on projects, testing software to help determine the overall quality of the software prior to release to production.

Question 4

LO 6.2.1, LO 6.2.2

Scenario 3: Test Estimation

Assume you are a test manager involved in developing and maintaining a suite of products centered on a family of programmable thermostats for home, business, and industrial use to control central heating, ventilation, and air conditioning (HVAC) systems. In addition to the normal HVAC control functions, the thermostat also interacts with applications that run on PCs, tablets, and smartphones. These apps can download data for further analysis as well as actively monitor and control the thermostats.

Three major customer types for this business are schools, hospitals and other health-care facilities, and retirement homes. Therefore, management considers its products as safety critical, though no FDA or other regulations apply.

The organization releases new software, and, when applicable, hardware, quarterly. It follows a Scrum-based Agile lifecycle, with five two-week iterations, followed by a three-week release finalization process. For the upcoming release, assume that there are five teams, each with one tester and four developers. The testers report in a matrix structure to you, the test manager. You also have two small teams within your test organization, one that focuses on test automation and another that creates and maintains test environments.

You follow a blended test strategy that includes requirements-based testing, risk-based testing, reactive testing, and regression-averse testing. During release planning, test estimation is done based on the number of requirements (user stories) and the number of risk items identified across those user stories for the entire release backlog. These estimates are used to avoid overcommitting in terms of overall release content. During iteration planning, test estimation is done on the number of requirements (user stories) and the number of risk items identified across those user stories selected for the iteration backlog.
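As an illustrative sketch of the release-level calculation described above, the estimate might be derived as follows; the effort rates per user story and per risk item are hypothetical numbers chosen for demonstration, not figures from the scenario:

```python
# Hypothetical sketch of story/risk-based test estimation.
# The effort rates below are invented for illustration only;
# a real team would derive them from its own historical data.

HOURS_PER_STORY = 6.0   # assumed average test effort per user story
HOURS_PER_RISK = 2.5    # assumed average effort per identified risk item

def estimate_test_hours(num_stories: int, num_risk_items: int) -> float:
    """Estimate total test effort from backlog counts."""
    return num_stories * HOURS_PER_STORY + num_risk_items * HOURS_PER_RISK

# Release backlog: 80 user stories with 120 risk items across them.
print(estimate_test_hours(80, 120))  # 780.0
```

The same function applies at iteration level by passing the counts for the iteration backlog instead of the release backlog; only the inputs change, not the model.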

Consider Scenario 3.

Assume that there have been three releases so far, all following the same Agile lifecycle, with similar (though in some cases differently sized) teams, working on the same product line. The historical defect metrics for each release are as follows:

[Table: historical defect metrics for the three prior releases]

Part 1:

Describe the process for release test estimation.

Part 2:

Describe the process for iteration test estimation.

Part 3:

Describe the use of the historical information presented above to create a model for estimating the number of defects found, resolved, and delivered in each release.

Question 5

LO 6.2.3

Continue with Scenario 3, as described earlier and as extended in the previous question.

After the first two iterations, you find that the following defect metrics apply so far.

[Table: defect metrics for the first two iterations]

Part 1:

Identify deviations that have occurred from historical metrics.

Part 2:

Discuss how these deviations will affect your iteration estimation process.

Question 6

LO 6.2.4, LO 6.2.5

Continue with Scenario 3, as described earlier and as extended in the previous two questions.

Assume that, at the beginning of the third iteration, three of the senior developers quit and take jobs with one of your company’s main competitors. You are scheduled to have a meeting with management to discuss the impact of these resignations on testing and quality, and to make recommendations. Outline the key points you’ll address in your meeting.

1Thanks to Klaus Nielsen, “Software Estimation using a Combination of Techniques,” PMI Virtual Library, 2013, https://www.projectmanagement.com/articles/283931/Software-Estimation-using-a-Combination-of-Techniques for some brief history and background on the Delphi estimation technique.

2Erik P.W.M. van Veenendaal and Ton Dekkers, “Test point analysis: a method for test estimation,” published in Project Control for Software Quality, Kusters R., A. Cowderoy, F. Heemstra and E. van Veenendaal (eds), Shaker Publishing BV, Maastricht, The Netherlands, 1999, http://www.erikvanveenendaal.nl/NL/files/Testpointanalysis%20a%20method%20for%20test%20estimation.pdf.

3For a good comparison between the PBS and WBS and how they can both be used successfully on projects, see Patrick Weaver, “Product versus work breakdown structure”, projectmanager.com.au, projectmanager.com.au/product-versus-work-breakdown-structure, August 13, 2015.

4Kent Beck et al., “Twelve Principles of Agile Software,” from the “Manifesto for Agile Software Development,” 2001, www.agilemanifesto.org/principles.html. This is the definitive authority on understanding all things Agile.

5Project Management Institute: A Guide to the Project Management Body of Knowledge (PMBOK Guide) – Fifth Edition, 2013, p. 310. Of course, the Project Management Institute’s PMBOK is the definitive guide on everything related to project management.

6Bill Wake, http://xp123.com/articles/invest-in-good-stories-and-smart-tasks. Although this handy acronym appears in many Agile references, Wake’s explanation is concise and even touches upon the SMART model (also covered in Chapter 2) which helps bring meaning to the development of goals.

7www.astqb.org/glossary/search/quality%20assurance.

8www.iso.org/iso/home/standards/management-standards/iso_9000.htm and www.iso.org/iso/pub100080.pdf. This is a short, focused overview on key quality management principles which, when taken together, form the framework for performance improvement and operational excellence.

9ISTQB Glossary, www.astqb.org/glossary/search/quality.

10ASQ website, asq.org/about-asq/who-we-are/bio_juran.html. The ASQ, or American Society for Quality, distinguishes itself as a global community of people dedicated to quality who share ideas and tools, similar to what communities of practice do.

11ASQ website, asq.org/about-asq/who-we-are/bio_crosby.html.
