7 Testing Considerations for Domain and Project Factors

Keywords: feature-driven development, test-driven development

7.1 Introduction

Learning objectives

No learning objectives for this section.

When you have been a test manager for quite some time, you’ll probably recognize this: each and every software development project is different from the previous one! This is true even when these projects claim, or perhaps more accurately appear, to use the same software development approach, such as Waterfall or an Agile approach like Scrum. Most likely the moment and level of your involvement as a test manager, as well as the testing goals, will differ from project to project, often depending on the software development approach used. The same applies to both the amount of already existing documentation and the amount of documentation to be produced. In the first section of this chapter we’ll take a closer look at test management considerations for lifecycle models.

Another challenge you might recognize is being a test manager of a project for which all or part of the development is being done by an external team. In the second section of this chapter we’ll take a deeper dive into how to manage these partial lifecycle projects. Although you can think of many kinds of such projects, we limit ourselves to an explanation of the following four examples: integration projects, maintenance projects, hardware/software and embedded systems, and safety-critical systems.

The last section of this chapter addresses a typical test manager dilemma: When to release? Although it is almost never the test manager herself who makes such a decision, she should have a voice in the release considerations. As a test manager yourself, you might have been in a situation where you gave negative release advice, but your advice was overruled by for instance a business, marketing, or product manager. Leo has been there several times. Although our release advice is not always followed, it still makes sense to be aware of the considerations for the different release methods. We’ll explore in more detail the following methods: market demand, ease of maintenance, and ease of installation.

7.2 Test Management Considerations for Lifecycle Models

Learning objectives

LO 8.2.1

(K6) Evaluate and report advantages and disadvantages associated with each lifecycle model in given situations.

LO 8.2.2

(K2) Describe the concepts usually found in Agile projects which may influence the testing approach.

On the first project Leo worked on as a programmer, he was responsible for creating the design, the code, and the tests. This was a long time ago, but you could say this already looked a little bit like the T-shaped1 professionals Agile teams are looking for nowadays. (Leo finds this a very amusing observation.) But between then and now the (soft) skills of developers have changed and many types of software lifecycle models have emerged. And to make it even more complicated, in actual practice, organizations tend to implement parts of different models and combine them into one hybrid model that should satisfy their specific needs. So virtually every organization has its own approach to software development. These models can strongly influence a test manager’s approach to a given project. A test manager needs to know the differences and similarities between these lifecycle models and should be able to work with them, because they determine the moment of involvement, the level of involvement, the amount of documentation (both available and produced), and the testing goals. In addition to that, the test manager must also be familiar with Agile approaches and how to participate effectively in an Agile project. In this section we’ll not only look at the different lifecycle models and their testing implications, but we’ll take an even closer look at the Agile methods, since they are fundamentally different from traditional models such as the V-model.

7.2.1 Comparison of Lifecycle Models

Have you ever asked the development team which development method they are using? Leo has several times and expects you won’t be surprised by the answers he got. They just don’t know! Asking for the development method didn’t help him. However, there are some generic (test) aspects of development methods you could look at. As a test manager in such situations you could, for instance, find out at what moment and to what level you’ll be involved in the project and how much time you’ll get and/or need for testing. Ideally you are involved as soon as possible, but that might be hard: in a traditional Waterfall project you might not get the chance to be heard at the beginning of the project. You might not even be on the project yet! On the other hand, in an Agile project it is common practice to be involved as soon as the project starts. Other aspects the test manager has to investigate as soon as possible are the amount of documentation available and the amount of documentation to be produced. This has, of course, a strong relation to the project’s lifecycle. In Agile development environments the amount of both available and produced documentation is often less than in traditional environments. The testing goals in terms of test levels will also differ from development method to development method. In the end the test manager has to deal with all these different aspects and come up with an appropriate test approach for a specific development situation. The following chart (Figure 7-1) might help in establishing this approach. The chart provides a comparison of the lifecycle models and their testing implications for five testing aspects.

7.2.2 Waterfall and Agile Models

In Figure 7-1 several lifecycle models are mentioned. The most used models are the Agile models and the Waterfall models. The other two models, the V-model and iterative/incremental, are certainly still out there, but they are less prominent and can be seen as lying along a spectrum between Agile and Waterfall. Most test managers have worked, or are still working, in Waterfall environments and will be familiar with the model. Therefore only a brief refresher of the Waterfall model and a little more elaboration on the group of Agile software development approaches will follow.

image

Figure 7-1 Comparison of lifecycle models and their testing implications

Since most sections of this book apply to test managers working in a Waterfall environment, there is no need for extra elaboration about test management in such an environment. However, concerning working as a test manager in an Agile environment, we’ve added two sections in which we’ll take a deeper dive into the testing concepts as well as into the changed role of the test manager in Agile approaches.

Waterfall

The first formal description of the Waterfall model is often cited as a 1970 article by Winston W. Royce, although Royce never used the term Waterfall in that article.2

The Waterfall model is a sequential (non-iterative) software development process, in which progress is seen as flowing steadily downward (like a waterfall) through the phases of requirements elicitation, analysis, design, coding, testing, production/implementation, and maintenance (see Figure 7-2).

image

Figure 7-2 Waterfall model

The Waterfall model is derived from the traditional way of working in major projects in the construction industry. The idea behind this way of working is that you divide the project into several phases. You start with the first phase and do not start the second until you have completed the first. Completed here means that the phase has been reviewed and verified. And when you discover a defect originating in one of the earlier phases, you have to go back to correct that phase and perform the following phases all over again.

Some people are fond of the Waterfall model, while other people hate it. At any rate, when you want to apply the Waterfall model you may consider the following advantages as well as the disadvantages (this is obviously not an exhaustive list).

  • Advantages:
    • Time spent early in the software production cycle can reduce costs at later stages. After all, you only move on to the next phase after finalizing (“without defects” is the general idea) the phase at hand.
    • There is a focus on documentation, which supports profound knowledge transfer to other project members as well as to new people to the project.
    • It is a straightforward model, with clear phases. Therefore all project members know exactly what the current project phase is and what is expected of them.
    • The model provides easily identifiable milestones in the development process.
    • The Waterfall model is well-known. Many people have experience with it, so they can easily work with it.
  • Disadvantages:
    • When a requirement is changed in, for instance, the coding phase, a large number of earlier phases must be done all over again.
    • The phases are often very long and therefore time and costs are difficult to estimate.
    • Within the project, the team members are often specialized: one team member may be active in the design phase only, while a programmer may be active in the coding phase only. This can lead to wasted resources, most often time. For example, while the designer works on the “perfect” design, the programmer has to wait until the design phase is completed, even though coding could already have started against a not-yet-perfect version of the design.
    • Testing is only done in one of the final phases of the project, so the project gets late insight into the quality of the software.
    • There is much emphasis on documentation, which could lead to inefficiency.

Agile

Before we take this aforementioned deeper dive into the testing concepts as well as into the changed role of the test manager in Agile approaches, we first have to understand the basics of Agile. These are written down in the Agile Manifesto (2001) for software development.3 This manifesto describes the four values and 12 principles for software development:

  • The four values:
    • Individuals and interactions over processes and tools
    • Working software over comprehensive documentation
    • Customer collaboration over contract negotiation
    • Responding to change over following a plan
  • And the 12 principles:
  1. Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
  2. Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.
  3. Deliver working software frequently, from a couple of weeks to a couple of months, with a preference for a shorter timescale.
  4. Business people and developers must work together daily throughout the project.
  5. Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
  6. The most efficient and effective method of conveying information to and within the development team is face-to-face conversation.
  7. Working software is the primary measure of progress.
  8. Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
  9. Continuous attention to technical excellence and good design enhances agility.
  10. Simplicity—the art of maximizing the amount of work not done—is essential.
  11. The best architectures, requirements and designs emerge from self-organizing teams.
  12. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

The group of Agile software development approaches has become very popular because, in general, they should help organizations (at least compared to traditional approaches) in reducing time to market, being responsive to changing business and customer needs, and in delivering higher quality. All Agile approaches are based on short, timeboxed iterations that deliver increments of working software, through adaptive change, as more information comes to light in a communicative and collaborative manner.

ISTQB Glossary

test-driven development (TDD): A way of developing software where the test cases are developed, and often automated, before the software is developed to run those test cases.

feature-driven development: An iterative and incremental software development process driven from a client-valued functionality (feature) perspective. Feature-driven development is mostly used in Agile software development.
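As a minimal illustration of the TDD cycle defined above, consider the following sketch. The `discount` function and its expected values are hypothetical, chosen only to show the red-green-refactor order of work: the test exists before the code it exercises.

```python
# A minimal illustration of test-driven development (TDD). The discount
# function is a hypothetical example, not from any particular project.

def test_discount():
    # Written BEFORE the implementation exists (the "red" step:
    # running it at that point fails, since discount is undefined).
    assert discount(100.0, 0.2) == 80.0
    assert discount(50.0, 0.0) == 50.0

def discount(price: float, rate: float) -> float:
    # Written AFTER the test, with just enough code to make it pass
    # (the "green" step); a refactoring step follows while the test
    # is kept green.
    return price * (1.0 - rate)

test_discount()  # passes: red -> green -> refactor
```

In practice such tests are collected in an automated suite and run on every change, which is what makes the "often automated" part of the glossary definition pay off.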

The most commonly used Agile approaches are:

  • Kanban4

    An approach to incremental, evolutionary process and systems change for organizations. It uses visualization via a Kanban board of a list of work items. It also advocates limiting work in progress, which, besides reducing waste due to multitasking and context switching, exposes system operation problems and stimulates collaboration to continuously improve the system (Figure 7-3).

  • Scrum5

    A framework within which people can address complex adaptive problems, while productively and creatively delivering products of the highest possible value. The Scrum framework consists of Scrum teams and their associated roles, events, artifacts, and rules.

  • eXtreme Programming (XP)6

    A humanistic discipline of software development, based on principles of simplicity, communication, feedback, and courage.

image

Figure 7-3 Kanban board example
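The work-in-progress limiting behind a Kanban board like the one in Figure 7-3 can be sketched in a few lines of code. The column names and limits below are illustrative, not prescribed by Kanban itself.

```python
# A sketch of a Kanban board with work-in-progress (WIP) limits.
# Column names, limits, and item names are illustrative only.

class KanbanBoard:
    def __init__(self, wip_limits):
        self.wip_limits = wip_limits                     # e.g. {"Doing": 2}
        self.columns = {name: [] for name in wip_limits}

    def move(self, item, source, target):
        # Pulling work into a column is only allowed while that column
        # is under its WIP limit -- this is what exposes bottlenecks.
        if len(self.columns[target]) >= self.wip_limits[target]:
            raise RuntimeError(f"WIP limit reached for '{target}'")
        self.columns[source].remove(item)
        self.columns[target].append(item)

board = KanbanBoard({"To Do": 10, "Doing": 2, "Done": 100})
board.columns["To Do"] = ["story-1", "story-2", "story-3"]
board.move("story-1", "To Do", "Doing")
board.move("story-2", "To Do", "Doing")
# board.move("story-3", "To Do", "Doing") would now raise:
# the "Doing" column is full, so story-3 must wait.
```

The point of the raised error is not the code itself but the conversation it forces: instead of starting new work, the team swarms on finishing what is already in progress.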

As with all approaches, it is rarely the case that the approach is used as originally intended. Usually, aspects from various approaches are combined to form an approach that works best in a particular situation. The one combination we see often is a Scrum team using Kanban boards. Other concepts are also used in Agile environments. Think of behavior-driven development, acceptance test-driven development, feature-driven development, and test-driven development and of testing approaches like exploratory testing and context-driven testing.

Of course you as a test manager will need to adapt to all these (combined) Agile approaches, but it is also important to remember that many good practices in testing, common to other models, still apply. We’ve been involved in many situations where organizations working with a certain testing approach abandoned it when they switched from a traditional way of developing to the Scrum approach. Leo’s first reaction always was and is: “So, you think you don’t need something like conducting a risk assessment or a test design technique in whatever form anymore?” When you have a sprint of two weeks, the team needs more than ever—due to the short period of time—to find a way to focus on, and to cover, the most important risks. A risk assessment certainly supports the team in identifying and focusing on the most important risks. And when applying the appropriate test design technique (related to the specific risk to cover), you will be able to create the fewest test cases while still achieving the highest chance of finding defects or covering risks. In short, as a test manager don’t forget the proven approaches from the past, but reuse or adapt them in the new situation. For instance, we know many projects where an exploratory testing approach is integrated with some test design techniques—adapted to the situation—and where this combination works very well for the team.
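The claim that a test design technique yields the fewest test cases with the highest chance of finding defects can be made concrete with two classic techniques, equivalence partitioning and boundary value analysis. The eligibility rule below is a hypothetical example, not from the text.

```python
# A sketch of equivalence partitioning and boundary value analysis,
# applied to a hypothetical rule: "an applicant must be 18 to 65
# years old (inclusive)".

def is_eligible(age: int) -> bool:
    return 18 <= age <= 65

# Equivalence partitions: below, inside, and above the valid range.
# Boundary values: one test on each side of each boundary.
# Five cases cover the rule; testing every age from 0 to 120 would
# add effort but no additional coverage.
cases = {
    17: False,  # just below the lower boundary
    18: True,   # lower boundary
    40: True,   # representative of the valid partition
    65: True,   # upper boundary
    66: False,  # just above the upper boundary
}
for age, expected in cases.items():
    assert is_eligible(age) == expected
```

In a two-week sprint, exactly this kind of reduction—five deliberate cases instead of dozens of arbitrary ones—is what makes risk coverage feasible.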

7.2.3 Testing Concepts in Agile Approaches

The four values and 12 principles of the Agile Manifesto say little about the test approach. And Scrum, an approach based on the Manifesto, likewise scarcely provides any guidance on integrating testing into an Agile approach, or on what the consequences might be for the role and responsibilities of professional testers. It will therefore be no surprise that the integration of a test approach with the Agile approach does not always run smoothly. This is rather strange, as testing is not only an extremely important activity but must also be fully integrated with the chosen Agile approach. In particular, the following concepts are usually found in Agile approaches:

  • The test process must be integrated in the Agile approach.
    • The test activities must be integrated in the development process itself. This means that testing is no longer a separate phase but is rather a continual activity of the Agile teams. Cross-training within the group is necessary to provide the best coverage of all tasks in an iteration.
    • All team members must be prepared to perform test activities. Although the team must contain a professional tester, this does not mean that all test activities must be carried out by this tester. Skilled testers and test managers learn more about other processes, such as design and development, and help other team members to understand the need for testing.
    • More communication is needed.

      With Agile methods, the emphasis lies on direct communication, preferably face-to-face, rather than on written modes of communication. It is best for Agile teams to be housed at a single location in order to make this possible. If possible, all the people needed for the project should be accommodated together in a team at one location. If the team cannot be colocated, easy and quick communication methods must be established: for example, a daily status meeting via a video-conferencing tool and project documentation exchange via a (cloud-based) tool.

    • Testing is the driving force behind the quality of the project.

      Testing is no longer the last safety net before the software is implemented. Now the tester collaborates with all team members to provide continual information on the product’s quality and its satisfaction of the business requirements. In fact the whole team, including each individual team member, must assume responsibility for quality.

    • Test automation is necessary.

      The use of automation is becoming increasingly important and indispensable in the realization of a successful Agile project. Test automation must be utilized effectively to minimize the cost of regression testing. Regression testing is more important than ever, because the continuous change of Agile development incurs significant regression risk. Tests should be run frequently. When testers do have the skills for it, they could develop and maintain the automated test suite themselves. In practice they are often working together with the developers of the team.

    • Testing must be incorporated into the definition of done.

      The project or the iteration is not yet done when the software has been built. It is only when this has been tested and all defects have been rectified that one can say that it is done. Accordingly, it is important to include the test aspects in the definition of done and be sure they are measurable and understandable.

  • Find the right balance by making conscious choices.
    • Working software is more important than extensive documentation, which means that the “lack” of information must be compensated by more communication. For cases in which it is not absolutely clear if something should be documented or not, we always pose the following questions:
    • Why are we creating documents: does this documentation have value for the business?
    • For whom are we creating documents: does the team benefit from this documentation?

    The answer is context-specific, of course, but if the entire team responds negatively to both questions, documentation would appear to be unnecessary.

    • (Re)use the strength of proven test approaches.

      Adapt and adjust these approaches to the Agile environment. Best practices in testing, such as defining test conditions and selecting appropriate testing techniques, are still applicable.

    • Improve continuously.

      Continuous test approach improvement must be implemented and planned approaches should change as needed to react to project realities.
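The test automation point above—minimizing the cost of regression testing by running tests frequently—can be sketched as a minimal automated regression suite. The `apply_vat` function and its 0.21 rate are hypothetical stand-ins for whatever the team delivers each sprint.

```python
# A minimal sketch of an automated regression suite. The function under
# test and its VAT rate of 0.21 are illustrative assumptions.
import unittest

def apply_vat(amount: float, rate: float = 0.21) -> float:
    # Stand-in production code, changed every sprint.
    return round(amount * (1.0 + rate), 2)

class RegressionSuite(unittest.TestCase):
    # Run on every change (typically in a CI pipeline) so that a change
    # which breaks existing behavior surfaces within minutes, not at the
    # end of the sprint.
    def test_default_rate(self):
        self.assertEqual(apply_vat(100.0), 121.0)

    def test_explicit_rate(self):
        self.assertEqual(apply_vat(100.0, 0.0), 100.0)

    def test_zero_amount(self):
        self.assertEqual(apply_vat(0.0), 0.0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(RegressionSuite)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

A suite like this is also a natural place to anchor the definition of done: “all regression tests green” is measurable and understandable, which is exactly what the definition-of-done point above asks for.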

7.2.4 Changed Role of Test Manager in Agile Approaches

Agile methods often work with a variable scope, but with fixed timeboxes and fixed quality. This is a huge change compared to the old Waterfall approach, in which the scope was usually fixed and the release date flexible, although the latter was initially not meant to be flexible. Traditionally the test manager—and tester—were and still are critical to the success of the projects, especially with respect to establishing the quality of the products. But the roles are somewhat altered. The role of the “traditional” test manager changes when an organization adopts an Agile way of working. Sometimes test managers become more of a staff manager than a manager within a project. They will, for instance, support a Scrum master and/or a project manager (if there still is one) in the overall delivery. Depending on the type of project and the skill set of the individual, there is no reason why a test manager cannot be part of the team. However, in some cases this may cause issues, as the test manager will no longer be the “manager”: the Agile team itself needs to be self-managing.

The fact that several traditional roles on a development project, such as project manager and business analyst, are not adopted directly in Scrum does not mean that the test management activities should not be executed. On the contrary, these remain unfailingly important. But they may be executed by any team member with the appropriate expertise and skills. Nevertheless, it is advisable to have a professional tester on the team, to guarantee test expertise in the team. This tester has knowledge of the execution of a risk analysis, static and dynamic testing, test design techniques, the creation and execution of test cases, test automation, etc. But this does not mean that all test activities must be executed by this tester. Other team members may be requested to provide support in the creation and execution of the test cases, for example. In such a situation, the professional tester can act as a coach.

Test managers will need to mentor those persons who are undertaking testing activities and to provide guidance on the skills and competencies required by the tester in the Agile project. We may facilitate the formation of the Agile team at the outset of the project. The test manager takes a more hands-off role, mentoring the testers within the team to become self-managed.

Test managers could also facilitate, along with the Scrum master, the management and removal of roadblocks. In some instances the test manager may anticipate a roadblock and ensure that it is removed before the team identifies it. Our experience can also be used during risk identification and analysis in the initial reviews of the user stories. This, in turn, can serve as input for assessing the technical difficulty of implementing the user stories during estimation. The test manager may also work with the team on the test strategy/approach during the planning meetings at the release and sprint levels. Often the test manager will be part of the testing specialist group that will be called upon at various times throughout the project.

7.3 Managing Partial Lifecycle Projects

Learning objectives

LO 8.3.1

(K6) Compare the various partial project types, discussing the differences between these projects and pure development projects.

The test manager is often involved in projects for which all or part of the development is being done by an external team. This type of project brings its own test management challenges that are sometimes caused by late involvement, sometimes by lack of documentation, and sometimes by multiple ownership. In this section, we’ll take a closer look at some of these partial lifecycle projects: integration projects, maintenance projects, hardware/software and embedded systems projects, and safety-critical systems projects.

7.3.1 Integration Projects

In many situations the test manager has to handle testing lifecycle changes that may be required when dealing with an integration project. The moment of involvement for externally developed software (or partially externally developed software) is often later than for software that is developed in-house. The software is usually fully developed and assumed to be at least unit tested at the time it is received for integration. This means the test team may not have been involved in requirements and design reviews, and that the planning, analysis, and design time for the project are significantly reduced. In this section, we’ll look in more detail at two of the most common integration project situations: a situation in which we have to deal with a supplying and an accepting (demanding) party, and a situation in which an organization implements commercial off-the-shelf software.

Supplying and Accepting Parties

While developing software, a separation can be made between the responsibilities of client, user, manager, and system administrator on the one hand and system developer and supplier on the other. In the context of testing, the first group is collectively known as the accepting (requesting) party and the second group as the supplying party. The supplying party could be both an internal IT department and an external software supplier. Other concepts that are also referred to in this context are the demand and supply organizations. At a general level, there are two possible aims in testing:

  • The supplying party demonstrates that what should be supplied actually is supplied.
  • The accepting party establishes whether what has been requested has actually been received and whether they can do with the product what they want to/need to do.

In practice, the separation is often less concrete. Sometimes system developers (from the supplying party) will offer their help to the accepting party: for instance, with the information analysis, with writing the specifications, and with the acceptance test. And vice versa, the expertise of users and administrators may also be employed in the building activities. On the other hand, the situation often occurs in which the supplying party and the accepting party don’t cooperate properly. In both situations it is important to define the various responsibilities clearly. This certainly applies to testing. Who acts as the client of a test, who accepts it, who wants advice on the quality, and who will test what, and when? So, what must a test manager consider in this situation?

  • Define expected quality.

    What is the expected quality of the software delivered by the supplying party? The test manager and the accepting party have to think about this up front and share these expectations with the supplying party as early as possible.

  • Define additional tests.

    What has already been tested by the software supplier and with what coverage?

    • When “lucky,” the supplier will show you—the test manager—their test strategy and maybe some test reports. In that situation you can establish an acceptance test strategy covering the still outstanding risks.
    • When “unlucky” or when the supplier isn’t willing to share their testing information, you have to think of another approach to establish the quality of the delivered software. You could for instance start with a smoke test or a kind of exploratory testing to gain an initial assessment of the quality of the software and take it from there. Software that shows significant defects at this point will probably need additional system testing, although this might have been the responsibility of the software supplier.
  • Define intake tests.

    Leo has been a test manager for many international testing projects. In most situations the software was built and delivered by an external (often foreign) party. Of course, we asked for their test cases, test reports, etc. But we hardly received anything useful. They just didn’t want us to “look in their kitchen.” But we, as the accepting party, didn’t want to waste our time—especially not valuable end user time—on software that wasn’t ready for the acceptance test due to poor quality. So what we did was define 15 to 20 (end-to-end) test cases that covered the most important processes of the organization. We gave these to the supplier with the demand that they run these test cases successfully—after the delivered software was installed on our acceptance test environment—before we would even think about starting our acceptance tests. In Germany, where Leo has been working, they call it the Demonstrationstest. Many organizations adopted this idea and some even made it a section in their contract with the software supplier.
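An intake test of this kind is essentially a short checklist of end-to-end checks that must all pass before the delivery is admitted to acceptance testing. The sketch below illustrates the idea; the check names and their pass/fail outcomes are hypothetical placeholders for real end-to-end steps.

```python
# A sketch of an intake ("Demonstrationstest") checklist: a handful of
# end-to-end checks the supplier must pass on the acceptance environment.
# The check functions are hypothetical placeholders.

def run_intake_test(checks):
    results = {name: check() for name, check in checks}
    failed = [name for name, ok in results.items() if not ok]
    # Only an all-green intake test admits the delivery to acceptance testing.
    return {"accepted": not failed, "failed": failed}

checks = [
    ("login works", lambda: True),
    ("order can be placed", lambda: True),
    ("invoice is generated", lambda: False),  # simulated supplier defect
]
report = run_intake_test(checks)
# report -> {"accepted": False, "failed": ["invoice is generated"]}
```

The value of keeping the list short (15 to 20 cases) is that the supplier can run it quickly after every delivery, while the accepting party still gets a meaningful go/no-go signal before committing end user time.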

Commercial Off-the-Shelf (COTS) Software

A large number of organizations are implementing commercial off-the-shelf software. Although the concept of COTS software suggests that there is not much that can go wrong, in practice things turn out to be different. Often implementations take much longer than planned, expected advantages for the organization have to be downscaled considerably during implementation, and at the time of deploying to production many things turn out not to be working correctly yet. Obviously organizations run big risks with such implementations. The question is, of course, where exactly risks are to be found when implementing COTS software:

  • Meeting the supplier’s specifications.

    In contrast with customized applications, the risk that COTS software will not meet the supplier’s specifications is likely to be small. This holds at least if (this specific version of) the package has been delivered to tens, if not hundreds, of other organizations.

  • Tuning parameter settings and adding customized applications.

    A much greater risk lies in the fact that several people have been working for weeks or months to tune the COTS software (parameter settings) and to add customized applications in order to make the COTS software work for the organization. Such complex activities can hardly be performed without making errors.

  • Adapting the working procedures and processes.

    Together with the implementation of COTS software, the working procedures and processes of the organization are usually adapted. In this case, the risks are that new procedures are not described well, or that people cannot handle these new procedures, or that new procedures are not compliant with the way the COTS software is supposed to be used.

  • Involving the end users.

    The end users are often insufficiently involved in the implementation of the COTS software and the modification of the processes. The risk of this is insufficient user acceptance.

Therefore, the risks hardly concern the functionality of the COTS software, but rather the implementation of this software in the organization. Since most COTS software supports a (large) number of organizational processes, the risks are usually huge. These risks must be considered by the test manager, especially in the development of the test strategy. In the case of COTS software, the test manager is much more focused on deciding what not to test. Everything that is standard and is frequently used by many users (in other organizations), and therefore has proven itself, does not have to be tested again. The question for testing COTS software is: what is considered low or very low risk, especially in terms of likelihood of failure? These aspects consequently do not need to be tested.

7.3.2 Maintenance Projects

The use and development of information systems and technical infrastructures have grown more than ever. The dependency between business processes and automated information systems becomes increasingly stronger. Besides this, existing information systems often need to be adapted to new wishes. The awareness has grown that the costs during use and maintenance of information systems exceed the initial development costs by far. The existing literature on testing and test methods provides a sufficient grip on the testing of new software applications. Practical experience has shown that it is still difficult to apply this approach to the testing of software during maintenance. If we assume that the average life expectancy of an application is 10 years, we can safely state that testing during maintenance happens more often than the testing of new software. Consequently, it is not surprising that more money is involved in testing during maintenance than in testing during the development period of software applications. Testing during maintenance is not a new work area. On the contrary, for many people it is a daily routine. This also applies to us, and below you’ll find a few of our own practical pieces of advice:

  • Organize a kickoff session with all relevant parties to obtain clarity on various subjects.
  • Create a standard maintenance test plan containing all reusable test aspects.
  • Set up, use, and maintain a regression test set that can be adjusted in size.

But first, here is a short introduction to maintenance, just to avoid misunderstandings regarding the concepts. A part of application management that occurs on an operational level is called maintenance. In general, a distinction is made between three types of triggers for maintenance: modifications, migration, or retirement of the software. Maintenance can be performed ad hoc or in a planned way. Ad hoc maintenance is performed to fix defects that cannot be delayed, because they cause unacceptable damage in production. Corrective maintenance is the only form of ad hoc maintenance. Planned maintenance comprises all other types of maintenance. These are performed in accordance with regular development processes per release, which usually start with an impact analysis. Some types of corrective maintenance can also be performed in a planned way. This concerns defects that do not need to be resolved immediately, because they cause acceptable or no damage in production.

Let’s have a closer look at the previously mentioned practical advice:

  • Organize a Kickoff Session

    When making an inventory of the test basis, it sometimes happens that changes are not very well documented and, in the case of ad hoc maintenance, the actual cause of the change is not described. A well-tried method to gain clarity on this is to organize a kickoff session with all parties concerned (business information management and management of technical infrastructure, developers, users, and testers). Defining the impact, identifying risks, and defining and/or adapting the test strategy are topics that fit very well into such a kickoff session. Don’t forget to take the non-functional quality attributes into account in that session as well, and keep the existing situation in mind when collecting these non-functional requirements. Improving, for example, a performance requirement from 3 seconds to 1 second is very hard to realize in a maintenance release if this was not a prerequisite in the original design. Specifically in the case of ad hoc maintenance, the kickoff session can be used to discuss ways to reproduce an error in a test situation. The challenge in organizing a kickoff session is getting the desired participants together at the same time, in view of the usually very limited timeframe for testing.

  • Create a Standard Maintenance Test Plan

    It is advisable not to repeatedly invent a new way of working. Both the test process and reusable test aspects such as test specification techniques, required infrastructure, organization, communication, procedures, and a regression test set can be worked out and written down once in a standard maintenance test plan. A (limited) test strategy needs to be included in this as well. By, for example, performing a risk analysis on a system, both the very risky and less risky parts can be distinguished. A change to a risky system part requires more test effort than a change to less risky parts. In case of ad hoc corrective maintenance, solving the production problem has top priority. Even though this results in not taking all the necessary steps of a structured test approach, it is precisely then vital to have a standard maintenance test plan available, since it describes which test activities are essential in an ad hoc situation and should always be performed. If, besides this, a calibrated regression test set is available that can easily be adapted to the test strategy, it is possible to perform a high-quality test, even in an ad hoc corrective maintenance situation.

  • Set Up, Use, and Maintain a Regression Test Set

    The most important difference between the test strategy for new development and that for maintenance is the likelihood of occurrence of the risk. A number of changes are implemented in an existing system, mostly as a result of problem reports or change requests. These changes can be incorrectly implemented and need to be tested. With the adjustment there is also a minor chance that faults are introduced in the unchanged parts of the system, as a result of which the system deteriorates in quality. This phenomenon of quality deterioration is called regression and is the reason why the unchanged parts of the system are tested as well. In fact, risk classifications of subsystems during maintenance can differ from those of new development. Let’s look at an example. A newly built high-risk subsystem is tested thoroughly during building and after that released into production. Later, in a maintenance release, some new functionalities are added, but the high-risk subsystem hasn’t changed. Because the chance of regression for this high-risk subsystem is the only risk involved, a less thorough test can be performed compared to the situation when it was built. This type of strategy determination is called (test) impact analysis. Per change (an accepted request for change or a solved problem report), an inventory is made of which system parts were changed, which system parts may have been influenced by the change, and which quality characteristics are relevant. There are various possibilities for testing each change, depending on the risks (see Figure 7-4):

  1. A limited test, focused on the change only
  2. A complete (re)test of the function that is changed
  3. The testing of the coherence of the function that is changed and the adjacent functions
  4. Testing the entire system

In addition to this, you might consider executing a regression test for the entire system. The regression test focuses mostly on the coherence between the changed and the unchanged parts of the system, since these are most likely to suffer from regression. If the test strategy from the new development is available, the levels of importance attributed to the subsystems play a role in the construction of the regression test. A regression test can be executed in part or in full, depending on the risks and the test effort required. The use of test tools is most effective in the execution of regression tests. The main advantage of automating regression tests is that a full test can be executed each time with limited effort, and it is not necessary to decide which parts of the regression test will, and which will not, be executed. The choice to formulate the strategy either in terms of subsystems or in terms of change requests is affected by the number of change requests and the part of the system affected by the changes. The more changes and the larger the part of the system affected, the stronger the preference for determining the test strategy at the subsystem level, rather than basing it on change requests.

image

Figure 7-4 Test strategies
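The four options above amount to a per-change decision. As a purely illustrative sketch (the decision rules and input attributes are our assumptions, not a prescribed method), such an impact analysis could be expressed like this:

```python
# Illustrative sketch of a per-change (test) impact analysis.
# The decision rules and inputs are assumptions for demonstration only.

STRATEGIES = [
    "1. limited test, focused on the change only",
    "2. complete (re)test of the changed function",
    "3. test of the changed function and adjacent functions",
    "4. test of the entire system",
]

def choose_strategy(risk_class, touches_interfaces, is_architectural):
    """Map the attributes of a change to one of the four test strategies."""
    if is_architectural:
        return STRATEGIES[3]   # the change affects the system as a whole
    if touches_interfaces:
        return STRATEGIES[2]   # coherence with adjacent functions is at risk
    if risk_class == "high":
        return STRATEGIES[1]   # retest the whole changed function
    return STRATEGIES[0]       # low risk: test only the change itself

# Example: a low-risk, isolated fix only needs a limited test.
print(choose_strategy("low", touches_interfaces=False, is_architectural=False))
```

Recording the analysis in this explicit form also documents why a particular change received only a limited test, which is useful later when the release advice is written.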

Keeping the regression test set up to date is an activity that is easily forgotten. It is therefore advisable to incorporate updating the regression test set as an explicit activity in the test phase. While executing the test, it may have happened that the system did not react in the way assumed in the test case. If that assumption was incorrect, the test case has to be adjusted in accordance with the production situation. Furthermore, a decision has to be made on whether new test cases need to be added to the regression test set. This can be done simply by using the changes and possible defect reports as a basis. Keep in mind that any new test cases must be given the correct classifications used for the calibrated regression test set.
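These classifications are what make a calibrated regression test set adjustable in size. The following minimal sketch (the test-case IDs, subsystems, and risk labels are invented) shows how such a set could be filtered down for a time-pressed ad hoc corrective maintenance test:

```python
# Sketch: selecting a subset of a calibrated regression test set by risk class.
# All IDs and labels below are illustrative, not from any real project.

REGRESSION_SET = [
    {"id": "RT-001", "subsystem": "payments",  "risk": "high"},
    {"id": "RT-002", "subsystem": "payments",  "risk": "medium"},
    {"id": "RT-003", "subsystem": "reporting", "risk": "low"},
    {"id": "RT-004", "subsystem": "login",     "risk": "high"},
]

def select_regression_tests(test_set, minimum_risk):
    """Return the test cases at or above the given risk level."""
    order = {"low": 0, "medium": 1, "high": 2}
    threshold = order[minimum_risk]
    return [tc for tc in test_set if order[tc["risk"]] >= threshold]

# In an ad hoc situation there is little time, so run only the high-risk cases:
urgent = select_regression_tests(REGRESSION_SET, "high")
print([tc["id"] for tc in urgent])  # ['RT-001', 'RT-004']
```

With the full set calibrated once, shrinking or growing the executed subset becomes a deliberate strategy decision instead of an improvisation under pressure.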

Besides test cases resulting from planned maintenance, test cases can result from ad hoc corrective maintenance as well. In the latter case it is often argued that, since it just concerns solving a problem and the functionality remains unchanged, the test case does not have to be included in the regression test set. Still, it can be useful to include it for the following reason: the problem was solved in one particular software version, but the change has to be implemented in the following software versions as well. Often this does not happen for some reason, and therefore it is advisable to add a specific test case to the regression test set.

Finally, it could be desirable to adjust the risk analysis in the standard maintenance test plan. This mainly applies to the chance of failure of the tested subsystems, which can be adjusted based on the number of detected defects.

7.3.3Hardware/Software and Embedded Systems Projects

As you might expect, testing hardware/software combination projects and embedded systems projects requires a somewhat different test approach compared to testing “traditional” software projects. You could say that an embedded system is the ultimate form of a hardware and software combination. However, the term embedded systems means many different things to different people. It covers a broad range of systems, including mobile devices such as mobile phones and tablets, keyless entry systems, cameras, robots, thermostats, etc. Although the list of embedded system examples could be endless, they all have a common factor—namely, a combination of hardware and software that interacts with the surrounding physical world. On the other hand, “There is no general consensus about what an embedded system is, nor is there a complete list of characteristic properties of such systems,” as stated by Bas Graaf, Marco Lormans, and Hans Toetenel from Delft University of Technology in the Netherlands.7 Whatever definition you prefer or make up yourself, testing embedded systems will always be a challenge for the test manager! Let’s look into a few of these challenges: test environment, integration test strategy, and test organization.

Test Environment

The three most important elements of the test environment are:

  • Hardware, software, and network

    In contrast to “traditional” software, embedded systems can have different physical appearances in the different development stages, which often require different test environments. Besides the production type itself, you might have to test models or prototypes in the early stages of system development.

  • Test databases

    The need for tests to be repeatable with reproducible test results also applies to embedded systems. So tests—along with the storage of test data—must be designed to support this.

  • Test tooling

    As with testing traditional software, it is not always possible to run the software in the real world. In that situation, simulation equipment (e.g., stubs, drivers) and measurement equipment (e.g., for detecting and analyzing output) may be required.

As test manager, you must be aware that these test environments tend to be complex, so more time must be allotted in the schedule for setting up and testing these test environments. The complexity of these environments may result in more frequent equipment failure and replacement issues, which may cause unexpected downtime. The testing schedule must allow for outages of equipment and provide ways to utilize the test resources during system downtime.

Prototypes were mentioned above. Using a prototype for testing has many advantages, as long as you are aware of the following risks. Prototypes may show behavior that the production models will not show. This can be either good or bad. Perhaps a prototype defect is reported, although it can’t happen in a production environment. The other way around is of course also possible: the prototype works fine, but the production version shows different, maybe even abnormal, behavior. Often it makes sense to apply an “at least two of each” rule for prototypes. With this rule you can mitigate the problem of an individual prototype with a “bad unit” in it.

Even more difficult to find with testing is the situation in which the prototype shows incorrect behavior but the software accepts that incorrect behavior when it should raise an error. This will mask a problem that will only appear when working in the production environment. When using a prototype in a testing environment, the tester must work closely with the prototype developers to understand the status and behavior of the prototype at hand. Working together closely makes the aforementioned risks far less likely to materialize.

Integration Test Strategy

An integration test strategy is necessary because of all the dependencies between different software modules, between different hardware parts, and between the software and the hardware. Of course, this test strategy depends on the integration approach chosen by the project. Top-down, bottom-up, and big bang integration are the three fundamentally different approaches. Because these three approaches are not mutually exclusive, a variety of approaches exists through combinations of the three. Which approach is chosen depends on these factors:

  • Architecture
  • Availability of the integration items (software or hardware from external suppliers)
  • Size of the system
  • Whether it is a new system or an adjusted existing system

The test manager has to determine the integration test strategy for one (or a combination) of the following integration approaches:

  • Top-down integration.

    In this approach, the backbone structure of the system is crucial. This backbone (also called control) structure is developed in a top-down sequence, which creates the opportunity for a top-down integration of the modules. This naturally starts with the top-level control module. Each level must be tested after all connected modules for that specific level have been integrated. As with testing traditional software, modules that are not ready or available yet are replaced by stubs. A disadvantage of this approach could be that changed requirements with an impact on low-level modules may lead to changes in top-level modules. This may lead to a (partial) restart of the integration process and its testing. As a test manager you should think of a regression test to support this. Another disadvantage might be the number of stubs necessary to test every integration step. An advantage of top-down integration is that, even though major parts of the system are still not available and may be substituted by stubs, an early look and feel of the entire system can be achieved.

  • Bottom-up integration.

    This integration approach starts with the low-level modules with the fewest dependencies. Often drivers are developed and used to test these modules. This approach can be used to build the system incrementally and may (maybe even will) lead to early detection of interface problems. By building the system incrementally, these problems can be isolated rather easily and are therefore much cheaper to resolve than when they are discovered once the complete system is ready. Although this is a huge advantage, you have to take a few disadvantages into account as well. Often you need a lot of drivers to execute this approach, and—because of the iteration of the tests—this approach can be very time-consuming.

  • Big bang integration.

    This integration approach is very simple: after all modules have been integrated, the system is tested as a whole. A huge advantage is that no stubs or drivers have to be used, and the strategy is quite straightforward. An obvious disadvantage of this approach is that it can be difficult to find the causes of defects. Another disadvantage is that integration can only start when all modules are available.
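The stubs and drivers mentioned in the two incremental approaches can be illustrated in code. This is a minimal sketch with invented module names, not an excerpt from a real embedded project:

```python
# Sketch: a stub (top-down integration) and a driver (bottom-up integration).
# All module and function names are invented for illustration.

# --- Top-down: the real top-level module calls a stub for a missing module ---
def sensor_read_stub():
    """Stub replacing the not-yet-available sensor module; returns a canned value."""
    return 21.5  # fixed temperature reading

def control_loop(read_sensor):
    """Top-level control logic under test, wired to a stub or the real module."""
    temperature = read_sensor()
    return "heat_on" if temperature < 20.0 else "heat_off"

assert control_loop(sensor_read_stub) == "heat_off"

# --- Bottom-up: a driver exercises a low-level module that has no caller yet ---
def checksum(data: bytes) -> int:
    """Low-level module under test."""
    return sum(data) % 256

def checksum_driver():
    """Driver that feeds test inputs to the module and checks the outputs."""
    assert checksum(b"") == 0
    assert checksum(b"\x01\x02") == 3
    return "driver passed"

print(checksum_driver())  # driver passed
```

The point for the test manager is the effort trade-off: every missing module in top-down integration needs a stub like `sensor_read_stub`, and every bottom-up step needs a driver like `checksum_driver`, which is why the number of stubs and drivers appears among the disadvantages above.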

Test Organization

Testing embedded systems has its own demands on the test organization. As a test manager you might consider these factors:

  • The degree of technical expertise needed in the testing team (or training needed)
  • The difficulty and time involved in solving issues (e.g., downtime) in the testing environment
  • The time needed for retesting, which can be considerable
  • Collaboration with the developers (knowledge sharing)

All of this will influence the time schedule. As a test manager it is smart to play a role in establishing the project (test) schedule to make sure enough time is planned, for instance, for solving (unexpected) issues in the testing environment and retesting.

7.3.4Safety-critical Systems Projects

What is a safety-critical system?8 Generally, you could say a system is safety-critical when a failure can cause serious damage to people’s health (or worse). Safety-critical systems are often embedded systems. Examples of such systems are found in avionics, medical equipment, and nuclear reactors. But other systems can also be safety-critical, for instance an application that runs on a network of PCs and mobile devices and supports doctors, nurses, and physicians’ assistants in making diagnostic decisions. With such systems, risk analysis is extremely important, and rigorous techniques to analyze and improve reliability are applied.

Building a safety-critical system involves dealing with legislation and certification authorities. Some of the regulations and/or requirements related to safety-critical systems are very strict. In order to fulfill these requirements, there has to be a well-structured process with clear deliverables. For instance, a standard for certification of commercial avionics applications can be used (in the United States DO-178C and its European analog ED-12C),9 or the IEC 61508 standard,10 which describes a general-purpose hierarchy of safety-critical development methodologies that has been applied to a variety of domains ranging from medical instrumentation to electronic switching of passenger railways. Another standard, developed by the British Ministry of Defence, is a standard for safety management called MOD-00-56.11 Part of this standard is a description of a structured process for developing and implementing a safety-critical system. This process shares some products and activities with the test process. Also, the safety process includes several test activities. If the shared activities and products are not coordinated well, the two processes can frustrate each other. The result is then a product that is not certifiable or a product that offers incorrect functionality.

Let’s take a closer look at a typical safety lifecycle process and what a test manager could do when taking part in such a process.

Safety Lifecycle Process

Let’s consider—at a high level—a typical safety lifecycle process. The objective of this process is to develop, from a set of global requirements, a system certified for safety-related use.

  • Prerequisites to successful safety management.

    Successful safety management requires that organizations and project teams follow good practices in areas such as:

    • Quality
    • Configuration management
    • Use of suitably qualified and experienced personnel
    • Management of corporate and project risk
    • Design reviews
    • Independent review
    • Closed-loop problem reporting and resolution (e.g., take corrective actions to prevent similar problems in the future)
  • Setting safety requirements.

    One of the most difficult elements of the safety process is setting the level of acceptable safety risk for the system. Individual projects will be guided by departmental safety policy but must develop and record their own justification for the targets and criteria they use. The requirements for safety will vary according to the system’s size, function, or role, but will include one or more of the following:

    • Legal and regulatory requirements
    • Certification requirements
    • Safety-related standards
    • Policy or procedural requirements
    • Risk targets (quantitative and qualitative)
    • Safety integrity requirements
    • Design safety criteria
  • Safety management planning.

    If the safety requirements define the destination we want to reach, the safety management plan sets out how to reach it.

  • Safety stakeholders.

    Safety management is most successful when the decision makers have good engagement with stakeholders from an early stage of a project. The stakeholders must be identified, and then there should be consultation to understand their requirements, with support where necessary from subject matter experts.

  • Safety monitoring and audits.

    There is never certainty that the risks of accident occurrence have been fully controlled or that a positive safety culture is prevalent within an organization. The non-occurrence of system accidents or incidents is no guarantee of a safe system. Safety monitoring and safety audit are the methods used to ensure that the “safety system” does not decay but is continually stimulated to improve the methods of risk control and safety management.

  • Safety compliance assessment and verification.
    • Safety compliance assessment is concerned with checking whether the system achieves, or is likely to achieve, the safety requirements. It uses both design analysis and auditing techniques. If the requirements are not achieved, then corrective action has to be taken and the safety must be reassessed.
    • Safety verification aims to provide assurance that the claimed theoretical safety characteristics of the system are achieved in practice. This will involve reviewing all safety incidents that occur and testing that safety features operate as they should.

Test Management Considerations in a Safety Lifecycle Process

As a test manager you must expect—even demand—to be involved in all of the above phases of the lifecycle. Let’s take a closer look at these phases and what your considerations could be per phase.

  • Prerequisites to successful safety management.

    Think of how to organize design reviews—by the test team or by an external (independent) party—and how to organize risk assessments. In fact, determine the general quality assurance approach you want to use.

  • Setting safety requirements.

    The requirements (functionality, security, performance, usability, etc.) are the test basis. This is the time to start reviewing and executing the risk assessment. Obviously, safety-critical systems pose some special challenges, including the need for additional testing for conformance to regulations, certification, and published standards. For tracing and tracking purposes, don’t forget to implement and maintain traceability matrices, which are very helpful aids.

  • Safety management planning.

    The test team will spend significant time on reviewing and verifying documentation, including test documentation. Allocating both time and people with needed skills and knowledge is key. The allocated test time should be part of the overall project plan and be clear to everyone involved.

  • Safety stakeholders.

    Make an inventory of your principal stakeholders. Involve them as much as possible in the testing activities, when executing the risk assessment and when determining the exit and acceptance criteria.

  • Safety monitoring and audits.

    Monitor the test safety aspects by executing audits, and give advice based on the results of those audits. Sometimes compliance with industry-specific regulations and standards may influence some test aspects:

    • The level of (test) documentation required
    • Which (test) tool must be used
    • The thoroughness of testing, whether automated or not
    • The level of code and requirements coverage
    • The manner of defect classification
    • Whether all defects must be documented or not
  • Safety compliance assessment and verification.

    During test execution, defects will occur. Of course, these defects need to be analyzed and probably resolved. The correction of a safety defect can influence the functioning of the system, and vice versa: a correction of a functional defect can influence safety. In order to overcome this problem, the impact analysis and the corrective actions should be centralized. As a test manager, be aware that a corrective action for a safety defect can lead to a functional retest.

7.4Release Advice and Considerations

Learning objectives

LO 8.4.1

(K4) Analyze the business context with respect to deployment, installation, release management, and/or product road map and determine the influence on testing.

As a test manager, it could be difficult to give accurate release advice. Often, you must look at it from two angles—the quality of the software, and considerations of the product such as market demand, ease of maintenance, or ease of installation. In this section, we’ll take a closer look at both angles.

7.4.1Release Advice

Often, the release advice is created at the end of the test execution stage. The purpose of the release advice is to provide the client and other stakeholders with a level of insight into the quality of the software that will allow them to make informed decisions on whether the software could be released. The information in the release advice should not actually come as a surprise to the client. She has been kept abreast of developments relevant to her by means of reliable progress reports and, where necessary, risk reports. In order to supply the client with the information necessary at this stage, the release advice must cover at least the following subjects:

  • Release recommendation.

    A recommendation as to whether, from the point of view of the testing, it would be advisable to release the software. The final decision, however, on whether or not to release the software does not lie within the test process. Many more factors are at work here, other than those relating to the test process. For example, political or commercial interests that make it impossible to postpone the release, despite a negative release advice, should be considered. In the next section this will be discussed in more detail.

  • Obtained and unobtained results.

    Which test goals have been achieved and which have not, or only to a certain degree? On the basis of the test results, the test manager gives her opinion and advice on the test goals set by the client. It is also indicated whether the exit criteria have been met. The number and severity of the open defects play an important role here. Per defect, it is indicated what the consequences are for the organization. If possible, risk-reducing measures are also indicated, such as a workaround, allowing the software to be released without the defect being resolved.

  • Risk estimate.

    At the beginning of the test process, an agreement is made with the client about the extent to which product risks will be covered, and with what degree of thoroughness. For various reasons, it may be decided to cover certain parts less thoroughly with testing than the risk estimate indicates. Moreover, during the test process, all kinds of changes are still usually being made to the original strategy; additionally, the original risk estimate has possibly been adjusted, perhaps resulting in additional or different risks. The test manager points out which characteristics or software parts have not been tested or have been less thoroughly tested than the risks justify and so present a higher risk. The associated consequences are also shown.
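The role that exit criteria and open defects play in the release advice can be made tangible with a small sketch. The severity thresholds below are invented examples; in practice they would be agreed with the client at the start of the test process:

```python
# Sketch: checking agreed exit criteria against the open defect counts.
# The thresholds are invented examples, not a standard.

open_defects = {"critical": 0, "major": 2, "minor": 7}

exit_criteria = {
    "critical": 0,   # no open critical defects allowed
    "major": 3,      # at most 3 open major defects
    "minor": 10,     # at most 10 open minor defects
}

def exit_criteria_met(defects, criteria):
    """Return (met, violations): which severities exceed the agreed maximum."""
    violations = [sev for sev, limit in criteria.items()
                  if defects.get(sev, 0) > limit]
    return (not violations, violations)

met, violations = exit_criteria_met(open_defects, exit_criteria)
print("release advisable from a testing point of view:", met)
```

A check like this only feeds the recommendation; as stated above, the actual release decision lies outside the test process, and any violation would be reported together with its consequences and possible risk-reducing measures.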

7.4.2Release Considerations

As opposed to the release advice, the test manager may or may not be able to have a voice in release considerations. This is often the domain of the business, marketing, and product managers. The test manager does, however, need to be aware of considerations for the different release methods, products, and customers. A few examples of possible considerations are the market demand, ease of maintenance, and the ease of installation.

Market Demand

In one of Leo’s projects as a test manager, he worked for a bank where an Internet banking application was developed. At the time the software had to be released into production, it turned out that the front end and back end of the system couldn’t communicate with each other. Obviously, he gave negative release advice. But the client ignored this, because they knew other banks were also working on this feature. And if those banks were able to offer this feature to their customers before “his” bank could, the bank would surely lose customers. As a temporary measure, workers were hired to enter the incoming transactions into the back-end system and vice versa. This was not an exceptional situation, because depending on market demands it may make sense to release a product before a feature is complete. Some obvious considerations are that competitive products are already in the market, that a problematic existing product urgently needs a replacement, or that the organization wants to get early feedback on a new concept. Whatever the reason, the test manager may have to adjust the exit criteria for the testing phases to support this type of partial release. Exit criteria adjustments should be made only with the approval of the project team, as they can have quality and support implications. As in the example above, a real-life simulation with the temporary workers was carried out before releasing the software into production.

Ease of Maintenance

In the past we were always a little hesitant about releasing faulty software into production. Of course, that depended on the type of software; we wouldn’t do it with safety- or mission-critical software. But nowadays in some environments it is hardly an issue anymore. When we use apps on our mobile phones, we often have to deal with a few issues. But we also know that we’ll get a new version of the app in a matter of hours or days. So a release decision may consider the ease of delivering fixes to the customer. Depending on the type of software, fixes may be automatically downloaded, may require media delivery, or may be available for the customer to download as needed. The ease of both obtaining and installing fixes may determine the acceptability of releasing a product prior to final testing approval.

Although it may be very easy to install fixes, you still have to think about risks such as—when the product fails in production—damage to the company’s image, loss of income, damage claims, or unhappy customers. As in the app example, software may be released in this way on the assumption that the user would rather deal with some defects that will be resolved quickly than wait for a more solid release. Of course, this decision depends—again—on the type of software. This applies more to non-critical software than to safety- or mission-critical software. Although the decision to release the software into production is made by others (e.g., a business, marketing, or product manager) rather than the test manager, the test manager needs to be able to supply accurate information regarding the outstanding risks, the known issues, and an approximate schedule for the delivery of fixes.

Have you ever been in the situation where the software supplier fired one software fix after another at you? We have. These need not be a problem as long as the fixes are easy to install. On the other hand, installing fixes may require additional coding by the customer’s development team, making the customer less likely to install changes and fixes quickly. As a test manager in a project where a software package was implemented by a third-party software supplier, Leo and his team were sent one software update after another by the package supplier. The problem they had was that the package was incorporated in the client’s software landscape with many tailor-made interfaces. The almost continuous stream of software updates meant that the client’s software engineers were almost constantly busy adjusting the various interfaces. In the end, the client refused to install all these updates. The consequence was that support by the software supplier became difficult because—in the end—the supplier had a lot of customers running many different versions of the software.

Ease of Installation

Sometimes it can be hard to install a fix or a new version of the software, and sometimes it is a piece of cake. Some considerations involve the consequences of the installation of a maintenance release, a series of fixes, and the deployment mechanism.

  • Maintenance release.

    When considering a maintenance release, the test team must understand any effects the fixes will have on the existing software and user organization.

    • Is a data conversion required? If so, the test team has to test this conversion as well.
    • Will additional coding in adjacent software systems be required? If so, more testing will probably be required by the test team.
    • Will the customer need to adjust procedures? Again, if so, this has to be tested before the maintenance release is released to production.
    • Will the customer need to retrain their users because of a change to the user interface? Maybe the user needs additional training in the test environment.
    • Will the customer incur downtime during the installation and implementation of the fix? This is something that must be taken care of by the project or business manager.
  • Series of fixes/updates.

    Some updates are easy to install, and others are difficult. If you are familiar with installing updates, you probably know that the supplier assumes you have already installed all previous updates; if you haven't, you have probably experienced problems when installing the newest one. Right? So, customers often should not be able to pick and choose which updates to install: the newest update might not work on a system that skipped earlier ones, and selective installation also makes support of installation and testing more complicated. To avoid customers accepting some updates while ignoring others, the supplier sometimes combines the updates into one cumulative package that must be installed as a whole. This can lead to a greater testing effort.

  • Deployment mechanism.

    Cool, the update is tested and works perfectly according to whatever specification. But that was still in a testing environment. As a test manager, don't forget to pay attention to testing the deployment mechanism, preferably together with the operations people. Ensure that the correct version of the update, in the required format, reaches the proper system. This may include testing the installation procedure, any automated wizards, various upgrade paths, deinstallation procedures, and the mechanisms used to actually deliver the fix to the user.
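The update-sequencing and deployment checks described above can be sketched in a few lines of code. This is a minimal illustration, not part of any real installer: the function names, the integer update numbers, and the version strings are all assumptions made for the example.

```python
# Hypothetical checks for an update deployment. Two concerns from the text:
# (1) the supplier assumes updates are installed strictly in order, so a
#     customer must not skip prerequisites; and
# (2) after deployment, the target system should report exactly the version
#     the test team signed off on in the test environment.
# All names and the numbering scheme are illustrative assumptions.

def missing_prerequisites(installed, available):
    """Return the updates that must still be applied before the newest one.

    `installed` and `available` are lists of update numbers; updates are
    assumed to be cumulative and applied in ascending order.
    """
    newest = max(available)
    return sorted(u for u in available if u < newest and u not in installed)

def verify_deployment(expected_version, read_installed_version):
    """Compare the version reported by the target system with the version
    that was approved in the test environment."""
    actual = read_installed_version()
    return actual == expected_version

# Usage: a customer skipped updates 3 and 4, so update 5 must not be
# installed on its own.
print(missing_prerequisites(installed=[1, 2], available=[1, 2, 3, 4, 5]))

# A stand-in "target system" that reports its version for the smoke test.
print(verify_deployment("5.0", lambda: "5.0"))
```

A real deployment test would of course read the installed version from the actual system (a registry entry, a version file, an API call) rather than from a lambda, but the shape of the check is the same.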

7.5Sample Exam Questions

In the following section, you will find sample questions that cover the learning objectives for this chapter. All K5 and K6 learning objectives are covered with one or more essay questions, while each K2, K3, and K4 learning objective is covered with a single multiple-choice question. This mirrors the organization of the actual ISTQB exam. The number of the covered learning objective(s) is provided for each question, to aid in traceability. The learning objective number will not be provided on the actual exam.

The content of all of your responses to essay questions will be marked in terms of the accuracy, completeness, and relevance of the ideas expressed. The form of your answer will be evaluated in terms of clarity, organization, correct mechanics (spelling, punctuation, grammar, capitalization), and legibility.

Scenario 7: Experiment

The software development department of an organization would like to run an experiment involving two different software development approaches. The department is split into two groups. Both groups will develop the same software product, but one group will use a Waterfall approach and the other an Agile approach (Scrum).

Question 1

LO 8.2.1

Refer to Scenario 7.

Compare and elaborate for both the Waterfall and the Agile (Scrum) approach on the moment of involvement as a tester, level of involvement as a tester, and level of supplied system documentation.

Question 2

LO 8.2.2

A Scrum team has adopted the following approach:

  • A sprint lasts for two weeks.
  • The sprint starts with a team of architects designing the system for three days.
  • Then this team of architects is replaced by a team of software engineers who will write the software for five days.
  • Finally, the test team comes in and tests the product for two days.

The test team is complaining they can’t get the job done.

What should be done to make Scrum work in this situation?

  1. Add more professional testers.
  2. Extend the sprint length up to four weeks.
  3. Integrate test activities in the development process itself.
  4. Remove user stories from the “doing” column on the Scrum board.

Scenario 8: Implementing COTS Software

An organization is experiencing problems with its customer relationship management (CRM) software, which was designed and built a long time ago. The organization has decided to look for a commercial-off-the-shelf (COTS) replacement for the CRM software. After a selection procedure, they picked one that would suit their needs. They visited a lot of other organizations using the same CRM COTS software in order to hear more about their experiences with it. All these organizations were unanimous in their experience: they were extremely satisfied.

Question 3

LO 8.3.1

Refer to Scenario 8.

Which risks do you see or do you not see when implementing this CRM COTS software?

Question 4

LO 8.4.1

Near the end of the software development project the test manager provides release advice to the project team. Although the test manager gave negative release advice (due to some severe defects), the software was released into production anyway.

What could be a reason to ignore the test manager’s release advice?

  1. A safety-critical product does not allow any delays.
  2. Being first with this product on the market.
  3. The product can be installed very easily.
  4. The test manager has a history of giving incorrect advice.

1If you imagine the letter T being a representation of a person’s skills, the vertical part of the T represents the core skill or expertise. In testing we would naturally suggest this is the core skill of testing (of which there are many variations and subskills). The horizontal part of the T represents the person’s ability to work across multiple disciplines and bring in skills and expertise outside of the core skills (like designing or programming). The simplest definition of the characteristics of a T-shaped person is given by Jim Spohrer: “A T-shaped person is that they are better at team work than I-shaped people. I-shaped people are good at talking to other I-shaped people like them. T-shaped people can talk with I-shapes in their area of depth, but they can also have productive conversations with specialists from many other areas. Beyond productive conversations, T-shapes also have empathy or an attitude that makes them eager to learn more about other areas of specializations.” Want to know more about T-shaped professionals? Refer to www.service-science.info/archives/3648.

2Although Royce presented the Waterfall model as an example of a flawed, nonworking model, it became a very popular software development model and is still one of the most widely used models today. If you're interested in his paper, refer to www.cs.umd.edu/class/spring2003/cmsc838p/Process/waterfall.pdf.

3www.agilemanifesto.org. For US readers for whom the word manifesto sounds rather pretentious or stilted, it is a common term in Europe that is used to describe what would be called political platforms or position papers in the US.

4Kanban: Successful Evolutionary Change for Your Technology Business, by David J. Anderson, and Donald G. Reinertsen. In this book you’ll find answers to questions like: What is Kanban? Why would I want to use Kanban? How do I go about implementing Kanban? How do I recognize improvement opportunities and what should I do about them?

5Definition as given by the authors of “The Scrum Guide,” Ken Schwaber and Jeff Sutherland. www.scrumguides.org/docs/scrumguide/v1/scrum-guide-us.pdf. This is where it all started. If you haven’t read it yet, just do it, so you know what Scrum is all about. It is less than 14 pages, so the size of the document cannot be an excuse for not reading it.

6Extreme Programming Explained: Embrace Change, 2nd Edition (The XP Series) by Kent Beck and Cynthia Andres. XP principles are often seen as the basis for Scrum. For instance, user stories are mentioned in XP, but not in the Scrum guide, although many people mention user stories and Scrum in the same breath.

7Graaf, Lormans, and Toetenel present in their paper, “Software Technologies for Embedded Systems: An Industry Inventory,” some results of the MOOSE (software engineering MethOdOlogieS for Embedded systems) project. MOOSE is a project aimed at improving software quality and development productivity in the embedded systems domain. One of the goals of this project is to integrate systems and software engineering, requirements engineering, product architecture design and analysis, software development and testing, product quality, and software process improvement methodologies into one common framework and supporting tools for the embedded domain. For more reading refer to virtual.vtt.fi/virtual/proj1/projects/moose/docs/graaf_in_template_springer.pdf.

8Are you interested in practical lessons that can be applied for building safety-critical systems based on what is currently known about building safe electromechanical systems and past accidents? If yes, refer to Nancy Leveson’s book Safeware: System Safety and Computers.

9The Federal Aviation Administration (FAA) is the national aviation authority of the United States, with powers to regulate all the aspects of American civil aviation. These include the construction and operation of airports, the management of air traffic, the certification of personnel and aircraft, and the protection of US assets during the launch or reentry of commercial space vehicles. Refer to www.faa.gov.

10IEC stands for International Electrotechnical Commission for all electrical, electronic, and related technologies. IEC uses the following definition of functional safety: “Freedom from unacceptable risk of physical injury or of damage to the health of people, either directly, or indirectly as a result of damage to property or to the environment.” Learn more about this at www.iec.ch/functionalsafety.

11When you are interested in an introduction to system safety management concepts, terms, and activities, refer to the introduction booklet as written by the British Ministry of Defence: www.gov.uk/government/uploads/system/uploads/attachment_data/file/27552/WhiteBookIssue3.pdf.
