6      PREPARING FOR UAT – PLANNING

We need to begin our UAT exercise as we begin any important exercise – by deciding what it is we are trying to achieve. By the time we get to UAT you may think this should already be well defined, but remember that change is the curse of planning. There will have been many diversions from the original plan and requirements – both accidental and deliberate. Right here and now is where we must finally decide what we believe the project’s business objectives are – the business intent – and what the system to meet those objectives must look like.

The way we shape, plan and prepare for UAT will determine how effective it is. In this chapter we address all the key elements of planning for UAT. Those key elements can then be packaged into a UAT test plan. We have not covered the generic elements of routine test planning, which can be found in almost any testing textbook and which are well covered by standards such as IEEE 829, preferring to focus attention on those things that are unique and important to UAT.

Topics covered in this chapter

•  Deciding what we want to achieve

•  Acceptance criteria

•  UAT objectives

•  Entry criteria

•  Defining the testing we will need

•  Creating a test basis for UAT

•  Setting up the test management controls

THE UK PASSPORT AGENCY – THE IMPORTANCE OF PLANNING

In the summer of 1999 the UK Passport Agency brought in a new Siemens computer system without sufficiently testing it or sufficiently training staff on how to use it. At the same time an unusually high demand for passports was driven by a change in the law, which meant that children under the age of 16 required their own passport when travelling abroad. The combination of the two factors resulted in a backlog of 565,000 passport applications and the Home Office was forced to pay out millions in compensation for missed holidays and staff overtime.

The National Audit Office report into the passport delays found there to be three main causes:

•  a lack of adequate planning and testing of the likely time needed for staff to learn the new processes;

•  insufficient contingency planning, which led the pilot test phase to be extended before the issues it raised had been fully overcome;

•  a lack of communication with the general public.

Despite the fact that the pilot implementation showed that the new system produced lower volumes of passports than expected, it was decided to roll out the system to the rest of the passport office locations. As part of the pilot the old system had been removed and the new system installed. The decision to implement, or, as the UK Passport Office described it, to ‘extend the pilot’, was made because the old system was deemed outdated and potentially not year-2000 compliant, and returning to it was deemed costly and risky. What was not known at the time was that the demand for passports had been greatly underestimated, and that when demand grew, bringing in extra staff and overtime would not have the expected positive impact on productivity because the extra human resources could not overcome some of the limitations of the system.

One of the key recommendations issued from the National Audit Office was:

Project managers should plan for adequate testing of the new system before committing to live operation, in particular for staff to learn and work the system.

The difficulties experienced at the UK Passport Agency are relevant to the effort of planning in a number of ways. The example illustrates the importance of:

•  recognising the complexity of implementations and the outside influences that affect decisions about the extent of testing required or when to cease testing;

•  understanding the difference between testing and implementation;

•  understanding the acceptance criteria and the potential risks that arise when acceptance criteria are changed.

DECIDING WHAT WE WANT TO ACHIEVE

Before we commit resources to doing any testing we need to be sure we are testing the right things in the right way. We have some deliverables from the development project as a starting point:

•  a set of business requirements in the form of an RS that most likely will have changed;

•  some testing done by the development team and perhaps also by some independent testers that may or may not be well documented;

•  a system that should be complete and ready for UAT;

•  a trained UAT team ready to begin its work.

This is our starting point, but what is our end point? How will we know that we have finished UAT? Our next step is to decide these things so that we can use them to identify how good our starting point is and to plan a way to get to our desired end point.

ACCEPTANCE CRITERIA

We have to have a way to decide when to stop testing and release the system into operation. The decision is not ours to make, but the decision maker(s) will need some good objective information to help when making a judgement and it will be our job to provide as much of that information as we can.

Acceptance criteria are measures of what the sponsor and users want that we can use as a target to aim for at the conclusion of UAT, but they also act as a basis for gathering and reporting information.

The obvious acceptance criteria might be that the system works correctly, has no defects and is ready for release on the planned release date. There is certainly nothing wrong with those criteria – but what if they cannot all be achieved? If some of the criteria are unachievable in a reasonable time frame, how do we decide what to do? This is why release decisions can become more emotional than rational: the objectivity provided by setting acceptance criteria is compromised if any one criterion proves too difficult to achieve, and the rational discussion that generated the criteria in the first place has to be repeated while the participants are under the stress of delivery deadlines.

The statistics from The Standish Group and others suggest that there is quite a high likelihood that we will not achieve all three of the ‘obvious’ criteria together. We will have to make decisions about what to do when we are under pressure from users and from the business to get the system into service as fast as possible, and that is why we need to set realistic acceptance criteria well before UAT begins. By realistic we mean recognising the likelihood of not achieving an ideal outcome and planning to limit how far we allow ourselves to deviate from the ideal. If criteria are based on the worst scenario we can accept, they become realistic because everyone involved in the decision knows that failing to achieve them is not just letting go of the ideal but opening up the real possibility of failure.

How do we identify realistic criteria? We start from the ‘ideal’ criteria:

1.  The system works correctly.

2.  The system has no defects.

3.  The system is ready on launch date.

Now imagine that we are at launch date, UAT is not yet completed and there are 20 defects outstanding. What should we do? It all depends, of course. There are two extremes. At one end the time pressure is so great that the other criteria have to be ignored, so we release and deal with the problems as they arise. At the other we are under no time pressure, so we complete UAT and fix all the defects before we release. Neither of those extremes is likely to be the reality, so we need to decide how much room for manoeuvre we have in each of the three key criteria.

If we take the time criterion, we need to understand how critical the release date is. What will happen if the system is not released on that date? What costs will be incurred? How will business be affected? How much delay could be tolerated? How quickly would the costs and problems ramp up with a delayed release date?

For the defects criterion we need to decide on criticality. Some defects are more serious than others. We could split defects into three types: critical, serious and routine:

•  A critical defect is one that will prevent the system from delivering its core capabilities or achieving its business benefits.

•  A serious defect is one that is not critical but will impair performance significantly. There may be ways to offset the problems – ‘workarounds’.

•  A routine defect is one that can be fixed routinely because it does not significantly affect the performance of the system.

Obviously we cannot tolerate any critical defects, but we may be able to tolerate some serious defects and many routine defects. Bear in mind that the numbers are important. Releasing with 10 routine defects should not have much impact on the system; hundreds of routine defects, on the other hand, may have some impact and may also take some time to correct. So the scale goes from zero defects of any kind at one end, to zero critical defects, ‘a few’ serious defects and unlimited routine defects at the other end. Of course three levels of criticality can be expanded to five, seven or however many you need.
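To make such thresholds concrete, they can be written down as a simple, executable check. The sketch below is purely illustrative – the severity names and limits are hypothetical examples, not values prescribed by any standard, and real limits would be agreed with the stakeholders:

```python
# Illustrative sketch: evaluating defect-based acceptance criteria.
# The severity categories and limits are hypothetical examples only.

def meets_defect_criteria(defect_counts, limits):
    """Return (ok, breaches) given defect counts and agreed per-severity limits."""
    breaches = [
        severity
        for severity, limit in limits.items()
        if defect_counts.get(severity, 0) > limit
    ]
    return (not breaches, breaches)

# Example thresholds agreed before UAT begins (hypothetical values).
limits = {"critical": 0, "serious": 5, "routine": 50}
counts = {"critical": 0, "serious": 3, "routine": 12}

ok, breaches = meets_defect_criteria(counts, limits)
print(ok, breaches)  # prints: True []
```

Tracking defect counts against such a table during UAT means the release discussion can start from facts rather than impressions.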

The functionality criterion is simpler. There is obviously some functionality (and some non-functional behaviour) that is critical to the system and, by the same token, there is some that is almost cosmetic in nature. It is common to consider three levels of criticality for system functionality:

•  Essential functions are those without which the system cannot achieve its business benefits.

•  Important functions are those that are not critical but the system will not perform as expected without them (there may be ways of working around the missing functions, though there might be some training or retraining needs as a result).

•  Cosmetic functions are definitely not essential but they may enhance usability or save time or effort.

One simple way to deal with this criterion is to prioritise functions for testing so that the essential functions are tested first, but we may still have a less than ideal scenario when planned testing time is exhausted.

Now there are three interacting criteria to consider and any decision impacts all three. If, for example, we decide to fix all the critical defects pre-release, it will have an impact on testing because the fixes will need to be tested.

There is no simple formula that provides the ideal set of release criteria, nor is there a guaranteed outcome from any choice that we make at release time. The point here is that it is essential to think about acceptance criteria well before they are used so that everyone understands the relative importance of each criterion, data can be collected and tracked to ensure that performance against each of the criteria can be accurately reported, and rational decisions can be made when they are needed.

UAT OBJECTIVES

The other side of acceptance criteria is the strategy they generate. Acceptance criteria that make system functionality paramount will encourage a strategy that puts progress with testing ahead of defect corrections, with the possible impact of delays while critical defects are fixed. Acceptance criteria that make the delivery deadline paramount will encourage prioritisation and early defect fixing to minimise the likelihood of delay, but possibly at the expense of completeness of the system at release. The acceptance criteria alone provide a goal but they do not define a strategy.

Acceptance criteria define the status of the system at release but they are not helpful in deciding how to achieve that status. We need to consider how best to manage UAT to achieve the acceptance criteria: we need a UAT strategy. As well as achieving a goal, a strategy needs objectives that define the way the goal is approached. In the case of UAT we have to consider the reasons we do UAT – to reduce risk, to gain confidence, to assess the readiness of the system for live use and to facilitate the transition to live use. Each of these is an objective that we could track, for example by checking off risks against a risk register or by measuring readiness against a checklist.

As with acceptance criteria we need a mix of objectives that enable us to approach the acceptance criteria in a way that does not neglect any aspect of UAT but ensures that attention is paid to each aspect by measuring progress towards it during the testing. All four of the primary objectives will be likely to feature in a UAT strategy and generate key milestones that enable the team to track progress towards the acceptance criteria.

ENTRY CRITERIA

Recall now our deliverables from development:

•  a set of business requirements in the form of an RS that most likely will have changed;

•  some testing done by the development team and perhaps also by some independent testers that may or may not be well documented;

•  a system that should be complete and ready for UAT;

•  a trained UAT team ready to begin its work.

Are these all in place and in a state that we can work with? Are the project and the system ready for UAT?

By way of example, imagine what would happen if we began UAT with 20 critical defects outstanding – the system code would be changing frequently and in critical areas so every test we did would be invalidated by a change within days and we would have to repeat it. This would be an expensive waste of time, but worse still, it would make change control so complex we would be at risk of losing control of the state of the system code or the testing.

To avoid this kind of problem we need some entry criteria for UAT to give UAT a reasonably ‘clean’ start. Like acceptance criteria these will typically be based on outstanding defects, any outstanding testing and any issues raised by the development team and not yet resolved (such as the inability to complete system testing because no test environment was available).

Like acceptance criteria, entry criteria can be simple. Here is a possible set of criteria:

•  All testing up to system testing is completed.

•  No defects are outstanding.

•  No issues are unresolved.

These are all eminently sensible criteria. However, we need to be a little more pragmatic and flexible to avoid logjams. So perhaps more realistic criteria might be:

•  All testing up to system testing is completed with no outstanding incidents.

•  No critical defects.

•  Not more than five serious defects.

•  Not more than ten routine defects.

•  No issues affecting testing are unresolved.
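Entry criteria of this kind lend themselves to a mechanical check at handover. The sketch below is illustrative only – the field names are invented, and the limits mirror the example list above, which would be negotiated per project:

```python
# Illustrative UAT entry gate; field names and limits are hypothetical
# examples mirroring the criteria listed in the text.

def uat_entry_ready(status):
    """status describes the handover state reported by development."""
    checks = {
        "system testing complete": status["system_testing_complete"],
        "no outstanding incidents": status["open_incidents"] == 0,
        "no critical defects": status["defects"]["critical"] == 0,
        "no more than 5 serious defects": status["defects"]["serious"] <= 5,
        "no more than 10 routine defects": status["defects"]["routine"] <= 10,
        "no unresolved issues affecting testing": status["blocking_issues"] == 0,
    }
    failed = [name for name, passed in checks.items() if not passed]
    return (not failed, failed)

handover = {
    "system_testing_complete": True,
    "open_incidents": 0,
    "defects": {"critical": 0, "serious": 2, "routine": 7},
    "blocking_issues": 0,
}
ready, failed = uat_entry_ready(handover)  # ready is True, failed is []
```

The value of writing the gate down, even informally, is that a failed handover produces a named list of shortfalls to negotiate rather than a vague sense of unreadiness.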

Even these might be negotiable, but they do set a target for the development team to achieve for a handover to UAT. Equally important, they give the UAT team some leverage. If a project has had problems and is running late, there could be a lot of pressure to just get UAT done so that we can get the system into service. The pressure is understandable, but what value would UAT add under these circumstances? None at all. It would simply be a case of completing an activity because it is on the project plan.

At this critical stage in any project it is vital that decisions take into account all their implications for the project, for the system, for the users and for the business. Entry criteria have no absolute power and cannot prevent bad decisions, but they do at least make the nature of the decisions visible and they enable the risks associated with a decision to be assessed.

Once we have achieved the entry criteria we are ready to begin the testing, or at least the planning.

DEFINING THE TESTING WE WILL NEED

Our work on acceptance criteria and UAT objectives has provided us with targets. The acceptance criteria tell us what the business needs are and the UAT objectives tell us what UAT needs to achieve.

Now we can use our testing skills to decide how we will achieve the objectives in a practical and effective way. We define a UAT strategy to identify what needs to be tested and to what level, and from the strategy we can build a plan that we can execute.

At this stage much depends on the size and complexity of a project. In some cases a set of UAT objectives, a strategy and a plan might seem like overkill. If a project is genuinely small and simple it may be feasible to define some acceptance criteria and to use these directly for defining the tests, missing out all the intermediate steps. These are decisions that can only be taken at project level. If you have understood the purpose of a UAT strategy and plans then you will be in a good position to decide what level of detail is needed in these stages or even whether they are needed at all. Be careful, though, that your judgement is a rational one and not based on time, management or any other external pressure. It is always nice to be popular, especially with superiors, but popularity can be very short-lived if the outcome is bad for the business.

Defining a UAT strategy

A UAT strategy is our way of combining acceptance criteria and UAT objectives, all based on stakeholder priorities and concerns, to arrive at a set of milestones and a set of measures by which we can determine progress at each step. From there we can begin to define the testing activities we will need to achieve each of the milestones.

As testers we will have our own view on what the testing priorities should be. Should we test the most important functions first or the most frequently used? Should we test that the most serious risks have been averted or start with the easy low-risk areas to build up some early confidence?

Defining a strategy is about confronting all the choices and making a decision on priorities that gives us a path that delivers value.

Figure 6.1 is a very rough schematic but shows how cost and value relate to each other in an IS project. The development phase is all cost. In an ideal delivery – on time, on budget and UAT successful – the system then begins to add value, first recovering the cost of development and then yielding net benefits to the business over time. In a late and over-budget delivery, but one where testing and UAT are deployed effectively to ensure the best outcome, the total cost is higher and the delivery of net benefits is delayed, but the eventual success of the system is still assured. In the third case, where a late and over-budget development is forced into an on-time delivery by cutting the time and cost of UAT, the value-add line is flatter: problems left unresolved before delivery plague the system’s effectiveness, slow the ramp-up of benefits, delay both the recovery of costs and the point where net benefits are achieved, and reduce the eventual benefits to the business.

Figure 6.1 The cost and value of a system


Of course this is just a schematic and has no specific values assigned. It merely identifies the nature of the risks. The final curve may not be as flat as it is drawn – but equally it may be flatter. One effect not shown in the schematic is the impact of release before issues have been resolved. The impact of this could be a further delay and cost as critical issues are addressed to enable users to begin building value with the system.

The purpose of the schematic is to invite the very important discussion about the implications of release decisions. The outcome of the discussion will be a strategy that achieves an acceptable line on the diagram, and that will incorporate an appropriate UAT strategy.

High-level test planning

Once a UAT strategy is settled the high-level plan can be created to define how to test to the required level, meaning what tests are required and how they should be organised for an effective and efficient UAT.

Identifying test environments

Tests will typically need a test environment that is as close as possible to the live environment. For each test we need to identify what test environment will be needed and how it needs to be set up to support the tests.

CREATING A TEST BASIS FOR UAT

We have three useful methods for discovering what updates or extensions may be needed to the RS: reviews, structured interviews and observation.

Reviewing existing requirements

Chapter 2 explained the reasons why the RS is almost certainly no longer up to date or complete by the time the UAT preparations begin. Even if it was possible for the business representatives to define perfectly what they need from the IS at the start of a project, mistakes will creep in as more people interpret the requirements based on their own assumptions and as the needs of the business change over time. Changes may also have been made to the system that were not updated in the RS.

We need to review the original requirements, together with any changes that have been made to the RS, to determine whether they accurately represent what is being delivered and whether they now reflect user expectations.

Review team

The review team needs to include the author of the original requirements if they are still available, the individual(s) who will be designing the tests for UAT and the UAT leader. In fact it would be beneficial for all the UAT team members to be involved in the requirements review: they will gain familiarity with the requirements and the rationale behind the system that has been built, and they can bring their user perspective to bear on the review to identify any changes or additions that would make the UAT specification more practical and realistic as a test basis.

Preparation

Strictly speaking no preparation is needed for a walk-through, but there is a lot of value in reading and absorbing the content of the RS, and this is an exercise that will have to be done at some stage to enable testing to proceed smoothly. Better to do it now, as preparation for testing, when the effort can be turned to immediate and good value.

The preparation is simply to read the document, taking careful note of any aspects that appear to be incomplete or incorrect, noting any questions you have and preparing your own notes as an aide-memoire for the review meeting so that you do not forget important points in the heat of the moment. A checklist may help the reviewers, especially those new to UAT, focus on the types of issues they will need to look out for, such as:

•  Problems: anything that the reviewer thinks would not work in real life.

•  Inaccuracies: anything that the reviewer thinks is a mistake or misinterpretation of the requirement.

•  Ambiguities: any requirements that can be interpreted in more than one way or cannot easily be tested.

•  Omissions: anything the reviewer feels is missing from existing requirements, any requirements that are missing from the document and any other omissions such as the lack of unique IDs.

•  Clarity: spelling, grammar and quality of written text are not important in themselves but they are very important if they tend to confuse the reader or make reading the document difficult.

A checklist can also be used to convey the important message that the exercise is focused on error detection, not correction. This means that the reviewer does not need to know what the requirement ought to be, only to know or suspect that it is incorrect.

The aim of the review is to identify important deficiencies in the RS that may have led to errors in implementations (to help target testing) and to identify gaps that make it likely the implemented system will not meet the true business requirements.

Conduct

The walk-through should be conducted in a relaxed and informal way but with some control over the time. Normally the author would lead a walk-through, but for UAT it might be advantageous to ask the UAT leader to chair the meeting. Reviewers should be encouraged to raise questions and make comments freely and a scribe should be appointed to capture all comments that lead to any actions (such as updating the specification). Discussion can be free as long as time is not being wasted. It is important to adopt the right tone so that the author in particular does not feel threatened – always remember you are commenting on a document and not on its author. The comments should be impersonal and objective, with no suggestion of criticism or blame.

The chair may have to step in from time to time to ensure the meeting progresses and does not get bogged down in detail – a useful rule being that the review should identify issues but not try to resolve them. In practice the simplest ones can probably be resolved there and then, but discussion needs to be limited if a quick resolution is not found – the issue can then be documented for follow-up later.

Reviews that last more than a couple of hours tend to become quite tiring and less effective as time goes on. If it is clearly impossible to complete the requirements review in two hours, a second review meeting will need to be convened to complete it.

Follow-up

Follow-up for a requirements review will have two components:

1.  The author completing any actions or changes agreed and documented at the meeting. These will be primarily to enable test planning and test design.

2.  The UAT leader taking note of any areas of the requirements where there are concerns about completeness and correctness so that testing can be targeted on these.

The review chair should take responsibility for checking with the author that any changes are done within a reasonable time frame, bearing in mind that design of the tests for UAT cannot progress until any significant changes have been made to the requirements documentation.

When requirements are being reviewed as a prelude to UAT, the important defects in the document are those that will affect testing.

Exercise 6.1

Case study: requirements review

One of the Excelsior plug-ins allows users to request approval for absences via the system. Table 6.1 shows some of the requirements that this absence request and approval functionality was built on. Using the criteria for the walk-through checklist and the information provided in Chapter 2 about what makes a good requirement, note down any issues you can find with the requirements as if you were part of a review.

Table 6.1 Absence requirements


Our answer is in Appendix B.

Interviewing stakeholders

The best way to find out what user expectations are is to ask the users! The problem we may run into is that expectations may have moved on a long way since the requirements were first written, and not necessarily in a realistic or practical way in every case, so we have to do some careful sifting.

Other stakeholders, especially the sponsor, can act as a balance so we need to carry out a fairly systematic review of stakeholder expectations across all the main groups – sponsor, users, managers and developers. We need to include developers to ensure that if we capture expectations that were not originally requirements, they are actually feasible and achievable in a realistic time frame. So this is a tweaking exercise, not an opportunity to rewrite the requirements. Having said that, the opportunity to air their views can be a significant driver towards getting users’ support for a roll-out, even if not all their expectations are met.

The vehicle for gathering expectations is a semi-structured interview. The structured part will ensure that everyone is asked the same set of questions to give us a consistent set of information. The unstructured part is about giving individuals the opportunity to expand on their answers and express their concerns and wishes in a more complete way.

Observing user processes

If we are not already familiar with the way users work in the organisation, or if the new system will involve significant changes to the way users work, or if the system is the first IS in the organisation, we need to understand what is currently happening so that a smooth transition can be made to the new system. The transition is not in itself part of UAT, but UAT must ensure that the system is capable of handling the way users are expecting to work.

Capturing user processes is a delicate area because much depends on how prepared the organisation is for the change. By the time UAT is being planned, users should already be aware of what will happen after the system is operational. Training should have at least begun and processes should have been mapped out. If all this has happened the transition may be well in hand and we will have access to the information we need to build tests that follow the new user processes.

In the event that transition is not as well advanced as it should be, however, we need some basis for constructing tests that can enable users to interact with the system in a realistic way. If we observe how users work now we can at least make a reasonable assessment of how the user processes will need to change to work with the new system. Observations can then be recorded in a form that facilitates test design – the ideal recording method is to construct user stories and use cases, which we turn to next.

Building use cases and user stories

User stories

As we explained in Chapter 2, user stories are not really requirements but they are a good way to capture and document some of the key user expectations. For UAT we will need to spend time with as many users of any existing system and any designated users of the new or updated system as we have time for. Capturing their views in user stories will be a relatively quick and consistent way to create a consolidated picture to supplement the RS.

We described what user stories look like in Chapter 2, but here is a quick reminder.

CAPTURING USER STORIES

User stories are used mainly with agile methods as a means of gathering requirements, but they can be valuable as a means of expressing requirements or to summarise requirements information whenever tests are being designed.

A user story is made up of one or two sentences in the everyday language of the end-user that covers what a user needs to do as part of their job function.

Example:

•  As a team member I want to be able to complete a contract so that it can be processed.

•  As a manager I want to be able to approve a contract created by my team.

The user story normally takes the form: ‘As a <role> I want <goal or desire> so that <benefit>’, although the benefit may not be stated where it is implicit. The alternative syntax is ‘In order to <benefit> as a <role> I want to <goal or desire>.’
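Where user stories are gathered in volume it can help to hold them in a small structured form so that they render consistently. The sketch below is a hypothetical illustration of the template just described, not part of any agile standard:

```python
# Hypothetical structured form for a user story. The rendered text follows
# the 'As a <role> I want <goal> so that <benefit>' template described above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class UserStory:
    role: str
    goal: str
    benefit: Optional[str] = None  # may be omitted where the benefit is implicit

    def render(self) -> str:
        text = f"As a {self.role} I want {self.goal}"
        if self.benefit:
            text += f" so that {self.benefit}"
        return text + "."

story = UserStory("team member", "to be able to complete a contract",
                  "it can be processed")
print(story.render())
# prints: As a team member I want to be able to complete a contract so that it can be processed.
```

Keeping role, goal and benefit as separate fields also makes it easy to sort or group stories by role when consolidating interview findings.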

Example 6.1 – Excelsior requirements update

The business requirements for Excelsior were captured at the start of the project in an RS. However, at least one key requirement was omitted and, since the requirements were written, circumstances have changed, which has resulted in a number of the requirements being out of date:

1.  New company rules state that all employees must have their expenses approved; this no longer excludes managers, as was previously the case.

2.  No requirements exist to allow executive assistants (EAs) to manage requests on behalf of managers in any part of the system. If this functionality were not provided, EAs would have to log on with the manager’s logon details, causing a potential audit and HR issue.

3.  No requirement was captured to identify an invoice as ‘third-party billing’, which the organisation needs to capture for reporting purposes and for managing disbursements – expenses incurred as a result of client work by a third party that may be treated differently by the taxman.

The development team is aware of the first two new requirements and the code has been written to deliver them, but the changes were not captured in the RS. The UAT team has carried out semi-structured interviews to try to capture all the current user requirements and collated and analysed the data. A meeting has been organised to create user stories based on the interview findings. These are the resulting user stories:

1.1. As a manager I want to be able to send my expenses for approval.

2.1. As a manager I want to be able to assign my account to my EA so that they can manage the request process on my behalf.

2.2. As a manager I want to be able to approve a claim managed by my EA.

2.3. As an EA I want to be able to assign my account to another EA in my absence.

3.1. As an account manager I want to be able to mark an invoice as third-party billing.

3.2. As the accounts director I want to be able to unmark a contract as third-party billing.

Use cases

Use cases are often used to capture current processes and can bridge a gap between requirements written by users in plain language and the more technical specification language. As we explained in Chapter 2, a use case describes the interactions between a role (actor) and a system in order to achieve a particular outcome, using diagrams or simple structured text. The actor can be a person or another system that our system will interact with.

We will return to the Excelsior case study in a moment. First let’s further examine how use cases are written using the relatively straightforward example of an ATM.

Example 6.2

If the system we are implementing is an ATM, the use cases should identify the high-level processes that the user needs to be able to carry out. So in the first instance we may identify that there are two things the user needs to be able to do: withdraw cash and order a statement. The initial use cases can be listed in a use case grid that provides the basis for the use case exercise.

Table 6.2 ATM use cases

[table not reproduced in this extract]

There is no one right way of writing a use case but a style that is commonly used is:

•  Title

•  Actor(s)

•  Main success scenario

    •  Step

•  Extensions

    •  Extension

Here is a primary use case for withdrawing cash.

Title: Withdraw cash

Actor(s): Bank customer (any bank), banking system

Main success scenario: Any bank customer can obtain cash from their bank account

Basic path:

1. The customer puts a bank card into the ATM.

2. The ATM verifies the card’s validity.

3. The ATM checks the country of issue.

4. The ATM requests a PIN.

5. The customer enters their PIN.

6. The ATM checks that the PIN is valid for the bank card.

7. The ATM presents options including withdraw cash.

8. The customer chooses withdraw cash.

9. The ATM presents options for amounts of cash.

10. The customer chooses an amount or enters an amount.

11. The ATM checks that it has enough cash.

12. The ATM checks that the customer is below withdrawal limits.

13. The ATM checks that there are enough funds in the customer’s bank account.

14. The ATM debits the customer’s bank account.

15. The ATM offers the option to print a receipt.

16. The customer selects the print receipt option.

17. The ATM returns the bank card.

18. The customer removes the bank card.

19. The ATM prints the receipt.

20. The ATM issues the cash.

21. The customer takes the cash.

The steps represent the path through the system to allow cash withdrawal and state how each of the actors interacts with the system. This is also known as the ‘sunny day’ use case or primary use case, because it is the path most likely to occur when all goes well. To complete the creation of the use cases, ‘rainy day’ scenarios or edge cases must also be written. These represent the alternative paths when something goes wrong, for example the path to follow if the customer does not have sufficient funds in their bank account to withdraw the amount requested.

One common way to note the alternative paths is to enter the alternative steps below the primary use case. The whole process does not need to be described in full, only the steps that are different for each alternative path.
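
The layout described above – title, actors, main success scenario and extensions keyed to the step at which they branch off – can be sketched as a simple data structure, which some teams use to keep use cases machine-readable. This is purely an illustration, not part of any standard; the class and field names are our own, and only a few of the ATM steps are repeated here.

```python
from dataclasses import dataclass, field

@dataclass
class UseCase:
    # Mirrors the use case style shown in this chapter: a title, the
    # actors involved, a numbered main success scenario, and extensions
    # recorded only where they differ from the primary path.
    title: str
    actors: list
    main_scenario: list = field(default_factory=list)
    extensions: dict = field(default_factory=dict)  # branch step -> alternative steps

withdraw_cash = UseCase(
    title="Withdraw cash",
    actors=["Bank customer", "Banking system"],
    main_scenario=[
        "The customer puts a bank card into the ATM",
        "The ATM verifies the card's validity",
        # ... remaining steps as listed above ...
        "The customer takes the cash",
    ],
    extensions={
        # Edge case branching at step 13: insufficient funds in the account.
        13: [
            "The ATM reports insufficient funds",
            "The ATM returns the bank card",
        ],
    },
)

print(len(withdraw_cash.extensions))  # one edge case recorded so far
```

Because only the differing steps are stored against each branch point, adding the remaining edge cases from the list below is a matter of adding further entries to `extensions` rather than rewriting the whole scenario.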

It is easier, more efficient and good practice in terms of risk management to write the sunny day scenarios first and derive the edge cases from them. If lack of time becomes an issue, the most important use cases will have been written first.

Edge cases from the ATM use case would have to cover how to deal with:

•  an invalid bank card;

•  a foreign bank card;

•  an invalid PIN entry;

•  an invalid amount entered;

•  insufficient cash in the ATM;

•  the customer going over their withdrawal limit;

•  insufficient funds in account;

•  insufficient paper in the ATM to print a receipt;

•  the customer not taking the cash.

There will also be edge cases that may happen but are less likely to, for instance how the system would deal with the inability to debit the account. The goal is not to define every possible use or edge case but to find the common use cases and prioritise accordingly. The use cases and edge cases can be circulated round the UAT team to get feedback on whether the most likely important scenarios have been covered. It is then up to the business to decide how much time and money should be spent testing the scenarios that are less likely to occur.

Creating the working requirements set

From our reviews, interviews and observations we should now have a reasonably complete picture of what the system needs to do to meet expectations. If we record all this information, as far as possible, in the form of user stories and use cases we will have a good basis for building tests. If the original requirements are not in this form we can make a judgement, depending on how much has changed and how complete the requirements were originally, about whether to convert all our information to a common format.

Exercise 6.2

Can you think of three other user stories that could be added to those listed above that relate to a manager sending expenses for approval, a manager assigning an account or third-party billing contracts?

Our answer is in Appendix B.

Example 6.3 – Excelsior case study

At a high level the user stories we generated in Example 6.1 defined what is missing from the current Excelsior requirements. After further discussion it was decided that user story 2.3 was not required and that it would be more efficient to change the business processes instead. It was agreed that managers would deal with any requests themselves (functionality captured in the original requirements) or that direct reports would be notified of the EA’s absence.

The user stories, apart from user story 2.3, were signed off by the stakeholders as the requirements that need to be added to or changed in the RS and tested during UAT. Other, less important issues that were discovered were either set aside or added to the list of change requests for inclusion in future releases. The project team then set out to create use cases based on the user stories.

Table 6.3 Expenses use cases

[tables not reproduced in this extract]

SETTING UP THE TEST MANAGEMENT CONTROLS

While we are defining the testing we also need to bear in mind the management of defects – those already identified before UAT starts and those we find as we do our testing. These defects will affect the acceptance decision, so effective tracking and accurate counting are crucial. Test logging will also be a key discipline, so that we can tell at any time how much of the planned testing has been completed and how much is still left to do. Finally, we will need to ensure that change control is in place so that all changes to the system or the tests are identified and followed up. Every defect will potentially lead to changes in the system code and the tests, and every update to the system code will generate new tests to check that the defect has been corrected, as well as regression tests to ensure nothing else has changed. We need this not only to ensure we are always testing the latest version of the system code after changes have been made, but also to identify the ‘tail’ of testing that follows each change, so that we know what additional testing is required over and above what was originally planned.

Change control

The change control mechanism will already be in place because the developers will have used it throughout development. You may need a briefing on how to access it and interpret the information, and that will be a good initial point of contact with the development team if you are not already working together. The development team can then show you the state of the system code at the point of handover so you can see for yourself what changes have been made and what testing might be outstanding from any recent changes.

You will need to bring your test cases and test scripts under change control in case they later need to be changed and also so that the links between system code and the tests run on it are clear; the maintenance team will need this later.

Test logging

Test logging ensures that we know which tests have been run and which have not; therefore it also helps us to estimate what testing still needs to be done and when it should be complete. The test logs may give us some information that we can use to streamline testing or to add new tests to the plan. We need to set up a test logging mechanism before we start testing.

A test log is really a simple list of tests that have been designed and scheduled, so that you can identify when each is complete. In practice we need a little more information, because we also need to capture any changes made to tests, any rescheduling of tests for whatever reason, and any follow-up to test failures, such as retests and any associated regression testing, all entered into the log.

The log will capture every test completion and its outcome, so we can easily generate valuable statistical data from it to measure progress against objectives and to predict our completion date on a dynamic basis. We will need to create a test log before we begin test implementation. There is an example of a test log in Chapter 8 that you can use as a starting point. Implemented as a spreadsheet, it provides a simple tool to use and from which to gather data.
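
If the log is kept as a spreadsheet, each test is a row and each piece of management information is a column. As a purely illustrative sketch – the column names here are our own assumptions, and the example log in Chapter 8 remains the proper starting point – progress against the plan can be derived directly from the status column:

```python
# A minimal test log as rows of named columns. In a real UAT exercise
# this would be a spreadsheet or database; the test IDs, statuses and
# incident references below are hypothetical.
log = [
    {"test_id": "UAT-001", "status": "Passed",  "incident": ""},
    {"test_id": "UAT-002", "status": "Failed",  "incident": "INC-017"},
    {"test_id": "UAT-003", "status": "Not run", "incident": ""},
]

# A test is complete once it has been run, whatever the outcome;
# this is the figure we track to see how much testing remains.
completed = [row for row in log if row["status"] in ("Passed", "Failed")]
print(f"{len(completed)} of {len(log)} tests complete")  # prints "2 of 3 tests complete"
```

The same rows yield the other statistics the chapter mentions: the pass rate, the list of tests still to run, and the incidents that need retests and regression tests appended to the log.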

Incident reporting and defect management

Incidents are the outcome of any test failure. We call them incidents because we do not know yet that we have found a defect; we only know that the actual result of the test does not match the expected result. This could be due to a defect in the system but it could also be a defect in the test or in the test environment, or the tester may have simply made a mistake. The incident report should enable a developer to repeat the test and get the same result so that they can diagnose the cause.

Incident management (IM) becomes defect management when an incident is diagnosed as a defect in the system code or documentation. Other outcomes will need to be followed up by changing tests or taking some other action. As with test logging, IM systems are usually based on databases so that data can be collected, analysed and reported – we will need this data to determine whether objectives have been met.

Like change control, IM should be in place and the development team can brief you on how to raise incidents and define reports for your own use. If the IM system is automated, it will incorporate a workflow element so that the steps required to clear an incident will be managed by the tool; if not, there will be a process defined that you will need to become familiar with.
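
The workflow element mentioned above can be illustrated with a small sketch. The states and transitions below are hypothetical – every IM tool defines its own workflow – but they show how a tool can restrict an incident to legal steps, for example requiring diagnosis before closure:

```python
# Hypothetical incident lifecycle: each state maps to the states it may
# legally move to. "Rejected" covers incidents diagnosed as test,
# environment or tester faults rather than system defects.
WORKFLOW = {
    "Open":                ["Under investigation"],
    "Under investigation": ["Defect confirmed", "Rejected"],
    "Defect confirmed":    ["Fixed"],
    "Fixed":               ["Retested"],
    "Retested":            ["Closed", "Under investigation"],  # reopen if the retest fails
    "Rejected":            ["Closed"],
    "Closed":              [],
}

def can_move(current: str, target: str) -> bool:
    """Check whether the workflow allows the requested transition."""
    return target in WORKFLOW.get(current, [])

print(can_move("Open", "Closed"))  # False: an incident must be diagnosed first
```

Whether automated or manual, the value of such a process is the same: no incident can be closed without passing through diagnosis and, where a defect is confirmed, a fix and a retest.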

CHAPTER SUMMARY

In this chapter we have covered the very important first step of planning our UAT exercise. As with all complex activities, UAT needs to be meticulously planned so that we know exactly what we need to do, when, how and with whom. Equally importantly we need to communicate to everyone what is going to happen so that they can make their contribution. Finally the plan needs to identify what we will do when things go wrong so that we can deal with the situation effectively and efficiently without introducing delays or uncertainty.

After reading this chapter you should be able to answer the following questions:

•  How do I make sure I have the right requirements as a test basis?

•  How do I decide what testing I need to do?

•  How will I know when I have done enough testing?

•  How will I know if the results are acceptable?

•  What will I do if things go wrong?

What have you learned?

Test your knowledge of Chapter 6 by answering the following questions. The correct answers can be found in Appendix B.

1. Which of the following is a valid UAT objective?

A.  To ensure the system is delivered with no defects

B.  To remove all risks from the system before release

C.  To ensure the risk of releasing the system is acceptable

D.  To complete everything in the UAT plan

2. Which two of the following would be valid as acceptance criteria for a system?

A.  All planned testing has been completed

B.  There are no outstanding change requests

C.  There are no outstanding test incidents

D.  There are no serious defects

E.  All essential functions have been tested

3. Which of the following must be addressed in UAT?

A.  Business requirements

B.  User stories

C.  Use cases

D.  Business processes

E.  User expectations

F.  Technical specifications

Some questions to consider (our responses are in Appendix B)

1.  You are finding that users are reluctant to express ideas about what they expect from the system when it is delivered and fall back on what is in the requirements. What would you do?

2.  The sales manager believes that acceptance criteria should not delay delivery to customers who have expectations of early delivery. The development manager believes that acceptance criteria should include only requirements coverage. The marketing manager is concerned about the image of the business and insists on zero serious defects as an acceptance criterion. What should you do?

3.  The development team is reluctant to give the UAT team access to the incident reporting system because it could result in lost incident data if someone makes a mistake. What would you do?
