10      LIFE AFTER UAT

Once a decision to release a system has been made, there is still much work to do to ensure a successful launch. While those activities are beyond the scope of this book, we need to examine the large and important contribution that the outputs of UAT can make to that success, so this short chapter considers what happens after UAT.

Topics covered in this chapter

•  Post-UAT reporting

•  End-user training

•  Preparing a roll-out strategy

•  Implementation

•  Post-implementation defect corrections

•  Measuring business benefits

•  The end of UAT?

POST-UAT REPORTING

After the UAT party is over there is serious work to be done. UAT is usually an activity that is completed against a backdrop of pressure, and completion is often a relief to all concerned. What happens next, though, can turn all the effort of UAT into a treasure house if the opportunity is grasped before the UAT team is disbanded and everything returns to normal.

Reporting may feel like a chore we would rather avoid, but reporting on a well-planned and well-managed UAT gives us the opportunity to reflect on what has happened, gather and analyse the strands of information, and produce some powerful tools for the phases that follow.

A test summary report can be structured in many ways. Here is a simple outline of what one might contain, so that we can explore what content it should have and how we might use that content to help with implementation, support, training and evaluation.

For convenience, Figure 10.1 repeats the test completion report outline so that we can explore it in a little more detail, understand better what needs to be reported and why, and see what value this information can have for future phases of the system’s life.

This is the format we used as the basis for decision making, so it is designed to provide a succinct summary of the key information related to evaluation against acceptance criteria and evaluation of risk. We probably do not need all of the collected data to draft this report, but we do need to save it all, because our aim now is to look beyond the risk of release to the achievement of success, and for this we might need a revised structure that incorporates more analysis and evaluation.

Figure 10.2 is a report format that we can use as a forward-looking report. We can copy over all the relevant sections from the original test completion report, so it is not a major effort to produce, but the presentation of the information is now slanted towards the activities still to be completed.

Most of the headings in the report should be self-explanatory, but we will explore in a little more detail what can be covered in some of the sections.

Test reporting

Reporting on the tests should be simple enough. We have already established that enough testing was done to achieve the desired result, but it still might be worth doing a little more analysis. Showing graphically how many tests passed and failed first time, how many passed after defect correction, how many passed and failed in different areas of the system and so on can be revealing. If you spot patterns or trends in the data, for example that coverage was better in some areas than others or that test failures were more common in some areas, this is worth noting and following up.

The purpose of this report on the testing is to identify anything that suggests potential problems at implementation, so clustering is something to pay particular attention to.
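As an illustration of this kind of analysis, here is a minimal sketch in Python (using the pandas library) that summarises pass and fail counts by area of the system. It assumes a hypothetical export of the UAT results with columns test_id, area, first_run_result and final_result; the file name and column names are illustrative placeholders, not part of any standard UAT toolset:

# A minimal sketch, assuming a hypothetical CSV export 'uat_test_results.csv'
# with columns test_id, area, first_run_result and final_result ('pass'/'fail').
import pandas as pd

results = pd.read_csv("uat_test_results.csv")

# Tests that passed first time versus those that passed only after defect correction.
passed_first_time = (results["first_run_result"] == "pass").sum()
passed_after_fix = ((results["first_run_result"] == "fail")
                    & (results["final_result"] == "pass")).sum()
print(f"Passed first time: {passed_first_time}; passed after correction: {passed_after_fix}")

# Pass/fail counts by area of the system, to highlight any clustering of failures.
by_area = (results.groupby("area")["first_run_result"]
           .value_counts().unstack(fill_value=0)
           .reindex(columns=["pass", "fail"], fill_value=0))
print(by_area.sort_values("fail", ascending=False))

The same few lines, pointed at whatever form your own test log takes, are usually enough to make clustering visible at a glance.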

Defects analysis

The starting point is graphical summaries again, this time of incidents raised and closed, defects identified and fixed, and both incidents and defects not yet closed. These will merit further investigation. It is interesting to look at incidents by age (how long they were open) because the longer-lived ones usually account for quite a lot of investigation or defect correction. We should also look at the rate of incidents being raised and closed. Spikes of incidents may point to spikes of testing activity or spikes of defects, so we should also report on the average incidents per test and significant deviations from that average. We might also look at defects per incident to see how many incidents were caused by other factors such as test script errors.

If we noted some clustering of test failures, we will see the same clustering in incident reports, and here we can explore it by area of the system to see whether defect rates are higher than normal in some areas. These might be pointers to future problems. Closed incident reports will show the outcome, and in some cases this may have been a workaround rather than a change to the software. We may also have found some problems that required process changes, and there may have been incidents that were reported several times by different people in different tests. Incidents that turn out to be duplicates can point to the kinds of problem that are hard to pin down, and these might merit further attention.
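As a sketch of the defect analysis described above, the following Python fragment (again using pandas) calculates incident age, incidents per test, the proportion of incidents linked to defects, and incident counts by area. The file name, column names and total test count are hypothetical placeholders for whatever your incident log actually exports:

# A minimal sketch, assuming a hypothetical CSV export 'uat_incidents.csv' with
# columns incident_id, area, raised_date, closed_date, linked_defect_id and test_id.
import pandas as pd

incidents = pd.read_csv("uat_incidents.csv", parse_dates=["raised_date", "closed_date"])
tests_executed = 480  # hypothetical total number of test executions in UAT

# Incident age: how long each incident stayed open (open incidents have no closed_date).
incidents["age_days"] = (incidents["closed_date"] - incidents["raised_date"]).dt.days
print(incidents.nlargest(10, "age_days")[["incident_id", "area", "age_days"]])

# Average incidents per test, and the proportion of incidents linked to a defect
# (assuming at most one defect per incident); incidents with no linked defect
# were caused by other factors, such as test script errors.
print("Incidents per test:", len(incidents) / tests_executed)
print("Defects per incident:", incidents["linked_defect_id"].notna().mean())

# Incident counts by area of the system, to look for clustering of defect-prone areas.
print(incidents["area"].value_counts())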

Figure 10.1 UAT completion report outline

Figure 10.2 Post-UAT analysis report

Frequently asked questions (FAQs) and workarounds

The workarounds we identified from incident analysis can be documented as part of user training and documentation. FAQs can be distilled from individual testers’ feedback, and a workshop to debrief on the testing and share experiences can be a useful way to capture these FAQs. They can then be added to any online help and incorporated into end-user training and user guides.

Lessons learned

Lessons learned can take many forms. How could we have done UAT better? Did we have problems that could have been eliminated before UAT? Did we start UAT at the right time and were we prepared? Did we check on the entry criteria at the start of UAT? Did we meet the acceptance criteria at the end? Did UAT run to plan? If not, was the overrun in time, cost, effort or some combination? What caused the overruns? Could they have been avoided?

This is a list of questions that spring to mind. For a real lessons-learned session we should probably have a checklist of questions – some standard, some suggested by recent experience and some that will occur to us at the session. All the questions are worth answering because there may be patterns that will help us to avoid similar problems in future.

END-USER TRAINING

We can use the UAT training mechanism and materials as a starting point for end-user training, although we will need to take a slightly different perspective for end-users. All the previous analysis can be used to give end-users the best possible background on where to expect problems and how to deal with them, while the experience of UAT can be fed in as hints and tips on getting the best from the system. At least one part of the training could be delivered by a member of the UAT team so that end-users can learn as much as possible from that person’s experience.

Training for new starters

New starters who arrive after the system is commissioned will need training and here again the insights from UAT can be helpful in ‘grounding’ the training.

PREPARING A ROLL-OUT STRATEGY

The evaluation of risk and the subsequent analysis of UAT results and experience are important inputs into the roll-out strategy. Depending on the size and geographical spread of the organisation there might be a number of possible strategies, ranging from ‘big bang’ (putting the system on every desktop at once) to a series of pilot releases in different areas. Usually at least one pilot is used so that any problems can be resolved in a relatively small implementation.

With the benefit of the UAT feedback a roll-out programme can be refined:

•  High defect rates in UAT might suggest a smaller initial pilot followed by a ramp-up when problems have been identified and rectified.

•  User interface problems might suggest a smaller pilot with increased support to help an initial group gain experience so that they can support a ramp-up.

•  Workarounds can be trialled in a small initial pilot before the main roll-out.

•  Revised user guides and help screens based on UAT feedback can be trialled in initial pilots.

•  The absence of problems in UAT might encourage a slightly quicker ramp-up than had originally been envisaged.

IMPLEMENTATION

Full implementation, either initially or after a series of pilots, will still be a major stage in the life cycle. Feedback from UAT is one way to estimate the levels of technical support and business support (help desk) that will be required for this exercise.

POST-IMPLEMENTATION DEFECT CORRECTIONS

Whatever happens during pilots or full implementation, some defects will emerge from the increased level of usage. Some of these defects will need to be corrected urgently, while others will be placed on the prioritised list of changes to be made over time.

If a collection of defect corrections is required in the early post-implementation period, there may need to be a new release of the system. Although this will be smaller than the original system release, it will be critical: a system is already running, so failures in the new release will matter to users even if they are relatively minor. For that reason, reconstituting the UAT team to conduct a structured mini-UAT would be a good risk management strategy.

MEASURING BUSINESS BENEFITS

The final stage of full implementation is the ramp-up of the system to a level that will achieve the desired business benefits. This may come days, weeks or months after initial release depending on the level of problems that arise in the early period of use.

Unless the benefits are clear and obvious, for example providing a service that was not previously available, they will need to be measured to assure ourselves that they really have been achieved. Improved profitability, reduced inventory or greater flexibility in the use of resources are the kinds of benefit that need a structured measurement exercise.

Measurement can be done by analysing the data routinely collected by the system to identify expected changes, but this will take some time and will not eliminate the impact of other changes that might affect the measurement, for instance a period of bad weather that depresses sales. A more consistent result needs a controlled measurement exercise that compares the improvement parameters before and after the system is changed. The UAT team, with their recent experience of formal testing and the discipline of test reporting, is an ideal resource for this exercise: they can run the experimental data through the system before the changes are released and run it again at an agreed period after release.
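To make the before-and-after comparison concrete, here is a minimal sketch using entirely illustrative numbers for one hypothetical benefit measure (minutes taken to process an order). A real exercise would use many more measurements and allow for external factors such as seasonal variation:

# A minimal sketch: compare a hypothetical benefit measure captured before and
# after release. The figures below are illustrative only, not real measurements.
import statistics

minutes_per_order_before = [14.2, 15.1, 13.8, 16.0, 14.9, 15.5]
minutes_per_order_after = [11.9, 12.4, 11.2, 12.8, 12.1, 11.7]

mean_before = statistics.mean(minutes_per_order_before)
mean_after = statistics.mean(minutes_per_order_after)
improvement = (mean_before - mean_after) / mean_before * 100

print(f"Average before: {mean_before:.1f} min; after: {mean_after:.1f} min")
# Compare the measured improvement against the target set in the business case.
print(f"Improvement: {improvement:.1f}%")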

THE END OF UAT?

This final chapter has pointed to some of the ways that the UAT exercise and the experience and skills it develops in people can benefit other areas of the project and help to make implementation and evaluation easier and smoother.

The same set of skills and experience remains in the business after UAT, and the unique experience of carrying out a structured test programme on an IS that brings important benefits to the business will have provided a major personal development opportunity for the team’s members. As well as rewarding them for a job well done, this is a good time to ensure that all these new skills and experience are recorded so that they can be called on again if and when a similar exercise is needed.

CHAPTER SUMMARY

This chapter has provided some guidance on what needs to happen during roll-out of the system you have tested. These activities are not the responsibility of the UAT team, but awareness of them is a valuable part of understanding ‘the big picture’ and may help you to do a better job of UAT.

After reading this chapter you should be able to answer the following questions:

•  What happens after UAT is completed?

•  How does a tested system get brought into service?

•  What needs to be done to follow up on insights from UAT about system readiness?

•  How can the organisation prepare itself for the new system’s arrival?

•  How can we minimise the risk of problems when we bring the system into service?

•  How can we capture and harness the skills and experience gained during UAT?
