Chapter 13
Service Validation and Testing and Change Evaluation

THE FOLLOWING ITIL RELEASE, CONTROL, AND VALIDATION CAPABILITY INTERMEDIATE EXAM OBJECTIVES ARE DISCUSSED IN THIS CHAPTER:

  • Service validation and testing and change evaluation are discussed in terms of
    • Purpose
    • Objectives
    • Scope
    • Value
    • Policies
    • Principles and basic concepts
    • Test models and testing perspectives (SVT)
    • Process activities, methods, and techniques
    • Evaluation concepts and reports (CE)
    • Triggers, inputs, outputs, and interfaces
    • Maintaining test data and models (SVT)
    • Process roles and responsibilities
    • Information management
    • Critical success factors and key performance indicators
    • Challenges
    • Risks

The syllabus covers the day-to-day operation of each process and the details of the process activities, methods, techniques, and information management. The managerial and supervisory aspects of service transition processes are covered in the service lifecycle courses.

This chapter covers the day-to-day operation of each process and the details of the process activities, methods, techniques, and information management. The managerial and supervisory aspects of service transition processes are covered in the ITIL Intermediate Certificate Companion Study Guide (Sybex, 2016).

In service validation and testing, we review the concepts of quality assurance, ensuring that the new or changed service will meet the requirements of the business.

Change evaluation is concerned with the choices the business makes relating to the new or changed service. It allows for the organization to decide whether the new or changed service meets the acceptance criteria agreed to as part of the design.

Service Validation and Testing

The underlying concept behind service validation and testing is quality assurance—establishing that the service design and release will deliver a new or changed service or service offering that is fit for its purpose and use.

Purpose

The purpose of the service validation and testing process is to ensure that a new or changed IT service matches its design specification and will meet the needs of the business.

Objectives

The objectives of service validation and testing are to ensure that a release will deliver the expected outcomes and value within the projected constraints and to provide quality assurance by validating that a service is “fit for purpose” and “fit for use.” Another objective is to confirm that the requirements are correctly defined, remedying any errors or variances early in the service lifecycle. The process aims to provide objective evidence of the release’s ability to fulfill its requirements. The final objective is to identify, assess, and address issues, errors, and risks throughout service transition.

Scope

The service provider has a commitment to deliver the required levels of warranty as defined within the service agreement. Throughout the service lifecycle, service validation and testing can be applied to provide assurance that the required capabilities are being delivered and the business needs are met.

The testing activity of service validation and testing directly supports release and deployment by ensuring appropriate testing during the release, build, and deployment activities. It ensures that the service models are fit for purpose and for use before the service is authorized as live through the service catalog. The output from testing is used by the change evaluation process to judge whether the service is delivering the required service performance with an acceptable risk profile.

Value to the Business

The key value of service validation and testing to the business and customers is the degree of confidence it establishes that a new or changed service will deliver the value and outcomes required of it, together with the understanding it provides of the associated risks.

Successful testing provides a measured degree of confidence rather than guarantees. Service failures can harm the service provider’s business and the customer’s assets and result in outcomes such as loss of reputation, loss of money, loss of time, injury, and death.

Policies, Principles, and Basic Concepts

Now we’ll look at the policies and principles of service validation and testing and the basic concepts behind it. The policies for this process reflect strategy and design requirements. The following list includes typical policy statements:

  • All tests must be designed and carried out by people not involved in other design or development activities for the service.
  • Test pass/fail criteria must be documented in a service design package before the start of any testing.
  • Establish test measurements and monitoring systems to improve the efficiency and effectiveness of service validation and testing.
  • Integrate testing into the project lifecycle to help detect and remove defects as soon as possible.
  • Maintain a test library and reuse policy. Since many tests can be repeated, it is valuable to keep a library of test scripts, test models, test cases, and test data that can be reused (a minimal illustrative sketch follows this list).
  • Engage with customers, stakeholders, users, and service teams to enhance test skills.
  • Automate testing wherever possible.
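The reuse and automation policies above lend themselves to a small illustration. The following is a minimal, hypothetical sketch (not part of the ITIL guidance) of a reusable test library whose cases can be executed automatically; the test names, data, and check functions are invented.

```python
# Minimal sketch of a reusable, automatable test library (hypothetical, for illustration only).
# Each test case bundles a name, reusable test data, and an automated pass/fail check.

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class TestCase:
    name: str
    test_data: dict
    check: Callable[[dict], bool]   # automated pass/fail check


@dataclass
class TestLibrary:
    cases: Dict[str, TestCase] = field(default_factory=dict)

    def register(self, case: TestCase) -> None:
        self.cases[case.name] = case          # store the case for reuse in later releases

    def run(self, names: List[str]) -> Dict[str, bool]:
        # Execute the selected reusable test cases and return pass/fail results.
        return {n: self.cases[n].check(self.cases[n].test_data) for n in names}


# Example usage with invented checks.
library = TestLibrary()
library.register(TestCase("login_response_time", {"measured_ms": 850, "target_ms": 1000},
                          lambda d: d["measured_ms"] <= d["target_ms"]))
library.register(TestCase("order_total_calculation", {"items": [10.0, 5.5], "expected": 15.5},
                          lambda d: abs(sum(d["items"]) - d["expected"]) < 0.01))

print(library.run(["login_response_time", "order_total_calculation"]))
```

Keeping cases in such a library supports the reuse policy: the same cases can be run for release testing and again later for regression testing.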

Service validation and testing is affected by policies from other areas of service management. Policies that drive and support service validation and testing include the service quality policy, the risk policy, the security policy, and the service transition, release management, and change management policies.

Service quality will be defined by senior management, based on customer and stakeholder input. This will drive the adoption and measurement of the basic quality perspectives listed in service strategy—level of excellence, value for money, conformance to specifications, and meeting or exceeding expectations. The organization will prioritize these perspectives, and this will influence the approach taken to service validation and testing.

Risk policies have a direct link to testing: testing must measure risk according to the organization's appetite for risk, and it should also address any risk-related requirements from the security policy.

The service transition policy sets the working practices for the entire service lifecycle stage. This will include the approach to managing transitions of all scales and the levels of control required during the lifecycle stage. It will cover areas such as governance, use of frameworks, reuse approaches, business alignment and relationship management, knowledge transfer, managing course corrections, early life support, and resource management.

The release policy will define the frequency and type of releases, which in turn will influence the testing approach. The more frequent the releases, the stronger the case for automating tests and developing reusable test models.

The use of change windows will also influence the testing, so the change management policy will be important in defining the testing approach.

Test Models and Testing Perspectives

Because testing is directly related to the building of the service assets and products that make up services, each asset or product should have an associated acceptance test. This ensures that the individual components work effectively before being used in the new or changed service. Each service model should be supported by a reusable test model that can be used for both release and regression testing in the future. Test models should be introduced early in the lifecycle to ensure a lifecycle approach to the management of testing and validation.

Test Models

A test model should include a test plan, what is to be tested, and the test scripts that will be used for each element of the service. Making the test model reusable and repeatable ensures that testing remains effective and efficient. This will support traceability back to the design criteria or initial requirements, and the audit of the test execution, evaluation, and reporting. Table 13.1 shows some examples of test models.

TABLE 13.1 Examples of service test models

Service contract test model
Objective/target deliverable: To validate that the customer can use the service to deliver a value proposition.
Test conditions based on: Contract requirements; fit for purpose and fit for use criteria.

Service requirements test model
Objective/target deliverable: To validate that the service provider can deliver/has delivered the service required and expected by the customer.
Test conditions based on: Service requirements and service acceptance criteria.

Service level test model
Objective/target deliverable: To ensure that the service provider can deliver the service level requirements and that they can be met in the live environment, e.g., testing the response and fix time, availability, product delivery times, support services, etc.
Test conditions based on: Service level requirements, SLA, OLA.

Service test model
Objective/target deliverable: To ensure that the service provider is capable of delivering, operating, and managing the new or changed service using the as-designed service model, which includes the resource model, cost model, integrated process model, capacity and performance model, etc.
Test conditions based on: Service model.

Operations test model
Objective/target deliverable: To ensure that the service operation functions can operate and support the new or changed service/service component, including the service desk, IT operations, application management, and technical management. This includes local IT support staff and business representatives responsible for IT service support and operations. There may be different models at different release/test levels, e.g., technology infrastructure, applications, etc.
Test conditions based on: Service model; service operation standards, processes, and plans.

Release deployment test model
Objective/target deliverable: To verify that the deployment team, tools, and procedures can deploy the release package into a target deployment group or environment within the estimated timeframe, and to ensure that the release package contains all the service components required for deployment, e.g., by performing a configuration audit.
Test conditions based on: Release and deployment design and plan.

Deployment installation test model
Objective/target deliverable: To test that the deployment team, tools, and procedures can install the release package into a target environment within the estimated timeframe.
Test conditions based on: Release and deployment design and plan.

Deployment verification test model
Objective/target deliverable: To test that a deployment has completed successfully and that all service assets and configurations are in place as planned and meet their quality criteria.
Test conditions based on: Tests and audits of actual service assets and configurations.

Copyright © AXELOS Limited 2010. All rights reserved. Material is reproduced under license from AXELOS
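To make the structure of a test model more concrete, the following is a minimal, hypothetical sketch of how a test model and its scripts might be represented so that they can be reused and traced back to requirements; the class and field names are assumptions for illustration, not prescribed by ITIL.

```python
# Hypothetical representation of a test model (for illustration only):
# a test plan, the items to be tested, and the scripts used for each element,
# with traceability back to the requirement or design criterion.

from dataclasses import dataclass
from typing import List


@dataclass
class TestScript:
    script_id: str
    element_under_test: str      # what is to be tested
    steps: List[str]             # how it will be tested
    traces_to: str               # requirement or design criterion it verifies


@dataclass
class TestModel:
    name: str
    objective: str
    test_conditions_based_on: str
    test_plan: str               # reference to the plan document
    scripts: List[TestScript]


# Example entry loosely mirroring the service level test model row in Table 13.1
# (all identifiers and values are invented).
service_level_model = TestModel(
    name="Service level test model",
    objective="Ensure the service level requirements can be met in the live environment",
    test_conditions_based_on="Service level requirements, SLA, OLA",
    test_plan="SDP-042-test-plan",
    scripts=[
        TestScript(
            script_id="SLT-001",
            element_under_test="Incident response time",
            steps=["Raise a priority 2 test incident", "Measure time to response"],
            traces_to="SLR: respond to priority 2 incidents within 30 minutes",
        )
    ],
)
print(service_level_model.name, "-", len(service_level_model.scripts), "script(s)")
```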

Testing Perspectives

Service validation and testing should focus on the perspective of those who will use, deliver, deploy, manage, and operate the service. The test entry and exit criteria will have been defined during the development of the service design package. These criteria should cover aspects such as the following:

  • Service design (functional, management, and operational)
  • Technology design
  • Process design
  • Measurement design
  • Documentation
  • Skills and knowledge

Service acceptance testing is concerned with verifying the service requirements. Verification is carried out by the stakeholders, who include business customers or customer representatives, suppliers, and the service provider.

Business Users and Customer Perspective

From the business perspective, acceptance testing should provide a defined and agreed means of measuring the service to ensure that it meets the business's requirements and that appropriate mechanisms are in place to manage the interface with the service provider. This will require the business to provide an appropriate level and capability of resources to take part in the tests.

The service provider requires the interaction of the business to ensure continued engagement in the transition, and to ensure that the overall quality of the new or changed service is meeting expectations.

Use cases can be used to ensure that the testing covers realistic scenarios of interaction between the service provider and the business.

User testing should cover the requirements for applications, systems, and services and ensure that these meet the functional and quality requirements of the end users. User acceptance testing (UAT) should be as realistic as possible to simulate the live environment. It is important to set expectations appropriately: testing is expected to uncover issues, and not everything will work perfectly the first time.

Service Operations and Continual Service Improvement Perspective

It is important not to forget to test that the new or changed service can be managed and supported successfully. This should include some basic elements:

  • Technology
  • Staff skills, knowledge, and resources
  • Supporting processes and resources
  • Business and IT continuity
  • Documentation and knowledge management via the SKMS

The engagement of continual service improvement should ensure that the new or changed service is adopted as part of the scope of the overall service management improvement approach.

Levels of Testing

Testing is directly related to the building of the service assets and products, and it is necessary to ensure that each one has an associated acceptance test and activity to verify that it meets requirements.

The diagram in Figure 13.1, sometimes called the service V-model, maps the types of tests to each stage of development. Using the V-model ensures that testing covers business and service requirements, as well as technical ones, so that the delivered service will meet customer expectations for utility and warranty. The left-hand side shows service requirements down to the detailed service design. The right-hand side focuses on the validation activities that are performed against these specifications. At each stage on the left-hand side, there is direct involvement by the equivalent party on the right-hand side. It shows that service validation and acceptance test planning should start with the definition of the service requirements. For example, customers who sign off on the agreed service requirements will also sign off on the service acceptance criteria and test plan.

[Figure: service lifecycle configuration levels and baseline points, running from the service plan through the review, acceptance, operational, and release design criteria and plans, down to service component build and test.]

FIGURE 13.1 Example of service lifecycle configuration levels and baseline points

Copyright © AXELOS Limited 2010. All rights reserved. Material is reproduced under license from AXELOS.

Types of Testing

There are many testing approaches and techniques that can be combined to conduct validation activities and tests. Examples include modeling or simulating situations where the service would be used, limiting testing to the areas of highest risk, testing compliance to the relevant standard, taking the advice of experts on what to test, and using waterfall or agile techniques. Other examples involve conducting a walkthrough or workshop, a dress rehearsal, or a live pilot.

Functional and service tests are used to verify that the service meets the user and customer requirements as well as the service provider’s requirements for managing, operating, and supporting the service. Functional testing will depend on the type of service and channel of delivery. Service testing will include many nonfunctional tests. They include testing for usability and accessibility, testing of procedures, and testing knowledge and competence. Testing of the warranty aspects of the service, including capacity, availability, resilience, backup and recovery, and security and continuity, is also included.

Process Activities, Methods, and Techniques

There are seven phases to service validation and testing, shown in Figure 13.2. The basic activities are as follows:

  • Validation and test management
  • Plan and design tests
  • Verify test plan and test design
  • Prepare test environment
  • Perform tests
  • Evaluate exit criteria and report
  • Test cleanup and closure

[Figure: validation and test management receives the authorized change and delivers results to change evaluation, with two-way links to the other test activities so that tests can be revised to deliver the required results.]

FIGURE 13.2 Example of a validation and testing process

Copyright © AXELOS Limited 2010. All rights reserved. Material is reproduced under license from AXELOS.

The test activities are not undertaken in a strict sequence; several may be done in parallel. For example, test execution may begin before all of the test design is complete. Figure 13.2 shows an example of a validation and testing process; it is described in detail in the following list:

  • The first activity is validation and test management, which includes the planning, control, and reporting of activities through the test stages of service transition. It also includes managing issues, mitigating risks, and implementing changes identified from the testing activities, since these can introduce delays.
  • The next step, plan and design tests, starts early in the service lifecycle and covers many of the practical aspects of running tests, such as the resources required; any supporting services (including access, security, catering, and communications services); agreement on the schedule of milestone, handover, and delivery dates; agreement on the time for consideration of reports and other deliverables; specification of the point and time of delivery and acceptance; and any financial requirements.
  • The third step is to verify the test plan and test design, ensuring that the testing included in the test model is sufficient and appropriate for the service and covers the key integration points and interfaces. The test scripts should also be checked for accuracy and completeness.
  • The next step is to prepare the test environment, using the services of the build and test environment staff and, where possible, the release and deployment management process. This step also includes capturing a configuration baseline of the initial test environment.
  • Next comes the perform tests step. During this stage, the tester carries out the tests using manual or automated techniques and records findings during the tests. In the case of failed tests, the reasons for failures must be fully documented, and testing should continue if at all possible. Should part of a test fail, the incident should be resolved or documented (e.g., as a known error) and the appropriate retests should be performed by the same tester.
  • The next step is to evaluate the exit criteria and report; the report is produced from the test metrics. In this stage, the actual results are compared to what was expected (a minimal sketch of this comparison follows the list). The service may be considered as having passed or failed, or it may be that the service will work but with higher risk or costs than planned. A decision is made as to whether the exit criteria have been met. The final action of this step is to capture the configuration baselines into the CMS.
  • The final step is test cleanup and closure. During this step, the test environments are cleaned up or initialized, the testing process is reviewed, and any possible improvements are passed to CSI.
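As referenced in the evaluation step above, the following is a minimal, hypothetical sketch of recording test results and comparing actual outcomes with expected outcomes against exit criteria; the test names, data, and pass-rate threshold are invented for illustration.

```python
# Hypothetical sketch of recording test results and evaluating exit criteria
# (invented data and threshold; not an ITIL-prescribed algorithm).

from dataclasses import dataclass
from typing import List


@dataclass
class TestResult:
    test_name: str
    expected: str
    actual: str
    passed: bool
    notes: str = ""          # reasons for any failure must be fully documented


def evaluate_exit_criteria(results: List[TestResult], required_pass_rate: float = 0.95) -> str:
    """Compare actual results with expected results and judge the exit criteria."""
    passed = sum(1 for r in results if r.passed)
    rate = passed / len(results)
    if rate == 1.0:
        return "Exit criteria met: service passed"
    if rate >= required_pass_rate:
        return f"Exit criteria met with residual risk: pass rate {rate:.0%}"
    return f"Exit criteria not met: pass rate {rate:.0%}; see documented failures"


results = [
    TestResult("restore_from_backup", "restore in < 4h", "restored in 3h10m", True),
    TestResult("peak_load_response", "< 2s at 500 users", "2.4s at 500 users", False,
               notes="Raised as known error KE-017"),   # invented reference
]
print(evaluate_exit_criteria(results))
```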

Next we’ll look at the trigger, inputs and outputs, and process interfaces for service validation and testing.

Trigger

This process has only one trigger, a scheduled activity. The scheduled activity could be on a release plan, test plan, or quality assurance plan.

Inputs

A key input to this process is the service design package. This defines the agreed requirements of the service, expressed in terms of the service model and service operation plan. The SDP, as we have discussed previously, contains the service charter, including warranty and utility requirements, definitions of the interface between different service providers, acceptance criteria, and other information. The operation and financial models, capacity plans, and expected test results are further inputs.

The other main input consists of the RFCs that request the required changes to the environment within which the service functions or will function.

Outputs

The direct output from service validation and testing is the report delivered to change evaluation. This sets out the configuration baseline of the testing environment, identifies what testing was carried out, and presents the results. It also includes an analysis of the results (for example, a comparison of actual results with expected results) and any risks identified during testing activities.

Other outputs are the updated data and information and knowledge gained from the testing along with test incidents, problems, and known errors.

Interfaces

Service validation and testing supports all of the release and deployment management steps within service transition. It is important to remember that although release and deployment management is responsible for ensuring that appropriate testing takes place, the actual testing is carried out as part of service validation and testing. The output from service validation and testing is then a key input to change evaluation. The testing strategy ensures that the process works well with the rest of the service lifecycle—for example, with service design, ensuring that designs are testable, and with CSI, managing improvements identified in testing. Service operation will use maintenance tests to ensure the continued efficacy of services, whereas service strategy provides funding and resources for testing.

Service Validation and Testing Process Roles

In Chapter 1, “Introduction to Operational Support and Analysis,” we explored the generic roles applicable to all processes throughout the service lifecycle. These are relevant to the service validation and testing process, but there are specific additional requirements that also apply. Remember that these are not “job titles”; they are guidance on the roles that may be needed to successfully run the process.

Service Validation and Testing Process Owner

The generic process owner role responsibilities described in Chapter 1 apply to this role, and in addition, these specific requirements apply:

  • Defining the overall test strategy for the organization
  • Ensuring that there is an integrated approach with change management, change evaluation, and release and deployment management

Service Validation and Testing Process Manager

It is important to ensure that this role is assigned to a different person from the one who has responsibility for release and deployment management, to avoid conflicts of interest.

The generic process manager role responsibilities described in Chapter 1 apply to this role, and in addition, these specific requirements apply:

  • Helping to design and plan testing conditions, test scripts, and test data sets to ensure appropriate and adequate coverage and control
  • Allocating and overseeing test resources, ensuring that test policies are adhered to
  • Verifying tests conducted by other teams, such as release and deployment
  • Managing test environment requirements
  • Planning and managing support for appropriate tools and processes
  • Providing management reports on test results, issues, risks, and progress

Service Validation and Testing Practitioner

This role typically includes

  • Conducting tests as defined in the plans and designs, as documented in the service design package
  • Recording, analyzing, diagnosing, reporting, and managing test events, incidents, problems, and retests, dependent on agreed criteria
  • Administering test assets and components

Other Roles Contributing to Service Validation and Testing

A number of roles contribute to the service validation and testing process:

  • Change management—ensuring that tests are appropriate for the authorized changes
  • Developers/suppliers—collaboration between testing staff and development/build/supplier personnel
  • Service design personnel—designing tests is an element of overall design
  • Customers and users—performance acceptance testing

Information Management

As previously mentioned, the nature of IT service management is repetitive and benefits greatly from the reuse of data, scripts, and models during transition. Best practice therefore suggests that a test library be maintained. This will also include the use of automated testing tools (computer-aided software testing), which are becoming an increasingly important part of the service validation and testing process.

Test Data

Data is a requirement for all testing, and its relevance will determine the success of the test. When applied to software this is clearly necessary, but it also applies to other testing environments.

Test Environments

Test environments should be maintained and protected. All changes should be reviewed to see if they have an impact on the test environments. All of these aspects will need to be considered:

  • Updating of the test data
  • Whether a new separate set of data is required (the existing set will be required for existing services)
  • Redundancy of the test data or environment
  • Levels of testing

Maintenance of test data should be carried out as part of day-to-day operational activity. It should consider the following (a minimal data-masking sketch follows this list):

  • Separation from live data
  • Data protection regulations
  • Backup of test data
  • The consequent test database, which can be used as a secure training environment
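As referenced above, the following is a minimal, hypothetical sketch of separating test data from live data by masking personal identifiers; the field names and masking rules are invented, and any real implementation would need to reflect the applicable data protection regulations.

```python
# Hypothetical sketch of preparing test data from live records:
# personal data is masked so the test set is separated from live data
# and data protection requirements are respected (field names invented).

import hashlib
from typing import Dict, List


def mask_record(record: Dict[str, str]) -> Dict[str, str]:
    """Replace personal identifiers with stable, non-reversible pseudonyms."""
    masked = dict(record)
    masked["customer_name"] = "Test User " + hashlib.sha256(
        record["customer_name"].encode()).hexdigest()[:8]
    masked["email"] = masked["customer_name"].replace(" ", ".").lower() + "@example.invalid"
    return masked


def build_test_data(live_records: List[Dict[str, str]]) -> List[Dict[str, str]]:
    # The masked copy can also be backed up and reused as a secure training set.
    return [mask_record(r) for r in live_records]


live = [{"customer_name": "Jane Smith", "email": "jane.smith@example.com", "order_total": "42.50"}]
print(build_test_data(live))
```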

Critical Success Factors and Key Performance Indicators

As with all processes, the performance of service validation and testing should be monitored and reported, and action should be taken to identify and implement improvements to the process. Each critical success factor (CSF) should have a small number of key performance indicators (KPIs) that will measure its success, and each organization may choose its own KPIs.

The following are two examples of CSFs for service validation and testing and the related KPIs for each.

The success of the CSF “Achieving a balance between cost of testing and effectiveness of testing” can be measured using KPIs that measure

  • The reduction in budget variances and in the cost of fixing errors
  • The reduced impact on the business due to fewer testing delays and more accurate estimates of customer time required to support testing

The success of the CSF “Providing evidence that the service assets and configurations have been built and implemented correctly in addition to the service delivering what the customer needs” can be measured using KPIs that measure both the improvement in the percentage of service acceptance criteria that have been tested for new and changed services and the improvement in the percentage of services for which build and implementation have been tested separately from any tests of utility or warranty.
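As a simple illustration of how such percentage-based KPIs might be calculated, the following sketch uses invented counts; it is not an ITIL-prescribed measurement method.

```python
# Hypothetical KPI calculations for the second CSF (all figures invented).

def percentage(part: int, whole: int) -> float:
    return 100.0 * part / whole if whole else 0.0

# Percentage of service acceptance criteria tested for new and changed services.
criteria_defined, criteria_tested = 120, 111
print(f"Acceptance criteria tested: {percentage(criteria_tested, criteria_defined):.1f}%")

# Percentage of services whose build and implementation were tested
# separately from any tests of utility or warranty.
services_transitioned, services_with_separate_build_tests = 20, 17
print(f"Build/implementation tested separately: "
      f"{percentage(services_with_separate_build_tests, services_transitioned):.1f}%")
```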

Challenges

The most frequent challenges to effective testing stem from other staff's lack of respect for, and understanding of, the role of testing. Traditionally, testing has been starved of funding, and this results in an inability to maintain a test environment and test data that match the live environment, and in insufficient staff, skills, and testing tools to deliver adequate test coverage. Testing is often squeezed because of overruns in other parts of the project so that the go-live date can still be met, which reduces the level and quality of testing that can be done. Delays by suppliers in delivering equipment can also reduce the time available for testing.

All of these factors can result in inadequate testing, which, once again, feeds the commonly held feeling that it has little real value.

Risks

The most common risks to the success of this process are as follows:

  • A lack of clarity regarding expectations or objectives
  • A lack of understanding of the risks, resulting in testing that is not targeted at critical elements
  • Resource shortages (e.g., users, support staff), which introduce delays and have an impact on other service transitions

Change Evaluation

Before a transition can be closed, it needs to be reviewed to ensure that it has achieved its purpose with no unidentified negative side effects. Successful completion of the change evaluation ensures that the service can be formally closed and handed over to the service operation functions and CSI.

An evaluation report is prepared that lists the deviations from the service charter/SDP and includes a risk profile and recommendations for change management.

Purpose

The purpose of the change evaluation process is to understand the likely performance of a service change and how it might impact the business, the IT infrastructure, and other IT services. The process provides a consistent and standardized means of assessing this impact by assessing the actual performance of a change against its predicted performance. Risks and issues related to the change are identified and managed.

Objectives

The objectives of change evaluation include setting stakeholder expectations correctly and providing accurate information to change management so that changes that have an adverse impact or that introduce risk are not transitioned unchecked. Another objective is to evaluate the intended and, as much as possible, the unintended effects of a service change and to provide good-quality outputs so that change management can decide quickly whether a service change is to be authorized.

Scope

Effective change management means that every change must be authorized by a suitable change authority at various points in its lifecycle. Typical authorization points include before build and test starts, before being checked into the DML (if software related), and before deployment to the live environment. The decision on whether to authorize the next step is made based on the evaluation of the change resulting from this process. The evaluation report provides the change authority with advice and guidance. The process describes a formal evaluation suitable for use for significant changes; each organization will decide which changes need formal evaluation and which will be evaluated as part of change management.

Value to the Business

Change evaluation is concerned with value. Effective change evaluation will judge whether the resources used to deliver the benefit that results from the change represent good value. This information will encourage a focus on value in future service development and change management. CSI can benefit enormously from change evaluation regarding possible areas for improvement within the change process itself and the predictions and measurement of service change performance.

Policies, Principles, and Basic Concepts

The following reviews some of the key policies that apply to the change evaluation process. The first is that service designs or service changes will be evaluated before being transitioned. Second, although every change must be evaluated, the formal change evaluation process will be used only on significant changes; this requires, in turn, that criteria be defined to identify which changes are “significant.”

Change evaluation will identify risks and issues related to the new or changed service and to any other services or shared infrastructure. Deviation from predicted to actual performance will be managed by the customer accepting the change with the deviation, rejecting the change, or introducing a new change to correct the deviation. These three are the only outcomes of change evaluation allowed.

The principles behind change evaluation include committing to identifying and understanding the consequences of both the unintended and intended effects of a change, as far as possible. Other principles include ensuring that each service change will be fairly, consistently, openly, and, wherever possible, objectively evaluated and ensuring that an evaluation report is provided to change management to facilitate decision making at each authorization point.

The change evaluation process uses the Plan-Do-Check-Act (PDCA) model to ensure consistency across all evaluations. Using this approach, each evaluation is planned and then carried out in multiple stages, the results of the evaluation are checked, and actions are taken to resolve any issues found.

Change Evaluation Terminology

Table 13.2 shows the key terms used by the change evaluation process.

TABLE 13.2 Key terms that apply to the change evaluation process

Actual performance: The performance achieved following a service change.
Countermeasure: The mitigation that is implemented to reduce risk.
Deviations report: A report of the difference between predicted and actual performance.
Evaluation report: A report generated by the change evaluation process and passed to change management; it consists of a risk profile, a deviations report, a recommendation, and a qualification statement.
Performance: The utilities and warranties of a service.
Performance model: A representation of a service that is used to help predict performance.
Predicted performance: The expected performance of a service following a service change.
Residual risk: The remaining risk after countermeasures have been deployed.
Service capability: The ability of a service to perform as required.
Service change: A change to an existing service or the introduction of a new service.
Test plan and results: The test plan is a response to an impact assessment of the proposed service change. Typically the plan will specify how the change will be tested; what records will result from testing and where they will be stored; who will authorize the change; and how it will be ensured that the change and the service(s) it affects will remain stable over time. The test plan may include a qualification plan and a validation plan if the change affects a regulated environment. The results represent the actual performance following implementation of the change.

Copyright © AXELOS Limited 2010. All rights reserved. Material is reproduced under license from AXELOS

Change Evaluation Process

Figure 13.3 shows the change evaluation process and its key inputs and outputs. You can see the inputs that trigger the evaluation activity and the interim and final evaluation reports that are outputs of the process. Where performance does not meet the requirement, change management is responsible for deciding what to do next.

[Figure: a request for evaluation, together with inputs (RFC, SDP, test plan and results), triggers the flow: plan the evaluation, evaluate predicted performance, check "predicted and actual performance OK?", check "change complete?", and end.]

FIGURE 13.3 Change evaluation process flow

Copyright © AXELOS Limited 2010. All rights reserved. Material is reproduced under license from AXELOS.

Evaluation Plan

Evaluation of a change should ensure that unintended effects as well as intended effects of the change are understood. Unintended effects will often be negative in terms of impact on other services, customers, and users of the service. Intended effects of a change should match the acceptance criteria. Unintended effects are often not seen until the pilot stage or even until live use; they are difficult to predict or measure.

Table 13.3 lists the factors to consider when assessing the effect of a service change, together with how each factor is evaluated against the service design.

TABLE 13.3 Factors to consider when assessing the effect of a service change

S—Service provider capability: The ability of a service provider or service unit to perform as required.
T—Tolerance: The ability or capacity of a service to absorb the service change or release.
O—Organizational setting: The ability of an organization to accept the proposed change. For example, is appropriate access available for the implementation team? Have all existing services that would be affected by the change been updated to ensure a smooth transition?
R—Resources: The availability of appropriately skilled and knowledgeable people and sufficient finances, infrastructure, applications, and other resources necessary to run the service following transition.
M—Modeling and measurement: The extent to which the predictions of behavior generated from the model match the actual behavior of the new or changed service.
P—People: The people within a system and the effect of the change on them.
U—Use: Will the service be fit for use? Will it be able to deliver the warranties? Is it continuously available? Is there enough capacity? Will it be secure enough?
P—Purpose: Will the new or changed service be fit for purpose? Can the required performance be supported? Will the constraints be removed as planned?

Evaluation of Predicted Performance

Using customer requirements, including the acceptance criteria, the predicted performance, and the performance model, a risk assessment should be carried out. An interim report will be sent to change management.

The report compares the predicted performance to the acceptance criteria and includes the risk assessment and a recommendation as to whether the change should proceed. It is then forwarded to the next authorization point.

Evaluation of Actual Performance

Before change management makes a decision on authorizing a step in a change, change evaluation will evaluate the actual performance. The results are sent to change management so that interim decisions can be made; the final decision rests with the business. If the recommendation is to stop, the business will have to confirm that recommendation.

If the recommendation is to continue, then the next authorization point will be reached for the business to make a decision.

After implementation, a report on the actual performance, together with a risk assessment, is compared to the acceptance criteria and the predicted performance. Based on the results of the risk assessment, a decision is recommended to the business. If the change is acceptable, acceptance is formalized through change management.

Risk Management

Each organization will have its own approach to risk management, and this approach should be used to assess the new or changed service during the change evaluation process. The level of risk should be appropriate for the expected benefits and the business appetite for risk. The comparison between actual and predicted performance will be assessed for risk to the business.

This output is known as a deviations report, and it will be used to support decisions regarding implementation of the new or changed service.

Associated test plans and results from tests will be used in conjunction with the deviations report, and the total evaluation will be presented in an evaluation report.

Evaluation Report

The evaluation report contains the following sections (a minimal structural sketch follows the list):

  • Risk profile, which describes the residual risk remaining after a change has been implemented and countermeasures have been applied
  • Deviations report, which describes the difference between predicted and actual performance following the implementation of a change
  • Qualification and validation statement (if appropriate), which is a statement of whether the IT infrastructure is appropriate and correctly configured to support the specific application or IT service
  • Recommendation, which is a recommendation to change management to accept or reject the change based on the other factors within the evaluation report
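To make the report structure concrete, the following is a minimal, hypothetical sketch that assembles the four sections listed above, including a simple comparison of predicted and actual performance for the deviations report; all class names, field names, thresholds, and values are invented for illustration.

```python
# Hypothetical sketch of assembling an evaluation report for change management
# (field names, tolerance, and values are invented; the four sections follow the list above).

from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class EvaluationReport:
    risk_profile: Dict[str, str]          # residual risk after countermeasures
    deviations_report: Dict[str, float]   # predicted vs. actual differences
    recommendation: str                   # accept or reject the change
    qualification_statement: Optional[str] = None


def build_report(predicted: Dict[str, float], actual: Dict[str, float],
                 residual_risks: Dict[str, str], tolerance: float = 0.05) -> EvaluationReport:
    # Deviation expressed as a fraction of the predicted value for each measure.
    deviations = {k: abs(actual[k] - predicted[k]) / predicted[k] for k in predicted}
    acceptable = all(d <= tolerance for d in deviations.values())
    return EvaluationReport(
        risk_profile=residual_risks,
        deviations_report=deviations,
        recommendation="Recommend acceptance" if acceptable
        else "Recommend rejection or a corrective change",
        qualification_statement="Infrastructure qualified for this application",  # if appropriate
    )


report = build_report(
    predicted={"availability_percent": 99.9, "response_time_s": 2.0},
    actual={"availability_percent": 99.7, "response_time_s": 2.3},
    residual_risks={"capacity": "low residual risk after adding one web node"},
)
print(report.recommendation, report.deviations_report)
```

In practice the recommendation would correspond to one of the three outcomes allowed by the policy described earlier: accept the change with the deviation, reject the change, or raise a new change to correct the deviation.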

Now let’s look at the trigger, inputs and outputs, and process interfaces for change evaluation.

Trigger

The trigger for change evaluation is the receipt of a request for evaluation from change management.

Inputs

Inputs to change evaluation are the service design package (which includes the service charter and service acceptance criteria), a change proposal, an RFC, a change record, and detailed change documentation. Other possible inputs are discussions with stakeholders and the test results and report.

Outputs

The outputs from change evaluation are interim and final evaluation report(s) for change management.

Interfaces

Change evaluation interfaces with a number of other processes. The main interfaces are as follows:

Transition Planning and Support Change evaluation works with transition planning and support to ensure that appropriate resources are available when needed and that each service transition is well managed.

Change Management There is a critical interface with change management. The processes of change evaluation and change management must be tightly integrated, with clear agreement on which types of change will be subject to formal evaluation. The time required for this evaluation must be included when planning the change. In addition, it is change management that triggers change evaluation, and change management is dependent on receiving the evaluation report in time for the CAB (or other change authority) to use it to assist in their decision making.

Service Design Coordination This process provides the information about the service that change evaluation requires in the form of a service design package.

Service Level Management or Business Relationship Management Change evaluation may need to work with these processes to understand the impact of any issues that arise and to agree on the use of customer resources to perform the evaluation.

Service Validation and Testing This process provides change evaluation with information; the two processes must coordinate activities to ensure that required inputs are available in sufficient time.

Information Management

Information from testing should be available from the service knowledge management system. Interim and final evaluation reports should be checked into the configuration management system.

Change Evaluation Process Roles

This section explores the process roles relating to the change evaluation process.

Change Evaluation Process Owner

The process owner role will carry out the generic process owner roles as described in Chapter 1. In addition, the process owner will:

  • Work with other process owners to ensure an integrated approach across design, change management, change evaluation, release and deployment, and service validation and testing

Change Evaluation Process Manager

In addition to the generic process manager responsibilities, the responsibilities of the process manager also include

  • Planning and coordinating resources to evaluate changes
  • Ensuring that the change evaluation reports and interim evaluation reports are produced in a timely manner to enable decision making

Change Evaluation Process Practitioner

The practitioner role responsibilities include

  • Using the service design package and release package to develop an evaluation plan
  • Establishing risks and issues associated with all aspects of the transition
  • Creating an evaluation report

Critical Success Factors and Key Performance Indicators

As with the other processes, the performance of change evaluation should be monitored and reported, and action should be taken to improve it. Here are two examples of CSFs for change evaluation and the related KPIs for each.

  • Critical success factor: “Stakeholders have a good understanding of the expected performance of new and changed services.”
    • KPI: The reduction in incidents reported when a new or changed service fails to deliver the required utility or warranty
    • KPI: Increased customer satisfaction with new or changed services
  • Critical success factor: “Change management has good-quality evaluations to help them make correct decisions.”
    • KPI: Increased percentage of evaluations delivered within the agreed timeframe
    • KPI: Reduced number of changes that fail or need to be backed out
    • KPI: Increased change management staff satisfaction when surveyed

Challenges

Challenges to change evaluation include developing standard performance measures and measurement methods across projects and suppliers, understanding the different stakeholder perspectives that underpin effective risk management for the change evaluation activities, and understanding (and being able to assess) the balance between managing risk and taking risks because this affects the overall strategy of the organization and service delivery. A further challenge is to measure and demonstrate less variation in predictions during and after transition.

The remaining challenges to change evaluation include taking a pragmatic and measured approach to risk and communicating the organization’s attitude toward risk and approach to risk management effectively during risk evaluation. Finally, change evaluation needs to meet the challenges of building a thorough understanding of risks that have impacted or may impact successful service transition of services and releases and encouraging a risk management culture where people share information.

Risks

The most common risks to the success of this process include a lack of clear criteria for when change evaluation should be used and unrealistic expectations of the time required. Another risk is that staff members carrying out the change evaluation have insufficient experience or organizational authority to be able to influence change authorities. Finally, projects and suppliers who fail to deliver on the promised date cause delays in scheduling change evaluation activities.

Summary

This chapter explored two more processes in the service transition stage: service validation and testing and change evaluation. You learned how service validation and testing ensures that the release is fit for use and fit for purpose. We explored the activities relating to the process and the requirements for information management.

We considered the process of change evaluation and the various outputs in the form of interim or final reports.

Finally, we examined how each of these processes supports the other and the importance of these processes to the business and to the IT service provider.

Exam Essentials

Understand and be able to list the objectives of service validation and testing. The objectives of service validation and testing are to ensure that a release will deliver the expected outcomes and value and to provide quality assurance by validating that a service is fit for purpose and fit for use. The process aims to provide objective evidence of the release’s ability to fulfill its requirements.

Understand the dependencies between the service transition processes of release and deployment management, service validation and testing, and change evaluation. Release and deployment management is responsible for ensuring that appropriate testing takes place, but the testing is carried out in service validation and testing. The output from service validation and testing is then a key input to change evaluation.

Be able to list the typical types of testing used in service validation and testing. The main testing approaches used are as follows:

  • Simulation
  • Scenario testing
  • Role playing
  • Prototyping
  • Laboratory testing
  • Regression testing
  • Joint walkthrough/workshops
  • Dress/service rehearsal
  • Conference room pilot
  • Live pilot

Be able to describe the use and contents of a test model. A test model includes a test plan, what is to be tested, and the test scripts that define how each element will be tested. A test model ensures that testing is executed consistently in a repeatable way that is effective and efficient. It provides traceability back to the requirement or design criteria and an audit trail through test execution, evaluation, and reporting.

Understand the purpose of change evaluation. Change evaluation enables us to understand the likely performance of a service change and how it might impact the business, the IT infrastructure, and other IT services. It provides a consistent and standardized means of assessing this impact by assessing the actual performance of a change against its predicted performance.

Be able to explain the objectives of change evaluation. The objectives of change evaluation include setting stakeholder expectations correctly and preventing changes from being accepted into the live environment unless it is known that they will behave as planned. Another objective is to provide good-quality outputs to enable change management to decide quickly whether a service change is to be authorized.

Understand which changes go through the formal change evaluation process and the production of an evaluation report. Only those changes the organization has deemed important enough or risky enough to require a report will go through the formal change evaluation process; simpler changes will not, except through the change review process.

Be able to list and explain the main process interfaces with change evaluation. The major interfaces are with the following processes:

  • Transition Planning and Support To ensure that appropriate resources are available when needed.
  • Change Management There must be agreed criteria for evaluation and time allowed for its completion. Change management is dependent on receiving the evaluation report in time for the CAB (or other change authority) to use it to assist in their decision making.
  • Service Design Coordination Provides the service in the service design package.
  • Service Level Management or Business Relationship Management To agree on the use of customer resources to perform the evaluation.
  • Service Validation and Testing Provides change evaluation with information; the change evaluation and service validation and testing processes must coordinate activities to ensure that required inputs are available in sufficient time.

Review Questions

You can find the answers to the review questions in the appendix.

  1. Which of the following is not provided as part of a test model?

    1. A test plan
    2. A list of what is to be tested
    3. Test scripts that define how each element will be tested
    4. A test report
  2. Which is the correct order of actions when a release is being tested?

    1. Perform tests, design tests, verify test plan, prepare test environment, test cleanup and closure, and evaluate exit criteria and report
    2. Design tests, perform tests, verify test plan, prepare test environment, evaluate exit criteria and report, and test cleanup and closure
    3. Design tests, verify test plan, prepare test environment, perform tests, evaluate exit criteria and report, and test cleanup and closure
    4. Verify test plan, design tests, prepare test environment, evaluate exit criteria and report, perform tests, and test cleanup and closure
  3. Where are the entry and exit criteria for testing defined?

    1. The SKMS
    2. The CMS
    3. The KEDB
    4. The SDP
  4. The diagram mapping the types of test to each stage of development to ensure that testing covers business and service requirements as well as technical ones is known as what?

    1. DIKW
    2. The service V-model
    3. The test plan
    4. The test strategy
  5. Which of the following are valid results of an evaluation of the test report against the exit criteria?

    1. The service will work but with higher risk than planned.
    2. The service passed.
    3. The service failed.
    4. The service will work but with higher costs than planned.
      1. 2 and 3 only
      2. 2 only
      3. All of the above
      4. 1, 2, and 4 only
  6. Which option is an objective of change evaluation?

    1. Provide assurance that a release is fit for purpose.
    2. Optimize overall business risk.
    3. Provide quality assurance for a release.
    4. Set stakeholder expectations correctly.
  7. Which is not a factor considered by change evaluation?

    1. Actual performance of a service change
    2. Cost of a service change
    3. Predicted performance of a service change
    4. Likely impact of a service change
  8. What may be included in an evaluation report?

    1. A risk profile
    2. A deviations report
    3. A recommendation
    4. A qualification statement
      1. 1 and 2
      2. 2 and 3
      3. 1, 2, and 3
      4. All of the above
  9. Which is the best description of a performance model?

    1. A performance benchmark
    2. Predefined steps for delivering good performance
    3. A representation of a service used to predict performance
    4. A framework used to manage the performance of a process
  10. Which process triggers change evaluation activity?

    1. Change management
    2. Transition planning and support
    3. Release and deployment management
    4. Service validation and testing

     
