THE FOLLOWING ITIL RELEASE, CONTROL, AND VALIDATION CAPABILITY INTERMEDIATE EXAM OBJECTIVES ARE DISCUSSED IN THIS CHAPTER:
The syllabus covers the day-to-day operation of each process and the detail of the process activities, methods, and techniques and information management. The managerial and supervisory aspects of service transition processes are covered in the service lifecycle courses.
This chapter covers the day-to-day operation of each process and the detail of the process activities, methods, and techniques and information management. The managerial and supervisory aspects of service transition processes are covered in the ITIL Intermediate Certificate Companion Study Guide (Sybex, 2016).
In service validation and testing, we review the concepts of quality assurance, ensuring that the new or changed service will meet the requirements of the business.
Change evaluation is concerned with the choices the business makes relating to the new or changed service. It allows for the organization to decide whether the new or changed service meets the acceptance criteria agreed to as part of the design.
The underlying concept behind service validation and testing is quality assurance—establishing that the service design and release will deliver a new or changed service or service offering that is fit for its purpose and use.
The purpose of the service validation and testing process is to ensure that a new or changed IT service matches its design specification and will meet the needs of the business.
The objectives of service validation and testing are to ensure that a release will deliver the expected outcomes and value within the projected constraints and to provide quality assurance by validating that a service is “fit for purpose” and “fit for use.” Another objective is to confirm that the requirements are correctly defined, remedying any errors or variances early in the service lifecycle. The process aims to provide objective evidence of the release’s ability to fulfill its requirements. The final objective is to identify, assess, and address issues, errors, and risks throughout service transition.
The service provider has a commitment to deliver the required levels of warranty as defined within the service agreement. Throughout the service lifecycle, service validation and testing can be applied to provide assurance that the required capabilities are being delivered and the business needs are met.
The testing activity of service validation and testing directly supports release and deployment by ensuring appropriate testing during the release, build, and deployment activities. It ensures that the service model is fit for purpose and fit for use before being authorized as live through the service catalog. The output from testing is used by the change evaluation process to judge whether the service is delivering the required performance with an acceptable risk profile.
The key value of service validation and testing to the business and customers is the degree of confidence it establishes that a new or changed service will deliver the value and outcomes required of it, together with the understanding it provides of the associated risks.
Successful testing provides a measured degree of confidence rather than guarantees. Service failures can harm the service provider's business and the customer's assets and can result in loss of reputation, money, and time, and even injury or death.
Now we’ll look at the policies and principles of service validation and testing and the basic concepts behind it. The policies for this process reflect strategy and design requirements. The following list includes typical policy statements:
Service validation and testing is affected by policies from other areas of service management. Policies that drive and support service validation and testing include the service quality policy, the risk policy, the security policy, and the service transition, release management, and change management policies.
Service quality will be defined by senior management, based on customer and stakeholder input. This will drive the adoption and measurement of the basic quality perspectives listed in service strategy: level of excellence, value for money, conformance to specifications, and meeting or exceeding expectations. The organization will prioritize these perspectives, and this prioritization will influence the approach taken to service validation and testing.
Risk policies have a direct link to testing: testing must measure risk according to the organization's risk appetite, and it should also address any risk requirements arising from the security policy.
The service transition policy sets the working practices for the entire service lifecycle stage, including the approach to managing transitions of all scales and the levels of control required during the stage. It will cover areas such as governance, use of frameworks, reuse approaches, business alignment and relationship management, knowledge transfer, managing course corrections, early life support, and resource management.
The release policy will define the frequency and type of releases, which in turn will influence the testing approach. The more frequent the releases, the stronger the case for automating testing and developing reusable test models.
The use of change windows will also influence the testing, so the change management policy will be important in defining the testing approach.
Because testing is directly related to the building of the service assets and products that make up services, each one of the assets or products should have an associated acceptance test. This will ensure that the individual components will work effectively prior to use in the new or changed service. Each service model should be supported by a reusable test model that can be used for both release and regression testing in the future. Testing models should be introduced early in the lifecycle to ensure that there is a lifecycle approach to the management of testing and validation.
A test model should include a test plan, what is to be tested, and the test scripts that will be used for each element of the service. Making the test model reusable and repeatable ensures that testing remains effective and efficient. This will support traceability back to the design criteria or initial requirements, and the audit of the test execution, evaluation, and reporting. Table 13.1 shows some examples of test models.
TABLE 13.1 Examples of service test models
Test model | Objective/target deliverable | Test conditions based on |
Service contract test model | To validate that the customer can use the service to deliver a value proposition | Contract requirements. Fit for purpose, fit for use criteria |
Service requirements test model | To validate that the service provider can deliver/has delivered the service required and expected by the customer | Service requirements and service acceptance criteria |
Service level test model | To ensure that the service provider can deliver the service level requirements, and that service level requirements can be met in the live environment, e.g., testing the response and fix time, availability, product delivery times, support services, etc. | Service level requirements, SLA, OLA |
Service test model | To ensure that the service provider is capable of delivering, operating and managing the new or changed service using the as-designed service model that includes the resource model, cost model, integrated process model, capacity and performance model, etc. | Service model |
Operations test model | To ensure that the service operation functions can operate and support the new or changed service/service component including the service desk, IT operations, application management, technical management. It includes local IT support staff and business representatives responsible for IT service support and operations. There may be different models at different release/test levels, e.g., technology infrastructure, applications, etc. | Service model, service operation standards, processes and plans |
Release deployment test model | To verify that the deployment team, tools, and procedures can deploy the release package into a target deployment group or environment within the estimated timeframe. To ensure that the release package contains all the service components required for deployment, e.g., by performing a configuration audit | Release and deployment design and plan |
Deployment installation test model | To test that the deployment team, tools, and procedures can install the release package into a target environment within the estimated timeframe | Release and deployment design and plan |
Deployment verification test model | To test that a deployment has completed successfully and that all service assets and configurations are in place as planned and meet their quality criteria | Tests and audits of actual service assets and configurations |
Copyright © AXELOS Limited 2010. All rights reserved. Material is reproduced under license from AXELOS
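As an illustration only (ITIL does not prescribe a data format for test models), a reusable test model of the kind shown in Table 13.1 might be represented as a simple structure that ties each test script back to the requirement it traces to, supporting the traceability and audit goals described above. All names here are assumptions for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class TestModel:
    """A reusable test model: a plan, what to test, and the scripts used."""
    name: str                   # e.g., "Service level test model"
    objective: str              # the target deliverable being validated
    conditions_based_on: list   # e.g., ["Service level requirements", "SLA"]
    test_scripts: list = field(default_factory=list)

    def add_script(self, script_name: str, traces_to: str) -> None:
        # Traceability back to the design criteria or initial requirements
        # supports audit of test execution, evaluation, and reporting.
        self.test_scripts.append({"script": script_name, "traces_to": traces_to})

# The same model can be reused for both release and regression testing.
sla_model = TestModel(
    name="Service level test model",
    objective="Ensure service level requirements can be met in live",
    conditions_based_on=["Service level requirements", "SLA", "OLA"],
)
sla_model.add_script("verify_response_time", traces_to="SLR-004")
print(len(sla_model.test_scripts))  # → 1
```

Because the model is data rather than a one-off script, it can be versioned and reused release after release, which is the efficiency the text describes.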
Service validation and testing should focus on the perspective of those who will use, deliver, deploy, manage, and operate the service. The test entry and exit criteria will have been developed during the development of the service design package. This should cover aspects such as these:
Service acceptance testing is concerned with the verification of the service requirements. This is verified by the stakeholders, who include business customers or customer representatives, suppliers, and the service provider.
The business perspective of acceptance testing should consist of a defined and agreed means of measuring the service to ensure that it meets business requirements, with appropriate mechanisms in place to manage the interface to the service provider. This will require the business to provide an appropriate level and capability of resources to take part in the tests.
The service provider requires the interaction of the business to ensure continued engagement in the transition, and to ensure that the overall quality of the new or changed service is meeting expectations.
Use cases can be used to ensure that the testing covers realistic scenarios of interaction between the service provider and the business.
User testing should cover the requirements for applications, systems, and services, and ensure that these meet the functional and quality requirements of the end users. User acceptance testing (UAT) should simulate the live environment as realistically as possible. It is important to set expectations appropriately: not everything will work perfectly the first time.
It is also important to test that the new or changed service can be managed and supported successfully. This should include some basic elements:
The engagement of continual service improvement should ensure that the new or changed service is adopted as part of the scope of the overall service management improvement approach.
Testing is directly related to the building of the service assets and products, and it is necessary to ensure that each one has an associated acceptance test and activity to verify that it meets requirements.
The diagram in Figure 13.1, sometimes called the service V-model, maps the types of tests to each stage of development. Using the V-model ensures that testing covers business and service requirements, as well as technical ones, so that the delivered service will meet customer expectations for utility and warranty. The left-hand side shows service requirements down to the detailed service design. The right-hand side focuses on the validation activities that are performed against these specifications. At each stage on the left-hand side, there is direct involvement by the equivalent party on the right-hand side. It shows that service validation and acceptance test planning should start with the definition of the service requirements. For example, customers who sign off on the agreed service requirements will also sign off on the service acceptance criteria and test plan.
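The pairing of design levels with validation levels can be sketched as a simple lookup. The level names below follow the general shape of the service V-model; exact names and the number of levels vary by organization, so treat this as illustrative.

```python
# Left-hand (design) levels mapped to right-hand (validation) levels.
# The pairings follow the general shape of the service V-model.
V_MODEL = {
    "Define customer/business requirements": "Validate service packages, offerings and contracts",
    "Define service requirements":           "Service acceptance test",
    "Design service solution":               "Service operational readiness test",
    "Design service release":                "Service release package test",
    "Develop service solution":              "Component and assembly test",
}

def validation_for(design_level: str) -> str:
    """Each left-hand design level pairs with an equivalent validation level."""
    return V_MODEL[design_level]

print(validation_for("Define service requirements"))  # → Service acceptance test
```

The mapping makes the chapter's point concrete: acceptance test planning is defined at the same time as the service requirements, not bolted on at the end.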
There are many testing approaches and techniques that can be combined to conduct validation activities and tests. Examples include modeling or simulating situations where the service would be used, limiting testing to the areas of highest risk, testing compliance to the relevant standard, taking the advice of experts on what to test, and using waterfall or agile techniques. Other examples involve conducting a walkthrough or workshop, a dress rehearsal, or a live pilot.
Functional and service tests are used to verify that the service meets the user and customer requirements as well as the service provider’s requirements for managing, operating, and supporting the service. Functional testing will depend on the type of service and channel of delivery. Service testing will include many nonfunctional tests. They include testing for usability and accessibility, testing of procedures, and testing knowledge and competence. Testing of the warranty aspects of the service, including capacity, availability, resilience, backup and recovery, and security and continuity, is also included.
There are seven phases to service validation and testing, shown in Figure 13.2. The basic activities are as follows:
The test activities are not undertaken in strict sequence; several may be done in parallel. For example, test execution may begin before all test design is complete. Figure 13.2 shows an example of a validation and testing process; it is described in detail in the following list:
Next we’ll look at the trigger, inputs and outputs, and process interfaces for service validation and testing.
This process has only one trigger: a scheduled activity, which could appear on a release plan, test plan, or quality assurance plan.
A key input to this process is the service design package. This defines the agreed requirements of the service, expressed in terms of the service model and service operation plan. The SDP, as we have discussed previously, contains the service charter, including warranty and utility requirements, definitions of the interface between different service providers, acceptance criteria, and other information. The operation and financial models, capacity plans, and expected test results are further inputs.
The other main input consists of the RFCs that request the required changes to the environment within which the service functions or will function.
The direct output from service validation and testing is the report delivered to change evaluation. This sets out the configuration baseline of the testing environment, identifies what testing was carried out, and presents the results. It also includes an analysis of the results (for example, a comparison of actual results with expected results) and any risks identified during testing activities.
Other outputs are the updated data and information and knowledge gained from the testing along with test incidents, problems, and known errors.
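As a sketch of the report contents described above, the comparison of actual with expected results might look like the following. The field names and structure are assumptions for the example, not a prescribed ITIL format.

```python
def build_test_report(baseline: str, results: dict, expected: dict, risks: list) -> dict:
    """Compare actual test results with expected results and summarize them
    for change evaluation, alongside the test environment baseline and risks."""
    deviations = {
        test: {"expected": expected[test], "actual": actual}
        for test, actual in results.items()
        if actual != expected.get(test)
    }
    return {
        "configuration_baseline": baseline,   # baseline of the testing environment
        "tests_performed": sorted(results),   # what testing was carried out
        "deviations": deviations,             # analysis: actual vs expected
        "risks_identified": risks,            # risks found during testing
    }

report = build_test_report(
    baseline="TEST-ENV-2024-06",
    results={"failover": "pass", "response_time": "1.4s"},
    expected={"failover": "pass", "response_time": "1.0s"},
    risks=["Response time misses SLR under peak load"],
)
print(list(report["deviations"]))  # → ['response_time']
```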
Service validation and testing supports all of the release and deployment management steps within service transition. It is important to remember that although release and deployment management is responsible for ensuring that appropriate testing takes place, the actual testing is carried out as part of the service validation and testing. The output from service validation and testing is then a key input to change evaluation. The testing strategy ensures that the process works well with the rest of the service lifecycle—for example, with service design, ensuring that designs are testable, and with CSI, managing improvements identified in testing. Service operation will use maintenance tests to ensure the continued efficacy of services, whereas service strategy provides funding and resources for testing.
In Chapter 1, “Introduction to Operational Support and Analysis,” we explored the generic roles applicable to all processes throughout the service lifecycle. These are relevant to the service validation and testing process, but there are specific additional requirements that also apply. Remember that these are not “job titles”; they are guidance on the roles that may be needed to successfully run the process.
The generic process owner role responsibilities described in Chapter 1 apply to this role, and in addition, these specific requirements apply:
It is important to ensure that this role is assigned to a different person from the one who has responsibility for release and deployment management, to avoid conflicts of interest.
The generic process manager role responsibilities described in Chapter 1 apply to this role, and in addition, these specific requirements apply:
This role typically includes
A number of roles contribute to the service validation and testing process:
As previously mentioned, the nature of IT service management is repetitive, and it benefits greatly from the reuse of data, scripts, and models during transition. Service management good practice suggests maintaining a test library. This includes the use of automated testing tools (computer-aided software testing), which are becoming an increasingly important part of the service validation and testing process.
Data is a requirement for all testing, and its relevance will determine the success of the test. When applied to software this is clearly necessary, but it also applies to other testing environments.
Test environments should be maintained and protected. All changes should be reviewed to see if they have an impact on the test environments. All of these aspects will need to be considered:
Maintenance of test data should be carried out as part of day-to-day operational activity. It should consider
As with all processes, the performance of service validation and testing should be monitored and reported, and action should be taken to identify and implement improvements to the process. Each critical success factor (CSF) should have a small number of key performance indicators (KPIs) that will measure its success, and each organization may choose its own KPIs.
The following are two examples of CSFs for service validation and testing and the related KPIs for each.
The success of the CSF “Achieving a balance between cost of testing and effectiveness of testing” can be measured using KPIs that measure
The success of the CSF “Providing evidence that the service assets and configurations have been built and implemented correctly in addition to the service delivering what the customer needs” can be measured using KPIs that measure both the improvement in the percentage of service acceptance criteria that have been tested for new and changed services and the improvement in the percentage of services for which build and implementation have been tested separately from any tests of utility or warranty.
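The KPI of improvement in the percentage of tested acceptance criteria is simple arithmetic; a short sketch makes the measurement explicit. The figures below are invented for illustration.

```python
def pct_criteria_tested(tested: int, total: int) -> float:
    """Percentage of service acceptance criteria that have been tested."""
    if total == 0:
        raise ValueError("no acceptance criteria defined")
    return round(100.0 * tested / total, 1)

# The KPI tracks the improvement between two measurement periods.
previous = pct_criteria_tested(tested=34, total=50)   # 68.0
current = pct_criteria_tested(tested=45, total=50)    # 90.0
print(current - previous)  # → 22.0
```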
The most frequent challenges to effective testing stem from a lack of understanding of, and respect for, the role of testing among other staff. Traditionally, testing has been starved of funding, resulting in an inability to maintain a test environment and test data that match the live environment, and in too few staff, skills, and testing tools to deliver adequate test coverage. Testing is often squeezed by overruns in other parts of the project so that the go-live date can still be met, which reduces the level and quality of testing that can be done. Delays by suppliers in delivering equipment can also reduce the time available for testing.
All of these factors can result in inadequate testing, which, once again, feeds the commonly held feeling that it has little real value.
The most common risks to the success of this process are as follows:
Before a transition can be closed, it needs to be reviewed to ensure that it has achieved its purpose with no unidentified negative side effects. Successful completion of the change evaluation ensures that the service can be formally closed and handed over to the service operation functions and CSI.
An evaluation report is prepared that lists the deviations from the service charter/SDP and includes a risk profile and recommendations for change management.
The purpose of the change evaluation process is to understand the likely performance of a service change and how it might impact the business, the IT infrastructure, and other IT services. The process provides a consistent and standardized means of assessing this impact by assessing the actual performance of a change against its predicted performance. Risks and issues related to the change are identified and managed.
The objectives of change evaluation include setting stakeholder expectations correctly and providing accurate information to change management to prevent changes with an adverse impact, and changes that introduce risk, from being transitioned unchecked. Another objective is to evaluate the intended and, as much as possible, the unintended effects of a service change and provide good-quality outputs to enable change management to decide quickly whether a service change is to be authorized.
Effective change management means that every change must be authorized by a suitable change authority at various points in its lifecycle. Typical authorization points include before build and test starts, before being checked into the DML (if software related), and before deployment to the live environment. The decision on whether to authorize the next step is made based on the evaluation of the change resulting from this process. The evaluation report provides the change authority with advice and guidance. The process describes a formal evaluation suitable for use for significant changes; each organization will decide which changes need formal evaluation and which will be evaluated as part of change management.
Change evaluation is concerned with value. Effective change evaluation will judge whether the resources used to deliver the benefit that results from the change represent good value. This information will encourage a focus on value in future service development and change management. CSI can benefit enormously from change evaluation regarding possible areas for improvement within the change process itself and the predictions and measurement of service change performance.
The following reviews some of the key policies that apply to the change evaluation process. The first is that service designs or service changes will be evaluated before being transitioned. Second, although every change must be evaluated, the formal change evaluation process will be used only on significant changes; this requires, in turn, that criteria be defined to identify which changes are “significant.”
Change evaluation will identify risks and issues related to the new or changed service and to any other services or shared infrastructure. Deviation from predicted to actual performance will be managed by the customer accepting the change with the deviation, rejecting the change, or introducing a new change to correct the deviation. These three are the only outcomes of change evaluation allowed.
The principles behind change evaluation include committing to identifying and understanding the consequences of both the unintended and intended effects of a change, as far as possible. Other principles include ensuring that each service change will be fairly, consistently, openly, and, wherever possible, objectively evaluated and ensuring that an evaluation report is provided to change management to facilitate decision making at each authorization point.
The change evaluation process uses the Plan-Do-Check-Act (PDCA) model to ensure consistency across all evaluations. Using this approach, each evaluation is planned and then carried out in multiple stages, the results of the evaluation are checked, and actions are taken to resolve any issues found.
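A minimal sketch of that cycle follows: plan the evaluation, carry it out, check the results, and act on any issues before re-planning. The stage functions here are placeholders, not ITIL-defined interfaces.

```python
def pdca_evaluate(plan, do, check, act, max_cycles: int = 3) -> bool:
    """Run the Plan-Do-Check-Act cycle until the check finds no issues,
    or the allowed number of cycles is exhausted."""
    for _ in range(max_cycles):
        evaluation_plan = plan()     # Plan: scope this evaluation stage
        results = do(evaluation_plan)  # Do: carry out the evaluation
        issues = check(results)      # Check: review the results
        if not issues:
            return True              # evaluation complete, no issues found
        act(issues)                  # Act: resolve issues, then re-plan
    return False

# Toy run: two issues exist initially; each Act stage resolves one.
state = {"issues": 2}
ok = pdca_evaluate(
    plan=lambda: "evaluate stage 1",
    do=lambda p: state,
    check=lambda r: ["issue"] * r["issues"],
    act=lambda issues: state.update(issues=state["issues"] - 1),
)
print(ok)  # → True
```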
Table 13.2 shows the key terms used by the change evaluation process.
TABLE 13.2 Key terms that apply to the change evaluation process
Term | Meaning |
Actual performance | The performance achieved following a service change. |
Countermeasure | The mitigation that is implemented to reduce risk. |
Deviations report | A report of the difference between predicted and actual performance. |
Evaluation report | A report generated by the change evaluation process, which is passed to change management and which consists of: a risk profile, a deviations report, a recommendation, and a qualification statement. |
Performance | The utilities and warranties of a service. |
Performance model | A representation of a service that is used to help predict performance. |
Predicted performance | The expected performance of a service following a service change. |
Residual risk | The remaining risk after countermeasures have been deployed. |
Service capability | The ability of a service to perform as required. |
Service change | A change to an existing service or the introduction of a new service. |
Test plan and results | The test plan is a response to an impact assessment of the proposed service change. Typically the plan will specify how the change will be tested; what records will result from testing and where they will be stored; who will authorize the change; and how it will be ensured that the change and the service(s) it affects will remain stable over time. The test plan may include a qualification plan and a validation plan if the change affects a regulated environment. The results represent the actual performance following implementation of the change. |
Copyright © AXELOS Limited 2010. All rights reserved. Material is reproduced under license from AXELOS
Figure 13.3 shows the change evaluation process and its key inputs and outputs. You can see the inputs that trigger the evaluation activity and the interim and final evaluation reports that are outputs of the process. Where performance does not meet the requirement, change management is responsible for deciding what to do next.
Evaluation of a change should ensure that unintended effects as well as intended effects of the change are understood. Unintended effects will often be negative in terms of impact on other services, customers, and users of the service. Intended effects of a change should match the acceptance criteria. Unintended effects are often not seen until the pilot stage or even until live use; they are difficult to predict or measure.
Table 13.3 lists the factors to be considered when assessing the effect of a service change.
TABLE 13.3 Factors to consider when assessing the effect of a service change
Factor | Evaluation of service design |
S—Service provider capability | The ability of a service provider or service unit to perform as required. |
T—Tolerance | The ability or capacity of a service to absorb the service change or release. |
O—Organizational setting | The ability of an organization to accept the proposed change. For example, is appropriate access available for the implementation team? Have all existing services that would be affected by the change been updated to ensure smooth transition? |
R—Resources | The availability of appropriately skilled and knowledgeable people and sufficient finances, infrastructure, applications, and other resources necessary to run the service following transition. |
M—Modeling and measurement | The extent to which the predictions of behavior generated from the model match the actual behavior of the new or changed service. |
P—People | The people within a system and the effect of change on them. |
U—Use | Will the service be fit for use? Will it be able to deliver the warranties? Is it continuously available? Is there enough capacity? Will it be secure enough? |
P—Purpose | Will the new or changed service be fit for purpose? Can the required performance be supported? Will the constraints be removed as planned? |
Using customer requirements, including the acceptance criteria, the predicted performance, and the performance model, a risk assessment should be carried out. An interim report will be sent to change management.
The report will compare the predicted performance to the acceptance criteria and will include the risk assessment. It will also include a recommendation as to whether the change should proceed. This will be forwarded to the next authorization point.
Before change management makes a decision on authorization of a step in a change, change evaluation will evaluate the actual performance. The results will be sent to change management so that interim decisions can be made. The decision rests with the business; if the recommendation is to stop, the business will have to verify that recommendation.
If the recommendation is to continue, then the next authorization point will be reached for the business to make a decision.
Post implementation, a report on the actual performance, along with a risk assessment, will be compared to the acceptance criteria and predicted performance results. Based on the results of the risk assessment, a decision will be recommended for the business. If it is acceptable, then the decision for acceptance will be made through change management.
Each organization will have its own approach to risk management, and this approach should be used to assess the new or changed service during the change evaluation process. The level of risk should be appropriate for the expected benefits and the business appetite for risk. The comparison between predicted and actual performance will be assessed for risk to the business.
This output is known as a deviation report, and it will be used to support decisions regarding implementation of the new or changed service.
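A deviation report boils down to comparing predicted with actual performance, metric by metric. This sketch assumes an agreed tolerance figure; in practice the thresholds would come from the acceptance criteria, and the metric names here are invented for the example.

```python
def deviation_report(predicted: dict, actual: dict, tolerance: float = 0.05) -> dict:
    """List the metrics where actual performance deviates from predicted
    performance by more than the agreed tolerance (5% assumed here)."""
    deviations = {}
    for metric, pred in predicted.items():
        act = actual[metric]
        if abs(act - pred) / pred > tolerance:
            deviations[metric] = {"predicted": pred, "actual": act}
    return deviations

report = deviation_report(
    predicted={"availability_pct": 99.9, "avg_response_ms": 200.0},
    actual={"availability_pct": 99.8, "avg_response_ms": 260.0},
)
print(sorted(report))  # → ['avg_response_ms']
```

Only the response-time metric breaches the assumed tolerance here, so it alone would appear in the deviation report supporting the implementation decision.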
Associated test plans and results from tests will be used in conjunction with the deviation report, and the total evaluation will be presented in an evaluation report.
The evaluation report contains the following sections:
Now let’s look at the trigger, inputs and outputs, and process interfaces for change evaluation.
Let’s look first at the trigger for this process. The trigger for change evaluation is receipt of a request for evaluation from change management.
Inputs to change evaluation are the service design package (which includes the service charter and service acceptance criteria), a change proposal, an RFC, a change record, and detailed change documentation. Other possible inputs are discussions with stakeholders and the test results and report.
The outputs from change evaluation are interim and final evaluation report(s) for change management.
Change evaluation interfaces with a number of other processes. The main interfaces are as follows:
Transition Planning and Support Change evaluation works with transition planning and support to ensure that appropriate resources are available when needed and that each service transition is well managed.
Change Management There is a critical interface with change management. The processes of change evaluation and change management must be tightly integrated, with clear agreement on which types of change will be subject to formal evaluation. The time required for this evaluation must be included when planning the change. In addition, it is change management that triggers change evaluation, and change management is dependent on receiving the evaluation report in time for the CAB (or other change authority) to use it to assist in their decision making.
Service Design Coordination This process provides the information about the service that change evaluation requires in the form of a service design package.
Service Level Management or Business Relationship Management Change evaluation may need to work with these processes to understand the impact of any issues that arise and to agree on the use of customer resources to perform the evaluation.
Service Validation and Testing This process provides change evaluation with information; the two processes must coordinate activities to ensure that required inputs are available in sufficient time.
Information from testing should be available from the service knowledge management system. Interim and final evaluation reports should be checked into the configuration management system.
This section explores the process roles relating to the change evaluation process.
The process owner role will carry out the generic process owner roles as described in Chapter 1. In addition, the process owner will:
In addition to the generic process manager responsibilities, the process manager’s responsibilities include
The practitioner role responsibilities include
As with the other processes, the performance of change evaluation should be monitored and reported, and action should be taken to improve it. Here are two examples of critical success factors (CSFs) for change evaluation and the related key performance indicators (KPIs) for each.
Challenges to change evaluation include developing standard performance measures and measurement methods across projects and suppliers, understanding the different stakeholder perspectives that underpin effective risk management for the change evaluation activities, and understanding (and being able to assess) the balance between managing risk and taking risks, because this balance affects the overall strategy of the organization and service delivery. A further challenge is to measure and demonstrate less variation in predictions during and after transition.
The remaining challenges to change evaluation include taking a pragmatic and measured approach to risk and communicating the organization’s attitude toward risk and approach to risk management effectively during risk evaluation. Finally, change evaluation needs to meet the challenges of building a thorough understanding of risks that have impacted or may impact successful service transition of services and releases and encouraging a risk management culture where people share information.
The most common risks to the success of this process include a lack of clear criteria for when change evaluation should be used and unrealistic expectations of the time required. Another risk is that staff members carrying out the change evaluation have insufficient experience or organizational authority to be able to influence change authorities. Finally, projects and suppliers who fail to deliver on the promised date cause delays in scheduling change evaluation activities.
This chapter explored two more processes in the service transition stage: service validation and testing and change evaluation. You learned how service validation and testing ensures that the release is fit for use and fit for purpose. We explored the activities relating to the process and the requirements for information management.
We considered the process of change evaluation and the various outputs in the form of interim or final reports.
Finally, we examined how each of these processes supports the other and the importance of these processes to the business and to the IT service provider.
Understand and be able to list the objectives of service validation and testing. The objectives of service validation and testing are to ensure that a release will deliver the expected outcomes and value and to provide quality assurance by validating that a service is fit for purpose and fit for use. The process aims to provide objective evidence of the release’s ability to fulfill its requirements.
Understand the dependencies between the service transition processes of release and deployment management, service validation and testing, and change evaluation. Release and deployment management is responsible for ensuring that appropriate testing takes place, but the testing is carried out in service validation and testing. The output from service validation and testing is then a key input to change evaluation.
Be able to list the typical types of testing used in service validation and testing. The main testing approaches used are as follows:
Be able to describe the use and contents of a test model. A test model includes a test plan, what is to be tested, and the test scripts that define how each element will be tested. A test model ensures that testing is executed consistently in a repeatable way that is effective and efficient. It provides traceability back to the requirement or design criteria and an audit trail through test execution, evaluation, and reporting.
Understand the purpose of change evaluation. Change evaluation enables us to understand the likely performance of a service change and how it might impact the business, the IT infrastructure, and other IT services. It provides a consistent and standardized means of assessing this impact by assessing the actual performance of a change against its predicted performance.
Be able to explain the objectives of change evaluation. The objectives of change evaluation include setting stakeholder expectations correctly and preventing changes from being accepted into the live environment unless it is known that they will behave as planned. Another objective is to provide good-quality outputs to enable change management to decide quickly whether a service change is to be authorized.
Understand which changes go through the formal change evaluation process and the production of an evaluation report. Only those changes the organization has deemed important enough or risky enough to require a report will go through the formal change evaluation process; simpler changes are assessed only through the normal change review process.
Be able to list and explain the main process interfaces with change evaluation. The major interfaces are with the following processes:
You can find the answers to the review questions in the appendix.
Which of the following is not provided as part of a test model?
Which is the correct order of actions when a release is being tested?
Where are the entry and exit criteria for testing defined?
The diagram mapping the types of test to each stage of development to ensure that testing covers business and service requirements as well as technical ones is known as what?
Which of the following are valid results of an evaluation of the test report against the exit criteria?
Which option is an objective of change evaluation?
Which is not a factor considered by change evaluation?
What may be included in an evaluation report?
Which is the best description of a performance model?
Which process triggers change evaluation activity?