Chapter 46
Continual Service Improvement Methods and Techniques

THE FOLLOWING ITIL INTERMEDIATE EXAM OBJECTIVES ARE DISCUSSED IN THIS CHAPTER:

  • ✓  How to perform and interpret:
    • Assessments
    • Gap analysis
    • Benchmarking
    • Service measurement
    • Metrics
    • Balanced scorecards
    • SWOT analysis
    • Service reports
    • Return on investment
  • ✓  How CSI can use processes to support its activities:
    • Availability management
    • Capacity management
    • IT service continuity management
    • Problem management
    • Knowledge management

 In this chapter we consider how to carry out some common CSI techniques. We also explore the use of measurement and metrics. The chapter also provides information on the support of continual service improvement from other service management processes.

Assessments

Assessments are formal mechanisms for comparing the operational process environment to performance standards for the purpose of measuring improved process capability and/or identifying potential shortcomings that could be addressed.

Assessments enable the sampling of particular elements of a process or organization that impact its efficiency and effectiveness. By conducting a formal assessment, an organization is demonstrating a significant commitment to improvement because assessments involve real costs, take up staff time, and require the management teams to be completely supportive and engaged in the activity.

Comparison of the operating environment to industry norms should be a relatively straightforward process. It is important to identify the “norm” that will be most effective for comparison. Assessments based on comparison to a maturity model have become common over the years.

A well-designed maturity assessment framework will evaluate all aspects of the process environment, people, processes, and technology. It will also cover factors that affect process effectiveness and efficiency in the organization, such as cultural factors, the process strategy and vision, governance, reporting and metrics, business and IT cooperation and alignment, and decision-making.

The initial step in the assessment process is to choose (or define) the maturity model and in turn the maturity attributes to be measured at each level.

A suggested approach is to turn to best practice frameworks such as CMMI, COBIT, ISO/IEC 20000, or the Process Maturity Framework. These frameworks either define maturity models directly or allow one to be inferred, and they are also useful in defining process maturity attributes.
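
Purely as an illustration of how a maturity assessment might be scored, the following Python sketch records assessor ratings for a handful of process attributes and derives a maturity level. The attribute names, scores, and the "weakest attribute" scoring rule are all invented for the example; real frameworks such as CMMI define their own attributes and rating rules.

```python
# Illustrative sketch of scoring a process assessment against a
# five-level maturity model. Attributes and scores are invented;
# they are not taken from CMMI, COBIT, or any specific framework.

MATURITY_LEVELS = ["Initial", "Repeatable", "Defined", "Managed", "Optimizing"]

# Each attribute is rated 1-5 by the assessor.
assessment_scores = {
    "process documentation": 3,
    "roles and responsibilities": 2,
    "metrics and reporting": 2,
    "tool support": 4,
    "management commitment": 3,
}

def maturity_level(scores):
    # A common, conservative reading of staged maturity models:
    # a process is only as mature as its weakest attribute.
    weakest = min(scores.values())
    return MATURITY_LEVELS[weakest - 1]

print(f"Assessed maturity: {maturity_level(assessment_scores)}")
# -> Assessed maturity: Repeatable
```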

When to Assess

Assessments can be carried out at any time, but it is good practice to associate an assessment to the improvement cycle (Plan-Do-Check-Act).

If we consider the Plan stage, there should be an assessment carried out as part of project initiation. This is particularly important at the beginning of a process improvement initiative. When processes are being introduced, they should be assessed as part of the baseline for the improvement. Processes can vary widely in configuration and design, which increases the complexity of assessment data collection.

Planning can be a lengthy activity because incremental plans may be agreed on midstream during the project. An assessment taking place in the course of a process improvement project ensures that the project objectives are being met and can provide evidence that benefits are being achieved from the investment in time and resources.

When the process is in progress, we are in the Do-Check stages. Assessment during a project—for example, at the conclusion of a project stage for process improvement—is important to validate the maturation of the process and the process organization achieved through the efforts of the project team. Periodic reassessment following an improvement initiative will ensure that quality standards are maintained or further improvements are identified.

What to Assess and How

Setting the scope of the assessment is obviously a very important decision. A key consideration must be the objective of the assessment and what the expected future uses of the process assessments and assessment reports will be. Assessments can be targeted broadly at current processes or focused on specific issues within the process environment.

There are three potential scope levels, namely process only; people, process, and technology; and a full assessment, including culture.

Process Only

The first of these is a process-only assessment of process attributes based on the general principles and guidelines of a process framework.

Process, People, and Technology

Extend the assessment of the process to include people and technology. This will mean that the skills and roles of management and practitioners involved with the processes will be included. It also includes the technology in place to support the processes.

Full Assessment

A full assessment extends the people, process, and technology assessment to include the whole organization supported by the service provider.

The full assessment will cover the culture of acceptance of improvements within the organization and the ability of the organization to articulate a process strategy, including the end vision for the process environment. This will drive how the processes and functions are structured, as well as the ability of process governance to ensure that process objectives are met. The strategy will define the alignment and cooperation between the business and IT in using the process framework.

A key part of this will also be the assessment of the reporting and metrics. The strategy will cover the capacity and capability across the business and IT of decision-making practices to improve processes over time. A full assessment will cover all of these aspects to give a complete review of the overall health of the IT organization.

How to Assess

Assessments can be conducted by the sponsoring organization or with the aid of a third party. Table 46.1 shows the pros and cons of these differing approaches.

Table 46.1 Pros and cons of assessment approaches

Using external resources for assessments

Pros:
  • Objectivity
  • Expert ITIL knowledge
  • Broad exposure to multiple IT organizations
  • Analytical skills
  • Credibility
  • Minimal impact to operations

Cons:
  • Cost
  • Risk of acceptance
  • Limited knowledge of existing environments
  • Improper preparation affects effectiveness
  • May not be there to see it through to the end and witness the results, good or bad

Performing self-assessments

Pros:
  • No expensive consultants
  • Self-assessments available for free
  • Promotes internal cooperation and communication
  • Good place to get started
  • Internal knowledge of environment
  • Can repeat exercise in future at minimal cost, using newly acquired skills

Cons:
  • Lack of objectivity (internal agendas)
  • Little acceptance of findings
  • Internal politics
  • Limited knowledge or skills
  • Resource intensive
  • Inability to see the wood for the trees; assessment often needs a fresh set of eyes
  • Detracts from the day job; unless backfilled, could inadvertently reduce service effectiveness and efficiency during assessment

Copyright © AXELOS Limited 2010. All rights reserved. Material is reproduced under license from AXELOS.

The advantages of conducting a self-assessment are the reduced cost and the experiential learning of how to objectively assess relative performance and progress of an organization’s processes. The downside is, of course, the difficulty associated with remaining objective and impartial.

Use of a third party can eliminate the lack of objectivity. There are a number of public “mini-assessments” that are available on various websites and provide a general perspective of maturity, but a more detailed assessment and resulting report can be produced by a firm specializing in an assessment practice. The increased cost of a third-party assessment can be balanced against the objectivity it provides and the experience that comes with performing assessments regularly.

Whether conducted internally or externally, the assessment should be reported using the levels of the maturity model. A best-practice reporting approach is to communicate assessment results in a graphical fashion. Graphs are an easy tool because they can fulfil multiple communication requirements; for example, they can be used to reflect changes or trends of process maturity over time or to compare the current assessment to standards or norms. It is often easier to provide a visual rather than textual report of the results because improvements can be seen at a glance. No graph should be unsupported by explanatory text; there needs to be a clear definition of the data, its source, and how it was used to produce the graphical output.

Advantages of Assessments

Assessments can provide an objective perspective of the current operational process state. This perspective can be compared to a standard maturity model and a process framework. Once a thorough assessment has been conducted, an accurate identification of any process gaps can be quickly completed, recommendations for remediation put forward, and action steps planned.

A well-planned and well-conducted assessment is a repeatable process. The assessment should be a useful management tool for measuring progress over time and establishing improvement targets or objectives.

Using an accepted industry-recognized maturity framework applied to a standard process framework allows an organization to compare its findings against a wider industry standard. This may be useful in promoting the organization for commercial tender.

An assessment provides information for the improvement cycle, answering the “Where are we now?” question and highlighting potential improvement areas.

Risks of Assessments

Of course there are risks to carrying out an assessment. It will only provide a snapshot of a specific state at a specific time. Dependent on the assessment mechanism used, you may be tying your organization into a particular vendor-specific choice of assessment and maturity framework. Occasionally, organizations find that the assessment and the achievement of the targets becomes an end in itself and the actual benefits and improvements that will help the organization are lost in the achievement of the maturity targets.

All assessments take resource effort, both for the practitioners of the processes and the assessors. It is important to understand the impact of this and realistically schedule the right amount of time. This is one of the many challenging aspects for assessments, but it is often overlooked.

Whatever the results, any assessment will require some interpretation and will therefore be subject to the experience, attitude, and approach of the assessor. It is important to establish as much objectivity as possible, but complete objectivity may not be achievable.

Consideration of who carries out the assessment is also important, as is ensuring that objectivity is retained through subsequent repeated audits. This is a particular challenge if the same assessor is used or the assessment is carried out internally. Knowing what was previously in place will not necessarily show where any new improvements are needed or if something has relapsed to an earlier maturity level.

Assessment Considerations

In the CSI journey, the decisions as to what to improve are critical to the overall results that can be achieved. Any discussion on improvements has to begin with the services being provided to the business. This could lead to improvements of the service itself or to process improvements that support the business service.

Figure 46.1 shows the relationships between services, processes, and systems.

[Figure: a logical layer containing the business process, IT service, and IT system, sitting above a physical layer containing the IT components.]

Figure 46.1 The relationships between services, processes, and systems

Copyright © AXELOS Limited 2010. All rights reserved. Material is reproduced under license from AXELOS.

Service improvements are governed by the improvement lifecycle. The improvement lifecycle is modeled after the Deming Cycle of Plan-Do-Check-Act. The cycle establishes a clear pattern for continual improvement efforts. Assessment will be an important input into the planning and part of the output from the planning stages.

Value of Processes vs. Maturity of Processes

For service management process improvement projects, one of the questions should address how mature our processes need to be. The answer is tied directly back to the business; in other words, it depends on how important the process is to the business.

In Figure 46.2, you can see the value of a process mapped to the importance to the business using three examples—service level management (SLM), availability management (AM in the figure), and capacity management (CAP in the figure).

[Figure: the value of IT processes to the business plotted against process maturity, with an area of high risk at top left (high value, low maturity), an area of largely overdoing IT at bottom right (low value, high maturity), and safe areas of added business value and quick gains between them.]

Figure 46.2 The value of a process versus the maturity of a process

Copyright © AXELOS Limited 2010. All rights reserved. Material is reproduced under license from AXELOS.

An assessment has shown that these processes are not very mature. This particular organization is changing its strategy for selling and delivering products and services to a web-based strategy. Because of the importance of capacity management and availability management to any organization that provides products and services over the Web, this company has to implement an improvement program to increase the maturity of both processes; without any improvement initiatives, it is putting itself at risk.

Having a low SLM process maturity will create some issues for CSI activities. SLM identifies new business requirements and provides information on what is currently being monitored and performance against targets. Without this information, CSI will have no baseline data for comparison.

The maturity of a process should ideally fall in the “safe” areas. If a process is immature but the business heavily depends on it, there is potentially a significant danger to the organization. If a process is very mature yet provides very little to the business, then an organization may be overinvesting resources and money. It is important to understand not only the value to the business, but also the relationship to other processes when making this assessment. Consider, for example, the impact on incident management of removing problem management. A mature problem management process may be very proactive, and its benefit to the organization will be difficult to assess in business terms, but without it, the impact on incident management will definitely be adverse.

When CSI is looking at improving processes in support of IT services, it’s critical to understand the value of processes to a business as well as their function in the lifecycle as a whole.

Gap Analysis

Gap analysis is a logical next step following benchmarking or assessment. A gap analysis requires that the variance between the business requirements and the current capability be determined, documented, and approved. Once the current capability has been identified, it can be compared with the business requirements; this comparison is the gap analysis. It is how we identify the difference between what we have and what we need.

Analysis can be performed at the strategic, tactical, or operational level of an organization. Gap analysis can be conducted from different perspectives within the organization, for example, the organization itself, including the organizational structure and capabilities of the people. Other perspectives might include the business direction or the business processes. There is a justification for looking at the analysis from the perspective of information technology, particularly where it is changing rapidly and new technology may provide a significant benefit to the organization.

Gap analysis provides a foundation for how much effort, in terms of time, money, and human resources, is required to achieve a particular goal—for example, to bring a service from maturity level 2 to level 3.
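
A minimal sketch of this calculation in Python, assuming maturity has already been assessed on a 1-5 scale; the process names and levels below are hypothetical:

```python
# Illustrative gap analysis: current capability versus business requirement.
current = {"incident management": 3, "problem management": 2, "service level management": 2}
required = {"incident management": 3, "problem management": 4, "service level management": 3}

gaps = {process: required[process] - current[process] for process in required}

# Report the largest gaps first, since they usually drive the improvement plan.
for process, gap in sorted(gaps.items(), key=lambda item: item[1], reverse=True):
    status = f"gap of {gap} level(s)" if gap > 0 else "meets requirement"
    print(f"{process}: current {current[process]}, required {required[process]} -> {status}")
```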

Benchmarking

Benchmarking is a specific type of assessment and is a process used in management, particularly as part of strategic management. It is used in organizations to evaluate aspects of their processes in relation to best practice. One of the key aspects is that it enables a decision to be made on how to achieve best practice if there is an identified shortfall. Benchmarking may be a one-time event, but it is often treated as a continuous and repeatable process in which organizations continually seek to amend their practices, which supports the goals of CSI.

Benchmarking is actually a logical sequence of stages that an organization goes through to achieve continual improvement in its key processes. It involves cooperation with others because benchmarking partners can learn from each other where improvements can be made.

There are some key requirements for benchmarking success. First, it is necessary to ensure that there is sufficient management support at a senior level. Benchmarking should be objective and inclusive, gaining information from the business, internal IT, and external sources. It is very important to make sure an external view is considered as well as the internal organizational concerns. When completing comparisons, remember that it is necessary to compare processes, not outputs, across organizations. Processes may be the same, but the output may be very different dependent on the nature of the business undertaken or the industry sector being used as a comparison. If only the outputs from processes are compared, then there is a potential for missing improvements made in other industry sectors.

In addition, it is important to involve the process owners to ensure the support and buy-in by those who will be affected by any improvements. It is wise to set up benchmarking teams who will be instrumental in developing the culture within the organization. Individuals undertaking benchmarking will require some training and guidance. It is important to get assistance from an experienced in-house facilitator or an external consultant who will be able to provide experience in the chosen method.

It is important for organizations to plan their benchmarking process based on their own improvement needs, but it is necessary to understand that this may require measurement of other companies. A research organization may be a valuable benchmarking partner, for example, if target companies are competitors. Some cross-industry figures may be published by the international research organizations, but they will not necessarily include the assumptions and measurements a given organization needs.

Benchmarking is generally expected to be a process of comparing an organization’s performance to industry-standard figures. This is often a challenge: obtaining such benchmark figures is frequently the first hurdle in a benchmarking exercise. And benchmarks are relevant only when the comparison is of the same performance measures or indicators and is made with similar organizations in terms of size, industry, and geography.

Benchmarking Procedure

For benchmarking to be successful, it is important to identify your problem areas.

A range of research techniques may be required, such as informal conversations with customers, employees, suppliers, or focus groups to capture feedback. More formal approaches using marketing research and quantitative research can provide industry sector data. Internally, feedback can also be gained by using surveys and questionnaires. Often the result of a benchmarking activity will be process mapping and reengineering analysis. Quality control should be applied to variance reports to provide information on process achievements, and financial data will be used to understand the balance between cost and efficiency.

Benchmarking Costs

Benchmarking is a moderately expensive process, but most organizations find that it more than pays for itself. There are three main types of costs.

The costs associated with travel- and accommodation-related expenses for team members who need to travel to the site are known as visit costs. This is applicable for either internal or external assessors, but it is more likely when using an external organization because none of the team members will be based at the organization’s sites.

Time costs will be significant if the assessment is to be completed thoroughly. Members of the benchmarking team will be investing time in researching problems and finding exceptional companies to study and on visits and implementation. This will take them away from their regular tasks for part of each day, so additional staff might be required.

The third type is benchmarking database costs. Once benchmarking is part of business-as-usual practice, it is important to capture and manage the data collected, and it is useful to create and maintain a database of similar best practices and the companies associated with each best practice.

Value of Benchmarking

Benchmarking can be seen as valuable only if the results are clearly communicated. This should include displaying the gaps, identifying the risks of not closing the gaps, and assisting with the prioritization of development activities and facilitating communication of this information.

Benchmarks show profiles of existing quality in the marketplace and industry sector. Demonstration of quality by comparison to benchmarks can motivate staff and aid retention. Achievement of a quality standard can be a source of pride and self-confidence in employees because it shows that they work in an efficient environment.

Customers will be able to see that the organization is a good IT service management provider.

Optimizing service quality is key to all IT organizations to maximize performance and customer satisfaction and provide value for money. Using a benchmark as a comparison allows organizations to demonstrate their achievement.

Benchmarking as a Lever

It is common to hear staff say, “The way we do it is the best because this is the way we’ve always done it.”

Benchmarking is often a way to open an organization to new methods, ideas, and tools to improve its effectiveness. It can help break through resistance to change by demonstrating methods other than the ones currently employed and providing evidence that other organizations are using them successfully.

Benchmarking as a Steering Instrument

Benchmarking should be used as a management technique to improve performance. It is used to compare performance between different organizations or different units within a single organization undertaking similar processes.

It can be used as an ongoing method of measuring and improving products, services, and practices against the best that can be identified in any industry anywhere. It has been defined as “the search for industry best practices that lead to superior performance.” Benchmarking can support management in driving the direction of organizational change.

Benchmarking Categories

An internal benchmark is where an organization sets a baseline at a certain point in time for the same system or department and then measures how it is doing today compared with the baseline originally set. This type of benchmark is often overlooked by organizations (service targets are a form of benchmark), but it can be as useful as comparison against external benchmarks because it shows an improvement progression.

Other benchmarking categories are comparisons with industry norms provided by external organizations, direct comparisons with similar organizations, and comparison with other systems or departments within the same company.

Benefits

Using benchmark results should help deliver major benefits in achieving lower prices and higher productivity on the part of the service provider. This should include identifying efficiencies by comparing the costs of providing IT services and the contribution these services make to the business with what is achieved in other organizations. This helps the organization to identify areas for improvement.

Benchmarking will also demonstrate effectiveness in terms of actual business objectives realized compared with what was planned. To obtain the maximum benefit, it is necessary to look at economy, efficiency, and effectiveness rather than focusing on one to the exclusion of the others.

Who Is Involved?

Within an organization, there will be three parties involved in benchmarking. Each has a different perspective on the results of benchmarking and how to apply them.

  • The customer or the business manager responsible for acquiring IT services to meet business objectives. The customer’s interest in benchmarking would be, “How can I improve my performance in procuring services and managing service providers, and in supporting the business through IT services?”
  • The user or consumer, namely anyone who uses IT services to support their work. The user’s interest in benchmarking would be, “How can I improve my performance by utilizing IT?”
  • The internal service provider who provides IT services to users under service level agreements negotiated with and managed by the customer. The provider’s interest in benchmarking would be, “How can we improve our performance in the delivery of IT services that meet the requirements of our customers and are cost-effective and timely?”

There will also be participation from external parties:

  • External service providers provide IT services to users under contracts and service level agreements negotiated with and managed by the customer.
  • Members of the public are increasingly becoming direct users of IT services, and this is challenging when attempting to benchmark against their needs.
  • It is important not to forget the input that will be required from benchmarking partners, that is, other organizations with whom comparisons are made in order to identify the best practices to be adopted for improvements.

What to Benchmark

Differences in benchmarks between organizations are normal. Each organization will have a slightly different setup; no two will be exactly the same. Direct comparison with similar organizations is most effective if there is a sufficiently large group of organizations with similar characteristics.

Benchmarking techniques can be applied at various levels, from relatively straightforward in-house comparisons to an industry-wide search for best practice. Benchmarking should follow the continual service improvement seven-step process to ensure that appropriate data is collected, analyzed, presented, and acted on.

Comparison with Industry Norms

ITIL is itself an industry-recognized best practice. The core publications provide documented guidance on benchmarking and process assessment. There are many organizations that provide IT service management consultancy and professional expertise in benchmarking, which may be useful to an organization. The use of maturity models is supported by a number of frameworks, of which Capability Maturity Model Integration (CMMI) is widely recognized.

Total cost of ownership (TCO), developed by Gartner, has become a key measurement of the effectiveness and efficiency of services. TCO is defined as all the costs involved in the design, introduction, operation, and improvement of a service within an organization from its inception until retirement. TCO is often used to benchmark specific IT services against other organizations, for example, managed service providers.
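
As a rough arithmetic sketch only (the figures and cost categories are invented, and real TCO models are far more detailed), TCO can be thought of as the one-off design and introduction costs plus the recurring operation and improvement costs over the service’s lifetime:

```python
# Illustrative TCO roll-up from inception to retirement.
design_and_introduction = 120_000   # one-off
annual_operation = 80_000           # recurring, per year
annual_improvement = 15_000         # recurring, per year
service_lifetime_years = 5

tco = design_and_introduction + service_lifetime_years * (annual_operation + annual_improvement)
print(f"TCO over {service_lifetime_years} years: {tco:,}")  # -> 595,000
```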

Benchmark Approach

Benchmarking will establish the extent of an organization’s existing maturity against best practice and help in understanding how that organization compares with industry norms. Deciding what the key performance indicators (KPIs) are going to be and then measuring against them will give solid management information for future improvement and targets.

There are two basic approaches: either an internal benchmark, which is completed internally using resources from within the organization to assess the maturity of the service management processes against a reference framework, or an external benchmark, completed by an external third-party company. A third party will probably have its own proprietary models for the assessment of service management process maturity.

Viewed from a business perspective, benchmark measurements can help the organization assess IT services, performance, and spend against peer or competitor organizations and against best practice, both across the whole of IT and by appropriate business areas. A number of questions are often asked about IT, such as: How does IT spending compare to that of other similar organizations—overall, as a percentage of revenue, or per employee? It is hard for an organization to understand whether it is spending too much compared to similar organizations on basic functions such as payroll, or to compare spending across business units, locations, or processes. Competitors are unlikely to divulge details about their spending on these types of services because it will be perceived as commercially confidential information. So a benchmarking activity against an agreed norm is often the only approach to support business understanding and justification of the costs of IT.

Benchmarking activities need to be aligned to the business. If carried out thoroughly, a benchmarking exercise, whether completed internally or externally, will incur significant costs. It is important that the benchmark is targeted to identify areas that will be of most value to the business.

The approaches to benchmarking can include an assessment of the cost and performance for internal service providers or the price and performance for external service providers. Or it can focus on the performance of processes against industry best practice. The comparison to industry sector or peer information relating to financial performance of IT is another assessment, as is effectiveness based on customer satisfaction ratings and business alignment.

Whichever approach is adopted, the context for benchmarking requires information about the organization’s profile, complexity, and relative comparators. An effective and meaningful profile contains four key components.

The company profile provides basic information. Company size, industry type, geographic location, and types of user are typical of data gathered to establish this profile.

There also needs to be an understanding of the current assets because the IT assets within the organization may include operational IT, desktop and mobile clients, peripherals, and network and server assets.

It is important to understand current best practices, including the policies, procedures, and/or tools that improve returns, together with their maturity and degree of usage. The fourth component is complexity, which includes information about the end-user community, the types and quantities of varied technologies in use, and how IT is managed.

There are a variety of IT benchmarking types available separately or in combination:

  • Cost and performance for internal service providers
  • Price and performance for external service providers
  • Process performance against industry best practice
  • Financial performance of high-level IT costs against industry or peers
  • Effectiveness benchmarking, which considers satisfaction ratings and business alignment at all levels

Service Measurement

IT services have become integral to businesses of all sizes, private and public organizations, educational institutions, consumers, and the individuals working within these organizations. Without IT services, it is hard to see how any organization could deliver its products and services in today’s market. This raises expectations for availability, reliability, and stability, because reliance on IT is paramount. It is why the integration of business and IT is so important; it is hard to think of a circumstance in which they could be considered separately and an organization would still survive.

As a direct consequence, businesses require that IT services are measured, not just the performance of an individual component such as a server or application. IT must now be able to measure and report against an end-to-end service and understand how this service supports and enables the business to achieve its goals.

The seven-step improvement process discusses the need to define what you will measure after looking at the requirements and the ability to measure.

Most organizations will consider specific areas of measurement, the first being the availability of the service. It could be said that this is often the primary focus of the business in terms of understanding the support delivered by IT services. If the service is unavailable, then business may simply stop. Think for a moment about the impact of a web portal outage on an online retailer. Availability is critical to success.

Supporting availability is the reliability of the service. Service availability may be good, but if it is interrupted by minor outages on a repeated basis, this will not be satisfactory for the user experience. A service that is restored quickly and often may meet targets for availability overall, but the perception from the users will be negative. Reliability is a measure of continuous performance and will ensure that the business has trust in the services provided.

Measurement of overall performance is crucial to understanding the business impact of a service rather than its components. Measuring at the component level is necessary and valuable, but service measurement must go further. Service measurement will require someone to take the individual measurements and combine them to provide a view of the true customer experience.

Too often we provide a report against a component, system, or application but don’t provide the true service level as experienced by the customer. In Figure 46.3, you can see how it is possible to measure and report against different levels of systems and components to provide a true service measurement. Even though the figure shows availability measuring and reporting, the same can apply for performance measuring and reporting.

[Figure: an email service at the top logical layer; Exchange and Lotus Notes systems at the second logical layer; Exchange hardware and software at the third logical layer; servers and a SQL database at the physical layer.]

Figure 46.3 Availability reporting

Copyright © AXELOS Limited 2010. All rights reserved. Material is reproduced under license from AXELOS.
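
To illustrate the kind of roll-up Figure 46.3 implies, the sketch below combines invented component availabilities into an end-to-end figure, under the simplifying assumptions that component failures are independent and that components are in series unless explicitly redundant; real services rarely decompose this neatly.

```python
from math import prod

# Components the service depends on in series: the service is up only
# when all of them are up, so their availabilities multiply.
serial_components = {"network": 0.9995, "server hardware": 0.9990, "application": 0.9985}

# A redundant pair (e.g., two mail servers): the pair is down only when
# both members are down, so the unavailabilities multiply.
redundant_pair = [0.995, 0.995]
pair_availability = 1 - prod(1 - a for a in redundant_pair)

end_to_end = prod(serial_components.values()) * pair_availability
print(f"End-to-end availability: {end_to_end:.4%}")
```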

Design and Develop a Service Measurement Framework

It is always challenging for an organization to create a measurement framework that supports the business requirements. One of the key factors for this is the definition of what success looks like. We need to be mindful of both the past and the future; measurement should allow for the identification of future improvement as well as report on past performance.

Whether measuring one or multiple services, the following are key to a successful service measurement framework.

  • The origins of the framework and defining what success looks like; in other words, what are we trying to achieve and how will we know when we’ve achieved it?
  • Ensuring that we are building the framework and choosing measures that will provide us with information to make strategic, tactical, and/or operational decisions.

It is important to select measures that will deliver the data and information we need based on agreed targets within IT and the business.

There are some critical elements that should be included in a service measurement framework. For example, the framework should be integrated into business planning and focused on business and IT goals and objectives. It should support cost-effectiveness, with a balanced approach to the measures applied that can be sustained over a period of time and withstand change. The framework must clearly identify the performance measures that will encourage the behaviors desired and be accurate, timely, and reliable. It is also important to ensure that the roles and responsibilities are clearly defined, so there is no doubt about who defines the measures and targets, who monitors and measures, and who gathers and analyzes the data and prepares the reports.

Different Levels of Measurement and Reporting

A service measurement framework should be built on different metrics and measurements so that the end result is a combined view of the way the individual components support the overall service. This in turn should provide information for the key performance indicators, allowing us to ensure that targets are being achieved. This will then be the basis for creating a service scorecard and dashboard.

The service scorecard can then be used to populate an IT scorecard or overall balanced scorecard. Figure 46.4 shows a diagrammatic representation of the multiple levels that need to be considered when developing a service measurement framework.

[Figure: a hierarchy rising from component measures, through rolled-up service measurement results and key performance indicators, to the service scorecard and dashboard and, at the top, the balanced scorecard.]

Figure 46.4 Service measurement model

Copyright © AXELOS Limited 2010. All rights reserved. Material is reproduced under license from AXELOS.

Service Management Process Measurement

There are four major levels to report on. They are shown in Figure 46.5.

[Figure: four levels rising from activity metrics for a process, through process performance indicators and the high-level process goal, to the overall service management scorecard.]

Figure 46.5 Service management model

Copyright © AXELOS Limited 2010. All rights reserved. Material is reproduced under license from AXELOS.

The bottom level contains the activity metrics for a process, and these are often volume-type metrics such as number of requests for change (RFCs) submitted, number of RFCs accepted into the process, number of RFCs by type, number approved, number successfully implemented, and so on.

The next level contains the KPIs associated with each process. The activity metrics should feed into and support the KPIs. In turn, the KPIs will support the next level, which is the high-level goal such as improving service quality, reducing IT costs, or improving customer satisfaction.

Finally, this high-level goal will feed into the organization’s balanced scorecard or IT scorecard. When first starting out, it is important not to pick too many KPIs to support the high-level goal(s). Additional KPIs can always be added at a later time.
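
As a small worked example of this roll-up (the numbers are invented), activity metrics for change management can feed a KPI such as the change success rate, which in turn supports a high-level goal like improving service quality:

```python
# Activity metrics (bottom level of Figure 46.5).
rfc_metrics = {"submitted": 120, "accepted": 110, "implemented": 100, "successful": 94}

# KPI (next level up): proportion of implemented changes that succeeded.
change_success_rate = rfc_metrics["successful"] / rfc_metrics["implemented"]
print(f"Change success rate (KPI): {change_success_rate:.1%}")  # -> 94.0%
```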

Creating a Measurement Framework Grid

As a significant part of this approach, best practice recommends that the organization create a framework grid to set out the high-level goals and define which KPIs will support the goal, and also which category the KPI addresses.

An example of this can be seen in Table 46.2, which is an extract from the CSI core publication.

Table 46.2 High-level goals and key performance indicators

High-level goal: Manage availability and reliability of a service.
KPI: Percentage improvement in overall end-to-end availability of services.
KPI category: Value, quality.
Measurement: End-to-end service availability, based on the availability of the components that make up the service (AS/400 availability, network availability, application availability).
Target: 99.995%.
How and who: Technical managers, technical analysts, and the service level manager.

Copyright © AXELOS Limited 2010. All rights reserved. Material is reproduced under license from AXELOS.

In this example, the high-level goal relates to availability and reliability of a service, with a qualitative KPI to demonstrate a percentage improvement in overall availability. We can see the components measured and the desired target achievement as well as those responsible for the measurement.

When considering performance, it is important to recognize that there are different elements that combine to give an overall perception of achievement. These can be classified as compliance, quality, performance, and value.

  • Compliance is a measure that demonstrates whether we are doing something.
  • Quality allows us to measure how well we are doing something.
  • Performance demonstrates the speed and urgency of carrying out something; in other words, how fast or slow we are doing it.
  • And last but by no means least, value determines whether what we are doing is making a difference.

Setting Targets

Targets set by management are quantified objectives to be attained. They may express the aims of the service or process at any level and provide the basis for identification of problems and early progress toward solutions and improvement opportunities.

It is important to recognize the variety of drivers for the service targets used in reporting. Some may be driven by business requirements or new policies or regulatory requirements. SLAs are also key drivers for targets, but it is necessary to ensure that service level management has verified the capability of the IT department to deliver on them.

Metrics

There are three types of metrics that are used to support the activities of service improvement. Making sure your metrics include all three types will ensure a well-rounded approach to measuring your services.

The three types that should be considered are technology, process, and service metrics:

  • Technology metrics, measuring the response, availability, and performance of individual components, may not be easy for nontechnical folks to interpret, but combined with process metrics, they provide vital information for the measurement of end-to-end service.
  • Process metrics relate to the quality, performance, value, and compliance for processes by capturing critical success factors associated with key performance indicators.
  • Service metrics combine these to produce end-to-end service measures.

Metrics define what is to be measured, using a scale of measurement that has been agreed to as a clearly defined unit. Many business models use metrics at their base (CMMI, for example). Metrics are used to track trends, productivity, resources, and more. The most commonly tracked metrics are KPIs. Figure 46.6 shows the relationship between the overall vision and the measurements that prove it has been achieved.

[Figure: a diagonal cascade from vision, through mission, goals, objectives, CSFs, KPIs, and metrics, down to measurements.]

Figure 46.6 From vision to measurement

Copyright © AXELOS Limited 2010. All rights reserved. Material is reproduced under license from AXELOS.

It is important to ensure that this relationship is recognized so that the measures that are applied support the achievement of the organizational vision.

How Many CSFs and KPIs?

It’s a valid question—how many critical success factors and KPIs should we have?—and opinions on this are varied. The more KPIs there are, the more complex the reporting model and analysis required to interpret them will be.

Good practice suggests that CSFs should be supported by a number of associated KPIs, but there is no defined number for either. Even a very mature organization is unlikely to have more than five CSFs per process, with no more than five KPIs per CSF. But that still adds up to a potentially high number of metrics. So it is recommended that in the early stages of a CSI program, only two or three KPIs for each CSF are defined, monitored, and reported on. As the maturity of a service and service management process increases, further KPIs can be added.

Remember, KPIs will change over time as their importance to the business and the maturity of the service provision alter, and this may have an effect on other KPIs and processes. But changes to KPIs must be carefully considered so that trending information or the value of the metric is not lost through too-frequent alterations.

Qualitative KPIs

Qualitative KPIs are based on achievement of a quality-based CSF, such as improving service quality. In order to achieve the CSF, a specific KPI must be identified. In this example, the metrics required will be the customer satisfaction scores for handling incidents.

The measures needed to support this will be the incident handling survey score and the number of survey scores. It is important for a representative sample to be used for quality, so the number of survey scores should be captured. A sample from only one customer will not give a true representation of the facts.
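
A minimal sketch of this qualitative KPI, with invented survey scores and an arbitrary threshold standing in for whatever your organization considers a representative sample:

```python
from statistics import mean

# Customer satisfaction scores (1-5) for incident handling.
survey_scores = [4, 5, 3, 4, 4, 5, 2, 4, 4, 3]
MIN_SAMPLE = 30  # hypothetical minimum for a representative sample

average = mean(survey_scores)
print(f"Incident handling satisfaction: {average:.2f} from {len(survey_scores)} responses")
if len(survey_scores) < MIN_SAMPLE:
    print("Warning: sample may be too small to represent all customers.")
```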

Quantitative KPIs

Quantitative KPIs are based on achievement of a quantity-based CSF, such as reducing IT costs. In order to achieve the CSF, a specific KPI must be identified. In this example, the metrics required might be the costs of handling printer incidents at the start and end of the initiative, together with the cost of the improvement initiative itself.

There will be a number of measures to be considered for this KPI, including the costs associated with the salaries of the analysts working on the printers, costs of service calls to third parties, and the costs of developing any workaround for printers.
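
A sketch of the supporting arithmetic, with all figures invented:

```python
# Quarterly cost of handling printer incidents (salaries, third-party
# service calls, and workaround development) before and after the initiative.
cost_at_start = 40_000
cost_at_end = 28_000
initiative_cost = 20_000

quarterly_saving = cost_at_start - cost_at_end
print(f"Quarterly saving: {quarterly_saving:,}")  # -> 12,000
print(f"Initiative pays back in {initiative_cost / quarterly_saving:.1f} quarters")
```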

Is the KPI Fit for Use?

It is important to ensure that the KPI is fit for use. There are some key questions that should be addressed to ensure that your KPI will provide the information required.

How will the KPI help to achieve the goal? Does it provide any information on whether or not the goal will be achieved if we meet the target? Does the indicator provide enough information to establish a course of action? What is the required frequency of information? Is the KPI stable and accurate? Does it take into consideration external influences that may impact the results? Can it be changed to reflect different organizational circumstances? Can the performance indicator be measured now, and what would stop it from being measured?

It is also necessary to understand who will be managing the KPI: who is collecting the data, performing the analysis, interpreting the results, and producing and delivering the reports?

Tension Metrics

Tension metrics ensure that the team efforts stay in balance by measuring a combination of the elements that deliver a successful support team. There should be a balance of resources (the people and the money), the features (the product or service and the quality of that product or service), and the schedule (an element of timeliness).

Focusing on one factor above others will cause an imbalanced approach. For example, if too much focus is placed on delivering to a schedule, quality may be impacted by the increased speed to delivery. Similarly, concentrating too much on service quality may impact on the financial resources in use because high quality is costly and the end product needs to be cost justifiable.

Tension metrics are designed to measure a balanced approach so that if one measure drives a specific behavior, such as a service desk answering calls quickly, we have a measure in place to ensure that quality does not suffer to beat the time constraint. This is why there should always be a suite of measures in place; concentrating on any one aspect may harm the overall service.

Tension metrics should not cause conflict with goals or objectives but should ensure that the team maintains an overall focus on quality of service across all aspects of delivery.
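
The sketch below shows one way such a pairing might be checked automatically; the service desk metrics, the targets, and the pairing itself are invented for illustration:

```python
# A speed measure paired with a quality measure so neither can be
# "gamed" at the other's expense.
metrics = {"avg_answer_seconds": 12, "first_contact_resolution": 0.62}
targets = {"avg_answer_seconds": 20, "first_contact_resolution": 0.70}

fast_enough = metrics["avg_answer_seconds"] <= targets["avg_answer_seconds"]
good_enough = metrics["first_contact_resolution"] >= targets["first_contact_resolution"]

if fast_enough and not good_enough:
    print("Calls are answered well within target, but resolution quality is below")
    print("target: speed may be being achieved at the expense of quality.")
```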

Goals and Metrics

Goals and metrics are important for all stages of the lifecycle—from strategy, where the organization will decide on how IT will be funded; through design, where business requirements are translated into IT solutions; through transition, where they are made a reality; to operation, where the business finally sees the direct value of IT. Throughout the lifecycle, all stages must keep the business goals and requirements in mind, and these should be reflected in the goals and objectives for each stage.

Breaking Down Goals and Metrics

Best practice identifies three categories of metrics to be considered: financial metrics, learning and growth metrics, and organizational or process metrics.

Financial metrics include project costs or operational expenses, while learning and growth metrics may include increase in skill sets or certifications. Organizational or process metrics can be broken down further into product quality metrics and process quality metrics. Product quality metrics are the metrics supporting the contribution to the delivery of quality products. Process quality metrics are related to efficient and effective process management.

Using Organizational Metrics

Organizational metrics (including financial, learning and growth, and process metrics) are important for managing the overall service delivery and ensuring that teams and processes work successfully together to achieve the desired goal.

It is important to ensure that these are adopted to provide an overall approach for the organization, not just specific elements of IT. This is the relationship shown in the journey from vision to measurement.

Interpreting and Using Metrics

Results must be interpreted in the context of the objectives for the measures as well as any environmental or external factors. If results are considered out of context, they may be misinterpreted if extenuating factors have affected them. It is important to review the measures to ensure that the chosen indicators have worked and that the results are contributing to the overall objective of the service or process.

To make sure reports are useful and meaningful, it is important to ensure that the generated results make sense. If the output does not show a probable or viable outcome, then an investigation must take place into how this could have occurred. The investigation is not designed to assign blame but to rectify an error in reporting so that the required results can be produced.

The following questions need to be answered to ensure that the results are verified properly:

  • How did we collect this data?
  • Who collected the data?
  • What tools were used to collect the data?
  • Who processed the data?
  • How was the data processed?
  • What could have led to the incorrect information?

Before starting to interpret results, you should always ensure that you have sufficient information about the data elements that have been used and the purpose of the results. It is important to understand the expected normal range for the results so that it is possible to identify any anomalous results or exceptions.

It is easy to jump to conclusions incorrectly; for example, a downward trend in calls opened at the service desk may be caused by a wide range of scenarios. There could have been a change in the way support is offered, perhaps through the introduction of self-service, or there could have been a failure in the telephone system. Note any changes that might have triggered a set of results, and where reviewing the results alone provides insufficient information, make sure the appropriate people are included in the discussion.

Using Measurement and Metrics

Metrics can be used for multiple purposes, and it is important to understand the objectives you are trying to achieve so that you use them for the correct purpose. For example, metrics can be used to validate a decision, such as whether you are supporting the strategy and vision of the business. They may also be used for justification, to answer the question, “Do we have the right targets and metrics?” Of course, we all understand that metrics can drive behaviors, so they can be used to direct and change people’s actions based on factual data. Often metrics are used to identify when an intervention needs to take place or to take corrective actions, such as identifying improvement opportunities.

As always, it is important to ensure that there is a balanced approach; focusing only on identification of improvement can have a negative impact on staff morale.

Service measurements and metrics should be used to drive decisions. Depending on what is being measured, the decision could be strategic, tactical, or operational.

CSI will have to manage many improvement opportunities, but often with only a limited budget to address them, so decisions must be made. Which improvement opportunities will support the business strategy and goals, and which will support the IT goals and objectives? What are the desired return on investment (ROI) and value on investment (VOI) opportunities? Note that the measures and metrics are always being reviewed against desired business goals and outcomes to ensure that IT continues to align with and meet business expectations.
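
Where ROI is used to compare opportunities, the basic calculation is simple enough to sketch (figures invented; VOI, which includes intangible benefits, is much harder to reduce to arithmetic):

```python
# ROI = (gain from investment - cost of investment) / cost of investment
opportunities = {
    "self-service portal": {"cost": 50_000, "first_year_gain": 80_000},
    "monitoring upgrade":  {"cost": 30_000, "first_year_gain": 36_000},
}

for name, opp in opportunities.items():
    roi = (opp["first_year_gain"] - opp["cost"]) / opp["cost"]
    print(f"{name}: first-year ROI {roi:.0%}")
# -> self-service portal: first-year ROI 60%
# -> monitoring upgrade: first-year ROI 20%
```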

Measures by themselves may tell the organization very little unless there is a standard or baseline against which to assess the data. Measuring only one particular characteristic of performance in isolation is meaningless unless it is compared with something else that is relevant. Measures of quality allow for measuring trends and the rate of change over a period of time. The following comparisons may be useful:

  • Comparison of the assessment against the baseline or agreed standard. It is important to understand the criteria for any deviation from the standard so that you make only necessary improvements instead of responding to every discrepancy.
  • Comparison against a target or goal in an SLA is important so there is a clear understanding of fluctuations in service quality. This will strengthen a relationship between service provider and customer by demonstrating engagement and forward planning.
  • Comparison with other organizations is also helpful, but it is necessary to ensure that the strategy, goals, and objectives of other organizations align with yours.
  • Comparison over time, such as day to day, week to week, month to month, quarter to quarter, or year to year, is a commonly used approach for trend analysis. Remember to ensure that you are still comparing relevant and appropriate data samples. If a measure has been altered, it may be that the comparison is no longer valid.
  • Comparison between different business units and services allows the organization to ensure consistency across the enterprise as a whole.

Using measures and metrics is a powerful mechanism for the identification of improvements, and trend analysis can provide information to predict future performance.
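
A small sketch of comparison against a baseline over time, with invented monthly availability figures:

```python
baseline = 0.9950                                    # agreed availability baseline
monthly = [0.9961, 0.9955, 0.9949, 0.9940, 0.9938]   # oldest first

below = [m for m in monthly if m < baseline]
declining = all(later < earlier for earlier, later in zip(monthly, monthly[1:]))

print(f"{len(below)} of {len(monthly)} months below baseline")
print("Trend is declining" if declining else "Trend is mixed")
```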

It is also important to ensure that your measures take into consideration external factors, such as political influences or market forces outside of your organization.

Individual metrics and measures by themselves do not communicate very much from a strategic or tactical point of view. Some types of metrics and measures are often more activity based than volume based, but they have value from an operational perspective. They might include the services used and which customers are using those services. Understanding service usage both for the time of day and how often will help you understand the demand requirements, as will the medium through which it is accessed (for example, whether it’s internal, external, or web based). At a lower level of granularity, the performance and availability of components also provides useful information on the quality of the service delivered.

Each of these measures by themselves will provide some information that is important to IT staff (particularly the technical management staff), but it is the examination of all the measurements and metrics together that delivers the real value.

It is important for someone to take responsibility for looking at these measurements as a whole and to analyze trends and interpret the meaning of the metrics and measures.

Creating Scorecards and Reports

CSI should assume responsibility for ensuring that the quality of service required by the business is provided within the imposed cost constraints. CSI is also instrumental in determining if IT is still on course with the achievement of planned implementation targets and, if not, plotting course corrections to bring it back into alignment.

Using techniques such as the balanced scorecard and SWOT analysis, CSI can measure and report on the success of improvement actions.

Service measurement information is used for three main purposes: to report on the service to interested parties, for comparison against targets, and to identify improvement opportunities.

Reports must be appropriate and useful for all those who use them, and typically there are three distinct audiences for service management reports. The business will be interested in evidence that IT is focused on delivering services on time and within budget. IT management will have an interest in the tactical and strategic results that support the business, and the operational and technical IT managers will make use of the tactical and operational metrics that support their activities. The operational managers will also be interested in technology domain measurements such as component availability and performance.

Many organizations make the mistake of creating and distributing the same report to everyone, but a single report cannot provide value for every audience because each audience has different interests.

Creating Scorecards That Align to Strategies

Reports and scorecards should be linked to overall strategy and goals. Using a balanced scorecard approach is one way to manage this alignment. In Figure 46.7, there is an illustration of how the overall goals and objectives can be used to derive the measurements and metrics required to support the overall goals and objectives.

Diagram shows some questions associated with strategy, goals, and objectives and their financial, customer, internal, and innovation and learning perspectives.

Figure 46.7 Deriving measurements and metrics from goals and objectives

Copyright © AXELOS Limited 2010. All rights reserved. Material is reproduced under license from AXELOS.

The arrows point both ways because the strategy, goals, and objectives will drive the identification of required KPIs and measurements, but it is also important to remember that the measures are input into KPIs and the KPIs support the goals in the balanced scorecard.

It is important to select the right measures and targets to be able to answer the question of whether the goals are being achieved and the overall strategy supported.

Creating Reports

When creating reports, it is important to know their purpose and the details required as well as the time frame to which they relate. Reports can be used to provide information for a single month, or a comparison of the current month with other months to provide a trend for a certain time period. Reports can be used to show whether service levels are being met or breached.

Before starting the design of any report, it is important to understand some key facts, which should be incorporated into the design. One of the first items to consider is the target audience. Most senior managers don’t want a report that is 50 pages long. They like to have a short summary report and access to supporting details if they are interested.

Also important is understanding what the report will be used for because this will make a difference to the content and the way that it is presented and interpreted. Basic information about roles and responsibilities for the creation and production of the report and the frequency of reporting will also be part of the design. The audience will drive the information that is to be shared or exchanged. For example, if the report is going to be read by senior managers, then it should be short, informative, and usable without sacrificing readability.

The report format must meet the needs of the audience but be repeatable and aid the understanding of the data.

The majority of service management tools provide out-of-the-box functionality for reporting, including some standard reports providing basic content. One of the criteria for a good service management toolset is that it has not only this capability, but also the capability to produce customized reports to meet the individual customer’s requirements. This should include the ability to generate output in an acceptable media and format, such as web-based, automatically generated reports.

Reports are usually set up to show the results for a service, with supporting reports giving individual measurements on components, reports on the health of a service management process (using process KPI results), and functional reports, for example, telephony reports for the service desk.

Table 46.3 includes some examples of key performance indicators, but it is important to apply those that match your business and organizational requirements and strategy.

Table 46.3 Sample key performance indicators

Process/Function | KPI/Description | Type | Progress indicator
Incident management | Incidents resolved within target time | Value | Meets/exceeds target times.
Incident management | % of incidents closed—first call | Performance | Service desk only; target is 80%.
Service desk | Abandon rate | — | Service desk with automatic call distribution (ACD); goal is 5% or less (after 24 seconds).
Incident management | Count of incidents submitted by support group | Compliance | Consistency in the number of incidents—investigation is warranted for (1) a rapid increase, which may indicate a need for infrastructure investigation, and (2) a rapid decrease, which may indicate compliance issues.
Problem management | % of repeated problems over time | Quality | Problems that had been removed from the infrastructure and have reoccurred. Target is less than 1% over a 12-month rolling time frame.
Problem management | % root cause with permanent fix | Quality | Calculated from problem start date to permanent fix found; this may not include implementation of the permanent fix. Internal target is to fix 90% of problems within 40 days; external (third party/vendor) target is to fix 80% of problems within 30 days.
Problem management | % and number of incidents raised to problem management | Compliance | Sorted by infrastructure (internal and external) and development (internal and external).
Change management | % of RFCs successfully implemented without back-out or issues | Quality | Grouped by infrastructure/development.
Change management | % of RFCs that are emergencies | Performance | Sorted by infrastructure or development and by emergency quick fix (service down) or business requirement.
Service asset and configuration management | Number of configuration item (CI) additions or updates | Compliance | CI additions or updates broken down by group—configuration management database (CMDB) or change modules.
Service asset and configuration management | Number of records related to a CI | Performance | Number of associations grouped by process.
Release and deployment management | % of releases using exceptions | Value | Exceptions are criteria deemed mandatory—identify by groups.
Release and deployment management | % of releases bypassing process | Compliance | Identify groups bypassing the release process.
Capacity management | Action required | Value | Number of services that require action vs. total number of systems.
Capacity management | Capacity-related problems | Quality | Number of problems caused by capacity issues, sorted by group.

Copyright © AXELOS Limited 2010. All rights reserved. Material is reproduced under license from AXELOS.
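
To make the mechanics concrete, the following minimal Python sketch shows how one of these KPIs—the percentage of incidents resolved within target time—might be calculated from raw records. The field names and sample data are hypothetical assumptions, not taken from any particular toolset.

# Minimal sketch: calculating the "incidents resolved within target time" KPI.
# Field names and sample values are hypothetical.
incidents = [
    {"id": "INC001", "resolution_minutes": 45, "target_minutes": 60},
    {"id": "INC002", "resolution_minutes": 90, "target_minutes": 60},
    {"id": "INC003", "resolution_minutes": 30, "target_minutes": 60},
]

within_target = sum(
    1 for i in incidents if i["resolution_minutes"] <= i["target_minutes"]
)
kpi = 100.0 * within_target / len(incidents)
print(f"Resolved within target time: {kpi:.1f}%")  # 66.7% for this sample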

There are a wide variety of techniques used to measure IT and IT service effectiveness and efficiency, and they are often combined. CSI should be responsible for measurement of the quality of service (and corrections if the quality of service is below targets) while ensuring that IT is still operating within the financial constraints of the organization. CSI will measure progress, understand achievement against targets, and make corrections to improvements where required to remain on track.

It is tempting to allow reporting and measurement to become an end in itself because often the effort and analysis required for report generation requires a separate team of people within the department. CSI should ensure that the goal of reporting remains clearly focused on the progress toward the achievement of business goals and objectives and not simply chasing targets and producing statistics.

Setting Targets

The importance of setting the correct targets for your reports cannot be overstated. Targets set by management are quantified objectives to be attained. They express the aims of the service or process at any level and should provide the basis for the identification of problems and improvement opportunities.

Setting targets is as important as selecting the measures you will be using. Targets should be realistic but challenging, based on the SMART principles (specific, measurable, achievable, relevant, and time-bound), and they should be easily understandable for those attempting to achieve them. Remember that the choice of measures and their targets can affect the behavior of those who are carrying out the work that is being measured. That is why it is always important to take a balanced approach.

A target that requires service desk staff to answer calls quickly will potentially drive them to clear callers off the line too quickly without resolution and to the detriment of the quality of the call response. It is important to provide a balanced approach and not rely on one target. A variety of targets will produce a holistic approach and maintain quality as well as meeting time-based requirements.

Once a target has been agreed, you must measure to provide a baseline so that improvement toward the target can be measured. Initially, it may not be necessary to report on this until a good statistical reference has been built to show progress.

Balanced Scorecard

Kaplan and Norton documented the balanced scorecard technique in their Harvard Business Review paper published in 1992. It involves the definition and implementation of a measurement framework covering four different perspectives: customer, internal business, learning and growth, and financial. These four linked perspectives provide a balanced scorecard to support strategic activities and objectives and can be used to measure overall IT performance. It is complementary to ITIL.

The balanced scorecard shares some common themes with the ITIL framework.

It looks at the client perspective of IT as a service provider, which is primarily documented in SLAs. It considers internal processes and operational excellence utilizing incident management, problem management, change management, service asset and configuration management, and release and deployment management as well as other IT processes and the successful delivery of IT projects. By considering learning and growth, the scorecard reviews business productivity, flexibility of IT, investments in software, professional learning, and development. Finally, the financial scorecard ensures that IT is aligned with business objectives, manages costs, manages risks, and delivers value. Financial management for IT services is the process used to allocate costs and calculate return on investment.

In Figure 46.8, you can see an example of an IT balanced scorecard, and in each sector a different aspect is reviewed.

  • Customer: What do customers expect of IT provision?
  • Internal (processes): What must IT excel at?
  • Innovation (learning and growth): How does IT guarantee that the business will keep generating added value in the future?
  • Financial: What is the cost of IT?

Diagram shows an IT balanced scorecard with four sectors: financial, customer, innovation, and internal. Questions about IT provision and their answers are written in each sector.

Figure 46.8 IT balanced scorecard

Copyright © AXELOS Limited 2010. All rights reserved. Material is reproduced under license from AXELOS.

Cascading the Balanced Scorecard

Many organizations are structured around strategic business units (SBUs). Each business unit will focus on a specific group of products or services offered by the business. Once a balanced scorecard has been defined at the SBU level, it can be cascaded down through the organization.

Many organizations use scorecards in all departments, even at the board level, because for each strategic business-level measure and related target, business units or departments can define additional measures and targets that support the strategic goal and target. Action plans and resource allocation decisions can be made with reference to how they contribute to the strategic balanced scorecard.

The Balanced Scorecard and Measurement-Based Management

The balanced scorecard approach covers a number of important aspects, including customer-defined quality of service, continual improvement, employee empowerment, and measurement-based management and feedback.

The balanced scorecard is complementary to total quality management (TQM) and uses the same approach of feedback for internal business process outputs, but has an additional feedback loop for the outcomes of business strategies. This ensures a more complete approach to overall quality across the organization. To achieve this, metrics should be developed based on the priorities of the strategic plan. It is this plan that provides the key business drivers and criteria for metrics that managers most desire to watch. Services and processes can then be designed to collect information relevant to these metrics. Remember, metrics and measurements are part of the design of a service, as covered in the service design lifecycle stage.

Metrics are valuable because they provide a factual basis for defining feedback from some key areas. Strategic feedback shows the present status of the organization; information is gathered from many perspectives for decision-makers. Improvement requires input from diagnostics on a continuous basis, and performance trends can be tracked over time. It is always important to make sure the measures themselves are under continuous review because business requirements change and metrics should change to reflect this. Metrics should also be used to support forecasting methods, providing a quantitative input to the approach.

SWOT Analysis

SWOT stands for strengths, weaknesses, opportunities, and threats. This technique involves the review and analysis of four specific areas of an organization: the internal strengths and weaknesses and the external opportunities and threats.

The analysis allows action to be taken to exploit and capitalize on an organization’s strengths while reducing, minimizing, or removing any weaknesses that have been identified. The external focus should encourage active engagement with opportunities while managing, mitigating, and eliminating threats.

SWOT analysis is a technique that can be applied quickly to a specific area of the business. It does not have to have an overall focus.

Purpose

SWOT analysis is a strategic planning tool. It is used to evaluate the strengths, weaknesses, opportunities, and threats associated with a project, business venture, or any other situation that requires decision-making.

How to Use SWOT Analysis

When SWOT analysis is used, the most important factor is to define the desired end state or objective. All the participants in the process must agree on the objective, and it must be clear and explicit. Without this clarity, the analysis will not be effective, because each step of the analysis uses the objective to define what is helpful or harmful. Because the subsequent actions will be driven by the results of the analysis, it is important to ensure that the objective is “SMART” (specific, measurable, achievable, relevant, and time-bound).

If we consider each step in turn, the analysis becomes clear as we define the action taking place. Strengths are the internal attributes that will be helpful in achieving the agreed objective, whereas weaknesses are internal attributes that will be harmful. The analysis then looks at external factors. Opportunities are those factors that will be helpful in achieving the agreed objective, and threats are external factors that will be harmful.

An accurate SWOT analysis is a useful planning tool, helping to answer questions such as, How can we use our strengths to our advantage? and How can we address or eliminate our weaknesses? Having a clear understanding of external factors is important; even if we are unable to change them, we may be able to adapt our practices to exploit them or mitigate their impact.

Scope, Reach, and Range

A SWOT analysis can be performed at various levels throughout the organization. It can take place at an individual, departmental, divisional, or even corporate level. It is important to consolidate the results of the analyses from the bottom up, so that each level of the hierarchy is completed before the next is attempted.

For example, if the individual members of a functional team each perform a SWOT analysis to capture their individual perspective, the next SWOT analysis should be based on the team. Then multiple teams can perform the analysis until eventually the departmental level is reached. This would continue up to the corporate level.

It is also possible to conduct a SWOT analysis for a service or a process.

Common Pitfalls of a SWOT Analysis

It is important to align the SWOT analysis with the business vision, mission, goals, and objectives. If the end state of the analysis is not properly identified at the start, it may result in wasted resources and potentially failure. There are a number of common errors that can take place when carrying out a SWOT analysis.

One of the most common is conducting a SWOT analysis before defining and agreeing on the end state. Another common error is to confuse the external opportunities with the strengths in the internal organization. It is important to keep them separate. A further mistake is to confuse opportunities with possible strategies. SWOT is a description of conditions, while possible strategies define actions.

Creating a Return on Investment

When creating a return on investment (ROI) case, many factors need to be taken into consideration, one of which is the investment cost. This is the money an organization pays to improve services and service management processes, including, for example, internal resource costs, tool costs, and consulting costs. These costs are usually easy to establish.

Another factor is what an organization can gain in return. Returns are often hard to define or quantify. Here are a few of the things that have to be considered when creating a return on investment (in addition to the cost of not implementing the improvement):

  • The cost of downtime, including the loss of productivity and loss of revenue
  • The cost of rework or redundant work, project work, and delayed implementation
  • The cost of the operating environment, escalation of incidents, and hourly costs

It is important to understand the ROI the business will receive as a result of the improvements. Measuring availability is often a good way to understand the cost of lost productivity, the cost of not being able to complete a business transaction, or the true cost of downtime.

There are different approaches to measuring and reporting on availability. You can apply an analysis of the impact by minutes lost, which is a calculation of the duration of downtime multiplied by the number of customers impacted; this can be used to report on lost customer productivity. Then there is the impact by business transaction, which is a calculation based on the number of business transactions that could not be processed during the downtime; this measurement provides a better indication of business impact. Combining the two can provide an agreed figure for the true cost of downtime.
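
As a minimal sketch of combining these two calculations, using purely hypothetical figures for duration, customers impacted, transaction rates, and costs:

# Minimal sketch of the two downtime measures described above.
# All figures are hypothetical assumptions for illustration.
downtime_minutes = 30             # duration of the outage
customers_impacted = 200          # users unable to work
transactions_per_minute = 12      # business transactions normally processed
revenue_per_transaction = 50.0    # average revenue per transaction
cost_per_customer_minute = 0.75   # loaded labor cost per lost customer-minute

# Impact by minutes lost: downtime duration multiplied by customers impacted.
customer_minutes_lost = downtime_minutes * customers_impacted
productivity_cost = customer_minutes_lost * cost_per_customer_minute

# Impact by business transaction: transactions that could not be processed.
transactions_lost = downtime_minutes * transactions_per_minute
revenue_cost = transactions_lost * revenue_per_transaction

print(f"Lost customer-minutes: {customer_minutes_lost}")
print(f"Estimated productivity cost: {productivity_cost:.2f}")
print(f"Estimated lost revenue: {revenue_cost:.2f}")
print(f"Indicative true cost of downtime: {productivity_cost + revenue_cost:.2f}")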

Other areas of warranty—such as security, recoverability, and ensuring that there is sufficient capacity—also have to be taken into account.

For example, an insurance company not being able to write policies can easily translate to lost revenue. Internet companies providing goods and services online are also good examples for easily demonstrating lost revenue.

Establishing a Business Case

A business case needs to identify the reason for undertaking a service or a process initiative, including the specification of the data and evidence that needs to be provided to prove the costs and expected benefits.

It is important to remember that process redesign activities are complex and may be more costly than assumed. The same can be said for the impact of organizational change, and with the introduction of organizational and process change comes the potential requirement to improve competencies and tools, adding further cost to the improvement.

It is important not to limit the business case to return on investment but to include the value that a service improvement will bring to the organization. Value on investment can be measured as the improvement is implemented, whereas return on investment can really only be demonstrated once the implementation of the improvement has concluded. Working collaboratively with the business, it should be possible to identify the value the implementation brings to the business. Examples of business value measures include the time to market, customer retention, and the increase in organizational market share.

IT can demonstrate its contribution through gains in agility, managing and enhancing knowledge, and a reduction in costs and risk. IT should begin by defining the types of business values that each improvement will contribute.

Business Cases in a Data-Poor Environment

It is often the case that an organization that intends to carry out service improvements is doing so in a situation where the lack of process means that there is no evidence to prove the expected benefits, value on investment, or return on investment.

There is an approach that circumvents this situation by gaining approval to establish basic measurement capabilities as a means of gathering consistent data for future analysis. This may be as simple as ensuring that all IT staff record data in a consistent fashion or start measuring activities or outcomes that are not currently captured. After an agreed period of data capture, some evidence will exist to support (or perhaps not support) a process improvement initiative.

Another approach is to undertake a process maturity assessment of current processes, but this activity will identify only the absence of process and/or data. A process maturity assessment will not in itself provide the data to justify how much to spend on improving a process. Often, therefore, both approaches are used, so that consistent data capture is combined with an understanding of how well processes are being followed; from this, measures of value can be established.

It is important that once the decision to start capturing and reporting on data is made, an initial baseline is created so improvements can be measured against it.

Measuring Benefits Achieved

In the business case, we can identify estimated benefits, but eventually we need to measure achievements. These measurements show whether the improvement activity achieved the intended outcomes, and they should also confirm whether the envisaged improvements were realized by measuring the benefits arising from them. It is also important to demonstrate that the target return on investment (ROI) and the intended value on investment (VOI) were actually achieved.

Continual service improvement is cyclic, and the outcomes of measurements will lead to improvement actions being reevaluated and to further improvements being identified.

It is important to ensure that enough time has passed before measuring the benefits. Some benefits will not be immediately apparent, and it is likely that benefits will continue to change over time as ongoing costs and ongoing benefits continue to change.

A further consideration in the measurement of benefits is that data quality and measurement precision pre- and postimprovement could be different. This may invalidate direct comparison, so there may be a requirement for the data to be normalized before validating benefits.

Service Reporting

Reporting should cover the purpose of the report, the intended audience, and its use. Once the data has been collected and analyzed, presentation is critical.

It is often the case that the majority of daily reports produced and delivered to the business are not used. They are much more appropriate for use by the internal IT team. Consider carefully the reports delivered to the business; trends and actions for improvement may be of more interest. Always ensure that reports are meaningful and appropriate for their audience.

Report content should be informative. It may be that the business will be more interested in a structure that reports on the actions taken. Reports on adherence to SLAs can be open to interpretation, whereas reports on future actions as well as the past will enable IT to promote its solutions to the issues the business may have experienced.

Reporting Policy and Rules

It is important to agree with the business on the policies and rules for the reports. Gain agreement in advance regarding the potential target audience, and then agree on what information the business requires about the service. During the design stage of the lifecycle, agreement should be sought on what is going to be measured and reported, including definitions of terms and boundaries. It is important to provide clarity on the mathematics used for all the calculations that are used. The scheduling and delivery mechanism for the reports and who will have access to them should be stipulated, as well as the attendees for the review and discussion meetings. It is good practice to agree on these elements in the design stage, but we should also remain flexible for alterations in the operations stage of the lifecycle.

Right Content for the Right Audience

It is important to ensure that the right content is provided for the right audience. It is good practice to apply the right policies for each target group because the needs of one customer may not be the same as another.

Once the framework, policies, and rules are in place, automating suitably styled reports is a task of translating flat historical data into meaningful business views. These will need to be annotated around the key questions, threats, mitigations, and improvements that have been identified in the report. Reports can then be presented via the medium of choice—for example, paper-based hard copies, online soft copies, web-enabled dynamic HTML, current snapshot whiteboards, and real-time portal/dashboards.

Simple, effective, customizable, and automated reporting is vital to sustain an ongoing reporting schedule that satisfies the business.

It is also important to recognize that the initial schedule and content for reports may change over time as the business needs change. The end result should be targeted reporting that is clear, unambiguous, and relevant, delivered to the correct recipient in a medium and manner that promotes accessibility and use.

CSI and Other Service Management Processes

The CSI process makes wide use of methods and practices found in many of the other processes throughout the lifecycle of a service. This means that the outputs of those processes, in the form of flows, matrices, statistics, and analysis reports, provide valuable information about the service’s design and operation. This information, combined with new business requirements, technology specifications, IT capabilities, budgets, trends, and possibly legislation, is of great importance to CSI in determining what needs to be improved, prioritizing it, and suggesting improvements if required.

Availability Management

Availability management’s methods are part of the measuring process—gathering, processing, and analyzing activities. When the information is provided to CSI in the form of a report or a presentation, it becomes part of CSI’s gathering activity.

When used by availability management, this activity provides IT with the business and user perspective about how failures and faults in the infrastructure and underpinning process and procedures impact the business operation. The use of business-driven metrics can demonstrate this impact in real terms and help quantify the benefits of improvement opportunities.

Component Failure Impact Analysis

Component failure impact analysis (CFIA) is the analysis of the impact to the business if a component fails. It identifies single points of failure, IT services at risk from failure of various configuration items (CIs), and the alternatives that are available should a CI fail. It should also be used to assess the existence and efficacy of recovery procedures for the selected CIs. The same approach can be used for a single IT service by mapping the component CIs against the vital business functions and users supported by each component.

When a single point of failure is identified, the information is provided to CSI. This information, combined with business requirements, enables CSI to make recommendations on how to address the failure.

Fault Tree Analysis

Fault tree analysis (FTA) is a technique that is used to determine the chain of events that cause a disruption of IT services. Using this technique, it is possible to construct detailed models of availability. It makes a representation of a chain of events and distinguishes between four types of events: basic events, resulting events, conditional events, and trigger events. Using Boolean algebra and notation (AND/OR statements), it is possible to indicate which part of the infrastructure, process, or service was responsible for the service disruptions. This information, combined with business requirements, enables CSI to make recommendations about how to address the fault.
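
The Boolean logic behind a fault tree can be illustrated with a minimal sketch; the components and tree structure below are hypothetical, not a prescribed model.

# Minimal fault tree sketch using Boolean AND/OR logic.
# Component names and the tree structure are hypothetical.
# Basic events: True means the component has failed.
failures = {"disk_a": False, "disk_b": True, "network": False, "power": False}

def disk_array_down(f):
    # Mirrored disks: the array fails only if both disks fail (AND gate).
    return f["disk_a"] and f["disk_b"]

def service_down(f):
    # The service fails if the disk array, the network, or power fails (OR gate).
    return disk_array_down(f) or f["network"] or f["power"]

# One mirrored disk failing does not disrupt the service.
print(service_down(failures))  # False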

Service Failure Analysis

Service failure analysis (SFA) is a technique designed to provide a structured approach to identify end-to-end availability improvement opportunities and deliver benefits to the user. Many of the activities involved in SFA are closely aligned with those of problem management. SFA should take an end-to-end view of the service requirements. It is therefore important to attempt to identify improvement opportunities that benefit the end user.

CSI and SFA work hand in hand because SFA identifies the business impact of an outage on a service, system, or process. This information, combined with business requirements, enables CSI to make recommendations about how to address improvement opportunities.

Technical Observation

A technical observation (TO) is a prearranged gathering of specialist technical support staff from within IT support. The TO’s purpose is to monitor events as they occur, with the specific aim of identifying improvement opportunities within the current IT infrastructure. The TO is best suited to delivering proactive business and end-user benefits from within the real-time IT environment. The TO gathers, processes, and analyzes information about the situation. If the TO is included as part of the launch of a new service, system, or process, for example, a lot of the issues inherent to any new component will be identified and dealt with more quickly.

The Expanded Incident Lifecycle

The expanded incident lifecycle, as shown in Figure 46.9, provides a technique to help with the technical analysis of incidents affecting the availability of components and IT services.

Diagram shows a timeline which is divided into alternate up and down times. Incident starts at the end of uptime. Service is available during uptime and unavailable during downtime.

Figure 46.9 The expanded incident lifecycle

Copyright © AXELOS Limited 2010. All rights reserved. Material is reproduced under license from AXELOS.

It is made up of two parts: time to restore service (also known as downtime) and time between failures (also known as uptime). There is a diagnosis part to the incident lifecycle as well as repair, restoration, and recovery of the service.

Using the other techniques in availability management, it is possible to review each element of the incident management process and apply continual service improvement activity to address issues in incident management. Management of the infrastructure often relies on information from the analysis of mean time between failures (MTBF), mean time between system incidents (MTBSI), and mean time to restore service (MTRS).
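
As a minimal sketch, these three metrics can be derived from a simple record of alternating uptime and downtime periods; the durations below are hypothetical.

# Minimal sketch: deriving MTRS, MTBF, and MTBSI from alternating
# uptime/downtime periods, in hours. Sample durations are hypothetical.
uptimes = [100.0, 80.0, 120.0]   # hours of availability before each failure
downtimes = [2.0, 4.0, 3.0]      # hours taken to restore service each time

incidents = len(downtimes)
mtrs = sum(downtimes) / incidents   # mean time to restore service (downtime)
mtbf = sum(uptimes) / incidents     # mean time between failures (uptime)
mtbsi = mtbf + mtrs                 # mean time between system incidents

print(f"MTRS: {mtrs:.1f} h, MTBF: {mtbf:.1f} h, MTBSI: {mtbsi:.1f} h")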

Capacity Management

The capacity management process must be responsive to changing business requirements for processing capacity. New services are required, and existing services will require modification to provide extra functionality. Old services will become obsolete, freeing up capacity. Capacity management must ensure that sufficient hardware, software, and personnel resources are in place to support existing and future business capacity and performance requirements.

Similar to the availability management process, the capacity management process should play an important role in helping the IT support organization recognize where it can add value by exploiting its technical skills and competencies in a capacity context. Capacity management should use the continual improvement technique and apply this to technical capability. It is possible to do this either in small groups of technical staff or in a wider group within a workshop environment.

The information generated by the capacity management process should be made available to CSI through the capacity management information system (CMIS), a database that should form part of the service knowledge management system.

As you will remember from your Foundation course, capacity management has three subprocesses: business, service, and component capacity management.

Business Capacity Management

First we will look at business capacity management. A prime objective of the business capacity management subprocess is to ensure that future business requirements for IT services are considered and understood and that sufficient capacity to support the services is planned and implemented in an appropriate timescale.

New service level requirements from the business will drive new capacity requirements, as will improvements and requirements identified through its own investigation and analysis.

The information gathered in this subprocess allows CSI to answer the question, What do we need?

Service Capacity Management

A prime objective of the service capacity management subprocess is to identify and understand the IT services, their use of resources, working patterns, and peaks and troughs. In addition, the subprocess should ensure that the services can and do meet their SLA targets. This is another process, like availability management, that is concerned with end-to-end service provision and performance.

In this subprocess, the focus is on managing service performance as determined by the targets contained in the SLAs or SLRs.

The key to successful service capacity management is to preempt difficulties wherever possible. The information gathered here enables CSI to answer the question, What do we need?

Component Capacity Management

The component capacity management subprocess’s prime objective is to identify and understand the capacity and utilization of each of the components of the IT infrastructure. This is where the technical management expertise will be utilized. This ensures the optimum use of the current hardware and software resources in order to achieve and maintain the agreed service levels.

As in service capacity management, the key to successful component capacity management is to preempt difficulties wherever possible.

It is important to understand how the three subprocesses tie together. Let’s look at the example in Figure 46.10 and the requirements in Table 46.4.

Image described by surrounding text.

Figure 46.10 Connecting business and service capacity management

Copyright © AXELOS Limited 2010. All rights reserved. Material is reproduced under license from AXELOS.

Table 46.4 Departmental requirements

Requirement | Marketing | Sales | Finance
Employees | 15 | 40 | 5
Number of emails per day | 100 | 200 | 50
Size of attachment | 10 MB | 5 MB | 10 MB
Frequency of large attachment | Infrequent | Very frequent (contracts) | Often
Requires remote access | No | Yes | Yes
Requires handheld computer | No | Yes | No

Copyright © AXELOS Limited 2010. All rights reserved. Material is reproduced under license from AXELOS.

There are three services (A, B, and C) and three departments (Marketing, Sales, and Finance). Service A is used by all three departments. Service B is used only by Marketing and Sales. Service C is used only by Finance.

Each of the subprocesses will have a part to play in understanding the management of capacity in the organization. Changes in business focus should be communicated through business capacity management. However, capacity planning is often carried out a year in advance, and it is very difficult to be accurate this far ahead, so it is important for business capacity management to be kept constantly informed of any changes by the organization. If marketing provides business plans for an increase in sales, this will have an impact on all three departments, and any of the services they use will potentially be affected: marketing services will be required to take on greater capacity, sales services will have a higher throughput, and so will the finance services as more sales go through.

It is also important for the activities in service capacity management to keep up with changes in service level requirements in real time, and these activities can be used to support the forecasting in business capacity management for the marketing department’s predicted increase in sales.

Component capacity management is highly technical, but the reports and information it produces are used by service capacity management to continue to monitor the capacity capability of the service as a whole. Each of the subprocesses is important in managing the organizational requirements.
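
To make the worked example more tangible, here is a minimal sketch that turns the figures from Table 46.4 into a worst-case daily email volume, deliberately simplifying by assuming every message carries an attachment of the stated size.

# Minimal sketch: worst-case daily email capacity estimate from Table 46.4,
# assuming (a simplification) that every message carries the stated attachment.
departments = {
    "Marketing": {"emails_per_day": 100, "attachment_mb": 10},
    "Sales":     {"emails_per_day": 200, "attachment_mb": 5},
    "Finance":   {"emails_per_day": 50,  "attachment_mb": 10},
}

total_mb = 0
for name, d in departments.items():
    dept_mb = d["emails_per_day"] * d["attachment_mb"]
    total_mb += dept_mb
    print(f"{name}: {dept_mb} MB/day")

print(f"Worst-case total: {total_mb} MB/day")  # 2500 MB/day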

Workload and Demand Management

There are many different ways to influence customer behavior. Charging for services is an obvious option, but it is not always effective. People may still need to use the service and will use it regardless of the price.

Usage policies for the service are another way to influence customer behavior, by placing restrictions or limits on the service. You are probably familiar with restrictions such as the amount of space allocated for email storage. Be careful, because such policies may produce a negative effect; regular reviews can be used to make sure the influencing mechanism is still having a positive effect.

For example, if an organization chooses to charge for every contact to the service desk, this could create negative behavior in that end users no longer call or email the service desk and instead call second-level support directly or turn to peer-to-peer support, which ultimately makes the cost of support go up, not down. However, if the goal is to move end users to a new self-service web-based knowledge system, then with a proper communication and education plan on using the new self-service system, this could be a positive influencing experience.

Business requirements change over time, and CSI should continue to review policies to ensure that they are still appropriate for business needs.

Capacity management uses a number of techniques to support the process.

Trend Analysis

Trend analysis of the data captured in the service and component capacity management subprocesses will provide valuable information for predicting and forecasting the capacity of services and components in the future. Problem management also uses trend analysis as a technique, but there it is applied to the historical view of incident data to identify improvements.
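
A minimal sketch of such a projection, fitting a straight line to monthly utilization samples, follows; the figures are hypothetical.

# Minimal sketch: linear trend projection over monthly utilization samples.
# Figures are hypothetical; real data would come from the CMIS.
months = [1, 2, 3, 4, 5, 6]
cpu_utilization = [52.0, 55.0, 57.5, 61.0, 63.5, 66.0]  # percent

# Least-squares slope and intercept, computed without external libraries.
n = len(months)
mean_x = sum(months) / n
mean_y = sum(cpu_utilization) / n
num = sum((x - mean_x) * (y - mean_y) for x, y in zip(months, cpu_utilization))
den = sum((x - mean_x) ** 2 for x in months)
slope = num / den
intercept = mean_y - slope * mean_x

forecast_month = 12
forecast = intercept + slope * forecast_month
print(f"Forecast utilization for month {forecast_month}: {forecast:.1f}%")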

Modeling

There is a wide range of modeling techniques applied in capacity management. Modeling is used to provide information on the “what if” scenario and is a useful tool for future prediction and forecasting based on potential situations.

Analytical models are representations of a computer system’s behavior based on mathematical algorithms, for example, network queuing theory. Comparison to the actual performance is necessary to verify that the model is effective, and then variables can be changed to use the model for prediction.
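
For illustration, here is a minimal sketch of one such analytical model, an M/M/1 queue from queuing theory; the arrival and service rates are hypothetical.

# Minimal sketch of an analytical model: a single-server M/M/1 queue.
# Arrival and service rates are hypothetical.
arrival_rate = 8.0   # transactions per second offered to the system (lambda)
service_rate = 10.0  # transactions per second the server can process (mu)

utilization = arrival_rate / service_rate                  # rho
mean_response_time = 1.0 / (service_rate - arrival_rate)   # valid while rho < 1

print(f"Utilization: {utilization:.0%}")
print(f"Mean response time: {mean_response_time:.2f} s")

# Changing the variables turns the model into a predictor: at an arrival
# rate of 9.5 transactions per second, mean response time rises to 2 s.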

Simulation involves the modeling of discrete events, such as transaction arrival rates, against a given hardware configuration. This type of modeling can be very accurate in sizing new applications or predicting the effects of changes on existing applications. It can also be very time consuming and therefore costly.

Improvements are gradual and incremental by nature. The first stage in modeling is to create a baseline model that accurately reflects the performance that is being achieved. When this baseline model is created, predictive modeling can be done. If the baseline model is accurate, then the accuracy of the result of the predicted changes can be trusted.

IT Service Continuity Management

IT service continuity management (ITSCM) allows an organization to manage IT risks through a process of identifying what is important to the stakeholder, mitigating against the risks the organization chooses to take, and ensuring that the business processes will continue to operate throughout a disruption. CSI reviews the requirements of the organization, and improvements should be managed through the change process to ensure that they are reflected in the continuity plans.

Business continuity management is concerned with the management of risks at an organizational level, which are then supported by the ITSCM plans. The BCM process involves reducing the risk to an acceptable level and planning for the recovery of business processes should a risk materialize and a disruption to the business occur.

Risk Management

Although not an ITIL-defined IT service management process, risk management is part of many IT service management processes.

Every organization manages its risk, but not always in a way that is visible, repeatable, and consistently applied to support decision-making. The task of risk management is to ensure that the organization makes cost-effective use of a risk process that has a series of well-defined steps. The aim is to support better decision-making through a good understanding of risks and their likely impact.

There are two distinct phases, risk analysis and risk management:

  • Risk analysis is concerned with gathering information about exposure to risk so that the organization can make appropriate decisions and manage risk appropriately. Risk analysis involves the identification and assessment of the level (measure) of the risks, calculated from the assessed values of assets and the assessed levels of threats to, and vulnerabilities of, those assets (a simple scoring sketch follows this list).
  • Risk management involves having processes in place to monitor risks, access to reliable and up-to-date information about risks, the right balance of control to deal with those risks, and decision-making processes supported by a framework of risk analysis and evaluation. Risk management also involves the identification, selection, and adoption of countermeasures justified by the identified risks to assets, in terms of their potential impact upon services if failure occurs, and the reduction of those risks to an acceptable level.
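
A minimal risk-scoring sketch, assuming a simple 1–5 scale for asset value, threat level, and vulnerability level; the risks and values are hypothetical.

# Minimal sketch: a simple multiplicative risk score derived from assessed
# asset value, threat level, and vulnerability level (all hypothetical,
# on a 1-5 scale).
risks = [
    ("Data center power failure", 5, 2, 3),
    ("Unpatched web server exploited", 4, 4, 4),
    ("Key staff member leaves", 3, 3, 2),
]

for name, asset_value, threat, vulnerability in risks:
    score = asset_value * threat * vulnerability
    print(f"{name}: risk score {score}")

# Ranking the scores helps justify countermeasures against the
# highest-level risks first.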

A certain amount of risk taking is inevitable if an organization is to achieve its objectives. Effective management of risk helps to improve service performance by contributing to increased certainty and fewer surprises. It should also support better service delivery and more effective management of change. More efficient use of resources, enabled by better management at all levels and improved decision-making, should be a further effect of risk management. Organizational risk management should also reduce waste and fraud and deliver better value for money. By managing the risks of innovation, as well as contingent and maintenance activities, risk management supports continual service improvement.

Problem Management

CSI and problem management are closely related because one of the goals of problem management is to identify and permanently remove errors that impact services from the infrastructure. This directly supports CSI activities of identifying and implementing service improvements.

Problem management also supports improvement activities through trend analysis and the targeting of preventive action. Although problem management activities are generally conducted within the scope of service operation, CSI takes an active role in the proactive aspects of problem management because it is here that the process is used to identify and recommend changes that will result in service improvements.

Knowledge Management

Knowledge management is a key support for CSI because capturing, organizing, assessing for quality, and using knowledge plays a large part in CSI activities. An organization has to gather knowledge and analyze what the results are in order to look for trends in service level achievements and the results and output of service management processes. This knowledge may be used to identify improvement opportunities for inclusion in the CSI register, which will then be reviewed and prioritized. It will also be used for contributing to service improvement plans and initiatives.

Knowledge management is constantly changing, in line with the technological advances in IT. The rate of change in the IT industry has opened up new opportunities for knowledge sharing and collaboration, not least of which is the extensive use of the Internet by corporations and end users. Staff turnover requires that corporate knowledge is captured centrally rather than being dependent on the individual because it is more common for individuals to change companies throughout their career.

Knowledge Management Concepts

Effective knowledge management enables a company to optimize the benefits of CSI by enhancing the organization’s effectiveness through better decision-making enabled by having the right information at the right time. Knowledge management is key to facilitating learning through the exchange and development of ideas and individuals.

It supports the customer-supplier relationship because information and services are shared, expanding capabilities through collaborative efforts. In addition, it will improve business processes through sharing lessons learned, results, and best practices across the organization.

Knowledge management is key to the overall viability of an organization, from capturing the competitive advantage in an industry to decreasing cycle time and cost of an IT implementation. The approach to cultivating knowledge depends heavily on the makeup of the existing knowledge base and knowledge management norms for cultural interaction.

The identification of knowledge gaps and the resulting sharing and development of that knowledge must be built into CSI throughout the IT lifecycle. Throughout a CSI initiative, a lot of experience and information is acquired. It is important that this knowledge be gathered, organized, and accessible. To ensure the ongoing success of the program, knowledge management techniques must be applied.

Summary

In this chapter, we reviewed the methods and techniques used by CSI in the management of improvements.

We began by exploring the use of assessments and gap analysis to assist the start of improvements, to identify weak areas, and to demonstrate success. Benchmarking supports this analysis and enables the capture of a baseline for the improvement.

All improvements will need to be measured to prove they have been successful, so this chapter also explored the use of service measurement, metrics, and a balanced scorecard. We also considered the use of SWOT (strength, weakness, opportunity, and threat) analysis and how it can be used for identification of improvement.

Measurements must be shared with the appropriate audience, and in this chapter, we also looked at the use of service reports and demonstrating return on investment.

Finally, we explored the way in which other service management processes support the activities of CSI. These include availability management, capacity management, IT service continuity management, problem management, and knowledge management.

Exam Essentials

Understand the methods and techniques of CSI. Be familiar with the methods involved in the practice of continual service improvement.

Be able to explain and expand on the importance of assessment in CSI. It is important to understand the importance of assessment for CSI and the methods of assessment that can be used.

Understand and expand on the use of gap analysis for CSI. You should be able to explain the use of gap analysis in the lifecycle stage of CSI and how it supports the delivery of improvement.

Be able to explain the importance of benchmarking in CSI. Benchmarking is a vital part of any improvement and should be carried out regularly as part of an improvement program.

Understand the use of measurement within CSI. This includes the use of service measurement and metrics in demonstrating success and tracking improvement. You should be able to identify the appropriate techniques and metrics for a given situation.

Know how to use a balanced scorecard. Although a balanced scorecard does not originate as part of the ITIL framework, its use is complementary to and supportive of the CSI approach.

Be able to explain the importance of SWOT analysis in CSI. SWOT analysis identifies the strengths, weaknesses, opportunities, and threats in an organization. You should be able to explain the use of SWOT analysis as part of CSI.

Understand and explain the use of service reports and ROI to support CSI. To demonstrate the success of CSI, there must be measurement and suitable reporting. This includes justifying the return on investment in the improvement.

Understand and explain the support of other ITIL processes to CSI. Understand and explain the use of other processes (availability, capacity, continuity, problem, and knowledge management) in the support of CSI processes.

Review Questions

You can find the answers to the review questions in the appendix.

  1. What is the purpose of an assessment?

    1. To establish potential shortcomings
    2. To provide a comparison point for benchmarking
      1. Statement 1 only
      2. Statement 2 only
      3. Both
      4. Neither
  2. Assessments require resources, but which of these is NOT an essential required resource?

    1. Real costs
    2. Staff time
    3. Management engagement
    4. Assessment tools
  3. Maturity assessment frameworks evaluate which of the following elements?

    1. People
    2. Services
    3. Process
    4. Technology
      1. 1, 3, 4
      2. 2, 4
      3. 1, 3
      4. 3, 4
  4. What is the commonly used acronym for the Deming Cycle?

    1. DCAP
    2. PDCA
    3. ACDP
    4. PADC
  5. What is one of the purposes of benchmarking?

    1. Evaluate SLA performance in relation to SLRs
    2. Evaluate contract targets against SLA targets
    3. Evaluate processes in relation to best practice
    4. Evaluate change requests against expected outcome
  6. Which of these are survey techniques that can help identify problem areas for improvement action?

    1. Informal conversations with customers, employees, suppliers
    2. Focus groups
    3. Automated monitoring
    4. Questionnaires
    5. Process mapping
    6. Quality control of variance reports
      1. 1, 2, 4, and 6
      2. 1, 3, 4, and 6
      3. 1, 2, 3, 4, and 5
      4. 1, 2, 3, 4, 5, and 6
  7. Which of these internal and external personnel may be involved in benchmarking?

    • Internal organization
    • The customer
    • The user or consumer
    • Internal service provider
    • External partners
    • External service providers
    • Direct IT users (members of the public)
    • Benchmarking partners
      1. Only internal organizational personnel
      2. Only external partners
      3. Both internal and external
      4. Neither internal nor external
  8. Which of these are basic measures used in service measurement?

    1. Availability
    2. Reliability
    3. Performance
    4. Security
      1. 1, 2, 4
      2. 2, 3, 4
      3. 1, 3, 4
      4. 1, 2, 3
  9. Service targets may be driven by which of the following?

    1. Business requirements
    2. Regulatory requirements
    3. New policies
    4. Service level agreements
      1. 1, 2, 3
      2. 2, 3, 4
      3. 1, 2, 3, 4
      4. 1, 2, 3
  10. Which of the following statements about the acronym SWOT is/are correct?

    1. S refers to external attributes that are helpful for achieving objectives.
    2. W refers to external attributes that are harmful for achieving objectives.
    3. O refers to external conditions that are helpful for achieving objectives.
    4. T refers to external conditions that are harmful for achieving objectives.
      1. None are correct.
      2. 1 only is correct.
      3. 1 and 2 are correct.
      4. 3 and 4 are correct.