13 MEASURING THE PERFORMANCE OF THE BA SERVICE

INTRODUCTION

How do we know if we are delivering a good service? Service improvement is only possible if performance levels are monitored and understood. Performance can be evaluated in a variety of ways such as financial effectiveness and customer satisfaction. There is a range of areas that can be used to quantify the performance of a BA Service.

This chapter covers key topics relating to the measurement of BA Service performance, including:

benefits and drivers for measurements (why measure);

understanding service priorities (what to measure);

sources and types of measurement (how to measure).

THE IMPORTANCE OF METRICS AND MEASUREMENT

There are two main reasons for creating meaningful metrics and measures for the BA Service. The first is to drive development and improvement; measurement is the only way to gauge performance and turn strategy into reality (see Figure 13.1). The second is that measurement provides the means to evidence the value offered by the BA Service.

This chapter refers to both metrics and measurements, terms that are sometimes used interchangeably but which actually refer to different levels of performance monitoring.

A measurement is a fundamental, unit-specific value: something that can be counted, timed or otherwise assessed. Tracking measurements over time may reveal trends, but a single point-in-time measurement has no context and it is very difficult to draw conclusions from it.

A metric is a standard for measurement, often derived or calculated from one or more measurements. Metrics often require a baseline measurement to provide context and the opportunity to understand improvement or effectiveness. For example, ‘ten peer reviews completed this month’ is a measurement; ‘the proportion of deliverables peer reviewed, compared with last quarter’s baseline’ is a metric.

The benefits of measurement

Without appropriate measurement, the BA Service has no way of evidencing the value the service offers or the progress being made towards implementing the agreed strategy. This is a precarious position to be in when most organisations expect every function to demonstrate efficiency and effectiveness and ‘to do more with less’.

Figure 13.1 Turning strategy into reality


Robert Behn (2003) has researched and defined the purposes that managers have for measuring performance, and the questions that can be addressed through measurement; these are shown in Table 13.1.

Table 13.1 The purposes for measuring performance (Reproduced with permission from Wiley)

Each purpose is listed with the questions managers can answer through measurement.

Evaluate: How well is my team performing? Are we meeting targets/objectives?

Control: How can I ensure that my team members are doing the right thing?

Budget: On which programmes, projects or people should my organisation spend money?

Motivate: How can I motivate the team, collaborators and stakeholders to do the things necessary to improve performance?

Promote: How can I convince senior management, stakeholders and customers that my team is doing a good job? How can we demonstrate that we are progressing towards or meeting strategic objectives?

Celebrate: What accomplishments are worthy of the important organisational ritual of celebrating success?

Learn: Why is this working or not working?

Improve: What exactly should we do differently to improve performance?

It is critical to select the right metrics and find efficient ways to measure them. This allows the BA Service to:

deliver against strategy;

make informed decisions;

establish the case for change/investment;

evidence improvement;

allow objective assessment;

celebrate individual and service success;

allow meaningful comparison;

inform future strategy.

Common challenges

Setting targets and tracking against them can seem like an unnecessary overhead, instead of an integral part of delivering an effective and efficient BA Service. This may be due to:

limited data/data quality – ‘not the full picture’;

monitoring the wrong things – ‘doesn’t tell me anything’;

monitoring too many things – ‘not worth the effort’;

information not being timely – ‘too late to influence anything’.

In addition, business analysts who have never been subject to any monitoring of their work may be suspicious of, or resistant to, the introduction of metrics and measures. This can affect the ability to obtain the required information, or its veracity.

Open monitoring can also have unintended consequences or drive undesirable behaviours. For example, counting ‘how many’ analysis products are generated in a certain time frame, or ‘how long’ it takes to generate a particular deliverable will put the focus on time rather than quality and may drive business analysts to avoid seeking peer review in an effort to ‘save time’.

To overcome these challenges, start by focusing on a small number of metrics that can be accurately measured in a timely way. Develop the set of metrics and measures over time and encourage business analysts and other audiences to be engaged in the process of defining and improving metrics. Ensure that all stakeholders are shown the uses and usefulness of the information.

An approach to defining metrics

Going from a position of no monitoring or tracking to being able to set targets and have evidence available for decision making takes time and effort. It is an iterative process: it can begin with information that is easy to obtain, allowing tracking against simple metrics, and then expand to offer deeper insights into the BA Service and into what meaningful targets would be. Figure 13.2 shows that by starting to count simple things (monitoring) and building on this foundation, the BA Service can reach a position of informed decision making by setting and managing targets.

Figure 13.2 Approach to defining metrics


Table 13.2 shows the types of metrics that can be built up over time, starting with simple measures such as counting, moving to adopting new metrics and eventually setting targets.

Table 13.2 Increasing complexity of metrics over time


TYPES OF MEASURES

There are many types of measure that help to assess the performance of the BA Service. No single measure can give a complete picture and the greatest insights are gained by using a combination of measures to look at the BA Service from different perspectives.

Input and output measures

There are several things that may be measured, starting with inputs (things we need) and activities (things we do), moving on to outputs (things we produce), outcomes (things we deliver) and, finally, impacts (things we affect). Inputs and activities can be generally considered ‘costs’ of the project or service, whereas outputs, outcomes and impacts may be the ‘benefits’ it provides. Inputs are the easiest things to measure but don’t always offer much insight or support for informed decision making; impacts offer the most interesting information but are much more difficult to accurately and confidently measure. The different areas of measurement are shown in Figure 13.3.

Having ensured that a range of input and output measures are in place, the BA Service can start to ask questions about value for money and efficiency. BA leaders can then see relationships between changing elements on the cost side (people, budget, time) and corresponding results on the benefits side.

Figure 13.3 Areas of measurement


Leading and lagging measures

Leading measures tend to be input orientated; they provide the opportunity to influence future performance. Lagging measures are typically output orientated and provide the ability to analyse past performance. Leading measures are often concerned with behaviours, relationships and attitudes.

If the Service focuses only on lagging measures, it will always be looking backwards: ‘Why did we miss that target?’; ‘Why has our customer satisfaction dropped?’, and so on. If appropriate leading measures are also in place, it increases the chances of ‘keeping on track’ and lagging metrics will not come as a surprise. Leading and lagging measures associated with core aspects of the BA Service are shown in Table 13.3.

Observable versus measurable

Not everything is measurable, and in many cases, the effort to measure something may outweigh the benefit of having the information. This is particularly true for leading measures; for example, what would ‘increase levels of engagement between business analysts across projects’ look like? What factors could be observed to support whether this was happening/not happening? Encouraging business analysts to attend each other’s stand-up meetings, share work, share outputs of lessons learned or retrospectives might all be useful approaches, but actually tracking these things might not be sensible.

Consistently encouraging particular behaviours, actions and attitudes and highlighting/praising analysts when these are seen may impact measures down the line.

Table 13.3 Leading and lagging measures


Service Level Agreements

It is useful for BA leaders, business analysts and customers to have a factual understanding of how long certain activities and processes take to accomplish. As business analysis starts to be seen as a service, thoughts may turn to the introduction of Service Level Agreements (SLAs). It may be difficult to guarantee SLAs for the BA Service, as there are so many interdependencies and aspects outside the control of the Service. There are some areas where SLAs or internal targets may be relevant if all factors can be controlled by the BA Service; these are covered in Table 13.4.

Table 13.4 Example business analysis service levels

Resource management: When the BA Service receives a request for business analysis support, customers will receive an acknowledgement within X days, the BA Service will aim to clarify the request within Y days and will provide a response to customers within Z days. It may be difficult to provide a commitment to source business analysis capacity within a set time frame, unless other agreements with third parties (recruitment specialists, consultancies, etc.) are also in place.

Absence management: When the BA Service becomes aware of planned or unplanned absence of a business analyst, the Service will provide short-term cover for any absences over X number of weeks. This approach relies on some capacity being available within the Service, either by deprioritising improvement or management activities to enable delivery support or via access to additional business analysis capacity at short notice.

Transition planning: When the BA Service becomes aware that a replacement business analyst is required (possibly due to the initial business analyst leaving, promotion or other moves that may arise), there will be a period of X weeks’ notice to the customer. It may also be possible to commit to a target handover period, but, as with resource management, this may not be possible unless other arrangements (notice periods and sourcing agreements) are in place.

It is important to track the appropriate information before committing to an SLA in order to determine if measures are likely to be achieved – or breached. If the latter, a consequence must be defined. It may be more appropriate to set internal targets that are then tracked for a set period of time before making performance commitments to customers.

Setting time-based metrics for business analysis deliverables and activities is very challenging due to the myriad factors that influence business analysis work on projects. These factors include the number of business analysts involved, the involvement of other roles, numbers of stakeholders, type of project, and development methodology, to name just a few. It is useful to track the actual time spent producing specific business analysis deliverables or performing business analysis activities. This information should then be used to inform future estimates rather than to set time-based metrics.

The Balanced Scorecard

The Balanced Scorecard (BSC) was developed by Kaplan and Norton (1996) as a means of defining a framework for performance measurement that would support the achievement of the vision for organisations, and the execution of business strategy (Cadle, Paul and Turner, 2014). Many organisations use the Balanced Scorecard to evaluate team and individual performance.

The emphasis of the BSC is to consider aspects of performance in a balanced way. It can be used as a visual reminder to ensure that when overall performance of the BA Service is considered, the metrics used do not all relate to one area, and that no areas are missing. It also shows that the elements are interrelated, and, by seeking to improve performance in one area, other areas will be impacted (see Figure 13.4). The ‘balanced’ part of the approach refers to the fact that, at any given time, managers have to make trade-offs between the various elements; for example, investing in learning and growth at the short-term expense of financial performance.

Figure 13.4 The Balanced Scorecard (after Kaplan and Norton, 1996)


The elements of the BSC are shown in Table 13.5.

These four elements of the BSC are underpinned and driven by the vision and strategy of the organisation. It is useful to move through the levels shown in Figure 13.2 to identify suitable measures for each element of the BSC. The CSFs and KPIs should be linked to the vision and strategy for the BA Service and the wider organisation. This helps to provide the context for the metrics and measures that are to be monitored.

Critical success factors

Each objective of the BA Service will have a number of essential areas of activity that must be performed well in order for the objective to be met; these are the CSFs. It is necessary to differentiate between factors that are truly critical and those that are simply important to avoid overly burdensome measurement. Progress towards providing the CSFs allows the service to keep on track towards meeting objectives and, through these, achieving the vision (see Figure 13.1).

Table 13.5 Business analysis performance measures using the Balanced Scorecard


CSFs may evolve for the BA Service over time. During the early stages of Service maturity, raising awareness of the service offering and the business analyst role may be key CSFs to establishing the BA Service and meeting objectives. A more established BA Service may need to focus on the performance of the business analysts and the provision of a consistent service to meet the defined objectives.

Key performance indicators

KPIs are quantifiable and provide a way of measuring whether or not a CSF has been achieved. Each CSF should have one or more associated KPIs, otherwise the service does not know whether that CSF is being performed well or not.

The process of defining KPIs follows the same iterative quality management process shown in Chapter 12 (see Figure 12.10). As time goes on, it is important to retest and confirm that:

each KPI is providing meaningful information to the Service;

each KPI is linked clearly to a CSF;

the effort involved in obtaining the KPI information is justified.

FINANCIAL METRICS

The BA Service budget will primarily be concerned with staff costs, as discussed in Chapter 9. Depending on the charging model used, it may be necessary to set re-charge or ‘income’ targets for each business analyst and the BA Service as a whole. It is also useful to understand average costs and consider how cost savings or efficiencies could be achieved in each area. Typical areas to consider include average salaries, travel expenditure or software licence costs. For example, the average business analyst salary could be reduced by setting permanent staff to contractor ratios or by implementing new entry-level roles.
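As a small illustration of the staff-cost lever mentioned above (all figures are invented), a target permanent-to-contractor ratio translates directly into a blended average cost per analyst:

```python
# Illustrative blended-cost calculation; salaries and ratio are invented.
perm_cost, contractor_cost = 55_000, 95_000  # annual cost per head
perm_ratio = 0.8                             # target: 80% permanent staff

blended = perm_ratio * perm_cost + (1 - perm_ratio) * contractor_cost
print(f"Blended average cost per analyst: {blended:,.0f}")  # 63,000
```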

CUSTOMER METRICS

The customer perspective focuses on the people who use the BA Service. Surveys are often used to obtain opinion-based feedback and information about levels of satisfaction with the Service’s performance. There are potential disadvantages to using surveys, such as the difficulty in reaching sufficient sample sizes, the possible bias of those completing the survey and the demand on people’s time. However, surveys remain a reliable indicator of how people feel about the service they receive, and specifically targeted questions can provide significant insight. Surveys also offer a repeatable process: the same survey may be re-used, allowing responses to be tracked over time.

A number of established customer satisfaction metrics are described below.

Customer satisfaction (CSAT) survey

A CSAT survey provides a high-level customer satisfaction metric that indicates if customers’ expectations are being met and they are happy with the BA Service. A low score can identify the need to carry out more detailed analysis via customer engagement, a root cause analysis or by utilising a more detailed service quality measurement survey.

Starting the conversation about satisfaction helps customers to appreciate that their opinion is valued, and allows the BA Service to track whether implemented continuous improvement activities have had a positive customer impact. Essentially, a CSAT score helps the BA Service to answer the question, ‘Is the Service focusing on improving the areas that matter to our customers?’ Figure 13.5 shows an example of a CSAT score calculation.

Figure 13.5 Customer satisfaction survey example

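The figure is not reproduced here. As a minimal sketch, assuming a five-point satisfaction scale on which responses of 4 or 5 count as ‘satisfied’ (a common convention, not necessarily the calculation shown in Figure 13.5), a CSAT score could be computed as follows:

```python
def csat_score(responses: list[int]) -> float:
    """CSAT as the percentage of 'satisfied' responses.

    Assumes a 1-5 scale where 4 and 5 count as satisfied; adjust the
    threshold to match the scale the BA Service survey actually uses.
    """
    if not responses:
        raise ValueError("no survey responses to score")
    satisfied = sum(1 for r in responses if r >= 4)
    return 100.0 * satisfied / len(responses)

# Example: 14 of 20 respondents scored 4 or 5, giving a CSAT of 70%.
responses = [5, 4, 4, 3, 5, 2, 4, 5, 4, 3, 5, 4, 4, 1, 5, 4, 3, 4, 5, 2]
print(f"CSAT: {csat_score(responses):.0f}%")
```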

Net Promoter Score (NPS)®

NPS® surveys are used to understand customer loyalty and are a good indicator of the potential for business growth; the higher the Net Promoter Score®, the more likely it is that demand for the BA Service will increase. It allows for easy comparison across teams, organisations and industries. Figure 13.6 shows an example NPS® calculation.

Figure 13.6 Net Promoter Score®


Promoters: like working with the BA Service and will have a positive influence through word of mouth.

Passives: are not particularly invested but are satisfied with the service they receive. These scores do not form part of the NPS® calculation.

Detractors: have not particularly enjoyed working with the BA Service and will require proactive engagement to avoid reputational damage.

An NPS® of more than zero (i.e. any positive value) is considered good; a score of over 50 is considered excellent.
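As a minimal sketch, assuming the standard 0–10 ‘how likely are you to recommend us?’ scale, with 9–10 counted as promoters, 7–8 as passives and 0–6 as detractors (the usual NPS® banding; Figure 13.6 itself is not reproduced here):

```python
def nps(scores: list[int]) -> float:
    """Net Promoter Score: % promoters minus % detractors.

    Standard banding assumed: 9-10 promoters, 7-8 passives (excluded
    from the calculation), 0-6 detractors.
    """
    if not scores:
        raise ValueError("no survey responses to score")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# Example: 5 promoters, 3 passives and 2 detractors out of 10 gives an NPS of 30.
print(nps([10, 9, 9, 10, 9, 8, 7, 8, 5, 6]))  # 30.0
```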

Customer Effort Score (CES)

This metric is used to understand how easy or difficult it is to be a customer of the BA Service; this will impact both CSAT and NPS® results.

Where business analysis is provided as an internal consultancy and customers effectively have no choice about using the Service or not, this metric can highlight the likelihood that customers will try to work around or avoid using the BA Service. Customer statements such as ‘We don’t need a business analyst on this’; ‘The business team have done their own analysis’; ‘We’ll bring in our own BA’ can all be warning signs that the BA Service is not easy for customers to work with. Figure 13.7 shows an example CES score calculation.

Figure 13.7 Customer Effort Score


The same question can be used to calculate a similar indicator, the Net Easy Score (NES), which takes the same format as the NPS®. On this scale, those who respond 6 or 7 find the service ‘easy to work with’, while those who respond 1, 2 or 3 find it difficult. The NES is calculated as % Easy − % Difficult.
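A minimal sketch of the NES calculation just described (1–7 scale; 6 or 7 counts as ‘easy’, 1 to 3 as ‘difficult’; the example scores are invented):

```python
def net_easy_score(scores: list[int]) -> float:
    """Net Easy Score: % easy minus % difficult on a 1-7 scale.

    Scores of 6-7 count as 'easy' and 1-3 as 'difficult'; 4-5 are
    neutral and excluded, mirroring the NPS treatment of passives.
    """
    if not scores:
        raise ValueError("no survey responses to score")
    easy = sum(1 for s in scores if s >= 6)
    difficult = sum(1 for s in scores if s <= 3)
    return 100.0 * (easy - difficult) / len(scores)

# Example: 6 'easy', 2 neutral and 2 'difficult' responses give an NES of 40.
print(net_easy_score([7, 6, 6, 7, 6, 6, 4, 5, 2, 3]))  # 40.0
```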

Aggregating feedback

The customer metrics discussed in this section have all involved asking questions about the BA Service as a whole. However, customers may only feel able to provide feedback on the specific business analyst they have been working with and may not yet recognise the concept of a BA Service. In this situation, it is possible to phrase questions that relate to individual business analysts rather than the overall BA Service. Possible questions are:

Overall, how satisfied are you with the work of (named business analyst)?

Would you recommend (named business analyst) to a colleague?

Is it easy to work with (named business analyst)?

The scores obtained can be shared with the individuals and aggregated to provide Service-level metrics. On first consideration, asking these questions about an individual can seem too direct, but moving to the idea of delivering a customer-focused service requires a change in approach and attitude and a real commitment to continual improvement.
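As a minimal sketch of the aggregation step (analyst names and scores are invented), per-analyst survey scores can be rolled up into a Service-level figure:

```python
from statistics import mean

# Invented per-analyst satisfaction scores gathered from customer surveys.
analyst_scores = {
    "analyst_a": [4, 5, 4],
    "analyst_b": [3, 4, 4],
}

# Individual averages can be shared privately with each business analyst...
for analyst, scores in analyst_scores.items():
    print(f"{analyst}: {mean(scores):.1f}")

# ...while the flattened average provides the Service-level metric.
service_level = mean(s for scores in analyst_scores.values() for s in scores)
print(f"Service-level average: {service_level:.1f}")  # 4.0
```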

A recommendation for a named business analyst can be an indicator of good performance. However, accommodating requests for a specific business analyst is likely to present a challenge for the BA Service and may be demotivating for other business analysts. Requesting a specific business analyst may be driven by factors other than business analysis performance, such as business domain or system knowledge, or the willingness of the business analyst to provide support beyond the agreed role. It is even possible that a business analyst is popular with their customers because they do not challenge assumptions or assertions; in other words, that they are not performing the role effectively. The question of ‘recommendation’ is still useful to derive an aggregated Net Promoter Score®, but this could lead to unwanted behaviours from customers.

Scores and feedback provided at an individual level may also represent factors wider than the performance of an individual business analyst, such as:

the clarity of the business analyst role (see Chapter 1);

the nature of the customer expectations (see Chapter 10);

other roles that are either present or not present.

In this case it may be appropriate to use only the aggregated metrics, to tackle the issues at a Service level and not to try to interpret or action scores for individual business analysts.

Complaints and compliments

It is unlikely that the BA Service will have a formalised ‘complaints procedure’, although this may be the case in some organisations. Complaints may be received in many ways, some of which may be subtle, requiring BA leaders to be alert to such approaches. For example, there may be a throw-away line in an email, a pointed comment made, facial expressions, corridor conversations or second-hand reports. There may also be more direct ‘complaints’, framed as concerns, feedback, lessons learned, general issues or even escalation requests.

These approaches may also be used to offer compliments about a business analyst or the BA Service. Compliments are unsolicited positive feedback, messages of thanks and appreciation. Compliments must always be acknowledged back to the provider and passed on to the relevant individual or group. Compliments should be shared within the BA Service as a regular agenda item in meetings or in a section of newsletters and updates. Creating a culture that openly celebrates success has been shown to improve performance (Behn, 2003).

All of this qualitative data may be tracked, and it can be useful to create a log of both complaints (whether or not this term is used by the customer) and compliments that the BA Service or individual business analysts receive. Complaints will typically outnumber compliments, as customers are far more likely to be motivated to raise issues.

LEARNING AND GROWTH METRICS

The learning and growth perspective considers the culture and development of the BA Service. Key questions the BA Service may wish to address from this element of the BSC are:

What is the level of engagement for BA Service activities?

Is knowledge management being applied consistently?

Is the volume of knowledge assets of the BA Service expanding and being maintained?

How much time are business analysts spending on learning and development activities?

Is new learning being applied?

Are business analysis support tools being used to their full potential?

Are we seeing improvement in quality and performance after investment in training?

What routes do we have for learning from outside the organisation and the sector?

It is important to consider how information to help answer these questions can be obtained. For example, the volume of files shared, accessed and updated may help to quantify knowledge management processes, and outputs of peer reviews may provide information about quality improvement after training. Some of the information will have to be obtained by asking the business analysts directly and surveys can again be an efficient mechanism.

BA pulse survey

The aim of a pulse survey is to get insight into how business analysts are feeling in relation to key areas such as workload, personal development, management support and wellbeing. It is quick to complete and can be repeated frequently (e.g. weekly or monthly). This snapshot of information can help inform decisions such as ‘How urgent is this recruitment?’ and ‘Can the service take on an additional project?’

Asking for regular feedback promotes employee engagement. Pulse surveys offer the opportunity to highlight issues and can complement, and even positively influence, longer-term employee satisfaction surveys. For example, employee satisfaction surveys may include questions about the opportunity to provide feedback, the ability to influence decision making and the employer’s attitude to wellbeing, all of which may be answered more positively if pulse surveys are carried out regularly and the results are acted upon.

Pulse surveys have been shown to contribute to a more positive organisational culture (Mann and Harter, 2016), and provide a mechanism to remind employees of the areas that matter to the organisation. For example, being repeatedly asked about personal development reinforces the idea that personal development time is expected and encouraged, and that business analysts will be allowed time for this within their working week.

There must be a willingness to take action if pulse scores are low or drop, or if results vary widely between responders or from one survey to the next. Where this occurs, it is likely that the pulse survey has identified the need for more detailed engagement to determine underlying issues and identify possible remedial actions.

Before introducing a pulse survey, it is important to consider the following questions:

Will it be anonymous or identifiable?

How will people be encouraged to complete the survey?

What frequency will be applied or when will it be issued? (Tip – avoid Monday mornings and Friday afternoons as opinions expressed in these slots are not necessarily representative of the working week!)

What results are expected?

What routes of action are available if results are not as expected?

How will aggregate information be shared back with the business analysts?

Figure 13.8 provides an example pulse survey.

Figure 13.8 Business analysis pulse survey

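The example survey is not reproduced here. As an illustrative sketch (the question areas and the flagging thresholds are assumptions, not taken from Figure 13.8), pulse results might be aggregated and compared with the previous survey like this:

```python
from statistics import mean, pstdev

# Hypothetical pulse areas; substitute the questions actually asked.
previous = {"workload": 3.8, "development": 4.1, "wellbeing": 4.0}
latest_responses = {
    "workload": [2, 3, 2, 3, 3],
    "development": [4, 4, 5, 3, 4],
    "wellbeing": [4, 3, 4, 4, 5],
}

for area, scores in latest_responses.items():
    avg, spread = mean(scores), pstdev(scores)
    flags = []
    if avg < previous[area] - 1.0:  # assumed threshold for 'a drop'
        flags.append("score dropped: investigate")
    if spread > 1.0:                # assumed threshold for 'wide-ranging'
        flags.append("responses vary widely: engage individually")
    status = "; ".join(flags) if flags else "ok"
    print(f"{area}: avg {avg:.1f} (was {previous[area]}), {status}")
```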

Business analyst performance measurement

Organisations typically have performance development and appraisal processes in place, though a generic process is unlikely to provide sufficient detail to assess business analysts’ performance and skill levels. The BA Service can survey customers about the business analysts they have worked with, either at the end of projects or assignments or on a pre-determined cycle (e.g. twice per year or annually). It is also useful for business analysts to assess themselves using the same parameters and scale as customers; this generates results for the business analysis CSFs and enables the two views to be compared.

There are two possible uses for this data:

1. to aggregate the individual scores and obtain a view of the performance of the entire BA Service;

2. to support a performance evaluation for an individual business analyst.

The KPIs suggested in Table 13.6 (BA Manager Forum, 2015) may be asked as survey questions using a Likert scale (typically five points from ‘strongly agree’ to ‘strongly disagree’). An example survey is also shown in Appendix 14.

Table 13.6 Potential CSFs and KPIs


This list does not cover all aspects of business analyst performance that may be of interest to the BA Service. However, it is important to balance the areas that are measured against the amount of time stakeholders are willing to spend providing feedback.

This type of performance feedback does not take into account the individual business analyst’s different levels of seniority and experience, or that expectations may vary for different business analysts. Where the BA Service is sufficiently large and mature, it may be beneficial to construct a more complex measurement system that reflects differences in job descriptions. For example, the SFIA levels may be used for this purpose (see Chapter 4). However, starting with a straightforward mechanism that may be refined gradually is likely to offer the best approach.
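As a sketch of how the Likert responses might be turned into comparable numbers (the five-point mapping is standard, but the KPI wording and scores here are invented, not those of Table 13.6):

```python
from statistics import mean

LIKERT = {"strongly agree": 5, "agree": 4, "neutral": 3,
          "disagree": 2, "strongly disagree": 1}

def kpi_averages(responses: dict[str, list[str]]) -> dict[str, float]:
    """Convert Likert answers per KPI question into average scores."""
    return {kpi: mean(LIKERT[answer] for answer in answers)
            for kpi, answers in responses.items()}

# Hypothetical KPI: the customers' view versus the analyst's self-assessment.
customer = kpi_averages({"requirements are clear": ["agree", "strongly agree", "agree"]})
self_view = kpi_averages({"requirements are clear": ["strongly agree"]})

for kpi in customer:
    print(f"{kpi}: customer {customer[kpi]:.1f}, self {self_view[kpi]:.1f}, "
          f"gap {self_view[kpi] - customer[kpi]:+.1f}")
```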

INTERNAL PROCESS METRICS

The internal business processes perspective looks at how smoothly the BA Service is running. It gives the opportunity to consider process efficiency, reduce waste and identify where it is possible to work more quickly. The key processes required to operate the BA Service (see Chapter 9) will help to identify how baseline process measures could be defined and improved.

Recruitment metrics

There are two key recruitment metrics that should be used by the BA Service to plan the work effectively and respond to customer demand; these are recruitment timescales and conversion rates.

The most informative recruitment timescale metric is the average length of time from listing a business analyst vacancy to a new business analyst beginning work. There are several processes that contribute to this metric, including the length of time:

adverts are listed for;

recruitment agencies are given to source candidates;

for the process of shortlisting, interviewing and making a decision;

candidates are required to work as a notice period.

The BA Service cannot control all of these issues but should shorten time frames where possible, as this is likely to be of benefit to both customers and business analysts.
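As a minimal sketch of the listing-to-start metric (the stage names and durations are illustrative only):

```python
from statistics import mean

# Days each recent vacancy spent in the contributing stages (invented data).
vacancies = [
    {"advert": 14, "shortlist_and_interview": 21, "notice_period": 30},
    {"advert": 14, "shortlist_and_interview": 35, "notice_period": 60},
]

time_to_start = [sum(stages.values()) for stages in vacancies]
print(f"Average listing-to-start: {mean(time_to_start):.0f} days")  # 87 days
```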

Conversion rates are a recruitment metric that helps in understanding the end-to-end process. They provide opportunities to compare one round of recruitment with another, or to compare recruitment exercises for different roles. They reflect how many applications are converted to interviews and how many interviews are converted to new business analyst appointments. Conversion rates should be tracked for all business analysis recruitment exercises, as this will build helpful management information. This information is vital to inform the business analysis recruitment strategy (see Chapter 3) by addressing key questions such as:

Which business analysis roles (senior, practitioner or entry level) have the highest conversion rates? Why is this? What are we doing differently at each level?

How do business analysis conversion rates compare with other roles such as project management? What could explain the difference?

At what times of year do we get more applications?

Which advertising routes lead to more applications? Is the conversion rate affected?

Achieving a higher conversion rate without compromising on appointment standards indicates a more effective recruitment process. Table 13.7 shows an example of typical recruitment metrics that would provide a basis for informed decision making about the recruitment strategy and informed predictions about future recruitment. The data might lead to the following consideration:

‘If we need to appoint 5 business analysts, based on our previous conversion rate (6.7 per cent) we would need to receive over 70 applications and see 20 candidates.’ Is this feasible? How long would this be likely to take?
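A minimal sketch of the reasoning in the quote, using the 6.7 per cent end-to-end conversion rate; the application-to-interview rate is an assumption, not a figure from Table 13.7:

```python
import math

def required_applications(hires_needed: int, conversion_rate: float) -> int:
    """Applications needed at a given end-to-end conversion rate."""
    return math.ceil(hires_needed / conversion_rate)

hires = 5
end_to_end = 0.067       # offers accepted / applications received
interview_rate = 0.27    # assumed: interviews held / applications received

apps = required_applications(hires, end_to_end)
interviews = math.ceil(apps * interview_rate)
print(f"Appointing {hires} business analysts needs ~{apps} applications "
      f"and ~{interviews} interviews")  # ~75 applications, ~21 interviews
```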

Table 13.7 Example business analyst recruitment metrics


The conversion rate could be calculated just from the number of applications (start of process) and the number of offers accepted (end of process). However, the intermediate information allows adjustment actions to be suggested such as:

We rule out a lot of applications on paper. Should we be speaking to more applicants?

We don’t rule out many applicants via the telephone interview, is this working? Is it worth the effort?

Why aren’t all our offers being accepted? Are we tracking the reasons applicants give us?

It should also be possible to reflect ongoing performance of recently recruited business analysts against recruitment information:

How do we feel about the recruitment decisions made once six months or one year have passed?

Do our processes (e.g. questions, assessments) need to be updated to reflect the information we hold about gaps in applicants’ knowledge and experience?

What assumptions have we made that we didn’t recognise?

Did time pressures or customer demand affect our decision making? Was this successful or detrimental?

Where recruitment feedback from candidates suggests that the process is too slow, it may be useful to track candidate experience metrics such as:

length of time between application and interview;

length of time between interview and offer;

percentage of offers not accepted.

These metrics will show where improvements can be made to avoid losing good candidates in a competitive market.

BA Service Dashboard

Presenting information about the metrics and measurements is vital to influence behaviours and encourage action.

Developing a dashboard that can be used with different audiences is a useful approach, as it helps to present information concisely and clearly. The process of designing the dashboard needs to consider the following (a sketch capturing these choices as a simple configuration follows the list):

the metrics and information to include;

the target audience;

the creation and maintenance process and resources;

the support tools to be used;

the dissemination mechanism(s);

the frequency of updating the dashboard;

the links to existing reports or other information.
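As a sketch, these design decisions could be captured as a simple configuration record (all names and values are illustrative, not a prescribed format):

```python
# Illustrative dashboard configuration covering the considerations above.
dashboard_config = {
    "metrics": ["CSAT", "NPS", "pulse averages", "recruitment conversion"],
    "audience": "BA Service leadership and customers",
    "owner": "BA practice manager",        # creation and maintenance resource
    "tool": "spreadsheet or BI platform",  # support tool to be used
    "dissemination": ["team wiki", "monthly service report"],
    "refresh_frequency_days": 30,
    "linked_reports": ["quarterly strategy review"],
}
```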

Figure 13.9 shows an example dashboard.

The dashboard can use a number of different data visualisation techniques to make the content visually engaging. Displaying the dashboard in relevant physical and online locations sends the message that performance is taken seriously, and that the BA Service is making tangible steps towards ensuring that performance remains on track to meet the defined objectives and deliver the strategy.

The service gap

When a service is experiencing low customer satisfaction, there is a gap somewhere in the service provision. It is useful to understand where this gap is occurring. Are we providing the service we agreed, but the customer expects something different? Or, are we failing to meet the standards the BA Service has set? Were the standards correct in the first place?

Four dimensions that relate to customer service gaps are discussed in Table 13.8.

Figure 13.9 BA Service Dashboard


Table 13.8 Understanding the service gap


CONCLUSION

This chapter has suggested a range of metrics relevant to the BA Service that explore each area of the Balanced Scorecard. Having used metrics to evaluate each of these areas, it is important to use the information gained to improve the performance of the BA Service. The four elements of the service gap – knowledge, standards, delivery, communication – provide a framework for considering where improvements may be made.

BA leaders need to invest resources in defining metrics and measuring performance against these metrics. This will enable them to understand whether the BA Service is improving and to demonstrate progress against the performance improvement strategy. This may seem a daunting task but, if a performance improvement exercise is initiated with a limited scope and using straightforward metrics, it is possible to build up confidence and capability in using a metric-based approach to improvement.

Ongoing monitoring and improvement of the BA Service may be facilitated through the use of performance metrics and the dissemination and publication of the survey results. The use of a BA Service Dashboard will help to highlight performance issues and clarify where changes need to be made in order to achieve the required performance improvements.
