11 ASSESSING VALUE IN DATA STRATEGY IMPLEMENTATION

‘To me, ideas are worth nothing unless executed. They are just a multiplier. Execution is worth millions.’

Derek Sivers1

Any investment in defining a data strategy and embarking on its implementation must be closely related to the value that such an activity is to bring to the organisation. If this is not uppermost in the minds of both those commissioning the data strategy and those tasked with defining and delivering it, then I would contend that it will lack focus, with an end product unlikely to gain wider traction or seem relevant to those you most need to engage.

This chapter focuses on the value delivered through the data strategy and its implementation, how you demonstrate that value through effective measurement, and the importance of evaluating the programme to show how the data strategy has achieved its goals and, through alignment, enabled the corporate strategy to succeed.

I have known instances in which an organisation embarked on a data strategy because it had to address a lack of compliance, or an urgent compliance obligation. In reality, these instances did not lead to the development of a data strategy as discussed throughout this book. It is reasonable to ask whether that makes them any less a data strategy; whilst there are clearly reasons why every organisation needs to be compliant in its management and exploitation of data, those reasons alone do not make a data strategy the answer. I would suggest that a standard programmatic approach to compliance would be more effective than wrapping compliance up into a strategy (in other words, compliance should always be part of a data strategy, rather than the entirety of it).

There is, of course, nothing like a focus on an immediate crisis or compliance issue to drive attention to data! Therefore, it may be a case of not letting such a crisis ‘go to waste’, and addressing the immediate issue as a prompt to also cast the net wider, delivering a data strategy that turns adversity into a positive outcome.

One of the overriding reasons strategies fail is a focus solely on performance metrics, which track programmatic activity such as spend, resources and timing. These look at the immediate or, more often, the historic: MI and reporting that, by the time it is produced, tells you where you were a month or more ago. Yet without being able to measure the effectiveness of strategy execution, by which I mean the relevant transformation activities having been delivered and embedded, the strategy is just as likely to fail.

It may seem obvious, but if you are delivering a strategy you are meant to be looking forward, driving change, and therefore performance measurement will only tell a small part of the impact that strategy implementation is having on your organisation. The optimal approach is to strike a balance between tracking the coordination of the programme, as measured via the performance metrics, in the short term and to align these with the impact of the change delivered, as measured over a period of time.

The essence of embarking on measuring impact is to have a clearly defined baseline. This may already exist, but in most organisations it does not. There will likely be performance reporting scattered like confetti in your organisation, and many individuals may be involved using a variety of tools to exploit one-off data sets sourced through contacts, which leads to a lack of consistency or quality of information produced. You need to be very clear in how you determine your baseline and get it approved and agreed on by those stakeholders who will later hold you to account for your progress. In some cases there will be such a paucity of data that there will be gaps, undermining your baseline. Ironically, this tells its own story. If you are unable to measure a baseline then there is clearly a significant issue with the data management in play in your organisation.

The first sign of progress is to be able to articulate where you start from, even if it takes some time to establish this fact. As John Foster Dulles, the former US senator, said: ‘The measure of success is not whether you have a tough problem to deal with, but whether it is the same problem you had last year.’2 In other words, be able to articulate the problems you are seeking to address with the strategy to be able to demonstrate your progress.

11.1 EVALUATION TO GENERATE MEASUREMENT IN DATA STRATEGY IMPLEMENTATION

As has been outlined above, the importance of measurement will become clear the further you get into the implementation of the data strategy. However, if you haven’t defined, agreed and measured your baseline at the start, then you have missed one of the fundamentals of defining a data strategy.

Measurement should be a visible part of your data strategy, articulating the improvements you expect to deliver in the course of the implementation activity and being a key plank of your review discussions with your sponsor and senior stakeholders. If you have not defined your success criteria and how these will be evidenced through measurement, then it is highly likely that you are already operating to differing interpretations of what is going to be achieved through the data strategy implementation.

It may seem complex to define success criteria before embarking on the data strategy, but I would suggest that you will have formed judgements based on some degree of evidence as to what to address, and when, in the data strategy. If not, then the data strategy is open to challenge as to what is driving the prioritisation and the return on investment the organisation can expect the implementation to generate. It is essential to build your measurement on evaluating improvement, in a way that can be evidenced, as opposed to being judgemental. The process is intended to be empirical, so think about where that evidence will be captured, stored and used to demonstrate progress.

The process of defining your success criteria, the options available to prioritise in terms of delivery, and the impact that the data strategy will have in terms of enablement and capability should all be factored in to the strategy definition phase and then refined in more detail at the implementation stage. Prior to commencing implementation, it is good practice to revisit the success criteria to evaluate whether these are still accurate, measurable and achievable in the time permitted. It is also recommended that you gain a final sign-off on the measures you are putting in place and enshrine these in the implementation programme reporting to ensure there is a tracker on progress.

The complexity of the measures will likely depend on the nature of the organisation, the scale of the programme and the way in which other strategic implementations report progress. If these are lacking, then you have a blank canvas to work with and can draw on the good practice of others in your own approach.

Within UK government, there is a publication known as the Magenta Book3 which is intended to guide the evaluation of policies, projects and programmes across government. Whilst this is intended for those who engage with, or operate within, government, many of the principles are equally applicable to strategy implementation measurement, especially in relation to the three evaluations – process, impact and value for money. It provides a coherent description of how to use these evaluation techniques to evidence progress against that which was anticipated, and so is a helpful guide for anyone new to evaluating delivery of a major programme, into which data strategy implementation would certainly fit.

A similar guide4 produced by the New South Wales (NSW) government in Australia provides a very effective set of guidelines to be used to undertake programme evaluations (it also references the Magenta Book as one of its sources). Whilst aimed at those operating NSW government funded programmes, it clearly articulates how to undertake an evaluation of a programme, and I would recommend it for those keen to explore how this might be utilised to support a data strategy implementation programme evidencing successful outcomes. In 2013 the NSW government established, via its Treasury, the Centre for Program Evaluation, specifically to promote evidence-based decision making across the NSW government and to conduct evaluations of programmes using a consistent methodology.

11.1.1 Evaluation approaches

The balanced scorecard was originally designed by Robert Kaplan and David Norton5 as a management system enabling organisations to manage their strategic implementation, with four key themes at its heart.

  • Financial – how the organisation should appear to shareholders so that the organisation can succeed financially. It focuses on bottom-line improvement, through measuring profitability and shareholder value.
  • Customer – how the organisation expects to appear to customers in order to achieve its vision. This separates customer and market segments into those where it will seek to compete and its anticipated performance levels in those segments, whilst determining what approach to adopt in other segments (harvest or divest,6 for example).
  • Internal business – identifies how the processes within the organisation need to be refined or improved to excel and be able to meet shareholder and customer expectations. The focus is on those internal processes, core competencies and technologies that underpin customer needs.
  • Innovation and learning – investigating the sustainability of the organisation’s ability to change and adapt to meet evolving customer expectations and thereby achieve the organisation’s vision. This involves a review of the entire infrastructure of the organisation needed to meet these objectives, and assesses the ability of the organisation to innovate, improve and learn, typically measured by new product launches and speed of response to change.

Core to the vision of the balanced scorecard is strategy mapping: making explicit the cause-and-effect linkages that ensure outcomes are achieved through the alignment of initiatives and resources, tangible and intangible. Kaplan and Norton were clear that this is essential to being able to demonstrate value.

Mindful that their starting point had been the private sector, Kaplan and Norton adapted the model to make financial and customer factors equal in status in supporting the mission of a public sector organisation. Others have since altered this order and redefined or added to it to create variants which more closely relate to public sector and not-for-profit organisations, but all start with mission as the key driver.

Whilst the balanced scorecard is more typically used across the organisation, it is worth considering – especially if your organisation is using a balanced scorecard approach or a variant thereof – whether there are elements of the balanced scorecard that the data strategy implementation delivers. For instance, every one of the four key themes has a data dimension to it, and each will be enhanced by what you deliver through the data strategy implementation.
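To make that concrete, here is a minimal sketch (in Python, with purely illustrative measures I have invented for the purpose) of how data strategy deliverables might be mapped onto the four balanced scorecard perspectives so that each perspective carries at least one data-driven measure.

# Hypothetical mapping of data strategy deliverables to the four
# balanced scorecard perspectives - the measures are illustrative only.
scorecard = {
    "Financial": [
        "Cost avoided through retiring duplicate data stores",
        "Revenue uplift attributed to analytics-driven pricing",
    ],
    "Customer": [
        "Reduction in complaints caused by incorrect customer data",
        "Single-customer-view coverage across core products",
    ],
    "Internal business": [
        "Proportion of critical data elements with a named owner",
        "Cycle time from data capture to availability for reporting",
    ],
    "Innovation and learning": [
        "Staff completing data literacy training",
        "New data products launched per quarter",
    ],
}

for perspective, measures in scorecard.items():
    print(perspective)
    for measure in measures:
        print(f"  - {measure}")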

If your organisation does not follow the balanced scorecard, or has no desire to do so, I would still recommend considering it as a structured approach to aid your thinking in defining your outcomes and, thereby, the measurement to be used. It provides a structure that is focused on the organisation at large, as your data strategy should be too, and it therefore becomes an effective way to marshal your thinking on how to demonstrate real value to your senior stakeholders and sponsor, reaching through to those things which impact the bottom line, customer value or the efficient delivery of services.

Two further commonly used methods of evaluation are the KAB model,7 which assesses effectiveness in driving change in knowledge, attitude and behaviour (each in turn more difficult to achieve than its predecessor), and the Fogg behaviour model (FBM), a behaviour change model for persuasive design introduced by B.J. Fogg.8 For those working in training organisations or learning environments there is also Kirkpatrick’s evaluation model,9 which assesses impact in terms of reaction, learning, behaviour and results; I do not propose to cover it in more detail here (I have provided a link in the footnote if you are interested in exploring it).

The KAB model is also referred to as a social cognition model, or knowledge, attitude and practice (KAP) model. It is used extensively in health education and is based on behavioural change being affected by knowledge and attitude. It is centred around knowledge, and the hypothesis that behaviour changes gradually, based on increasing knowledge leading to attitudinal change, which in turn impacts behaviour. Its core premise is that humans are rational, though of course we know that at times emotion or other triggers can overcome the rational, which can make the use of the KAB model challenging on occasions.

According to the FBM, behaviour is composed of three factors: core motivators, simplicity factors (often referred to as ability) and a prompt. The concept is that an individual will succeed in changing behaviour if they are motivated, have the ability to perform that behaviour and receive a prompt to do so. If the three do not converge then there will be no change. If you are interested in knowing more, Dr Fogg provides a ‘boot camp’10 on how to use the model to change behaviour.

Whatever evaluation approach is adopted, the evidence will usually be gathered from a combination of sources. There will be qualitative and quantitative data, observational data and trend data, gathered by the implementation team itself, from which enhancements to the way the organisation is operating can be inferred. Often this means collating observed, qualitative and quantitative data into a single view that demonstrates behavioural progression in the organisation in a way the individual sources could not.

Of course, there is also the external perspective to consider in all of this evaluation. As well as the impact within the organisation, if the action taken drives improvements to the customer experience, these too should be captured. They may show up as fewer customer queries raised with the call centre or sales agents, greater retention through providing a more efficient service, or a better understanding of customer needs through analytics – improvements that can now be plotted in a way which was previously suboptimal or not achievable.

There are other approaches which could be used, but do consider the evaluation methods identified above, as they focus on what has a tangible impact on the bottom line or on your organisation’s brand image. Most organisations exist to provide a service or product to the customer, regardless of whether they are private or public sector, not-for-profit or global conglomerate, and so this is where becoming a more mature, data-led organisation manifests itself: your organisation will likely start to differentiate itself from its competitors, in turn sparking innovation and a dialogue which might not have been present previously.

11.1.2 Communicating measurement

Communicating the measures to be used, and gaining agreement to their adoption and to the data which underpins them, is the first part of the communications process. Once this has been achieved, it is essential to establish the baseline, gain acceptance that this is the starting position (you may find there are as many versions of how this has been defined in the past as there are stakeholders, which is why defining the baseline and getting approval for targets is so key) and agree a production cycle of reporting.

Typically, measurement would be expected monthly but, in reality, progress within a month might be limited, especially on some of the bigger tasks which require significant mobilisation. Therefore, you may wish to report quarterly, or to even offer a hybrid of hard measures quarterly and a narrative-based progress report on a monthly basis, in which key deliverables can be called out supported by some key measures.

There could be some important elements that act as lead indicators of the health of the implementation: the progress of dependencies elsewhere; risk mitigation having removed potential barriers in advance (for example, resource constraints which were forecast to slow the pace of the implementation may have been resolved, enabling a faster pace than forecast); or certain activities being removed from the programme altogether (perhaps because an alternative approach has been identified, or because they were mistakenly understood to be essential when in fact they do not impact the strategy implementation). These allow further exploration of whether future milestones are on track or whether there is some concern about how they are shaping up. I would encourage you to consider adding such indicators, to give a sense of predicted future performance, as they might help mobilise additional support or focused activity to bring things back on plan if that is necessary.

As the strategy implementation gathers pace and there is more evidence to present, you may be confident that moving to a monthly reporting cycle based on harder measures is feasible, and I would encourage you to do so at the earliest opportunity. The more opportunities to raise the profile of the data strategy implementation, the more support you are likely to retain for it amongst your senior stakeholders.

Your organisation may already have an expectation as to the reporting frequency for strategic programmes such as the data strategy implementation, and so you may have to fit with the wider approach. However, do bear in mind that the data strategy implementation will almost certainly get off to a relatively slow start in terms of evidence of change due to the need to focus resource and potentially technology on delivering over several months. For this reason, quick wins showing clear results and benefits realised are to be sought to keep enthusiasm amongst stakeholders positive and to demonstrate progress, even if these are possibly quite tactical in nature.

In addition, do not forget those who are delivering the implementation of the data strategy. They will likely be working on part, rather than all, of the data strategy implementation and so will not necessarily be aware of the progress in totality. It is good for motivation, cross-programme communication and collaborative problem solving to have a joined-up approach to keeping the implementation team fully engaged and informed, and it enables them to have more rounded discussions with their stakeholders across the organisation.

11.2 BENEFITS REALISATION

‘It is a central tenet of the Benefits Realisation Approach that benefits come only with change and, equally, change must be sustained by benefits. People must change how they think, manage and act in order to implement the Benefits Realisation Approach.’ – John Thorp and Fujitsu Consulting’s Center for Strategic Leadership11

A key measurement to use in data strategy implementation is tracking the benefits that have been realised through the implementation. The concept of benefits realisation is not a new one, yet many organisations struggle to articulate the benefits delivered by their programmes in a structured way.

Whilst the evaluation discussed above looks at the impact of the data strategy as a whole, benefits should be tracked throughout the delivery to demonstrate the success of the implementation. This will aid communication and retain confidence with your sponsor and stakeholders alike, and can then be presented in a way which makes a compelling case as to why the continued investment in the data strategy implementation is required.

Benefits realisation is closely related to the evaluation process as it seeks to achieve similar aims, and so the work will overlap and can be done in parallel so there is a consistent approach adopted. The key to benefits realisation is to identify, at a more detailed level, the actual benefits that are to be measured, via clearly defined monitoring processes, to enable the programme delivery to be tracked through to its conclusion and beyond, depending on how long a tail there is to the benefits being fully realised.

It is advantageous to start the process of defining these well in advance of implementation, ideally via the waymarkers within the data strategy itself, as these should articulate some of the benefits (not necessarily all, but the most significant should be included) and how these will be identifiable once delivered.

What benefits realisation shares with other forms of evaluation is the principle of establishing whether the programme implementation has delivered what it set out to achieve. This may seem a statement of the obvious, but in many organisations performance measurement is fixated on whether the programme has run to schedule, met the requirements and delivered within budget. All of these could be achieved and yet the programme still fail, as none of them realises the intent of the programme, which is to deliver the outcomes as fully realised benefits. This is often one of the greatest sources of frustration in organisations: programmes diverge from reality, the deliverables remain theoretical because they cannot be adopted or because no one has communicated how to transition to them, and the programme and sponsor declare success, yet nothing has actually been achieved.

The key to benefits realisation is to identify business owners of the benefits and hold them to account in terms of actually being able to achieve what the programme set out to deliver. This detaches the responsibility for benefits from being an introspective activity for the implementation or programme team and shifts the onus onto those who will need to materialise the benefit. It therefore brings operational responsibility into play and, in terms of a data strategy implementation, is critical to ensuring that changes and enhancements are fully embedded in the organisation in day-to-day operations.
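As a minimal illustration of what a benefits register supporting this might look like, the sketch below (in Python, with entirely hypothetical benefits, owners and figures) pairs each benefit with a named business owner, a baseline, a target and the value measured to date, so realisation can be reported as a simple percentage of plan.

from dataclasses import dataclass

@dataclass
class Benefit:
    """One entry in a hypothetical benefits register."""
    name: str
    owner: str          # the business owner accountable for realising it
    baseline: float     # agreed starting position
    target: float       # value expected once fully realised
    measured: float     # latest measured value

    def realised_pct(self) -> float:
        """Share of the planned improvement achieved so far."""
        planned = self.target - self.baseline
        return 0.0 if planned == 0 else 100 * (self.measured - self.baseline) / planned

# Illustrative entries only - real benefits and owners come from the data strategy itself.
register = [
    Benefit("Duplicate customer records (%)", "Head of CRM", 12.0, 2.0, 7.5),
    Benefit("Monthly reports built on governed data (%)", "Finance Director", 40.0, 90.0, 65.0),
]

for b in register:
    print(f"{b.name}: owner={b.owner}, realised {b.realised_pct():.0f}% of plan")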

The benefits realisation process should help inform a wider evaluation of the programme, demonstrating that the key aspects of the programme as set out have been achieved and subsequently adopted into business-as-usual activity.

There are a variety of ways in which benefits realisation and performance measurement can be tracked, and I have sought to provide a number of references within the bibliography for those wishing to explore this topic further, with guidance, plans and frameworks available via a number of organisations.

11.3 PERFORMANCE FRAMEWORKS

There are two performance frameworks in common use; both have their place and can co-exist quite effectively as supporting measures.12 In short, the key differences are displayed in Table 11.1.

Table 11.1 Comparison of KPIs and OKRs

             KPIs                                    OKRs
Goals        Achievable                               Ambitious and aspirational
Basis        Quantitative                             Qualitative
Intent       Performance tool                         Motivational tool
Focus        Outputs                                  Growth
Indicator    Lag                                      Lead
Purpose      Delivering success, often in business    Innovation, improvement, challenge
             as usual/defined activities

I shall cover the more common version first – KPIs – and then the less commonly used objectives and key results (OKRs).

11.3.1 KPIs

The term KPI is probably used in most organisations but is not always well understood. The reason is that most organisations fail to distinguish a performance indicator from a key performance indicator. The two may sound largely the same but, if the distinction is applied correctly, it makes a significant difference to your organisation and the way in which it operates its MI approach.

A KPI is a performance measure that demonstrates how effectively an organisation is achieving its critical objectives. KPIs are used to track performance over a period of time to ensure the organisation is heading in the desired direction, and they are quantifiable, guiding whether activities need to be dialled up or down, resources adjusted or management attention focused on understanding what may be holding the organisation back. In many sectors, KPIs have become so well established that there are norms in the types of KPIs used, and in some cases organisations may even know where they stand against industry benchmarks for some of them. The important factor to determine is what constitutes a KPI in your organisation.

I have seen organisations produce what are described as KPIs running to 100 pages or more each month, yet I have still to find anyone at the head of an organisation who is abreast of that many KPIs.

At one of the first conferences I ever attended, nearly 30 years ago, a wise individual stated that the best way to establish organisational KPIs was to ask the chief executive (or similar post in your own organisation) what keeps them awake at night in terms of the performance of the organisation. The presenter suggested, based on many years’ experience, that the response would probably be between six and ten critical things that are top of the list of concerns, and for the more information-hungry CEO possibly a dozen. Those are your KPIs, he said, and anything else is background noise as far as that CEO is concerned.

I have to say that, in my discussions with CEOs, this mantra has proved to be well founded. Yet organisations convince themselves of the need to produce KPI reports in vast numbers; until the more recent transition to electronically presented dashboards delivered via more sophisticated MI tools, many were churning out almost encyclopaedias of KPIs every month.

The KPI juggernaut has been misused and abused in too many organisations to the extent it has devalued the concept of KPIs. KPIs used well – the ten things that really matter to an organisation – can, in my experience, be a real galvanising force to get focus and attention put in those areas which really can make a difference. The rest is a distraction, there through some misplaced view that more adds value when actually it detracts through losing the focus from where it needs to be.

In addition, the important element of a KPI is not so much the data itself as the quality of the narrative that adds interpretation, context and accountability to it. The narrative is often the part left to the last minute or ignored completely, yet a chart without a narrative is like a road sign without place names: it advises that something is happening but leaves those reviewing it in the dark as to why, or what is being done about it. The true subject matter experts in an organisation earn their value by supporting executive leaders, making the interpretation clear and directing attention to the decision to be taken.

The importance of KPIs to the data strategy implementation lies in the evidence they provide of the progress being made and the key decisions to be taken. There should therefore be little to report via KPIs: the status of the implementation may be a simple RAG status13 on a dial, accompanied by a narrative statement demonstrating the impact of the implementation to date in driving the change the organisation signed up to in the data strategy.

The real focus in performance reporting is where the vast majority of reporting across the organisation should be – performance indicators (PIs). These should align to the objectives of individuals and teams, unlike OKRs, and should therefore be easy to track through the activities of both the implementation team and the wider organisation where the implementation impacts.

11.3.1.1 PIs

The elevation of performance measures into an avalanche of KPIs tends to mean that the concept of PIs has been lost on many organisations. The PIs will typically fall into two camps: those which contribute to the KPI, and are therefore essential to track to consolidate into the KPI, and those which are stand-alone, not in the top ten or thereabouts of critical things to measure but need tracking for operational reasons or to support decision making at a lower level in the organisation.

PIs are as important as KPIs; without them the foundations of measuring performance would not hold together or be understood. Simply removing one letter does not mean they lack relevance or a place in the organisation. It is important to appreciate the role they play in the smooth running of the organisation: they track performance and, unlike KPIs, allow responsibility for tracking activity to be delegated within the organisation, providing much-needed bandwidth for the executive tier to focus on fewer things, with more time to devote to getting the big decisions right.

The data strategy implementation therefore runs on a range of PIs that steer the programme lead and sponsor in the right direction and ensure the top-level reporting is coordinated and aligned with the actual state of the implementation.

PIs should have a desired target, so there is clarity on where you wish to get to in a given time frame. If the indicator does not have a target, it is simply a metric that may provide some value in reporting status but does not add the value that a PI delivers in being able to track against a known goal. As a result, a PI should have objectivity through a quantitative assessment, along with a narrative to provide context in terms of progress. That measurement, aided by the narrative, should be informative of what is required to maintain or accelerate performance.

It is important to determine the appropriate PIs for the data strategy implementation to be able to track progress towards the milestones in the implementation plan. PIs should link with the delivery through to adoption of the data strategy as business-as-usual activity in the organisation, to ensure the full end-to-end activity is tracked. The PIs should be focused on what informs effective leadership in delivering the implementation programme and so should be enabling those within the programme to make the right decisions at the right time.

PIs will be a mix of lead and lag indicators: the former measure performance before the business or process result starts to follow a particular pattern or trend, whilst the latter measure performance after the business or process has followed a pattern or trend and are used to confirm long-term trends. This enables change over time to be assessed through both aspects of measurement.

There will also be tracking of other routine metrics, such as budgets, resources, risks, issues and dependencies in addition to the PIs specific to the implementation deliverables, which of course form part of the effective governance of the data strategy implementation.

11.3.2 OKRs

One of the most effective ways to develop a performance framework to capture benefits is a method known as objectives and key results (OKRs).14 It is a particularly effective approach to adopt in an Agile environment, where the setting of goals needs to be closely aligned to the strategic outcomes that matter most to the organisation. The OKR methodology was developed by Andy Grove at Intel in the 1970s, so it has been around for some time, but it gained a new lease of life following the publication of John Doerr’s Measure What Matters in 2018.15 OKRs are now in use at Google, Amazon, Uber and Airbnb, amongst many others.

The smart element that OKRs bring is the specification of a way to measure achievement – the principle of OKRs is that they must be measurable, flexible, transparent and aspirational and operate outside the usual framework of performance or pay reviews. Most organisations track OKRs quarterly, to assess progress as measured by outcomes against their strategic goals.

The premise of OKRs is to keep objectives and results simple and flexible, ensuring they align with business goals and enterprise initiatives, guided by regular reviews to assess progress during the quarter. The intent is to keep OKRs clear, accountable and measurable, with between three and five objectives recommended at a high level, each tracked by three to five key measures. They should be ambitious, even uncomfortable, goals that challenge aspirations – in other words, stretch targets.

OKRs are measured between 0 and 1, or as a percentage between 0 and 100. The drive is to push on, to meet the stretch target but to regard success as anywhere between 70 and 100 per cent and failure as having achieved 30 per cent or less. Divorcing the OKRs from performance reviews enables those who are targeted to achieve them more ambitiously and with less concern about the impact of failure, leading to more innovative and experimental thinking being deployed to explore ways to achieve the desired outcome.
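As a minimal sketch of how that grading might be recorded (the objective and key result scores below are illustrative; the 0.7 and 0.3 cut-offs simply follow the convention described above), an objective’s score is the average of its key result scores on the 0 to 1 scale:

def grade_objective(key_result_scores: list[float]) -> tuple[float, str]:
    """Average the 0-1 key result scores and label the outcome.

    Follows the convention described in the text: 0.7 or above counts as
    success on a stretch target, 0.3 or below as failure, anything in
    between as partial progress.
    """
    score = sum(key_result_scores) / len(key_result_scores)
    if score >= 0.7:
        label = "success"
    elif score <= 0.3:
        label = "failure"
    else:
        label = "partial progress"
    return score, label

# Illustrative objective: three key results scored at the quarterly review.
score, label = grade_objective([0.9, 0.6, 0.7])
print(f"Objective score {score:.2f} - {label}")  # 0.73 - success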

The rationale for quarterly measurement is that these are typically strategic goals, and therefore progress is expected to take longer than a single quarter to be achieved on any of the three to five objectives set.

The reason OKRs work well in a strategic sense is the link between an aspirational goal and the clarity of measurement. They also have an assigned owner, which provides an effective link to driving strategy implementation forward in a way which recognises that the desired outcome is to realise change within the organisation. The flexibility of OKRs – you are encouraged to ratchet up the target if it emerges that it can be obtained with confidence, potentially by at least 30 per cent further stretch – can enable the strategy implementation to be more dynamic and push on further than planned if the opportunity arises and the conditions are right to do so.

In terms of the format of OKRs, the objectives set the target for what you wish to achieve. In a strategic sense, this is the end point of the particular objective being defined, which may be one of the elements called out in a waymarker or a key milestone on the implementation plan. There are two types of objectives in the OKR methodology.

  • Committed objectives – goals an individual, team or organisation has committed to achieve, regardless of wider events, that are suitably resourced with people, money and time to make them achievable. The committed objectives will be measurable in an unambiguous way.
  • Aspirational objectives – may also be referred to as ‘moonshots’ as they are goals an individual, team or organisation aspires to get to and are typically where the organisation is pioneering or innovating and so cannot be absolutely certain in terms of approach, or even the resources needed to achieve them. As a result, the measurement will be uncertain and the quarterly reviews will potentially need to explore the learning gained, and reassess the direction to be taken and the goal definition.

The key results are often referred to as waypoints, or steps, along the way to demonstrate the progress being made in achieving the destination – the objective. Critically, key results should always measure outcomes, not outputs, since the focus is on the end result in terms of impact. This is central to why OKRs are used successfully in a host of organisations – it is the emphasis on measuring value-based results as opposed to activity-based results that differentiates the OKR methodology. It permits a level of licence to be taken in the way in which the result – the outcome – is achieved, enabling innovation, experimentation and an agile approach to be adopted that makes OKRs an empowering way for individuals and teams to work through challenges in an unconstrained manner to achieve success. This fosters collaboration whilst ensuring alignment of effort to work towards a common goal.

The challenge with using OKRs is to focus on just three to five objectives – sounds simple enough, but so many organisations follow the ‘if it moves, track it’ philosophy such that they can’t see the wood for the trees. These are the strategic priorities for the organisation, even if they change periodically from one quarter to another, and so making significant progress on them will be moving the organisation forward at a greater pace than would otherwise be the case.

Similarly, these are not performance measures in the traditional sense of reviewing personal achievement, hence they have to be stretching, and likely unachievable – the moonshot as a destination should set your frame of reference. In the strategy implementation, these might be those areas of improvement within the organisation that involve having to try something for the first time – AI, for example, or other forms of advanced analytics to crack a wicked problem that has been challenging the organisation for some time. Hitting 70 per cent should be regarded as success, and so this needs to be embraced as aspirational and challenging to get the most inventive thinking applied by the team to solve the objective before them. These are not simply tasks; they are transformative challenges and must be measurable as value-based results.

It is also important to recognise that OKRs are a tool for aligning your organisation, and so they are ideally suited to strategy implementation. Therefore, consider how the objective is transformational, what the wider impact on the organisation is, and how the result can be measured in terms of having delivered value through its adoption by the relevant part of the organisation on an enduring basis.

I would recommend regular reviews of OKRs – weekly or at least fortnightly, in a dynamic, informal manner – to ensure progress is being driven throughout the team. If there is a strong case to adjust the OKRs within the quarter, then do so; otherwise consider a rebalancing or recasting of the result as part of the quarterly review. There is little point in persisting with an outcome once you recognise it will either be reached early (this is meant to be challenging, after all) or is unrealistic: progress needs to be made, and if a target is unachievable it is better to reset it than to knowingly fail.

In the much more structured quarterly reviews (and potentially monthly meetings, depending on how you construct these), assess the confidence scores attached to each objective and the relevant measures. A traffic-light approach, in which red signals off-track, yellow a need to monitor and green on-track, categorises progress against each measure. This will enable the group to reflect on where pressure points lie and whether the measure(s) need rebalancing or recasting, and to flex either resources or the current approach to the objective.
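One possible way to apply that traffic-light categorisation to the confidence scores is sketched below; the thresholds are my own illustrative choices rather than a standard, and the objectives named are hypothetical.

def rag_status(confidence: float) -> str:
    """Map a 0-1 confidence score onto a traffic-light category.

    Illustrative thresholds: below 0.4 is off-track (red), 0.4 to 0.7
    needs monitoring (yellow), above 0.7 is on-track (green).
    """
    if confidence < 0.4:
        return "red"
    if confidence <= 0.7:
        return "yellow"
    return "green"

# Hypothetical quarterly review inputs: objective -> current confidence score.
review = {"Single customer view live": 0.8, "Data quality rules automated": 0.35}
for objective, confidence in review.items():
    print(f"{objective}: {rag_status(confidence)}")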

11.4 EARNED VALUE

Some organisations have adopted the earned value management (EVM) approach to programme delivery, and so you may find your implementation programme needing to report according to this performance methodology.

The Association for Project Management (APM) defines earned value16 as providing information which enables effective decision making by knowing: what has been achieved of the plan; what it has cost to achieve the planned work; if the work achieved is costing more or less than was planned; if the project is ahead of or behind the planned schedule.

EVM provides a detailed analysis of programme delivery based upon four key data elements to derive the assessment, namely:

  • planned value (PV) – i.e. what we are going to do, the plan: the schedule for the expenditure of budgeted resources as necessary to meet project scope and schedule objectives;
  • actual cost (AC) – i.e. what the work achieved actually cost;
  • earned value (EV) – i.e. what the amount of work achieved should have cost, according to the planned budget: the earned value for the work actually achieved;
  • estimate at completion (EAC) – i.e. what the project is expected to cost in total: the ACWP (actual cost of work performed) to date, plus the most knowledgeable estimate of remaining requirements, scope, schedule and cost.

Utilising the EVM methodology, it is easy to identify the progress of the programme in terms of the value delivered for the cost incurred and time spent. Therefore, it provides a focus on the value-add of the programme rather than the use of resources being as scheduled, for the value delivered is the true measure of effectiveness of a programme.
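By way of illustration, the standard derived measures can be computed directly from those data elements. The sketch below uses hypothetical figures; note that the budget at completion (BAC, the total planned budget) is an input I have added for the estimate-at-completion calculation, and the EAC formula shown is only one of several in common use.

def evm_summary(pv: float, ac: float, ev: float, bac: float) -> dict:
    """Standard earned value measures from planned value (PV), actual
    cost (AC), earned value (EV) and budget at completion (BAC)."""
    cpi = ev / ac  # cost performance index: value earned per unit spent
    spi = ev / pv  # schedule performance index: progress against plan
    return {
        "cost variance": ev - ac,      # negative means over budget
        "schedule variance": ev - pv,  # negative means behind schedule
        "CPI": cpi,
        "SPI": spi,
        # One common estimate at completion: actual cost so far plus the
        # remaining work re-priced at the current cost efficiency.
        "EAC": ac + (bac - ev) / cpi,
    }

# Hypothetical mid-programme position (all figures in £k).
print(evm_summary(pv=500, ac=450, ev=400, bac=1200))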

The EVM approach is highly effective but needs a significant amount of data and a highly structured approach to programme implementation through to delivery to be in place at all times to ensure the integrity of the data. It is therefore essential to undertake significant planning and effort to get EVM right, as trying to apply it retrospectively whilst a programme is in-flight is extremely challenging, as I know only too well – inevitably, data will not be constructed in a way to support EVM or is of poor quality or missing, undermining any effort to construct a historic view of EVM to build upon.

11.5 MATURITY ASSESSMENTS

The use of maturity assessments has been covered extensively elsewhere in this book. The point to be made here is that these are critical to the baselining efforts undertaken to support measurement and are high profile in their commissioning and delivery, which helps in providing a recognised input to the process of measuring data strategy implementation.

Which assessment is used is not a consideration here; however, capturing the baseline and establishing a review process clearly need to be built into the measurement process.

There are various ways in which this can be undertaken, and there are benefits with each option to be considered, so it is entirely up to you, and the importance placed on independence of measurement by the organisation, how you proceed.

You may have undertaken the initial baselining of maturity internally, using the skills and knowledge within the organisation. If so, be wary of letting the integrity of the process slip when reviewing progress: there is little to be gained from grade inflation, raising scores on flimsy evidence in order to claim progress that has not actually been achieved. I would go so far as to suggest this is a waste of resources across the entire organisation, as it does not deliver improvement but takes up considerable time and effort on something which is clearly flawed. It is therefore important to be able to demonstrate that the process has been conducted with absolute rigour, even to the point of being a harsh judge of the evidence gathered and progress made, so that everyone is satisfied that any improvement in the maturity assessment score is demonstrable.

Of course, the alternative is to hire an external resource to deliver the maturity assessment and/or conduct the reviews using external help. If you do not have the capabilities in-house this might be the only option available to you, though I would strongly recommend upskilling the in-house team as soon as possible, so there is resource available within the implementation team who can direct and interpret what is needed in order to demonstrate progress in between assessments. The benefit of taking this approach is an element of objectivity, assuming the external assistance is not swayed by senior stakeholders to be lenient in their scoring. In such a case, I would question the integrity of the external support and challenge whether that individual or team is fit to conduct the assessment.

A further benefit of using external assistance is the wider perspective of other organisations undertaking the same maturity assessment approach to drive their own data strategy. This enables an element of benchmarking – depending on how willing other clients are to share experiences – and learning from what other organisations have done to drive improvements across their organisations. This can take some of the variability out of the process, utilising the experience of others to potentially fast-track improvements in your own organisation.

In some organisations, the maturity assessment is a key element of senior stakeholders’ annual objectives, and hence the integrity behind the review and scoring matters all the more because there is a financial implication. Whilst this can be an effective means to drive attention, it may not drive the right behaviour, as it could lead to a focus on what drives the scoring rather than on what is right for the organisation to achieve at that point in time. For instance, a key infrastructural or behavioural activity may be needed to release value later and, as a result, improve the scoring; focusing on that enabler will not shift the score immediately, despite it leading to progress later. Therefore some form of governance around prioritisation is needed in such cases, to drive the appropriate corporate behaviours amongst stakeholders and keep the focus on what is right for the organisation to enable it to make progress.

A key part of the assessment process is the debrief following the assessment. You should be made aware of the gaps that make each of the criteria fall short of the next level, enabling you to construct a programme within the implementation to remediate or address those gaps according to the prioritisation assigned to each of the criteria in terms of importance to the organisation. The benefit of the maturity assessment approach is the standardised methodology that makes it easier to construct a plan to work with it in readiness for the next scheduled assessment – I would recommend an annual assessment as part of the data strategy implementation.

11.6 DATA AS AN ASSET – REALISING VALUE

The business world is gradually waking up to the realisation that data is an asset, even if it is not recognised financially as something to put on the asset register with a value. After all, most organisations operate through the medium of data exchange, trading information in the form of contracts, specifications and transactions, as well as holding internal data such as registers of employees, customers and suppliers. All of these are valuable to an organisation, and are fundamental to its ability to operate. Without a grip on your data, you are losing value every minute of every business day.

There is a view that data is the new oil, a phrase coined by Clive Humby back in 2006 which seems to have had a new lease of life in more recent years.17 Whilst I understand the sentiment behind the quote, I tend to disagree: data does not need to be captured in the same complex way as oil, which requires deep drilling, often in hostile environments; data is available far more readily for use in getting the organisation operational.

I believe data is like water, essential to the running of any organisation, and without it there would be no organisation to speak of. It is renewable and simple in structure, yet easily degraded and diminished in value through a lack of cleanliness, and, if not managed carefully, can be destructive – just like flooding or a major leak can wreak havoc on a property, so can data if it is not managed compliantly or there is a breach. Like water, which is used extensively (for example, from manufacturing to our leisure and overall wellbeing), data is a truly versatile asset. We can use its immense flexibility to drive decisions and support risk assessments and meaningful engagement in the right way at the right time with our customers or key stakeholders. Therefore, data is the new water for me.

The lack of recognition of data as a key asset with a value has not prevented organisations from being sold at premiums that belie their asset book value. This is particularly the case for organisations operating in a tech world in which knowledge of the customer, and the ability to manage a one-to-one relationship remotely and build customer loyalty, is worth much more than the stock of the organisation.

Doug Laney has written a book, Infonomics, focused specifically on how to monetise, manage and measure information as an asset, and I recommend exploring it if this topic interests you.18 The book is rich in detail, but there are some specific points I want to highlight here for you to consider in developing your data strategy and framing its execution in the right way, to open the eyes of key stakeholders who may control or influence the decision on whether to invest in a data strategy.

Laney notes how James Tobin, an American economist and Nobel laureate, developed a simple ratio known as ‘Tobin’s q’ to understand the relationship between a company’s market value and the replacement value of its tangible assets. Since reaching 0.4 in 1945, the ratio has more than doubled to be regularly above 1.0 in any given year. For those who have invested in what Gartner recognises as ‘info-savvy’ behaviour,19 the ratio is nearly twice as much as the market average. Further, the information-based organisations out there have a q value three times greater than the market average as they operate with fewer tangible assets and more focus on the customer.
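For clarity, Tobin’s q is simply the ratio of a company’s market value to the replacement value of its tangible assets; a q well above 1 suggests the market is pricing in assets the balance sheet does not capture, information assets among them. A minimal sketch with made-up figures:

def tobins_q(market_value: float, tangible_asset_replacement_value: float) -> float:
    """Tobin's q: market value divided by the replacement value of tangible assets."""
    return market_value / tangible_asset_replacement_value

# Hypothetical company: valued at 5bn against 1.8bn of tangible assets.
print(round(tobins_q(5_000_000_000, 1_800_000_000), 2))  # 2.78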

A practical example of this is the initial public offering of Facebook in 2012. With reportable assets of $6.6 billion and a conservatively predicted post-IPO market capitalisation of $75 billion, the non-reportable assets were predominantly information assets. This translated to $81 per user account, a figure that exceeds $200 today.

Data has a value, without which an organisation is largely a shell, worthless and of limited appeal other than as a means of sweeping up fixed assets at a knock-down price. It is the lifeblood of an organisation, so whether you regard it as the water that is essential to life or the blood circulating around the body, without it our organisations are not functional.

It is well worth considering, as the data strategy is drafted, building in a perspective of monetising, managing and measuring data. You are seeking an investment in the core asset that enables your organisation to operate. It is more resilient than the people, processes or technology of your organisation, although it needs those three to breathe life into its value. If you consider the investment discussions that almost certainly take place within your organisation, particularly if it is asset-rich, surely data warrants at least an equal footing in terms of asset management and investment. Without reflecting on an investment strategy, you are almost certainly diminishing the value of your biggest asset without even recognising it exists.

Many organisations have started down the path of assigning a value to data and seeking to manage that value in much the same way as the value of other assets is controlled and managed. This is not an easy undertaking, and it needs significant commitment from within your organisation. However, would it not be useful to appreciate how critical data is to your organisation, and to be able to wrap controls around it to ensure it has the investment needed to retain, or grow, its value?

I have often found myself arguing that allocating funding to data quality is less of an expense, more of an investment. As with any asset, the longer it lacks investment, the more it will degrade and, depending on the volatility of the type of data, it may become worthless or even take on negative value if inaccurate (for instance, failing compliance rules as well as preventing effective contact with your customer). Without being able to articulate value, the case for investment becomes subjective and easy to deflect, even if the implications are at best only partially understood.

For this reason, data quality programmes are often commenced as a reactionary plan to deal with non-compliance or to address an immediate issue (for example, the failure to integrate data into a new system because of inconsistencies in format or dates). This is to fundamentally misunderstand the importance of data, and to fail to recognise it as an asset – some even look at the cost of data quality as if it is money which will never be recouped, which is very short-sighted indeed. As with any asset, routine maintenance is required to keep it in good condition for its continued use, and investment in the quality of data will be repaid many times over.

As you progress through the data strategy definition, making the case to senior stakeholders and seeking to find a willing, committed sponsor, remember the importance of data as an asset in pitches to those who have the decision as to whether to invest or not. Consider the opportunity to move to a way of recognising the inherent value that your data brings to your organisation, seek to redefine the agenda in the way data is considered in your organisation, and identify ways in which you could begin to make data a recognised asset and managed as such alongside those other key assets the organisation recognises. If you can succeed, it will make your case not only stronger, but more enduring, changing the philosophy of your organisation to one potentially starting out on the journey to becoming a multiplier on Tobin’s q ratio compared to its rivals.

It is also essential to keep a focus on data as an asset throughout the data strategy implementation. The potential to realise value through the delivery of key milestones is a powerful way in which to demonstrate the overall success of the data strategy implementation itself, and to bank those gains as you go. Recognising where data value can be realised may well determine some of the prioritisation calls you make as you set out the course of the implementation plan, and remember to build in the means to measure these gains and continue to track beyond the data strategy implementation.

If you find that the data strategy implementation does not realise the benefits you had expected, and as a consequence the measures are lower, explore why. It may be that changes are required to the way in which data is exploited. In my experience, data is rarely allowed to take the straightest line to a decision; many hands get in the way, either distorting the data or causing delays that make it less effective by the time it is used. (One of my favourite phrases is ‘making the right decision at the wrong time’, which reflects the impact of delaying a decision beyond what the data has the elasticity to cover, such that the data needs to be refreshed to be reliable for the intended decision.)

It is almost certainly the case that data is only as good as the understanding those looking to exploit it have of its metadata – the information that describes it and clarifies its provenance. Of course, data quality often depends on how it was captured, and on the individual recording it in the first place, but that is another issue entirely. It is not uncommon to see decisions based on the wrong data, or variations created by reworking the data into what an individual believes it should be rather than trusting it to be correct to begin with. The irony of such action is that this subjective view of what the data ‘should’ be fails to recognise the dynamic nature of data, and such changes cannot be repeated reliably.

In one organisation, I recall the number of employees being changed several times over as the data was amalgamated and ‘refined’ by those who handled it prior to presentation to the executive. Unsurprisingly, queries about the data could never be acted upon because of the way in which it was manipulated through the process, yet this had been the case for several years, with no one seeming able or willing to grasp the fundamental data problem.

If you have the opportunity, look for ways to introduce the concept of data as an asset, with a value assigned to it, through the data strategy and its implementation. I am certain this will be one of the biggest and most significant changes you can introduce into your organisation and, in many ways, it can be done in a relatively low-key manner yet deliver a very significant impact.

11.7 TEN TO TAKE AWAY

To summarise, here are ten things to take away from this chapter:

  1. Avoid embarking on a data strategy solely for compliance reasons. This misses the importance of value and the asset that is inherent in data, which you need the organisation to focus on.
  2. Be clear on the baseline at the start of the implementation phase and its calculation, as this needs to be followed through to demonstrate progress.
  3. Establish key evaluation criteria to measure success of the implementation programme. An effective approach to consider is the Magenta Book produced by the UK government.
  4. Identify benefits and track these throughout the implementation programme. These should be captured as part of the requirements process.
  5. Implement a performance management approach to track programme delivery against a number of measures, consolidating these to support KPIs which will be of keen interest to your sponsor and executive stakeholders.
  6. Consider the use of OKRs for performance measurement; their focus on results makes them an effective way to establish the credibility of the data strategy and its implementation.
  7. Apply maturity assessments throughout the implementation as a means to track progress versus the baseline (assuming the maturity assessments were conducted prior to the implementation commencing).
  8. Consider the governance and controls you might deploy to utilise maturity assessments (and other performance measures) to maximum effect.
  9. Data is an asset – it is the new water: renewable, flexible, a key ingredient in so much of what we do.
  10. Explore assigning value to data; it helps focus minds on its potential and importance, supporting the ‘data as an asset’ principle.

 

1 D. Sivers, Anything You Want: 40 Lessons for a New Kind of Entrepreneur. New York: Portfolio Books, 2015. The quotation is often attributed to Steve Jobs, though no source provides provenance.

2 Executives’ Digest: Summaries of Timely Articles of Special Interest to Business Men. Boston, MA: Baker Library at Harvard University, 1951.

3 HM Treasury, Magenta Book: Central Government Guidance on Evaluation. 2020. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/879438/HMT_Magenta_Book.pdf.

4 NSW Government Department of Premier and Cabinet, NSW Government Program Evaluation Guidelines. 2016. https://arp.nsw.gov.au/assets/ars/f506555395/NSW-Government-Program-Evaluation-Guideline-January-2016_1.pdf.

5 R.S. Kaplan and D.P. Norton, The Balanced Scorecard: Translating Strategy into Action. Boston, MA: Harvard Business Review Press, 1996.

6 Harvesting a customer would involve continuing to take revenue and profitability but not to increase investment; in other words, to make the return on past investment, hence the term harvest. Divesting a customer would be to reduce exposure, either in supporting that marketplace, investing in the customer or simply because the segment that customer is part of is no longer regarded as a core market.

7 For further information I can recommend: P.G. Schrader and K.A. Lawless, The Knowledge, Attitudes, & Behaviors Approach: How to Evaluate Performance and Learning in Complex Environments. Performance Improvement, September 2004.

8 B.J. Fogg, A Behavior Model for Persuasive Design. Proceedings of the 4th International Conference in Persuasive Technology. 2009. https://doi.org/10.1145/1541948.1541999.

9 Kirkpatrick Partners, www.kirkpatrickpartners.com/our-philosophy/the-kirkpatrick-model.

10 www.behaviormodel.org.

11 John Thorp and Fujitsu Consulting Center for Strategic Leadership, The Information Paradox. Whitby, Canada: McGraw-Hill Ryerson, 2003.

12 For a useful overview of KPIs and OKRs, this blog provides a brief comparison: J. Wishart, OKR vs KPI: What’s the Difference Between OKRs and KPIs? 2021. https://www.rhythmsystems.com/blog/okrs-vs-kpis-whats-the-difference-infographic.

13 A RAG (red, amber, green) status is a way to illustrate performance or progress, typically in project management and business reporting. Green would indicate the activity is on track, amber that there are concerns that are being managed but these represent a risk, whilst red indicates a project failing and in need of intervention or escalation.

14 This article provides a useful overview and starting point for further research: S.J. White, What is OKR? A Goal-Setting Framework for Thinking Big. 2018. https://www.cio.com/article/3302036/okr-objectives-and-key-results-defined.html.

15 J. Doerr, Measure What Matters. OKRs: The Simple Idea that Drives 10x Growth. New York: Portfolio Penguin Random House, 2018.

16 Association for Project Management Special Interest Group, Earned Value Management: APM Guidelines. Princes Risborough: APM, 2014. https://www.apm.org.uk/media/31993/evmguide-no-print.pdf.

17 Michael Palmer, Data is the New Oil. https://ana.blogs.com/maestros/2006/11/data_is_the_new.html.

18 D.B. Laney, Infonomics: How to Monetize, Manage, and Measure Information as an Asset for Competitive Advantage. New York: Bibliomotion Inc, 2017.

19 Microsoft Links Into a Treasure Trove of Information. https://blogs.gartner.com/merv-adrian/2016/06/14/microsoft-links-into-a-treasure-trove-of-information/.
