CHAPTER 8
Outcomes: Digital Transformation KPIs Are Worth the Pain and Resistance

Finally, we have arrived at the key framework element that is closest to the digital transformation value impact we are looking for—namely, outcomes. As the metaphorical products of our digital transformation reaction process, these outcomes are structured in two different categories with very distinct characteristics: (1) subjective digital maturity status, and (2) objective digital impact information.

The first category, subjective digital maturity status, includes all digital transformation products that are usually not observable by external shareholders and stakeholders without insider knowledge. This category forms the basis of most digital transformation research, and it mostly builds on the already extensively explained—and criticized—maturity models from Part I of this book. Unfortunately, this category is of little further value for any useful empirical findings because, under the assumption of efficient capital markets (Fama 1970), external observers of digital transformation processes cannot access or rely on this information. Still, many firms and advisors will happily take the simple path of subjective measurement in its many flavors, mostly to get at least some grasp of where they stand relative to a predefined internal maturity goal and to build a “burning platform” that provokes some reaction on an abstract, functional, or aggregate level in internal digital transformation strategy presentations.

As explained in Chapter 2, you should under no circumstances accept this as an easy way out. With some hard work, and very likely against considerable organizational resistance, you can reach a higher level of understanding by working with more concrete outcomes. For this you need to leverage the second category, objective digital impact information. This must take the form of specific, measurable, accepted, realistic, terminated (SMART) digital transformation descriptions (Kawohl and Hüpel 2018). Fortunately, capital market–listed firms must publish or announce such information, when it is assumed to be value relevant, as part of their legally obligated or voluntary external communication. But research insights on these observable and therefore more objective digital transformation outcomes are scarce, even though they would be highly relevant and useful for us.

To give you an overview of some current state‐of‐the‐art research, the following summary table (Table 8.1) clusters objective digital transformation impact products (that is, outcome proxies and outcomes) into three groups with increasing tangibility:

  1. Replicable references
  2. Operational KPIs
  3. Financial KPIs

Obviously, every product/outcome can be the materialization of the digital transformation accelerators and decelerators explained in detail in the previous chapters, depending on its expressed impact or trend versus the previous status quo. Table 8.1 provides examples of each category and demonstrates the apparent focus in existing digital transformation research on replicable references and, to some extent, operational key performance indicators (KPIs) as outcome proxies. Research on real outcomes, that is, financial KPIs, is almost nonexistent. No matter which of these categories we look into, we unfortunately find one thing that is mostly missing: the link to externally measurable market value, a key objective of this book.

Finally, a word of advice before you continue this chapter: Even when you believe the impact of digital transformation cannot be replicated or even measured in official reports, it is often still observable in what people do, and in what they say they do, beyond official reporting.

Replicable References as a Proxy to Measure the So Far Unmeasured

Replicable references, our first objective outcome category in digital, can take many different forms, and their number and frequency seem to keep growing (Kawohl and Hüpel 2018). Stories on firm‐wide digital transformation lighthouse efforts have therefore at least attracted some initial attention in digital transformation research.

Replicable References: Advancements

Replicable references have one main advantage: they have turned out to be the easiest to capture as an outcome proxy. They thus help to significantly advance the understanding of firms' digital transformation impact by defining adequate keywords and then applying qualitative text analysis to public announcements (Beutel 2018; Kawohl and Hüpel 2018; Subramani and Walden 2001). They can also form the basis for much deeper insights based on recent advancements in programmatic natural language processing (NLP), such as dependency analysis linking statements to measurable facts, or sentiment analysis (see Appendix B).
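To make the keyword-based approach concrete, the following Python sketch counts dictionary terms in report text. The keyword list and the sample text are purely illustrative assumptions; they are not the dictionaries used in the cited studies.

```python
import re
from collections import Counter

# Hypothetical keyword dictionary -- illustrative only, not the dictionary
# used by Beutel (2018) or Kawohl and Huepel (2018).
DIGITAL_KEYWORDS = {"digitalization", "digital transformation", "e-commerce",
                    "cloud", "artificial intelligence"}

def count_digital_mentions(report_text: str) -> Counter:
    """Count occurrences of each dictionary keyword in a report."""
    text = report_text.lower()
    counts = Counter()
    for kw in DIGITAL_KEYWORDS:
        # Word boundaries avoid matching inside longer words
        counts[kw] = len(re.findall(r"\b" + re.escape(kw) + r"\b", text))
    return counts

# Invented sample snippet standing in for an annual report
sample = ("Our digital transformation accelerated in 2020. "
          "Cloud adoption and e-commerce revenues grew, and "
          "digital transformation remains a board priority.")
mentions = count_digital_mentions(sample)
print(mentions["digital transformation"])  # 2
print(mentions["cloud"])                   # 1
```

In a real study, the counts per report would then be normalized (for example, by report length) before being related to market data.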

TABLE 8.1 Product/outcome clusters depending on their tangibility.

Replicable References
  Tangibility: Least tangible
  Examples: Mentions/statements on digital transformation lighthouse activities, projects, pilots, etc.
  Impact direction (acceleration or deceleration): Can be accelerator (positive impact) or decelerator (negative impact)
  Relevant digital transformation research: Some:
    • Text analysis of digital orientation (Beutel 2018): Empirically positive relationship to value demonstrated
    • Text analysis of digital transformation linked to analyst recommendations (Hossnofsky and Junge 2019): Empirically positive relationship to value demonstrated for midstage, but not for early and late stages of analysis
    • Market value and digital maturity (Zomer, Neely, and Martinez 2018): Reciprocal analysis; companies that increase market value are more digitally mature
    • Applying real options to digital transformation (Schneider 2018): Conceptual relationship to value demonstrated
    • Announcements on E‐Commerce/digital (Dehning et al. 2004; Subramani and Walden 2001): Empirically positive relationship to value demonstrated
    • Linking digital activity announcements in reports and investor calls to market value (Chen and Srinivasan 2019): Positive relationship demonstrated
    • Digital innovation/patents (Mani, Nandkumar, and Bharadwaj 2018): Influence of market expectations on the relationship between digital innovation and firm performance

Operational KPIs
  Tangibility: More tangible
  Examples: Key performance indicators (KPIs), for example, on customer experience or employee experience (for example, net promoter scores/NPS, CSI, digital first ratios), productivity parameters (for example, self‐service ratios, first resolution rates, automated transaction shares)
  Impact direction (acceleration or deceleration): Can be accelerators (positive impact) or decelerators (negative impact) depending on their incremental effect
  Relevant digital transformation research: Little:
    • Intangibles like customer satisfaction, quality, processes, customer relationships, quality of human capital, etc. driving value in digital (White 2016): Relationship to value not analyzed
    • Steering models based on “management dimensions” like “community, partner, portfolio and resources” for the digital age (Schönbohm and Egle 2017): Relationship to value not analyzed
    • Dashboards for digital innovation (Mullins and Komisar 2011): Relationship to value not analyzed

Financial KPIs
  Tangibility: Most tangible
  Examples: Directly linked digital transformation data from P&L statements, balance sheets, cash flow statements (for example, digital business revenues or cloud cost)
  Impact direction (acceleration or deceleration): Can be accelerators (positive impact) or decelerators (negative impact) depending on their type (for example, revenues versus cost)
  Relevant digital transformation research: Very little:
    • Indirectly, as one criterion in qualitative text analysis (Kawohl and Hüpel 2018), could fit into “replicable references” category: Relationship to value not analyzed

Replicable References: Limitations

However, given the lack of measurability of these stories and the implied subjectivity of their impact implications, all findings must be taken with a grain of salt. In digital transformation research, few authors have taken the necessary steps from mere announcement analysis toward working out actual value implications (Beutel 2018; Chen and Srinivasan 2019; Hossnofsky and Junge 2019; Subramani and Walden 2001; Zomer, Neely, and Martinez 2018). They all agree that there might be a selection bias when identifying the replicable references, regardless of whether this is done by humans or machines, potentially leaving relevant terms out of sight or adding too many. From my perspective, this concern can be largely neglected. As a mitigation, I have chosen a mixed approach: a manually defined dictionary, which carries a definite selection bias risk, is expanded by a wide range of natural language processing (NLP) based word vectors and checked for validity by a neutral second digital transformation expert to eliminate bias as far as possible. There could also be a selection bias in the NLP code itself and in the preprocessing steps taken to clean up the reports (see Appendix B). While measures have been implemented to reduce this bias (e.g., random checks), the concern cannot be fully discarded. Even sophisticated NLP code cannot (yet) replace human understanding of the implied meaning of a text, eliminate all false positives (e.g., niche company names), or identify dependencies across paragraphs. As the amount of work required to do the same analysis manually is prohibitive, we must live with this concern until future NLP innovation eases the issue. We can also assume that because the chosen digital transformation proxies are generated in the same way for every observation, the results remain valid enough for careful interpretation.
Also, there might be a bias in the data in the sense that the chosen proxy of replicable references does not adequately represent a company's true digital transformation status (for example, if the covered companies predominantly report only positive digital transformation experiences, while companies with failures limit their communication about digital transformation). Like Chen and Srinivasan (2019), I do not see this as a major issue: under the assumption that the actual success of digital transformation efforts is not yet visible at the time of disclosure, it likely does not play a major role in influencing disclosure decisions. Nevertheless, this concern hints at a future research direction in terms of multidimensional proxies, capturing not only the proxy itself but also the sentiment associated with each individual proxy.
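The dictionary-expansion idea mentioned above can be sketched as follows: candidate vocabulary words are added to the dictionary when their word vectors lie close to a seed term. The three-dimensional toy vectors and the similarity threshold below are illustrative assumptions only; in practice, pretrained embeddings (for example, word2vec or fastText) and a human validity check would be used.

```python
import math

# Toy word vectors for illustration only -- real work would load
# pretrained embeddings rather than hand-crafted three-dimensional vectors.
VECTORS = {
    "digitalization": [0.90, 0.80, 0.10],
    "digitization":   [0.88, 0.82, 0.12],  # near-synonym of the seed
    "automation":     [0.50, 0.20, 0.60],  # related but more distant
    "dividend":       [0.10, 0.20, 0.90],  # unrelated finance term
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def expand_dictionary(seed_terms, threshold=0.95):
    """Add vocabulary words whose vectors are close to any seed term."""
    expanded = set(seed_terms)
    for word, vec in VECTORS.items():
        for seed in seed_terms:
            if seed in VECTORS and cosine(VECTORS[seed], vec) >= threshold:
                expanded.add(word)
    return expanded

candidates = expand_dictionary({"digitalization"})
# "digitization" passes the threshold; "automation" and "dividend" do not
```

Whatever the expansion produces would then go to the second expert for the validity check, which is where the residual selection bias is addressed.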

TABLE 8.2 Replicable references advancements and limitations.

Advancements:
  • Can be measured with simple textual analysis of comparable annual reports
  • Allows further insights based on recent advancements of programmatic natural language processing (like dependencies to measurable facts, sentiments, and much more)
  • Accepted mitigations against replicable references selection bias have already been developed

Limitations:
  • Potential selection bias in defining what counts as a replicable reference
  • Potential selection bias in any applied programmatic natural language processing (NLP)
  • Potential bias in what companies decide to publish on their digital transformation impacts

Table 8.2 summarizes the advancements and limitations of replicable references.

Operational KPIs Are Elusive, But Better Than Nothing

Our second objective digital transformation impact category, operational KPIs, covers data such as customer experience or employee experience perception (for example, net promoter scores or similar KPIs) and operational productivity parameters (for example, self‐service rates in customer service). Such KPIs are very common in daily management practice. Nevertheless, they have found few inroads into digital transformation research other than survey‐based discussions on relevant intangibles (White 2016) and initial efforts to develop new steering models based on “management dimensions” like “community,” “partner,” “portfolio,” and “resources” (Schönbohm and Egle 2017).

Operational KPIs: Advancements

Operational KPIs can bring one additional advancement to digital transformation value analysis on top of what even the most sophisticated replicable reference analysis can deliver. At least for internal analysis, they can form the baseline for applying more advanced models to estimate the impact on actual financial KPIs. Examples include recent experiments in developing causal models to estimate the impact of digital customer satisfaction improvement measures on customer experience KPIs like net promoter score (NPS). Based on the results of these analyses, modern approaches then try to find (causal) links from NPS to customer lifetime value.
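As a deliberately simplified stand-in for such models, the sketch below chains two ordinary least-squares fits: from a digital customer-experience driver to NPS, and from NPS to customer lifetime value (CLV). All figures are invented, and plain OLS of course only establishes correlation; the causal models referred to above require far more careful identification.

```python
# Illustrative two-stage linkage; every number below is made up.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

# Stage 1: share of customer journeys digitized (x) vs. observed NPS (y)
digital_share = [0.2, 0.4, 0.6, 0.8]
nps           = [10, 20, 30, 40]
a1, b1 = fit_line(digital_share, nps)   # b1: NPS points per unit of share

# Stage 2: NPS (x) vs. average CLV in currency units (y)
clv = [500, 700, 900, 1100]
a2, b2 = fit_line(nps, clv)             # b2: CLV per NPS point

# Chain the two fits: estimated CLV uplift from digitizing 10% more journeys
uplift = b2 * (b1 * 0.1)
```

With the toy data above, b1 is 50 and b2 is 20, so a ten-point increase in digital share maps to roughly 100 currency units of CLV uplift per customer; the point is the chaining mechanism, not the numbers.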

TABLE 8.3 Operational KPI advancements and limitations.

Advancements:
  • Ongoing innovations in causal models can provide first insights on the relationship between operational KPIs and financial impacts

Limitations:
  • Data not publicly available in any systematic way: Market value impact analysis not possible
  • Internal analytical cause‐and‐effect models (on financial KPIs) often still experimental and with substantial noise

Operational KPIs: Limitations

The most relevant downside of trying to leverage operational KPIs for systematic digital transformation value analysis is that, externally, this information is not available in any systematic way. This is no surprise, because no definite standards for publishing such information exist (and rightly so). Therefore, the key step of linking operational KPIs to market value is not possible from the outside. This does not mean that internal models should not be in your focus. While still experimental, I highly recommend that you constantly challenge your internal analytics teams and external analytics experts to experiment with this idea of using causal models to stay on top of the current innovation happening in this space. The initial successes I have recently seen from such models are quite promising, even though there is still substantial noise distorting the transparency of effects.

Table 8.3 summarizes the advancements and limitations of operational KPIs.

The Journey to Financial KPIs Is Cumbersome But Worthwhile

Our third category, financial KPIs (e.g., data from profit and loss (P&L) statements, balance sheets, and cash flow statements), quickly turns out to be the least covered outcome cluster for digital transformation, and where it is covered at all, it appears only as part of niche qualitative text analysis (Kawohl and Hüpel 2018). Instead, financial KPIs are, if at all, used as the dependent firm performance variable in empirical research. Apparently, conventions on how to capture digital transformation successes have not yet found their way into research and practice. This is not surprising given the manifold impacts any digital transformation can have on these KPIs.

Financial KPIs: Advancements

The journey to developing a better understanding of the internal value relationships in your organization (in the form of value driver trees and causal models) is still worthwhile. While this might not help you find a link to market value as such, you certainly need to go down this route, no matter what internal resistance you might be facing. At a minimum, this will ensure that your internal business cases are not just crystal‐ball exercises but are based on a learned and constantly improving understanding of what happens to your financials when you work on certain triggers. Furthermore, a systematic recording of the financial effects of what you do in digital allows you to decide what you want to communicate in terms of replicable references, and which facts (e.g., revenues from a certain digital product or business model) you want to report to your external shareholders and stakeholders.
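A value driver tree of this kind can be sketched as a simple function from operational drivers to a financial KPI, so that digital initiatives can be evaluated as driver scenarios. The driver names and all figures below are hypothetical illustrations, not a prescribed methodology.

```python
# Minimal value-driver tree: a digital self-service initiative feeds into
# customer-service cost, which feeds into EBIT. All numbers are invented.

def ebit(revenue, service_contacts, cost_per_contact,
         self_service_ratio, other_costs):
    """EBIT expressed as a function of operational drivers."""
    # Contacts handled by agents shrink as the self-service ratio rises
    agent_contacts = service_contacts * (1 - self_service_ratio)
    service_cost = agent_contacts * cost_per_contact
    return revenue - service_cost - other_costs

baseline = ebit(revenue=1000.0, service_contacts=10000,
                cost_per_contact=0.05, self_service_ratio=0.2,
                other_costs=500.0)

# Digital initiative scenario: self-service ratio improves from 20% to 40%
scenario = ebit(revenue=1000.0, service_contacts=10000,
                cost_per_contact=0.05, self_service_ratio=0.4,
                other_costs=500.0)
print(baseline, scenario)  # 100.0 200.0
```

Even a toy tree like this makes the business-case logic auditable: the claimed EBIT effect of the initiative is the difference between two fully specified driver scenarios rather than a single asserted number.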

Financial KPIs: Limitations

Financial KPIs for digital transformation value impact analysis share the same shortcomings as their foundational IT/IS and innovation predecessors: like other non‐accounting information, digital transformation effects can only be translated into financials via indirect conceptual value‐driver trees built on revenue and cost line items, as extensively demonstrated by Vartanian (2003) for innovation value research. For digital transformation, this idea is still in its infancy, so most experimental analyses will likely come with substantial bias and, even more important, noise from other factors (for example, nondigital measures or market/competition changes) that obscures what is actually happening.

Table 8.4 summarizes the advancements and limitations of financial KPIs.

TABLE 8.4 Financial KPIs advancements and limitations.

Advancements:
  • Gradual improvement of understanding of value‐driver relationships
  • Information repository to decide what facts to add in your replicable reference external communications

Limitations:
  • Value‐driver based analysis still in its infancy
  • Noise from other effects