
Recognizing and Avoiding Value Traps

In the previous chapter, we talked about the different ways your implementation can bring value to your organization. Now, we will address some of the common pitfalls that impact ServiceNow implementations and prevent the realization of the targeted value.

In this chapter, we will cover five value traps that recur across implementations. By steering clear of these issues, you will be far more likely to achieve your project’s objectives. If you’re a member of the team, this chapter will also help you articulate the risk of these value traps to your project leadership and argue more convincingly for an approach that avoids these common pitfalls. The value traps covered in this chapter are the following:

  • Replicating the current state
  • Ignoring the current state
  • Chasing the long tail
  • Not managing change
  • The science experiment

These value traps are common because, at first glance, the courses of action that bring them about are attractive to project teams and leaders. The result of these approaches, however, is consistently problematic, as it tends to impair your ability to deliver certain types of value.

For each trap, we’ll cover the approaches that lead to the value trap, the reasons why that approach is attractive, and the issues that it can cause. Finally, we’ll address alternative strategies and considerations that help provide you with options to balance these issues beneficially.

Replicating the current state

The value trap of replicating the current state occurs when an implementation defines a less-than-ideal process for deployment simply because that is how things have always been done in the past or in the previous system. The way things are done now is often called the current state, while the future design is the target state. Making the target state match the current state often leads to customizing the ServiceNow platform away from the out-of-the-box (OOTB) processes and towards an outdated, inefficient, or unmaintainable configuration that will complicate upgrades and lower your Instance Scan scores.

The tool replacement approach

The replication of the current state, also known as a like-for-like replacement, is most often encountered when the project is seeking the shortest path to go live with the new solution. These projects often have the goal of replacing some existing system, process, or technology and a time constraint under which they must execute the transition. Most people associate tool replacements with the decommissioning of a competing system – for example, in ServiceNow’s case, you might find BMC Remedy, HP Service Manager, or something similar as the incumbent technology. However, in addition to these cases, replacing an Excel file, SharePoint list, Access database, or even paper-based forms can lead to a tool replacement methodology. Even the replacement of a legacy ServiceNow instance can start to take on characteristics of a tool replacement.

Some elements of your implementation will likely target the replication of the current state, while others will avoid this approach entirely. As with other value traps, it may impact some, or all, of your project’s scope.

Arguments in favor of replicating the current state

Replicating the current state would not be a value trap worth mentioning if it weren’t frequently encountered in implementation projects, and it would not be as common as it is were there not many real or perceived advantages. In some cases, your project has been commissioned to deploy a more modern solution that avoids the costly maintenance of legacy systems or to address unstable and failing applications – the saying “time is money” applies quite literally as the costs to maintain existing technology grow. In other cases, there may be a great deal of institutional knowledge embedded in processes that have been developed over years or even decades. The idea of altering these processes can seem like an insurmountable amount of work that falls outside the budget for the implementation. The users of the old systems may also feel very comfortable with their processes and resist any changes to the way things have always been done.

It’s important to recognize some of the key benefits of a tool replacement methodology to understand why so many team leaders are attracted to it:

  • A tool replacement (also negatively labeled a tool slam) can appear to have far less process design required because you are simply adopting known and defined processes
  • Replicating the current state creates a similarity between the old and the new processes, which also implies a greatly reduced effort for the management of changes associated with the project
  • Finally, the tool replacement project appears to have a very clear scope because you have a clear specification of functionality in the form of the current system or process

Issues with replicating the current state

We can see that there are good reasons why replicating the current state process is appealing but, as with so many decisions in an implementation, no path is without its advantages and disadvantages. In this section, you will learn about the common drawbacks of replicating the current state; it will then be up to you to balance these considerations against the arguments in favor of current state replication, with a clear understanding of the impact this will have on the value realized in your project and its long-term return on investment (ROI). The following concerns are most pronounced when the legacy process differs from the ideal target state in ServiceNow, often because those processes have been tailored and built up over the years.

Replicating tailored current state processes can be problematic for the following reasons:

  1. If your existing tools are overly customized, difficult to maintain, or unstable, the last thing you want is to bring a similar degree of complexity to the ServiceNow platform. A new implementation should be an opportunity to start fresh with a clean slate technically.
  2. Your current state processes may also have nuances or customizations that have accrued over years of use that are not well documented. Frequently, processes evolve to handle dozens of edge cases, each of which needs to be considered and configured in ServiceNow. This additional effort in discovery, documentation, and development can quickly outweigh the savings of not designing a streamlined process.
  3. Legacy processes often do not take advantage of new capabilities of the ServiceNow platform such as Virtual Agent, contextual search, and Predictive Intelligence. By replicating the capabilities of an older tool, you neglect the value that could be obtained by deploying some of the features that the current ServiceNow version supports.
  4. The legacy processes have likely been implemented in a way suited to the architecture of your previous tools – an optimal process in ServiceNow might look very different if designed specifically with the ServiceNow capabilities or OOTB processes in mind.
  5. The quality and consistency of data created using legacy processes may also be low, preventing you from using a data-driven approach to get to the root causes of issues and optimizing efficiency.

In general, the replication of the current state prevents you from realizing some of the value of investing in ServiceNow while presenting the risk that you will carry the technical debt that prompted the tool replacement forward into your new ServiceNow instance. The cost of time and effort of dealing with nuances of the old process flows often erodes the benefits that prompted the like-for-like strategy in the first place.

To present compelling alternatives to the current state replication strategy, we will review two strategic options that can provide many of the advantages while mitigating some of the issues.

Strategy 1 – Adopting the out-of-the-box process

The first alternative applies when the default ServiceNow platform has features that can facilitate the same outcome as your legacy process. In these cases, you should recommend the adoption of the OOTB ServiceNow process instead of replicating your current state. This approach has the advantage of accelerating the detailed process design of your implementation but, unlike the like-for-like scenario, it does not result in substantially increased technical efforts. This is because you can utilize ServiceNow best-practice processes and their associated configurations instead of replicating the old solution’s features (which are often not available out of the box in ServiceNow).

This alternative has only one significant drawback relative to the like-for-like approach – training and change management activities are likely to consume more effort, as you’ll have to transition teams from the old way of doing things to the newer model. Generally, the greater the difference between OOTB ServiceNow and your legacy process, the larger the scope of the change management effort that will be required. You will likely need to create documentation that maps the user’s familiar current state into a target state process and monitor for cases where people revert to their old ways of working.

Fortunately, these change management and documentation efforts can be offset by the corresponding savings in development, testing, technical documentation, and ongoing support. These are likely to be larger and help balance the scope of process change.

An example of an area where this approach applies very well is in the Hardware Asset Management space, where the data model and workflows are tailored to an industry standard way of managing assets that goes above and beyond what most competing solutions offer.

The application of OOTB processes can also provide unexpected benefits due to the inherent synergies of ServiceNow’s single platform approach. For example, incident management executed according to ServiceNow’s guidelines will create information that can be leveraged for service level management and knowledge management and is automatically populated in standard workspaces and portals.

Strategy 2 – Developing an MVP process

The second alternative to a like-for-like approach is the development of a lightweight or Minimum Viable Product (MVP) process in ServiceNow that allows your users to interact with a stripped-down version of the process with the bells and whistles removed. This approach is often warranted for customer cases, IT service requests, or HR cases, where OOTB flows may not cover the intended use case and ServiceNow provides the toolkit for the construction of arbitrary forms and flows.

When taking an MVP implementation approach, it is important to clearly outline the scope constraints that will be applied because, by definition, an MVP will not achieve all the goals you might hope for but rather, only absolutely necessary objectives. A documented scope will help you effectively manage changes to that scope with a clear picture of how it will impact your resources and timeline.

Targeting MVP processes can also provide a substantial reduction to the design and development effort, while the simplicity of these processes may also reduce the training overhead for your project. The trade-off is that if your project has value objectives related to process optimization, you may struggle to realize those with only a simplified process implementation.

Important note

In cases where the legacy process is simple and does not require a complex workflow or custom features, the MVP approach ends up being most similar to the like-for-like scenario. This is not a concern because issues 1, 2, and 4 (as we talked about in the Issues with replicating the current state section earlier) would not be impactful and your MVP processes can be uplifted later to leverage platform capabilities more efficiently.

Ignoring the current state

Closely related to the first value trap is another failure mode where instead of replicating the current state, the project leadership marks it as clearly out of bounds for analysis and charters the project team to only consider the organization’s target state. Another form of this value trap occurs when analysis of the current state is considered separately from the target state design and change management plan by an independent team.

Focusing on the future

When developing a new set of processes and supporting capabilities to enact significant change to the way things are currently done (which is often the case when seeking service quality and cost optimization value), it can be important not only to avoid replicating the current state, as we discussed in the previous section, but also to re-imagine and re-engineer processes to improve them. Decision-makers with limited resources will need to prioritize efforts and may decide to only focus on the future rather than conducting shadowing, research, and data analysis to understand how things work today. This approach can be appealing for the following reasons:

  • The current systems and processes are often being replaced because they are inadequate or suboptimal. Why would the team want to spend time studying these inferior solutions rather than designing and implementing a better set of processes?
  • The current state can be seen as irrelevant to the future design because it is being replaced, decommissioned, and discarded. This argument focuses on the fact that elements that are being discarded during the implementation will not have an enduring effect on the future performance of your organization.
  • There is a risk that letting the old ways of working influence the new ones could impair the organization’s ability to change. We’ve already discussed the many reasons to avoid replicating the current state, and these are often used as a reason not to investigate those processes too deeply.

Failing to learn from the past

All three of these arguments suggest that we focus our time where it matters most – the future. What they fail to capture is the importance of the journey from the old to the new: the past can inform the future, and it helps us understand exactly what will change during the implementation, which allows us to manage that change more actively. Opportunities that are missed in implementations that ignore the past typically include the following:

  • Without investigating the current state, you will likely miss out on certain stakeholders for processes. If someone is involved in a process today and that process changes significantly, then that person or organization should at the least be considered as part of the change management. They may be making extra accommodation for gaps in the process or relying on report data in certain formats. Walking through the current state allows you to better identify these people to reduce friction in the User Acceptance Test (UAT) and go-live phases.
  • One advantage the past processes have is that their performance is already well established. Even if the process does not fully meet the organization’s needs, these past processes still provide a benchmark for end-to-end performance (either analytically by looking at data or by observing the processes in execution) and most likely can even provide insight into the cases where the process breaks down. You will be able to see where work accumulates in the process, which allows you to identify bottlenecks.
  • We will cover organizational change management (OCM) in greater detail in a subsequent section, but it is worth noting here that being able to clearly map the current state to the target state for process participants is a very useful organizational change tool. This also unlocks the opportunity to index information in terms that people are already familiar with, which is good if you are producing content, such as quick reference cards, that people will refer to as they complete specific activities.
  • Looking at current state architectures can also allow you to identify the system interfaces that can be preserved with their current interface definitions. This can significantly decrease your dependence on outside parties to execute unplanned efforts that would impact your project timelines if they were delayed.
  • In the process of investigating the current state, you can often find useful tricks or methods in execution that could work as well in the future state as they do in the present. This might involve the digitization of forms that are handled in Excel, or even physical forms, which can provide hints about which key data points to collect or which approvals to consider – provided you don’t automatically treat them as the full specification of your target state system.

As you can see, there are many reasons to understand the current state, and learning about it will provide some value to your project. Keeping in mind the limited resources of most implementation projects, the question then becomes how to balance priorities and enable a forward-looking view that is informed by an understanding of the past. The strategies that follow come down to allocating a small but significant fraction of resources to tasks that contribute to both understanding the past and shaping the future, helping you get more benefit per unit of time invested and, ultimately, drive the maximum ROI given the constraints of your implementation.

Strategy 1 – Process shadowing

When an implementation team starts working on a new ServiceNow project, some of the most common activities for the first week or two are to develop a project plan, standardize artifact templates, and otherwise set up the governance structure. While this is both useful and necessary work, there is another set of activities to which it is worth allocating a fraction of the time of any of your resources who are not actively spending every hour of the workday on the critical path activities. Process shadowing consists of becoming very well acquainted with the execution of the current processes by sitting with (or virtually shadowing) the participants of the process to get a deeper understanding of how they work today. Some examples of teams to consider shadowing depending on your process scope are the following:

  • IT service desk teams
  • Deskside support teams
  • Customer support teams
  • Field service technicians
  • Data center support technicians
  • Operators of related business processes and systems

This type of information gathering is particularly valuable as an addition to discussions and process reviews with the managers of these teams because many organizations show a disconnect between the documented process and the way work is completed.

Tips for conducting process shadowing

The following are a few tips for conducting process shadowing:

  • Work with team leaders to identify the right individuals to shadow – ideally, you are looking for experienced and open-minded staff who are motivated to inform those working on systems they will ultimately use.
  • Get two or three points of view if possible – the differences between how individuals get the work done can be as useful to know as the similarities.
  • Explain in advance what you’re doing and why; it can be disconcerting to have someone looking over your shoulder for an extended period, so having this communication done in advance, and preferably with the individual’s direct manager, is advised.
  • Observe and take notes of things you don’t understand. Initially, you’ll want to observe and not disrupt the flow of work. Resist the urge to re-engineer the process on the spot. This is your turn to learn from those who have spent hundreds, if not thousands, of hours executing the processes that you will need to update.
  • Reserve questions for dedicated or natural breaks in the workday. Ideally no more than 4 hours should pass between opportunities to ask questions or clarify something you may have missed. This is long enough that most people will need to refer to notes but not so long that all context is lost.
  • Note which auxiliary tools are being referenced during the work. For example, are mobile workers frequently using third-party navigation applications? Are service desk users referring to a documentation folder on the SharePoint site? Are Excel spreadsheets being used to look up assets or network resources?
  • After the sessions, you should compile your notes into a summary of the current state process to preserve and organize the information for future review, as well as for comparison with the target state processes.

The time investment of process shadowing

With these tips in mind, you’ll need to decide how much time to dedicate to process shadowing for your implementation. A good rule of thumb is to have 20% of your team spend up to one week conducting process shadowing. On a five-person team with a weekly total capacity of around 200 hours, this would translate to 40 hours of process shadowing.

As a typical implementation spans many weeks, shadowing would often account for roughly 1% of total project time. Naturally, if you are approaching the end of this period and you are still unearthing relevant insights, you may choose to extend this process. This is an opportunity to learn and improve your insight into the processes, not simply a checklist item to complete.
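
As a sanity check, the rule of thumb above can be turned into a quick back-of-the-envelope calculation. Note that the 20-week project length below is an illustrative assumption, not a figure from this chapter:

```python
# Rule-of-thumb shadowing budget: 20% of the team for up to one week.
team_size = 5
hours_per_person_week = 40
weekly_capacity = team_size * hours_per_person_week          # 200 hours per week

shadowing_share = 0.20
shadowing_hours = shadowing_share * weekly_capacity          # 40 hours

# Assumed overall project length (illustrative assumption).
project_weeks = 20
total_project_hours = weekly_capacity * project_weeks        # 4,000 hours

fraction_of_project = shadowing_hours / total_project_hours
print(f"Shadowing: {shadowing_hours:.0f} hours (~{fraction_of_project:.0%} of project time)")
```

Adjust the assumed project length to your own schedule; the point is that the shadowing investment remains a small, bounded fraction of total effort.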

Another important consideration is who should conduct process shadowing. The best individuals to engage in process shadowing will be those responsible for designing the target state processes. On some projects, these are business analysts, functional leads, architects, or even senior developers or implementation specialists. Notably absent from this list are the project managers or project coordinators, as these individuals are likely fully utilized during the early project phases when process shadowing occurs.

The expected benefits of process shadowing

As always, activities should be conducted with a clear understanding of the objectives. In the case of process shadowing, you should be targeting the following benefits:

  • Confirming that the full scope of the process is understood and included in the project’s documented scope or charter
  • Understanding the undocumented resources relied upon in the current process to ensure they are incorporated or replaced in the target state
  • Identifying stakeholders for detailed process reviews
  • Understanding how much behavioral change will be required from the users of the relevant business processes
  • Identifying sticking points in the current process to ensure that the new process addresses those points
  • Building relationships with the end users of the systems and processes that you’re implementing and understanding their environment, mindset, and workload

Applying a limited amount of time early on in the project to process shadowing gives your team a rich resource to draw upon to become well acquainted with the current state without significantly increasing the total cost of the project. Process shadowing is an effective and low-cost hedge against the value trap of ignoring the current state.

Strategy 2 – Data analysis

Another useful, high-ROI strategy for allowing the current state to inform but not define your target state is to extract and analyze data from the current ticketing tools. This allows you to prioritize effectively and is complementary to the process shadowing strategy, as it will often surface outliers and patterns that would take weeks of observation to detect.

Targeting data analysis

While ServiceNow best practices often recommend not migrating ticket data in bulk, this data can still be a source of great insight into the current processes. Ideally, this information was used in the shaping of your project’s value drivers but even if this is the first time that you are looking at it, there is still value to extract. Some examples of datasets that can provide useful insight include the following:

  • Incident, case, and request ticket data from legacy systems
  • Asset or CMDB data
  • SLA data
  • Knowledge Base article metadata (usage data if available)
  • System user, groups, locations, and other master data
  • Other datasets relevant to your project’s value objectives

Often, some elements of data analysis are carried out on these datasets during preparation for data migration; moving this activity up in your project schedule and completing (or at least starting) it in advance of process design allows you to extract more value from roughly the same total effort investment.

Conducting exploratory analysis

Data analysis will start in an exploratory way and it is often useful to access the legacy system or process in a read-only form to see each data point in context rather than just as columns in a spreadsheet. Exploratory analysis should at a minimum get you familiar with the following:

  • The time period covered by the data
  • Key volumetrics of the data, such as how many tickets, assets, dollars, and hours it covers
  • Major categorizations such as regions, categories, teams, or request types
  • Data quality, which fields are consistently populated, and where there are pervasive data quality issues
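
As a rough sketch of this exploratory pass, the following uses plain Python over a handful of hypothetical legacy ticket records; the field names and values are invented for illustration:

```python
from collections import Counter
from datetime import date

# Hypothetical legacy ticket export; fields and values are illustrative.
tickets = [
    {"opened": date(2023, 1, 5),  "category": "Hardware", "assignment_group": "Deskside"},
    {"opened": date(2023, 3, 12), "category": "Software", "assignment_group": ""},
    {"opened": date(2023, 6, 30), "category": "Hardware", "assignment_group": "Deskside"},
    {"opened": date(2023, 9, 2),  "category": "",         "assignment_group": "Service Desk"},
]

# The time period covered by the data.
dates = [t["opened"] for t in tickets]
print("Period:", min(dates), "to", max(dates))

# Key volumetrics and major categorizations.
print("Total tickets:", len(tickets))
print("By category:", Counter(t["category"] or "(blank)" for t in tickets))

# Data quality: how consistently is each field populated?
for field in ("category", "assignment_group"):
    populated = sum(1 for t in tickets if t[field])
    print(f"{field}: {populated / len(tickets):.0%} populated")
```

In practice, you would load the export from CSV and run the same checks over every field you plan to rely on in process design.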

Integrating value and analysis

Once you have this information, you can combine it with your project’s value statements to identify specific questions that should be addressed. While the precise process will vary, it is useful to explore a couple of examples to get a sense of how data analysis can be used.

If your project aims to optimize expenditure on end user assets, then confirming the number and value of the new assets purchased annually allows you to create a useful anchor point – it will also prompt you to consider the projected number of assets reaching the end of life in the next year (based on in-service dates and company policy, for example), plus the number reported missing or defective each year. If the rate of missing assets is particularly high, then improved asset tracking is likely to result in fewer missing or lost assets. In this case, improvements to processes such as employee offboarding or location audits should be considered.
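
This kind of anchor-point arithmetic can be sketched as follows; every figure here (fleet size, unit cost, refresh policy, loss count) is a hypothetical input you would replace with your own data:

```python
# Hypothetical end-user asset dataset (all numbers are illustrative).
fleet_size       = 10_000   # laptops in service
avg_unit_cost    = 1_200    # dollars per unit
refresh_years    = 4        # company refresh policy
missing_per_year = 300      # assets reported lost or missing annually

# Anchor point: expected annual replacement spend under the refresh policy.
planned_refresh = fleet_size / refresh_years        # units replaced per year
planned_spend   = planned_refresh * avg_unit_cost   # dollars per year

# Cost of missing assets, and the annual loss rate.
missing_spend = missing_per_year * avg_unit_cost
missing_rate  = missing_per_year / fleet_size

print(f"Planned refresh spend: ${planned_spend:,.0f}/year")
print(f"Missing-asset cost:    ${missing_spend:,.0f}/year ({missing_rate:.0%} of fleet)")
```

Comparing the missing-asset cost against the expected cost of improved tracking processes gives you a concrete basis for the prioritization discussion.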

If reducing the end-to-end cycle time in your software catalog is important, then your data analysis might focus on answering the question of whether all titles have a similar deployment time or whether some software is much faster than others (which prompts a discussion around why this is and how to address or leverage the differences).

Important note

Data analysis is intended to both answer some questions and raise new ones. You will not have the information to explain all the patterns and gaps, but being aware of them allows you to raise these during workshops or design sessions.

When analyzing ticket data, you will almost always want to group it by ticket type. This will both give you ticket volumes for each category (an essential metric) and allow a more detailed analysis of the peculiarities of each category. For example, analysis of all work orders for field technicians is generally less informative than separating them by customer installations, troubleshooting, or routine maintenance activities. You will want to consider what value your project is expecting to drive. If improving the efficiency of your field staff is important, then you may want to consider the time spent traveling, time on jobs, and repeat visits. However, if customer satisfaction is a more important metric for your project, then the Net Promoter Score (NPS) from surveys may be more useful, allowing you to determine which work order types your organization performs well on and which frequently lead to customer dissatisfaction. The data alone will give you only part of the story and provide you with topics to discuss with the delivery teams and managers. You should use these follow-up conversations together with the data to determine where improvement opportunities lie.
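
To illustrate the grouping step, here is a minimal sketch over hypothetical work order records, comparing volume and average times per type; all field names and numbers are invented:

```python
from collections import defaultdict

# Hypothetical field service work orders; durations in hours, values illustrative.
work_orders = [
    {"type": "Installation",    "duration_h": 3.0, "travel_h": 1.0},
    {"type": "Installation",    "duration_h": 4.0, "travel_h": 1.5},
    {"type": "Troubleshooting", "duration_h": 2.0, "travel_h": 0.5},
    {"type": "Maintenance",     "duration_h": 1.0, "travel_h": 0.5},
]

# Group records by work order type before computing metrics.
by_type = defaultdict(list)
for wo in work_orders:
    by_type[wo["type"]].append(wo)

for wo_type, items in sorted(by_type.items()):
    volume = len(items)
    avg_job = sum(w["duration_h"] for w in items) / volume
    avg_travel = sum(w["travel_h"] for w in items) / volume
    print(f"{wo_type}: {volume} orders, avg job {avg_job:.1f}h, avg travel {avg_travel:.1f}h")
```

Swapping `duration_h` and `travel_h` for survey scores would give the equivalent per-type view of customer satisfaction.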

The importance of intentional time allocation

As the last two value traps show, there is a tension between looking backward and moving forward with your implementation. The future is most critical, but the journey to get there is informed by the past. As we have discussed, there are risks in letting the future state blindly emulate the past, but also in ignoring the current processes.

The most important thing to take away from this is the importance of thinking critically about what your goals are to inform tough decisions on where to spend the most important resource in your project: time. It would be much easier to recommend a detailed analysis of the current state spanning many months, but few projects will have the resources to support this level of investment and even if they do, the time might better be spent on higher-value activities.

As a project team member or leader, you should always strive to be intentional in how time is spent on your project, asking whether the set of activities to be executed next supports the value proposition of your project to a greater degree than others that could be executed instead. This way of thinking carries forward to our next value trap, which fundamentally informs the allocation of time in support of value.

Chasing the long tail

The third value trap is based in part on a rule of thumb called the Pareto principle, more commonly known as the 80/20 rule. This rule states that, in most situations, 80% of the effects can be attributed to 20% of the causes. While the exact numbers tend to vary, value does tend to concentrate within a relatively small subset of the possible scope for your project. This is clearest in the high-volume process configurations in ServiceNow, such as those found in the Service Request Catalog. The value trap of chasing the long tail occurs when a team sets out to enable every instance of a certain workflow or automation type, regardless of diminishing returns, and applies time-consuming implementation effort to processes that are used only a few times a year.

The appeal of aiming for 100% coverage

No team or project charter sets out with the goal of working on a seemingly endless list of low-value activities – yet, many projects end up in exactly this situation due to unrealistic expectations being set at kick-off. Projects that get into trouble with the long tail are those that aim for completeness without considering the distribution of value across their scope. Typically, this happens when the scope is set at a high level early in the project (for example, discover all assets, or enable all service requests) and project teams then take this direction literally and execute it without considering the incremental value being delivered.

It seems intuitively clear that doing all of something is better than doing only 20%, 50%, or even 80%. Additionally, launching an incomplete catalog of service requests can lead to confusion among users looking for a missing item and this confusion can reflect negatively on your implementation. In this section, we’ll look at the risk of this approach and provide practical guidance on how to avoid or address it. We will use the Service Request Catalog as a running example, as it is by far the most common case of this value trap, but the same principles apply to case types, discovery probes/patterns, and asset classes.

The distribution of value

Each process that you could implement in ServiceNow has an incremental value potential – the value it would add if configured in the ServiceNow platform. This value is determined by how many times the process is executed, multiplied by the additional value that ServiceNow would provide each time the process is effectively orchestrated by the platform.

Important note

Value potential is often achievable in stages – for example, a basic version of the process can provide 15 minutes of time savings while more extensive automation can save several hours.

The Pareto principle usually applies to these value potentials, meaning that a relatively small subset of the processes comprises a large fraction of the total addressable value potential.
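This concentration is easy to see with a short sketch. The Python snippet below uses invented value potentials (hypothetical hours saved per year) purely for illustration; in a real project these figures would come from your own volume and value estimates:

```python
# Hypothetical annual value potentials (hours saved) for 10 candidate
# processes -- invented figures for illustration only.
value_potentials = [1200, 800, 450, 150, 90, 60, 40, 25, 15, 10]

total = sum(value_potentials)
cumulative = 0.0
# Walk down the processes from most to least valuable and report how much
# of the total value potential the top-ranked subset covers.
for rank, value in enumerate(sorted(value_potentials, reverse=True), start=1):
    cumulative += value
    print(f"Top {rank} processes cover {cumulative / total:.0%} of total value")
```

With these invented numbers, the top two processes (20% of the list) account for roughly 70% of the total value potential – a typical Pareto-style distribution.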

Risks of completeness

Part of the challenge when addressing a scope focused on completeness is that every additional, well-implemented process provides some value or potential value to the organization. Given this premise, it is tempting to conclude that implementing every process provides a net positive value, but there are two principal challenges with this conclusion:

  • Given that all projects are executed in the context of limited resources, every hour invested in a particular process is an hour that is not being applied elsewhere in the project. This means that if work is being done on a marginally valuable process, then it is likely that something else that is potentially more valuable is being ignored elsewhere.
  • When assessing the overall ROI, the potential value (the return) is one part of the equation – however, we should remember that implementation and support costs (the investment) should eventually be deducted from the potential value. Since there will always be some support or implementation costs, the actual value realized as a return on investment will always be lower than the theoretical value potential.

Taking these two challenges together, the risk of attempting to cover 100% of an arbitrary scope is that the effort may distract from other essential or higher-value activities, and may even result in investing more in a process than it can ever return.
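The second challenge can be made concrete with a rough net-value calculation. The sketch below is an illustration only – the function name, figures, and planning horizon are all hypothetical, and all quantities are in abstract units (for example, hours):

```python
def net_value(annual_volume, value_per_execution, implementation_cost,
              annual_support_cost, years=3):
    """Rough net value of automating a process over a planning horizon.

    All figures are hypothetical units (e.g. hours). This sketches the
    ROI reasoning above; it is not a ServiceNow API.
    """
    gross = annual_volume * value_per_execution * years
    cost = implementation_cost + annual_support_cost * years
    return gross - cost

# A high-volume request comfortably clears its costs...
print(net_value(annual_volume=2000, value_per_execution=0.25,
                implementation_cost=80, annual_support_cost=10))

# ...while a long-tail process executed twice a year does not.
print(net_value(annual_volume=2, value_per_execution=0.5,
                implementation_cost=40, annual_support_cost=5))
```

The point of the sketch is that once implementation and support costs are deducted, a low-volume process can easily carry a negative net value even though its gross value potential is positive.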

Unfortunately, it is far too common to see a team investing days into diligently holding workshops and producing documents to define a process that is executed only once or twice a year for limited value. If a process is discovered that truly provides an excellent ROI, then updating the business case or project scope to include it will also require additional time or a corresponding scope reduction elsewhere. The strategies that follow will provide useful approaches to reducing the risk that your project will allocate far too many resources in support of far too little value.

Strategy 1 – Top N selection

When defining the scope for an implementation where there is a potentially unlimited number of workflows to consider, it can be useful to set an arbitrary bound – such as 10, 100, or even 1,000 processes – and work within that bound to identify the most valuable ones to focus on. This approach requires an estimate of the number of valuable processes, and the boundary number should not exceed this estimate. It is acceptable to set the bound lower than the likely number of valuable processes: completing the exercise to arrive at the top 10 or top 50 will often provide much greater insight for planning a subsequent phase with a higher boundary number. In this way, the significant initial value realized can also help justify further investment.

There are many methods to select the top processes, but in all cases they should seek to optimize value as defined for your project. Recall that the potential return on investment relies on three factors – volume, value, and investment. Theoretically, for a perfect ranking, you would assess these three factors for each process opportunity and combine them into a score that can be ranked. This takes a significant amount of time, so we propose a modified approach that is mostly correct but consumes significantly fewer resources.

The recommended approach for ranking

The following algorithm provides a systematic approach to ranking – one possible way to arrive at a valuable list of processes for prioritization. It is not guaranteed to produce the best possible list but typically produces very good results in most organizations:

  • Set your boundary value for N (that is, 10 or 50).
  • Of the three factors, the most reliably predicted is volume, as it can usually be established from a combination of historical data and business projections. First, determine the top 2N processes using the heuristic of current state ticket volume. If possible, ask an informed manager for quick adjustments (to account for known factors, such as business changes or mergers, that will have a major impact on volumes).
  • After ranking the top 2N processes by volume, you should now work with your technical team or architect to estimate a rough level of effort for each and with your business stakeholders to estimate the value of each. Re-rank the opportunities based on the ROI calculated using these factors.
  • If you have at least N clearly favorable process investment opportunities, then you may use these as your initial N processes – if not, then you will need to repeat the process steps for the next set of processes by volume until you have a full list.

This procedure relies on N being smaller than the total number of positive ROI opportunities. If you find yourself running out of processes or assessing requests with very low value and volume, it may be an indication that N is too large, which would prompt a project leadership discussion to assess the value hypotheses of the project considering the new information.
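The steps above can be sketched in Python. Every name and figure below is hypothetical – in practice, the volumes would come from current state ticket data, and the value and effort estimates from your stakeholders and technical team:

```python
# Hypothetical candidates: (name, annual_volume, value_per_execution,
# estimated_effort). All figures are invented for illustration.
candidates = [
    ("Password reset",      5000, 0.1,  20),
    ("New laptop request",  1200, 0.5,  60),
    ("Access request",      3000, 0.3,  40),
    ("Conference room AV",    24, 0.5,  80),
    ("Office move",           12, 2.0, 120),
    ("VPN token",            900, 0.2,  30),
]

N = 2

# Steps 1-2: shortlist the top 2N candidates by current state volume.
shortlist = sorted(candidates, key=lambda c: c[1], reverse=True)[: 2 * N]

# Step 3: re-rank the shortlist by a simple ROI score
# (annual value potential divided by estimated implementation effort).
def roi(candidate):
    name, volume, value, effort = candidate
    return volume * value / effort

# Step 4: take the top N by ROI as the initial scope.
top_n = sorted(shortlist, key=roi, reverse=True)[:N]
for name, *_ in top_n:
    print(name)
```

Note that ranking by volume first keeps the expensive part of the exercise – estimating value and effort – limited to 2N candidates rather than the entire long tail.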

Strategy 2 – Minimal implementation for long-tail items

For some projects, you'll find that establishing a complete catalog of services or requestable items is very important and that a Top N strategy with a low N would leave a gap that undermines the value of the catalog. In these cases, you can take advantage of ServiceNow's flexibility to build minimum viable product (MVP) implementations of a process at very low cost by reusing a single standard workflow.

This strategy of minimal catalog item implementation is intended to complement a Top N approach by addressing the list of must-have items that do not fall within the top N high-value items. Instead of increasing their value, it minimizes the cost of implementation and maintenance to squeeze a positive ROI out of even a lower-volume, lower-value process.

Important note

This approach requires you to willingly adopt a standard workflow across these minimal implementation candidates, and a standard form layout with only a description, key header data, and a field to enter any additional details.

Applying this strategy populates the catalog with new items at a relatively low total effort investment and has the benefit of enabling metric tracking for these items in ServiceNow, facilitating more efficient analysis and optimization in future phases. When applying this strategy, it is critical to set the expectation that while some processes will be fully optimized, those implemented according to the MVP strategy will be more basic in form and function. Accepting this trade-off allows you to achieve broad coverage without overwhelming your team with the effort of detailed process analysis for each workflow.

Not managing change

A ServiceNow implementation almost always involves a significant change to the ways that people complete their daily work. This change typically results in a brief period of reduced productivity as workers acclimatize themselves to the new processes. Unfortunately, this period of reduced effectiveness can result in frustration and provide a negative first impression of the solution your team has worked so hard to implement. In addition, if gaps are not closed quickly, then the productivity hit can persist and permanently offset the value being realized from your implementation.

The first cut – OCM

When a ServiceNow project is in the initial planning stages, seasoned architects and project managers will typically highlight the need for organizational change management (OCM) efforts to help facilitate the transition from the current to the future state. Unfortunately, when the initial budgetary estimates exceed the leadership's willingness to invest, one of the first areas targeted for cuts is the OCM effort.

This decision process is an exercise in prioritization and alignment with value. In principle, the decision to cut a lower-value part of the program would be appropriate – however, repeated experience shows that cuts to OCM are far more costly than most projects anticipate.

Risks of reducing the OCM effort

Cuts to OCM efforts come in different forms – the most common is to reduce the seniority of, and time dedicated by, the team members responsible for change management activities. In essence, this means reducing the cost of the OCM effort without fully removing it from the project. Another common approach is to restrict the time during which the OCM resources will engage in the project. These approaches lead to the following risks:

  • Less experienced OCM resources take longer to become acquainted with the value proposition, scope, and implementation plan, and require additional support from the rest of the project team to effectively develop and deliver OCM efforts.
  • OCM resources that are brought into the project significantly after kickoff often lack the context of the discussions that have occurred around the plans to adjust how things are done in the current state and thus cannot effectively map the journey of transition. Bringing these resources up to speed at a critical phase of the project (as go-live approaches) puts additional strain on the remainder of the project team during a period when many project teams are already fully occupied.

The impact of these two risks is a reduction in the value realized across the implementation and additional strain on the project team at key times during the implementation and go-live. Recall that the most common reason OCM efforts get cut from a project budget is the belief that the effects of the cut will be less pronounced there than if the project scope were trimmed overall. However, because OCM accelerates and secures value from the implementation, the effects tend to be more severe than expected. Fully and clearly articulating the role of change management as an accelerator of the planned value is critical, particularly in combination with ensuring OCM efforts are rightsized from the outset.

Optimizing value from OCM

OCM truly acts as a multiplier for value from other areas of your implementation. That means that while it can have a large impact on the most valuable areas of your project, it is unlikely to generate value in areas where little impact is being made by the planned scope. This reality suggests focusing the OCM efforts on the areas where your project’s impact is largest to ensure those are effectively covered and supported, and reducing the OCM efforts on lower-value processes. This approach requires you to consider OCM as a useful tool for value realization, not simply a checklist item.

Tying the OCM scope to specific value objectives also helps the budget holders visualize how OCM will support these value objectives and reduces the likelihood that OCM will be seen as a separate item that can be added and removed independently of the overall business case.

Responding to insufficient management of change

At times, it will unfortunately become necessary to recognize that OCM efforts have not been sufficient to prepare the organization for the coming change, representing a risk to value realization at go-live. A red flag is a clear indicator that some kind of risk or issue is present in your project. Some examples of red flags to look for as you progress in your implementation include the following:

  • System users being surprised and confused during training sessions and acceptance testing
  • Difficulty in producing target state operating instructions to cover the full scope of current working processes
  • Poor awareness of the planned release of ServiceNow or poor understanding of how the release will impact daily work

Remember that OCM exists to ease the transition to new ways of working and to support the realization of value from those changes. If inadequate change management is evident in low-value processes but not in all high-value process areas, then that may simply be reflective of the tough prioritization decisions that were made. However, if your core value propositions seem threatened, then you will need to take immediate steps or risk significant portions of the planned value of your implementation being challenged.

Applying high-impact OCM activities

While the field of OCM can point to numerous benefits of the formal OCM process, the reality of ServiceNow implementations often requires balancing the need to do things right with the need to get things done. This section provides a toolbox of the activities that have proven most valuable in ServiceNow implementations – those that reduce the post-go-live efficiency slump and improve the user experience. The goal of these activities is to facilitate a smooth and sustained increase in value from the implementation, and they have been curated accordingly.

Target state work instructions

Target state work instructions covering the current state process scope are perhaps the single most useful deliverable for facilitating OCM outcomes. Such instructions are already produced on many projects; with a little extra effort, you can ensure a clear mapping from the current state processes that people use today to the correct procedures for their future work.

These instructions should be detailed and draw on the full range of common process scenarios (including those observed during process shadowing). The process of producing these instructions can be incredibly valuable because they cannot be created without detailed attention to the specific sequences of activities that users will complete in the system. While these work instructions do not need to cover every possible edge case, they certainly should be comprehensive enough to cover most of the cases that a worker will encounter in their daily interactions with the processes you are enabling on ServiceNow.

Transition support service

During go-live and for a few weeks afterward, establishing highly available, knowledgeable, and friendly points of contact for users can provide a much higher degree of comfort for the teams working to get things done. The purpose of the channel is to allow workers or team leaders to get rapid answers to their questions so that system issues or gaps in knowledge do not impede their ability to efficiently complete their work. By providing real-time channels such as a walk-up help center, phone hotline, or Slack channel, you can immediately address questions and concerns and will become aware of issues minutes after they occur, rather than hours or days later. This allows your team to react quickly and get an accurate pulse of how the transition is progressing and where the challenges are.

In an ideal transition where testing, training, and design have all been executed flawlessly, you will expect relatively little interaction with this supporting team, but it is still useful to deploy the channels, as the resources assigned to monitoring them can still complete other tasks during quieter hours. When a go-live is not as smooth as previously hoped, the extra capacity acts as a buffer, allowing your project to absorb some of the impacts on the organization and reducing the impact on operational teams.

The science experiment

ServiceNow technology is both flexible and powerful – this combination can lead to innovative solutions but also complex configurations whose cost to maintain exceeds their value. The value trap that we call the “science experiment” occurs when overly complex, advanced, or technically sophisticated architectures are layered onto ServiceNow, leading to substantially higher implementation and maintenance costs.

Science experiments are common in integrations, Predictive Intelligence, Virtual Agent, access controls, and even workflow business logic. The difficulty of recognizing a science experiment comes from the fact that they are often proposed by your most capable developers, and the proposals do address important design objectives.

Projects extending ServiceNow

There are many cases where basic ServiceNow capabilities must be extended for the efficient and effective realization of value. In some cases, the platform capabilities will get the job done but the configuration can feel overly burdensome, and a more abstract and efficient configuration layer is proposed. In other cases, an innovative new module of ServiceNow has been licensed and a mandate to implement is given even before specific measurable value objectives are defined.

To balance business needs with complexity, each ServiceNow project needs to determine to what extent it falls into each of the following categories: an implementation project, a software engineering project, or a basic research project. The nature of the project should be chosen depending on the value characteristics of the project and the degree of uncertainty in its technical execution:

  • Implementation projects: These are by far the most common ServiceNow projects, aiming to deploy the system for maximum value and to remain close to one that is OOTB wherever possible. Implementation projects have higher success rates, a lower total cost of ownership, and execute on well-known design and development principles. Following the principles in this book, along with the ServiceNow technical best practices, provides a high probability of success in these projects.
  • Software engineering projects: These ServiceNow projects are created when ServiceNow is chosen as a foundational platform with the expectation that significant custom development will be required to realize the expected outcomes. These projects may include the development of entirely new portal experiences, analytical capability, or deep integration into systems for which standard integrations are not available. Software engineering projects are more likely to experience delays and cost overruns than implementation projects, as the nature of the work is more variable and the full scope of the expected issues is not known in advance.
  • Basic research projects: A basic research project doesn’t aim to deliver something to production but rather to assess the feasibility of an idea or to evaluate alternatives for a specific problem. Very few ServiceNow projects intentionally operate as basic research projects, largely because the upfront commitments of ServiceNow licensing make projects with unclear value outcomes and lower chances of success very risky. A basic research project is most often used in the opening phase of a ServiceNow implementation to assess the feasibility of implementing specific capabilities. It may conclude by proving that the capability can be effectively implemented, or by proving that some constraint prevents the approach from being successful. Basic research projects do not need to deploy anything to production to be successful.

The following table shows the different types of projects and their associated complexity and success probability:

Table 2.1 – Project types

All three project types are potentially useful and valuable in their own ways, but it is essential to be clear on what type of project you are working on and to plan accordingly. The science experiment value trap occurs when elements of a higher uncertainty project are incorporated into a lower uncertainty project without accounting for the decreased certainty of outcomes. This disconnect signals a potential for a misalignment of the expected value with the value that will be delivered in your project – this misalignment is a source of potential risk that should be managed.

Risks of the science experiment

A science experiment can present several risks to your project:

  • The experiment can consume a disproportionate amount of the senior technical resources’ time on the project, preventing you from completing other objectives or responding to unforeseen challenges.
  • Experiments are uncertain and as such, the effort or outcomes are variable. This uncertainty is of particular concern if a fraction of the project’s committed value depends on work that is not certain to succeed or to result in a working solution within the project’s planned timeline.
  • Experiments can lead to highly complex configurations, even when completed within a scoped application. Future developers may struggle to understand and maintain the configuration.

The risks of letting a science experiment run on your implementation or engineering project arise from the misalignment of the resulting complexity with your project’s goals and risk tolerance. For some projects, it is possible that conducting some exploratory research and development efforts to support a significant and otherwise unachievable value target is the right course of action – however, this should be a deliberate decision taken with a full understanding of the costs and risks.

Recognizing a science experiment

Recognizing an unplanned science experiment early on is necessary to avoid the unexpected expenditure of significant effort that may not contribute to project outcomes. Some signs of a science experiment underway are the presence of the following indicators in an implementation project:

  • Highly capable developers are working hard but without output in the form of value-contributing configurations.
  • OOTB ServiceNow features are being passed up in favor of custom scripting. For example, the development of a scripted integration framework rather than the use of Integration Hub or the native data transformation function.
  • Timelines for the work on one module significantly exceed expectations or are highly uncertain, particularly when the module is linked to more complex platform capabilities such as Predictive Intelligence or UI Builder.

When you suspect that a science experiment is being run on your project, then a discussion between the project manager and the architect or another senior technical leader about the value objectives, efforts to date, and value to date is required to determine the appropriate next steps.

Important note

A less experienced technical team may struggle with even more standard implementation activities. Ideally, each project will have at least one seasoned ServiceNow technical leader but if your team is all-new, the previous criterion may not apply. In these cases, you may need to rely on the advice of ServiceNow or a trusted implementation partner to help you assess the situation.

Handling a science experiment

If a science experiment is unexpectedly consuming the time of your project’s resources, then it is necessary to align the objectives of the project with its execution. To achieve this, a combination of project management and technical leadership will be required. The value objectives leading to technical complexity should be assessed in the context of the current best information about the complexity required to achieve the value. If the business case to complete the work is strong, then commissioning a small software engineering or basic research initiative within your project should be worthwhile. This initiative should be managed with a greater eye to the risk and with the understanding that the timelines and success probability will have a different profile than typical implementation work. Again, this alternative complexity profile should be fully justified by the value that can be realized if the initiative is successful.

Summary

This chapter has covered five types of value traps that are common within ServiceNow implementations. We’ve covered the reasons why these traps are common and the strategies that can be used to understand and mitigate their effects. With the tools you’ve learned in this chapter, you should be able to strike a balance between designing for the future and understanding the business in its current form.

You will also be able to focus on a realizable and valuable scope that shows value to your project’s sponsors to ensure that the trivial does not get in the way of delivering the critical. By developing a greater understanding of what change management is and why it is important, we have explored ways to justify the change effort and magnify its value. Finally, we have reviewed the risk of not taking a conscious approach to matching the technical complexity to the types of value being targeted.

Throughout this chapter, the recurring theme has been the alignment of effort to value during the scoping and execution of the project. In the next chapter, we will cover detailed considerations for managing and capturing value in a ServiceNow deployment.
