8

Platform Team Processes, Standards, and Techniques

With your processes designed and your organization aligned, it’s time for your team to roll up their sleeves and configure the platform to support those amazing processes in a user-friendly way.

So, where do you start? How does the platform team ensure what they are doing will not compromise the integrity of the platform? How do you minimize the work of maintaining the configuration going forward? I thought we were just going to go with out-of-the-box (OOTB) anyway?

This chapter will go into the details of how the platform team is run, cover contentious topics (such as the oft-brought-up customization versus configuration discussion), and go over the broad processes and technical patterns that can improve the maintainability of configuration on the ServiceNow platform for your organization.

Entire books have been written on how to run development teams, coding design patterns, architecture best practices, and coding conventions, and so a single chapter will never be able to cover everything you need to know in detail. Instead, we will provide you with numerous building blocks across the entire life cycle of a platform team’s work, from technical techniques to process improvement techniques, and show you where to do additional research to supplement the pointers provided in this chapter. We hope that you will be able to draw upon these building blocks to create a platform and team that will be envied by all. We will be jumping between process and technical advice as we go through this chapter, but broadly speaking, we will try to follow this order in terms of the themes of our advice:

  • Platform management and technical standards, including the techniques used to apply those standards
  • Operational processes, continuous improvement strategies, and technical recommendations

Platform management and technical standards

At the risk of sounding like a broken record, management, governance, and standardization are frequently the missing ingredients for realizing long-term value in ServiceNow (or leveraging any other technology). When it comes to the technical integrity of the platform, standardization, management, and governance should consider multiple dimensions.

Setting standards to manage platform maintenance and risk

Any configuration change on the platform, from adjusting the font on the service portal to creating a scoped application to manage travel bookings, will increase the effort to maintain the platform going forward. More generally, any change in the configuration on the platform will bring risk to the platform in a variety of ways, from the risk of security breaches to the risk of impacting platform performance, to the risk of affecting the customer experience.

Because any change to the platform brings risk, every change must be balanced against that risk. To balance risk, another dimension must be considered, and the typical one on the other side of the decision fulcrum is business value.

A change on the platform is only preferable if the business value it brings to the organization outweighs the risk it brings to the platform. However, risk and business value are both metrics that are subjective to the organization and must be established as part of platform management standards based on the profile of the operational team maintaining the platform. For example, a highly technical platform operational team might evaluate technically complex configurations as lower risk than a less technically skilled team.

Therefore, the terms configuration and customization are misleading, as they falsely suggest that there is some universal property of a change that can easily be used to determine whether it is safe (configuration) or dangerous (customization). The reality is that there is no such thing as a safe change; instead, you should consider whether the risk of a change is justifiable given the business value it brings.

Throughout their day-to-day activities, the platform team might need to evaluate hundreds of incoming requirements – all of which might require some level of change on the platform. If each one must be evaluated carefully by a committee, then time-to-value will balloon. The evaluation standard that is established must allow the platform team to quickly determine whether a change is worth doing and, ideally, help the team make decisions without involving senior leadership whenever possible.

One way to establish a standard to accomplish this is to create a table of the types of changes that can be made on the platform and assign a relative level of risk against it. Changes deemed low risk can be made at the discretion of the technical implementer, or the product owner, given that at least some level of business value is provided by the change. In comparison, changes that are deemed high risk might require higher levels of scrutiny by platform governance to justify (for example, they might require a business case, require review by an architect, or more).

The following sample table provides a solid starting point for your organization’s personalized platform change risk assessment standard. You should work with your platform team to determine how to best categorize platform changes across a spectrum of risk and where to establish your threshold for “high risk.” Also, you should be aware that this approach does still have limitations. For instance, it does not properly consider that a very large number of small, minimal-risk changes might still have a large impact on the maintainability of the platform. There is always going to be a trade-off between having a reference to accelerate decision-making and considering each decision carefully – it is up to the platform team to reach a model over time that balances the two for your organization:

Lower risk. Does not require a business case beyond product owner sponsorship:

  • Create a report or dashboard
  • Create a knowledge article
  • Add data to OOTB data-driven functionality (for example, SLA definitions and assignment groups)
  • Add a field or change a label on a record producer or catalog item
  • Change a form layout
  • Create or update a text-only notification
  • Make a field mandatory or hide a field using a UI policy

Higher risk. Requires engagement of the business sponsor and platform architect to formally evaluate the balance of the risk of maintenance and the risk of change against the business outcome provided:

  • Create a new case type
  • Create a new CMDB class
  • Add a notification with dynamic content
  • Create a new workflow
  • Make an ACL change
  • Create an integration with an external system
  • Create a new scoped application (regardless of the actual makeup of the scoped application)
  • Create a custom Discovery probe

Table 8.1 - Assessing different kinds of change

Finally, now that all changes on the platform have been accepted as having some level of risk, the platform team should stick to the literal definition of OOTB to avoid confusion – a capability or functionality should only be considered OOTB if it requires no change from what is provided by the platform. That is, replacing the ServiceNow logo with your company logo will not be considered OOTB, and neither will changing the color of the agent-side banner.

Standards and tips to manage platform changes

Now that we have standards for deciding whether a change should be made on the platform, we can set standards for how changes should be made.

The platform team should establish a set of instructions that can be kept in a knowledge article on the platform for the team or as a document stored in a shared repository that can be referenced by every member of the team when platform configuration changes are being planned and made.

We recommend that the team consider, at the very least, standards and controls in the following areas.

Naming conventions of configuration records (business rules, UI actions, and script includes)

Naming convention standards enable teams to find configuration quickly and easily after it has been initially developed. The names of business rules, UI actions, and script includes should never contain information that is already accessible through other properties of the configuration record. For example, business rule names should not include the table the rule applies to, as the table is already a searchable property of the business rule record.

A naming standard that can serve as a starting point for any team is to simply document precisely what the configuration record does. For example, a client script could be named Set Resolution Action mandatory when State is Resolved, which describes exactly what the client script is doing.

Script coding standards

Variable naming conventions, whitespace formatting conventions, where to put the curly braces when starting a loop or an if/else block: there are numerous published coding conventions on the internet that the team can adopt when it comes to script coding standards. The most important thing is to adopt one as a team and attempt to follow it as closely as possible, taking care during code reviews to make sure any non-compliant code has been corrected. Coding standards and style guides improve the maintainability and readability of code on the platform, and the more teams and individuals developing on the platform, the more value they will have.

As there are numerous professional tech organizations that publish their style guides for JavaScript (which are 90% applicable to scripting work in ServiceNow), we will not put forward our own style guide in this book. Instead, we will take some time to highlight a few common elements across most style guides and leave your teams to adopt or establish the rest:

  • Use clear, non-abbreviated variable names. Avoid variables such as n, errcde, and other aggressive abbreviations that will be difficult for people to interpret. Variable names should, as much as possible, clearly communicate their purpose: errorMessage or accountId. In most style guides, naming variables clearly and avoiding team-specific or group-specific abbreviations and acronyms are key naming considerations. Saving typing time should be deprioritized in favor of making code more easily understood by new readers.
  • Minimize variable scopes. JavaScript contains many dangerous “features,” one of the more significant being that var declarations are scoped to the global level or the function level. Most JavaScript style guides recommend that variables be declared as close as possible to where they are used. As of the Tokyo release of ServiceNow, the JavaScript engine supports ES6+, which means block-level variable declarations using let and const are available. When your platform is on Tokyo or later, prefer let and const over var to minimize the scope of variable declarations.
  • Most popular JavaScript style guides use lowerCamelCase for variable and method declarations. Constants and enumerations use CONSTANT_CASE, and class names use UpperCamelCase. Private methods and private variables are a mixed bag. ServiceNow’s own code tends to prefix private variable and method names with an underscore: _privateVariable. In comparison, other style guides prefer trailing underscores or no underscores at all (because JavaScript does not actually enforce any privacy; the underscore notation may, in fact, mislead developers into thinking that changes to these “private” methods and variables will not impact consumers of the publicly available API). This author’s preference is to avoid the underscore notation considering JavaScript’s limitations, but the convention should be set by the team and consistently followed. A short sketch illustrating these conventions follows this list.
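
To make these conventions concrete, here is a minimal sketch of a server-side script that follows the preceding guidance on a Tokyo-or-later instance. The stale-incident scenario and the state values are purely illustrative:

    const MAX_STALE_DAYS = 14; // CONSTANT_CASE for a constant

    function closeStaleResolvedIncidents() {
        // Non-abbreviated lowerCamelCase names that communicate purpose
        const staleIncidents = new GlideRecord('incident');
        staleIncidents.addQuery('state', 6); // 6 = Resolved on the OOTB incident table
        staleIncidents.addQuery('sys_updated_on', '<', gs.daysAgoStart(MAX_STALE_DAYS));
        staleIncidents.query();

        while (staleIncidents.next()) {
            // Declared with const, as close as possible to where it is used
            const incidentNumber = staleIncidents.getValue('number');
            staleIncidents.setValue('state', 7); // 7 = Closed
            staleIncidents.update();
            gs.info('Auto-closed stale incident ' + incidentNumber);
        }
    }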

Creating database indexes

Establishing some clear guidelines on when and how database indexes can be created can significantly improve the performance of your platform configuration, especially for high data volume scenarios in scoped applications.

Databases and database indexes are a science, but there are some general rules of thumb that technical teams can use to make sure that, at the very least, indexes are considered part of the technical design of applications.

First, indexing should be considered for commonly used query and join patterns against tables where the typical result set returned is much smaller than the table itself. In such highly selective cases, indexes will bring performance improvements even while the table is still small, and the benefit grows with data volume.

Teams should frequently look at the Slow Queries log and then use the Suggest Index functionality of the platform to determine whether there are opportunities for improving performance through the creation of indexes for these queries.
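
As a hedged illustration of this rule of thumb, imagine the Slow Queries log repeatedly surfaces a query pattern like the following against a task table holding millions of rows (the field choices here are hypothetical):

    // The query returns a handful of records out of millions. Because it is
    // highly selective, a composite index on (assignment_group, active) is a
    // strong candidate, and the Suggest Index functionality will often
    // propose something similar directly from the slow query record.
    function findOpenTasksForGroup(groupSysId) {
        const openGroupTasks = new GlideRecord('task');
        openGroupTasks.addQuery('assignment_group', groupSysId);
        openGroupTasks.addQuery('active', true);
        openGroupTasks.query();
        return openGroupTasks;
    }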

Using scoped applications

Teams should be encouraged to use scoped applications for any functionality implementation instead of modifying the global scope. Scoped applications have come a long way since their introduction and, as of the writing of this book, there are only rare situations where a scoped application would not be preferred over changes to the global scope.

Another aspect of scoped applications that should be contained within your platform development standards is how scoped applications should be structured. The most common approach is that each scoped application be a self-contained set of functionalities, configuration data, or data generated as part of the use of that functionality.

A single large custom application might still comprise multiple scoped applications, with each containing a standalone component of the overall solution. This pattern can be seen frequently in ServiceNow’s own platform capabilities, where each standalone capability that improves a core process (for example, the CAB workbench) is in its own scoped app.

One way to think about scoped applications is as microservices, with each scoped application providing a specific piece of functionality and a clear set of APIs and data tables through which other scoped applications (if any) interact with that functionality.

As with microservices, there is no single test for how much or how little functionality defines a “scoped application,” but teams should at least try to design scoped applications in such a way that the vast majority (more than 90%) of functionality contained within the scoped app can be testable through the usage of simple mock data. This test will encourage teams to design strongly decoupled applications and simplify their scoped app’s publicly facing APIs.
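
As a sketch of this idea, a hypothetical travel-booking scoped application might expose its capability through a single script include, keeping its tables and internal logic private to the scope (all names here are invented for illustration):

    var TravelBookingApi = Class.create();
    TravelBookingApi.prototype = {
        initialize: function() {
        },

        // Public API: other scopes create bookings only through this method,
        // never by writing directly to this application's tables.
        createBooking: function(travelerSysId, destination) {
            var booking = new GlideRecord('x_acme_travel_booking'); // hypothetical scoped table
            booking.initialize();
            booking.setValue('traveler', travelerSysId);
            booking.setValue('destination', destination);
            return booking.insert(); // returns the sys_id of the new booking
        },

        type: 'TravelBookingApi'
    };

For the API to be callable from other scopes, the script include’s accessibility would need to allow all application scopes; everything else can stay private, which keeps the public contract small and easy to mock in tests.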

Putting it all together, we recommend that your platform teams commit to a standard where:

  • Whenever possible, new functionality should always be created within a scoped application.
  • Each scoped application should represent a clearly defined functionality that can be added or removed from the platform without interfering with other scoped applications.
  • The vast majority of the functionality provided by a scoped application should be testable using mock data and stubs, without requiring additional dependencies to be stood up, to encourage good API contract design and decoupling.

Management of access controls and roles

In the absence of specific design considerations, create at least one role per table that provides create, update, and read access to the table. ServiceNow creates such a role by default when a new table is created, and you should, at the very minimum, keep this table-level access role.

Table access roles should be rolled up into persona-level roles (for example, Change Manager), which are roles associated with a specific user persona involved in multiple processes and/or user journeys, to control access.

Roles at the persona level should be assigned to groups; users are then added to those groups to be granted the roles.

When adding roles to groups, always add persona-level roles and never table-level roles. The actual capabilities of the persona-level role can then be configured by adding child table-level roles to it. In this way, when creating a new scoped application, the team can define the access levels of the persona and provide the right table access for that persona by simply adding the appropriate table-level roles to the persona. This reduces the need for custom ACL rules or scripts against various user personas, as in most cases, providing personas with the right table-level roles will provide the correct level of access for the persona role.
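
As a hypothetical illustration of this layering for the same travel-booking application:

    Group:  Travel Desk Agents            (users are added here)
      └─ Persona role: travel_agent       (the only role assigned to the group)
           ├─ Table role: x_acme_travel_booking_user    (create/read/update on bookings)
           └─ Table role: x_acme_travel_itinerary_user  (create/read/update on itineraries)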

Maintenance of design documents

A common refrain you hear from consultants and business leaders everywhere is that documentation becomes obsolete the moment it is created. Too often, this adage becomes a self-fulfilling prophecy, as it is used to justify corner-cutting in the creation and maintenance of important documentation, which, to the surprise of no one, results in obsolete documentation.

The fact is, documentation becomes obsolete only if teams let it become obsolete, often by choosing, consciously or unconsciously, to prioritize development throughput over maintenance activities.

However, some documentation can be critical to long-term value realization or achieving operational cost savings. The question is what documentation is important to keep and maintain and what documents should be optional or even avoided? Before we dive into which documents we recommend teams create and maintain, let us first go over a few guiding principles of documentation that inform our recommendations.

First, it is always good to create the least amount of documentation for the greatest number of use cases. This means that the documentation format and content standards are important – with the right standard and expectations setting, one or two document types can be created with care and then used to inform a myriad of use cases.

Second, whenever possible, make documentation part of the natural workflow of implementation work, and do not repeat documentation serving the same purpose in multiple places. For example, establishing code comment standards and naming convention standards for configured platform components will naturally serve as documentation that improves the speed of code understanding for new code readers, with no additional external documentation. Similarly, enforcing the creation of user stories and acceptance criteria by the functional team to communicate required platform configuration changes to developers enables user stories to also serve as a paper trail of these changes on the platform, including when they were made.

Third, manage complex documentation with interdependencies to one another in a digital, searchable repository such as a wiki or a knowledge base. This allows dependencies between documents to be managed more easily. The most important capabilities to consider when it comes to managing documentation are the ability to keep track of (and revert/compare) versions of the documents, track who made any changes, and create easy-to-maintain links between documents. The Knowledge Management capability of ServiceNow could conveniently serve as such a repository.

Now that we have established our guiding principles, here are three pieces of documentation that serve 80–90% of an operational team’s needs.

The process design document (or system design document) should serve as the bible of how the platform was designed to be used. This document should be in the form of a detailed process guide with every supported user and system action that is part of the instance design documented in a step-by-step manner. The process design document should explain the roles supported by the design, the processes and sub-processes supported by the design, and the platform interaction points and functional logic that is executed when users or automated actions perform these processes and sub-processes. Additionally, the design document should include details on how the platform and users executing the functional processes will need to handle errors. The process design document serves multiple purposes:

  • The document can be used as a detailed reference for users of the system on how to perform actions and activities. When more documentation time is available and when end user experience is critical, this document’s content can be summarized and formatted differently to produce training and end user documentation.
  • The document can be used as a functional testing guide. Each process and sub-process captured in the process design must be tested via manual or automated test scripts, and the behavior described in the design document should serve as the expected result of any tests. During testing, if capabilities are identified that change the process as documented, it should be considered an enhancement, while issues identified that prevent the documented process from being executed correctly should be considered a defect.
  • The documentation can serve to provide business context to development teams on required platform configurations. It is easy to reconcile this document with an agile methodology by treating each documented process/sub-process as the “Epic” user story that can then be subsequently broken down into actual user stories to assist in configuration.

The process design document may be maintained in lieu of the ongoing maintenance of functional requirement documents. This is because, after implementation, the process design document should meet all the accepted functional requirements of the project. This saves the team time and reduces the overall number of documents that need to be actively managed. Functional requirements that have not been met by the currently implemented platform should still be maintained or, at the very least, evaluated on a regular basis, but only as a function of demand management.

Architecture and master data documentation are needed and should be maintained with the purpose of helping technical teams quickly understand how pieces of functionality were implemented and their various components for the purpose of troubleshooting or making enhancements. Architecture documentation does not need to be overly complex and filled with diagrams. Instead, if process design documentation has been created, architecture documentation can be created that references processes and sub-processes and summarizes the components that enable those processes and sub-processes. For example, a process design document might contain a process for incident auto-resolution that also contains sub-processes for reopening auto-closed tickets. The corresponding section in the architecture document can then speak to the various platform components (a scheduled job that looks for stale tickets, the addition of a “reopened by user” metric, and more) that work together to facilitate the process. Architecture design documentation should be created prior to the detailed technical design of individual capabilities as it should serve as a guide for the technical design of individual components.

One type of content that must be documented in the architecture and master data documentation, but is not obvious from the process design, is the architectural design of shared components that are consumed by multiple processes. This includes any kind of foundational or master data and the data entered into the system following the Common Service Data Model (CSDM).

Test documentation should provide testing steps that testers can execute to test all or part of the design as captured in the process design document. Test documentation must be maintained in parallel with the design documentation and is used for regression testing on the instance. Testing documentation for manual testing should be maintained even as substantial portions of your functional configuration come to be tested via automation.

The preceding documentation forms what we consider the minimum required documentation to keep at the end of your platform implementation and during operations. Other documentation augments these core documents for specific purposes, depending on the needs of your organization and the use cases supported by your platform. The omission of some commonly expected document types deserves justification; you might not agree with the logic, but you will at least understand the reasoning. It is not necessary to maintain user stories, as their primary purpose is to serve as a communication tool describing what a developer should configure on the platform at the time of user story elaboration. While the user story records themselves should be kept (hopefully in a digital repository such as ServiceNow), there is no need to maintain these user stories once the configuration has been completed and deployed into production. If enhancements or changes to the platform are required, update the corresponding design documentation and write new user stories instead.

As-built configuration documentation, or detailed inventories of technical changes, is another documentation type that is commonly encountered, especially if your organization has engaged a consulting company for your initial implementation. While a detailed technical inventory of configuration changes is important for the initial knowledge transfer to the operational teams, we believe it is less necessary to maintain this documentation in the long term, provided that the platform team follows the conventions and standards of how to configure and name changes on the platform. For most simple features and configuration changes, the records in the ServiceNow instance itself and their descriptions and comments should speak for themselves. For more complex functionality with many moving parts, the higher-level architecture and master data documentation should serve as enough of a guide for technical teams to find what they need on the platform.

Automated test coverage standards and/or regression testing script standards

As we covered in Planning an Implementation Program for Success, regression testing is an important part of any implementation. The bigger and more successful your platform team becomes, the more important regression testing becomes. For large implementations, regression testing existing functionality can become a significant bottleneck to the pace of release if no automation is available.

The Automated Test Framework (ATF) capability in ServiceNow allows the development team to quickly create regression tests for business logic and maintain them on the platform. The best way to ensure appropriate test coverage and make sure the automated tests are maintained is to require the creation or updating of automated tests at the time the configuration is implemented or changed.

ServiceNow provides plenty of documentation on the uses and best practices of ATF, so for this book, we will summarize a highlight reel of the most important best practices and throw in a few of our own words of wisdom to be applied to your testing standards:

  • Use ATF to perform the functional testing of business capabilities. That is, automated test frameworks are best used to test end-to-end user journeys, processes, and sub-processes, and are not the best fit for unit testing specific script functions, individual business rules, and the like. In later chapters about how to operate the platform, we will discuss how you should keep the overall process steps enabled by your platform design clearly documented. If you follow that advice, it should be easy to create and group regression tests into the respective process and sub-process test suites and update them as those processes and sub-processes evolve.
  • Tests should always impersonate specific test users with specific roles; avoid writing tests that run with admin or other non-real-world privileges.
  • Set up and use the headless browser for UI testing. The headless browser allows ATF to execute UI tests without requiring a tester to manually open a browser window. Setting up the headless browser requires the installation of a Docker image on a server within your environment that can then host the headless browser for the ServiceNow instance; the most up-to-date instructions can be found in the ServiceNow product documentation for your version of ServiceNow. Setting up the headless browser and having your UI regression tests utilize it is strongly recommended so that UI regression tests can be executed with a minimum of human intervention.
  • Treat test design as seriously as feature and functionality design in your implementation standards. It is hard to design good tests, and creating good functional tests requires careful thinking, planning, and design, just like the creation of new capability. Whether the creation of automated tests is the responsibility of dedicated QA developers or the functionality development team, budget plenty of time for test creation and modification, with formal planning sessions and design documents created to make sure there is sufficient coverage of common journeys, failure scenarios, and error handling scenarios. When designing test cases, the team should be asked to map tests to functional requirements and/or the processes and sub-processes in your platform functional design document and use this mapping to determine the test coverage of the functional requirements. Strive for test coverage in the high 80%–90% range, making sure to cover the most common and important business processes first. A minimal sketch of a server-side test step follows this list.
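
To make the first point concrete, the following is a minimal sketch of what an ATF Run Server Side Script step might contain when verifying the outcome of a sub-process rather than a single script. The record number, state value, and exact step template shown here are illustrative and may vary by release:

    (function(outputs, steps, stepResult, assertEqual) {
        // Verify the end state of the process under test: the auto-reopen
        // sub-process should have moved the test incident back to In Progress.
        var incident = new GlideRecord('incident');
        if (!incident.get('number', 'INC0010001')) { // hypothetical test record
            stepResult.setOutputMessage('Test incident not found');
            return false; // returning false fails the step
        }
        assertEqual({
            name: 'Incident state after auto-reopen',
            shouldbe: '2', // 2 = In Progress (illustrative value)
            value: incident.getValue('state')
        });
        return true;
    })(outputs, steps, stepResult, assertEqual);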

Now that the team has been set up with strong standards, in the next section, we will discuss the day-to-day operational norms of the platform team in the maintenance of your platforms.

Operational processes and techniques of technical development

What should your platform technical team’s day-to-day activities look like? How do you manage your platform development team or teams? What should your team be watching for in their operational day-to-day and what should they be considering when they are doing development? In this section, we will go over a list of techniques for you and your team to keep in mind.

Teams should manage the accuracy of their estimations and the consistency of their throughput

Platform development teams in operations can become exceptionally consistent in their estimates of effort and their actual throughput if they put their minds to it. This applies both to teams attempting more waterfall approaches and to those using agile approaches.

All that is required to estimate and determine a team’s throughput is to have the team establish a unit for estimating average team effort (this could be story points, hours of average team effort, or some other metric that the entire team can agree to), a way of dividing work into chunks that can have a clearly measurable definition of done, and checkpoints where the number of work chunks that have been done since the last checkpoint can be measured.

A typical operational process that hits all of the preceding requirements is the scrum methodology, where user stories (with a clear definition of done and acceptance criteria) are used as the way of dividing work, story points as the method of estimating effort, planning poker as the way of obtaining the average team estimate, and, finally, sprints as the checkpoints for calculating throughput.

We will not detail the scrum methodology (or similar methodologies) in this book; instead, we will highlight a few bad behaviors that can influence the quality and success of these estimation and throughput measurement processes and discuss how to manage them or use them to drive the continuous improvement of your processes. Far too often, scrum masters, project managers, and managers give up on attempting to measure the metrics or dismiss them outright when, in fact, there were clear external factors that they could have influenced to improve their ability to obtain good metrics.

One common bad behavior is a failure to recognize the impact of skill gaps between individuals on a team in the scrum process. The team and project management must recognize that the estimate for each chunk of work should be close to the median estimate of the team, not the best or worst. In practice, this means that if the task is assigned to the team member with the weakest estimate, that team member should be supported (but not replaced) in the endeavor by someone (say, a team lead) to allow them time to grow into a better developer. One behavior to avoid in managing this issue is having the most skilled resources do all the estimating, creating unrealistic estimates or estimates that leave no time for less experienced team members to grow and learn. A related bad behavior is allowing less experienced developers on the team to repeatedly provide “infinity” estimates or announce that they cannot provide an estimate. If this behavior becomes prevalent, it will not only become impossible for the team to understand its throughput, but it will also prevent further investigation into why such situations exist. When the team or certain team members provide many “I don’t know” estimates, there could be several major reasons, each with different solutions.

First, it is possible that the requirement or user story provided is not detailed enough or is too large to estimate as a single chunk. These two issues are frequently related to each other – large features are unlikely to fit well into a single user story, and when a product owner attempts to force them in, the result is unclear user stories. A good tell that this might be the cause is when the entire technical team has trouble with the estimation; excellent product owners should recognize this as a sign that the story must be elaborated and broken down further before it can be estimated.

As an aside, excellent product owners and technical team leads should make sure during the elaboration and estimation process that the “missing details” developers are looking for are not technical in nature. Developers should be comfortable taking clear functional requirements and quickly translating them into a cohesive technical design hypothesis. Developers should also be comfortable asking for the business context that informs any technical choices that must be made, without explicitly asking the product owner to make those choices for them. For example, when estimating a hypothetical functionality where Sold Product records that have expired must have a case created and assigned to the Renewals team, developers should be able to determine whether this can be done as a scheduled job that periodically scans all sold products, or via an approach where each expiry event is pre-scheduled to trigger the case creation as the Sold Product expiry dates are set. There is also a possibility that a particularly complex or unique story might simply require some investigative time; in such cases, timeboxes should be established to prevent the team from getting lost in the investigation.
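
For the hypothetical Sold Product scenario above, the first option (a periodic scan) could be sketched as a scheduled script such as the following; every table, field, and group name here is invented for illustration:

    // Scheduled job (e.g., hourly): find newly expired sold products that
    // do not yet have a renewal case and create one assigned to Renewals.
    function createRenewalCasesForExpiredProducts() {
        var expiredProducts = new GlideRecord('x_acme_sold_product'); // hypothetical table
        expiredProducts.addQuery('expiry_date', '<=', new GlideDateTime());
        expiredProducts.addQuery('renewal_case', ''); // no case created yet
        expiredProducts.query();

        while (expiredProducts.next()) {
            var renewalCase = new GlideRecord('x_acme_renewal_case'); // hypothetical table
            renewalCase.initialize();
            renewalCase.setValue('sold_product', expiredProducts.getUniqueValue());
            renewalCase.setValue('assignment_group', getGroupIdByName('Renewals'));
            expiredProducts.setValue('renewal_case', renewalCase.insert());
            expiredProducts.update();
        }
    }

    function getGroupIdByName(groupName) {
        var group = new GlideRecord('sys_user_group');
        return group.get('name', groupName) ? group.getUniqueValue() : '';
    }

The alternative, event-driven design would instead schedule the case creation at the moment each expiry date is set; being able to sketch both options quickly is exactly the skill the estimation discussion above assumes.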

Second, it is possible that some or all developers are inexperienced in the area in which the functionality is being written. If the estimates are consistently unknown for a set of user stories, the team simply might not have the skills required to tackle what is needed. In such a case, switching the functionality to a different team, if one is available, might be appropriate; if not, the product owner and/or project manager should make sure that contingency for mitigating this risk is already incorporated into the project plan and budget.

Third, the issue could be one of skill, or a lack of confidence in that skill, on the part of a particular team member for that functionality. Good technical team leads should recognize when this is the case and decide whether this is an opportunity to upskill the team member or whether to give the work to someone else on the team for now. In either case, during the estimation, the technical lead should, at the very least, provide some guidance on how the feature should be implemented to see whether the team member can, based on that information, provide a real estimate (even if high). Repeated unknowns from a single team member on multiple features over multiple sprints could suggest a major skills gap that should be addressed by the team lead. It is important that admitting unknowns is not seen as something to hide, but instead as something that triggers proactive action from the team to invest in growing an individual’s skill set at the expense of some team throughput.

Measurements of throughput also have common pitfalls, the most common of which is giving up after determining that the results are highly variable and difficult to use as a predictive tool. Instead, teams should look carefully into why throughput varies so greatly from sprint to sprint, or between two measurement cycles. Make no mistake – high variance in throughput from iteration to iteration is something to avoid strongly, as it means that there is little certainty about how much work will get done and no certainty in the team’s ability to estimate. Teams should make consistent throughput a clear, agreed-upon goal and do everything they can to achieve it.

There are several reasons why throughput might be inconsistent. First, it is possible that the team’s estimates are wildly off; because throughput is measured by looking at the estimated effort, your throughput misses could simply be estimation misses. It is easy to determine whether this is the case: at each checkpoint, have the team record the actual effort and determine its variance from the estimated effort. If this variance is large, then the team might be overconfident or underconfident in its estimates, depending on whether the variance is negative or positive. Either way, teams should be asked to improve their estimation confidence levels.

There are various techniques available to help teams become better estimators. One is to ask the team to visualize being offered a bet where 90% of the time they win $5,000 and 10% of the time they win nothing, and then ask whether they would take that bet over an alternative bet where they win $5,000 if the variance of their estimate is less than 10% (that is, 90% accurate). If one bet is clearly preferred, then the team is likely overconfident or underconfident in its estimates, as the odds should be equivalent. Many studies have shown that mental exercises like this can improve the estimation abilities of individuals even without actually making the bet. Through this process of looking at variances and setting a team goal to reduce them sprint by sprint, behaviors that encourage better estimates will naturally arise within the team.

Another cause of throughput variance is distraction. The platform team might be called into meetings that have nothing to do with the work at hand, and be interrupted by taps on the shoulder, incident investigations, and other activities. Distractions can be devastating to individual throughput, as each interruption costs not only the time it takes to deal with the distraction itself, but also the time needed to return to a mental state in which the individual can work productively on their development tasks. If the estimations of individual units of work are right, then throughput variance is likely caused by disruptions and distractions. There might be some level of disruption that simply must be managed, but teams should not allow distractions from other work areas to significantly affect their ability to produce consistent throughput. If distractions due to other tasks are causing large throughput variances from sprint to sprint or cycle to cycle, scrum masters, project managers, and team leads must work hard to protect their team from such distractions and to help create behaviors within the team that avoid inter-team disruptions of throughput. Once again, by simply measuring and showing the variance of throughput from cycle to cycle to the team and making the management of variance a team goal, many positive behaviors will naturally arise to reduce the variance and its causes. What teams must avoid is complacency and the acceptance of bad metrics. The moment a team gives up on controlling these KPIs is the moment improvement will stop or slow down substantially.

So far, we have talked about throughput variance as a metric that leads to the identification of process or skill gaps within the team. What, then, should project managers or team leads do about low throughput in general? The first thing to consider is whether the concept of low throughput has any meaning. Generally, when project managers or team leads feel that throughput is not high enough, they have some theoretical throughput in mind. If throughput variance has already been managed, this theoretical throughput is unlikely to be based on evidence from the team in question. Keep in mind that throughput metrics are only applicable to the team performing the estimates and the work; two isolated teams cannot be compared directly without additional work to calibrate their estimates together, as the true value of a unit of effort will differ between teams under normal circumstances. So, when concerned about a team’s throughput being low, consider carefully whether the feeling is based on factual evidence or on a dream of a theoretical throughput that the team cannot truly achieve. This is not to say teams cannot strive for greater throughput over time, but even this is better established as a goal of producing positive change in the team’s throughput over time rather than a hard throughput target that may or may not be realistic.

Teams should use version control whenever possible and use it as a way of managing code review and quality control processes

In Chapter 6, Managing Multiple ServiceNow Instances, we discussed in some detail how to manage your ServiceNow instances and environments. This instance management approach can be augmented with the use of version control.

In conventional software development outside ServiceNow, version control allows many people to develop and touch the same or inter-related functionality at the same time, and it distributes the work of conflict mitigation to each individual developer as they pull down the latest version of the code incorporating the changes of others. In ServiceNow platform development, the usage of version control not only provides this capability but also has the added benefit of enabling individual developers to use personal developer instances for real development with a much lower impact if the instance expires, as you would be storing your configuration and platform code in your own repository and simply using the instance as a development platform.

The simplest way to get into the usage of version control for your team is to use Git and GitHub as your repository of choice. This is because ServiceNow supplies native integrations through Studio to GitHub, making setup a matter of looking up the official ServiceNow documentation online and creating your repository.

When working with version control for ServiceNow, a suggested pattern to consider would be to set up a repository for each scoped application being developed. For the next few suggestions, we are assuming your team already has some experience or has learned some fundamental Git concepts. If not, go look for the vast amount of documentation online explaining how it works. Many of the suggestions apply just as well in non-ServiceNow development, and we encourage your team to look up best practices for version control outside of this book for added suggestions and inspiration.

First, teams should decide on a branching strategy or workflow. One of the most basic version control workflows can be called the central repository workflow. In this flow, everyone working on features develops from the main code branch. Each developer merging their changes into the main code branch needs to incorporate features made by other developers into their own local repo prior to committing their changes to the tip of the main branch. On top of this basic structure, numerous branching strategies have emerged, such as branch per feature, where each feature is developed against a unique feature branch that, once completed, is merged into the trunk. This workflow provides the advantage of enabling releases to choose which features to include by merging individual feature branches into a release branch that is then committed to production. The following figure illustrates the branch-per-feature strategy:

Figure 8.1 – Branching per feature

GitFlow, another popular branching strategy, involves the use of two long-lived branches: one that keeps a concise history of final merged changes and another that is used as an integration branch for features. The advantage of this method is that the trunk branch contains a simple and clear list of changes, each of which is a minor or major version release, while the development branch can be used for commits and merges on a day-to-day basis and keeps a full history of changes to the complete code base.

The most practical advice that can be given with regard to branching strategy is to pick one that works for the team and be consistent in its application. There is no perfect branching strategy as each has its own drawbacks and advantages. Teams should take care and select the strategy based on the composition of the team and the types of work being done by the team. A small platform team making basic enhancements and defect fixes might use the central repository strategy as it is simple to keep track of and requires no further developer training beyond the basic usage of version control and its capabilities. In comparison, a large platform team where there is a team of developers per feature being built might benefit from the reduction of conflict resolution overhead by leveraging GitFlow or branch-per-feature strategies.

If you choose to have every developer use their own developer instances for the bulk of development, you should have a “Trunk” development instance that then serves as a snapshot of the trunk development branch – the branch that has a list of all completed development changes. This would include changes that are planned for, but not yet integrated with the QA or production environments.

Teams should make sure developers commit only completed changes to their working branches (for example, the trunk in branch per feature or the develop branch in GitFlow). Committed changes will quickly and inevitably be integrated into other developers’ local branches. This means that if a developer commits changes that are only partially complete, they will end up breaking other developers’ local versions when those developers merge in the changes. Therefore, it is important for developers to only commit to the shared branch when their functionality is done and verified.
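
As a concrete sketch of this discipline under a branch-per-feature workflow (the branch and remote names are illustrative), an individual developer’s flow might look like this:

    git checkout -b feature/booking-approvals origin/main   # start the feature from the tip of the trunk
    # ...develop and verify the feature, committing locally as you go...
    git fetch origin                # pick up work other developers have completed
    git rebase origin/main          # resolve any conflicts locally, away from the shared branch
    git push origin feature/booking-approvals
    # merge into main only once the feature is complete and verified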

Summary

In this chapter, we covered multiple areas where it pays to establish operational standards and to enforce, or strongly suggest, the use of technical and functional techniques that increase the effectiveness of your platform team’s operations and deliver greater value more efficiently to the organization. As stated in the introduction, treat the contents of this chapter as a series of principles and concepts for your team’s toolbox. Some of the advice might not be directly applicable to your team, and some of it you might have already implemented, but whatever the case, we hope it has made you think about and evaluate which of these behaviors you are (or are not) practicing, and understand why, so you can see whether you need to make changes.

If there is a key takeaway to leave you with at the end of this chapter, it is that good technique in both the functional and technical worlds requires discipline to develop, and creating discipline is a matter of establishing cadences, measurements, and standards that can be enforced, and of setting meaningful and measurable goals. Before you implement any of the techniques in this chapter, or others you find elsewhere, determine why they matter to you and your team. Make sure your team shares these goals and objectives, and you will be sure to succeed even if you run into obstacles and pitfalls at the beginning.

Setting your platform standards will help your platform team produce high-quality, maintainable configurations. In the next chapter, we will discuss how to structure and engage the rest of the organization to maximize the return on investment of your platform.
