Chapter 3

Assessing Conditions, Controls and Capabilities

Abstract

This chapter takes readers through the process of assessing their company's data needs. It describes assessment as the first step in data management and introduces the 3Cs of data assessment: conditions, controls, and capabilities. Because of the varied needs each company has for data management, the chapter details various methods for data assessment. The chapter then stresses the importance of using the resulting information to form a coherent image of the data needs. This final image includes understanding the capability level of the data stewards, the needs of the data and the effect that data can ultimately have on the company. Finally, the chapter stresses the importance of adjusting the Playbook to fit the individual needs of those using it.

Keywords

Assessment; Audit; Data; Data capabilities; Data controls; Data management
 
The assessment process is built with a policy-driven approach in mind. As any governance or risk-management executive will tell you, the trick to using policy to drive action is applying it in ways that people understand well enough to act upon. We have broken out three areas where an executive mandate produces requirements for action. This mandate comprises both mission or business strategy and compliance-based policies. The policies have to articulate priorities, resources, and measures at a level detailed enough to support each of these three areas, where action occurs. The first area is assessment and planning. This is where we continually assess and monitor our conditions, controls, and capabilities, using gaps to identify required activities from our Playbook and then establishing execution sequences to close those gaps. The second area centers on the execution approach, which includes a long-term roadmap and work plans. These plans include the Playbook activities that you’re going to use in the sequences and an approach to operationalizing governance. Playbook activities are structured to support both gap-fill and operational approaches, so many of the activities described in the Playbook are ongoing or continuous in nature. Finally, we have to support the creation of measures and communication forms so that the impacts of our work are understood broadly. This is where measures become critical, since we have to measure our progress against plan for capital and our ongoing operational efficacy. These three areas of assessment and planning, execution, and impact monitoring are all driven by business and compliance policy requirements. Since the Playbook gives us procedures, we can show very close alignment in executing those procedures with what the policy mandates require.
The first area that leverages policy-based executive mandates is assessment and planning, the focus of this chapter. We break down and describe in detail each of the assessment targets in the following two pages. Here we just want to highlight the assessment approach we’re using to look at the conditions, controls, and capabilities for each of those subjects and the activities that should be provided for teams to perform. This assessment will provide us with the scope of data and the scope of the people necessary to close the gaps related to issues with data and issues with our teams’ capabilities. Adding to a team’s experience while improving data conditions and controls is a key construct in the Playbook. The output from the assessment is a series of Playbook activities that are required to fill the identified gaps. Those activities are then sequenced into an execution sequence, which can be added to the overall roadmap you are using for your program or permanent function.
The second area is the execution of Playbook activities and their operational aspects. When you have a large program or corporate function for data and analytic governance, build out a roadmap that identifies which areas of the business, subjects of data, and terms you want to expand and improve upon. Identifying and expanding your Playbook set of activities to do this is a key step, because it provides standard methods and measures for your execution. This is typically a requirement of solid policies and procedures. Many of the activities are actually operational governance practices and can operate as an ongoing service. Identifying such activities is a compass in the Playbook, and many firms expand this area of the Playbook to describe large, complex global data services established at the request of the business or in partnership with it. Some of these services transition key resources doing data administration work in lines of business over to a global data service. Data services, like any other enterprise service, have service-level agreements and metrics associated with their delivery and outcomes. Data services often save the business time and money in their development and ongoing operational activities. A data service example is the deduplication and validation of names and addresses against live and deceased name indexes to reduce the scrap and rework of returned mail. More complex examples come from the Big Data arena, where we see new delivery tracking and emerging drone and automated delivery methods driving the need for much more precise delivery information. Identifying the right measures, and the best way to communicate those measures as they change and against periodic reporting requirements, is the last stage in the execution steps.
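To make the data service example concrete, here is a minimal sketch of a name-and-address deduplication and deceased-index check. It is illustrative only: the record fields, the exact-match normalization, and the in-memory index are assumptions; production services typically rely on fuzzy matching and commercial reference indexes.

```python
# A minimal sketch of a name/address data service: deduplicate a mailing list
# and flag records that appear in a deceased-persons index. Illustrative only.

def normalize(record):
    """Normalize a name/address record for comparison (exact match for brevity)."""
    return (
        record["name"].strip().lower(),
        record["address"].strip().lower(),
        record["zip"].strip(),
    )

def validate_mailing_list(records, deceased_index):
    """Drop duplicates and flag entries found in the deceased-persons index."""
    seen, clean, flagged = set(), [], []
    deceased = {normalize(r) for r in deceased_index}
    for record in records:
        key = normalize(record)
        if key in seen:
            continue                      # drop exact duplicates
        seen.add(key)
        if key in deceased:
            flagged.append(record)        # route to stewards for review
        else:
            clean.append(record)          # safe to mail

    return clean, flagged

records = [
    {"name": "Pat Jones", "address": "1 Main St", "zip": "60601"},
    {"name": "Pat Jones ", "address": "1 Main St", "zip": "60601"},  # duplicate
    {"name": "Lee Smith", "address": "9 Oak Ave", "zip": "60602"},
]
deceased_index = [{"name": "Lee Smith", "address": "9 Oak Ave", "zip": "60602"}]
clean, flagged = validate_mailing_list(records, deceased_index)
print(len(clean), "mailable,", len(flagged), "flagged for review")
```

A service like this would normally publish its match rates and rejection counts as part of its service-level metrics.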
Once the measures have been defined, they need to be vetted against the executive mandate to ensure all aspects of the policy-driven mandate are being addressed with regular measures and impact monitoring. This very dynamic area requires consistent measures as well as some flexible measures that can change over time. Flexible measures refer to services that change in scale, nature, or scope over time. Standard measures around data quality, availability, and impacts are maintained with baselines. Measures related to execution and resource levels are maintained against baselines over time. Executives can expect to see comparisons of long-term resources allocated versus levels expended and outcomes achieved.
The Playbook assessment process is focused on three key areas, which collectively provide a clear view of the current conditions, controls, and capabilities of the enterprise with regard to critical data and analytics. Many quality and governance programs focus on the conditions, and to some extent the capabilities, but often lack permanent controls and the measures that prove their efficacy. Balancing conditions, controls, and capabilities, as the Playbook suggests, ensures that our execution and operational support levels are consistently high enough to certify integrity and quality in our critical data and analytics. It’s also worth noting that this approach focuses on assessing the current state of these areas in order to create baseline measures that support ongoing reporting and heat mapping. In addition, the outcomes of the assessment work are inputs for prioritization and planning, along with the demand management cycle, which we will address later. In order to make the assessment effective in terms of measuring the gaps, it’s important to have baseline measures for each of the three areas in mind before you start. In this way, you ensure that your assessment is a fit-gap assessment that produces a list of gaps from expected values. Many data governance and related data management maturity models and assessment tools have been developed and published in the last 10 years. Each of these has unique value, but we came to the conclusion that the Playbook assessment approach was essential. This is because our emphasis includes a balanced focus on controls, whereas others are not as clear about that critical area. For example, it is important to be able to assess your capabilities with the controls focus in mind. Having strong data stewardship or analytic quality skill sets does not always translate into solid, reliable, and efficient controls in those areas. Thus it’s important to look at the controls as well as the capabilities in order to gauge the amount of energy that is being consistently applied to operating solid controls. It is also important to remember that, as capabilities improve, controls efficiency should improve, because it takes less effort for a well-schooled and experienced resource to identify, put in place, and operate an effective control. So we can expect improvements in one of these areas to drive improvements in the others. Let’s look at each of the areas to understand more about what we assess within them.
The Current Conditions area focuses on data stewardship, governance, and quality management. It also includes the cataloging and operation of business analytics. Assessing the current condition requires at least two elements. These elements have been detailed in very mature programs and corporate functions that are based on the notion of “trust and verify.” Starting with assessment surveys, while communicating an ongoing review of the work being done, enforces accountability in your stewards. Asking a data steward if they keep and maintain the catalog of their critical data is useful. But asking those questions while simultaneously reviewing that catalog and discussing its gaps and omissions is far more effective in assessing the situation, and it communicates the level of accountability you expect from stewards going forward. Similarly, asking about the business analytics change-control process, while simultaneously reviewing examples of currently used production analytics, is far more effective in validating the level of control in place and communicating the expectations for controls going forward. Finally, it’s worth noting that you will amass a series of worked examples that can be used for training and orientation of new stewards. Additionally, internal and external audit and regulatory experts are always looking for evidence of controls sufficiency in the conditions of data and analytics. They know that testing controls is an important part of their job, and that testing the way those controls result in solid data and analytics proves that the controls are being used consistently.
Assessing current controls is often best done in conjunction with internal audit and risk management professionals. At minimum, engaging these professionals in defining controls objectives and sufficiency tests is far more likely to result in a controls assessment profile that meets their requirements as well as your own. Risk and audit professionals are also continuously monitoring industry and public threats, risks, and best practices. This makes them solid partners in defining controls based on the risk and exposure we face with data and analytics. Assessing data and analytic controls often requires some segregation of business-based audit and balance controls from the IT-based technical controls. We are seeking an assessment of control effectiveness and efficiency, so we must test those who operate the controls in order to understand how well the controls themselves are being applied. We are simultaneously testing whether the controls actually help us monitor, manage, and remediate risk and exposure based on our specific risk profile. We don’t need controls, or tests of controls, for risks that we do not face or that do not rise to a level that necessitates controls. We do need to ensure that appropriate controls are in place and tested where there is known risk and exposure. Finally, it’s worth noting that assessments should be conducted on a periodic basis to create a baseline for ongoing controls effectiveness and cost. We should also be collecting anecdotal information about controls that are known to have prevented loss or other risk from being realized.
Due to the way we interact with certain people, it can be difficult to measure people’s data and analytics capability without skewing the measures. There are many well-known complications in measuring people’s behavior and attitudes in business settings. In this case, capability is best quantified through people’s collective experience in executing the Playbook, managing controls, and reporting on status and issues. Our experience indicates that tracking people’s outcomes with data and analytics governance is the surest way to determine their overall capability levels and commitment to quality. Here, as in current conditions, we’re looking across data stewardship, governance, quality management, and analytics to understand how experienced and effective our people are. People’s experience and the effectiveness of their part of our program or corporate function can be combined to determine our capability level in that area. However, we must not become too monolithic in the expression of summary information. Many capability models find it very difficult to provide granular views of differing capability or maturity levels across lines of business, geographies, and other business segments. We find it most useful to survey and observe, particularly with newer, less experienced people. Many times we see new people come into a company with a very long resume and list of accomplishments, but these do not always apply directly to the challenges in their new environment. So tracking their activities through survey and observation can identify where they are not aligning with the company’s approach. Similarly, when you have somebody who is relatively inexperienced but rapidly adapts to challenges, this sort of assessment approach can highlight those abilities.
Now that we’ve covered the basic areas to assess and talked about the use of surveying and observation, we can address another input in the assessment process. Status reports compiled by the overall program, which describe the progress of its work across both the organization and the data scope, are a primary input. From quarter-to-quarter and year-to-year, we can expect to see progress in expanding a program or function across the company and addressing commensurately larger bodies of data and analytics. Tracking that progression across the company and its data is a critical part of evaluating the growth of the program’s coverage. Many approaches use coverage as their primary measure of program success. We’ve found that doing so creates a gap between the perceived success rate and the actual success rate, because what we identify as covered is often not treated in a consistent manner. We therefore use our assessment survey and observations to confirm that each area of the firm and subject area of data formally addressed by the program or function is actually completely addressed to the level identified. Trusting your business partners, who have invested time and energy into doing this well, while openly verifying with them, is the surest way to confirm that coverage leads to success.
All of the assessment outputs should be considered as evidence of progress and improvements. Where there are clear gaps in the current conditions, controls, and capabilities, we can circle back to the Playbook to identify the required sequence of activities to close the gaps or to bring up our overall levels as needed. Please remember that the Playbook can be customized and extended to address things that are critical or somewhat distinct to your company. But these assessed gaps are only part of our input into the overall roadmap and work plan for Playbook-based activities across the company. Another key input is a demand management function. We address this in other areas of the book, but need to summarize some steps here to show how we add inputs from it into the overall roadmap and work-planning effort.
Solid demand management for data and analytics engages stakeholders to understand their problems, issues, requests, and enhancement needs, but it does not stop there. These various requirements for improvements and services must be vetted against two more levels. The first is management data initiatives, which can broadly address data and analytics needs that multiple groups or key functions of the company have identified as gaps in today’s environment. Those gaps often lead to a major project or initiative that addresses a number of the individual problems or issues we collected before. These initiatives also establish broader priorities and improve data and analytics based on what you choose to focus on and how far you go in making improvements. These initiatives are also critical because they have managed to acquire funding and executive sponsorship, and so are already a part of the committed demand pipeline.
The final level we have to include in our analysis of demand is a set of enterprise data and analytics priorities. These typically emerge from the executive team. Some of these will be reflected in the type of management data initiatives that are underway. Additional enterprise priorities will be reflected in strategic dashboards, goals and objectives, and mission maps, which the enterprise uses to drive organizational behavior. It’s critical to understand the enterprise data priorities in some sort of ranked order, and these must be tied to desired outcomes, goals, or measures in order to be useful. We typically weight the strategic goals by associating them with the initiatives that we reviewed and the requests that we have gathered. This gives us the ability to prioritize the coverage of initiatives and actions for requests and problems against the enterprise’s stated goals.
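As one way to picture the weighting step just described, the following sketch scores demand items by the weighted enterprise goals they support. The goal names, weights, and demand items are hypothetical; real prioritization would draw on the demand pipeline and the executive-ranked goals.

```python
# A hypothetical weighting sketch: enterprise goals carry weights, and each
# initiative or request is scored by the goals it supports. Names and weights
# are illustrative only.

goal_weights = {"regulatory_reporting": 5, "customer_growth": 3, "cost_reduction": 2}

demand_items = [
    {"name": "Customer master cleanup", "goals": ["customer_growth", "cost_reduction"]},
    {"name": "Finance hierarchy controls", "goals": ["regulatory_reporting"]},
    {"name": "Marketing list dedup", "goals": ["cost_reduction"]},
]

def priority_score(item):
    """Sum the weights of the enterprise goals this item supports."""
    return sum(goal_weights.get(goal, 0) for goal in item["goals"])

ranked = sorted(demand_items, key=priority_score, reverse=True)
for item in ranked:
    print(f"{priority_score(item):>2}  {item['name']}")
```

The ranked list then feeds the roadmap sequencing described next, alongside the assessment gaps.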
The combination of all of these components enables us to develop and maintain a roadmap or transition plan for data and analytics. It’s important that this plan have a communicated sequence, along with the logic behind why things are sequenced the way the plan indicates. Therefore we need to point back to the enterprise data priorities and management data initiatives in order to indicate why we have sequenced our work in the manner the plan suggests. This plan produces deployment schedules, which become a work order for various parts of the organization to engage in Playbook activities. It also drives the supported activities and initiatives for the shared or global data services partners.
This demand management process adds corporate, management, and local sets of priorities to the assessment outputs, which helps us craft our overall pipeline and sequence a roadmap based on the priorities of the company. It allows us to adjust the prioritization we use from the demand management side against the gaps that the assessment has identified we need to fill. Demand management can also point out emerging areas where additional data and analytics capabilities are needed and may in fact need to be added to the Playbook. Recent examples include the ways companies are integrating Big Data with conventional database and other technologies. We will talk a bit about Big Data in a future chapter, but the first thing to think about in terms of governing it is knowing when it makes sense to invest in the first place. Thus this approach allows us to balance the current assessment’s gaps and issues with the demand and priorities already in place, so that we address all of our issues in a prioritized way.

Assessment Methods

Assessing your conditions, controls, and capabilities is a job best done using multiple methods and based on a number of factors. A key factor to consider is the type of people you’re working with as you assess your 3Cs. People working in technical and operational areas have very limited time and may become impatient when asked to take large amounts of that time to answer questions or be observed. Another factor to consider is the way you ask questions or frame surveys. It is important that it not be overly obvious that you’re looking at these underlying conditions, controls, and capabilities; rather, you are looking at functional areas related to stewardship and governance. The survey below includes questions that are sectioned into four areas. These areas, while they relate to certain aspects of the 3Cs, are clearly written around functional activities and outcomes. We included the survey and observe columns to point out the fact that, while you may ask these questions in writing, through dialogue, or perhaps in a facilitated session or meeting, you’ll often need to include observations to validate the answers. The key factors in question framing and delivery are really centered around the nature of the people, the work they’re doing, and the work environment they do it in. People who work in open, fairly busy, and often fairly noisy environments, operationally or in technology, are less apt to be able to answer questions or fill in surveys quickly. It may be more useful to pull them into short, facilitated sessions, where you ask the questions and get the answers.
[Figure: assessment survey questions in four areas, with survey and observe columns]
The key is to keep the period of time you spend asking questions or asking people to complete surveys to a minimum. Ideally, engage them with questions that get you to two levels of detail. We sometimes go through an iterative cycle of asking a subset of our questions in either a facilitated session or sitting with people one-on-one as they do the work. We also use online surveys wherever possible, then follow up one-on-one or in small groups. The reason for the face-to-face or facilitated session, where we have groups that can be drilled down or questioned, is to get context from those groups. We often hear people tell us they are doing certain work related to stewardship, for example, cataloging critical business data terms and rules. But then, in a private session or small-group session with people they trust, they go further and share how that effort is typically done: only when a major data problem has been identified through reporting or analysis do they undertake the cataloging. These are all really valuable insights, and they reveal that your audience is committed to learning about how to do this work and is interested in finding a better way to move forward.
There is another critical factor in your method of getting questions answered, engaging with your stewards and subject matter experts, and understanding the underlying conditions, controls, and capabilities. This focuses on being able to observe, either directly or indirectly, the artifacts of the work you are asking people about. When we start to survey people or interview them about data controls, we must be careful to understand what they tell us in very specific ways. If, for example, a person tells us that they have detective controls for their financial and other critical data because they are reviewing the data manually, we know to ask about how they do that manual review. There’s nothing wrong with manually reviewing data when you don’t have profiling tools, but it’s unlikely, unless they’re using a fairly sophisticated approach, that manual methods can produce reliable, consistent results. Direct observation allows us to see the work they do as they do it, which means we can observe things like spreadsheets, Word documents, SharePoint documents, and even advanced stewardship and governance tools from Collibra and Oracle. This will help us understand how effectively they are using the tools they have, the change control they have over what they capture, and their methods for sharing these with other people. Indirect observation, where we request samples of these documents and artifacts as a result of our interview or survey-based questions, is another valuable way to assess the maturity of their capabilities and tools. It also provides insight into their documented controls and their understanding of the conditions of the data. So direct and indirect observation yields far deeper results than surveys and interviews alone can provide.
As you formalize your governance program or function, you’ll be engaging stewards and subject matter experts in various meetings and other work sessions. This enables you to continuously gauge their activities, outputs, and challenges. It’s important to formalize the way you look at their capabilities and to create meaningful measures that can be communicated simply and engender a sense of equality and fairness in evaluation methods. We’ll look at capability measures later in this chapter and call out the fact that gauging people’s capability maturity should be as empirical and comparable as possible across people and time. Finally, it’s important to treat survey and interview results with care. We often collect more contextual information in the written results or interview notes than the direct questions would indicate. People will often tell us additional information beyond the answer to the question that was posed. For example, the first question in our sample survey is: do you have formal executive sponsorship? We’ve had people answer this question “yes” or “no,” and then explain why they answered as they did, who is involved, and some of the history. That kind of information is much more sensitive than the direct answers to the questions themselves. It should be handled in a private and confidential manner, and anything we abstract from that context should obviously be treated with critical care as we engage in the process.
Let’s take note of several aspects of the survey, since this can drive your interview questions, physical surveys, or online survey process. Our survey focuses on four key areas: Stewardship and Governance from a sponsorship and resource level, Data Coverage, Data Quality Awareness and Management, and finally Data Controls and Outcomes. These categories of questions are functionally oriented and intended to gather insight when asked in a professional manner. Note that this survey contains 26 questions across these four areas. This survey should only be used in an environment where a governance function or program has already been established. Even if it’s just a local project, it will have formal leadership and activities occurring. In an environment with a functioning program, these questions would be relatively easy to answer, as they would be understood and answered affirmatively by those responding. We would need to tailor these questions to place them at a level that is appropriate for a firm or a business area that has not yet engaged in formal stewardship and governance. We would also adjust the categories of questions that we select based on the experience level we know to be in place or expect to find. Therefore, adjusting for all of these factors is something you need to do at the beginning and over time as maturity levels improve.
Now that we’ve covered various methods to get answers to our questions about the 3Cs, discussed the way you get these answers, and covered the way you engage people, it is time to consider the kinds of answers and how to aggregate them into a coherent picture. Producing a coherent picture of conditions, controls, and capabilities requires a bank of empirical information that is quantifiable as well as anecdotal. The best way to depict a complex system of people, processes, technology, and data, like a governance program, is to blend quantifiable data with qualitative data. When looking to take a snapshot of a program or function across the business, we’re looking to understand how that area is progressing on its journey toward maturity. We also want to understand the results that come from having solid controls and governance in place. We want to create visualizations based on measures that enable people to see the current conditions and compare them. But we also want to capture some of the anecdotal stories and context. This can be shared at least verbally, if not in writing, as we share the quantifiable data illustrations. Asking these questions in ways that produce quantifiable answers (e.g., on a scale from 1 to 10, how solid and consistent has your executive sponsorship been over the past 6 months?) as well as capturing stories and contextual information is essential. Finally, it’s very important to take the pulse of the people you rely upon to do important governance work. Their attitude, level of energy and engagement, and level of frustration and concern are all things you want to note as you survey and analyze results. We typically gather quotes from people about their overall experience. Often these are offered before we can even ask, but we always get the respondents’ permission and make the comments anonymous before sharing them with the group and with executive management.
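A small sketch of how quantifiable answers can be rolled up is shown below. It assumes 1-10 scores tagged with the four survey areas described earlier; the specific responses are invented for illustration.

```python
# A minimal sketch of aggregating 1-10 survey answers into per-area averages
# for later visualization. Area names follow the survey; scores are invented.

from statistics import mean

responses = [
    {"area": "Stewardship and Governance", "question": "Q1", "score": 7},
    {"area": "Stewardship and Governance", "question": "Q2", "score": 5},
    {"area": "Data Coverage", "question": "Q8", "score": 4},
    {"area": "Data Quality Awareness and Management", "question": "Q15", "score": 6},
    {"area": "Data Controls and Outcomes", "question": "Q22", "score": 3},
]

def summarize(responses):
    """Average the quantified answers by survey area."""
    by_area = {}
    for response in responses:
        by_area.setdefault(response["area"], []).append(response["score"])
    return {area: round(mean(scores), 1) for area, scores in by_area.items()}

for area, avg in summarize(responses).items():
    print(f"{area}: {avg}/10")
```

The anecdotal quotes and context gathered alongside these scores are kept separately and shared as a voiceover, not folded into the numbers.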
As you compile your results and start building visualizations, it is valuable and important to share draft versions of your graphical outputs and your storylines with the people from whom you gathered the information. You want to engender a sense of trust and belief that this process is sustainable and valuable. Showing people the ways you are aggregating their responses and the expected perception of those responses is a great way to keep the teams willing to participate going forward. Be aware that as you move through questioning people across a line of business or operational area, they’ll talk with each other very quickly, and some skewing in the way questions are answered can result. When we encounter this behavior, we sometimes change the questions as we move from subgroup to subgroup or even person to person. Having an array of similar but differently worded questions gives us the right outputs and is useful in countering that skewing bias. The last point you should consider is the control questions themselves. We sometimes find that asking about controls produces very uncomfortable answers. Occasionally we have found that there are few or no controls in place and that this lack of controls is reflective of a broader lack of controls. This can be very disconcerting, especially after a program has been running for some period of time. In these cases it is fair to capture what people share about broader controls, such as corporate governance, financial controls, and so on. However, this should not be shared with anyone except an executive sponsor in a private session and should not be attributed unless respondents are willing to be named.

Assessing Data Controls: Audit and Balance Controls

Let’s move into more specific assessment areas, assuming that both direct and indirect observation have been used to augment physical, electronic, or direct interview surveys. Understanding and applying a simple framework for data and analytics controls is central to evaluating the level of effectiveness of controls in place.
[Figure: audit and balance controls (ABC) grid]
This audit and balance controls grid describes control points and types we’ve used successfully for a number of years. The original ABC grid was developed by Knightsbridge Solutions, a firm where we were all partners for a number of years and which was known as a leader in data integration and analytic delivery. The ABC framework helps us point out simple ways of applying three types of controls across three basic levels of control points. We use detection, correction, and prevention to describe the different control types that are prevalent, observable, and effective in an analytic environment. We understand that detection is the core control type and is the most prevalent, since it is necessary to determine correction and prevention requirements and options. Many firms and lines of business have matured to the point of using detection along with correction in an iterative cycle. This is because prevention-level controls are the most difficult to build and deploy. They often require code-level and base-level changes to applications and databases. One example of prevention is the use of validation edits in an application. Validation edits ensure that when new records are added or changes are made to a record, certain fields, such as gender, age, and zip code, are forced to select from only known, valid values. We often see manual processes for the detection of variances from those values and after-the-fact corrections as the de facto standard when improving the base application is too costly or beyond our reach.
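As a concrete illustration of the validation-edit example above, the sketch below checks new records against known valid values before they are written, which is what makes it a preventive rather than a detective control. The field names and valid values are assumptions for illustration.

```python
# A minimal sketch of a validation edit: reject record changes whose fields
# fall outside known valid values. Field names and valid ranges are illustrative.

VALID_VALUES = {
    "gender": {"F", "M", "X", "U"},
    "zip": {"60601", "60602", "60603"},   # in practice, a full reference table
}

def validate_record(record):
    """Return a list of field-level violations; an empty list means the edit passes."""
    violations = []
    for field, valid in VALID_VALUES.items():
        if record.get(field) not in valid:
            violations.append(f"{field}={record.get(field)!r} is not a valid value")
    if not 0 <= record.get("age", -1) <= 120:
        violations.append(f"age={record.get('age')!r} is out of range")
    return violations

new_record = {"gender": "Q", "zip": "60601", "age": 34}
problems = validate_record(new_record)
if problems:
    print("Rejected:", "; ".join(problems))   # prevention: the write never happens
```

When this kind of edit cannot be added to the base application, the same checks can run after the fact as a detective control, with corrections applied downstream.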
Knightsbridge applied this type of thinking to the source, movement, and target population areas of data and analytics. Most of the analytics work we did with data controls was about backtracking control gaps as a result of analytic inaccuracy or inconsistency. Today, we know that analytics are built with many of these controls embedded, since the data integration, particularly with Big Data, is done with analytic streams and applications. Advanced analytics today therefore requires a combination of data and analytic output controls. Put simply, analytics now spans the source, movement, and target levels. Understanding where to place detective, corrective, and preventive controls across source, movement, and target locations is the job of data-quality and controls experts. Engaging enterprise risk management, audit, information security, and other experts in the organization or from consulting firms is often necessary and is a good practice to ensure control sufficiency. Remember that your internal audit and enterprise risk management professionals will at some point be reviewing data and analytics controls for critical enterprise data, including financial data, customer data, and regulatory reporting. The ability to define these control types and their locations or placement is critical to assessing your controls efficacy.

Controls Reporting—ABC Control Levels

Below is an example of a controls assessment report we can use to summarize control conditions for an area of your program or function.
[Figure: controls assessment report example]
This chart summarizes some statistical measures gathered in observations and interviews and applies them in a way that allows us to score them for audit trail and overall control levels. One of our goals with this type of report is to use a format and communication method similar to what enterprise risk management typically uses to measure corporate governance for business and operational controls. Bar and pie charts, radar diagrams, and advanced visualizations are all equally useful and depend on the appetite of the audience. We always try to understand the range of people who will be acting upon these reports. So while we need to summarize at a level appropriate to executives and sponsors, we also need to make sure this resonates with the people whom we have interviewed and whose conditions we have reviewed. It’s also important to note that these things change over time, so being able to compare one period to another over one or two quarters is important. The ability to look at them side-by-side, see the changes, and even put directional indicators on the measures is also useful. We have found that putting too much on the slides makes them unreadable, so it’s a constant balancing act that you’ll sort out as you understand your audience and needs over time. In addition to a report like this, the anecdotal information you’ve gathered from people, without attribution, provides a valuable voiceover you can use when you present these reports. When you have an insufficient area, as this report does, such as data storage and delivery for detection and correction controls, it’s important to have a voiceover about whether that area is going to change in the near future or has other challenges that would explain this state. It’s also important to gather anecdotal evidence about the level of executive support your respondents are getting and the resources they have in order to fully contextualize the report.
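For readers who want to see the mechanics behind a report like this, here is a hedged sketch of one way the roll-up could work: each control point and control type gets a coverage ratio, which is then banded into a rating. The sample observations and thresholds are illustrative, not the values used in the figure.

```python
# A hedged sketch of rolling up observed controls into per-cell ratings for a
# controls assessment report. Sample counts and thresholds are illustrative.

observations = {
    # (control point, control type): (controls observed working, controls expected)
    ("Source", "Detect"): (4, 5),
    ("Movement", "Detect"): (3, 5),
    ("Movement", "Correct"): (2, 5),
    ("Target", "Detect"): (1, 5),   # e.g., a data storage and delivery gap
}

def rating(observed, expected):
    """Translate a coverage ratio into a simple band for the report."""
    ratio = observed / expected if expected else 0
    if ratio >= 0.8:
        return "Sufficient"
    if ratio >= 0.5:
        return "Needs attention"
    return "Insufficient"

for (point, control_type), (observed, expected) in observations.items():
    print(f"{point:<10} {control_type:<8} {observed}/{expected}  {rating(observed, expected)}")
```

Keeping the raw counts alongside the bands makes quarter-over-quarter comparisons and directional indicators straightforward.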

Capability Measurement

Earlier we discussed four-level capability determination. We do not determine or assess capability generically or monolithically at the pure program level. We’ve found that it is simply too aggregated a level for the score to be meaningful or actionable. Instead, we identify the kinds of activities that must be performed and the scope of that performance across data and the business. We use a series of three-dimensional constructs to help visualize a company and its data and show where these capabilities are being exercised and applied in those areas. Assessing people’s capability is very valuable and important but also very sensitive, so what we’ve learned to do is determine with key sponsors what level of experience we require people to have in order to progress through the various designations of capability.
We often use the karate belt analogy, such as white, green, brown, and black belts, to depict or reflect the overall level of experience people have gained over time. Some firms choose to give credit for attending industry conferences and training sessions along with credit for the work that has been performed in the firm. The key is that each belt is reflective of actual work performed following your Playbook approach. As a person moves from white, to green, to brown, they develop mentoring and leadership skills as a result of moving up in the organization and managing the team or teams of people doing data and analytic governance work. These are often people who lead data and analytics delivery and take on governance functions as part of their responsibilities, which is a highly desirable outcome since it ties accountability for the quality and control of data and analytics to the person responsible for delivering them in the first place. A black belt is generally reserved for someone who has successfully led multiple successive program iterations or corporate waves of governance across multiple parts of the business. These belts are not grades like those we might apply to a student on a report card. They must never be communicated as an indication of the quality of work people are doing. Our approach is to treat people as successful based on their continuing roles and responsibilities. It is the job of their direct supervisor or manager, as well as the indirect data governance and analytic governance managers, to continuously assess their performance and manage them accordingly. This capability measurement is about the experience gained and assumes ongoing success.
This is an example of different levels of experience and the resulting belts that are awarded based on that experience. Notice that we include things like formal classroom and online training. We also include testing and mentorship activities, but we are very specific about the level of experience and outcomes necessary as part of the test to make it to the new belt.
[Figure: experience levels and the belts awarded based on that experience]
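The following sketch shows one hypothetical way to turn logged, verified experience into a belt designation. The criteria and thresholds are examples only; as noted above, the actual gates should be agreed with your sponsors, and belts must never be used as performance grades.

```python
# An illustrative sketch of belt determination from logged experience.
# Thresholds are examples, not the authors' published criteria.

def assign_belt(person):
    """Map accumulated, verified experience to a belt designation."""
    if person["programs_led"] >= 3 and person["mentoring_hours"] >= 40:
        return "black"
    if person["playbook_iterations"] >= 4 and person["mentoring_hours"] >= 10:
        return "brown"
    if person["playbook_iterations"] >= 2 and person["training_done"]:
        return "green"
    return "white"

steward = {"training_done": True, "playbook_iterations": 2,
           "mentoring_hours": 0, "programs_led": 0}
print(assign_belt(steward))   # -> "green"
```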

Data and Analytic Conditions Reporting—A Risk and Exposure Approach

We’ve covered the ways we can communicate controls and capabilities; now let’s address data conditions. Many programs benefit from general health-status views of their program’s coverage in terms of data, organization, adoption, and so forth. We’ll look at those areas in a moment. We’ve found that a heat map or similar way of looking at data risk and exposure is the best means of expressing a program’s coverage. This is another area in which partnering directly with enterprise risk management and internal audit is extremely valuable. This work should be done in conjunction with those professionals, as they will be able to determine the appropriate thresholds and scoring values for enterprise risk and exposure in key areas.
[Figure: data risk and exposure heat map]
The first area is data at risk, or risks specific to data. Some of these can be visualized as gaps in controls around the production and use of data, which can put data integrity and reliability at immediate risk. Risk to data is a key driver for the analysis of appropriate control types and control points, which will ultimately mitigate or resolve those risks. Data that is at risk for inconsistency due to a lack of control over the way data is entered, moved, or stored clearly needs detection controls at a minimum and prevention controls optimally.
The second area is risk arising from the use of data that may be of poor integrity or quality for the intended use. It’s important to understand that data quality and integrity are often a suitability issue. We can’t afford to get absolute about quality levels, since many operational processes and systems require data at a certain level and timeliness that may not lend itself to advanced quality controls and levels. This is never a matter of dogma or philosophy for us; it is much more about pragmatic approaches, which ensure that the businesses are able to produce and consume data appropriate to their needs. This approach always emphasizes detective controls, so that you know what the quality levels are and what risks you may be taking in order to process business more quickly or efficiently. Risk arising from data requires more assessment of the downstream impacts of integrity and quality gaps in the data. This includes embedded data, data integration streams, and advanced analytic applications. A key example that has proven itself over more than a decade in this space is the use of the Data Relationship Management tool from Oracle Corporation by advanced analytic users of Oracle’s Hyperion tools. The Data Relationship Manager, or DRM, provides tremendous control over the generation and change of mission-critical financial hierarchies and complex data relationships. Using this tool appropriately creates a preventive control for major financial reporting and analytics, which prevents risks from data arising and driving significant exposures in financial statements, projections, and decision-support outputs.
Together, these two areas of data risk and exposure should be measured and heat mapped with executives on a regular basis and in collaboration with enterprise risk management and internal audit. Getting the measures correct for the proper placement in the heat map and identifying the difference between risk from data and risk to data areas has proven to be a critical value proposition for our clients.
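A minimal sketch of the scoring behind such a heat map appears below. The subject areas and 1-10 scores are invented; in practice the thresholds and placements would be set with enterprise risk management and internal audit, as described above.

```python
# A minimal sketch of scoring subject areas for "risk to data" and "risk from
# data" and bucketing them into heat-map bands. Scores and subjects are invented.

subjects = {
    "Customer": {"risk_to_data": 7, "risk_from_data": 4},
    "Finance": {"risk_to_data": 3, "risk_from_data": 9},
    "Product": {"risk_to_data": 2, "risk_from_data": 2},
}

def band(score):
    """Translate a 1-10 score into a heat-map band."""
    return "High" if score >= 7 else "Medium" if score >= 4 else "Low"

for subject, risks in subjects.items():
    to_band = band(risks["risk_to_data"])
    from_band = band(risks["risk_from_data"])
    print(f"{subject:<9} risk-to: {to_band:<6} risk-from: {from_band}")
```

Keeping the two dimensions separate on the map makes it clear whether the remedy is better controls on the data itself or better controls on its downstream use.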

Overall Reporting and Visualization

We now cover the assessment and the process of scoring and recording it for all three areas: conditions, controls, and capabilities. What we found in repeated engagements with clients across industries is the need to visualize the firm, or the area of the company we are dealing with, on a three-dimensional scale. Existing maturity models tend to look very flat and make it difficult to visually differentiate across different parts of the company or the data and analytics areas you are assessing. There are two ways we’ve seen this work effectively in a three-dimensional world. The first is a cube representation that can be disassembled and manipulated for more granular expressions of overall conditions or specific areas of concern. The other approach is to use geographic models for large organizations, but that model is more specific to certain industries, such as oil and gas, so we elected to use the cube representation in this book.
[Figure: 3-D cube of capability areas, business areas, and maturity levels]
This cube representation allows us to color code subcubes in a way that gives us a very quick visual identification of overall conditions. In this example, you see the capability areas represented across the bottom row, the business areas going back in depth, and the level of maturity achieved from bottom to top. This is a relatively intuitive visual that can be shared all the way up to the executive level. Using different colors, and coloring from the bottom to the top and across capability areas by business area, visually represents the penetration of governance activities into the organization. There is one note of caution on any and all reporting and visualizations we need to point out here. We view any form of assessment we provide management with as potentially critical of their current conditions and therefore very sensitive. We handle that information with great care in terms of our own systems and distribution, as well as the way we present it to clients. We often preview our detailed results with the people from whom we gathered them, while holding the aggregate reporting and visualizations for a very select few sponsors. These sponsors can direct us on how far and wide they want to distribute or share the information. They typically choose to distribute and expose the results broadly in order to move behaviors and develop reasonable expectations about what must be accomplished. But where results are highly sensitive and a program or corporate function is just getting underway, executive sponsors often use discretion and control distribution until they feel people will fully understand what it represents. It is essential that people do not take this as a poor score or result on their part, which could negatively impact their careers. Instead, they should be able to contextualize it with regard to the amount of change they feel is needed and the sponsorship they are willing to provide.
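To illustrate the cube idea in data terms, here is a small sketch that color codes subcubes by whether a given maturity level has been achieved for each capability area and business area. The dimension values and achieved levels are placeholders.

```python
# A hedged sketch of the cube: a capability-area x business-area x maturity
# structure where each filled subcube is color coded. Values are placeholders.

capability_areas = ["Stewardship", "Governance", "Quality", "Analytics"]
business_areas = ["Retail", "Commercial", "Operations"]
maturity_levels = 4  # bottom (1) to top (4)

# Maturity achieved per (capability, business) cell, taken from the assessment.
achieved = {("Stewardship", "Retail"): 3, ("Quality", "Operations"): 1,
            ("Governance", "Commercial"): 2}

def color(filled):
    return "green" if filled else "grey"

# Build the cube: a subcube is "filled" up to the achieved maturity level.
cube = {
    (cap, biz, level): color(level <= achieved.get((cap, biz), 0))
    for cap in capability_areas
    for biz in business_areas
    for level in range(1, maturity_levels + 1)
}
print(cube[("Stewardship", "Retail", 2)])   # green: level 2 achieved
print(cube[("Quality", "Operations", 3)])   # grey: level 3 not yet achieved
```

The same structure can be sliced by business area to produce the decomposed, exploded views discussed next.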
[Figure: exploded view of the cube decomposed into 3-D grids]
The overall 3-D cube is very useful at the executive and management level. Snapshots of this as conditions change over time are also directly comparable and useful in board, steering committee, governance council, and other group settings. We have found with working groups, lines of business, and other regional or local groups that decomposing the cube into 3-D grids is a more useful way to dive into a granular set of views. These decompositions provide meaningful views of a subset of the business, typically a business area or geography, as you see in this exploded view. Determining the right way to visualize and communicate information is always a challenge; it puts us in the business of providing analytics and visualization. So let’s be clear about governing our own data and analytics as part of this process. We should be following our own advice and guidance in terms of using Playbook activities to control the quality and integrity of the data in the analytic outputs we produce for our clients. Put simply, it means that we should treat this as testable data and analytics. Our work product is our attestation of our assessment and analysis of their conditions. We need to be prepared to provide evidence of our quality-control process, even as we assess our clients’ quality-control processes and outcomes. As a client, if consulting or internal resources are engaging in this work, it is certainly fair and reasonable to ask on a regular basis about the controls they apply to their own processes, data, and the analytic outputs they are asking you to rely upon.

Summary

This chapter has provided specific examples and an approach, including an assessment framework and a controls framework, for understanding current conditions, controls, and capabilities. These examples have been used with many clients and industries over two decades of shared experience. That said, it’s equally important that these examples and approaches resonate with you and your needs. So we encourage you to review them in detail, consider how they would work for you, and contemplate what changes might make them even more valuable for you. The key to the Playbook-based approach is that we use standard methods and introduce standard measures to understand and improve our data and analytics.