CHAPTER 8
Measure and Analyze
On two occasions I have been asked, “Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?” I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.
—Charles Babbage (1791–1871), mathematician and inventor of the first automatic calculator, published 1864
 
Almost everybody who considers a knowledge flow management initiative very quickly arrives at the topic of measuring. The thought process is usually straightforward. A typical statement I have heard is “You only get what you measure.” Those who manage an initiative often believe they can motivate people to participate by creating measures, assigning some targets, and just ensuring that the targets are met. But measuring is a more complicated topic than you might think at first; it seems to be one of those topics for which people have not yet found really good solutions.
There are a number of issues that I have encountered with measuring:
• It is difficult to find meaningful measures for judging knowledge flow management initiatives.
• It can be easy to drive some behavior, but very often it is not the right behavior.
• Trying to measure exactly is difficult, as a lot of the things involved with knowledge are fuzzy and not easy to grasp with hard, fully quantifiable measures.
• Measures can easily be deceiving. They might satisfy the need to have “a measure,” but if you dig deeper, you will find that they do not measure anything meaningful or that they are rather easy to cheat.
In early discussions with other experts, I almost came to the conclusion that you should stay away from measures in knowledge management altogether. But over the years I found there are actually some ways in which measuring can bring value, if you measure with the right expectations and interpret the results in a cautious and balanced way. Moving away from my early position of “If you can’t measure exactly, don’t measure at all,” I decided to start with “Why not measure to get some approximate figures and see where they might lead?”
Typical measures used for systems that collect contributions from individuals for others to reuse are contribution and usage numbers. And we used those for our initiatives as well. Some lessons we learned around those measures include:
Contribution numbers. Numbers are only one aspect of contributions. Another one is quality. Be careful not to think that more is necessarily better. But to some degree, contribution numbers can actually be interesting. For one, they are an indication of participation if put into the proper perspective. In the case of ToolPool, it might not say much if there are three or five contributions per month from a given country, but if this number is considerably higher or lower than for all the other countries, it can be a useful indicator. A relationship over time (i.e., increasing or decreasing) can also be a useful indicator. The tricky part is to choose meaningful targets. One way to do that is to set targets relative to certain averages. In general, as getting people to contribute can be difficult, you usually want more, not fewer, contributions as long as the quality is right.
Usage numbers. With usage numbers, you have to look very carefully at what “usage” really means. In the case of ToolPool, it is very hard (and would be rather costly) to determine if a certain tool has actually been applied, to what degree, with what success, and producing what value. The lowest level that is possible to determine fairly easily is whether somebody has downloaded a tool. Downloading does not mean it has been used, though. Even if it was used, you would not know the produced value until you analyzed each usage in detail. As mentioned, this can be costly, and obtaining that kind of information could inhibit the actual process in such a way as to cut into normal productivity. But similar to contribution numbers, you can look at usage numbers in relative terms: usage per organizational unit, usage over time, and so on. One interesting relationship I have found over the years is that usage over time drives contribution. I believe this is true for these reasons:
• If people use contributions, they realize that even simple contributions can be very useful to them and that they themselves might have something of similar value to offer.
• There is an element of competition and belonging. When people see that many others contribute, they do not want to be left behind, so there is a certain peer pressure. This can be driven further by making competitive numbers transparent, as mentioned earlier. Also, when people use many of the entries and experience their usefulness, they become more likely to feel that it is their time to give something back to the community.
Looking at usage numbers in detail is important to see not only whether there is actually reuse participation but also whether there is a chance that contribution participation will increase. And if contribution is driven by usage, it is more likely self-driven participation, which often produces higher-quality contributions than participation driven by bonus plan components.
One key lesson we learned was that you should use a balanced set of measures. Contribution or usage numbers should be part of a more balanced collection of measures that includes softer measures, like the process measures discussed in the following section. While contribution or usage numbers might be dangerous to interpret as a sole indicator of the performance of a knowledge flow management initiative, as part of a more balanced approach they can actually be of value.
Some years back I actually started to take the usage measure and tried to relate it to a value figure for ToolPool, just to get an estimate. I decided to combine usage numbers with some survey data and some assumptions that I tested in numerous discussions with colleagues around the world.
In a survey we asked users what percentage of the tools they downloaded from ToolPool they actually used at least once. The average that came back was a little higher than 25 percent. I then made the assumption that every real usage would save a consultant about four hours. Discussions with project managers, consultants, and others confirmed that this number is quite conservative, as in many cases the savings can be counted in days rather than hours. Because I wanted to stay on the very conservative side with my estimates, I stuck to the four hours. Then I made another conservative estimate of the cost of an hour of a consultant’s time, based on internal cost, which was about €100 at the time. Again, this is conservative because it does not take additional opportunity costs into account. As it turns out, that was actually all I needed to get the type of minimum estimate I was after. If every fourth access ends in a usage and that usage saves four hours at €100 (€400 in total), it means that, on average, every access saves €100.
Before I presented that formula to anybody, I had numerous discussions with a range of people, both internal and external, and asked them to shoot holes in it. One early, very good comment was, of course, that I did not take the cost of running the initiative into account. So I did a conservative (high) estimate of the costs for support and contribution efforts. With about one tool per day and an average effort to prepare and process a contribution of about eight hours (€800), plus a fixed cost for the ToolPool team of about €150,000 per year, this amounted to about €430,000 per year. Compared with about 75,000 downloads, which amount to savings of about €7.5 million, the return on investment was definitely on the positive side.
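The arithmetic behind this estimate is simple enough to capture in a short script. The following sketch just reproduces the figures from the last two paragraphs; the inputs are the conservative estimates from the text, and treating the 75,000 downloads as an annual figure is my assumption for the sake of the illustration.

```python
# Back-of-envelope ROI sketch using the rough figures quoted above.
# All inputs are the deliberately conservative estimates from the text;
# the 75,000 downloads are read here as an annual figure (an assumption).

usage_rate = 0.25           # share of downloads that lead to at least one real usage
hours_saved_per_usage = 4   # conservative hours saved per real usage
cost_per_hour = 100         # internal cost of a consultant hour, in EUR

savings_per_download = usage_rate * hours_saved_per_usage * cost_per_hour
print(f"Average savings per download: EUR {savings_per_download:.0f}")   # EUR 100

# Cost side: roughly one contribution per day at about eight hours of effort,
# plus a fixed cost for the ToolPool team.
contributions_per_year = 350                 # about one tool per day
contribution_cost = 8 * cost_per_hour        # EUR 800 per contribution
fixed_team_cost = 150_000                    # EUR per year
annual_cost = contributions_per_year * contribution_cost + fixed_team_cost
print(f"Estimated annual cost:    EUR {annual_cost:,}")                  # about EUR 430,000

# Benefit side: about 75,000 downloads at EUR 100 average savings each.
annual_savings = 75_000 * savings_per_download
print(f"Estimated annual savings: EUR {annual_savings:,.0f}")            # EUR 7.5 million
print(f"Estimated net value:      EUR {annual_savings - annual_cost:,.0f}")
```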
How conservative the estimate is becomes clear when you look at what I call the gut feeling factors. Those are additional elements that I know are there but are very hard or costly to quantify exactly, so I did not account for them in the original formula:
• If somebody downloads a tool and passes it on to one or more colleagues (something we know from interviews and anecdotes happens quite often), we do not count that as a download. It will be on top of our numbers.
• A situation where somebody provides a tool and somebody else provides an improvement that is a value-add is not accounted for.
• A number of ToolPool tools end up as part of developed products or will be turned as a whole into a new product. This can save considerable funds for development. We have not quantified this so far, but the value could be quite extensive.
• Through a global exchange of tools and by making use of the collective knowledge of all our consultants, SAS can satisfy customer requirements faster and draw from a wider pool of innovations. This results in higher customer satisfaction and consequently higher sales.
Based on stories and personal feedback from users, I know all those gut feeling factors are there and provide additional value on top of the basic calculation. But even with the conservative calculation, there were 770,000 accesses in total for ToolPool that have saved the company over €70 million over the years—all costs deducted.
The results were surprising when I first did the calculation. Since then I have used these calculations mainly to show tendencies and make it clear that there is considerable value in such a focused initiative.
More generally, measures and the visibility of value can help to build the business case that is needed to get ongoing initiative funding. But do not expect this type of business case to sell itself; you will need to deal with a number of other factors, such as company strategy and politics.

MEASURE TO GET WHAT YOU WANT

One key to measuring is not to look at measures in isolation. You should create a list of measures that, taken together, give an indication of the performance of a given knowledge flow management initiative. In fact, you should go a level higher and combine multiple initiatives into a collective scorecard. The measures can include simple direct measures, such as contribution and usage, but they should also include some indirect measures that by themselves might not be clear indicators of performance but that experience shows have an indirect influence. Some examples include the following (a minimal scorecard sketch follows the list):
• Process measures that cover these questions:
• Do the personal performance reviews include knowledge-sharing behavior components? This by itself will not ensure positive behavior, but the fact that it is a topic twice a year during review discussions raises awareness.
• Do job descriptions clearly spell out that knowledge-sharing activities are part of the responsibilities? Again, by itself, this might not change much, but it gives employees who want to spend time on these activities a way to justify them, and it gives managers who want to improve a person’s knowledge-sharing behavior a basis for discussion.
• Are departmental, divisional, or location-specific knowledge exchange or transfer events held on a regular basis?
• International measures could cover questions like these:
• What is the participation rate of a given suborganization at international knowledge exchange events? To what degree is a country participating in global communities of practice (CoPs)?
• What is the level of involvement in international expert exchanges, such as the resource-sharing process mentioned earlier?
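To make the idea of a combined scorecard more concrete, here is a minimal sketch of how direct and indirect measures might be rolled up into a single indicator. The measure names, weights, and the 0-to-5 scoring scale are purely illustrative assumptions, not the measures actually used at SAS.

```python
# Minimal balanced-scorecard sketch: each measure is scored on a 0-5 scale
# (how raw numbers map to that scale is up to the initiative team) and given
# a weight. The weighted average is one indicator among several, never the
# single number a decision should hang on.

scorecard = {
    # direct measures
    "contributions_vs_target":        {"score": 4, "weight": 0.20},
    "usage_trend":                    {"score": 3, "weight": 0.20},
    # indirect process measures
    "sharing_in_performance_reviews": {"score": 5, "weight": 0.15},
    "sharing_in_job_descriptions":    {"score": 2, "weight": 0.10},
    "regular_exchange_events":        {"score": 3, "weight": 0.15},
    # international measures
    "event_participation_rate":       {"score": 4, "weight": 0.10},
    "cop_involvement":                {"score": 3, "weight": 0.10},
}

total_weight = sum(m["weight"] for m in scorecard.values())
overall = sum(m["score"] * m["weight"] for m in scorecard.values()) / total_weight
print(f"Overall initiative score: {overall:.2f} out of 5")
```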
Taking a number of measures (and for your organization those might look quite different) and looking at them from a more balanced view can provide a much better indication of performance than focusing on single measures only.
In my experience, the statement “You only get what you measure” might be true to some extent, but you have to be careful to “get what you want,” as badly chosen individual measures can very easily drive people’s behavior in unwanted directions.
In general, it is very hard to measure the actual flow of knowledge, as high-value knowledge exists only in connection with humans. And measuring the flow of information does not give a complete picture of the knowledge that can be created from the information being transferred, as that depends on the prior experiences of the recipient.
That is why most of the time a measure will be more of an indicator of a potential knowledge flow, not a direct quantification. From my experience, these categories of measures play a role in knowledge flow management:
Participation measures. System contributions and usages, attendance, and frequency of action (e.g., on a collaborative Web site).
Value measures. Time saved; money earned by using a contribution obtained via a knowledge flow management initiative; portion of sales income secured based on using prior experience; patent revenue and other value driven through an innovation based on knowledge created with the help of a given initiative.
Cultural measures. Indicators that show a cultural shift toward participants being more open and willing to share their knowledge. This might be a measure showing to what degree local staff are turning to a global CoP instead of trying to solve all problems locally.
Quality measures. Measures that look into completeness or format. Beyond that, measuring quality is very tricky, as I discuss in more detail in the next section.
Some of the dangers with participation measures have already been discussed. Let us look at value measures for a moment. With the exception of patents, value measures are not easy to quantify. To get accurate figures, you would have to be very diligent and detailed about assessing how whatever is being reused is actually applied. To do that for an initiative with many entities flowing could involve high costs. Some of the exchange may be easily visible (if a system is involved), but the actual knowledge flowing from person to person is usually considerably less visible. So while it might be possible to assess that value for a selection of entities, it is likely to be costly, take too much effort, and risk intruding on well-running business processes. Some of these measures can be assessed only by detailed questioning of the parties involved.
One solution could be to do somewhat representative assessments (i.e., instead of trying to capture the value of all incidents, capture the value of random samples and extrapolate from those). For such sampling to come close to being useful, a critical mass of incidents is necessary, which could still represent considerable cost.
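A minimal sketch of such a sampling approach follows, assuming that a random sample of reuse incidents can be assessed in interviews and that the average is then extrapolated to all incidents. All numbers, including the assessed values, are invented for illustration.

```python
import random

# Hypothetical sample-based extrapolation: instead of assessing every reuse
# incident, assess a random sample in detail and extrapolate the average.

random.seed(42)

total_incidents = 75_000   # e.g., downloads in a year
sample_size = 200          # incidents actually assessed through interviews

# Invented assessed values (EUR saved per incident); many incidents produce
# no measurable value, a few produce a lot.
sampled_values = [random.choice([0, 0, 0, 100, 400, 800, 2_000])
                  for _ in range(sample_size)]

average_value = sum(sampled_values) / sample_size
estimated_total = average_value * total_incidents
print(f"Average value per sampled incident: EUR {average_value:,.0f}")
print(f"Extrapolated total value:           EUR {estimated_total:,.0f}")
```

The wider the spread of values in the sample, the larger the sample needs to be before the extrapolation means anything, which is exactly where the cost comes back in.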
One way of sampling is to work with stories and anecdotes. If a few cases represent a high value in savings or other benefits, it could be enough of an argument for keeping the knowledge flow management initiative running. You might produce measures like “At least five documented cases with proven savings of $1 million,” for example.
Another issue to be careful about is the allocation of value. If you have a sales case won that is worth $10 million, how much of it can be attributed to some type of knowledge flow management initiative? How much is personal performance by a single salesperson, how much value did the sales support team provide and what is their knowledge based on, and how much was luck?
One way to get better at this allocation process would be to introduce activity-based management, where you analyze business processes along certain activities and not necessarily based on larger block inputs. But even then there will be gray areas that are hard to quantify or assign. This represents another reason why you should be very careful not to base too much decision power on a single measure but instead work with a larger set of balanced indicators.

MEASURING QUALITY

Quality applied to knowledge can be rather tricky, as discussed earlier. People often propose measuring the quality of elements exchanged through a knowledge flow management initiative via ratings of those entities. Theoretically ratings could provide a collaborative way of getting the crowd to judge an entity. In reality, there are some issues, however:
• Often there is no critical mass of ratings per entry. As a result, there are only very few or even no ratings at all for many of the entries.1 Ideally, you would want many ratings to get a proper median evaluation.
• It is hard to get people to an aligned understanding of the rating process. Often people rate entries on completely different evaluation dimensions (e.g., some rate completeness; others rate personal usefulness, which could be quite different from usefulness to a wider audience). In some cases people just complain about the form of an entry without evaluating its actual content and potential value.
• Rating via a system is usually easy at the time you encounter an entry, as the rating is connected to what you download. But the real value might become clear only after you have used the entry for some time, so there is a time lag between the download and the best time for rating. To make the system work, you would have to get people to come back to an entry to rate it, which is often hard to do. We tried to guide people by also putting rating possibilities into places where they might search for similar elements in the future. One suggestion was to send users an e-mail about a downloaded tool to ask for an evaluation at a later time. What sounds like a great idea could be annoying in practice: bombarding users with questions about anything they might have used, just because you want the rating data, would surely drive a number of users away.
I think that ratings can still be of value, but mainly for limited feedback rather than for real value measuring. I would not consider them an actual measuring tool. You could try to hold certain rating events to get people to rate more regularly or introduce some point system, but that draws focus and attention away from the actual reuse. After all, the main purpose of those knowledge exchange initiatives is not the rating itself but the reuse of information for the sake of creating new knowledge.
Before moving on to analytics, let me summarize the points on measuring by formulating some recommendations:
• Use a mixture of direct and indirect measures.
• Mainly measure tendencies.
• Use results for the comparison of participation groups, not to make an absolute statement.
• Look carefully at what you are really measuring.
• If you have to disturb the actual sharing process to fully quantify your results, you are going too far. Measuring should be transparent if at all possible.
• Always look very carefully at the motives and behavior of those you are trying to influence via the use of measures. A few participants who use the system in unwanted ways may be acceptable, but if a considerable number of people show behavior that does not support the actual production of value, you will have to adapt or even drop the measures. An example would be driving quantity instead of quality, so that more and more participants provide more entries of lower quality just to make the measures.

ANALYZE YOUR INITIATIVE

In the previous sections, I did not provide a silver bullet for measuring. As mentioned, it is a tricky topic, and I think it actually needs more research to refine how best to go about it (a topic for another book). But why do you want to measure in the first place? I think the main motivation should be to help you steer your knowledge flow management initiatives. Those are business processes. And like other business processes in your organization (whether customer facing or internal), they need proper steering. In order to guide knowledge flow processes properly, it is necessary to analyze them regularly. But analytics is more than just reporting; it should include forecasting and optimization to really tune the process toward ongoing value creation.
At SAS we have a bit of an advantage, as analytics is a company core competency, analytical thinking comes naturally, and we have everything needed at hand, from the right experience and skills to the proper technical infrastructure.
ToolPool has a whole range of analytical components. They start out with simple reporting, such as access and contribution reports in relation to staff sizes in countries. There are overview reports that present all countries side by side across some parameters. But you also have the capability to create your own analyses by combining a range of parameters (e.g., countries, divisions, departments, years, months, days, weekdays, type of tools) into an online report and graphic representation of your choice. Through forecasting functions, changes can be anticipated, and some situations can be resolved or optimized before they become a larger issue.
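As a rough illustration of this kind of reporting and forecasting, the following sketch aggregates hypothetical download records by country and month and projects the next month with a naive trend. The column names and data are assumptions made for the example; the real ToolPool reports are built on SAS software.

```python
import pandas as pd

# Hypothetical download log: one row per download, with country and timestamp.
downloads = pd.DataFrame({
    "country": ["DE", "DE", "US", "UK", "US", "DE", "UK", "US"],
    "timestamp": pd.to_datetime([
        "2010-01-05", "2010-01-20", "2010-01-22", "2010-02-03",
        "2010-02-10", "2010-02-15", "2010-03-01", "2010-03-12",
    ]),
})

# Simple report: downloads per country per month.
monthly = (downloads
           .assign(month=downloads["timestamp"].dt.to_period("M"))
           .groupby(["country", "month"])
           .size()
           .rename("downloads")
           .reset_index())
print(monthly)

# Naive forecast: project next month's total from the average month-to-month change.
totals = monthly.groupby("month")["downloads"].sum().sort_index()
trend = totals.diff().mean()
print(f"Naive forecast for next month: {totals.iloc[-1] + trend:.1f} downloads")
```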
Additionally, there are easily accessible consolidated overview reports that present all contributions for a given author together with the number of downloads those contributions have received, the countries that downloaded them, how often, and on what dates. Those personal contribution overviews are a great tool to feed back to contributors an indication of the value they produce for the rest of the organization. Seeing that a contribution generated hundreds of downloads from over 50 countries around the world is a strong argument and exactly the type of attention that contributors often like and that makes them come back to contribute again. A wide distribution also indicates a higher chance that the tool influenced and helped multiple suborganizations beyond just local peers, which is where knowledge sharing often stops without a proper global knowledge flow management initiative in place.
Another type of analysis that can produce useful results is day-to-day usage numbers over the full lifetime for a selected contribution. Looking at those results, contributors can actually see if the interest level on any of their contributions is changing and whether an update results in revived interest.
The contributor report brings all this information together into a simple document (a minimal sketch of such a report follows the list):
• A list of all contributions for a selected author ordered from highest to lowest usage.
• Individual usage counts and a total usage count over all contributions.
• Dates indicating when the contribution was first provided and when it was last updated.
• The average rating with a way to drill down into individual ratings and associated comments.
• A link for every contribution that leads to a chart for daily access rates over the full lifetime of the tool, with a possibility to select subsets.
• A drill-down from the number of usages to a report that shows the countries or the hosts those usages have been originating from. From the country and host reports it is possible to drill down into specific access incidents with date and time of day data. This is useful to identify cases where a lot of the accesses came from the same office in a short time frame; in other words, to recognize something that appears to be a bit like buddy support. By making the data transparent, this effect of somebody pushing a colleague’s contribution can be reduced. Usually the problem is not as bad as you might think, though, if participants have a clear understanding of the business value of knowledge sharing and do not just focus on some given measures.
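Reports like this can be assembled directly from raw download records. The sketch below is a simplified, hypothetical version that covers only the first two bullets (usage counts and country distribution); the tool names, fields, and data are invented.

```python
from collections import Counter
from datetime import date

# Hypothetical download records for one author's contributions.
records = [
    {"tool": "macro_report_helper", "country": "US", "date": date(2011, 3, 2)},
    {"tool": "macro_report_helper", "country": "DE", "date": date(2011, 3, 5)},
    {"tool": "macro_report_helper", "country": "DE", "date": date(2011, 4, 1)},
    {"tool": "etl_checklist",       "country": "UK", "date": date(2011, 4, 9)},
]

def contributor_report(downloads):
    """Summarize each contribution: total downloads and countries reached."""
    report = {}
    for d in downloads:
        entry = report.setdefault(d["tool"], {"downloads": 0, "countries": Counter()})
        entry["downloads"] += 1
        entry["countries"][d["country"]] += 1
    # Order from highest to lowest usage, as in the report described above.
    return sorted(report.items(), key=lambda item: item[1]["downloads"], reverse=True)

total = 0
for tool, stats in contributor_report(records):
    total += stats["downloads"]
    print(f"{tool}: {stats['downloads']} downloads from {len(stats['countries'])} countries")
print(f"Total downloads across all contributions: {total}")
```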
The effect that analytics can produce depends largely on the quality of the representation of the results. One good example was the social network charts produced during the analysis of person-to-person interaction between different departments at a country organization. In that organization we had a department move from one building to another. The buildings were about 5 kilometers (3 miles) apart, and travel between them took about 20 to 40 minutes, depending on the type of transportation used (public, taxi, or private car, including finding parking). The department that moved had contacts with a number of other departments in both buildings. The question was how the connectivity with those departments would change with the move. As this was a predictable event (the move was planned over several months), the analysis happened in two steps. About 3 months before the move and 11 months after the move, we sent out a questionnaire with five questions:2
1. How often do you have face-to-face contact with this person?
2. How often do you have e-mail contact with this person?
3. How often do you have phone contact with this person?
4. Do you regularly ask this person for advice?
5. Do you regularly give advice to this person?
The two result sets (before and after the move) were analyzed and presented in the form of network connectivity charts. While some of the findings were rather predictable, others turned out to be a little surprising. The face-to-face contact rates with colleagues who used to be in the same building but were now separated went down dramatically, which is not surprising, given the distance. Interestingly, though, the e-mail contact rate went down almost as much. We would have thought that e-mail would compensate somewhat for the loss of face-to-face contact. Looking at this in more detail, we discovered that people used to meet in the break rooms while getting a coffee; often this type of contact triggered follow-up e-mails with more details on the short discussions. This type of follow-up did not happen anymore, and the result was an impact on person-to-person e-mail traffic.
Another finding was that the advice networks stayed comparatively stable. So if people turned to someone for specific advice, they still contacted their network even after being separated. I suspect that this contact suffered over time as well, but we did not do a longer-term follow-up to prove that.
The results were easy to present. Just looking at the density of the network graphs showed a clear change in behavior visible to management and human resources. As a result, the head of the department that had moved decided to set up some hot desks in the old location, where department members would spend some time to ensure that a certain level of face-to-face time outside of meetings could happen.
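For readers who want to reproduce this kind of before-and-after comparison, here is a minimal sketch using the networkx library. The people and contact pairs are invented; the real analysis worked from the survey answers and produced full network charts rather than a single density number.

```python
import networkx as nx

# Hypothetical contact data: pairs of people who report regular face-to-face contact.
people = ["Anna", "Ben", "Carla", "David", "Eva"]
before_contacts = [("Anna", "Ben"), ("Anna", "Carla"), ("Ben", "Carla"),
                   ("Ben", "David"), ("Carla", "Eva"), ("David", "Eva")]
after_contacts = [("Anna", "Ben"), ("Carla", "Eva")]

def contact_density(contacts):
    """Share of all possible person-to-person connections that actually exist."""
    graph = nx.Graph()
    graph.add_nodes_from(people)
    graph.add_edges_from(contacts)
    return nx.density(graph)

print(f"Contact density before the move: {contact_density(before_contacts):.2f}")
print(f"Contact density after the move:  {contact_density(after_contacts):.2f}")
```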
This type of social network analysis (SNA) is somewhat intrusive, though. We ran the questionnaire twice. For an ongoing analysis you would have to run it more often, which would probably result in push-back by staff and management as people usually develop something called “survey fatigue.” The idea of automatically recording personal contact information, while technically possible, would result in privacy issues.
There is one example where we analyzed networks in an automatic fashion, however. In this case we analyzed the flow of contributions in ToolPool as well as the flow of experts in our resource-sharing initiative. In both cases, we did not need any questionnaires; we could base the analysis directly on data produced by the systems involved. For ToolPool, we have summarized data on how often a specific contribution provided by one country is downloaded from another country. Together with the described parameters of that contribution, we could analyze not just the frequency but also subdivide the network diagrams by products, solutions, and other categories.
In the case of resource sharing, we also have a process tracking system that records what supplier country provides an expert to what requesting country. And for each exchange we record required technical expertise, products, and solutions as well as the type of services requested (consulting, pre-sales support, training).
An example of the use of such analysis is the identification of company-specific innovation and expert centers. As it is based solely on data that is tracked transparently, you can run this type of analysis a lot more frequently. And as it is performed at a summarized level, privacy issues can be dealt with more easily.
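The same network view can be built without questionnaires when a system already records the flow. The sketch below constructs a directed country-to-country network from hypothetical download counts and ranks countries by how often their contributions are reused abroad, one simple way to spot candidate expert centers. The data and the ranking rule are illustrative assumptions.

```python
import networkx as nx

# Hypothetical flow records: (contributing country, downloading country, downloads).
flows = [
    ("DE", "US", 120), ("DE", "UK", 80), ("US", "DE", 60),
    ("UK", "AU", 30),  ("DE", "AU", 45), ("US", "UK", 50),
]

graph = nx.DiGraph()
for source, target, count in flows:
    graph.add_edge(source, target, weight=count)

# Rank countries by weighted out-degree: how often their output is used elsewhere.
ranking = sorted(graph.out_degree(weight="weight"),
                 key=lambda node_degree: node_degree[1], reverse=True)
for country, reuse in ranking:
    print(f"{country}: contributions downloaded {reuse} times by other countries")
```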
SNA has since become a popular analytics technique for SAS customers as well. Banks and insurance companies use this method to identify fraudulent behavior. Telecommunications companies use SNA to identify key customers with highly influential connectivity so that they can be targeted with special marketing offers.
There is a whole range of other ways that we use analytics on knowledge flow management initiatives within SAS. Advanced analytics cover a lot more than reporting. While reporting can be of value, it looks at the past. A lot of additional value can be produced by looking into the future. By using predictive analytics, you can actually be a step ahead of the game. Instead of reacting, you can perform what-if analysis trying multiple scenarios. Based on historic data and intelligent models, you can also optimize your key knowledge flow processes.
You need a number of people developing analytical thinking about this type of business process, just as for many other processes. As knowledge flow management has a great potential to provide extensive value, it is a good candidate to be targeted with this type of analysis.
The key, however, is to combine the analysis itself and the correct interpretation of results. Doing this requires the right type of knowledge intermediaries who are highly familiar with the knowledge-sharing process as well as with company culture and human behavior in general. This is yet another reason to invest in proper initiative support.
Most of what has been discussed in this chapter so far seems to apply only to larger organizations and corporations. But even in smaller or loosely connected organizations, the need to steer a knowledge flow management initiative will arise. And while such organizations might not have the luxury of an infrastructure that makes advanced analytics easy to apply, taking an ongoing, closer look at where the initiative is heading is very important. If you use some type of technology as a core support component of your initiative, it will very likely be Web based. It might be hosted software or something you control yourself; an example might be collaboration sites based on Microsoft SharePoint. Whatever you are using, it should offer some way to get usage and contribution data (if possible with some indication of the location of accesses), and you should be able to export that data for further analysis. In the most basic case, you might have to ask a Webmaster to supply some basic reporting on how the support systems are being used.
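Even without an analytics platform, a short script over exported usage data goes a long way. The sketch below counts accesses per month from a CSV export; the file name and column names are assumptions about what your collaboration platform or Webmaster can provide.

```python
import csv
from collections import Counter

# Assumed CSV export with one row per access, e.g. from a Web server log or a
# SharePoint usage report, with a "date" column in YYYY-MM-DD format.
monthly_usage = Counter()
with open("usage_export.csv", newline="") as export:
    for row in csv.DictReader(export):
        month = row["date"][:7]          # keep only "YYYY-MM"
        monthly_usage[month] += 1

for month in sorted(monthly_usage):
    print(month, monthly_usage[month], "accesses")
```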
For the non-technical elements, you might have to turn to quick regular surveys to get an overview of what is happening and of which elements and actions within your initiative are working and which are not. Services like SurveyMonkey.com make producing simple surveys rather easy and cost effective. The effort lies in creating smart questions whose answers help you better understand where your initiative is moving, who is participating, what kind of value is produced, and what type of issues or barriers might exist that you need to tackle.

FEED IT BACK

No matter how big your initiative is and whether you are using an extensive technical infrastructure or not, one of the key points of measuring and analysis is the way you feed results back to participants. In my experience, this process is often overlooked. There are actually multiple stages of measuring/analysis and using results:
1. No measuring/analysis is done.
2. Measuring and analysis are done, but there is no time or no one responsible for even looking at the measures and further interpreting results.
3. Measuring and analysis are done, and the results are interpreted by a small group of technical system support people.
4. The same as number 3, but results are also made available to key initiative support people and management stakeholders.
5. The same as number 4, but a significant part of the results are also made available to the actual users and contributors.
Many initiatives get stuck in the first three levels; if there is a good knowledge support group, they may make it to the fourth level.
However, level 5 is the most important and powerful one. Transparency is a great way to influence knowledge-sharing behavior.3 Instead of just using the results for yourself, make sure you prepare them and keep feeding them back to the users of and contributors to your initiative. You can do so with a special report similar to the one I mentioned for ToolPool. Alternatively, you can offer a self-service portal with a range of reports and online analysis capabilities. It depends a little bit on the audience: some might have the necessary analytic skills; for others this might be overkill. If you are using some type of newsletter or other regular communication vehicle to build the pulse of your initiative, highlighting some analytic results can make a great addition, whether as a special news item or a short presentation at community events.
By making the results more transparent, you can give some meaning to participants’ actions, which makes it more likely that participants will be repeaters. You can also appeal to competitive thinking: People do not want to be behind everybody else. Group pressure makes them want to fit in.
Another effect of feeding back the results is that people are more likely to put in effort in the future. If you do surveys, for example, but never distribute any of the results, people will be less likely to respond the next time as they are not sure what happens to the data they provide. If, however, you not only feed back the results but also explain what actions were taken based on participants’ input, the effort they put into answering a questionnaire seems more worthwhile.
Apart from the general users, those in the knowledge flow management support group should have a transparent overview of what is going on. Making sure that everybody in the support team, not just management, is aware of the results is a way to help people motivate themselves. Seeing that their hard work actually results in reuse and in people helping each other again and again sends a strong message. Paired with stories, these numbers can paint a picture of the degree of enablement they are providing to the organization. Especially in the beginning, it is also great to see participation grow. When ToolPool started to pick up country by country, we always joked that we were going for “world domination.” It was great to see one office after another come to believe in the value of the initiative and integrate it into its business processes.
When it comes to measuring and analyzing, remember not to restrict your analysis and results to a small group but to use them to show all participants what is happening.

NOTES

1 Even on Amazon.com, where the potential audience to rate books is very large, you will find many books where you are encouraged to become the first one to rate them.
2 We sent the questionnaire to all staff within the selected departments and presented them with a checklist showing everybody but themselves. About 90 percent of those targeted actually participated.
3 As Richard H. Thaler and Cass R. Sunstein discuss in Nudge: Improving Decisions About Health, Wealth, and Happiness (New York: Penguin, 2009), it is actually a strong influential force for all types of human behavior.