19. Value Is Uncertain, Too


Stakeholders often excuse themselves from predicting new-system value because they reason that it’s too uncertain to predict. The truest answer they can think of when queried about expected value is, “I don’t know.” Stakeholders need to have the same knee-jerk reaction we urged upon you in Chapter 11: When they hear themselves saying “I don’t know,” they need to switch gears into uncertainty-bounding mode and begin drawing uncertainty diagrams.

Benefit? Well, That Depends . . .

For systems whose major effect is market enhancement rather than cost displacement, there are some real unknowns about likely benefit. The market may leap all over a new product, or it may respond with a ho hum. The competition may steal a march on the new product, either coming out earlier with a similar product or announcing a forthcoming set of enticing and differentiating features.

Whatever happens may reduce the actual value of the new system from its most optimistic expectation. The very formulation of such a doubt calls attention to the fact that there is a “most optimistic expectation.” The first step in value prediction is to quantify the most optimistic expectation and to express it in terms of dollars of revenue or earnings, or added points of market penetration. Similarly, the least optimistic expectation can be quantified. Someplace in between the two is the most likely expectation. These three points produce a rudimentary uncertainty diagram, bounding the risks associated with value to be received:

[Figure: incremental uncertainty diagram of expected benefit, running from the least optimistic through the most likely to the most optimistic value]
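To make the three points concrete, here is a minimal sketch in Python. The dollar figures are invented, and the triangular shape is only an assumption standing in for whatever curve the stakeholders actually draw:

```python
# A minimal sketch of the three-point estimate behind an incremental
# uncertainty diagram. Figures and the triangular shape are illustrative
# assumptions, not data from the text.
def triangular_pdf(x, low, mode, high):
    """Relative likelihood of realizing a benefit of x."""
    if x < low or x > high:
        return 0.0
    if x <= mode:
        return 2 * (x - low) / ((high - low) * (mode - low))
    return 2 * (high - x) / ((high - low) * (high - mode))

least, likely, most = 2.0, 5.0, 9.0   # $M: least optimistic, most likely, most optimistic
for benefit in (2.0, 3.5, 5.0, 7.0, 9.0):
    bar = "#" * int(40 * triangular_pdf(benefit, least, likely, most))
    print(f"${benefit:>4.1f}M  {bar}")
```

Run as is, this prints a crude text rendition of the diagram above: zero likelihood at the two extremes, peaking at the most likely value.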

People persuaded to commit themselves to this extent will insist on attaching some explicit stakeholder assumptions to the expectation, a variation on the project manager’s project assumptions—the risks that he or she is not managing and that are therefore someone else’s responsibility. A stakeholder assumption might be something like, “All bets are off if the system turns out to be unstable.”

For every incremental uncertainty diagram, such as the one presented above, there is an equivalent in cumulative form, such as this:

[Figure: the same uncertainty diagram in cumulative form]

From this, we can derive all kinds of useful information, including mean expected benefit and the expected benefit for any required confidence level. We can even use Monte Carlo tools to simulate the project multiple times and to produce a graph showing the benefit received for each instance.
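As a rough illustration, the sketch below draws Monte Carlo samples from the same assumed triangular model and reads off the mean expected benefit and the benefit claimable at a chosen confidence level. The figures and the distribution shape are, again, illustrative assumptions:

```python
# A minimal Monte Carlo sketch over an assumed triangular benefit model.
import random
import statistics

least, likely, most = 2.0, 5.0, 9.0   # $M, the three-point estimate
runs = sorted(random.triangular(least, most, likely) for _ in range(100_000))

def benefit_at_confidence(confidence):
    """Benefit we can quote with the given probability of meeting or beating it."""
    index = min(int((1.0 - confidence) * len(runs)), len(runs) - 1)
    return runs[index]

print(f"Mean expected benefit:     ${statistics.mean(runs):.1f}M")
print(f"Benefit at 50% confidence: ${benefit_at_confidence(0.50):.1f}M")
print(f"Benefit at 85% confidence: ${benefit_at_confidence(0.85):.1f}M")
```

Sorted, the samples are themselves a numerical version of the cumulative diagram; the quantile lookup is just reading a point off that curve.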

The Market Window

The grand illusion of a market window is probably the safest and most often utilized excuse for not doing careful benefit projections. A stakeholder may speak confidently about benefits that will only be possible if the developers have the system ready before the market window closes. Any benefit number will do here because the allied tactic is to assert that the market window will close by a date that is virtually impossible to meet. The project is thus set up to fail, but the stakeholder avoids all accountability.

Market windows in the future are easy enough to talk about, but there are damn few examples you can dredge up from the past. VisiCalc clearly got its product to market before any market window had closed, but how to explain Lotus 1-2-3? And all supposed market windows for spreadsheets had long since closed before Excel came along. How confusing, then, that Excel became the dominant spreadsheet. And Google, which missed its market window by a country mile, could obviously never become the dominant search engine—only somehow it did.

If the market window has any significance at all, it certainly isn’t binary. The pushback to the stakeholders is to oblige them to commit themselves to the benefits expected for the entire range of possible delivery dates. The range of dates covered by the development team’s risk diagram should also be covered by the stakeholders’ set of diagrams showing benefit expected for delivery at dates across the range. It’s not so easy to game the diagram set. Showing zero or negative value from a delivery that’s later than a given date may come back to bite the person who asserts the dire outcome. Since value is at the heart of the larger risk facing an organization, playing fast and loose with value projections (either under- or over-assessing them) will not be viewed as a mark of honor.
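One way to picture the diagram set being asked for is a table of three-point benefit estimates, one per candidate delivery date, spanning the same range as the developers’ risk diagram. A minimal sketch, with invented dates and figures:

```python
# A minimal sketch of a stakeholder "diagram set": one three-point benefit
# estimate per candidate delivery date. Dates and figures are invented.
import random
import statistics

benefit_by_delivery = {            # (least, likely, most) in $M
    "2025-01": (4.0, 7.0, 11.0),
    "2025-04": (3.0, 5.5,  9.0),
    "2025-07": (2.0, 4.0,  7.0),
    "2025-10": (1.0, 2.5,  5.0),   # later delivery, reduced but not zero benefit
}

for date, (low, likely, high) in benefit_by_delivery.items():
    samples = [random.triangular(low, high, likely) for _ in range(20_000)]
    print(f"delivery {date}: mean expected benefit ${statistics.mean(samples):.1f}M")
```

The point of the set is that benefit declines across the range rather than snapping to zero at a single magic date.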

News from the Real World

To complement our own experience in value assessment, we queried a small set of real-world managers who have made it work (and have sometimes come a cropper). As you’ll see, there is a fascinating mixture of success and failure:

“The larger the system, the more accountability . . . the [benefit] numbers are watched carefully, because future financial models for overhead will be reduced due to the promises. . . .

“There is always the situation where the right person makes the request. Every organization has a few individuals who can simply get what they want due to their importance to the company.”

Christine Davis, formerly of TI and Raytheon

“[proceeding without value assessment] leaves nothing but testosterone-based decision making. It’s been my experience that testosterone-based decision making doesn’t have a very good track record at producing value over the long term. In fact, I think that it’s a career-limiting approach at best. . . .

“I’ve also experienced a very weird approach to accountability that goes something like, ‘The project was a complete success (after we redefined “success,” that is).’ This usually follows one of those late-project ‘flight from functionality’ sessions. Seems like a fuzz sandwich: fuzz at the front of the project, and fuzz at the end of the project with some sort of meat (you hope, but you don’t look too closely) in the middle.”

Sean Jackson, Howard Hughes Medical Institute

“[there has to be an] equality of accountability between builders and stakeholders. The stakeholder is accountable to make sure value is produced. [But] we find pretty consistently in our surveys that companies don’t track after the fact whether or not benefits are realized.”

Rob Austin, Professor, Harvard Business School

“The savings figures also are classified by whether they are reductions or avoided costs. The difference is important. Reductions are decrements from current approved budget levels. You (the requesting manager) already have the money; you are agreeing to give it up if you get the money for the new system. Avoided cost savings are of this form: ‘If I don’t have this system, I will have to hire another <n> <worker-type> in <future-year>. But if I do get the system, I can avoid this cost.’ This is every system requester’s favorite kind of benefit: all promise, no pain. The catch is that there is no reason to believe you’d ever have been funded to hire those additional workers. You’re trading off operating funds you may never get in the future for capital funds today. Very tempting, but most request-evaluators see this coming from miles away. The correct test for avoided-cost benefits is whether the putative avoidable cost is otherwise unavoidable, in other words, that the future budget request would inevitably be approved. This is a tough test, but real avoided-cost benefits can and do pass it.”

Steve McMenamin, Atlantic Systems Guild
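The distinction Steve draws can be reduced to a simple test. The sketch below, with invented line items and amounts, counts a claimed saving only if it is a true reduction or an avoided cost that passes the “otherwise unavoidable” test:

```python
# A minimal sketch of the reduction vs. avoided-cost test from the quote above.
# Line items and amounts are invented for illustration.
claimed_benefits = [
    {"desc": "retire two clerical positions",      "kind": "reduction",    "amount": 140_000},
    {"desc": "avoid hiring 3 analysts next year",  "kind": "avoided_cost", "amount": 330_000,
     "otherwise_unavoidable": False},   # those hires might never have been funded anyway
    {"desc": "avoid mandated compliance staffing", "kind": "avoided_cost", "amount": 90_000,
     "otherwise_unavoidable": True},    # this spending would inevitably be approved
]

countable = sum(
    b["amount"] for b in claimed_benefits
    if b["kind"] == "reduction" or b.get("otherwise_unavoidable", False)
)
print(f"Benefits that survive the test: ${countable:,}")
```

Only the budget reduction and the genuinely unavoidable future cost count; the optimistic hiring-avoidance claim does not.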

The picture that emerged from our informal interviews was in some ways its own kind of fuzz sandwich, but there were a couple of interesting trends:

1. Best-practice organizations are intent on doing value assessment, though they may be willing to vary its form from project to project.

2. Lots of them follow the scheme of reducing downstream budgets by the amount of the expected benefit, what Christine referred to in saying, “future financial models for overhead will be reduced due to the promises.”

Finally, even these organizations have some instances of value guaranteed by promises like “Trust me, there is benefit to doing this,” but this is usually limited to stakeholders who have accountability for both cost and benefit.
