14. A Defined Process for Risk Discovery

Core risks are not the only ones you need to worry about. There may well be risks particular to your project that have to be figured into your risk equation. For example, there may be one key player whose departure could be disastrous to the project, an important user who might defect and choose to go his own way, or a vendor whose nonperformance could have ugly consequences.

Once you’ve identified and quantified these risks, they can be managed just like the others. But getting them out on the table can be a problem. The culture of our organizations sometimes makes it impossible to talk about a really worrisome risk. We are like a primitive tribe that tries to hold the devil at bay by refusing to say his name.

Keeping mum about a risk won’t make it go away. The staff of the Ariane 5 project,¹ for example, never did articulate the risk that a compiler would do no boundary checking, and thus compromise the launch vehicle. It happened anyway and resulted in the total failure of the launch.

¹ Ariane 5 was the European Space Agency’s satellite launch vehicle that blew up due to a software error in 1996.

The bread-and-butter act of risk discovery is someone saying, “You know, if <whatever> happened, we’d really be up the creek. . . .” Usually, the person has known about the risk for a while, and may even have done a private assessment on it, something in the form of, “I’d better get my résumé polished up if it starts to look like <whatever> might happen.” When the only risk management on a project is happening inside the head of a single worried individual, that suggests a breakdown in communication. Specifically, it usually means there are disincentives at work, stopping the flow of essential information.

Naming the Disincentives

Let’s think about these disincentives within a real context: On the morning of January 28, 1986, the space shuttle Challenger exploded, with devastating loss of life, treasure, and national prestige. The ensuing investigation revealed that the extended cold snap leading up to the launch had left the booster rockets and their components outside their specified temperature range. The system was meant to operate at temperatures above freezing, and it was clearly colder than that. No one on the staff was thinking about O-rings, but a great many people knew that components of the system were temperature-sensitive and could not be counted on to perform to spec at subfreezing temperatures. Why didn’t they speak up?

Their reasons were the same ones that stop people from articulating risks at companies everywhere. They take the form of unwritten rules, built into the corporate culture:

1. Don’t be a negative thinker.

2. Don’t raise a problem unless you have a solution for it.

3. Don’t say something is a problem unless you can prove it is.

4. Don’t be the spoiler.

5. Don’t articulate a problem unless you want its immediate solution to become your responsibility.

Healthy cultures attach great value to the concept of a team. Being judged a “team player” is enormously important, and not being one can be fatal to a career. Articulating a risk shouldn’t be seen as anti-team, but it often is. These unwritten rules are not very discriminating. They don’t make much distinction between speaking up responsibly and whining. And because the rules are never discussed openly, they never get adjusted for changing circumstances.

We are all enjoined to adopt a can-do mentality in our work. And there’s the rub. Saying the name of a risk is an exercise in can’t-do. Risk discovery is profoundly at odds with this fundamental aspect of our organizations.

Since the disincentives are powerful, we need an open, fixed, and well-understood process to make it possible to speak. We need a ritual, a way for all to participate fully while still remaining safe. At the heart of the ritual are some temporary rules that make it okay—for the moment, at least—to disobey the unwritten rules. If your boss specifically asks you in public to “be the devil’s advocate about this idea,” you are clearly excused from the dictates of can-do thinking. That will make it acceptable for you to indulge in negative, what-if thinking. This is just what our defined process for risk discovery has to achieve.

The Defined Process

The defined process that we propose involves working backward in three steps to identify risks:

[Figure: The three steps of risk discovery, working backward from (1) the imagined catastrophic outcome, to (2) a scenario that could produce it, to (3) the root cause that could set that scenario in motion.]

When an actual catastrophe happens, these three steps occur in the opposite order, moving from cause to unfolding scenario to resulting outcome. But it is too threatening to deal with them in this temporal order. Working backward is less scary. It lets you focus first on the possible nightmare outcome, pretty much in isolation, divorced from cause. Even so, it’s still not easy for people to express such fears:

TRL:    Last year, I had to have arthroscopic knee surgery, involving complete anesthesia. The night before I went into the hospital, my wife asked me if I was anxious about the surgery. I quickly responded that I was not, that thousands of these operations occur with no problems at all. After a while, I did admit to her that I had what I considered a bit of an irrational fear. I was worried that the surgeon would operate on the wrong knee. My wife told me to tell him that. The next morning, I was prepped for surgery and my wife was keeping me company. The surgeon came in to tell us about the post-surgery procedures. My wife was looking at me, with eyebrows arched. I said nothing. Just before the surgeon went off for his preparations, he took out a marker pen and wrote, “YES,” on my thigh, just above the knee about to be operated on. My wife smiled; we all smiled.

The ritual has to make it okay to share fears. If the doctor had explicitly asked Tim in the pre-op room what his worst fear was, the matter would have been quickly out on the table. The ritual has to ask all participants to share their worst fears. Sometimes it helps to be allowed to say (as Tim said above) that the fear is irrational.

The rest of the process is to deduce how such a nightmare could come about. The trick is to make the three steps more or less mechanical, involving no question of blame. “My nightmare is this; here is a scenario that could result in precisely that nightmare; and the thing that might kick off such a scenario is . . .” Voilà, one risk identified.
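To show how mechanical the record-keeping can be, here is a minimal sketch in Python of a risk record filled in backward, outcome first. The structure and all the sample entries are our own illustration, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    """One risk, recorded by working backward through the three steps."""
    nightmare: str                  # Step 1: the feared outcome, stated in isolation
    scenarios: list[str] = field(default_factory=list)    # Step 2: how it might unfold
    root_causes: list[str] = field(default_factory=list)  # Step 3: what might set it off

# Outcome first, divorced from cause; then the scenario; then the trigger.
risk = Risk(nightmare="We deliver a year late and the key client walks")
risk.scenarios.append("Integration testing exposes a flaw that forces a redesign")
risk.root_causes.append("Interfaces were never agreed on before the teams split up")
```

Nothing in such a record assigns blame; it simply walks backward over the same path a real catastrophe would walk forward.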

To undo the mojo of the unwritten rules, the risk-discovery process needs to be written down and distributed before the event begins. You can’t spring it on people and expect them to suspend the unwritten rules without a good, formal cover.

The risk-discovery process should not happen just once, at the beginning of the project. There has to be some commitment to make it a continuing part of the project review. At each risk-discovery meeting, there has to be a formal statement of the approach to be followed, so that unwritten rules are effectively suspended.

The three steps typically happen together, at the same meeting. But the techniques are unique to each step, so it’s worth going over each in turn.

Step 1: Catastrophe Brainstorm

A brainstorm is a contrived group-invention experience. The idea is to use group dynamics to find ways around conventional thinking and to let fresh new thoughts emerge. A catastrophe brainstorm is slightly different, but most of the techniques of classical brainstorming are useful. For a good description of these techniques, look into the brainstorming section in the References.

Brainstorming makes use of ploys, little tricks to help the group get past its inevitable blocks and dead moments. The titles listed in the References provide dozens of these ploys, all of them useful in getting the group to think up meaningful nightmares. A few ploys that are unique to catastrophe brainstorms are presented below:

1. Frame the question explicitly in terms of a nightmare: For some reason, this also helps undo the effect of the unwritten rules; no matter how positive-thinking the culture may be, it’s still okay to wake up at night with an awful thought. Ask people what their worst fears are for the project. When they wake up in a sweat over the project, what is it that upset them?

2. Use a crystal ball: Pretend you have access to a crystal ball, or the ability to conjure up headlines from next year’s newspaper. Assert that this glimpse into the future shows disaster for the project, but what disaster? Or say the company has been profiled in The Wall Street Journal’s idiot column (in the middle of the front page) for the awful way the project turned out. Now ask, “How could that have happened?”

3. Switch perspectives: Ask people to describe their best dreams for the project, and then discuss an inverted version of those dreams.

4. Ask about blame-free disasters: How could the project fail without it really being anybody’s fault?

5. Ask about blameworthy failure: Ask people, “How could the project go spectacularly awry and have it be our fault? the user’s fault? management’s fault? your fault?” (This only works if you make sure to get everybody into the act.)

6. Imagine partial failure: Ask how the project might succeed in general but leave one particular stakeholder unsatisfied or angry.

Brainstorms are fast and furious, so you’ll need to make some provision for capturing all the suggestions. Make sure that the facilitator is not the one responsible for capture.

Step 2: Scenario-Building

Now go back over the captured catastrophes one by one, and imagine together one or more scenarios that might lead to each result. Scenario imagining can be fairly mechanical, but the question of blame may be hanging in the air—so expect some tension here. Again, a capture mechanism needs to be thought out and implemented in advance so that suddenly increased tension won’t result in losing the very issues that need attention most.

It’s worth attaching at least a tentative probability to these scenarios. Obviously, the highly improbable ones are less valuable since they won’t justify the effort of being carried further. Be suspicious, though, of the low probability that the group may attach to a given scenario; someone offered it, and to that someone, it probably wasn’t a negligible matter.

In lieu of doing probability analysis on the spot, you might defer it to be done later by a subgroup. This will allow some empirical evidence to make the case that a given scenario is or is not worth worrying about.
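As a sketch of how that triage might be recorded (Python again; the cut-off value and the sample scenarios are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    description: str
    probability: float | None = None  # tentative group estimate; None = defer to a subgroup

scenarios = [
    Scenario("Key vendor misses the component delivery date", probability=0.3),
    Scenario("Lead architect leaves mid-project", probability=0.1),
    Scenario("Regulatory change invalidates the requirements"),  # no consensus: defer
]

CUTOFF = 0.05  # arbitrary; each group picks its own threshold

carry_forward = [s for s in scenarios if s.probability is not None and s.probability >= CUTOFF]
deferred = [s for s in scenarios if s.probability is None]  # subgroup analyzes these later
```

Note that a low estimate is recorded rather than silently dropped; as cautioned above, the scenario still mattered to whoever offered it.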

Step 3: Root Cause Analysis

With a scenario in front of your group, it’s possible for everyone to work together to figure out potential root causes. This is much easier to do before the scenario has actually begun. When the scenario is only an abstraction—that is, some stupid thing that might happen—it’s possible to imagine causes without assigning blame: “Well, I can’t imagine this happening unless some idiot stole staff members to put out fires elsewhere.” That’s easy enough to say—even if the potential idiot is in the room—before the catastrophe has actually happened.

Your risks are the root causes of scenarios that will likely lead to catastrophic outcomes.

Root cause analysis is harder than it looks. The reason for this is not just the effect of the unwritten rule base, but the complex notion of “rootness.” (How root is root enough?) This is a process that is better performed by a group than by individuals in isolation. For useful tips on conducting root cause analysis sessions, consult the root cause analysis section in the References.
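There is no formula for rootness, but one working rule a group can adopt is to follow the why-chain down until it reaches a cause the project could actually act on. Here is a toy sketch of that stopping rule, with the actionability test standing in for the group’s own judgment:

```python
def root_enough(why_chain: list[str], actionable) -> str:
    """Walk successive answers to 'why?' and stop at the first cause
    the project could act on -- one answer to 'how root is root enough?'"""
    for cause in why_chain:
        if actionable(cause):
            return cause
    return why_chain[-1]  # nothing actionable found; settle for the deepest cause

# Scenario: "Integration slipped six weeks."  Successive answers to "why?":
why_chain = [
    "Staff were pulled off to fight fires on another project",
    "No staffing commitment was ever written into the plan",
]
print(root_enough(why_chain, actionable=lambda c: "plan" in c))
```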

The WinWin Alternative

Barry Boehm’s WinWin Spiral Process Model brings together much of that admirable man’s career accomplishments to date. (See the References or the RISKOLOGY Website for links.) It integrates

• the spiral development life cycle

• metrics (specifically COCOMO II)

• risk management

• Theory W of group interaction

It is a recipe for running sensible IT projects in light of the kinds of problems that typically dog such endeavors.

Boehm’s unique approach to software development is worth knowing about for reasons that go beyond the scope of this book. Our purpose in mentioning it here, however, is to describe one minor aspect of WinWin that casts a useful light on risk discovery.

In WinWin, the project makes an up-front commitment to seek out all stakeholders and solicit from each one the so-called win conditions that would make the project a success from his or her point of view. In this methodology, the requirement is defined as the set of win conditions. Nothing can be considered a requirement if no one can be found to identify it as one of his or her win conditions. Sometimes there are conflicts among win conditions, particularly as the set of stakeholders grows. A win condition expressed by one party may make it difficult or impossible to achieve win conditions expressed by one or more of the other parties. Under WinWin, any pair of win conditions with conflict or tension between them represents a risk.

You may be able to use Boehm’s trick to uncover risks that might never be discovered in any other fashion. So many of the risks that plague IT projects are directly due to conflicting win conditions, and the WinWin approach to risk discovery goes right to the heart of this root cause. Even if your project does not do a formal and complete solicitation of win conditions, you owe it to yourself to do some WinWin thinking as part of risk discovery. Think of it as one of the ploys you utilize. Ask participants, “Can you think of an obvious win condition for this project that is in conflict with somebody’s win condition?” Each identified conflict is a potential risk.
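If you do capture win conditions explicitly, the bookkeeping is trivial. Here is a minimal sketch in Python; all the stakeholders and conditions are invented, and judging which pairs are in tension remains a human call, recorded rather than computed:

```python
from itertools import combinations

# Win conditions gathered from stakeholders (all entries illustrative).
win_conditions = {
    "sponsor":    "Ship by the trade show in June",
    "users":      "Full workflow coverage before rollout",
    "operations": "No new infrastructure this fiscal year",
}

# Tension between conditions is a judgment call, so it is declared
# explicitly as pairs rather than detected by any mechanical test.
declared_tensions = {frozenset({"sponsor", "users"})}

# Under WinWin, every conflicting pair of win conditions is a risk.
risks = [
    (a, b)
    for a, b in combinations(win_conditions, 2)
    if frozenset({a, b}) in declared_tensions
]
print(risks)  # [('sponsor', 'users')]
```

Each pair the group flags then enters the risk list alongside the candidates discovered through catastrophe brainstorming.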
