CHAPTER 8

Analyzing the Alternatives in Terms of Values and Uncertainty

As Chapter 6 discussed, a key task is to recognize that you have a decision to make and to frame it properly in terms of alternatives, values (defined either qualitatively or quantitatively), and information. Evaluating the alternatives in light of the values and the limited information available, in order to arrive at a proper selection of the "best" alternative for the current situation, is often very difficult; remember the difference between a good decision and a good outcome: good decisions do not necessarily ensure good outcomes. Recognizing and framing the decision are part, but only part, of a good decision. You must take the next step, shown in Figure 8-1, and conduct some kind of analytic comparison of the alternatives on the objectives, using the information currently available, in order to project the consequences of choosing each alternative into the future.

FIGURE 8-1: Key Elements of a Decision

This chapter presents several qualitative approaches to analyzing alternatives in light of our values and uncertainty. Aspects of quantitative approaches are presented as well. Cognitive biases that limit our ability to perform these analyses are discussed. Finally, the cardinal issues of good decisions are summarized.

The chapter presents the following sections:

Qualitative Approaches

Quantitative Approaches

Dealing with Biases and Heuristics

Requirements for Good Decisions

Qualitative Approaches

Several qualitative approaches are available for use as tools in reviewing alternatives for a particular decision and recommending one of them. This section discusses five such approaches: list of pros and cons for each alternative, development of a fundamental objectives hierarchy with a qualitative assessment of each alternative on the fundamental objectives, consequence table, Pugh matrix, and Even Swaps.

Pros and Cons

The most famous approach is that described by Benjamin Franklin in a letter written to Joseph Priestley in 1772 (Bell and Labaree 1956). Priestley had recently been offered a new position in London that would increase his pay and prestige but delay the scientific experiments that he was conducting in chemistry. Priestley had written to Franklin to inquire whether he should move from Leeds, where he was well established, to London. Franklin was facing a similar problem himself; he greatly enjoyed London and all of the stimulation provided by the culture, science, and sophistication in London. But Franklin also missed his family and friends, and the opportunities to contribute to the growing rebellion in what was to become the United States. Here is the letter that Franklin wrote Priestley (Bell and Labaree 1956):

To Joseph Priestley

London, September 19, 1772

Dear Sir,

In the Affair of so much Importance to you, wherein you ask my Advice, I cannot for want of sufficient Premises, advise you what to determine, but if you please I will tell you how.

When these difficult Cases occur, they are difficult chiefly because while we have them under Consideration all the Reasons pro and con are not present to the Mind at the same time; but sometimes one Set present themselves, and at other times another, the first being out of Sight. Hence the various Purposes or Inclinations that alternately prevail, and the Uncertainty that perplexes us.

To get over this, my Way is, to divide half a Sheet of Paper by a Line into two Columns, writing over the one Pro, and over the other Con. Then during three or four Days Consideration I put down under the different Heads short Hints of the different Motives that at different Times occur to me for or against the Measure. When I have thus got them all together in one View, I endeavor to estimate their respective Weights; and where I find two, one on each side, that seem equal, I strike them both out: If I find a Reason pro equal to some two Reasons con, I strike out the three. If I judge some two Reasons con equal to some three Reasons pro, I strike out the five; and thus proceeding I find at length where the Balance lies; and if after a Day or two of farther Consideration nothing new that is of Importance occurs on either side, I come to a Determination accordingly.

And tho’ the Weight of Reasons cannot be taken with the Precision of Algebraic Quantities, yet when each is thus considered separately and comparatively, and the whole lies before me, I think I can judge better, and am less likely to take a rash Step; and in fact I have found great Advantage from this kind of Equation, in what may be called Moral or Prudential Algebra. Wishing sincerely that you may determine for the best, I am ever, my dear Friend,

Yours most affectionately,

B. Franklin

Franklin did not return to his home in Philadelphia until 1774, when his wife died. Priestley stayed in Leeds until 1780, when he moved to Birmingham. Priestley did move to London later in life and then to the United States.

The key point of Franklin’s list of pros and cons is that identifying aspects of each alternative as they relate to your value structure (objectives) is a way of being able to differentiate between the alternatives. An incomplete example of a pros and cons matrix is presented in Table 8-1. The decision represented in Table 8-1 is the type of transportation that should be implemented in order to move a defined set of people and equipment from point A to point B when there is a river between points A and B. What other entries would you add to this table?

TABLE 8-1: Example of a Pros and Cons Matrix

Build bridge
Pros: Low operating cost
Cons: High up-front cost; susceptible to damage

Use ferries
Pros: Flexible costs; flexible capacity; quick start
Cons: Not likely to meet peak demand; susceptible to strikes

Build tunnel
Pros: Low operating cost; less impact due to weather
Cons: Very high up-front cost; fixed capacity

Use helicopters
Pros: Flexible costs; flexible capacity; quick start
Cons: Not likely to meet peak demand; high operating cost

This approach does a good job of stimulating the decision maker to generate differences between the options. But the result is not a systematic approach to organizing the pros and cons, or differences between the alternatives, into a coherent set of objectives, as discussed in the decision frame section of the previous chapter.

Fundamental Objectives Hierarchy

Keeney (1992) makes a strong case for value-focused thinking in his book by that name. Whether our values are defined qualitatively or quantitatively, the key concept in value-focused thinking is developing a comprehensive set of well-defined objectives early in the decision process and using those objectives first to develop more and better alternatives and then to evaluate those alternatives. These objectives should be (1) consistent with the current decision context and (2) not means-oriented for the current decision. Recall that this was part of the process for defining the decision frame. Means-oriented criteria are tied to the particular alternatives being considered.

For example, if the project manager needs to decrease the budget for the next year, some undesired, means-oriented objectives would be reducing staff operations, reducing contract support, and reducing travel. Examples of fundamental objectives for this decision are (1) reducing monetary expenditures over the next year, (2) maintaining quality of operations, and (3) keeping all activities on schedule. Not all of these fundamental objectives can be fully achieved at the same time, so the decision involves how to balance across all three while meeting the demands of the situation.

Project managers usually think in terms of three key fundamental objectives: cost, schedule, and performance. One way to segment each of these is to focus on the life cycle of the product or system: development, testing, production, deployment, training, etc. Often the costs for these segments of the life cycle come from different budgets; if this is the case, then having this type of cost breakout in the fundamental objectives hierarchy makes sense. However, if the various cost elements come from the same budget, then they are not really separate fundamental objectives because a dollar is a dollar.

Figure 8-2 shows an objectives hierarchy that was used to evaluate alternative architecture designs by the U.S. Defense Communications Agency (now the Defense Information Systems Agency) in 1980. The alternatives were differentiated by their reliance on commercial resources, facility hardening, switch capabilities and capacities, encryption, and use of satellites and mobile assets. The objectives hierarchy in Figure 8-2 has performance issues on the left and issues related to risk, cost, and schedule on the right. The risk objectives addressed cost, schedule, and product performance.

FIGURE 8-2: Objectives Hierarchy for World Wide Digital Systems Architecture (circa 1980)

Problem, Objectives, Alternatives, Consequences, and Tradeoffs (PrOACT) is a process developed by three well-known decision researchers: John Hammond, Ralph Keeney, and Howard Raiffa (1999). Problem definition within PrOACT is similar to the decision frame definition discussed in Chapter 6. The PrOACT process for defining objectives is:

Write down your concerns associated with the decision.

Convert your concerns into well-defined, succinct objectives.

Separate ends from means in order to create ends-oriented (fundamental) objectives.

Define what you mean by each objective so that others will understand them.

Test your objectives to ensure they completely capture your interests.

The biggest mistake commonly made in defining objectives is that some important objectives are left out. A recent research paper reported a finding that people often leave as many as half of the relevant objectives out of a list that they have been asked to create. These forgotten objectives are not the least critical ones either.

Recall that our fundamental objectives must capture all of the differences between the alternatives that will be considered. The objectives should also be different from each other; there should be no overlap in their definitions.

After the fundamental objectives are defined, the decision maker can employ either a qualitative or quantitative analysis process. A qualitative analysis process would adopt a categorical rating scheme for each alternative on each objective. A simple, commonly used approach is to create three (sometimes more) categories that go from best to worst: “great,” “average,” or “poor” for all objectives. Without quantifying these ratings, it is often easy to discern that one alternative is better than or tied with the others on most objectives. If this is the case, then that one alternative can be selected without more detailed analysis. If several alternatives are better than the others on 10-30 percent of the objectives, the decision will be harder to justify and some more detailed analysis might be warranted. Often it is not possible to determine the best alternative but it is possible to find alternatives that are worse than other alternatives on all (or nearly all) of the objectives, making these alternatives candidates for elimination.

Consequence Table

An increased level of sophistication in qualitative analysis beyond that just described (labeling each alternative as “great,” “average,” or “poor” on each objective) can be achieved by using the objectives hierarchy just discussed and a consequence table. The consequence table contains a row for each bottom-level objective and a column for each alternative. Now, instead of just writing “great,” “average,” or “poor” in each cell, the decision maker writes a few words or phrases that describe the consequences of having selected the alternative (in the column) on the objective (in the row). There might be some cells in the table for which there is great uncertainty; if so, this uncertainty should be noted.

After completing the table, the decision maker should review each objective (row) and rank order the alternatives from best to worst on that objective. Here, we recommend giving the best alternative the highest number (seven if there are seven alternatives) and giving the worst alternative a one. It is important to emphasize that these numbers form an ordinal scale, and it is not theoretically correct to take averages of, or multiply, numbers on an ordinal scale.

The advantage of creating this rank ordering of alternatives for each objective is that we can review the table (as we did in the section above with the labels “great,” “average,” and “poor”) and determine whether any of the alternatives is worse than another alternative on all objectives. If so, this alternative is called “dominated” and can be removed from the analysis. Consider Table 8-2. In this table, alternatives A, B, and C each have a few rankings of “5,” which is the best. Only alternative B has no rankings of 1. More careful inspection of the table shows that the ranking of B is always greater than the ranking of D, so D is dominated by B on every objective and can be eliminated. Note that it is possible to show ties in such a table; alternatives A and D are ranked 1(2) on objective 9, indicating a tie.

TABLE 8-2: An Example of a Dominated Alternative

There is a common tendency with a table like Table 8-2 to add the ranks of each alternative and pick the alternative with the largest number. This is very risky. First, doing this assumes that the objectives are equal in importance and variation, but they almost never are. By variation, we mean that the ranks for one objective might be very close together (almost tied) and for another objective the ranks might indicate a huge swing in variation (the worst-ranked alternative is nearly unacceptable and the best-ranked alternative is near perfection).

It might be possible to get some insight from these sums, however; for example, if the sum of ranks for one alternative is nearly twice the sum of each of the other alternatives, then that one alternative might be the best no matter what more detailed analysis was done. In Table 8-2, the column sums are 32, 40, 33, 22, and 22. On the basis of these sums, it would be hard to rule out any of the first three alternatives.

Note that these sums do not give us any indication of whether there is a dominated alternative. Alternatives D and E have the same sum. While D is dominated by B, E is not dominated by any of the other alternatives.
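This dominance screening lends itself to a short script. The following Python sketch uses hypothetical rank data (not the actual values of Table 8-2, which are not reproduced here); higher rank numbers are better, as recommended above, and an alternative is flagged as dominated if some other alternative ranks at least as high on every objective and strictly higher on at least one.

```python
# Sketch: screening a consequence table of ranks for dominated alternatives.
# The rank data below are hypothetical placeholders, not the values in Table 8-2.
# Higher rank = better (5 is best with five alternatives), as recommended above.

ranks = {                      # alternative -> rank on each of five objectives
    "A": [5, 3, 1, 4, 2],
    "B": [4, 4, 3, 5, 3],
    "C": [3, 5, 5, 2, 4],
    "D": [2, 2, 2, 3, 1],      # never better than B on any objective
    "E": [1, 1, 4, 1, 5],
}

def dominator_of(alt):
    """Return an alternative that is at least as good as `alt` on every
    objective and strictly better on at least one, or None if there is none."""
    for other, other_ranks in ranks.items():
        if other == alt:
            continue
        pairs = list(zip(other_ranks, ranks[alt]))
        if all(o >= a for o, a in pairs) and any(o > a for o, a in pairs):
            return other
    return None

for alt in ranks:
    found = dominator_of(alt)
    if found:
        print(f"{alt} is dominated by {found} and can be eliminated")
```

With these placeholder ranks, only one alternative is flagged; the others each rank best on at least one objective and so cannot be dominated.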

Pugh Matrix

The Pugh matrix (Pugh 1991) takes a similar tack to the consequence table. An example of a Pugh matrix is shown in Figure 8-3. The Pugh matrix uses a three-point ordinal scale: + (superior), S (acceptable or satisfactory), and - (inferior). This approach can be used to find dominated alternatives but should not be used to maintain that one alternative is better than the others unless the evidence (+’s) is overwhelming for that alternative. Note that the Pugh matrix assumes all of the objectives have equal importance and variation.

FIGURE 8-3: Pugh Matrix Example

Even Swaps

A more advanced technique called Even Swaps is described by Hammond et al. (1999). This technique involves creating a consequence table and then making adjustments to some of the entries in the table in order to neutralize differences in the table. For example, suppose you were buying a used car and wanted a Ford Mustang. You found two good candidates manufactured in the same year that are equivalent on all but three dimensions. One Mustang costs $20,000, is blue, and has 40,000 miles on it. A second Mustang costs $15,000, is red, and has 50,000 miles.

An example of an Even Swap is to conclude that you prefer red to blue and would be willing to pay $2,000 to turn a blue car red. (Note that we are not talking about how much it would actually cost to get the Mustang painted red. There is an important difference between what you are willing to pay (a preference) and what it would actually cost.) So we could now say you were indifferent between the first Mustang ($20,000; blue; and 40,000 miles) and a hypothetical Mustang that cost $22,000, was red, and had 40,000 miles. This hypothetical Mustang is now more similar to the second Mustang, making the comparison easier because the two cars differ on only two dimensions rather than three.

Carrying this Even Swaps example to conclusion, suppose you are now willing to pay $4,000 to reduce the mileage of the second car from 50,000 to 40,000 miles. (Note that in this case it is impossible to reduce the mileage on the Mustang in the real world. But it is still possible to talk about how much you would be willing to pay if it were possible.) The second Mustang ($15,000; red; 50,000 miles) is now equivalent to a $19,000 red Mustang with 40,000 miles. As compared to the equivalent of the first Mustang ($22,000, red, and 40,000 miles), the Even Swaps approach concludes that you are $3,000 better off with the second Mustang (the $15,000 red Mustang with 50,000 miles).

Finally, it is important to note that you could consider a third alternative: buying the blue Mustang for $20,000 and paying to get it painted red. Suppose it cost only $1,000 to get a really good red paint-job on the Mustang? The third alternative now costs $21,000 for a red Mustang with 40,000 miles. Should you prefer this third alternative to the second alternative ($19,000 red Mustang with 40,000)?

For complex problems involving four or more alternatives and ten or more objectives, this Even Swaps approach can get quite complicated.
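For readers who like to see the bookkeeping spelled out, here is a minimal sketch of the Mustang example. The willingness-to-pay judgments ($2,000 for the color change, $4,000 for the mileage change) come from the example above; the data structures and function names are illustrative only.

```python
# Sketch of the Even Swaps bookkeeping for the two Mustangs in the text.
# The willingness-to-pay amounts come from the example; the dictionary
# representation is just one convenient way to track the swaps.

mustang_1 = {"price": 20_000, "color": "blue", "miles": 40_000}
mustang_2 = {"price": 15_000, "color": "red",  "miles": 50_000}

def swap(car, attribute, new_value, willingness_to_pay):
    """Return an equivalent hypothetical car: the attribute is improved and
    the price is increased by what the buyer would pay for that improvement."""
    adjusted = dict(car)
    adjusted[attribute] = new_value
    adjusted["price"] = car["price"] + willingness_to_pay
    return adjusted

# Swap 1: you would pay $2,000 to turn the blue car red.
equiv_1 = swap(mustang_1, "color", "red", 2_000)    # $22,000, red, 40,000 miles

# Swap 2: you would pay $4,000 to take the second car from 50,000 to 40,000 miles.
equiv_2 = swap(mustang_2, "miles", 40_000, 4_000)   # $19,000, red, 40,000 miles

# The two equivalents now differ only in price, so the comparison is direct.
print(f"Second Mustang is ${equiv_1['price'] - equiv_2['price']:,} better off")  # $3,000
```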

Quantitative Approaches

There are several sophisticated and theoretically justified approaches to performing quantitative analyses of alternatives across multiple objectives. It is beyond the scope of this book to get into the details of these approaches, but we do provide an introduction to some important issues to address and some cautions. Belton and Stewart (2002) and Kirkwood (1997) provide summaries and discussions of such approaches.

The Basics

The decision analysis approach is the one recommended by the authors of this book. This approach provides for dealing with multiple conflicting objectives in a theoretically correct manner. The quantitative value model is intended to evaluate only feasible decision solutions, after alternatives deemed unacceptable due to insufficient value on one or more value measures have been screened out. The model has three parts: (1) value functions that translate measures on dimensions such as purchase cost, miles per gallon, and safety into a unit-less value dimension; (2) "swing" weights that capture both the importance of each measure and the degree of difference across the portion of the measure's range being considered, so that value on different dimensions can be combined into a unified value measure spanning all the measures; and (3) a mathematical equation that combines the weighted values into a single score.

Academics and practitioners have proposed many value equations that could be used. However, only a few are rigorous in the sense that there is an underlying theory for why the value function makes sense from a decision-making perspective. The simplest such mathematical expression for the quantitative value model is the additive value function:

v(x) = \sum_{i=1}^{n} w_i v_i(x_i)     (Equation 8-1)

where v(x) is the total value of a feasible decision alternative. For i = 1 to n (the number of value measures), xi is the score of the feasible decision alternative on the ith value measure, vi(xi) is the single-dimensional value of the feasible decision alternative on the ith value measure, and wi is the swing weight of the ith value measure. The swing weights are typically normalized so they sum to 1, but this is not a requirement (Kirkwood 1997). This equation is the most intuitive and has been used by many practitioners who did not even know there was an underlying theoretical justification for its use. That theoretical justification is the most restrictive of all such theoretically justified equations and relies in practice on using fundamental objectives rather than means objectives.

The three steps in this method are:

  1. Define value scales for one or more metrics that quantify each objective. These value scales should be anchored so that the worst possible result is a 0 and the best possible result is a 100, and they should have an interval-scale property. An interval scale means a value difference of 10 on one part of the scale is the same size value difference as any other difference quantified as a 10. Fahrenheit and Celsius temperature scales are interval scales for temperature. Note that we cannot say that 100 degrees F is twice as hot as 50 degrees F, but we can say that the interval between 50 and 100 degrees F is equal in heat gained to the interval between 25 and 75 degrees F.

  2. Capture "swing" weights for each objective. These swing weights represent the relative value to the decision maker of going from the bottom to the top of the various value scales. These weights must be on a ratio scale, meaning that a swing in value that is twice as important as another swing in value should have a weight that is twice as large. It is not enough to capture just the importance of the objectives, because the range of consideration across the alternatives might differ from objective to objective. We have found that many arguments about the weights boil down to differences in this "swing." It is also common for people to gauge the ratios inaccurately, setting the weights too close together.

  3. Score each alternative on each metric using the interval value scale defined in step 1.

After these three steps have been completed, a weighted average of value scores can be computed and reliably used to support a project manager in choosing between alternatives (see Figure 8-1).
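A minimal sketch of these three steps, using equation 8-1, might look like the following. The objectives, value scores, and swing weights are hypothetical and serve only to show how the weighted average is computed.

```python
# Minimal sketch of the three-step additive value model (equation 8-1).
# The objectives, alternatives, scores, and swing weights below are hypothetical,
# chosen only to show the mechanics.

# Step 1 and Step 3: value scores on a 0-100 interval scale, one per objective.
scores = {                    # alternative -> {objective: value score}
    "Alternative 1": {"cost": 80, "schedule": 40, "performance": 65},
    "Alternative 2": {"cost": 55, "schedule": 90, "performance": 50},
    "Alternative 3": {"cost": 30, "schedule": 70, "performance": 95},
}

# Step 2: swing weights, normalized so they sum to 1.
raw_swing = {"cost": 3.0, "schedule": 1.0, "performance": 2.0}
total = sum(raw_swing.values())
weights = {obj: w / total for obj, w in raw_swing.items()}

# Weighted average of equation 8-1: v(x) = sum_i w_i * v_i(x_i)
def total_value(alternative_scores):
    return sum(weights[obj] * alternative_scores[obj] for obj in weights)

for name, alt_scores in scores.items():
    print(f"{name}: total value = {total_value(alt_scores):.1f}")
```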

Value Functions

Value functions contain both an x axis and a y axis. The x axis represents the range of a measure over which value is to be assessed. The y axis represents the value that a decision alternative receives based on its score on that measure. A value function should be developed for each value measure i. This value function transforms a measure such as purchase cost in dollars, fuel economy in miles per gallon, or safety based on a 5-star metric into a value measure scaled from 0 to 100. This value function must be an interval scale, meaning that equal intervals on the value dimension (y axis) represent equal value differences no matter where they lie on the scale. The value function can be discrete or continuous. Continuous functions typically follow the four basic shapes shown in Figure 8-4 (Parnell and Driscoll 2008). The level xi0 represents the worst level of xi on the value function, and xi* represents the best level. The curves on the left are monotonically increasing, and the curves on the right are monotonically decreasing, over the range from the minimum to the maximum level of the value measure. The decision maker is responsible for describing the shape of the curve for each value measure by judging the value of specific incremental increases on the measure scale.

FIGURE 8-4: Typical Shapes of Value Function

We illustrate the development of a value function using the value measure miles per gallon (mpg). The minimum acceptable level for the stakeholder is 20 mpg, and the maximum level considered is 40 mpg. The incremental increase in value from 20 to 25 mpg is 5 units, and from 25 to 30 mpg it is 20 units. The other judgments are shown in the table at the top left of Figure 8-5. As mpg increases from 20 to 40, the curve follows the S shape shown to the right in Figure 8-5.

FIGURE 8-5: Value Function for Maximizing Miles per Gallon

We will demonstrate this for a Honda Odyssey that gets approximately 25 mpg (xi). Using the value function shown in Figure 8-6, we see that 25 mpg gets a value of 5 (vi(xi)).
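A piecewise-linear approximation of this value function can be sketched as follows. The points at 20, 25, and 30 mpg (values 0, 5, and 25) come from the judgments above; the points assumed for 35 and 40 mpg are placeholders chosen only to complete an S-shaped curve ending at 100, since the full table in Figure 8-5 is not reproduced here.

```python
# Sketch of a piecewise-linear version of the mpg value function.
# The points at 20, 25, and 30 mpg come from the text; the 35 and 40 mpg
# points are assumed here purely to complete an S-shape ending at 100.

MPG_POINTS = [(20, 0), (25, 5), (30, 25), (35, 70), (40, 100)]

def value_of_mpg(mpg):
    """Linear interpolation between assessed (mpg, value) points."""
    if mpg <= MPG_POINTS[0][0]:
        return MPG_POINTS[0][1]
    if mpg >= MPG_POINTS[-1][0]:
        return MPG_POINTS[-1][1]
    for (x0, v0), (x1, v1) in zip(MPG_POINTS, MPG_POINTS[1:]):
        if x0 <= mpg <= x1:
            return v0 + (v1 - v0) * (mpg - x0) / (x1 - x0)

print(value_of_mpg(25))   # 5.0, matching the Honda Odyssey example
```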

After summing all the weighted values across the value measures, we can calculate the total value v(x) for a decision alternative. The decision alternative scoring the highest value may be chosen as the decision solution; however, that decision rests ultimately with the decision authority.

FIGURE 8-6: Value Function for “mpg”

Swing Weights

We now turn our attention to weighting the value model. It is possible to derive equation 8-1 from the axioms of decision analysis, so it is clear that the weights in that equation have to account for the relative importance of the “swings” from the bottom to the top of the value functions being used. Using the weights to reflect just the raw importance of the measures might lead to poor decisions. When the bottom or top of the value function is adjusted, the weights also need to be adjusted.

This topic of capturing the swing in value as part of the value weights is the least understood and most important part of quantitative decision-making methods. (See the discussion about changing from a blue to a red Mustang in the Even Swaps section earlier in this chapter.) In the case of the Mustangs there were only two alternatives, so the value function can adopt the actual values of the two alternatives on each measure; we need not talk about green or black Mustangs until one of them becomes a viable alternative. The amount of money the decision maker was willing to pay to change from the blue to the red Mustang, or from the 50,000-mile to the 40,000-mile Mustang, was a discussion about swing weights for these two dimensions. Because the fictional person in the example was willing to pay $2,000 for the color change and $4,000 for the mileage change, his swing weight for mileage should be twice as large as his swing weight for color.

Using the example value hierarchy from Chapter 7 shown in Figure 8-7, we begin determining weights by probing the stakeholder about the relative importance of each value measure as compared to all other measures. The first step is to get the stakeholder to rank order the importance of the swings in the value measures from the bottom to the top of their value scales. Table 8-3 shows these swings. Note that the full range of values for each measure does not need to be considered if the alternatives do not include those values. Because there are no two-passenger cars or 5-star-safety cars under consideration, those values are not included in the feasible swing for the value functions.

FIGURE 8-7: Vehicle Design Value Hierarchy

TABLE 8-3: Swings on Value Measures

The stakeholder for this problem ranks the value measures as follows: (1) safety, (2) miles per gallon, (3) horsepower, (4) storage space, and (5) passenger capacity. Note that if the safety swing is considered important to the decision maker, safety can still be the highest-weighted measure, even though its swing is small.

The second step is to get the stakeholder to assess the relative value of the swing in each value measure relative to the swing in the carrying capacity of passengers. There are many ways to capture these judgments of the decision maker; see Kirkwood (1997) and Buede (2000). For this example, we use the most direct approach of asking for ratio judgments of value with respect to the swing in carrying capacity. The stakeholder identifies (1) safety as three times more important than carrying capacity (3x), (2) miles per gallon as three times more important (3x), (3) horsepower as two times more important (2x), and (4) volume as equally important (1x). The carrying capacity value measure receives a factor of 1x, indicating that its relative importance is equal to that of value measure (4), volume. Each of these factors, fi, is an un-normalized weight. The normalized weight is given by:

w_i = \frac{f_i}{\sum_{j=1}^{n} f_j}     (Equation 8-2)

Using equation 8-2, our weights for the value measures in Figure 8-7 are (1) 0.3, (2) 0.3, (3) 0.2, (4) 0.1, and (5) 0.1, respectively. We can now calculate the single-dimensional values vi(xi) and the total value v(x) of each decision alternative to determine the decision solution that best satisfies stakeholder needs.
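A small sketch of this calculation is shown below; the only inputs are the 3x, 3x, 2x, 1x, and 1x ratio judgments just elicited.

```python
# Sketch of the swing weight normalization in equation 8-2, using the ratio
# judgments from the text (all relative to the swing in passenger capacity).

swing_factors = {
    "safety": 3.0,
    "miles per gallon": 3.0,
    "horsepower": 2.0,
    "storage space": 1.0,
    "passenger capacity": 1.0,
}

total = sum(swing_factors.values())                        # 10.0
weights = {measure: f / total for measure, f in swing_factors.items()}

for measure, w in weights.items():
    print(f"{measure}: {w:.1f}")
# safety 0.3, miles per gallon 0.3, horsepower 0.2, storage space 0.1,
# passenger capacity 0.1 -- matching the weights quoted in the text
```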

A recent swing weighting value assessment tool developed by Parnell (2007) is called a swing weight matrix. Figure 8-8 shows a swing weight matrix for the car purchase decision discussed in this section. The columns represent three segmentations (critical, moderate, nice to have) of the intrinsic importance of the measures to the decision in question. The rows represent three segmentations of the variation permitted on the measures, from unconstrained to significant constraints. Each measure is then placed into the appropriate cell in the three-by-three matrix. (Some swing weight matrices might have four or five segmentations of the rows or columns.) The measures in the top right corner cell must have the highest swing weight. The measures in the bottom left cell must have the lowest swing weight. As we move to the right in a row, the swing weights must increase. As we move from the bottom to the top cell in a column, the swing weights must also increase.

FIGURE 8-8: Swing Weight Matrix

In this example, the safety measure is highly constrained (only 3- and 4-star ratings), so it would have to be in the bottom row. Because it is so high in priority, it must be critical to the decision maker. Fuel economy, on the other hand, is relatively unconstrained, so it should be in the top row. Its relative high priority means it must be at least moderate in importance. If it were critical in importance, it would have to have a higher weight than safety.

Because fuel economy is in a higher row but a column to the left of the column for safety, the swing weight for fuel economy could be greater than, equal to, or less than the swing weight for safety. In our example, the decision maker said “equal to.” The variation of horsepower was also relatively unconstrained, so it should be placed in the top row. Given its priority, the decision maker must have felt it was relatively nice to have. The placement of storage space and carrying capacity is left to the reader to determine.

Some Cautions

Because numbers for our values have been introduced, various forms of sensitivity analysis and “what-if” analysis can be performed to determine how sensitive the recommended decision is to small changes in the numbers. Several commercial software packages can be used to support this type of quantitative decision; many people use their favorite spreadsheet software package for these computations.

One caution is most important: An approach that quantifies the value of one alternative on one objective/metric so that it is affected by the value of another alternative on the same objective/metric is subject to a phenomenon called “rank reversal.” This means that if we conduct an analysis and then delete an alternative because it is dominated, the rank order of the remaining alternatives might change. Be careful in using methods that have this “feature.” A common operation that causes this problem is “normalizing” the value scores on each objective/metric so they sum to 1.0.
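The following sketch shows the phenomenon with hypothetical scores: value scores are normalized to sum to 1.0 on each objective, a dominated alternative is removed, and the order of the two remaining alternatives reverses.

```python
# Sketch of the "rank reversal" caution: scores are normalized so that they
# sum to 1.0 on each objective, an alternative dominated on every objective
# is dropped, and the order of the remaining alternatives flips.
# The raw scores below are hypothetical and chosen only to exhibit the effect.

raw = {                    # alternative -> raw scores on two objectives
    "A": [10, 1],
    "B": [3, 6],
    "C": [2, 5],           # dominated by B on both objectives
}
weights = [0.5, 0.5]

def normalized_totals(scores):
    alts = list(scores)
    totals = {a: 0.0 for a in alts}
    for j, w in enumerate(weights):
        column_sum = sum(scores[a][j] for a in alts)
        for a in alts:
            totals[a] += w * scores[a][j] / column_sum
    return totals

print(normalized_totals(raw))                    # A ~0.375 > B ~0.350 > C ~0.275
without_c = {a: s for a, s in raw.items() if a != "C"}
print(normalized_totals(without_c))              # now B ~0.544 > A ~0.456
```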

Some complexities that can affect these decisions are uncertainty and risk. Often it will be hard to estimate how well a particular alternative will perform on a metric associated with an objective. Simple analytic techniques involve specifying a probability distribution for the performance of the alternative in question on that metric. However, often a string of uncertain factors is causing this uncertainty, and specifying a single probability distribution to cover all of these factors is difficult, unreliable, or both. In this case, more formal probability modeling techniques might be applied for the most important, high-impact decisions (see Buede 2000 and Paté-Cornell 1996).

When uncertainty exists, we must consider whether the project manager is risk-averse. For high-stakes decisions, using expected value does not capture the risk-averseness of many project managers. There is much in the literature on risk aversion and how to model it. (See Buede 2000, Howard and Abbas 2007, and Schuyler 2001).

A final caution is to avoid approaches that quantify weights and scores using rank orders. There is a well-known example (Pariseau and Oswalt 1994) of a Navy procurement that involved seven proposals being evaluated on 17 objectives or criteria (Figure 8-9). The evaluation team rank ordered each criterion on the basis of importance without regard to the swing; the most important criterion got a score of “1” and the least important got a “17.” Then the seven proposals were rank ordered, criterion by criterion, with the best proposal getting a rank of “1” and the worst a rank of “7.” To determine the best proposal, the weight ranks were multiplied by the score ranks for a given proposal and summed, yielding the weighted sum of ranks in the bottom row of Figure 8-9.

FIGURE 8-9: Improper Use of Rank Order Scales

There are several problems with this approach, which was successfully protested by one of the losing bidders. First, it is not legitimate to sum ranks or sum weighted ranks, because ranks are an ordinal scale with no information about the distance (in value, in this case) between different rank positions. It is also not legitimate to multiply one rank order by another rank order, for the same reason. Finally, giving the most important criterion a "1" and the least important a "17" and then multiplying makes the least important criterion far more influential in determining the best proposal than the most important criterion. The most important criterion can contribute only 1 to 7 points to a proposal's weighted sum, while the least important criterion can contribute 17 to 119 points (17 times 1 to 17 times 7).
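A quick calculation makes the imbalance obvious. The sketch below simply computes the smallest and largest contribution a criterion can make to the weighted sum of ranks under the scheme described above.

```python
# Sketch of why multiplying importance ranks by proposal ranks backfires:
# the least important criterion (importance rank 17) swings a proposal's
# total far more than the most important criterion (importance rank 1).

def contribution_range(importance_rank, n_proposals=7):
    """Smallest and largest contribution a criterion can make to the
    weighted sum of ranks (proposal ranks run from 1 = best to n = worst)."""
    return importance_rank * 1, importance_rank * n_proposals

print(contribution_range(1))    # (1, 7)    most important criterion
print(contribution_range(17))   # (17, 119) least important criterion
```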

Dealing with Biases and Heuristics

Cognitive biases have been defined in many ways:

“[M]ental errors caused by our simplified information processing strategies” that are “consistent and predictable.”

Distortion in the way people perceive reality.

Similar to optical illusions in that the error remains compelling even when one is fully aware of its nature. Awareness of the bias, by itself, does not produce a more accurate perception.

Resulting from subconscious mental procedures for processing information.

Not resulting from any cultural, emotional, organizational, or intellectual predisposition toward a certain judgment.

The key biases that relate directly to decision-making are:

Information bias: Seeking information simply because more information seems certain to help in the decision process, without focusing on the information that could actually affect which decision should be taken. Seeking information is the first step for most decision makers and is consistent with this bias. One key message of this book is that it is best to think through the decision first and then seek information that would help determine which alternative is the best.

Bandwagon effect: Making decisions that many others are making because so many other people cannot be wrong. People have different value structures and find themselves in different situations, so making different choices is common.

Status quo bias: Preferring alternatives that keep things relatively the same.

Loss aversion: Avoiding losses is strongly preferred over acquiring gains. This leads to a status quo bias and very risk-averse behavior.

Neglect of probability: Disregarding probability when selecting an alternative from among several, each of which might have a very different chance of success. If some alternatives have greater chances of success than others, that fact should be considered.

Planning fallacy: Underestimating task-completion times. This is a well-known risk factor (DeMarco and Lister 2003).

Researchers in the field of intelligence analysis have been studying the effect that human cognitive biases can have on errors of judgment that result in mistaking non-causal relations for causal relations. Heuer (1999), one of the key researchers in this field, described six such biases, which are relevant to decision-making in any domain, including project management:

Bias in favor of causal explanations: A tendency to develop causal explanations of observations and to ignore the possibility that perceived links are due to random factors.

Bias favoring perception of centralized direction: Tendency to attribute actions of an organization to centralized direction or planning.

Similarity of cause and effect: Belief that causes and effects are similar in magnitude or other attributes.

Internal vs. external causes of behavior: A tendency to overestimate the role of internal factors and underestimate the role of external factors in determining behavior.

Overestimating our own importance: Tendency for people to overestimate the extent to which they can successfully influence the behavior of others.

Illusory correlation: A tendency to infer correlation between events from co-occurrence without considering the frequencies of the other possible patterns of these events.

Psychologists have studied errors that people make in estimating the probability that a future event might happen. Here are some of the results:

Availability rule: This heuristic refers to the tendency to judge the likelihood of an event by the ease with which examples can be remembered or imagined. So dying from an airplane crash is thought to be more likely than dying from an accidental gunshot, but based on data from the National Safety Council, the lifetime odds for a U.S. resident dying from these two causes were nearly equal in 2003.

Anchoring: The tendency to estimate a probability by adjusting from a previous estimate or anchor point. Typically, adjustments from the anchor point are too small.

Expressions of probability: Verbal expressions of uncertainty, such as "possible," "unlikely," and "could," have long been recognized as sources of ambiguity and misunderstanding.

Base-rate fallacy: This bias refers to a tendency to focus on explicit indicators for an event and ignore the statistical base rate of the event. For example, we often ignore the base rate of a disease and assume that a positive test result means we positively have the disease. Many unneeded operations are performed each year because of this fallacy. A short worked example follows this list.

Assessing probability of a scenario: The assessed probability of a scenario is related to the amount of detail in its description, rather than its actual likelihood. A more detailed scenario (losing one’s house due to a flood caused by a hurricane) is judged more likely than a less-detailed scenario (losing one’s house due to a flood), even though the opposite is true.
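To make the base-rate fallacy concrete, here is a minimal sketch of the underlying arithmetic. The prevalence and test-accuracy numbers are hypothetical, chosen only for illustration.

```python
# Hypothetical illustration of the base-rate fallacy: a disease with a 1% base
# rate and a test that is 90% sensitive and 90% specific. Even after a positive
# result, the chance of actually having the disease is well under 10%.

base_rate   = 0.01    # P(disease)
sensitivity = 0.90    # P(positive test | disease)
specificity = 0.90    # P(negative test | no disease)

p_positive = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
p_disease_given_positive = sensitivity * base_rate / p_positive

print(f"P(disease | positive test) = {p_disease_given_positive:.2f}")   # about 0.08
```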

Requirements for Good Decisions

Yates (2003) maintains that a decision requires the resolution of ten fundamental issues, which are the need; the people (or mode); the resources for making the decision; the options or alternatives; the possibilities, consequences, or objectives; the probabilities that certain possibilities might occur; the values associated with the possibilities and objectives; the trade-off values among objectives; the process of getting the decision makers to an acceptable selection of an alternative; and the implementation of that alternative. The following discussion of Yates’ ten fundamental issues provides project management examples of each of the ten issues.

Need: Deciding whether a decision should be made.

A company receives a request for a bid on a large project.

A competitor releases an upgraded product.

Customers increasingly complain about poor service.

Companies constantly are confronted with events like these—problems to address and opportunities to exploit. Successful decision managers vigilantly monitor the business landscape so they can see these events unfolding and determine whether, when, and how to initiate a decision-making effort.

Mode: Who will make this decision? How will they decide?

Should a particular decision be made by a senior executive or delegated?

Is a consultant’s expertise needed?

Should a committee be convened?

If so, who should serve on it and what process should they follow?

Managers must understand the numerous means of making decisions and carefully apply them to specific issues.

Investment: What resources will be invested in decision-making? Decision makers must weigh the material resources needed to make a decision—direct expenses and staff time, for example—as well as the emotional costs, including stress, conflict, and uncertainty in the organization.

Options: What are the potential responses to a particular problem or opportunity? The goal here is to assemble and evaluate options in a way that:

Unearths the ideal solution (or one close to it).

Wastes minimal resources.

Methods for gathering potential choices include soliciting ideas from staff, seeking input from a consultant, brainstorming, and evaluating how other organizations have responded to a similar issue.

Possibilities: What could happen as a result of a particular course of action? Managers must foresee outcomes that are likely to be important to beneficiaries and stakeholders in the decision. Many decisions fail when these parties—including employees, customers, and neighbors—are blindsided by adverse outcomes the decision makers failed to even consider.

Judgment: Which of the things that could happen would happen? Decisions are shaped by predictions, opinions, and projections; it’s important to evaluate their accuracy and determine how much weight to give them. The quality of these judgments improves markedly as the number of people participating in the process increases—particularly when the participants provide a variety of viewpoints.

Value: How much would beneficiaries care, positively or negatively, if a particular outcome were realized? Different stakeholders may have dramatically different values regarding an action and its outcomes. The intensity of these values determines whether they will take action supporting or opposing the decision.

Trade-offs: Every prospective action has strengths and weaknesses; how should they be evaluated? There are formal “trade-off tools” that can help with complex decisions. In most cases, when this issue is resolved, the decision is made.

Acceptability: How can we get stakeholders to agree to this decision and the procedure that created it? It’s critical to identify groups that might object to a decision, why they feel that way, whether they can derail the decision, and how to preclude such trouble.

Implementation: The decision has been made. How can we ensure it will be carried out? A decision that is not implemented is a failure. It’s important to recognize, and prevent, circumstances that can cause this to happen. Such circumstances include failure to allocate adequate resources to the initiative, failure to assign a senior manager to champion the project, and failure to provide incentives that ensure staffers will make implementation a high priority among their responsibilities.

In this chapter we have discussed many ways to achieve resolution on the fundamental issues Yates summarized. Five qualitative approaches for decision-making were presented, moving from the simple to the more complex. The pros and cons matrix of Ben Franklin began to establish the differences among the alternatives, providing grist for an objectives hierarchy. We then presented an approach for creating a fundamental objectives hierarchy and doing a qualitative assessment using adjectives such as "great," "average," and "poor." The third approach was a consequence table using rank orders for the alternatives on each objective. Here we cautioned against the nearly inescapable tendency to apply improper mathematics to these ranks; instead, these numbers should be used to find dominated alternatives that can be discarded. The Pugh matrix was presented as another approach, similar to the consequence table, that can aid in finding dominated alternatives. Finally, the Even Swaps approach was presented for comparing a few alternatives on a few objectives to determine which might be best. The Even Swaps approach helps you create hypothetical alternatives that are easier to compare with the real alternatives, enabling you to determine which is preferred.

Next, we summarized some of the essential characteristics of quantitative approaches and provided references to key resources for these approaches. Many quantitative approaches in the literature are masquerading as rational but in fact have major flaws that no rational approach would adopt. An example of one such case was provided. We then presented many of the biases that people exhibit in decision-making, strengthening our case for systematic, rational decision-making even more. We ended the chapter with Yates’s cardinal decision issues.

The following specific points were made in this chapter:

There are both qualitative and quantitative approaches to evaluating the alternatives we face in light of our values or objectives.

The qualitative approaches range from creating lists of pros and cons and making a holistic choice to approaches that are nearly quantitative because they involve rank orders. But please be careful with rank orders: They may be numbers, but they should not be treated as numbers unless there is overwhelming evidence that one alternative does much better than the rest; if this is the case, the decision was an easy one anyway.

The process of quantifying our values requires careful thought. This is a good thing because it causes us to question assumptions and define our terms carefully. But quantification of our values requires more time and should be reserved for hard, important decisions.

Quantification of our values should be more defensible and reliable as a decision-making process.

In Chapter 9, we address the impacts of uncertainty and risk on making a decision. Uncertainty and risk are important issues for project managers and can clearly affect our selection of the preferred alternative.
