CHAPTER TWO

Decision-Making Challenges

TERRY A. BRESNICK and GREGORY S. PARNELL

Two Perspectives on Decision Making

Nothing is more difficult, and therefore more precious, than to be able to decide.

—Napoleon, “Maxims,” 1804

Nothing good ever came from a management decision. Avoid making decisions whenever possible. They can only get you in trouble.

—Dogbert, Dogbert’s Top Secret Management Handbook, 1996


2.1 Introduction
2.2 Human Decision Making
2.3 Decision-Making Challenges
2.4 Organizational Decision Processes
2.4.1 Culture
2.4.2 Impact of Stakeholders
2.4.3 Decision Level (Strategic, Operational, and Tactical)
2.5 Credible Problem Domain Knowledge
2.5.1 Dispersion of Knowledge
2.5.2 Technical Knowledge: Essential for Credibility
2.5.3 Business Knowledge: Essential for Success
2.5.4 Role of Experts
2.5.5 Limitations of Experts
2.6 Behavioral Decision Analysis Insights
2.6.1 Decision Traps and Barriers
2.6.2 Cognitive Biases
2.7 Two Anecdotes: Long-Term Success and a Temporary Success of Supporting the Human Decision-Making Process
2.8 Setting the Human Decision-Making Context for the Illustrative Example Problems
2.8.1 Roughneck North American Strategy (by Eric R. Johnson)
2.8.2 Geneptin Personalized Medicine (by Sean Xinghua Hu)
2.8.3 Data Center Decision Problem (by Gregory S. Parnell)
2.9 Summary
Key Terms
References

2.1 Introduction

In this chapter, we describe decision-making challenges and introduce some reasons why decision analysis is valuable to decision makers, but also why decision analysis may be difficult to apply. The axioms of decision analysis (see Chapter 3) assume rational decision makers operating in efficient organizations, but that may be the exception rather than the rule. Although decision analysis is mathematically sound, it is applied in the context of human decision making and organizational decision processes. As decision analysts, we interact with decision makers, stakeholders, and subject matter experts (SMEs) to build models in environments where objective data may be scarce, and we must rely on eliciting knowledge from decision makers, stakeholders, and SMEs who are prone to many cognitive biases. We must develop our soft skills as well as our technical skills. The challenges introduced in this chapter include understanding organizational decision processes, understanding decision traps, and understanding cognitive and motivational biases. Soft skills, such as strategic thinking, leading teams, managing projects, researching, interviewing, and facilitating group meetings, are covered in Chapter 4. Even if we do a superb technical analysis, the most important part of our job remains: communicating the results of the analysis to decision makers and stakeholders, as discussed in Chapter 13.

The rest of the chapter is organized as follows. Section 2.2 discusses the decision-making processes that humans typically employ. Section 2.3 introduces the factors that make decision making a challenge. Section 2.4 introduces the social and organizational decision factors that affect how a decision analysis can be conducted, including the organizational culture, the impact of stakeholders, and the level at which decisions are made. Section 2.5 discusses the issues involved in obtaining credible domain knowledge for making the decision and the role of experts in providing these data. Section 2.6 focuses on the behavioral aspects of decision making, including the decision traps and barriers and the cognitive biases that affect the decision-making process. Section 2.7 provides two anecdotes of success and failure in supporting the human decision-making process. Section 2.8 sets the stage for the illustrative decision problems used throughout the handbook by establishing their decision-making contexts.

2.2 Human Decision Making

Decision analysis practitioners sometimes take it for granted that people have little difficulty in making decisions. We often assume that thinking about alternatives, preferences, and uncertainty comes naturally to those we are trying to help, and that rational thought is the norm. The reality is that regardless of how many well-documented methodologies with thoroughly proven theorems are provided to them, human decision makers are inconsistent. Roy Gulick, a decision analyst with Decisions and Designs, Inc., coined the term “anatomical decision making” to describe the answers he had gathered from decision makers over the years to the question, “How do you make your decisions?” Why “anatomical decision making”? The responses he received typically included things such as “seat of the pants,” “gut reaction,” “rule of thumb,” “top of the head,” “knee-jerk reaction,” “pulled it out of my … ”—almost every part of the body was mentioned except “the brain”! These responses serve to remind us that no matter how well-honed our analytical methods are, and no matter how much objective data we have, human decision making cannot be overlooked. Humans are sometimes inconsistent, irrational, and subject to cognitive biases. Nonetheless, their subjective judgments, tenuous though they may be, must be included in the decision analysis.

All that said, decision analysts believe that the human decision-making process can be studied systematically, and that coherent, structured, and formal processes are better than purely “anatomical” decision-making processes.

2.3 Decision-Making Challenges

To achieve effective decision making, it is desirable to bring rational decision makers together with high-quality information about alternatives, preferences, and uncertainty. Unfortunately, the information is not always of the quality we would like. While we want factual information, erroneous and biased data often slip in as well. While we would like objective information based upon observed data, sometimes the best we can obtain is opinion, advice, and conjecture. Similarly, decision makers are not always as rational as we would like. Often, the goals of an organization are ambiguous and conflicting. In many cases, the decision-making environment is characterized by time pressures that impose additional constraints. As a result, the effective decision making that we seek is often less attainable than we desire. One key role of decision analysis thus becomes providing an effective link between the decision makers and the best available information.

Decision problems are complex, and this complexity can be characterized in three dimensions, as shown in Figure 2.1 (Stanford Strategic Decision and Risk Management Decision Leadership Course, 2008). Content complexity ranges from few scenarios with little data and a relatively stable decision-making setting, to many scenarios with data overload, many SMEs involved, and a dynamic decision context. Analytic complexity ranges from deterministic problems with little uncertainty and few fundamental and means objectives to problems with a high degree of uncertainty, many alternatives, and a complicated value hierarchy with many dependencies (see Chapter 7). Organizational complexity ranges from a single decision maker with a homogeneous set of stakeholders to multiple decision makers requiring consensus and a diverse set of stakeholders with conflicting perspectives. The best time to address the organizational complexity is when we are setting up the project structure by engaging the right people in the right way (see Chapter 4).

FIGURE 2.1 Dimensions of decision complexity.

(Adapted from Stanford Strategic Decision and Risk Management Decision Leadership Course, 2008, used with permission.)


As we discuss in Chapters 9, 10, and 11, when we are eliciting expertise about a decision situation, modeling its consequences, and analyzing the results, it can be helpful to further decompose the analytical and content complexity into five more specific dimensions: value components, uncertainty, strategy, business units, and time.

2.4 Organizational Decision Processes

As decision analysis practitioners, we are asked to go into organizations, whether in the public or private sector, and to work within the existing decision processes. The key thing to remember is that one size does not fit all. Each decision opportunity has its own unique characteristics, and we must be willing and able to adapt to the individuals and the organizational decision-making environment. There are many examples of analytically sound studies that sit on bookshelves or in trashcans because the processes used and the conclusions reached did not “fit” with the existing organizational decision processes. All too often, analysts tend to think of the processes used to “solve” the client problems that they face as “technical” processes. Properly applying decision trees, influence diagrams, or Monte Carlo simulations may provide a superb technical solution to the problem, but, by themselves, these can miss what may be the most important part of the solution—the social aspects of the solution. Larry Phillips of the London School of Economics describes what decision analysts do as a “socio-technical process” (Phillips, 2007). The way that the technical solution fits into the organizational culture, structure, decision-making style, and other factors may determine the acceptability of the technical solution. These factors are discussed in the next section.

2.4.1 CULTURE

Decision analysis cannot be performed in isolation in any organization. The approach must be tailored to the context of the problem and the culture and environment of the organization. Culture can include many aspects that must be considered. Some of the major factors to consider include:

Public versus private sector. Decision making in a public sector environment can be very different from that in a private sector environment. Many public sector decisions are made in a setting of openness and transparency, while others are made in parts of the public sector, such as the Department of Defense (DoD) and the Intelligence Community (IC), that are very security conscious and where information protection and “need-to-know” are the guiding principles. Private sector decisions in some domains are often proprietary and protected as well.
Geographical. Decision-making practices and approaches can vary greatly from country to country. Understanding the value systems, legal systems, moral and ethical underpinnings, and cultural mores is critical for success. What is an accepted practice in one country can be a terrible faux pas in another. For example, according to O’Boyle, people brought up in the U.S. have difficulty understanding those who prefer identity as a group rather than as an individual; Japanese find it unsettling to deal with U.S. companies whose policies change as management changes; and Americans consider the Dutch unassertive in their business approach, while the Dutch consider the standard American business résumé to be so boastful as to be unreliable (O’Boyle, 1996).
Leadership style. The decision-making process and the role that decision analysts can play are highly dependent on leadership style. Some of the most significant aspects of leadership style that affect the nature of the analysis that can be performed include:
  • Degree of authoritativeness. Some organizations have highly authoritative leaders, while others practice a more democratic style of leadership.
  • Degree of delegation. Some leaders are more willing than others to delegate both decision-making responsibility and authority.
  • Decision-maker engagement. Some decision makers will provide initial guidance and will not want to be involved again until results are ready, while others will want to be involved in every step of the process.
  • Number of decision makers. In some rare cases (especially in public decisions), there is a single decision maker, while in others, decisions are made by committee, which requires aggregating across decision makers.
  • Degree of formality. Some organizations have very formal decision-making and leadership styles that can make it difficult to get access to the decision maker without going through “gatekeepers” who fiercely protect schedules. Other organizations provide easier access to the decision maker.
  • Openness to new ideas and innovation. Some organizations are highly innovative and willing to accept new and better approaches to problem solving, while others prefer their current approaches to doing business. “Not invented here” can be a significant barrier to performing a sound decision analysis.
  • Comfort with outside consultants versus use of insiders. Some organizations are more comfortable getting most of their analytical support from inside the organization, using analysts who are experts in both the process and the subject matter. In such an environment, it may be difficult for an “outsider” to have an impact.

The most important thing to remember is that as decision analysts, we must be prepared to adapt our techniques and processes to the culture of the organization, especially to the style of the decision maker. Keep in mind the “golden rule” of consulting—“the one with the gold makes the rules.” As we develop our analytical solutions, we must offer a process that matches the organizational culture.


Decision analysis is a socio-technical process: we must design a process that uses the right people (broad and deep knowledge of the problem), the right forum (conducive to discussion and interaction), the right balance of modeling and challenging the model with intuition, and the right duration (meeting needed deadlines while enabling information gathering and socializing the results).

2.4.2 IMPACT OF STAKEHOLDERS

A stakeholder is a person, group, or organization that has direct or indirect stake in an organization because it can affect or be affected by the organization’s actions, objectives, and policies. Key stakeholders in a business organization include creditors, customers, directors, employees, government (and its agencies), owners (shareholders), suppliers, unions, and the community from which the business draws its resources.

(BusinessDictionary.com, 2011)

Stakeholders comprise the set of individuals and organizations that have a vested interest in the problem and its solution (Sage & Armstrong, 2000). Understanding who is affected by a solution to a decision problem provides the foundation for developing a complete definition of the problem. Stakeholders collectively perform many functions. They help frame the problem and specify constraints; they participate in the alternative generation and solution process to include evaluation and scoring; they provide data and subject matter expertise; and they identify and often execute tasks for implementing recommended solutions (Parnell et al., 2011).

A straightforward taxonomy of stakeholders is offered in Decision Making in Systems Engineering and Management (Parnell et al., 2011). Stakeholders can choose to be active or passive when it comes to participating in the decision process. Stakeholders are listed in typical order of relative importance:

  • Decision authority. Person or persons with ultimate authority and responsibility to accept and implement a solution to a decision opportunity;
  • Client. Person or organization that initiated the request for decision support; often, the client defines the requirements and holds the purse strings for the effort;
  • Owner. Person or organization responsible for proper and purposeful operations surrounding the decision;
  • User. Person or organization accountable for conducting proper operations of systems related to the decision;
  • Consumer. Persons and organizations with intentional dependencies on the implications of the decision.

Stakeholder analysis is a key technique for ensuring that the problem has been fully described before we attempt to obtain a solution. The three most common techniques for stakeholder analysis are interviews, focus groups, and surveys. Several techniques are available for soliciting input from diverse stakeholders, as shown in Table 2.1 (Trainor & Parnell, 2007). The techniques are characterized and compared on five attributes: time commitment, ideal stakeholder group, preparation, execution, and analysis.

TABLE 2.1 Techniques for Stakeholder Analysis


Stakeholder analysis is critical, since the fundamental and means objectives (see Chapter 7) are built upon the needs of the stakeholders. Without a clear understanding of the different perspectives and different success criteria upon which alternatives will be judged, the analysis can easily be built upon a shaky foundation that will not withstand the pressures of intense scrutiny and organizational implementation. The best practices for the use of these techniques are presented in Chapter 4.

2.4.3 DECISION LEVEL (STRATEGIC, OPERATIONAL, AND TACTICAL)

Decision analysis can be applied at a variety of decision levels in an organization. A common characterization of decision levels includes strategic, operational, and tactical. A good analysis must balance concerns across all three and must consider the dependencies across levels.

2.4.3.1 Strategic Decision Making. 

This level is focused on the long-term goals and direction of the organization, which are often expressed in the organization’s strategic plan. Strategic decision making is oriented around the organization’s mission and its vision for where it wants to be in the future. It addresses fundamental questions: What business is the organization in, and what business should it be in? What are its core values? What products and services should it deliver? Who are its customer sets? What is management’s intent for how the organization will evolve and grow? From the decision analyst’s perspective, this level of decision making typically involves the fewest viable alternatives, the greatest degree of uncertainty since it is future oriented, and the greatest need for fleshing out the fundamental objectives, since statements of strategic goals are often broad and vague. To help an organization at this level, the decision analyst must be a strategic thinker. We identify this as one of the soft skills required of decision analysts (Chapter 4).

2.4.3.2 Tactical Decision Making. 

This level focuses on turning the broad strategic goals into achievable, measurable objectives (Eyes Wide Open: Tips on Strategic, Tactical and Operational Decision Making, 2011). It requires developing actions and allocating resources that will accomplish the objectives. It includes the set of procedures that connects the strategic goals with the day-to-day operational activities of the organization, and its primary purpose is to enable the organization to be successful as a whole rather than as independent parts (Tactical Decision Making in Organizations, 2011). From the decision analyst’s perspective, it is important to identify redundancies and synergies in the alternatives, to conduct value of information analysis (see Chapter 11) to avoid modeling uncertainties that do not affect decisions, to fully understand how to decompose fundamental objectives of the organization into manageable means objectives, and to avoid suboptimization (see Chapter 7).

2.4.3.3 Operational Decision Making. 

This level focuses on day-to-day operational decisions, particularly on how the organization allocates scarce resources. Decisions are short term, and the decision context can change rapidly (Eyes Wide Open: Tips on Strategic, Tactical and Operational Decision Making, 2011). In a business context, it can be highly reactive, since much depends upon the competitive environment. From the decision analyst’s perspective, it frequently involves rapid-response, “quick turn” analyses with little data other than that of SMEs. Benefit/cost analysis, including net present value (NPV) analysis, and high-level multiple-objective decision analysis (MODA) are frequently used tools that are appropriate for longer time horizons.
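The NPV calculations used in such quick-turn analyses are simple to sketch. The following is a minimal illustration; the cash flows and the 10% discount rate are hypothetical assumptions, not figures from the text:

```python
def npv(rate, cash_flows):
    """Net present value of a cash-flow stream.

    cash_flows[0] occurs now; cash_flows[t] occurs t periods from now.
    """
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Two hypothetical alternatives: an upfront cost followed by annual returns.
alt_large = [-100.0, 40.0, 40.0, 40.0]  # larger upfront investment
alt_small = [-60.0, 25.0, 25.0, 25.0]   # smaller, cheaper project
rate = 0.10                             # assumed 10% required rate of return

print(round(npv(rate, alt_large), 2))   # slightly negative at 10%
print(round(npv(rate, alt_small), 2))   # positive at 10%
```

Note that the ranking can flip with the discount rate: at a 0% rate, the larger project has the higher NPV (20 versus 15), which is one reason the required rate of return is itself a key piece of business knowledge (Section 2.5.3).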

Identifying the decisions to be made in a decision analysis is a nontrivial task. Knowing the decision level is one useful technique. In Chapter 6, we introduce the decision hierarchy, which is a decision framing tool to help define the decisions in sufficient detail to perform a decision analysis.

2.5 Credible Problem Domain Knowledge

2.5.1 DISPERSION OF KNOWLEDGE

In theory, it is easy to think of the decision analyst as working directly with the decision maker to build models and solve problems. In practice, it is rarely that straightforward. Most decision analyses rely on knowledge that is dispersed among many experts and stakeholders. The views of individual decision makers are subject to both cognitive and motivational biases, and often, these views must be interpreted and implemented by groups of stakeholders in multiple organizations. The decision analyst is often asked to take on a complicated role other than model builder—the analyst must be the facilitator who translates management perspective to others, who balances conflicting perspectives of stakeholders, who elicits knowledge from dispersed SMEs, and who combines these varied ingredients into a composite model that organizes and integrates the knowledge (see Chapter 9). While unanimity among participants is a noble goal, it is exceptionally rare. Consensus is a more achievable goal—if it is defined as developing a solution that everyone can “live with” rather than as any form of unanimity. But even to achieve consensus, the decision analyst must use modeling and facilitation skills to bring together technical knowledge (often the purview of scientists and engineers) with business knowledge (often the purview of managers and financial personnel) in the particular domain at hand. This is no easy task as sources of such knowledge can be varied and uneven in quality.

2.5.2 TECHNICAL KNOWLEDGE: ESSENTIAL FOR CREDIBILITY

For a decision analysis to be credible, it must be based upon sound technical knowledge in the problem domain. In some cases, the decision maker may have such knowledge from coming up through the ranks. In others, the decision maker is more focused on the business side of the organization and relies upon the technical staff of scientists, engineers, and others to provide such knowledge. In some ways, it is easier to reconcile conflicting opinions on technical matters than on business matters, since technical matters tend to be more factually based and objective. That said, it is sometimes difficult to establish which “facts” to believe, particularly on controversial issues, such as global warming!

There can be a huge difference in the level of domain technical knowledge required of the decision analyst. Specific technical knowledge may be demanded of a decision analyst who is internal to an organization. This is typical, for example, in the oil and gas industry and in the pharmaceutical industry. For a decision analyst external to an organization, there may be less expectation of technical knowledge, but rather the expectation is that the decision analyst can work with a group of technical experts to identify and model the key concerns. In fact, in many consulting firms, it is considered to be an advantage for the decision analyst to not be burdened by having to be an expert in the technical aspects of the organization; this allows the decision analyst to focus on the decision process.

For either the internal or the external decision analyst, it is essential to help the client develop a clear set of objectives, a range of possible outcomes, and probability distributions, to flesh out the key assumptions and constraints, to understand the factors that could create extreme outcomes, and to document how the technical knowledge obtained from others is used.

2.5.3 BUSINESS KNOWLEDGE: ESSENTIAL FOR SUCCESS

While a firm grasp of domain technical knowledge is essential for credibility, a firm grasp of business knowledge is essential for success. Such knowledge includes analysis of the competition, the economic environment, the legislative environment, and required rates of return, among other business environment areas. As with technical knowledge, it would not be unusual for the decision analyst not to be an SME in these areas, but rather to obtain the knowledge required for decisions from business experts internal or external to the organization. Familiarity with corporate financial reports, benefit/cost analysis, costing approaches, net present value calculations, and portfolio theory may be essential for success.

2.5.4 ROLE OF EXPERTS

The role of experts in decision analyses is not always as straightforward as one might think. Clearly, they provide factual, objective information in the areas of their expertise. Whether it is actual hard technical data, actual performance data, or projected performance data on proposed systems, most technical experts are comfortable providing both point estimates and uncertainty ranges on such data. Where it becomes murkier is when SMEs or technical experts are asked to provide “preference” or “value judgment” estimates that may require access to stakeholders and decision makers.

2.5.5 LIMITATIONS OF EXPERTS

As one would expect, not all experts are created equal. Some have more technical or business knowledge than others, and it is not always easy for the decision analyst to determine their limitations. Even experts are subject to motivational and cognitive biases—in fact, as will be pointed out in Section 2.6.2, experts may be even more subject to biases, such as failing to spread probability distributions widely enough. As indicated above, some experts are very uncomfortable doing anything other than “reporting” on what they know and may be unwilling to provide value judgments. Some experts will attempt to dominate the group by claiming that their knowledge is more recent or more authoritative than that of others, which may have the effect of “shutting down” the other experts present. Some will “stretch” far beyond their actual areas of expertise, often providing a mix of very good information and less credible information. One of the greatest challenges for the decision analyst is to determine the bona fides of both the experts and the expertise they provide, and to determine what can and cannot be used.

2.6 Behavioral Decision Analysis Insights

This section provides insights into the decision traps and barriers that get in the way of good decision analysis practice, as well as into the cognitive and motivational biases that can impact the quality of the knowledge we elicit from decision makers and SMEs.

2.6.1 DECISION TRAPS AND BARRIERS

An excellent summary of behavioral decision insights and barriers to good decision making can be found in Decision Traps (Schoemaker & Russo, 1989). The authors describe the ten most dangerous decision traps as follows:

1. Plunging in. Starting data gathering and problem solving before fully understanding the complete nature of the problem and the organizational decision processes.
2. Frame blindness. Setting out and solving the wrong problem because the mental framework we are using is incomplete or incorrect. If the framework is wrong, it is difficult to develop a complete set of alternatives or to properly specify the values used in the decision.
3. Lack of frame control. Failing to consciously define the problem in more ways than one, or being unduly influenced by the frames of others.
4. Overconfidence in our judgment. When we rely too heavily on our assumptions and opinions, it is easy to miss collecting the key factual information that is needed.
5. Shortsighted shortcuts. Relying inappropriately on “rules of thumb” and failing to avoid well-known cognitive biases.
6. Shooting from the hip. Believing we can informally keep track of all the information gathered in our heads, and “winging it” rather than relying on systematic procedures.
7. Group failure. Assuming that a group of many smart people will automatically make good choices even without managing the group decision-making process.
8. Fooling ourselves about feedback. Failing to interpret the evidence from past outcomes, whether through hindsight biases or ego issues.
9. Not keeping track. Failing to keep systematic records of the results of key decisions and failing to analyze them for lessons learned. This is sometimes called not being a learning organization.
10. Failure to audit our decision process. Failing to develop an organized approach to understanding our own decision making, thus exposing ourselves to the other nine decision traps.

The first author of this chapter has compiled a similar list of barriers to good decision analysis based upon his experiences over the last 35 years as follows:

1. Inadequate problem formulationThis is related to Schoemaker and Russo’s frame blindness. Analysts frequently overconstrain or underconstrain the problem statement, thus leading to alternatives that do not really make sense, or eliminating alternatives prematurely. James Adams, in Conceptual Blockbusting (Adams, 1979), cites an example of how millions of dollars were spent to reduce damage to crops being caused by mechanical tomato pickers. The original problem statement was to “develop a better mechanical tomato picker to keep from damaging the tomato crops.” Once the problem statement was reframed as “reduce damage to the tomato crop that is currently caused by mechanical tomato pickers,” a totally new class of solutions emerged, and the problem was solved by developing a thicker-skinned tomato that was more resistant to damage.
2. Decision paralysis by waiting for “all of the data.” Analysts and decision makers frequently fail to finish their work because they continually search for more data. In reality, it is rare that “all” the data are ever available, and it is more effective to use a “requisite” approach to data gathering—gather what is needed to make the decision and no more (Phillips, 2007).
3. Looking for a 100% solutionThe analyst has to know when to stop modeling. Similar to the notion of requisite data as described above, a requisite decision model is one that is sufficient in form and content to resolve the issues at hand (Phillips, 2007). As the French author Voltaire said, “the perfect is the enemy of the good”.
4. Ineffective group decision-making processesThis relates to Schoemaker and Russo’s Group Failure. Too often, analysts assume that group processes are automatically better than individual decision processes. This view has been particularly magnified with the advent of group decision-making software. Such software manufacturers often tout the advantage of the anonymity that is provided to increase willingness to participate. However, the opinion of many experienced decision analysts runs counter to this, and anonymity can degrade the group process by providing a shield to hide behind in allowing participants to remain parochial in their views, thus decreasing open exchange of information.
5. Lack of access to the decision makerAll too often analysts do not get the access to the decision maker that is essential to properly frame the problem and to understand the preferences essential for value-focused thinking. Frequently, this is the result of “gatekeepers” to the decision makers who are afraid to have questions asked of the decision maker, either by themselves or by outside consultants, lest the decision maker think that his or her analysts do not know what they are doing. It is the collective experience of all authors of this handbook that it is the very rare decision maker who does not appreciate the opportunity to participate in the process. Rather than fearing an opportunity to communicate with the decision makers face-to-face, decision analysts should seek to meet with the decision makers as early and as often as possible without wasting their valuable time (see Chapter 5).
6. Insensitivity to deadlines. Analysts often get so enraptured with the cleverness and sophistication of their techniques that they lose sight of the decision timelines. It is essential that analyses be "designed to time" so as to be responsive to decision-maker needs. A timely 80% solution is usually better than a 100% solution that is too late. Managing projects is one of our essential soft skills (Chapter 4).
7. No plan to implement the decision. It is essential to have a workable plan to implement the solution that is produced by the analysis. A mathematically correct alternative that cannot be executed within time, budget, and other organizational constraints is of little use to the client organization. See Chapter 14 for more discussion on implementation.

2.6.2 COGNITIVE BIASES

“Cognitive biases are mental errors caused by our simplified information processing strategies. A cognitive bias is a mental error that is consistent and predictable” (Heuer, 1999). As analysts attempt to elicit value judgments and probabilities from decision makers and SMEs, they must confront many of the cognitive biases that are well documented in the behavioral decision analysis literature. Some of the biases are related to decision making, some to probability or value assessment, and some to personal motivation (motivation to have positive attitudes toward oneself). This section provides a quick overview of the most common biases. The letters DM after the name of the bias refer to decision-making and behavioral biases, P to probability or belief biases, and M to motivational biases.

  • Framing effect (DM): Drawing different conclusions from the same information, depending on how that information is presented. For example, assume a military commander has 500 troops under his command and is about to undertake a military operation. If he takes alternative A, he is certain to lose 250 people (the rest will survive unscathed). If he takes alternative B, there is a 50% chance 150 people will die and a 50% chance 350 people will die. Which should he choose? When framed in this manner and presented to subjects, an overwhelming majority select alternative B. Note that the expected losses for A and B are the same: 250 deaths. Now consider the question reframed as "if he takes alternative A, he is certain to save 250 people. If he takes alternative B, there is a 50% chance he will save only 150 people and a 50% chance he will save 350 people." When presented this way, the vast majority of subjects select alternative A. When couched as certain lives lost, they choose the "lottery" alternative; when couched as certain lives saved, they choose the certain alternative, even though the choices are the same (Mellers & Locke, 2007).
  • Bandwagon effect (DM): The tendency to act or perceive things in a certain way merely because many other people act or perceive them in the same manner. The bandwagon effect is related to groupthink and herd behavior, and is often seen in election polling where people want to vote for the candidate they perceive will be the winner. We also see the bandwagon effect in personal finance where, for example, many people "jump on the bandwagon" and buy the mutual funds that performed best the previous year, yet it is rare that the same funds are top performers year after year.
  • Information bias (DM): The tendency to seek information believing that it will help the decision process even when it cannot affect action (Baron, 2000). For example, people expend resources to gather information without performing a value of information analysis.
  • Confirmation bias (also known as selective search for evidence) (DM): The tendency to search for or interpret information in a way that confirms one's preconceptions (Oswald & Stefan, 2004). It is often manifested by early information leading to misinterpretation of later information. For example, when the U.S. Navy ship U.S.S. Vincennes erroneously shot down a commercial Iranian Airbus, the Tactical Control Officer (TCO), responsible for identifying the aircraft as friend or foe, initially believed it to be a foe based upon its flight path. As additional information came in, such as altitude, speed, and location within safe passage corridors, the TCO misinterpreted each new clue in a way that supported his initial hypothesis of foe (Silverman, 1992). It is a natural tendency to seek confirming information since we all like to be proven correct, yet disconfirming information often has far more value than confirming information.
  • Anchoring and adjustment bias (DM, P): The common human tendency to rely too heavily, or "anchor," on one trait or piece of information when making decisions and fail to sufficiently adjust from that anchor (Tversky & Kahneman, 1974). When focusing on the initial estimate in attempting to describe a complete probability distribution, there is a tendency to stay too close to the anchor and not adjust the extremes of the distribution enough. For example, in trying to put a probability distribution on the number of McDonald's restaurants in the United States, most people will first select their "best guess" (the number where it is equally likely to be over as under) and tend to underestimate the spread between the 1% and 99% points of the distribution. Probability assessors do better if they start at the extremes: first estimate a high value they are 99% sure the right answer falls below, and a low value they are 99% sure the right answer falls above. (There were 13,381 McDonald's restaurants in the United States as of May 2009!)
  • Availability bias (DM, P): People predict the frequency of an event, or a proportion within a population, based on how easily an example can be retrieved from memory (Heuer, 1999). In our decision-making processes, our estimates are influenced by the information that is most available to us either through personal experiences, dramatic or easily imagined events, or through the normal workings of the mind. For example, when asked if the letter "k" is more likely to be the first letter of a word or the third letter, most people will say the first letter even though it is three times as likely to be the third letter. Words with "k" as the first letter are far more "available" in our minds than words with the letter "k" in the third position (Tversky & Kahneman, 1974).
  • Base rate bias (P): The tendency to base judgments on specifics, ignoring general statistical information (Baron, 2000). Base rate bias occurs when the conditional probability of some hypothesis given some evidence fails to take into account the "base rate" or "prior probability" of the hypothesis and the total probability of the evidence (Tversky & Kahneman, 1982). The base rate bias is one example of the broader category of biases known as representativeness biases. For example, many people would say that a person who is described as quiet, introverted, and orderly is more likely to be a librarian than a salesperson even though a person chosen at random is much more likely to be a salesperson than a librarian (Tversky & Kahneman, 1974).
  • Certainty illusion (P): Prior expectations of relationships lead to correlations that do not exist. Many people treat correlation and causality interchangeably, but they are different concepts (Heuer, 1999). For example, in experiments posing the multiple choice question "Potatoes are native to (A) Peru or (B) Ireland," the vast majority of respondents will say Ireland. When asked to put a probability on the likelihood that they have the right answer, most will say more than 90%, with many saying 100%. Yet the correct answer is Peru. Potatoes are not native to Ireland, but many people associate potatoes with Ireland from their prominence during the well-known famine. The association of potatoes with Ireland during the famine leads to a correlation that is false in determining the native source of potatoes. This is also referred to as the illusion of validity, where the confidence that people express in their predictions has little or no regard to factors that limit predictive accuracy (Tversky & Kahneman, 1974).
  • Hindsight bias (P): Sometimes called the "I-knew-it-all-along" effect, the tendency to see past events as being predictable at the time those events happened (Pohl, 2004). For example, researchers asked college students to predict how the U.S. Senate would vote on the confirmation of Supreme Court nominee Clarence Thomas. Prior to the vote, 58% of the participants predicted that he would be confirmed. When the students were again polled after the confirmation hearings, 78% of the participants said they had thought he would be approved. In another study, potential voters were asked for whom they would vote in an upcoming election. In after-the-fact polling of the same voters 1 month later, a far greater percentage said they voted for the winner than originally said they would vote for that candidate (Myers, 1994).
  • Representativeness bias (P): Occurs when people judge the probability or frequency of a hypothesis by considering how much the hypothesis resembles (or is representative of) available data as opposed to using a Bayesian calculation (see Appendix A). In causal reasoning, the representativeness heuristic leads to a bias toward the belief that causes and effects will resemble one another (Tversky & Kahneman, 1974). Humans tend to ignore sample sizes when making probability judgments, and they expect small samples to mirror population statistics. Local representativeness is when people perceive that small samples represent their population to the same extent as large samples (Tversky & Kahneman, 1982). The gambler's fallacy is an example of this thinking pattern: when a sequence of randomly generated trials repeatedly strays in one direction (e.g., a roulette wheel comes up red three times in a row), people mistakenly expect the opposite to be more likely over the next few trials (e.g., the wheel landing on black) (Goodie, 2011).
  • Recency effect (P): The tendency to weigh recent events more than earlier events. For example, people will generally overestimate the likelihood of a shark bite or of a person being hit by lightning if they have recently read or heard about such an event. This is closely related to the availability bias, where time frame proximity makes the information more readily accessible (Tversky & Kahneman, 1974).
  • Self-serving bias (M): Occurs when people attribute their successes to internal or personal factors but attribute their failures to situational factors beyond their control. The self-serving bias can be seen in the common human tendency to take credit for success but to deny responsibility for failure (Miller & Ross, 1975). Another version occurs when individuals evaluate ambiguous information in a way that is the most beneficial to their self-interests. This bias is related to the better-than-average effect (also known as the superiority illusion) in which individuals believe that they perform better than the average person in areas significant to their self-esteem (Kruger, 1999).
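The base rate bias can be made concrete with the Bayesian calculation that the representativeness heuristic skips. The sketch below revisits the librarian/salesperson example; all of the numbers (base rates and likelihoods) are hypothetical, chosen only to show how a small prior dominates a strong-seeming description.

```python
# Hypothetical illustration of the base rate bias. The description
# "quiet, introverted, and orderly" seems representative of a librarian,
# but salespeople vastly outnumber librarians in the population.

# Assumed (hypothetical) numbers:
p_librarian = 0.002        # prior: fraction of workers who are librarians
p_salesperson = 0.10       # prior: fraction of workers who are salespeople
p_desc_given_lib = 0.80    # P(description fits | librarian)
p_desc_given_sales = 0.10  # P(description fits | salesperson)

# Bayes' rule, restricted to these two occupations for simplicity:
joint_lib = p_librarian * p_desc_given_lib
joint_sales = p_salesperson * p_desc_given_sales
posterior_lib = joint_lib / (joint_lib + joint_sales)

print(f"P(librarian | description) = {posterior_lib:.3f}")
# Despite the strong match, the tiny base rate keeps the posterior low.
```

Even with a likelihood ratio of 8 to 1 in favor of the librarian hypothesis, the 50-to-1 base rate advantage of salespeople leaves the posterior probability of "librarian" well under one-half.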

These and other biases can lead to common assessment mistakes, including the following:

  • Innate overconfidence makes us assume that what we know is correct—even when facts are limited, or when dealing with SMEs.
  • We rely on data that should not count to make important decisions.
  • We confuse memorable events with important and meaningful ones.
  • We tend to ignore the odds, even when they are heavily against us.
  • We underestimate the role of “lady luck” in everyday events.
  • We do not always treat a dollar as a dollar; for example, we feel differently if we lose a theater ticket that we paid for than one we receive as a prize, yet the loss is the same—a ticket that could be used for the theater!
  • We mentally put money into categories that do not make much sense; for example, we think about spending money differently if we receive it as a gift versus having earned it, even though both go into the same bank account and can buy the same things.
  • Our tolerance for risk is inconsistent.
  • We often throw good money after bad because we do not ignore sunk costs.
  • We often overlook the opportunity costs of not making a decision.

The first step in overcoming these biases is to be aware of and to recognize them when they occur. Additional ways to counter biases and faulty assessment heuristics include the following:

  • Carefully define what is being estimated.
  • Use multiple assessment methods as consistency checks.
  • Postulate multiple hypotheses and set up a list of pros and cons for each.
  • Use the “crystal ball” test: “assume a crystal ball said your most unlikely hypothesis was true; how could this possibly have happened?”
  • Seek disconfirming information as well as confirming information.
  • When anchors may be present, seek several different anchors.
  • Seek other opinions from individuals known or likely to have different opinions.

2.7 Two Anecdotes: Long-Term Success and a Temporary Success of Supporting the Human Decision-Making Process

We highlight the key points of this chapter with two anecdotes that show the importance that human decision making plays in determining the success or failure of a decision analysis.

In 1978, decision analysts from a decision analysis consulting firm began working with the U.S. Marine Corps (USMC) to develop a new approach for prioritizing items to be funded in the annual budget cycle. They had a “champion” in the form of a young Colonel who was fighting a “that’s not how we do it here” attitude in trying to implement an innovative, decision analysis approach. By developing a sound prioritization method based on benefit/cost analysis, tailoring it to the organizational culture and demands, and evolving it as the organizational considerations changed, the decision analysts developed a facilitated process that is still being used today. It is still being facilitated by some of the same decision analysts more than 30 years later. Over the years, the Marines have tried other approaches to prioritizing budgetary items, but they continue to return to the decision analysis framework. In a presentation at INFORMS, the following reasons were given for why the decision process has been so successful (Leitch et al., 1999):

  • The Marines (not the decision analysts) own and control the process.
  • It forces professional discussions about what is best for the USMC.
  • It allows all relevant voices to be heard at the right time.
  • The process permits rapid analysis and modification.
  • It supports decisions based upon the best-available information.
  • It creates an effective synergy between:
    • a quantitative framework and qualitative judgments
    • a people-oriented process and automation
    • rational and irrational input
    • complex thinking and simple modeling.
  • It has adapted over time to changing organizational needs.
  • It works!

As a point of interest, the young Colonel who took the risk of innovating and who made it all happen was P.X. Kelley, who later became a four-star general and served as Commandant of the Marine Corps.

In 2000, decision analysts from a consulting firm were asked by the Chief Systems Engineer of a major intelligence agency to develop a methodology and process for putting together the annual budget. The existing process was stove-piped and highly parochial, with little collaboration among those fighting for shares of the budget, leading to suboptimization. The guidance from the decision maker was to put in place a process that would allocate resources efficiently but, more importantly, would break down internal barriers, encourage cross-discipline discussion, and foster shared purpose. A facilitated process was established and evolved over 7 years; it included detailed stakeholder analysis (both internal and external), prioritization of fundamental goals and objectives, a detailed multiple-objective value model with scenario-specific value curves, and clear communication tools. The process worked well over the first few years, and collaborative analysis was greatly enhanced. Few argued with the technical correctness of the analytical model. However, the more the process did, the more it was asked to do. Instead of addressing the strategic and tactical decisions for which it was designed, it was being used for day-to-day operational decisions for which it did not have the sensitivity and granularity to discriminate value among the alternatives. The data demands grew beyond what the stakeholders could tolerate and accommodate, stakeholder support for the process waned, and the process began to collapse under its own weight. A subsequent decision maker determined that a simpler process was needed and mandated that no value curves or weights be used and that facilitated processes were unnecessary, since SMEs internal to the organization could prioritize initiatives on their own.
The organization is currently engaged in developing a new process “from scratch” that is “based on only objective data” and better fits the evolved culture and constraints of the organization and the decision-making style of the decision maker. After all, that is the golden rule of management!

2.8 Setting the Human Decision-Making Context for the Illustrative Example Problems

This chapter discusses decision-making challenges in general. We now describe those challenges further in the context of the three illustrative example problems introduced in Chapter 1. See Table 1.3 for further information on the illustrative examples.

2.8.1 ROUGHNECK NORTH AMERICAN STRATEGY (by Eric R. Johnson)

Roughneck Oil and Gas is a global oil and gas company with operations in North America. It had an advocacy-based approach to decision making that led to narrowly framed decisions and only incremental change from one year to the next. Decision makers were not accustomed to creating distinctly different alternatives, and analysts had no experience accounting for the actual range of outcomes that might be encountered. The organization had many silos, with little communication between business areas. Against this backdrop, there was a desire to take a broad strategic look at the assets in North America.

2.8.2 GENEPTIN PERSONALIZED MEDICINE (by Sean Xinghua Hu)

Compared with traditional pharmaceuticals, personalized medicine decision making creates additional complexity and challenges for a drug development team, particularly for a young biotech company like DNA Biologics.

To make the Geneptin decision, DNA Biologics had to wrestle with a number of cultural and organizational issues. How formal or informal should the decision process be? Should the approach be driven more by data and analysis or more by intuition and experience? To what extent should senior management be involved in the decision process: fully engaged or a "just show me your final recommendation" approach? Which of the different organizational "habits" or cultures for building consensus should be employed: a top-down, hierarchical approach or a collaborative approach based on preagreed-upon criteria? To what extent should the company's decision be driven by science and innovation versus by commercial value?

These organizational and cultural aspects are not specific to personalized medicine, but because of inertia, lack of sufficient knowledge, and higher level of uncertainty associated with personalized medicine, these challenges were amplified at DNA Biologics.

Additional complexity and uncertainty associated with personalized medicine can make the decision analysis more challenging. There are additional decisions to make regarding the design, pricing, and positioning of diagnostic biomarker tests. There are additional variables to consider, including biomarker prevalence, addressable patient population, market share within the addressable patient population, and probability of success of both the drug and the companion diagnostic test. Personalized medicine brings different tradeoffs; for example, a biomarker reduces the addressable patient population but can increase the drug's market share within the stratified patient segment. Some variables have a different impact on value: personalized medicine R&D costs, for example, may be higher or lower than traditional costs. Other variables serve as new drivers of value: a personalized medicine can offer a better benefit/risk ratio, allowing patients to take it for a longer duration and drug manufacturers to present a more compelling value proposition to payers for reimbursement and to physicians and patients for clinical adoption, thereby increasing commercial value to the drug company.
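The tradeoff between a smaller addressable population and a higher share and probability of success can be sketched with simple expected-value arithmetic. The figures below are entirely hypothetical and ignore costs, pricing differences, and the diagnostic test's own economics; the point is only the structure of the comparison, not the conclusion.

```python
# Hypothetical sketch of the personalized-medicine tradeoff: a biomarker
# shrinks the addressable population but raises market share and the
# probability of success. All numbers are invented for illustration.

population = 1_000_000        # patients with the disease
revenue_per_patient = 10_000  # annual revenue per treated patient ($)

# All-comers (traditional) strategy:
share_all = 0.10              # market share without stratification
p_success_all = 0.30          # probability of clinical/regulatory success
ev_all = population * share_all * revenue_per_patient * p_success_all

# Biomarker-stratified (personalized) strategy:
prevalence = 0.40             # fraction of patients who are biomarker-positive
share_strat = 0.35            # higher share within the stratified segment
p_success_strat = 0.50        # better benefit/risk raises success odds
ev_strat = (population * prevalence * share_strat
            * revenue_per_patient * p_success_strat)

print(f"All-comers expected value: ${ev_all:,.0f}")
print(f"Stratified expected value: ${ev_strat:,.0f}")
```

With these assumed inputs, the stratified strategy wins despite addressing only 40% of patients; with a lower prevalence or a smaller share gain, the comparison would flip, which is exactly why the variables listed above must be assessed rather than guessed.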

2.8.3 DATA CENTER DECISION PROBLEM (by Gregory S. Parnell)

A major government agency had an expanding mission that required significant increases in data analysis. The agency’s headquarters data centers were already operating at capacity, and the existing data centers lacked additional floor space, power, and cooling. The agency had identified the need for a large new data center and had already ordered a significant number of new servers, but had no process to verify that sufficient data center capacity would be provided to support mission demands. They also had no process for determining the best location of the data center.

There were several complicating factors in the decision. Technology advances had resulted in servers becoming smaller while consuming more power and requiring more cooling. Multiple stakeholder organizations within the agency were involved in the decision, including the mission-oriented operational users, the information technology office responsible for procurement and operations, and the logistics office responsible for facilities, power, and cooling. Some of their objectives conflicted with one another. Each believed that it should be the final decision authority, and each believed it had the expertise to make the decisions without using SMEs from the others. All of the organizations appeared to be biased toward solutions with which they were already familiar and comfortable. Life cycle costs were a major factor in the selection of the best data center alternative, and IT budgets had been shrinking. Multiple approval levels were necessary to obtain the funds, both within and outside the agency, including the requirement to communicate the need for funds to Congress. The agency had a history of getting what it asked for, but it was increasingly challenged to justify requests for additional funds.

2.9 Summary

For a decision analysis to be useful, it must be theoretically sound, performed with accepted analytical techniques, and methodologically defensible. But that is not enough. It must work within the organizational culture and be compatible with the decision-making style of the decision makers. It must be based on sound objective data when they are available and on sound subjective information from credible experts when objective data are not available. It must be as free as possible from the cognitive and motivational biases of the participants. It frequently must reconcile stakeholders' positions based on conflicting perspectives and, often, conflicting information. It often must reach a single conclusion that can be agreed upon by participants who at times have little or no motivation to reach consensus. And finally, it must be communicated clearly and effectively.

These challenges demand that decision analysts be far more than technical experts. We must be knowledge elicitors to gain the information required. We must be facilitators to help overcome group decision-making barriers. We must be effective communicators to prepare presentations and reports that will be read, understood, and accepted. Most of all, we must recognize that decision analysis is not just a science but an art, and we must be fluent in both.

KEY TERMS

Anatomical decision making: a description given to how people have been observed to make decisions, citing every body part but the brain! (rule of thumb, seat of the pants, gut reaction, etc.)
Business knowledge: knowledge of the business environment, practices, and procedures of an organization
Cognitive bias: a pattern of deviation in judgment that occurs in particular situations
Decision level: the level in an organization at which decisions are made; may be strategic, tactical, or operational
Decision trap: a behavior or barrier that impedes effective decision making
Human decision making: the process by which we make decisions; it is essentially a technical process that includes the quantitative frameworks that we use, coupled with a social process that includes organizational culture, human biases, and intuition used in decision making
Motivational bias: the effect that motivation to have positive attitudes toward oneself has on the way a person perceives or acts upon information in the decision process
Organizational decision making: the process of choice through which organizational leaders and managers select among alternatives, allocate resources, or implement strategic goals and objectives
Soft skills: the nonquantitative, "social" skills that complement the quantitative decision-making methodologies; they include strategic thinking, managing, leading, facilitating, interviewing, researching, networking, and communicating
Technical knowledge: substantive knowledge of the specific domain of interest

REFERENCES

Adams, J. (1979). Conceptual Blockbusting. Stanford, CA: Stanford Alumni Association.

Baron, J. (2000). Thinking and Deciding, 3rd ed. New York: Cambridge University Press.

BusinessDictionary.com. Stakeholder Definition. 2011.

Eyes Wide Open. Tips on Strategic, Tactical and Operational Decision Making (2011). http://www.smallbusinesshq.com.au/factsheet/20305-tips-on-strategic-tactical-and-operational-decision-making.htm.

Goodie, F. (2011). Cognitive distortion as a component and treatment focus of pathological gambling: A review. Psychology of Addictive Behaviors, 26(2), 298–310.

Heuer, R. Jr. (1999). Psychology of Intelligence Analysis. Mclean, VA: Central Intelligence Agency.

Kruger, J. (1999). Lake Wobegon be gone! The “below-average effect” and the egocentric nature of comparative ability judgments. Journal of Personality and Social Psychology, 77(2), 221–232.

Leitch, S., Kuskey, K., Buede, D., & Bresnick, T. (1999). Of princes, frogs, and Marine Corps' budgets: Institutionalizing decision analysis over 23 years. INFORMS, Philadelphia.

Mellers, B. & Locke, C. (2007). What have we learned from our mistakes? In W. Edwards, R. Miles, & D. von Winterfeldt (eds.), Advances in Decision Analysis, pp. 351–371. New York: Cambridge University Press.

Miller, D.T. & Ross, M. (1975). Self-serving biases in the attribution of causality: Fact or fiction? Psychological Bulletin, 82(2), 213–225.

Myers, D. (1994). Did You Know It All Along? Exploring Social Psychology. New York: McGraw-Hill, 15–19.

O’Boyle, J.G. (1996). The culture of decision-making. R&D Innovator.

Oswald, M. & Stefan, G. (2004). Confirmation bias. In R.F. Pohl (ed.), Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory. Hove, UK: Psychology Press.

Parnell, G.S., Driscoll, P., & Henderson, D. (eds.) (2011). Decision Making in Systems Engineering and Management. Hoboken, NJ: John Wiley & Sons.

Phillips, L. (2007). Decision conferencing. In W. Edwards, R. Miles, & D. von Winterfeldt (eds.), Advances in Decision Analysis, pp. 375–398. New York: Cambridge University Press.

Pohl, R.F. (2004). Hindsight bias. In R.F. Pohl (ed.), Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory. Hove, UK: Psychology Press.

Sage, A. & Armstrong, J. (2000). Introduction to Systems Engineering. New York: John Wiley & Sons.

Schoemaker, P.J.H. & Russo, J.E. (1989). Decision Traps. New York: Bantam/Doubleday/Dell Publishing Group.

Silverman, B.G. (1992). Modeling and critiquing the confirmation bias in human reasoning. IEEE Transactions on Systems, Man, and Cybernetics, Sept/Oct: 972–982.

Society for Decision Professionals. (2012). SDP Home Page. http://www.decisionprofessionals.com, accessed April 2012.

Stanford Strategic Decision and Risk Management Decision Leadership Course (2008).

Tactical decision making in organizations. (2011). http://main.vanthinking.com/index.php/20080910125/Tactical-Decision-Making-in-Organizations.html, accessed 2011.

Trainor, T. & Parnell, G. (2007). Using stakeholder analysis to define the problem in systems engineering. INCOSE.

Tversky, A. & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124–1131.

Tversky, A. & Kahneman, D. (1982). Evidential impact of base rates. In P. Slovic, A. Tversky, & D. Kahneman (eds.), Judgment under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press.
