Appendix 1 Are You a Rational Person? Check Yourself

The purpose of this appendix is to help you assess your own rationality. First, we present the key principles of individual rationality, which focus on how people make independent decisions. Then we present and discuss game theoretic rationality, which focuses on how people make interdependent decisions.

Why Is It Important to Be Rational?

Let’s first consider why it is important for a negotiator to be rational. Rational models of behavior offer a number of important advantages for the negotiator:

  • Pie expansion and pie slicing. Models of rational behavior are based upon the principle of maximization such that the course of action followed guarantees the negotiator will maximize his or her interests (whether that interest is monetary gain, career advancement, prestige, etc.). In short, the best way to maximize one’s interests is to follow the prescriptions of a rational model.

  • Learning and personal growth. The rational models we present in this appendix make clear and definitive statements regarding the superiority of some decisions over others. Thus, they do not allow you to justify or rationalize your behavior. The truth may hurt sometimes, but it is a great learning experience.

  • Measure of perfection. Rational models provide a measure of perfection or optimality. If rational models did not exist, we would have no way of evaluating how well people perform during negotiations or what they should strive to do in future negotiations. We would not be able to offer advice to negotiators because we would not have consensus about what is a “good” outcome. Rational models provide an ideal.

  • Diagnosis. Rational models serve a useful diagnostic purpose because they often reveal where negotiators make mistakes. Because rational models are built on a well-constructed theory of decision making, they offer insight into the mind of the negotiator.

  • Dealing with irrational people. Negotiators often follow the norm of reciprocity. A negotiator who is well versed in rational behavior can often deal more effectively with irrational people.

  • Consistency. Rational models can help us be consistent. Inconsistency in our behavior can inhibit learning. Furthermore, it can send the counterparty ambiguous messages. When people are confused or uncertain, they are more defensive and trust diminishes.

  • Decision making. Rational models provide a straightforward method for thinking about decisions and provide a way of choosing among options which will produce the “best” outcome for the chooser, maximizing his or her own preferences (as we will see in this appendix).

Individual Decision Making

Negotiation is ultimately about making decisions. If we cannot make good decisions on our own, joint decision making will be even more difficult. Sometimes our decisions are trivial, such as whether to have chocolate cake or cherry pie for dessert. Other times, our decisions are of great consequence, such as choosing a career or a spouse. Our decisions about how to spend the weekend may seem fundamentally different from deciding what to do with our entire life, but some generalities cut across domains. Rational decision making models provide the tools necessary for analyzing any decision. The three main types of decisions are riskless choice, decision making under uncertainty, and risky choice.

Riskless Choice

Riskless choice, or decision making under certainty, involves choosing between two or more readily available options. For example, a choice between two apartments is a riskless choice, as is choosing among 31 flavors of ice cream or selecting a book to read. Often, we do not consider these events to be decisions because they are so simple and easy. However, at other times, we struggle when choosing among jobs or careers, and we find ourselves in a state of indecision.

Imagine you have been accepted into the MBA program at your top two choices: university X and university Y. This enviable situation is an approach-approach conflict, meaning that in some sense you cannot lose—both options are attractive; you need only to decide which alternative is best for you. You have to make your final choice by next week. To analyze this decision situation, we will employ a method known as multiattribute utility technique (or MAUT).1 According to MAUT, a decision maker should follow five steps: (a) identify the alternatives, (b) identify dimensions or attributes of the alternatives, (c) evaluate the utility associated with each dimension, (d) weight or prioritize each dimension in terms of importance, and (e) make a final choice.

Identification of Alternatives

The first step is usually quite straightforward. The decision maker simply identifies the relevant alternatives. For example, you would identify the schools to which you had been accepted. In other situations, the alternatives may not be as obvious. In the case that you did not get any acceptance letters, you must brainstorm new options.

Identification of Attributes

The second step is more complex and involves identifying the key attributes associated with the alternatives. The attributes are the features of an alternative that make it appealing or not. For example, when choosing among schools, relevant attributes might include the cost of tuition, reputation of the program, course requirements, placement options, weather, cultural aspects, family, and faculty.

Utility

The next step is to evaluate the relative utility or value of each alternative for each attribute. For example, you might use a 1 to 5 scale to evaluate how each school ranks on each of your identified attributes. You might evaluate the reputation of university X very highly (5) but the weather as very unattractive (1); you might evaluate university Y’s reputation to be moderately high (3) but the weather to be fabulous (5). MAUT assumes preferential independence of attributes (i.e., the value of one attribute is independent of the value of others).

Weight

In addition to determining the evaluation of each attribute, the decision maker also evaluates how important that attribute is to him or her. The importance of each attribute is referred to as weight in the decision process. Again, we can use a simple numbering system, with 1 representing relatively unimportant attributes and 5 representing very important attributes. For example, you might consider the reputation of the school to be very important (5) but evaluate the cultural attributes of the city to be insignificant (1).

Making a Decision

The final step in the MAUT procedure is to compute a single, overall evaluation of each alternative. For this task, first multiply the utility evaluation of each attribute by its corresponding weight, and then sum the weighted scores across each attribute. Finally, select the option that has the highest overall score.
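The weighted-sum step above can be sketched in a few lines of code. The attribute ratings below use the reputation and weather evaluations mentioned earlier in the text; the weights are hypothetical assumptions, not the ones in Exhibit A1-1.

```python
# MAUT sketch: score each alternative by multiplying each attribute's
# utility rating (1-5) by its importance weight (1-5), then summing.
# The weights here are illustrative assumptions.
weights = {"reputation": 5, "weather": 2}

ratings = {
    "University X": {"reputation": 5, "weather": 1},
    "University Y": {"reputation": 3, "weather": 5},
}

def maut_score(alternative_ratings, weights):
    """Weighted sum of attribute utilities (MAUT step 5 input)."""
    return sum(weights[attr] * alternative_ratings[attr] for attr in weights)

scores = {school: maut_score(r, weights) for school, r in ratings.items()}
best = max(scores, key=scores.get)
```

With these assumed weights, University X edges out University Y (27 versus 25), consistent with the close decision described in the text; changing a single weight or rating can flip the result.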

We can see from the hypothetical example in Exhibit A1-1 that university X is a better choice for the student compared to university Y. However, it is a close decision. If the importance of any of the attributes were to change (e.g., tuition cost, reputation, climate, or culture), then the overall decision could change. Similarly, if the evaluation of any attributes changes, then the final choice may change. Decision theory can tell us how to choose, but it cannot tell us how to weigh the attributes that go into making choices.

According to the dominance principle, one alternative dominates another if it is strictly better on at least one dimension and at least as good on all others. For example, imagine university Y had been evaluated as a 5 in terms of tuition cost, a 5 in reputation, a 4 in climate, and a 4 in culture, and university X had been evaluated as a 1, 5, 4, and 3, respectively. In this case, we can quickly see that university Y is just as good as university X on two dimensions (reputation and climate) and better on the two remaining dimensions (tuition cost and culture). Thus, university Y dominates university X. Identifying a dominant alternative greatly simplifies decision making: If one alternative dominates the other, we should select the dominant option.
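The dominance check described above is mechanical and easy to express in code. This sketch uses the hypothetical ratings from the paragraph, ordered as (tuition cost, reputation, climate, culture).

```python
def dominates(a, b):
    """Dominance principle: a dominates b if a is at least as good as b
    on every attribute and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Hypothetical ratings from the text: (tuition cost, reputation, climate, culture)
university_y = (5, 5, 4, 4)
university_x = (1, 5, 4, 3)
```

Here `dominates(university_y, university_x)` is true, so University Y should be selected without weighing trade-offs.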

The example seems simple enough. In many situations, however, we are faced with considering many more alternatives, each having different dimensions. It may not be easy to spot a dominant alternative. What should we do in this case? The first step is to eliminate from consideration all options dominated by others and to choose among the nondominated alternatives that remain.

The dominance principle as a method of choice seems quite compelling, but it applies only to situations in which one alternative is clearly superior to others. It does not help us with the agonizing task of choosing among options that involve trade-offs among highly valued aspects. Some people may use “random” choice. The wisdom of choosing randomly depends on whether the choice set is among “noncomparable” or “indifferent” options: Systematic randomization among “noncomparable” options may lead to a chain of decisions resulting in monetary loss, but not when choosing among indifferent options.2 We now turn to situations that defy MAUT and dominance detection.

Decision Making Under Uncertainty

Sometimes we must make decisions when the alternatives are uncertain or unknown. These situations are known as decision making under uncertainty or decision making in ignorance.3 In such situations, the decision maker has no idea about the likelihood of events. Consider, for example, a decision to plan a social event outdoors or indoors. If the weather is sunny and warm, it would be better to hold the event outdoors; if it is rainy and cold, it is better to plan the event indoors. The plans must be made a month in advance, but the weather cannot be predicted a month in advance. The distinction between risk and uncertainty hinges upon whether probabilities are known exactly (e.g., as in games of chance) or whether they must be judged by the decision maker with some degree of imprecision (e.g., almost everything else). Hence, “ignorance” might be viewed merely as an extreme degree of uncertainty when the decision maker has no clue (e.g., the probability that the closing price of Dai Ichi-Sankyo stock tomorrow on the Tokyo stock exchange is above 1,600 yen).

Risky Choice

In decision making under uncertainty, the likelihood of events is unknown; in risky choice situations, the probabilities are known. Most theories of decision making are based on an assessment of the probability that some event will take place. Because the outcomes of risky choice situations are not fully known, outcomes are often referred to as “prospects.” Many people cannot compute risk accurately, even when the odds are perfectly known, as in the case of gambling. (See Exhibit A1-2 for some odds associated with winning the lottery.)

Negotiation is a risky choice situation because parties cannot be completely certain about the occurrence of a particular event. For instance, a negotiator cannot be certain that mutual settlement will be reached because negotiations could break off as each party opts for his or her BATNA. To understand risky choice decision making in negotiations, we need to understand expected utility theory.

Expected Utility Theory

Utility theory has a long history, dating back to the sixteenth century when French noblemen commissioned their court mathematicians to help them gamble. Modern utility theory is expressed in the form of gambles, probabilities, and payoffs. Why do we need to know about gambling to be effective negotiators? Virtually all negotiations involve choices, and many choices involve uncertainty, which makes them gambles. Before we can negotiate effectively, we need to be clear about our own preferences. Utility theory helps us do that.

Expected utility theory (EU) is a theory of choices made by an individual actor.4 It prescribes a theory of “rational behavior.” Behavior is rational if a person acts in a way that maximizes his or her decision utility, or the anticipated satisfaction from a particular outcome. The maximization of utility is often equated with the maximization of monetary gain, but satisfaction can come in many nonmonetary forms as well. Obviously, people care about things other than money. For example, weather, culture, quality of life, and personal esteem are all factors that bear on a job decision, in addition to salary.

EU is based on revealed preferences. People’s preferences or utilities are not directly observable but must be inferred from their choices and willful behavior. To understand what a person really wants and values, we have to see what choices he or she makes. Actions speak louder than words. In this sense, utility maximization is a tautological statement: A person’s choices reflect personal utilities; therefore, all behaviors may be represented by the maximization of this hypothetical utility scale.

EU is based on a set of axioms about preferences among gambles. The basic result of the theory is summarized by a theorem stating that if a person’s preferences satisfy the specified axioms, then the person’s behavior maximizes the expected utility. Next, we explore utility functions.

Utility Function

A utility function is the quantification of a person’s preferences with respect to certain objects such as jobs, potential mates, and ice cream flavors. Utility functions assign numbers to objects and gambles that have objects as their prizes (e.g., flip a coin and win a trip to Hawaii or free groceries). For example, a manager’s choice to stay at her current company could be assigned an overall value, such as a 7 on a 10-point scale. Her option to take a new job might be assigned a value of either 10 or 2, depending on how things work out for her at the new job. One’s current job is the sure thing; the alternative job, because of its uncertainty, is a gamble. How should we rationally make a decision between the two?

We first need to examine our utility function. The following seven axioms guarantee the existence of a utility function. The axioms are formulated in terms of preference-or-indifference relations defined over a set of outcomes.5 As will become clear, the following axioms provide the foundation for individual decision making, as well as negotiation or joint decision making.

Comparability

A key assumption of EU is that everything is comparable. That is, given any two objects, a person must prefer one to the other or be indifferent between them; no two objects are incomparable. For example, a person may compare a dime and a nickel or a cheeseburger and a dime. We might compare a job offer in the Midwest to a job offer on the West Coast. Utility theory implies a single, underlying dimension of “satisfaction” associated with everything. We can recall instances in which we refused to make comparisons, which often happens with social or emotional issues such as those having to do with marriage and children. However, according to utility theory, we need to be able to compare everything to be truly rational. Many people are uncomfortable with this idea, just as people can be in conflict about what is negotiable.

Closure

The closure property states that if x and y are available alternatives, then so are all of the gambles of the form (x, p, y) that can be formed with x and y as outcomes. In this formulation, x and y refer to available alternatives; p refers to the probability that x will occur. Therefore, (x, p, y) states that x will occur with probability p, otherwise y will occur. The converse must also be true: (x, p, y) = (y, 1 – p, x), or y will occur with probability (1 – p), otherwise x will occur.

For example, imagine you assess the probability of receiving a raise from your current employer to be about 30%. The closure property states that the situation expressed as a 30% chance of receiving a raise (otherwise, no raise) is identical to the statement that you have a 70% chance of not receiving a raise (otherwise, receiving a raise).

So far, utility theory may seem to be so obvious and simple that it is absurd to spell it out in any detail. However, we will soon see how people violate basic “common sense” all the time and hence, behave irrationally.

Transitivity

Transitivity means that if we prefer x to y and y to z, then we should prefer x to z. Similarly, if we are indifferent between x and y and y and z, then we will be indifferent between x and z.

Suppose your employer offers you one of three options: a transfer to Seattle, a transfer to Pittsburgh, or a raise of $5,000. You prefer a raise of $5,000 over a move to Pittsburgh, and you prefer a move to Seattle over a $5,000 raise. The transitivity property states that you should therefore prefer a move to Seattle over a move to Pittsburgh. If your preferences were not transitive, you would always want to move somewhere else. Further, a third party could become rich by continuously “selling” your preferred options to you.
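A toy transitivity check can make the requirement concrete. This sketch assumes strict preferences recorded as ordered pairs, where `(x, y)` means "x is preferred to y"; the option names follow the example above.

```python
from itertools import permutations

def is_transitive(prefers):
    """Check that for every triple, (x, y) and (y, z) in the set
    implies (x, z) is also in the set."""
    options = {o for pair in prefers for o in pair}
    return all(
        (x, z) in prefers
        for x, y, z in permutations(options, 3)
        if (x, y) in prefers and (y, z) in prefers
    )

# Stated preferences: Seattle > $5,000 raise > Pittsburgh, plus the
# implied Seattle > Pittsburgh.
prefers = {("Seattle", "raise"), ("raise", "Pittsburgh"), ("Seattle", "Pittsburgh")}
```

Dropping the pair `("Seattle", "Pittsburgh")` makes the check fail, which is exactly the gap a money-pumping third party could exploit.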

Reducibility

The reducibility axiom refers to a person’s attitude toward a compound lottery, in which the prizes may be tickets to other lotteries. According to the reducibility axiom, a person’s attitude toward a compound lottery depends only on the ultimate prizes and the chance of getting them as determined by the laws of probability; the actual gambling mechanism is irrelevant:

(x, pq, y) = [(x, p, y), q, y]

Suppose that the dean of admissions at your top choice university tells you that you have a 25% chance of getting accepted to the MBA program. How do you feel about the situation? Now, suppose the dean tells you that you have a 50% chance of not being accepted and a 50% chance you will have to face a lottery-type admission procedure, wherein half the applicants will get accepted and half will not. In which situation do you prefer to be? According to the reducibility axiom, both situations are identical. Your chances of getting admitted into graduate school are the same in each case: exactly 25%. The difference between the two situations is that one involves a compound gamble and the other does not.
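The reduction in the admissions example is just multiplication along the stages of the compound gamble, as a quick sketch shows:

```python
# Reducibility sketch: the dean's compound admission gamble.
# Stage 1: 50% chance of facing the lottery procedure at all;
# stage 2: 50% chance of winning admission within that lottery.
p_reach_lottery = 0.5
p_win_lottery = 0.5

p_compound = p_reach_lottery * p_win_lottery  # multiply along the stages
p_simple = 0.25                               # the dean's direct 25% figure
```

Both routes yield the same 25% chance of admission, which is all the reducibility axiom says should matter.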

Compound gambles differ from simple gambles in that their outcomes are themselves gambles rather than pure outcomes; the probabilities of the ultimate prizes, however, are the same in both. If people have an aversion or attraction to gambling, these outcomes may not seem the same. This axiom has important implications for negotiation: the format by which alternatives are presented to negotiators (as simple gambles or as compound gambles) strongly affects our behavior.

Substitutability

The substitutability axiom states that gambles that have prizes about which people are indifferent are interchangeable. For example, suppose one prize is substituted for another in a lottery but the lottery is left otherwise unchanged. If you are indifferent to both the old and the new prizes, you should be indifferent about the lotteries. If you prefer one prize to the other, you will prefer the lottery that offers the preferred prize.

For example, imagine you work in the finance division of a company and your supervisor asks you how you feel about transferring to either the marketing or sales division in your company. You respond that you are indifferent. Then your supervisor presents you with a choice: You can be transferred to the sales division, or you can move to a finance position in an out-of-state branch of the company. After wrestling with the decision, you decide you prefer to move out of state rather than transfer to the sales division. A few days later, your supervisor surprises you by asking whether you prefer to be transferred to marketing or to be transferred out of state. According to the substitutability axiom, you should prefer to transfer out of state because, as you previously indicated, you are indifferent between marketing and sales; they are substitutable choices.

Betweenness

The betweenness axiom asserts that if x is preferred to y, then x must be preferred to any probability mixture of x and y, which in turn must be preferred to y. This principle is certainly not objectionable for monetary outcomes. For example, most of us would rather have a dime than a nickel and would rather have a gamble between a dime and a nickel than the nickel itself. But consider nonmonetary outcomes, such as skydiving, Russian roulette, and bungee jumping. To an outside observer, people who skydive apparently prefer a probability mixture of living and dying over either one of them alone; otherwise, one could simply stay alive (or kill oneself) without ever skydiving. People who like to risk their lives appear to contradict the betweenness axiom. A more careful analysis reveals, however, that this situation, strange as it may be, is not incompatible with the betweenness axiom. The actual outcomes involved in skydiving are (a) staying alive after skydiving, (b) staying alive without skydiving, or (c) dying while skydiving. In choosing to skydive, therefore, a person prefers a probability mix of (a) and (c) over (b). This analysis reveals that the “experience” itself has utility.

Continuity or Solvability

Suppose that of three objects—A, B, and C—you prefer A to B and B to C. Now, consider a lottery in which there is a probability, p, of getting A and a probability of 1 − p of getting C. If p = 0, the lottery is equivalent to C; if p = 1, the lottery is equivalent to A. In the first case, you prefer B to the lottery; in the second case, you prefer the lottery to B. According to the continuity axiom, some value of p strictly between 0 and 1 exists at which you are indifferent between B and the lottery. This sounds reasonable enough.

Now consider the following example involving three outcomes: receiving a dime, receiving a nickel, and being shot at dawn.6 Certainly, most of us prefer a dime to a nickel and a nickel to being shot. The continuity axiom, however, states that at some point some probability mixture of receiving a dime and being shot at dawn is equivalent to receiving a nickel. This implication seems distasteful to most people because no price seems worth risking one’s life. However, the counterintuitive nature of this example stems from an inability to grasp very small probabilities. In the abstract, people believe they would never choose to risk their lives, but in reality, people do so all the time. For example, we cross the street to buy a product for a nickel less, even though by doing so we risk being hit by a car and killed.
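The indifference point the continuity axiom guarantees can be solved for directly. The utilities below are purely invented numbers for illustration; the point is only that when the worst outcome is catastrophic, the indifference probability sits extremely close to 1, which is why the required risk of death is almost imperceptibly small.

```python
def indifference_probability(u_best, u_middle, u_worst):
    """Solve p * u_best + (1 - p) * u_worst = u_middle for p,
    the continuity axiom's point of indifference."""
    return (u_middle - u_worst) / (u_best - u_worst)

# Purely illustrative utilities: dime = 10, nickel = 5, being shot = -1e9.
p = indifference_probability(10, 5, -1e9)
```

Here `p` differs from 1 by only a few parts per billion: the mixture is almost entirely "receive the dime," with a vanishingly small chance of the catastrophic outcome.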

In summary, whenever these axioms hold, a utility function exists that (a) preserves a person’s preferences among options and gambles and (b) satisfies the expectation principle: The utility of a gamble equals the expected utility of its outcomes. This utility scale is uniquely determined except for an origin and a unit of measurement.

Expected Value Principle

Imagine you have a rare opportunity to invest in a highly innovative start-up company with a revolutionary new technology for its product. On the other hand, the technology is new and unproven. Further, the company does not have the resources to compete with the established competitors. Nevertheless, if this technology is successful, an investment in the company at this point will have a 30-fold return. Suppose you just inherited $5,000 from your aunt. You could invest the money in the company and possibly earn $150,000—a risky choice. Or you could keep the money and pass up the investment opportunity. You assess the probability of success to be about 20%. (A minimum investment of $5,000 is required.) What do you do?

The dominance principle does not offer a solution to this decision situation because it provides no clearly dominant alternatives. But the situation contains the necessary elements to use the expected value principle, which applies when a decision maker must choose among two or more prospects, as in the previous example. The “expectation” or “expected value” of a prospect is the sum of the objective values of the outcomes multiplied by the probability of their occurrence.

For a variety of reasons, you believe there is a 20% chance the investment will return $150,000, which contributes 0.2 × $150,000 = $30,000 in expectation. There is an 80% chance the investment will yield nothing: 0.8 × $0 = $0. Thus, the expected value of this gamble is $30,000 minus the $5,000 cost, or $25,000.
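The same arithmetic generalizes to any prospect, as a small helper shows; the probabilities and payoffs are those of the investment example in the text.

```python
def expected_value(prospect):
    """Expected value: sum of outcome values weighted by their probabilities."""
    return sum(p * v for p, v in prospect)

# The investment gamble: 20% chance of $150,000, 80% chance of $0,
# less the required $5,000 stake.
gamble = [(0.2, 150_000), (0.8, 0)]
net_ev = expected_value(gamble) - 5_000
```

The gross expectation is $30,000 and the net expectation, after the $5,000 stake, is $25,000.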

The expected value principle dictates that the decision maker should select the prospect with the greatest expected value. In this case, the risky option (with a net expected value of $25,000) has a greater expected value than the sure option of keeping the money (a net expected value of $0).

A related principle applies to evaluation decisions, or situations in which decision makers must state, and be willing to act upon, the subjective worth of a given alternative. Suppose you could choose to “sell” to another person your opportunity to invest. What would you consider to be a fair price to do so? According to the expected-value evaluation principle, the evaluation of a prospect should be equal to its expected value. That is, the “fair price” for such a gamble would be $25,000. Similarly, the opposite holds: Suppose your next-door neighbor held the opportunity but was willing to sell the option to you. According to the expected value principle, people would pay up to $25,000 for the opportunity to gamble.

The expected value principle is intuitively appealing, but should we use it to make decisions? To answer this question, it is helpful to examine the rationale for using expected value as a prescription. Let’s consider the short-term and long-term consequences.7 Imagine that you will inherit $5,000 every year for the next 50 years. Each year, you must decide whether to invest the inheritance in a start-up company (risky choice) or keep the money (the sure choice). The expected value of a prospect is its long-run average value. This principle is derived from a fundamental principle: the law of large numbers.8 The law of large numbers states that the mean return will get closer and closer to its expected value the more times a gamble is repeated. Thus, you can be fairly sure that after 50 years of investing your money, the average return would be about $25,000. Some years you would lose, other years you would make money, but on average your return would be $25,000. When you look at the gamble this way, it seems reasonable to invest.
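The law-of-large-numbers argument above can be sketched with a simulation. The seed and trial counts are arbitrary choices for reproducibility; the payoff structure follows the investment example in the text.

```python
import random

random.seed(7)  # fixed seed so the sketch is reproducible

def average_net_return(trials):
    """Mean net return from repeating the hypothetical investment:
    a 20% chance of a $150,000 payoff, always paying the $5,000 stake."""
    total = 0
    for _ in range(trials):
        payoff = 150_000 if random.random() < 0.2 else 0
        total += payoff - 5_000
    return total / trials

one_lifetime = average_net_return(50)     # 50 repetitions: still quite noisy
long_run = average_net_return(200_000)    # approaches the $25,000 expectation
```

With only 50 repetitions the average can swing tens of thousands of dollars either way, while the long-run average settles near $25,000, which is the sense in which expected value is a long-run guide rather than a one-shot guarantee.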

Now imagine the investment decision is a once-in-a-lifetime opportunity. In this case, the law of large numbers does not apply to the expected-value decision principle. You will either make $150,000, make nothing, or keep $5,000. No in-between options are possible. Under such circumstances, you can often find good reasons to reject the guidance of the expected value principle.9 For example, suppose you need a new car. If the gamble is successful, buying a car is no problem. But if the gamble is unsuccessful, you will have no money at all. Therefore, you may decide that buying a used car at or under $5,000 is a more sensible choice.

The expected value concept is the basis for a standard way of labeling risk-taking behavior. For example, in the previous situation you could either take the investment gamble or receive $25,000 from selling the opportunity to someone else. In this case, the “value” of the sure thing (i.e., receiving $25,000) is identical to the expected value of the gamble. Therefore, the “objective worth” of both alternatives is identical. What would you rather do? Your choice reveals your risk attitude. If you are indifferent to the two choices and are content to decide on the basis of a coin flip, you are risk neutral or risk indifferent. If you prefer the sure thing, then your behavior may be described as risk averse. If you choose to gamble, your behavior is classified as risk seeking.

Although some individual differences occur in people’s risk attitudes, people do not exhibit consistent risk-seeking or risk-averse behavior.10 Rather, risk attitudes are highly context dependent. The fourfold pattern of risk attitudes predicts people will be risk averse for moderate- to high-probability gains and low-probability losses, and risk seeking for low-probability gains and moderate- to high-probability losses.11

Expected Utility Principle

How much money would you be willing to pay to play a game with the following two rules: (a) An unbiased coin is tossed until it lands on heads; (b) the player of the game is paid $2 if heads appears on the opening toss, $4 if heads first appears on the second toss, $8 on the third toss, $16 on the fourth toss, and so on. Before reading further, indicate how much you would be willing to pay to play the game.

To make a decision based upon rational analysis, let’s calculate the expected value of the game by multiplying the payoff for each possible outcome by the probability that it will occur. Although the probability of the first head appearing on toss n becomes progressively smaller as n increases, the probability never becomes zero. Note that the probability of heads appearing for the first time on toss n is (1/2)^n, and the payoff in that case is $2^n; hence, each term of the infinite series has an expected value of $1. The implication is that the value of the game is infinite.12 Even though the value of this game is infinite, most people are seldom willing to pay more than a few dollars to play it. Most people believe the expected value principle produces an absurd conclusion in this case. The observed reluctance to pay to play the game, despite its objective attractiveness, is known as the St. Petersburg paradox.13
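The divergence is easy to verify numerically: a sketch of the truncated game shows its expected value growing by exactly $1 for every additional allowed toss.

```python
def st_petersburg_ev(max_tosses):
    """Expected value of the St. Petersburg game truncated at max_tosses.
    Heads first appears on toss n with probability (1/2)**n and pays
    $2**n, so each term contributes exactly $1."""
    return sum((0.5 ** n) * (2 ** n) for n in range(1, max_tosses + 1))
```

Since `st_petersburg_ev(n)` equals `n` dollars for any truncation, letting the number of tosses grow without bound sends the expected value to infinity.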

How can we explain such an enigma? Expected value is an undefined quantity when the variance of outcomes is infinite. Because the person or organization offering the game could not guarantee a payoff greater than its total assets, the game is unimaginable except in truncated, and therefore finite, form.14 So what do we do when offered such a choice? To value it as “priceless” or even to pay hundreds of thousands of dollars would be absurd. So, how should managers reason about such situations?

Diminishing Marginal Utility

The reactions people have to the St. Petersburg game are consistent with the proposition that people decide among prospects not according to their expected objective values but, rather, according to their expected subjective values. In other words, the psychological value of money does not increase proportionally as the objective amount increases. To be sure, virtually all of us like more money rather than less money, but we do not necessarily like $20 twice as much as $10. And the difference in our happiness when our $20,000 salary is raised to $50,000 is not the same as when our $600,000 salary is raised to $630,000. Bernoulli proposed a logarithmic function relating the utility of money, u, to the amount of money, x. This function is concave, meaning that the utility of money decreases marginally. Constant additions to monetary amounts result in less and less increased utility. The principle of diminishing marginal utility is related to a fundamental principle of psychophysics, wherein good things satiate and bad things escalate. The first bite of a pizza is the best; as we get full, each bite brings less and less utility.
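Bernoulli's logarithmic function makes the salary comparison above checkable directly. The sketch uses the natural log purely for illustration; any concave function would show the same pattern.

```python
import math

def bernoulli_utility(x):
    """Bernoulli's logarithmic utility of wealth x (x > 0)."""
    return math.log(x)

# Diminishing marginal utility: a $30,000 raise adds far more utility
# at a $20,000 salary than at a $600,000 salary.
gain_low = bernoulli_utility(50_000) - bernoulli_utility(20_000)
gain_high = bernoulli_utility(630_000) - bernoulli_utility(600_000)
```

Both raises are $30,000 in objective terms, yet the utility gain at the low salary (log 2.5, about 0.92) dwarfs the gain at the high salary (log 1.05, about 0.05).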

The principle of diminishing marginal utility is simple, yet profound. It is known as the everyman’s utility function (EU).15 According to Bernoulli, a fair price for a gamble should not be determined by its expected (monetary) value but rather by its expected utility. Thus, the logarithmic utility function in Exhibit A1-3 yields a finite price for the gamble.

According to EU, each of the possible outcomes of a prospect has a utility (subjective value) that is represented numerically. The more appealing an outcome is, the higher is its utility. The expected utility of a prospect is the sum of the utilities of the potential outcomes, each weighted by its probability. According to EU, when choosing among two or more prospects, people should select the option with the highest expected utility. Further, in evaluation situations, risky prospects should have an expected utility equal to the corresponding “sure choice” alternative.

EU principles have essentially the same form as expected value (EV) principles. The difference is that expectations are computed using objective (dollar) values in EV models as opposed to subjective values (utility) in EU models.

Risk Taking

A person’s utility function for various prospects reveals something about his or her risk-taking tendencies. If a utility function is concave, a decision maker will always choose a sure thing over a prospect whose expected value is identical to that sure thing. The decision maker’s behavior is risk averse (Exhibit A1-4, Panel A). The risk-averse decision maker would prefer a sure $5 over a 50–50 chance of winning $10 or nothing—even though the expected value of the gamble [0.5($10) + 0.5($0) = $5] is equal to that of the sure thing. If a person’s utility function is convex, he or she will choose the risky option (Exhibit A1-4, Panel B). If the utility function is linear, his or her decisions will be risk neutral and, of course, identical to those predicted by expected value maximization (Exhibit A1-4, Panel C).
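These three risk attitudes can be demonstrated in a few lines of Python. The square root, square, and identity functions below are our own illustrative stand-ins for concave, convex, and linear utility; any functions with those shapes would behave the same way:

```python
def eu_of_gamble(u):
    """Expected utility of a 50-50 gamble between $10 and $0."""
    return 0.5 * u(10) + 0.5 * u(0)

concave = lambda x: x ** 0.5   # risk averse
convex  = lambda x: x ** 2     # risk seeking
linear  = lambda x: x          # risk neutral

for name, u in [("concave", concave), ("convex", convex), ("linear", linear)]:
    sure, gamble = u(5), eu_of_gamble(u)
    choice = "sure $5" if sure > gamble else ("gamble" if gamble > sure else "indifferent")
    print(f"{name}: u($5 sure) = {sure:.2f}, EU(gamble) = {gamble:.2f} -> {choice}")
```

The concave decision maker takes the sure $5, the convex one takes the gamble, and the linear one is indifferent, matching Panels A, B, and C of Exhibit A1-4.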

If most people’s utility functions for gains are concave (i.e., risk averse), then why would people ever choose to gamble? Bets that offer small probabilities of winning large sums of money (e.g., lotteries, roulette wheels) ought to be especially unattractive: Under a concave utility function, the utility of the large prize grows less than proportionally with its size, and that already-dampened utility is then multiplied by a very small probability of winning.

Consider the following two options. Which would you choose?

  • Option A: 80% probability of earning $4,000, otherwise $0

  • Option B: Earn $3,000 for sure

If you are like most people, you chose B. Only a small number of people (20%) choose A.

Now, consider two different options:

  • Option C: 20% probability of earning $4,000, otherwise $0

  • Option D: 25% probability of earning $3,000, otherwise $0

Faced with this choice, a clear majority (65%) choose option C over option D (the smaller, more likely payoff).16

However, in this example, the decision maker violates EU, which requires consistency between the A versus B choice and the C versus D choice. In Exhibit A1-5, Branch 1 depicts the C versus D choice [i.e., 25% chance of making $3,000 (otherwise $0) or 20% chance of making $4,000 (otherwise $0)]. In Branch 2 of Exhibit A1-5, another stage has been added to the gamble between A and B, which effectively makes the two-stage gamble in Branch 2 identical to the one-stage gamble in Branch 1. Because Branch 1 is objectively identical to Branch 2, the decision maker should not make different choices. Stated another way, in the A versus B and C versus D choices, the ratio is the same: (0.8/1) = (0.20/0.25). However, people’s preferences usually reverse. According to the certainty effect, people have a tendency to overweight certain outcomes relative to outcomes that are merely probable. The reduction in probability from certainty (1) to a degree of uncertainty (0.8) produces a more pronounced loss in attractiveness than a corresponding reduction from one level of uncertainty (0.25) to another (0.2). People do not think rationally about probabilities. Those close to 1 are often (mistakenly) considered sure things. On the flip side is the possibility effect: the tendency to overweight outcomes that are possible relative to outcomes that are impossible.
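The arithmetic behind this inconsistency is easy to verify. The sketch below (illustrative Python, not from the text) computes the expected values of the four options and confirms that C and D are simply A and B scaled down by the same probability factor:

```python
def expected_value(prospects):
    """Expected monetary value of a list of (probability, payoff) pairs."""
    return sum(p * x for p, x in prospects)

A = [(0.80, 4000)]
B = [(1.00, 3000)]
C = [(0.20, 4000)]
D = [(0.25, 3000)]

print(f"EV(A) = {expected_value(A):.0f}, EV(B) = {expected_value(B):.0f}")  # 3200 vs 3000
print(f"EV(C) = {expected_value(C):.0f}, EV(D) = {expected_value(D):.0f}")  #  800 vs  750

# C and D are A and B with both probabilities multiplied by 0.25,
# i.e., a 25% chance to "play" the first choice at all:
assert abs(0.80 / 1.00 - 0.20 / 0.25) < 1e-12
# So under EU, 0.8*u(4000) > u(3000) if and only if 0.2*u(4000) > 0.25*u(3000).
# Choosing B in the first pair but C in the second fits no utility function.
```

Note that option A has the higher expected value in the first pair, yet most people choose B there and C in the second pair, the reversal the certainty effect describes.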

Decision Weights

Decision makers transform probabilities into psychological decision weights. The decision weights are then applied to the subjective values. Prospect theory proposes a relationship between the probabilities of potential outcomes and the weights those probabilities carry in the decision process.

Exhibit A1-6 illustrates the probability weighting function proposed by cumulative prospect theory. It is an inverted-S function that is concave near 0 and convex near 1. The probability-weighting function offers several noteworthy features.

Extremity Effect

People tend to overweight low probabilities and underweight high probabilities.

Crossover Point

The crossover probability is the point at which objective probabilities and subjective weights coincide. Prospect theory does not pinpoint where the crossover occurs, but it is definitely lower than 50%.17

Subadditivity

Adding two probabilities, p1 and p2, should yield a probability p3 = p1 + p2. For example, suppose you are an investor considering three stocks: A, B, and C. You assess the probability that stock A will close 2 points higher today than yesterday to be 20%, and you assess the probability that stock B will close 2 points higher today than yesterday to be 15%. The stocks are two different companies in two different industries and are completely independent. Now consider the price of stock C, which you believe has a 35% probability of closing 2 points higher today. The likelihood of a 2-point increase in either stock A or B should be identical to the likelihood of a 2-point increase in stock C. The probability-weighting relationship, however, does not exhibit additivity. That is, for small probabilities, weights are subadditive, as we see from the extreme flatness at the lower end of the curve. This means that most decision makers consider an increase in either stock A or stock B to be less likely than an increase in stock C.

Subcertainty

Except for guaranteed or impossible events, weights for complementary events do not sum to 1. One implication of the subcertainty feature of the probability-weight relationship is that for all probabilities p with 0 < p < 1, π(p) + π(1 − p) < 1, where π(p) denotes the decision weight attached to probability p.

Regressiveness

According to the regressiveness principle, extreme values of some quantity do not deviate very much from the average value of that quantity. The relative flatness of the probability-weighting curve is a special type of regressiveness, suggesting that people’s decisions are not as responsive to changes in uncertainty as are the associated probabilities. Another aspect is that nonextreme high probabilities are underweighted and low ones are overweighted.
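All four features just described can be reproduced with one common parametric weighting function from Tversky and Kahneman's (1992) cumulative prospect theory. The specific formula and the parameter γ ≈ 0.61 are assumptions drawn from that literature, not from this text, and serve purely as an illustration:

```python
def w(p, gamma=0.61):
    """Inverted-S probability weighting function (Tversky & Kahneman, 1992):
    concave near 0, convex near 1."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

print(f"w(0.01) = {w(0.01):.3f}")  # low probability is overweighted (> 0.01)
print(f"w(0.90) = {w(0.90):.3f}")  # high probability is underweighted (< 0.90)

# Subcertainty: weights for complementary events sum to less than 1
print(f"w(0.40) + w(0.60) = {w(0.40) + w(0.60):.3f}")

# Crossover: for this gamma, w(p) = p somewhere between 0.3 and 0.4, below 50%
print(w(0.30) > 0.30, w(0.40) < 0.40)
```

The relative flatness of w over the middle of the probability range is the regressiveness the text describes: large changes in objective probability move the decision weight comparatively little.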

The subjective value associated with a prospect depends on the decision weights and the subjective values of potential outcomes. Prospect theory makes specific claims about the form of the relationship between various amounts of an outcome and their subjective values.18 Exhibit A1-7 illustrates the generic prospect theory value function.

Three characteristics of the value function are noteworthy. The first pertains to the decision maker’s reference point. At some focal amount of the pertinent outcome, smaller amounts are considered losses and larger amounts gains. That focal amount is the negotiator’s reference point. People are sensitive to changes in wealth.

A second feature is that the shape of the function changes markedly at the reference point. For gains, the value function is concave, exhibiting diminishing marginal value. As the starting point for increases in gains becomes larger, the significance of a constant increase lessens. A complementary phenomenon occurs in the domain of losses: Constant changes in the negative direction away from the reference point also assume diminishing significance the farther from the reference point the starting point happens to be.

Finally, the value function is noticeably steeper for losses than for gains. Stated another way, gains and losses of identical magnitude have different significance for people; losses are considered more important. We are much more disappointed about losing $75 than we are happy about making $75.
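These three characteristics can be checked numerically. The sketch below uses a standard parametric value function from the prospect theory literature; the power form and the parameter estimates α ≈ 0.88 and λ ≈ 2.25 are assumptions borrowed from Tversky and Kahneman's (1992) estimates, not given in this text:

```python
def v(x, alpha=0.88, lam=2.25):
    """Prospect theory value function: concave for gains, convex and
    steeper for losses, with x measured relative to the reference point."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

print(f"v(+$75) = {v(75):+.1f}")   # subjective pleasure of gaining $75
print(f"v(-$75) = {v(-75):+.1f}")  # pain of losing $75: more than twice as intense

# Diminishing marginal value: a $100 step near the reference point matters
# more than the same $100 step far from it.
print(v(200) - v(100) > v(1100) - v(1000))
```

The asymmetry in the first two lines is loss aversion; the final comparison is the diminishing significance of constant changes as one moves away from the reference point.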

Combination Rules

How do decision weights and outcome values combine to determine the subjective value of a prospect? The amounts that are effective for the decision maker are not the actual sums that would be awarded or taken away but are instead the differences between those sums and the decision maker’s reference point.
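As a sketch of this combination rule, the illustration below weights each outcome's deviation from the reference point and sums the results. It uses simple separable weighting in the spirit of original prospect theory, with parametric weight and value functions borrowed from that literature; all formulas and parameters here are illustrative assumptions, not the text's own:

```python
def weight(p, gamma=0.61):
    """Decision weight for probability p (inverted-S form)."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def value(x, alpha=0.88, lam=2.25):
    """Subjective value of a gain/loss x relative to the reference point."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def prospect_value(prospects, reference=0.0):
    """Subjective value of a prospect: each outcome is coded as a gain or
    loss relative to the reference point, valued, weighted, and summed."""
    return sum(weight(p) * value(x - reference) for p, x in prospects)

# The same 50-50 gamble over $3,000 feels different against different references:
gamble = [(0.5, 3000), (0.5, 0)]
print(prospect_value(gamble, reference=0))     # coded as a possible gain: positive
print(prospect_value(gamble, reference=3000))  # coded as a possible loss: negative
```

The sign flip between the two evaluations shows why the effective amounts are deviations from the reference point rather than the raw sums themselves.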

Summing Up: Individual Decision Making

Decisions may sometimes be faulty or irrational if probabilities are not carefully considered. A negotiator’s assessment of probabilities affects how he or she negotiates. Clever negotiators are aware of how their own decisions may be biased, as well as how the decisions of others may be manipulated to their own advantage. Now that we know about how individuals make decisions, we are ready to explore multiparty, or interdependent decision making.

Game Theoretic Rationality

Each outcome in a negotiation situation may be identified in terms of its utility for each party. In Exhibit A1-8, for example, party 1’s utility function is represented as u1; party 2’s utility function is represented as u2. Remember that utility payoffs represent the satisfaction parties derive from particular commodities or outcomes, not the actual monetary outcomes or payoffs themselves. A bargaining situation like the one in Exhibit A1-8 is defined by its feasible set of utility outcomes, F, the set of all possible utility outcomes for party 1 and party 2, and by its conflict point, c, where c = (c1, c2); c represents the outcome the parties receive if they fail to reach agreement (the reservation points of both parties).

Two key issues concern rationality at the negotiation table: One pertains to pie slicing and one pertains to pie expansion. First, people should not agree to a utility payoff smaller than their reservation point; second, negotiators should not agree on an outcome if another outcome exists that is Pareto-superior, that is, an outcome that at least one party prefers and that does not decrease utility for the other party.

For example, in Exhibit A1-8, the area F is the feasible set of alternative outcomes expressed in terms of each negotiator’s utility function. The triangular area bcd is the set of all points satisfying the individual rationality requirement. The upper-right boundary abde of F is the set of all points that satisfy the joint rationality requirement. The intersection of the area bcd and of the boundary line abde is the arc bd: It is the set of all points satisfying both rationality requirements. As we can see, b is the least favorable outcome party 1 will accept; d is the least favorable outcome party 2 will accept.

The individual rationality and joint rationality assumptions do not tell us how negotiators should divide the pie. Rather, they tell us only that negotiators should make the pie as big as possible before dividing it. How much of the pie should you have?
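The two requirements can be expressed as a pair of filters over candidate utility outcomes. In the Python sketch below, the outcome pairs and the conflict point are hypothetical numbers chosen for illustration:

```python
def rational_outcomes(outcomes, c):
    """Keep (u1, u2) outcomes that are individually rational (each party at
    or above its reservation point c) and jointly rational (not
    Pareto-dominated by another individually rational outcome)."""
    feasible = [o for o in outcomes if o[0] >= c[0] and o[1] >= c[1]]

    def dominated(o):
        # o is dominated if some other outcome is at least as good for both
        # parties and strictly better for at least one.
        return any(q != o and q[0] >= o[0] and q[1] >= o[1]
                   and (q[0] > o[0] or q[1] > o[1]) for q in feasible)

    return [o for o in feasible if not dominated(o)]

candidates = [(2, 8), (5, 5), (8, 2), (4, 4), (1, 9), (6, 5)]
print(rational_outcomes(candidates, c=(2, 2)))
```

Here (1, 9) fails individual rationality for party 1, while (4, 4) and (5, 5) are Pareto-dominated; the survivors are the analogue of the arc bd in Exhibit A1-8.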

Nash Bargaining Theory

Nash’s bargaining theory specifies how negotiators should divide the pie, which involves “a determination of the amount of satisfaction each individual should expect to get from the situation or rather, a determination of how much it should be worth to each of these individuals to have this opportunity to bargain.”19 Nash’s theory makes a specific point prediction of the outcome of negotiation, the Nash solution, which specifies the outcome of a negotiation if negotiators behave rationally.

Nash’s theory makes several important assumptions: Negotiators are rational; that is, they act to maximize their utility. The only significant differences between negotiators are those included in the mathematical description of the game. Further, negotiators have full knowledge of the tastes and preferences of each other.

Nash’s theory builds on the axioms named in EU by specifying additional axioms. By specifying enough properties, we exclude all possible settlements in a negotiation except one. Nash postulates that the agreement point, u, of a negotiation, known as the Nash solution, will satisfy the following five axioms: uniqueness, Pareto-optimality, symmetry, independence of equivalent utility representations, and independence of irrelevant alternatives.

Uniqueness

The uniqueness axiom states that a unique solution exists for each bargaining situation. Simply stated, one and only one best solution exists for a given bargaining situation or game. In Exhibit A1-9, the unique solution is denoted as u.

Pareto-Optimality

The bargaining process should not yield any outcome that both people find less desirable than some other feasible outcome. The Pareto-optimality (or efficiency) axiom is the joint rationality assumption made by von Neumann and Morgenstern.20 The Pareto-efficient frontier is the set of outcomes corresponding to the entire set of agreements that leaves no portion of the total amount of resources unallocated. A given option, x, is a member of the Pareto frontier if and only if no option y exists such that y is preferred to x by at least one party and is at least as good as x for the other party.

Consider Exhibit A1-9: Both people prefer settlement point u (u1, u2), which eliminates c (c1, c2) from the frontier. Therefore, settlement points that lie in the interior of the feasible set, rather than on the arc bd itself, are Pareto-inefficient. Options that are not on the Pareto frontier are dominated; settlements that are dominated clearly violate the utility principle of maximization. The resolution of any negotiation should be an option from the Pareto-efficient set because any other option unnecessarily requires more concessions on the part of one or both negotiators.

Another way of thinking about the importance of Pareto-optimality is to imagine that in every negotiation whether it be for a car, a job, a house, a merger, or some other situation, a table sits with hundreds, thousands, and in some cases, millions of dollars on it. The money is yours to keep, provided you and the other party (e.g., a car dealer, employer, seller, business associate) agree how to divide it. Obviously, you want to get as much money as you can, which is the distributive aspect of negotiation. Imagine for a moment you and the other negotiator settle upon a division of the money that both of you find acceptable. However, imagine you leave half or one-third or some other amount of money on the table. A fire starts in the building, and the money burns. This scenario is equivalent to failing to reach a Pareto-optimal agreement. Most of us would never imagine allowing such an unfortunate event to happen. However, in many negotiation situations, people do just that—they leave money to burn.

Symmetry

In a symmetric bargaining situation, the two players have exactly the same strategic possibilities and bargaining power. Therefore, neither player has any reason to accept an agreement that yields a lower payoff than that of the opponent.

Another way of thinking about symmetry is to imagine interchanging the two players. This alteration should not change the outcome. In Exhibit A1-9, symmetry means that u1 will be equal to u2. The feasible set of outcomes must be symmetrical with respect to a hypothetical 45-degree line, λ, which begins at the origin 0 and passes through the point c, thereby implying that c1 = c2. Extending this line out to the farthest feasible point, u, gives us the Nash point wherein parties’ utilities are symmetric.

The symmetry principle is often considered to be the fundamental postulate of bargaining theory.21 When parties’ utilities are known, the solution to the game is straightforward.22 However, as we already noted, players’ utilities are usually not known. This uncertainty reduces the usefulness of the symmetry principle. That is, symmetry cannot be achieved if a negotiator has only half of the information.23

The Pareto-optimality and symmetry axioms uniquely define the agreement points of a symmetrical game. The remaining two axioms extend the theory to asymmetrical games in which the bargaining power is asymmetric.

Independence of Equivalent Utility Representations

Many utility functions can represent the same preference. Utility functions are behaviorally equivalent if one can be obtained from the other by an order-preserving linear transformation (e.g., by shifting the zero point of the utility scale or by changing the utility unit). A distinguishing feature of the Nash solution outcome is that it is independent of the exchange rate between two players’ utility scales; it is invariant with respect to any fixed weights we might attach to their respective utilities.

The solution to the bargaining game is not sensitive to positive linear transformations of parties’ payoffs because utility is defined on an interval scale. Interval scales, such as temperature, preserve units of measurement but have an arbitrary origin (i.e., zero point) and unit of measurement. The utility scales for player 1 and player 2 in Exhibit A1-9 have an arbitrary origin and unit of measurement.

For example, suppose you and a friend are negotiating to divide 100 poker chips. The poker chips are worth $1 each if redeemed by you and worth $1 each if redeemed by your friend. The question is this: How should the two of you divide the poker chips? The Nash solution predicts that the two of you should divide all the chips and not leave any on the table (Pareto-optimality principle). Further, the Nash solution predicts that you should receive 50 chips and your friend should receive 50 chips (symmetry principle). So far, the Nash solution sounds fine. Now, suppose the situation is slightly changed. Imagine the chips are worth $1 each if redeemed by you, but they are worth $5 each if redeemed by your friend. (The rules of the game do not permit any kind of side payments or renegotiation of redemption values.) Now, how should the chips be divided? All we have done is transform your friend’s utilities using an order-preserving linear transformation (multiply all her values by 5) while keeping your utilities the same. The Nash solution states that you should still divide the chips 50–50 because your friend’s utilities have not changed; rather they are represented by a different, but nevertheless equivalent linear transformation.
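The invariance claim in the chip example can be checked directly: rescaling one party's per-chip utility by any positive factor leaves the Nash-product-maximizing split unchanged. The sketch below assumes reservation utilities of zero for both parties:

```python
def nash_split(u1_per_chip, u2_per_chip, total=100):
    """Chip allocation x (to party 1) that maximizes the Nash product,
    assuming both parties' reservation utilities are zero."""
    return max(range(total + 1),
               key=lambda x: (x * u1_per_chip) * ((total - x) * u2_per_chip))

print(nash_split(1, 1))  # both redeem at $1 per chip: 50-50 split
print(nash_split(1, 5))  # friend redeems at $5 per chip: still 50-50
```

Multiplying one side's utilities by 5 scales every Nash product by the same constant, so the location of the maximum, and hence the predicted 50–50 division of chips, never moves.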

Some people have a hard time with this axiom. After all, if you and your friend are really “symmetric,” one of you should not come out richer in the deal. But consider the arguments that could be made for one of you receiving a greater share of the chips. One of you could have a seriously ill parent and need the money for an operation, one of you might be independently wealthy and not need the money, or one of you could be a foolish spendthrift and not deserve the money. Moreover, there could be a disagreement: One of you regards yourself to be thoughtful and prudent but is regarded as silly and imprudent by the other person. All of these arguments are outside the realm of Nash’s theory because they are indeterminate. Dividing resources to achieve monetary equality is as arbitrary as flipping a coin.

But in negotiation, doesn’t everything really boil down to dollars? No. In Nash’s theory, each person’s utility function may be normalized on a scale of 0 to 1 so that his or her “best outcome” = 1 and “worst outcome” = 0. Therefore, because the choices of origin and scale for each person’s utility function are unrelated to one another, actual numerical levels have no standing in theory, and no comparisons of numerical levels can affect the outcome.

This axiom has serious implications. Permitting the transformation of one player’s utilities without any transformation of the other player’s utilities destroys the possibility that the outcome should depend on interpersonal utility comparisons. Stated simply, it is meaningless for people to compare their utility with another. The same logic applies for comparing salaries, the size of offices, or anything else.

However, people do engage in interpersonal comparisons of utility (Chapter 3). The important point is that interpersonal comparisons and arguments based on “fairness” are inherently subjective, which leaves no rational method for fair division.

Independence of Irrelevant Alternatives

The independence of irrelevant alternatives axiom states that the best outcome in a feasible set of outcomes will also be the best outcome in any smaller subset of feasible outcomes that still contains that outcome. For example, a subset of a bargaining game may be obtained by excluding some of the irrelevant alternatives from the original game, without excluding the original agreement point itself. The exclusion of irrelevant alternatives does not change the settlement.

Consider Exhibit A1-9: The Nash solution is point u. Imagine the settlement options in the half-ellipse below the 45-degree line are eliminated. According to the independence of irrelevant alternatives axiom, this change should not affect the settlement outcome, which should still be u.

This axiom allows a point prediction to be made in asymmetric games by allowing them to be enlarged to be symmetric. For example, imagine that the game the parties play is an asymmetric one like that just described (i.e., the half-ellipse below the 45-degree line is eliminated). Such a bargaining problem would be asymmetric, perhaps with player 2 having an advantage. According to Nash, it is useful to expand the asymmetric game to be one that is symmetric—for example, by including the points in the lower half of the ellipse that mirrors the half-ellipse above the 45-degree line. Once these points are included, the game is symmetric, and the Nash solution may be identified. Of course, the settlement outcome yielded by the new, expanded game must also be present in the original game.

The independence of irrelevant alternatives axiom is motivated by the way negotiation unfolds.24 Through a process of voluntary mutual concessions, the set of possible outcomes under consideration gradually decreases to just those around the eventual agreement point. This axiom asserts that the winnowing process does not change the agreement point.
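The axiom itself is easy to demonstrate: the option that maximizes the Nash product over a full set remains the maximizer over any subset that still contains it. In the sketch below, the utility pairs are hypothetical and the conflict point is assumed to be (0, 0):

```python
def best(options, score):
    """Return the option with the highest score."""
    return max(options, key=score)

# Nash product of a (u1, u2) pair, relative to a conflict point of (0, 0)
nash_product = lambda o: o[0] * o[1]

full = [(0.2, 0.9), (0.5, 0.5), (0.6, 0.55), (0.9, 0.2)]
winner = best(full, nash_product)

# Winnow away "irrelevant" alternatives while keeping the winner:
subset = [o for o in full if o == winner or o[0] >= 0.5]
assert best(subset, nash_product) == winner  # the agreement point is unchanged
print(winner)
```

This mirrors the concession process the text describes: shrinking the set of outcomes under consideration does not move the agreement point, provided the agreement point itself survives the winnowing.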

In summary, Nash’s theorem states that the unique solution possesses these properties. Nash’s solution selects the unique point that maximizes the geometric average (i.e., the product) of the gains available to people as measured against their reservation points. For this reason, the Nash solution is also known as the Nash product. If all possible outcomes are plotted on a graph whose rectangular coordinates measure the utilities that the two players derive from them, as in Exhibits A1-8 and A1-9, the solution is a unique point on the upper-right boundary of the region. The point is unique because, if two distinct solution points existed, they could be joined by a straight line representing alternative outcomes achievable by mixing, with various odds, the probabilities of the original two outcomes, and points on that connecting line would yield higher products of the two players’ utilities, which would contradict the assumption that both endpoints maximized the product. In other words, the region is presumed convex by reason of the possibility of probability mixtures, and the convex region has a single maximum-utility-product point, or Nash point.
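As a numerical illustration of this characterization, the sketch below grid-searches a hypothetical convex region, one whose Pareto frontier we assume to be u2 = 1 − u1² on [0, 1], for the point that maximizes the Nash product relative to a conflict point at the origin:

```python
def nash_point(frontier, c=(0.0, 0.0), steps=100_000):
    """Grid search along a Pareto frontier u2 = frontier(u1), u1 in [0, 1],
    for the point maximizing the Nash product (u1 - c1)(u2 - c2)."""
    best, best_prod = None, float("-inf")
    for i in range(steps + 1):
        u1 = i / steps
        u2 = frontier(u1)
        prod = (u1 - c[0]) * (u2 - c[1])
        if prod > best_prod:
            best, best_prod = (u1, u2), prod
    return best

# Hypothetical concave frontier: u2 = 1 - u1**2
u1, u2 = nash_point(lambda x: 1 - x * x)
print(f"Nash point: ({u1:.3f}, {u2:.3f})")
```

Because the region is convex, the product has a single peak on the upper-right boundary, and the search returns that one Nash point; for this frontier it lands at u1 = 1/√3 ≈ 0.577, u2 = 2/3.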
