11 Social Dilemmas

The previous chapters focused on negotiation situations called explicit negotiations, in which people seek to reach mutual agreement via binding contract. In contrast, many negotiations are conducted through actions and pledges in the absence of a binding contract, as in the Kyoto Protocol. We call these situations tacit negotiations.2 In tacit negotiations, negotiators are interdependent with respect to outcomes, but they make independent decisions. Negotiators’ outcomes are determined by the actions they take and the actions taken by others. People can either behave in a cooperative fashion (e.g., agreeing to reduce emissions) or in a competitive fashion (e.g., refusing to reduce emissions).

The distinction between these two different types of negotiation situations was first articulated by the famous mathematician John Nash, who referred to one branch of negotiations as “cooperative games” and the other as “noncooperative games.”3 In using the terms “cooperative” and “noncooperative,” Nash was not referring to the motivations or behaviors of the parties involved but rather to how the underlying situation was structured. (See Exhibit 11-1.)

In negotiations, our outcomes depend on the actions of others. The situation that results when people engage in behaviors that maximize self-interest but lead to collective disaster (such as a bidding war, greenhouse gas emissions, or negative campaigning) is a social dilemma. In this chapter we discuss several kinds of social dilemmas, including the prisoner’s dilemma, the ultimatum dilemma, the dictator game, the trust game, the volunteer dilemma, multiparty dilemmas, and the escalation dilemma. Each of these dilemmas depicts a real-world situation that might face a negotiator, such as in the opening example. In each of these dilemmas, people face a choice about whether to trust others and thereby risk being exploited. We discuss how dilemmas may be effectively handled by individuals, teams, and even countries.

By understanding how people behave in a variety of dilemma situations, it is possible to influence people’s behavior. Dilemma situations have been studied in a wide variety of contexts, and people’s behavior is highly consistent across time and situations. For example, one investigation observed the behaviors of the same people across five games (two prisoner’s dilemma games, a trust game, a dictator game, and a faith game), separated by intervals of several months.4 There was strong consistency in behavior across these situations: Those who were prosocial stayed prosocial; those who were proself consistently behaved in a proself fashion. An examination of cooperative behavior revealed that businesspeople were the least cooperative group, compared with students, professionals, and employees.5

Social Dilemmas in Business

Business competitors routinely face social dilemmas. Some industries, such as telecommunications, are particularly vicious. An example of the cutthroat nature of industry rivalry occurred in China when software giant Tencent shut down its messaging service QQ on computers installed with the antivirus software 360 Safe, created by rival Qihoo 360. A battle between the companies had begun months earlier when Qihoo 360 accused Tencent’s QQ software of leaking users’ private information and offered its own service to prevent such leaks. Tencent publicly accused Qihoo of slander and bad business practices. As a result, more than 78% of Chinese Internet users felt the companies had stopped looking out for the needs of their clients. Eventually, the rivals realized that they needed one another and forged an alliance by restarting the messaging service.6

In contrast, other industries have attempted to find points of cooperation that can align their competitive goals. For example, in 2010 Apple and Verizon Wireless teamed up to make the iPhone available on Verizon networks. For three years the iPhone had been available only on AT&T networks. The partnership was a direct attempt to capture market share from the more popular Google Android platform. In 2010, 32% of the smartphone market in the United States ran Google’s Android software, and 25% ran Apple’s iPhone software.7 By 2013, Android’s share had dropped to 28%, and 21% of new iPhone buyers had switched from the Android platform.8 Similarly, two national dairy organizations using separate advertising campaigns agreed to create a single marketing plan to increase milk sales in the United States. Dairy Management, Inc., which used the “Got Milk?” campaign, and the National Fluid Milk Processor Promotion Board, which used a popular collection of advertisements in which celebrities wear milk mustaches, coordinated their campaigns to increase total fluid milk sales by 4% by the year 2000.9 Willingness to engage in generic advertising (e.g., advertising the local mall instead of one’s own store) is a common form of interfirm cooperation. Other examples are the American Egg Board, with its catchy phrase “The incredible edible egg,” and the Cattlemen’s Beef Board and National Cattlemen’s Beef Association, with their slogan “Beef. It’s what’s for dinner.” Companies confronting a declining trend contribute significantly more dollars to generic advertising; moreover, contributing positively influences their expectations that others will contribute as well.10

The Prisoner’s Dilemma

Thelma and Louise are common criminals who have just been arrested on suspicion of burglary. Law enforcement has enough evidence to convict each suspect of a minor breaking and entering crime but insufficient evidence to convict the suspects on a more serious felony charge of burglary and assault. The district attorney immediately separates Thelma and Louise after their arrest. Each suspect is approached separately and presented with two options: confess to the serious burglary charge or remain silent (do not confess). The consequences of each course of action depend on what the other decides to do. Thelma and Louise must make their choices independently. They cannot communicate prior to making an independent, irrevocable decision. The decision each suspect faces is illustrated in Exhibit 11-2, which indicates Thelma and Louise will go to prison for as many as 15 years, depending upon what the other partner chooses. Imagine you are advising Thelma. Your concern is not morality or ethics; you are simply trying to get her a shorter sentence. What would you advise her to do?

Ideally, it is desirable for both suspects to not confess, thereby minimizing the prison sentence to one year for each (cell A). This option is risky, however. If one confesses, then the suspect who does not confess goes to prison for the maximum sentence of 15 years—an extremely undesirable outcome (cell B or C). In fact, the most desirable situation from the standpoint of each suspect is to confess but have the other person not confess. Then, the confessing suspect would be released, and her partner would go to prison for the maximum sentence of 15 years. Given these contingencies, what should Thelma do? Before reading further, stop and think about her best course of action.

When each person pursues the course of action that is most rational from her point of view, the result is mutual disaster. That is, both Thelma and Louise go to prison for 10 years (cell D). The paradox of the prisoner’s dilemma is that the pursuit of individual self-interest leads to collective disaster. It is easy for Thelma and Louise to see that each could do better by cooperating, but it is not easy to know how to implement this behavior.

Cooperation and Defection as Unilateral Choices

We will use the prisoner’s dilemma situation depicted in Exhibit 11-2 to analyze decision making. We will refer to the choices that players make in this game as cooperation or defection, depending upon whether they remain silent or confess. The language of cooperation and defection allows the prisoner’s dilemma game structure to be meaningfully extended to other situations that do not involve criminals but nevertheless have the same underlying structure, such as whether an airline company should bid for a smaller company, whether a cola company or politician should engage in negative advertising, or whether countries should agree to reduce emissions. However, prisoner’s dilemmas don’t just describe criminals and business strategy. In fact, the prisoner’s dilemma was initially developed to analyze the “arms race” between the United States and the Soviet Union. Each country sought to develop and deploy the arsenal of nuclear arms it thought necessary for military defense. Studies of nuclear proliferation are crucial for policy making and debate.11

Rational Analysis

We use the logic of game theory to provide a rational analysis of this situation. We consider three different cases: (a) one-shot, nonrepeated play situations (as in the case of Thelma and Louise), (b) the case in which the decision is repeated for a finite number of times, and (c) the case in which the decision is repeated for a potentially infinite number of trials or the end is unknown.

Case 1: One-Shot Decision

Game theoretic analysis relies on the principle of dominance detection: A strategy is dominant if it yields a player a better outcome no matter what the other player does.

To illustrate the dominance principle, suppose you are Thelma and your partner in crime is Louise. First, consider what happens if Louise remains silent (does not confess). Thus, we are focusing on the first row in Exhibit 11-2. Remaining silent puts you in cell A: You both get one year. This outcome is not too bad, but maybe you could do better. Suppose you decide to confess. In cell B, you get 0 years and Louise gets 15 years. Certainly, serving no time is better than serving a year, so confession seems like the optimal choice for you to make, given that Louise does not confess.

Now, what happens if Louise confesses? In this situation, we focus on row 2. Remaining silent puts you in cell C: You get 15 years, and Louise gets 0 years, which is not very good for you. Now, suppose you confess. In cell D, you both get 10 years. Neither outcome is attractive, but 10 years is certainly better than 15 years. Given that Louise confesses, what do you want to do? The choice amounts to whether you want to go to prison for 15 years or 10 years. Again, confession is the optimal choice for you.

No matter what Louise does (remains silent or confesses), it is better for Thelma to confess. Confession is a dominant strategy: Under all possible states of the world, the player does better by confessing. Louise, who is equally smart, analyzes the situation the same way and reaches the same conclusion. In this sense, mutual defection is an equilibrium outcome, meaning no player can unilaterally (single-handedly) improve her outcome by making a different choice.
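
To make the dominance logic concrete, here is a minimal Python sketch (ours, not part of the original analysis) that encodes the sentences from Exhibit 11-2 as described in the text and checks whether confessing dominates remaining silent; the dictionary keys and function names are our own labels.

# Sentences (in years) from Exhibit 11-2, as described in the text.
# Key: (thelma_choice, louise_choice) -> (thelma_years, louise_years)
SENTENCES = {
    ("silent", "silent"): (1, 1),      # cell A: both remain silent
    ("confess", "silent"): (0, 15),    # cell B: confessor goes free
    ("silent", "confess"): (15, 0),    # cell C: mirror image of cell B
    ("confess", "confess"): (10, 10),  # cell D: both confess
}

def dominates(mine, alternative):
    """True if choosing `mine` gives Thelma fewer years than `alternative`
    no matter what Louise does (fewer years is better)."""
    return all(
        SENTENCES[(mine, louise)][0] < SENTENCES[(alternative, louise)][0]
        for louise in ("silent", "confess")
    )

print(dominates("confess", "silent"))  # True: confessing is dominant
print(dominates("silent", "confess"))  # False

Running the check confirms the argument just made: whatever Louise does, Thelma serves less time by confessing.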

Thus, both Thelma and Louise are led through rational analysis to confess, and they collectively end up in cell D, where they both go to prison for a long time. This outcome is both unfortunate and avoidable. Certainly, both suspects would prefer to be in cell A than in cell D. Is escape possible from the tragic outcomes produced by the prisoner’s dilemma? It would seem that players might extricate themselves from the dilemma if they could communicate, but we already noted that communication is outside the bounds of the noncooperative game. Further, because the game structure is noncooperative, any deals that players might make with one another are nonbinding. For example, antitrust legislation prohibits companies from price fixing, which requires that each company establish prices on its own, without agreeing with a competitor.

What other mechanism might allow parties in such situations to avoid the disastrous outcome produced by mutual defection? One possibility is to have both parties make those decisions over time, thereby allowing them to influence one another. Suppose the parties did not make a single choice but instead made a series of choices and received feedback about the other player’s choice after each of their decisions. Perhaps repeated interaction with the other person would provide a mechanism for parties to coordinate their actions. If the game is to be played more than once, players might learn that cooperation may be elicited in subsequent periods by cooperating on the first round. We consider that situation next.

Case 2: Repeated Interaction Over a Finite Amount of Time

Instead of making a single choice and living with the consequence, suppose Thelma and Louise were to play the game in Exhibit 11-2 a total of four times. It might seem strange to think about criminals repeating a particular interaction, so it may be useful to think about two political candidates deciding whether to engage in negative campaigning (hereafter referred to as “campaigning”). Term limits in their state dictate that they can run and hold office for a maximum of four years. An election is held every year. During any election period, each candidate makes an independent choice (to campaign or not), then learns of the other’s choice (to campaign or not). After the election, the candidates consider the same alternatives once again and make an independent choice; this interaction continues for four separate elections.

We use the concept of dominance as applied previously to analyze this situation, but we need another tool that tells us how to analyze the repeated nature of the game. Backward induction is the mechanism by which a person decides what to do in a repeated game situation by looking backward from the last stage of the game.

We begin by examining what players should do in election 4 (the last election). If the candidates are making their choices in the last election, the game is identical to that analyzed in case 1, the one-shot case. Thus, the logic of dominant strategies applies, and we are left with the conclusion that each candidate will choose to campaign. Now, given that we know each candidate will campaign in the last election, what will they do in the third election?

From a candidate’s standpoint, the only reason to cooperate (to not campaign) is to influence the behavior of the other party in a subsequent election. In other words, a player might signal a willingness to cooperate by making a cooperative choice in the preceding period. We have already determined that it is a foregone conclusion that both candidates will defect (choose to campaign) in the fourth and final election, so it is futile to choose the cooperative (no campaigning) strategy in the third election: No future behavior remains to be influenced. What about the second election? Given that the candidates will not cooperate in the third or fourth elections, they would find little point in cooperating in the second election, for the same reason that cooperation was deemed ineffective in the third. The same logic applies to the first election. This backward reasoning holds in any situation with a finite, known number of elections, leaving us with the conclusion that defection remains the dominant strategy even in the repeated-trial case.12
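
The unraveling argument can be summarized in a few lines of code. The sketch below is an illustration of ours, not a general game solver; it walks backward from the final election and records that, with all later play already fixed at defection, each earlier election reduces to the one-shot case:

# Backward induction in a finitely repeated prisoner's dilemma.
# In the last round, defection dominates (the one-shot logic). Because
# play in all later rounds is already resolved, cooperating earlier
# cannot influence anything, so each earlier round reduces to the
# one-shot case as well.
def backward_induction(num_rounds):
    plan = {}
    for round_no in range(num_rounds, 0, -1):  # last round first
        plan[round_no] = "defect"              # nothing left to influence
    return [plan[r] for r in range(1, num_rounds + 1)]

print(backward_induction(4))  # ['defect', 'defect', 'defect', 'defect']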

This conclusion is disappointing. It suggests that cooperation is not possible even in long-term relationships. It also runs counter to intuition and to what we actually observe. We must consider another case, arguably more representative of the situations we want to study, in which repeated interaction continues for an infinite or indefinite amount of time.

Case 3: Repeated Interaction for an Infinite or Indefinite Amount of Time

In the case in which parties interact with one another for an infinite or indefinite amount of time, the logic of backward induction breaks down. Because no identifiable endpoint from which to reason backward exists, we are left with forward-thinking logic.

If we anticipate playing a prisoner’s dilemma game with another person for an infinitely long or uncertain length of time, we reason that we might influence their behavior with our own behavior. We may signal a desire to cooperate on a mutual basis by making a cooperative choice initially. Similarly, we can reward and punish their behavior through our actions.

Under such conditions, the game theoretic analysis indicates that cooperation in the first period is the optimal choice.13 Should our strategy be to cooperate no matter what? No! If a person adopted unconditional cooperation as a general strategy, it would surely invite exploitation. So, what strategy would be optimal to adopt? Before reading further, stop and indicate what strategy you think is best.
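
A rough numerical sketch shows why initial cooperation can be optimal. Suppose your counterpart reciprocates: she cooperates as long as you do and defects forever after your first defection. The per-round payoffs below are illustrative assumptions (the same $3/$1/$5/$0 amounts that appear in the Hofstadter letter later in this chapter), and the key parameter is the probability that the interaction continues for another round:

# Expected total payoff against a reciprocating partner, when the game
# continues to the next round with probability `continuation_prob`.
# Assumed per-round payoffs (illustrative, not from Exhibit 11-2):
# mutual cooperation pays 3, mutual defection pays 1, and a unilateral
# defector earns 5 once while exploiting a cooperator.
REWARD, PUNISHMENT, TEMPTATION = 3, 1, 5

def expected_total(strategy, continuation_prob, horizon=10_000):
    total, still_playing = 0.0, 1.0  # probability the game reaches a round
    for round_no in range(horizon):
        if strategy == "cooperate":      # mutual cooperation every round
            total += still_playing * REWARD
        elif round_no == 0:              # defect: exploit the partner once...
            total += still_playing * TEMPTATION
        else:                            # ...then mutual defection forever
            total += still_playing * PUNISHMENT
        still_playing *= continuation_prob
    return total

for p in (0.3, 0.5, 0.9):
    print(p, round(expected_total("cooperate", p), 2),
          round(expected_total("defect", p), 2))
# With these payoffs, cooperation earns more whenever the chance of another
# round exceeds one-half: 3/(1-p) > 5 + p/(1-p) exactly when p > 0.5.

The point is not the particular numbers but the structure: once the relationship has no identifiable endpoint, defecting today sacrifices a stream of future cooperative payoffs, so cooperating at the outset can be the self-interested choice.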

The Tournament of Champions

In 1981, Robert Axelrod, a leading game theorist, invited members of the scientific community to submit a strategy to play in a prisoner’s dilemma tournament. To enter the tournament, a person had to submit a strategy (a plan instructing a decision maker what to do on every trial under all possible conditions) in the form of a computer program written in FORTRAN code. Axelrod explained that each strategy would play all other strategies across 200 trials of a prisoner’s dilemma game. He further explained that the strategies would be evaluated in terms of the maximization of gains across all opponents. Dozens of strategies were submitted by eminent scholars from around the world.

The Winner Is a Loser

The winning strategy of the tournament was the simplest strategy submitted. The FORTRAN code was only four lines long. The strategy was called tit-for-tat and was submitted by Anatol Rapoport. Tit-for-tat accumulated the greatest number of points across all trials with all of its opponents. The basic principle for tit-for-tat is simple: Tit-for-tat always cooperates on the first trial, and on subsequent trials it does whatever its opponent did on the previous trial (time period). For example, suppose tit-for-tat played against someone who cooperated on the first trial, defected on the second trial, and then cooperated on the third trial. Tit-for-tat would cooperate on the first trial and the second trial, defect on the third trial, and cooperate on the fourth trial.
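
The strategy is simple enough to express in a few lines. The Python sketch below is our paraphrase of the idea, not Rapoport’s original FORTRAN entry; it reproduces the four-trial example just described:

# Tit-for-tat: cooperate on the first trial, then copy whatever the
# opponent did on the previous trial. "C" = cooperate, "D" = defect.
def tit_for_tat(opponent_history):
    if not opponent_history:       # first trial: always cooperate
        return "C"
    return opponent_history[-1]    # otherwise, mirror the last move

# The example from the text: the opponent plays C, D, C. Tit-for-tat
# responds with C (first trial), C, D, C across the four trials.
opponent_moves = ["C", "D", "C"]
my_moves = [tit_for_tat(opponent_moves[:i]) for i in range(4)]
print(my_moves)  # ['C', 'C', 'D', 'C']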

Tit-for-tat never beat any of the strategies it played against. Because it is never the first to defect, it can never do better than its opponent; the most tit-for-tat can do is earn as much as its opponent. If it never wins, how can tit-for-tat be so successful in maximizing its overall gains? The answer is that it induces cooperation from its opponents.

Psychological Analysis of Why Tit-for-Tat Is Effective

Not Envious

One reason why tit-for-tat is effective is that it is not an envious strategy. Tit-for-tat never aims to beat its opponent; indeed, it can never earn more than any strategy it plays against. Rather, the tit-for-tat strategy seeks to maximize its own gain in the long run. Unfortunately, people are often preoccupied with how much the other party is earning. Fairness becomes a more important concern than self-interest when negotiation involves negative payoffs rather than positive payoffs.14 When people are motivated by fairness, they are more likely to make attractive offers; when they are motivated by self-interest, they withhold information and make unattractive offers.15

Nice

Tit-for-tat always begins the interaction by cooperating. Furthermore, because it is never the first to defect, tit-for-tat is a nice strategy. This feature is important because it is difficult for people to recover from initial defections. Competitive, aggressive behavior often sours a relationship. Moreover, aggression often begets aggression. The tit-for-tat strategy neatly avoids the costly mutual escalation trap.

Tough

A strategy of solid cooperation would be easily exploitable by an opponent. Tit-for-tat can be provoked: It will defect if the opponent invites competition. By reciprocating defection, tit-for-tat conveys the message that it cannot be taken advantage of. Indeed, tit-for-tat players effectively move competitive players away from them, thus minimizing noncooperative interaction.16 Analyses of strategic rivalries from 1816 to 1999 revealed that a state has an incentive to initiate and escalate conflicts to maintain a reputation for “resolute behavior,” which is important for general and immediate deterrence.17

Forgiving

Tit-for-tat is a forgiving strategy: It reciprocates cooperation immediately, even after an opponent has defected. This feature is important because it is often difficult for people in conflict to recover from defection and end an escalating spiral of aggression. Tit-for-tat’s eye-for-an-eye strategy ensures that its response to aggression will never be greater than what it received.

Simple

Another reason why tit-for-tat is so effective is that it is simple. People can quickly figure out what to expect from a player who follows it. When people are uncertain or unclear about what to expect, they are more likely to engage in defensive behavior. When uncertainty is high, people often assume the worst about another person.

In summary, tit-for-tat is an extremely stable strategy. Negotiators who follow it often induce their opponents to cooperate. However, few who play prisoner’s dilemma games actually follow tit-for-tat. For example, in our analysis of more than 600 executives playing the prisoner’s dilemma game, the defection rate is nearly 40%, and average profits are only one-tenth of the possible maximum! But tit-for-tat is not uniquely stable; other strategies are stable as well. For example, solid defection is a stable strategy. Two players who defect on every trial have little reason to do anything else. Once someone has defected, it is difficult to renew cooperation.

Recovering from Defection

The beer, computer, and soft-drink industries are all highly competitive, with companies trying to capture market share through negative advertising. In 2010, Apple released a series of commercials for its Macintosh computer in which the Mac was portrayed as a hip young man while the PC was represented by a socially inept, blandly dressed, middle-aged man.18 Similarly, rivals Coke and Pepsi spent decades as the world’s number one and number two soft drink makers, and an equal amount of time taunting one another in advertising. One Pepsi Max ad portrayed delivery drivers from the rival soft drink makers forming a short-lived friendship in a diner over the song “Why Can’t We Be Friends” by the band War. The Pepsi Max driver and the Coke Zero driver sample each other’s drinks, and the Coca-Cola driver prefers the Pepsi drink. When the Pepsi driver snaps a picture of the Coke driver enjoying the drink, a cartoonish fight erupts.19 Hundreds of thousands of dollars are spent on this negative advertising, a form of defection. How can an escalating spiral of defection be brought to an end? Consider the following strategies:

Make Promises

“Talk is cheap” in a prisoner’s dilemma game because people can always say one thing but actually do another. Nevertheless, when people make verbal commitments, they tend to honor them, even if the commitments are not binding.20 In one investigation, 40% of people made voluntary promises, and these people were 50% more likely to cooperate than people who did not make promises.21 When players alternate making proposals and then confirm them in a separate stage, they are more likely to cooperate.22 Confirmation acts as a tacit message that signals players’ willingness to cooperate.

Make Situational Attributions

We often blame conflict escalation on others’ ill will and evil intentions. We fail to realize that we might have done the same thing as our competitor had we been in his or her shoes. Why? We punctuate events differently than do our opponents. We see our behavior as a defensive response to the other. In contrast, we view the other as engaging in unprovoked acts of aggression. The solution is to see the other party’s behavior as a response to our own actions. In the preceding situation, your competitor’s negative ad campaign may be a payback for your campaign a year ago.

One Step at a Time

Once destroyed, trust takes time to rebuild. Rebuild trust incrementally by taking a series of small steps that effectively “reward” the other party for behaving cooperatively. For example, the GRIT (graduated and reciprocated initiatives in tension reduction) strategy (reviewed in Chapter 9) calls for parties in conflict to offer small concessions.23 This approach reduces the risk for the party making the concession.

Getting Even and Catching Up

As we saw in Chapter 3, people are especially concerned with fairness. One way of rebuilding trust is to let the other party “get even” and catch up. Repairing a damaged relationship may depend on repentance on the part of the perpetrator and forgiveness on the part of the injured.24 Even more surprising is that small amends are as effective as large amends in generating future cooperation.

Make Your Decisions at the Same Time

Imagine you are playing a prisoner’s dilemma game like that described in the Thelma and Louise case. You are told about the contingencies and payoffs in the game and then asked to make a choice. The twist in the situation is that you are told that your opponent (a) has already made her choice earlier that day, (b) will make her choice later that day, or (c) will make her choice at the same time as you. In all cases, you will not know the other person’s choice before making your own. When faced with this situation, people are more likely to cooperate when their opponent’s decision is temporally contiguous with their own decision (i.e., when the opponent makes her decision at the same time).25 Temporal contiguity fosters a causal illusion: the idea that our behavior at a given time can influence the behavior of others. This illusion is not available when the decisions are made at different times.

In the prisoner’s dilemma game, people make choices simultaneously; therefore, one’s choice cannot influence the choice the other person makes on a given trial, only in subsequent trials. That is, when Thelma makes her decision to confess or not, it does not influence Louise, unless she is telepathic. Nevertheless, people act as if their behavior influences the behavior of others, even though it logically cannot.

In an intriguing analysis of this perception, Douglas Hofstadter wrote a letter, published in Scientific American, to 20 friends.

Letter from Douglas Hofstadter to 20 Friends in Scientific American:26

Dear          :

I am sending this letter by special delivery to 20 of you (namely, various friends of mine around the country). I am proposing to all of you a one-round Prisoner’s Dilemma game, the payoffs to be monetary (provided by Scientific American). It is very simple. Here is how it goes.

Each of you is to give me a single letter: C or D, standing for “cooperate” or “defect.” This will be used as your move in a Prisoner’s Dilemma with each of the 19 other players.

Thus, if everyone sends in C, everyone will get $57, whereas if everyone sends in D, everyone will get $19. You can’t lose! And, of course, anyone who sends in D will get at least as much as everyone else. If, for example, 11 people send in C and nine send in D, then the 11 C-ers will get $3 apiece from each of the other C-ers (making $30) and will get nothing from the D-ers. Therefore, C-ers will get $30 each. The D-ers, in contrast, will pick up $5 apiece from each of the C-ers (making $55) and will get $1 from each of the other D-ers (making $8), for a grand total of $63. No matter what the distribution is, D-ers always do better than C-ers. Of course, the more C-ers there are, the better everyone will do!

By the way, I should make it clear that in making your choice you should not aim to be the winner but simply to get as much money for yourself as possible. Thus, you should be happier to get $30 (say, as a result of saying C along with 10 others, even though the nine D-sayers get more than you) than to get $19 (by saying D along with everyone else, so that nobody “beats” you). Furthermore, you are not supposed to think that at some later time you will meet with and be able to share the goods with your coparticipants. You are not aiming at maximizing the total number of dollars Scientific American shells out, only at maximizing the number of dollars that come to you!

Of course, your hope is to be the unique defector, thereby really cleaning up: with 19 C-ers, you will get $95 and they will each get 18 times $3, namely $54. But why am I doing the multiplication or any of this figuring for you? You are very bright. So are the others. All about equally bright, I would say. Therefore, all you need to do is tell me your choice. I want all answers by telephone (call collect, please) the day you receive this letter.

It is to be understood (it almost goes without saying, but not quite) that you are not to try to consult with others who you guess have been asked to participate. In fact, please consult with no one at all. The purpose is to see what people will do on their own, in isolation. Finally, I would appreciate a short statement to go along with your choice, telling me why you made this particular one.

Yours,

Doug H.

Hofstadter raised the question of whether one person’s action in this situation can be taken as an indication of what all people will do. He concluded that if players are indeed rational, they will either all choose to defect or all choose to cooperate. Given that all players are going to submit the same answer, which choice would be more logical? It would seem that cooperation is best (each player gets $57 when they all cooperate and only $19 when they all defect). At this point, the logic seems like magical thinking: A person’s choice at a given time influences the behavior of others at the same time. Another example: People explain that they have decided to vote in an election so that others will, too. Of course, it is impossible for one person’s voting behavior to affect others in a given election, but people act as if it does. Hofstadter argues that decision makers wrestling with such choices must give others credit for seeing the logic they themselves have seen. Thus, we need to believe that others are rational (like ourselves) and that they believe that everyone is rational. Hofstadter calls this kind of rationality superrationality. For this reason, choosing to defect undermines the very reasons for choosing it. In Hofstadter’s game, 14 people defected and 6 cooperated. The defectors received $43; the cooperators received $15. Robert Axelrod was one of the participants who defected, and he remarked that a one-shot game offers no reason to cooperate. When people believe that their counterparty will make a decision in the future, their decision to cooperate is often guided by an illusory belief that they can control the other party’s decision, even when they cannot.27
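
The letter’s arithmetic is easy to verify. In the sketch below (our reconstruction), the pairwise payoffs are the ones stated in the letter itself: $3 each for mutual cooperation, $1 each for mutual defection, and $5 versus $0 when a defector meets a cooperator.

# Per-player payoffs in Hofstadter's 20-person game, where each player
# plays a prisoner's dilemma against the 19 others. Pairwise payoffs
# from the letter: C-C pays $3 each, D-D pays $1 each, and D-C pays $5
# to the defector and $0 to the cooperator.
def payoffs(num_cooperators, group_size=20):
    num_defectors = group_size - num_cooperators
    c_payoff = 3 * (num_cooperators - 1) if num_cooperators else None
    d_payoff = (5 * num_cooperators + (num_defectors - 1)
                if num_defectors else None)
    return c_payoff, d_payoff

print(payoffs(20))  # (57, None): everyone cooperates -> $57 apiece
print(payoffs(0))   # (None, 19): everyone defects -> $19 apiece
print(payoffs(11))  # (30, 63): the 11-C / 9-D example in the letter
print(payoffs(19))  # (54, 95): the hoped-for "unique defector" cleans up
print(payoffs(6))   # (15, 43): the actual result -- 6 C-ers, 14 D-ers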

Ultimatum Dilemma

In an ultimatum bargaining situation, one person (the proposer) makes a final offer—an ultimatum—to another person (the responder). If the responder accepts the offer, the proposer receives what he or she demanded, and the responder receives what was offered. If the offer is refused, no settlement is reached (i.e., an impasse occurs), and negotiators receive their respective reservation points.

How should we negotiate in ultimatum situations? What kind of a final offer should we make to another person? When the tables are turned, on what basis should we accept or refuse a final offer someone makes to us?

Suppose someone with a $100 bill in hand approaches you and the person sitting on the bus beside you. This person explains that the $100 is yours to share with the other person if you can propose a split to which the other person will agree. The only hitch is that the division you propose is a once-and-for-all decision: You cannot discuss it with the other person, and you have to propose a take-it-or-leave-it split. If the other person accepts your proposal, the $100 will be allocated accordingly. If the other person rejects your proposal, no one gets any money, and you do not have the opportunity to propose another offer. Faced with this situation, what should you do? (Before reading further, indicate what you would do and why.)

It is useful for us to solve this problem using the principles of decision theory and then see whether the solution squares with our intuition. Once again, we use the concept of backward induction, working backward from the last period of the game. The last decision in this game is the responder’s: The responder (the person beside you on the bus) must decide whether to accept your proposal or reject it and receive nothing. From a rational standpoint, the responder should accept any positive offer you make because, after all, something (even 1 cent) is better than nothing.

Now we can examine the next-to-last decision in the game and ask what split of money the proposer (you) should make. Because you know that the responder should accept any offer greater than $0, the game theoretic solution is for you to offer $0.01 to the responder and demand $99.99 for yourself. This proposal is a subgame perfect equilibrium because it is rational within each period of the game.28 In other words, even if the game had additional periods to be played in the future, a split of $99.99 for you and $0.01 for the other person would still be rational at this point.

Contrary to game theoretic predictions, most people do not behave this way. Most proposers offer amounts substantially greater than $0.01 to the responder, often around the midpoint, or $50. Further, responders often reject offers that are not 50–50 splits.29 Thus, some responders choose to have $0 rather than $1 or $2—or even $49. Proposers act nonrationally, and so do responders. Rejection seems completely counter to one’s interests, but as we saw in Chapter 2, people are often more concerned with how their outcomes compare with those of others than with the absolute value of their outcomes.30 Several factors determine the likelihood that the responder will accept the offer made by the proposer.
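
A small sketch makes the gap between theory and behavior explicit. The backward-induction logic says the proposer should offer the smallest acceptable amount; the 50% rejection threshold in the second call below is an illustrative assumption about a fairness-minded responder, not an empirical constant.

# Backward induction in the $100 ultimatum game: the responder accepts
# any offer at or above her rejection threshold, so the proposer offers
# exactly that threshold and keeps the rest.
PIE, PENNY = 100.00, 0.01

def best_proposal(rejection_threshold=PENNY):
    offer = rejection_threshold
    return PIE - offer, offer  # (proposer's share, responder's share)

# A purely rational responder accepts anything positive:
print(best_proposal())      # (99.99, 0.01): the subgame perfect split
# A fairness-minded responder who rejects anything below half forces:
print(best_proposal(50.0))  # (50.0, 50.0)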

Complete versus Incomplete Information

Acceptance rates are driven by how much information the responder has about the size of the total pie.31 When the responder does not know the size of the pie and receives a dollar offer, she is much more likely to reject it. In an ultimatum situation, ignorance might also serve as a strength rather than a weakness, in the following sense. Consider a situation in which a proposer can choose between two possible offers to make to a responder. Option A always gives the proposer a higher payoff than option B. However, suppose that the payoff for the responder depends on a random determination (sometimes the responder gets a desirable payoff, and other times not). Now, suppose that the proposer does not know how much money the responder will receive; the proposer only knows how much he or she will get. When proposers are “ignorant” about how much the responder gets, responders accept whatever the proposer suggests almost all of the time; this is not true when the responder believes the proposer is not ignorant.32

Framing

Ultimatum games can be framed as “taking” or “giving.” Allocations to responders are highest when the game is framed as “taking” and lowest when the game is framed as “giving.”33

Deadlines

Responders usually set deadlines that are too short; a better strategy in the case of uncertainty about the other party’s deadline is to set a longer deadline.34

Feelings and Emotions

Proposers who rely on their feelings make less generous offers than proposers who do not rely on their feelings.35

Social Identity

The social identity of the players affects the allocations offered by proposers. For example, even when people know that the counterparty is behaving unfairly, they are more likely to hold a positive bias toward their in-group.36 People are more likely to deceive strangers than friends, and they are more suspicious of strangers than of friends in an ultimatum game.37

Dictator Game

In the dictator game, the proposer makes a suggested split of money (or resources) between himself or herself and a responder. However, unlike the ultimatum game, in which the responder can either accept or reject the proposed split, in the dictator game the responder must accept the split. On the surface, it would seem that in such a situation the proposer (dictator) would simply keep everything for himself or herself, given that the responder has no say in the matter. However, this is not what actually happens. A striking number of dictators give the responder a nonzero allocation. Some research concludes that proposers either fail to maximize their own expected utility or have utility functions that include the benefits received by others. Other research suggests the proposer’s generous behavior is motivated by a desire to behave in a manner consistent with social norms and a willingness to take less in order to avoid actions that could be viewed as socially inappropriate, such as greed. This could indicate that the proposer expects some long-term benefit from an altruistic demonstration of generosity.38

Trust Game

The trust game extends the dictator game by one step: The reward that the dictator can unilaterally split between himself or herself and a partner is determined in part by an initial offer (gift) from that partner. Thus, in the trust game, the first move is made by the dictator’s partner (the trustor), who must decide how much of his or her initial endowment to entrust to the dictator (the trustee), in the hope of receiving some of it back. The rules of the game specify that the gift the trustor makes to the trustee is increased by some factor. Because nothing obligates the trustee to return anything, a trusting partner risks being deceived and exploited.

From a purely economic point of view, the trustor (investor) should not invest, anticipating that the allocator (trustee) will keep all the money. In actuality, however, trustors send money. Moreover, people are more likely to send money in the trust game than to take a gamble involving equal stakes in a situation that does not involve another person (e.g., a lottery).39 However, when people are encouraged to engage in consequential thinking (thinking about what might happen as the game is played out), their decision to invest decreases dramatically.40 Although many trustees return enough money to the trustor to equalize their payoffs, trustors only benefit when they send all or almost all of their endowments.41 Recipients (trustees) view sending less than everything as a lack of trust. Trustors focus primarily on the risks associated with trusting, whereas trustees base their decisions on the level of benefits they have received.42
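
The payoff structure is easy to state precisely. In the sketch below, the $10 endowment and the tripling of the amount sent are illustrative assumptions (the text says only that the gift is increased by “some factor”; tripling is common in laboratory studies):

# Trust game payoffs. The trustor sends some of her endowment; the
# amount is multiplied before reaching the trustee, who decides how much
# to return. Endowment and multiplier are illustrative assumptions.
ENDOWMENT, MULTIPLIER = 10.0, 3.0

def trust_game(amount_sent, amount_returned):
    pot = amount_sent * MULTIPLIER
    assert 0 <= amount_sent <= ENDOWMENT and 0 <= amount_returned <= pot
    trustor_payoff = ENDOWMENT - amount_sent + amount_returned
    trustee_payoff = pot - amount_returned
    return trustor_payoff, trustee_payoff

print(trust_game(0, 0))    # (10.0, 0.0): no trust, the "rational" baseline
print(trust_game(10, 15))  # (15.0, 15.0): full trust, payoffs equalized
print(trust_game(10, 0))   # (0.0, 30.0): full trust, fully exploited

Note that the trustor beats her $10 baseline only when she sends a large amount and the trustee returns generously, which mirrors the finding that trustors benefit only when they send all or almost all of their endowments.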

Binding versus Nonbinding Contracts

The use of binding contracts has a paradoxical effect on trust. In one investigation, negotiators operated under either binding or nonbinding contracts, and then these contracts were removed. Trust dropped significantly when binding contracts were removed, but not when nonbinding contracts were removed.43 This is because people attribute cooperation to the constraints imposed by the contract, not to personal trust.

One investigation examined proposals made between trustors and trustees who communicated using either nonbinding numerical messages or chat messages. Numerical communication significantly increased both trusting behavior and trustworthiness, and a 60-second communication in a chat room generated an even larger effect.44 Chat messages increase the likelihood that trustors and trustees adhere to nonbinding agreements.

Social Networks and Reputations

When people play the trust game in the context of a social network, trust is lower when people are building networks, but once networks are formed, they engage in higher levels of trust and trustworthiness.45 When trustors have an opportunity to observe the allocators’ previous decision, they are more likely to invest (send money).46 Compared to individual decision makers, group representatives who are responsible for making unilateral decisions on behalf of their groups are more trusting but less reciprocating.47 Gender differences in trust exist: Men tend to trust people based on whether they share a common group membership; women trust people with whom they have a relationship.48

Relationship Threat

People in long-term relationships need to cope with stressors. Paradoxically, stressors can increase trust. Indeed, when people experience a stress to their relationship, they are more likely to display higher trusting behavior, even in a one-shot trust game.49

Self-Blame and Regret

One reason why trustors don’t invest is that they anticipate blaming themselves if trust is violated. Indeed, people were more reluctant to invest money in a company when it risked failure due to fraud rather than due to low consumer demand, because people are more likely to blame themselves for not anticipating fraud.50

Restoring Broken Trust

After a trust violation, some people are quick to forgive, but others never trust again. The desire to punish others is driven by feelings of anger. As people get older, they are less likely to seek retribution (to punish others who have violated their trust).51 People who believe that moral character can change over time (incremental beliefs) are more likely to trust the counterparty following an apology, but people who believe that moral character cannot change (entity beliefs) are not.52 For example, people who read an essay in which a person’s moral character changes over time are more likely to trust another person than people who read an essay in which a person’s moral character remains unchanged.

When trust is violated in trust games, a key question concerns how transgressors make amends and whether such amends are accepted by the victim. A widely used strategy is having the transgressor pay financial compensation to the victim. When the transgressor makes voluntary (as opposed to forced) compensation to the victim, it communicates more repentance than when the compensation is imposed, particularly when victims have a low tendency to forgive.53 Larger compensations elicit more trust than exact or partial compensations, but not when the transgressor’s bad intentions are obvious.54

Volunteer Dilemma

The volunteer dilemma is a situation in which at least one person in a group must sacrifice his or her own interests to better the group.55 An example is a group of friends who want to go out for an evening of drinking and celebration. The problem is that not everyone can drink, because one person must safely drive everyone home. A “designated” driver is a volunteer for the group. Other examples include a bystander’s decision to help a victim and a company’s decision to develop innovative products. From a purely economic standpoint, there is little or no incentive for a member to volunteer, and responsibility is diffused among group members.56 Indeed, group size has a negative effect on volunteering.57 When volunteers can share the cost, the probability of volunteering is greater than when a single volunteer bears the full burden. Communication increases volunteering, and the act of volunteering strengthens group ties.58 Feelings of obligation to one’s group, expectation of extrinsic rewards, and identifying with one’s organization all significantly increase volunteerism.59
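
The group-size effect can be illustrated with the standard mixed-strategy analysis of the volunteer’s dilemma: everyone receives a benefit if at least one person volunteers, and a volunteer pays a cost. In equilibrium, each person volunteers just often enough to be indifferent between volunteering and free riding. The benefit and cost values in the sketch below are illustrative assumptions.

# Volunteer's dilemma: everyone gets benefit B if at least one of the n
# group members volunteers; a volunteer pays cost C (with C < B).
# Indifference between volunteering (B - C for sure) and free riding
# (B times the chance that someone else volunteers) gives each person's
# equilibrium volunteering probability p = 1 - (C/B) ** (1/(n-1)).
B, C = 10.0, 4.0  # illustrative benefit and cost

def volunteer_probability(n):
    return 1 - (C / B) ** (1 / (n - 1))

for n in (2, 5, 20, 100):
    p = volunteer_probability(n)
    no_one = (1 - p) ** n  # chance that nobody volunteers at all
    print(f"n={n:3d}  p(volunteer)={p:.2f}  p(no one)={no_one:.2f}")
# As n grows, each member volunteers less often AND the chance that no
# one volunteers rises: the diffusion of responsibility described above.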

Multiparty Dilemmas

Sometimes, managers find themselves involved in a prisoner’s dilemma that contains several people. For example, several commercial air carriers compete with one another for market share, as do several fast-food chains. In these types of situations, parties may exhibit cooperative strategies or competitive, self-interested strategies, such as price competition and negative advertising. The multiperson prisoner’s dilemma is known as a social dilemma. In general, people behave more competitively (in a self-interested fashion) in social dilemmas (when more than two players compete) than in prisoner’s dilemmas (when two players compete). Why do they act this way?

First, the prisoner’s dilemma involves two parties; the social dilemma involves several people. People behave more competitively in groups than in two-person situations.60

Second, the costs of defection are spread out, rather than concentrated upon one person. Simply stated, when one person makes a self-interested choice and others choose to cooperate, everyone but the defector absorbs some (but not all) of the cost. Thus, the defecting person can rationalize that everyone is suffering a little bit, rather than a lot. This mindset may lead people to be more inclined to serve their own interests.

Third, social dilemmas are riskier than prisoner’s dilemmas. In the two-person dilemma, a certain minimal payoff to the parties can be anticipated in advance. The same is not true in a social dilemma. The worst-case scenario is when the negotiator chooses to cooperate and everyone else defects. The costs of this situation are great. Greater risk and more uncertainty lead people to behave in a more self-interested, competitive fashion.

Fourth, social dilemmas provide anonymity that prisoner’s dilemmas do not. Whereas anonymity is impossible in two-party situations, in social dilemmas people can “hide among the group.” When people feel less accountable, they are more inclined to behave in a self-interested, competitive fashion.

Finally, people in social dilemmas have less control over the situation. In a classic, two-party prisoner’s dilemma, people can directly shape and modify the behavior of the other person. Specifically, by choosing defection, one person may punish the other; by choosing cooperation, he or she can reward the other. This is the beauty of the tit-for-tat strategy. However, in a social dilemma, if someone defects, one person cannot necessarily punish the defector in the next period without also affecting everyone else, and as we have seen, the costs of defection are spread out. For example, consider a classic social dilemma as illustrated by the Organization of the Petroleum Exporting Countries (OPEC). OPEC is a group of mostly Middle Eastern nations that have all agreed to limit their production of oil. Lowering the volume of available oil raises its price. Obviously, each member of OPEC has an incentive to increase its production of oil, thus creating greater profit for itself. However, if all members violate the OPEC agreement and increase the production of oil, the supply of oil rises, its price falls, and profits drop for the entire group.

The Tragedy of the Commons

Imagine you are a farmer. You own several cows and share a grazing pasture, known as a “commons,” with other farmers. One hundred farmers share the pasture. Each farmer is allowed to have one cow graze. Because the commons is not policed, it is tempting for you to add one more cow. By adding another cow, you can double your utility, and no one will really suffer. If everyone does the same thing, however, the commons will be overrun and the grazing area depleted. The cumulative result will be disastrous. What should you do in this situation if you want to keep your family secure?

The analysis of the tragedy of the commons61 may be applied to many real-world problems, such as pollution, use of natural resources, and overpopulation. In these situations, people are tempted to maximize their own gain, reasoning that their pollution, their failure to vote, or their use of polystyrene cups will not have a measurable impact on others. However, if everyone engages in this behavior, the collective outcome is disastrous: Air will be unbreathable, not enough votes will support a particular candidate in an election, and landfills will be overrun. Thus, in the social dilemma, the rational pursuit of self-interest produces collective disaster.
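
A small simulation conveys the commons arithmetic. The payoff numbers below are illustrative assumptions of ours: each cow is worth 1.0 on an uncrowded pasture, and the value of every cow erodes once the herd exceeds the pasture’s capacity.

# The commons: 100 farmers share a pasture that comfortably supports 100
# cows; beyond that, crowding erodes the value of every cow. The numbers
# are illustrative assumptions, not from the text.
CAPACITY = 100

def value_per_cow(total_cows):
    return max(0.0, 1.0 - 0.01 * max(0, total_cows - CAPACITY))

def my_payoff(my_cows, others_cows):
    return my_cows * value_per_cow(my_cows + others_cows)

# If the other 99 farmers graze one cow each, adding a second cow pays:
print(my_payoff(1, 99))   # 1.0
print(my_payoff(2, 99))   # 1.98 -- defection roughly doubles my payoff
# But if all 100 farmers add a second cow, the pasture is destroyed:
print(my_payoff(2, 198))  # 0.0 -- everyone is worse off than before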

In the social dilemma situation, each person makes behavioral choices similar to those in the prisoner’s dilemma to benefit oneself or the group. As in the prisoner’s dilemma, the choices are referred to as “cooperation” and “defection.” The defecting choice always results in better personal outcomes, at least in the immediate future, but universal defection results in poorer outcomes for everyone than with universal cooperation.

A hallmark characteristic of social dilemmas is that the rational pursuit of self-interest is detrimental to collective welfare. This factor has serious and potentially disastrous implications. (In this sense, social dilemmas contradict the principle of hedonism and laissez-faire economics.) Unless some limits are placed on the pursuit of personal goals, the entire society may suffer.

Types of Social Dilemmas

The two major forms of the social dilemma are resource conservation dilemmas (also known as collective traps) and public goods dilemmas (also known as collective fences).62 In the resource conservation dilemma, people take or harvest resources from a common pool (like the farmers in the commons). Examples of the detrimental effects of individual interest include pollution, overharvesting, burning of fossil fuels, water shortages, and negative advertising (see Exhibits 11-3a and 11-3b for examples). The defecting choice occurs when people consume too much. For groups to sustain themselves, the rate of consumption cannot exceed the rate of replenishment of resources.

In public goods dilemmas, people contribute or give resources to a common pool or community. Examples include donating to public radio and television, paying taxes, voting, doing committee work, and joining unions. The defecting choice is to not contribute. Those who fail to contribute are known as defectors or free riders. Those who pay while others free ride are affectionately known as suckers.

Think of resource conservation dilemmas as situations in which people take things; and think of public goods dilemmas as situations in which people must contribute. Moreover, both kinds of dilemmas—taking too much and failing to contribute—can occur within an organization or between different organizations (see Exhibit 11-4 for examples).

How to Build Cooperation in Social Dilemmas

Most groups in organizations could be characterized as social dilemma situations.63 Members are left to their own devices to decide how much to take or contribute for common benefit. Consider an organization in which access to supplies and equipment—such as computers, photocopy paper, stamps, and envelopes—is not regulated. Each member may be tempted to overuse or hoard resources, thereby contributing to a rapid depletion of supply.

Many individual characteristics of people have been studied, such as gender, race, Machiavellianism, status, and age.64 Few, if any, individual differences reliably predict behavior in dilemma games. In fact, people cooperate more than rational analysis would predict. Many investigations use a single trial or a fixed number of trials, in which the rational strategy is solid defection. When the game is infinite or the number of trials is indefinite, however, people cooperate less than they should. What steps can the negotiator take to build greater cooperation and trust among organization members? Two major types of approaches for maximizing cooperation are structural strategies (which are often institutional changes) and psychological strategies (which are usually engaged in by the organizational actor; see Exhibit 11-5).

Structural Strategies

Structural strategies involve fundamental changes in the way that social dilemmas are constructed. They are usually the result of thoughtful problem solving and often produce a change in incentives.

Align Incentives

Monetary incentives for cooperation, privatization of resources, and a monitoring system increase the incidence of cooperation. For example, by putting in “high-occupancy vehicle” lanes on major highways, single drivers are motivated to carpool. However, realignment of incentives can be time consuming and expensive.

Often, defectors are reluctant to cooperate because the costs of cooperation seem exorbitantly high. For example, people often defect by not paying their parking tickets because the price is high and they have several tickets. In some cases, city officials introduce amnesty days for delinquent parking tickets, whereby people can cooperate at a cost lower than they expected. Some U.S. cities have adopted similar policies to induce people to return borrowed library books.

Cooperation can also be induced through reward and recognition in organizations. Recognition awards, such as gold stars, employee-of-the-month awards, and the like, are designed to induce cooperation rather than defection in a variety of organizational social dilemmas.

In some instances, cooperation can be induced by increasing the risk associated with defection. For example, some people do not pay their state or federal income tax in the United States. This behavior is illegal, and if a defector is caught, he or she can be convicted of a crime. The threat of spending years in jail often lessens the temptation of defection. However, most tacit negotiations in organizations are not policed in this fashion, and therefore, defection is more tempting for would-be defectors.

Monitor Behavior

When we monitor people’s behavior, they often conform to group norms. The same beneficial effects also occur when people monitor their own behavior. For example, when people meter their water consumption during a water shortage, they use less water.65 Moreover, people who meter their water usage express greater concern with the collective costs of overconsumption during a drought.

One method of monitoring behavior is to elect a leader. For example, people often favor electing a leader when they receive feedback that their group has failed at restricting harvests from a collective resource.66 When a leader is introduced into a social dilemma situation, especially an autocratic leader, individual group members might fear restriction of their freedom.67 People are more reluctant to install leaders in public goods situations (contributing) than in common resource situations (taking) because it is more threatening to give up decision freedom over private property than over collective property.68

Regulation

Regulation involves government intervention to correct market imperfections, with the idea of improving social welfare. Examples include rationing, in which limits are placed on access to a common-pool resource (e.g., water use). Regulation also occurs in other markets, such as agriculture. The telephone industry in the United States is a heavily regulated industry. In 1934, Congress created the Federal Communications Commission (FCC) to oversee all wire and radio communication (e.g., radio, broadcast, telephone). Even though regulation does not always result in a system that encourages responsible behavior (e.g., the moral hazard problem created by the Federal Deposit Insurance Corporation system), the intent of regulation is to protect public (social) interests.

Privatization

The basic idea of privatization is to put public resources under the control of specific individuals or groups, that is, to put public lands in private hands. The rationale is that public resources will be better protected if they are under the control of private groups or individuals. When the Toronto city council privatized the city’s garbage collection services, it saved $11 million, reduced citizen complaints, and decreased law violations.69

Tradable Permits

Tradable environmental allowance (TEA) governance structures are another way of navigating social dilemmas. In TEA arrangements, instead of competing outright for scarce resources, companies purchase rights, such as the right to pollute or the right to use scarce resources.70 Users treat these rights as they would conventional property and thus conserve resources carefully.71 Tradable permits have been successfully used for managing fisheries, water supply, and air and water pollution in many different countries.72 For example, in the fishing industry, the total allowable catch (TAC) is set by government agencies and subsequently allocated to associations or individual users. As in the case of pollution, these allocations can be traded by individuals or companies.

Psychological Strategies

In contrast to structural strategies, which often require an act of government or layers of bureaucracy to enact, psychological strategies are inexpensive and only require the wits of the influence agent.

Psychological Contracts

Legal contracts involve paperwork and are similar to the deterrence-based trust mechanisms we discussed in Chapter 6. In contrast, psychological contracts, commonly known as "handshake deals," are not binding in a court of law but create psychological pressure to commit. People are more likely to cooperate when they promise to cooperate. Although such promises are nonbinding and are therefore "cheap talk," people nevertheless act as if they are binding. The reason for this behavior, according to the norm of commitment, is that people feel psychologically committed to follow through with their word.73 The norm of commitment is so powerful that people often do things that are completely at odds with their preferences or that are highly inconvenient. For example, once people agree to let a salesperson demonstrate a product in their home, they are more likely to buy it. Homeowners are more likely to consent to having a large (over ten feet tall), obtrusive sign reading "Drive Carefully" in their front yard when they have agreed to a small request made the week before.74

Economics

Our behavior in social dilemmas is influenced by our perceptions about what kinds of behavior are appropriate and expected in a given context. In an intriguing examination of this idea, people engaged in a prisoner's dilemma task. In one condition the game was called the "Wall Street game," and in another condition the game was called the "Community game."75 Otherwise, the game, the choices, and the outcomes were identical. Although rational analysis predicts that defection is the optimal strategy no matter what the name, cooperation was in fact three times as high in the Community game as in the Wall Street game, indicating that people are sensitive to situational cues as trivial as the name of the game. Indeed, people behave more competitively in social dilemmas involving economic decisions than in those involving noneconomic decisions.76 People who major in economics, or who have taken multiple economics courses, keep more money in allocation tasks than those who have not taken economics courses.77 Education in economics is associated with greater acceptance of greed.

Communication

A key determinant of cooperation is communication.78 When people are allowed to communicate with the members of the group prior to making their choices, cooperation increases dramatically.79 The type of communication matters as well. Task-related communication (as opposed to nontask-related communication) promotes greater cooperation by activating interpersonal norms related to fairness and trust.80

Two reasons explain this increase in cooperation.81 First, communication enhances group identity or solidarity. Second, communication allows group members to make public commitments to cooperate. Verbal commitments indicate the willingness of others to cooperate; they reduce the uncertainty people have about others and provide a measure of reassurance to decision makers. Of the two explanations, the commitment factor is the more important.82

In our investigations of the relative effectiveness of verbal face-to-face communication compared to written-only or no communication, people who communicate face-to-face are much more likely to reach a mutually profitable deal because they are able to coordinate on a price above each party's BATNA.83 Commitments also shape subsequent behavior. People are extremely reluctant to break their word, even when their words are nonbinding. If people are prevented from making verbal commitments, they attempt to make nonverbal ones.

The other reason why communication is effective in engendering cooperation is that it allows group members to develop a shared group identity. Communication allows people to get to know one another and feel more attached to their group. People derive a sense of identity from their relationships to social groups.84 When our identity is traced to the relationships we have with others in groups, we seek to further the interests of these groups. This identification leads to more cooperative, or group-welfare, choices in social dilemmas.

Social identity is often built through relationships. For example, Israeli and Palestinian police officers in the West Bank formed joint police patrols in areas where Jewish settlers and Palestinian residents reside and share the same roads, searching for reckless drivers, drug smugglers, human traffickers, and other criminals. Because crime has no borders, the partnership focused on a higher-order goal.85 In another example, a field study of a drought in California revealed that people were more willing to support authorities' requests to limit their own use of water when they had strong relational bonds to the authorities.86

Personalize Others

People often behave as if they were interacting with an entity or organization rather than a person. For example, an embittered customer claims that the airline refused to refund her when in fact it was a representative of the airline who did not issue a refund. To the extent that others can be personalized, people are more motivated to cooperate than if they believe they are dealing with a dehumanized bureaucracy. Even more important is that people see you as a cooperator. People cooperate more when others have cooperated in a previous situation.87

A simulation of small-firm behavior revealed that as firms grow larger, their interactions come to resemble prisoner's dilemmas, pitting self-interest against cooperation.88 In the simulation, some managers shared a history of coordinating their behavior; others did not. Those who had a history of coordinating their actions were more likely to cooperate in a subsequent prisoner's dilemma situation, and the difference was dramatic: Those with a previous history cooperated in the prisoner's dilemma game about 71% of the time, whereas those without a history cooperated only 15% to 30% of the time.

Still another reason why people cooperate is that they want to believe they are nice. For example, one person attributed his decision to make a cooperative choice in the 20-person prisoner's dilemma game to the fact that he did not want the readers of Scientific American to think he was a defector.89 This behavior is a type of impression management.90 Impression management raises the question of whether people's behavior differs when it is anonymous versus public. The answer appears to be yes. However, public behavior is not always more cooperative than private behavior. For example, negotiators who are accountable to a constituency often bargain harder and are more competitive than when they are accountable only for their own behavior.91

Social Sanctions

Social sanctions are punishments administered by a community or group when defection occurs. Unlike legal sanctions, social sanctions are not economic penalties or fines but may take the form of a reprimand. Longtime U.S. Representative Charles Rangel was the first member of the House to be censured in three decades when the chamber overwhelmingly voted to reprimand him for failing to pay income taxes and misusing his office to solicit fund-raising donations. Censure, the highest form of punishment short of outright expulsion from the House, was an embarrassment to one of the longest-serving and most highly regarded elected officials in the United States, and served as a clear warning against the abuse of power by House members. "I know in my heart I am not going to be judged by this Congress. I'll be judged by my life in its entirety," Rangel said after the punishment was handed down.92

Focus on Benefits of Cooperation

The probability that a person will make a particular choice in a social dilemma is a function of that choice's attractiveness, that is, its perceived ability to return a desirable outcome immediately.93 Our attraction to a choice is usually a reflection of our ability to imagine or mentally simulate good outcomes.94 In a direct examination of people's ability to think positively in a prisoner's dilemma game, participants were instructed to think about alternatives that were "worse" or "better" than what actually happened, and then played additional rounds. The results were startling: Negotiators' subsequent cooperation with their partner was directly related to the number of best-case scenarios they generated, and negotiators who generated worst-case scenarios defected far more often.95 The message? Imagining how good outcomes could be greatly increases cooperation.

How to Encourage Cooperation in Social Dilemmas When Parties Should Not Collude

In the examples thus far, we've suggested ways negotiators can entice others to cooperate. However, in many situations it is illegal for parties to cooperate. Consider the problem of price fixing among companies within an industry. One example concerns how a pharmaceutical company might respond to the entry of a new competitor in a particular drug class. The following principles encourage cooperation in social dilemmas when companies should not privately collude:96

  • Keep your strategy simple. The simpler your strategy, the easier it is for your competitors to predict your behavior. The correspondence between uncertainty and competitive behavior is nearly one-to-one: Greater uncertainty leads to more competitive behavior;97 thus, it helps to minimize your competitors' uncertainty.

  • Signal via actions. The adage that actions speak louder than words applies here. A person in a group who shows unwavering, consistent cooperation can effectively catalyze cooperation in the group because the consistent cooperator shapes the norms of the group.98

  • Do not be the first to defect. Escalating spirals of defection are difficult to recover from, so never defect first (see the sketch following this list).

  • Focus on your own payoffs, not your payoffs relative to others. Social dilemmas trigger competitive motives (as discussed in Chapter 5), and the competitive motive is a desire to "beat" the other party. Instead of trying to win by comparison, focus on your own profits.

  • Be sensitive to egocentric bias. Most people view their own behavior as more cooperative than that of others. We see ourselves as more virtuous, more ethical, and less competitive than others see us. When planning your strategy, consider the fact that your competitors will see you less favorably than you perceive yourself.
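
These principles echo the tit-for-tat strategy discussed earlier in the chapter: keep your strategy simple and predictable, signal through actions, and never defect first. A minimal simulation sketch, using standard illustrative prisoner's dilemma payoffs (temptation = 5, mutual cooperation = 3, mutual defection = 1, sucker = 0; these values are assumptions, not taken from this chapter's studies):

    # A minimal sketch of a repeated prisoner's dilemma.
    # (my move, their move) -> my payoff; "C" = cooperate, "D" = defect.
    PAYOFF = {
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def tit_for_tat(opponent_history):
        """Cooperate first; afterward, mirror the opponent's last move."""
        return "C" if not opponent_history else opponent_history[-1]

    def always_defect(opponent_history):
        return "D"

    def play(strategy_a, strategy_b, rounds=100):
        a_hist, b_hist, a_score, b_score = [], [], 0, 0
        for _ in range(rounds):
            a, b = strategy_a(b_hist), strategy_b(a_hist)
            a_score += PAYOFF[(a, b)]
            b_score += PAYOFF[(b, a)]
            a_hist.append(a)
            b_hist.append(b)
        return a_score, b_score

    print(play(tit_for_tat, tit_for_tat))    # (300, 300): stable mutual cooperation
    print(play(tit_for_tat, always_defect))  # (99, 104): defection drags both down

Against itself, the simple, transparent strategy sustains full cooperation; once one side defects first, both sides end up far worse off than mutual cooperation would have left them.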

Escalation of Commitment

Suppose you make a small investment in a start-up company that seems to have great potential. After the first quarter, you learn that the company suffered an operating loss. You cannot recover your investment; your goal is to maximize your long-term wealth. Should you continue to invest in the company? Consider two possible choices in this situation:

  1. Accepting the loss of the small amount of money you have already invested.

  2. Taking additional risk by investing more money in the company, which could turn around and make a large profit or plummet even further.

The reference point effect described in Chapter 2 would predict that most negotiators would continue to invest in the company because they have already adopted a “loss frame” based upon their initial investment. Suppose you recognize that the company in question did not perform well in the first period and you consider your initial investment to be a sunk cost—that is, water under the bridge. In short, you adapt your reference point. Now, ask yourself which of the following would be the wiser choice:

  1. Do not invest in the company (a sure outcome of $0).

  2. Take a gamble and invest more money in a company that has not shown good performance in the recent past.

Under these circumstances, most people choose not to invest in the company because they would rather have a sure thing than a loss. A negotiator's psychological reference point also influences the tendency to fall into the escalation trap. Recall that negotiators are risk seeking when it comes to losses and risk averse for gains. When negotiators see themselves as trying to recover from a losing position, they are likely to take greater risks than if they see themselves as starting with a clean slate. Like the gambler in Las Vegas, negotiators who are hoping to hold out longer than their opponent (as in a strike) have fallen into the escalation trap. Most decision makers and negotiators, however, fail to adapt their reference point and continue to make risky decisions, which often prove unprofitable.
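
This reversal can be made concrete with the prospect theory value function, which is convex (risk seeking) for losses and concave (risk averse) for gains. A minimal sketch, using Kahneman and Tversky's commonly cited parameter estimates (alpha = 0.88, loss-aversion lambda = 2.25); the dollar amounts are hypothetical illustrations:

    # Why a loss frame invites risk seeking: the prospect theory value function.
    ALPHA, LAMBDA = 0.88, 2.25  # commonly cited parameter estimates

    def value(x):
        """Subjective value of an outcome x relative to the reference point."""
        return x ** ALPHA if x >= 0 else -LAMBDA * ((-x) ** ALPHA)

    # Loss frame: the reference point is the original stake, so the sure option
    # reads as "lose $10,000" and the gamble as 50/50 break even or lose $20,000.
    sure_loss   = value(-10_000)
    gamble_loss = 0.5 * value(0) + 0.5 * value(-20_000)
    print(gamble_loss > sure_loss)   # True: the gamble feels better; escalate

    # Adapted reference point: the sunk cost is written off, so the sure option
    # reads as "$0" and the gamble as 50/50 gain $10,000 or lose $10,000.
    sure_zero    = value(0)
    gamble_mixed = 0.5 * value(10_000) + 0.5 * value(-10_000)
    print(sure_zero > gamble_mixed)  # True: the sure thing feels better; stop

The same objective gamble is accepted under the loss frame and rejected once the reference point is adapted, which is precisely the reversal described above.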

The escalation of commitment refers to the unfortunate tendency of negotiators to persist with a losing course of action, even in the face of clear evidence that their behaviors are not working and the negotiation situation is quickly deteriorating. The two types of escalation dilemmas are personal and interpersonal. In both cases, the dilemma is revealed when a person would do something different if he or she had not already been involved in the situation.

Personal escalation dilemmas involve only one person, and the dilemma concerns whether to continue with what appears to be a losing course of action or to cut one’s losses. Continuing to gamble after losing a lot of money, investing money in a car or house that continues to malfunction or deteriorate, and waiting in long lines that are not moving are examples of personal escalation dilemmas. To stop, in some sense, is to admit failure and accept a sure loss. Continuing to invest holds the possibility of recouping losses.

Interpersonal escalation dilemmas involve two or more people, often in a competitive relationship, such as negotiation. Union strikes are often escalation dilemmas, and so is war. Consider the situation faced by Lyndon Johnson during the early years of the Vietnam War. Johnson received the following memo from George Ball, then the Undersecretary of State:

The decision you face now is crucial. Once large numbers of U.S. troops are committed to direct combat, they will begin to take heavy casualties in a war they are ill-equipped to fight in a non-cooperative if not downright hostile countryside. Once we suffer large casualties, we will have started a well-nigh irreversible process. Our involvement will be so great that we cannot—without national humiliation—stop short of achieving our complete objectives. Of the two possibilities I think humiliation will be more likely than the achievement of our objectives—even after we have paid terrible costs.99

In escalation dilemmas, negotiators commit further resources to what appears to unbiased observers to be a failing course of action. In most cases, people fall into escalation traps because initially the situation does not appear to be a losing enterprise. The situation becomes an escalation dilemma when the persons involved in the decision would make a different decision if they had not been involved up until that point or when objective decision makers would not choose that course of action. Often in escalation situations, a decision is made to commit further resources to “turn the situation around,” such as in the case of gambling (personal dilemma) or making a final offer (interpersonal dilemma). The bigger the investment and the more severe the possible loss, the more prone people are to try to turn things around.

The escalation of commitment process is illustrated in Exhibit 11-6.100 In the first stage, a person is confronted with questionable or negative outcomes (e.g., a rejection of one's offer by the counterparty, a decrease in market share, a poor performance evaluation, a malfunction, or hostile behavior from a competitor). This external event prompts a reexamination of the negotiator's current course of action, in which the utility of continuing is weighed against the utility of withdrawing or changing course. This decision determines the negotiator's commitment to his or her current course of action. If this commitment is low, the negotiator may make a concession, engage in integrative (rather than distributive) negotiations, or possibly revert to his or her BATNA. If this commitment is high, however, the negotiator escalates commitment and cycles through the decision stages again. Negotiators who have previously sunk investments in an outside option develop a heightened sense of entitlement, even when the outside option has been forgone. Negotiators who feel entitled set higher aspirations and behave in a more opportunistic fashion, including exploitation of others.101

When negotiators receive indications that the outcomes of a negotiation may be negative, they should ask themselves, "What are the personal rewards for me in this situation?" In many cases, the process of the negotiation itself, rather than its outcome, becomes the reason for commencing or continuing negotiations. This reasoning leads to a self-perpetuating reinforcement trap, wherein the rewards for continuing are not aligned with the negotiator's actual objectives. Ironically, people with high, rather than low, self-esteem are more likely to fall victim to these psychological forces; people with high self-esteem have much more invested in their ego and its maintenance than do those with low self-esteem.102 Sometimes face-saving concerns lead negotiators to escalate commitment; some negotiators worry they will look silly or stupid if they back down from an initial position. Ego protection often becomes a higher priority than the success of the negotiation.

Avoiding the Escalation of Commitment in Negotiations

Most negotiators do not realize they are in an escalation dilemma until it is too late. Complicating matters is the fact that in most escalation dilemmas, a negotiator (like a gambler) might have some early "wins" or good signs that reinforce his or her initial position. How can a negotiator best get out of an escalation dilemma?

The best advice is to adopt a policy of risk management: Be aware of the risks involved in the situation, learn how to best manage these risks, and set limits, effectively capping losses at a tolerable level. It is also important to find ways to get information and feedback about the negotiation from a different perspective.

Set Limits

Ideally, a negotiator should have a clearly defined BATNA. At no point should a negotiator make or accept an offer that is worse than his or her BATNA.

Avoid Decision Myopia

A negotiator should get several perspectives on the situation. Ask people who are not personally involved in the negotiation for their appraisal. Be careful not to bias their evaluation with your own views, hopes, expectations, or other details, such as the cost of extricating yourself from the situation; doing so will only predispose them toward your point of view. You want an honest, critical assessment.

Recognize Sunk Costs

Probably the most powerful way to escape escalation of commitment is to simply recognize and accept sunk costs, which are basically water under the bridge: money (or other commitments) previously spent that cannot be recovered. It is often helpful for negotiators to consider eliminating the project, product, or program entirely. In this way, the situation is redefined as one in which a decision about whether to invest will be made immediately: If you were making the initial decision today, would you make the investment currently under consideration, or would you choose another course of action? If the decision is not one you would choose with your current knowledge, think about how to terminate the project.
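
One way to operationalize this redefinition is a decision rule that evaluates only the incremental future cash flows of continuing and never consults prior spending. A minimal sketch; the discount rate and project numbers are hypothetical illustrations:

    # A forward-looking continue/terminate rule that ignores sunk costs.

    def npv(cash_flows, rate):
        """Net present value of future cash flows at a given discount rate."""
        return sum(cf / (1 + rate) ** t
                   for t, cf in enumerate(cash_flows, start=1))

    def should_continue(remaining_investment, future_cash_flows, rate=0.10):
        """Decide as if making the initial decision today; sunk costs never appear."""
        return npv(future_cash_flows, rate) > remaining_investment

    # Suppose $5M has already been spent; that figure is irrelevant. Completing
    # the project costs $2M more and should return $0.9M per year for three years.
    print(should_continue(2_000_000, [900_000] * 3))  # True: finish the project
    # The same project with weaker expected returns should be terminated,
    # no matter how much has already been spent.
    print(should_continue(2_000_000, [500_000] * 3))  # False: walk away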

Diversify Responsibility and Authority

In some cases, it is necessary to remove or replace the original negotiators from deliberations precisely because they are biased. One way to carry out such a removal is with an external review: appointing someone who does not have a personal stake in the situation.

Redefine the Situation

Often, it helps to view the situation not as the "same old problem" but as a new one. Washington, D.C., teachers dramatically redefined the situation when they ratified a new contract in 2010 that expanded administrators' ability to remove poor teachers from classrooms based on student results, an idea once unthinkable to the educators. Initially, many of the proposals, such as performance pay linked to test-score growth and the weakening of seniority and tenure, were vehemently opposed. But through persistent efforts, the ideas were incorporated into mainstream thinking, effectively changing the status quo.103

Conclusion

We discussed several noncooperative bargaining situations that are all characterized by an absence of contracts and typical enforcement mechanisms: prisoner's dilemmas, ultimatum dilemmas, dictator games, trust games, volunteer dilemmas, and multiparty or social dilemmas. The common theme across these games is that each models the real-life choice between acting in a self-interested, opportunistic fashion and acting in a trusting fashion that risks exploitation. Pie-expanding and pie-slicing goals in the two-person prisoner's dilemma can be achieved via the tit-for-tat strategy, but tit-for-tat works only with two players in a repeated game. Many tacit negotiations within and between organizations involve more than two players; these are called social dilemmas. The best ways to encourage cooperation in social dilemmas are to align incentives, monitor behavior, use regulation and privatization, use tradable permits, communicate with involved parties, personalize others, and focus on the benefits of cooperation. Escalation dilemmas occur when people invest in what is (by any objective standard) a losing course of action. People can de-escalate by setting limits, getting several perspectives, recognizing sunk costs, diversifying responsibility, and redefining the situation.
