Chapter 2. Simple Ideas, Complex Organizations

America's East Coast welcomed a crisp, sunny fall morning on September 11, 2001. For airline passengers in the Boston–Washington corridor, the perfect fall weather offered prospects of on-time departures and smooth flights. The promise would be broken for four flights, all bound for California. Like Pearl Harbor, 9/11 was a day that will live in infamy, a tragedy that changed America's sense of itself and the world. If we probe the how and why of 9/11, we find determined and resourceful terrorists, but we also find vulnerability and errors in organizations charged with detecting and preventing such catastrophes.

American Airlines flight 11 was first in the air, departing from Boston on time at 8:00 AM. United 175 followed at 8:15, ten minutes behind schedule. American 77, after a twenty-minute delay, left Washington-Dulles at 8:20 AM. Delayed forty minutes by congestion at Newark, United flight 93 departed at 8:42 AM.

The first sign that something was amiss for American 11 came less than fifteen minutes into the flight, when its pilots stopped responding to instructions from air traffic controllers. For United 175, signs surfaced when the aircraft changed beacon codes, deviated from its assigned altitude, and failed to respond to New York air traffic controllers. American 77 departed from its assigned course at 8:54 AM, and attempts to communicate with the plane were futile. The last flight, United 93, followed a routine trajectory until the aircraft dropped precipitously. The captain radioed "Mayday," and controllers heard sounds of a violent struggle in the cockpit.

All four planes had been hijacked by teams of Al Qaeda terrorists who had managed to board the planes in spite of a security checkpoint system aimed at preventing such occurrences. In a meticulously planned scheme, the terrorists turned commercial aircraft into deadly missiles. Each aircraft was aimed at a high-profile target—New York's World Trade Center, the Pentagon, and the nation's Capitol. One by one, the planes slammed into their targets with devastating force. Only United 93 failed to reach its objective. A heroic passenger effort to regain control of the plane failed but thwarted the terrorists' intentions to ram the White House or Capitol building.

Why did no one foresee such a catastrophe? In fact, some had. As far back as 1993, security experts had envisioned an attempt to destroy the World Trade Center using airplanes as weapons. Such fears were reinforced when a suicidal pilot crashed a small private plane onto the White House lawn in 1994. But the mind-set of principals in the national security network was riveted on prior hijacks, which had almost always ended in negotiations. The idea of a suicide mission, using commercial aircraft as missiles, was never incorporated into homeland defense procedures.

America's homeland air defense system fell primarily under the jurisdiction of two government agencies: the Federal Aviation Administration (FAA) and the North American Aerospace Defense Command (NORAD). As the events of 9/11 unfolded, it became clear that these agencies' procedures to handle hijackings were inadequate. The controller tracking American 11, for example, began to suspect a hijacking early on and relayed the information to regional FAA headquarters, which began to follow its hijack protocol. As part of that protocol, a designated hijack coordinator could have requested a military fighter escort for the hijacked aircraft—but none was requested until too late.

At the same time, communication channels fell behind fast-moving events. Confusion at FAA headquarters resulted in a delay in informing NORAD about United 93. An interagency teleconference to provide coordination between the military and the FAA was hastily put together, but technical delays kept the FAA from participating. When NORAD asked for FAA updates, it got either no answer or incorrect information. Long after American 11 crashed into the World Trade Center, NORAD thought the flight was still headed toward Washington, D.C.

In the end, nineteen young men were able to outwit America's homeland defense systems. We can explain their success in part by pointing to their fanatical determination, meticulous planning, and painstaking preparation. Looking deeper, we can see a dramatic version of an old story: human error leading to tragedy. But if we look deeper still, we find that even the human-error explanation is too simple. In organizational life, there are almost always systemic causes upstream of human failures, and the events of 9/11 are no exception.

The nation had a web of procedures and agencies aimed at detecting and monitoring potential terrorists. Those systems failed, as did procedures designed to respond to aviation crises. Similar failures have marked other well-publicized disasters: nuclear accidents at Chernobyl and Three Mile Island, for example, and the botched response to Hurricane Katrina on the Gulf Coast in 2005. Each event illustrates a chain of error, miscommunication, and misguided actions.

Events like 9/11 and Katrina make headlines, but similar errors and failures happen every day. They rarely make front-page news, but they are all too familiar to people who work in organizations. The problem is that organizations are complicated, and communication among them adds another tangled layer. Reading messy situations accurately is not easy. In the remainder of this chapter, we explain why. We discuss how the fallacies of human thinking can obscure what's really going on and lead us astray. Then we describe some peculiarities of organizations that make them so difficult to figure out and manage.

COMMON FALLACIES IN EXPLAINING ORGANIZATIONAL PROBLEMS

Albert Einstein once said that a thing should be made as simple as possible, but no simpler. When we ask students and managers to analyze cases like 9/11, they often make things simpler than they really are. They do this by relying on one of three misleading, oversimplified, one-size-fits-all concepts.

The first and most common is blaming people. This approach casts everything in terms of individual blunders. Problems result from bad attitudes, abrasive personalities, neurotic tendencies, stupidity, or incompetence. It's an easy way to explain anything that goes wrong. Once Enron went bankrupt, the hunt was on for someone to blame, and the top executives became the target of reporters, prosecutors, and talk-show comedians. One CEO said, "We want the bad guys exposed and the bad guys punished" (Toffler and Reingold, 2004, p. 229).

As children, we learned it was important to assign blame for every broken toy, stained carpet, or wounded sibling. Pinpointing the culprit is comforting. Assigning blame resolves ambiguity, explains mystery, and makes clear what must be done next: punish the guilty. Enron had its share of culpable individuals, some of whom eventually went to jail. But there is a larger story about the organizational and social context that set the stage for individual malfeasance. Targeting individuals while ignoring larger system failures oversimplifies the problem and does little to prevent its recurrence.

When it is hard to identify a guilty individual, a second popular option is blaming the bureaucracy. Things go haywire because organizations are stifled by rules and red tape—or because a lack of clear goals and roles creates chaos. One or the other explanation almost always applies. If things are out of control, then the system needs clearer rules and procedures, as well as tighter job descriptions. The 9/11 attacks could have been thwarted if agencies had had better protocols for responding to such an assault. Tighter financial controls could have prevented Enron's free fall. The problem is that piling on rules and regulations typically leads to bureaucratic rigidity. Rules inhibit freedom and flexibility, stifle initiative, and generate reams of red tape. Could Enron have achieved its status as America's most innovative company if it had played by the old rules? When things become too tight, the solution is to "free up" the system so red tape and rigid rules don't stifle creativity and bog things down. But many organizations vacillate endlessly between being too loose and too tight.

A third fallacy attributes problems to thirsting for power. In the case of Enron, key executives were more interested in getting rich and expanding their turf than in advancing the company's best interests. The various agencies dealing with 9/11 all struggled prior to the disaster for their share of scarce federal resources. This view sees organizations as jungles teeming with predators and prey. Victory goes to the more adroit, or the more treacherous. Political games and turf wars cause most organizational problems. You need to play the game better than your opponents—and watch your backside.

Each of these three perspectives contains a kernel of truth but oversimplifies a knottier reality. Blaming people points to the perennial importance of individual responsibility. Some problems are caused by personal characteristics: rigid bosses, slothful subordinates, bumbling bureaucrats, greedy union members, or insensitive elites. Much of the time, though, condemning individuals blocks us from seeing system weaknesses and offers few workable options. If, for example, the problem really is someone's abrasive or pathological personality, what do we do? Even psychiatrists find it hard to alter character disorders, and firing everyone with a less-than-ideal personality is rarely a viable option. Training can go only so far in preparing people to carry out their responsibilities perfectly every time.

The blame-the-bureaucracy perspective starts from a reasonable premise: organizations are created to achieve specific goals. They are most effective when goals and policies are clear (but not excessive), jobs are well defined (but not constricting), control systems are in place (but not oppressive), and employees behave prudently (but not callously). If organizations always behaved that way, they would presumably work a lot better than most do. In practice, this perspective is better at explaining how organizations should work than why they often don't. Managers who cling to facts and logic become discouraged and frustrated when confronted by intractable irrational forces. Year after year, we witness the introduction of new control systems, hear of new ways to reorganize, and are dazzled by emerging management methods and gurus. Yet old problems persist, seemingly immune to every rational cure we devise. As March and Simon point out, rationality has limits.[4]

The thirst-for-power view highlights enduring, below-the-surface features of organizations. Its dog-eat-dog logic offers a plausible analysis of almost anything that goes wrong. People both seek and despise power but find it a convenient way to explain problems. Within hours of the 9/11 terror attacks, a senior FBI official called Richard Clarke, America's counterterrorism czar, to tell him that many of the terrorists were known members of Al Qaeda. "How the f__k did they get on board then?" Clarke exploded. "Hey, don't shoot the messenger. CIA forgot to tell us about them." In the context of the long-running battle between the FBI and CIA, the underlying message of blame was clear: the CIA's self-interested concern with its own power caused this catastrophe.

The tendency to blame what goes wrong on people, the bureaucracy, or the thirst for power is part of our mental wiring. But there's much more to understanding a complex situation than assigning blame. Certain universal peculiarities of organizations make them especially difficult to sort out.

PECULIARITIES OF ORGANIZATIONS

Human organizations can be exciting and challenging places. At least, that's how they are often depicted in management texts, corporate annual reports, and fanciful managerial thinking. But in reality they can be deceptive, confusing, and demoralizing. It is a mistake to assume that organizations are either snake pits or rose gardens (Schwartz, 1986). Managers need to recognize characteristics of life at work that create opportunities for the wise as well as traps for the unwary. A case from the public sector provides a typical example:

Helen Demarco had other options, but she couldn't see them. She and Paul Osborne both thought they were on track. They were tripped up in part by human fallibility but, even more, by how hard it can be to understand organizations. The first step in managerial wisdom and artistry is to recognize key characteristics of organizations. Otherwise, managers are persistently surprised and caught off guard.

First, organizations are complex. They are populated by people, whose behavior is notoriously hard to predict. Large organizations in particular include a bewildering array of people, departments, technologies, and goals. Moreover, organizations are open systems dealing with a changing, challenging, and erratic environment. Things can get even knottier across multiple organizations. The 9/11 disaster resulted from a chain of events that involved several separate systems. Almost anything can affect everything else in collective activity, generating causal knots that are hard to untangle. Even after an exhaustive investigation, our picture of 9/11 is woven from sundry evidence, conflicting testimony, and conjecture.

Second, organizations are surprising. What you expect is often not what you get. Paul Osborne saw his plan as a bold leap forward; Helen and her group considered it an expensive albatross. In their view, Osborne made matters worse by trying to improve them. He might have achieved better results by spending more time with his family and leaving things at work alone. And imagine the shock of Enron's executives when things fell apart. Until shortly before the bottom fell out, the company's leadership team appeared confident they were building a pioneering model of corporate success. Many analysts and management professors shared their optimism.

The solution to yesterday's problems often creates future obstacles. A friend of ours was president of a retail chain. In the firm's early years, he had a problem with two sisters who worked in the same store. To prevent this from recurring, he established a nepotism policy prohibiting members of the same family from working for the company. Years later, two key employees met at work, fell in love, and began to live together. The president was stunned when they asked if they could get married without being fired. As in this case, today's sensible choice may turn into tomorrow's regret. Taking action in a cooperative venture is like shooting a wobbly cue ball into a scattered array of self-directed billiard balls. Balls career in so many directions that it is impossible to know how things will eventually sort out.

Third, organizations are deceptive. They camouflage mistakes and surprises. After 9/11, America's homeland defense organizations tried to conceal their lack of preparedness and confusion for fear of revealing strategic weaknesses. Enron raised financial camouflage to an art form with a series of sophisticated partnerships (carrying Star Wars names like Chewco, Jedi, and Kenobe). Helen Demarco and her colleagues disguised obfuscation as technical analysis in hopes of fooling the boss.

It is tempting to blame deceit on individual character flaws. Yet Helen Demarco disliked fraud and regretted cheating—she simply believed she had no alternative. Sophisticated managers know that what happened to Paul Osborne happens all the time. When a quality initiative fails or a promising product tanks, subordinates often either clam up or cover up. They fear that the boss will not listen or will punish them for being insubordinate. Thus early warnings that terrorists might commandeer commercial airliners went unvoiced or unheeded. Internal naysayers at Enron were silenced until dissenters "blew the whistle" publicly. A friend in a senior position in a large government agency put it simply: "Communications in organizations are rarely candid, open, or timely."

Fourth, organizations are ambiguous. Complexity, unpredictability, and deception generate rampant ambiguity, a dense fog that shrouds what happens from day to day. Figuring out what is really going on in businesses, hospitals, schools, or public agencies is not easy. It is hard to get the facts and, if you pin them down, even harder to know what they mean or what to do about them. Helen Demarco never knew how Paul Osborne really felt, how receptive he was to other points of view, or how open he was to compromise. She and her peers piled on more mystery by conspiring to keep him in the dark. As the 9/11 case illustrates, when you incorporate additional organizations into the human equation, uncertainty mushrooms.

Ambiguity has many sources. Sometimes available information is incomplete or vague. Different people may interpret the same information in a variety of ways, depending on mind-sets and organizational doctrines. At other times, ambiguity is intentionally manufactured as a smoke screen to conceal problems or steer clear of conflict. Much of the time, events and processes are so intricate, scattered, and uncoordinated that no one can fully understand—let alone control—the real truth. Exhibit 2.1 lists some of the most important sources of organizational uncertainty.

ORGANIZATIONAL LEARNING

How can lessons be extracted from surroundings that are complex, surprising, deceptive, and ambiguous? It isn't easy. Yet turbulent, rapidly shifting situations require organizations to learn better and faster. Michael Dell, founder and CEO of Dell Computer Corporation, explained it this way: "In our business, the product cycle is six months, and if you miss the product cycle, you've missed the opportunity. In this business, there are two kinds of people, really: the quick and the dead" (Farkas and De Backer, 1996).

With stakes so high, how organizations learn from experience has become a timely topic. Decades ago, scholars debated whether the idea of organizational learning made sense: Could organizations actually learn, or was learning inherently individual? That debate faded as experience confirmed cases in which individuals learned and organizations didn't, or vice versa. Complex firms such as Microsoft, Toyota, and British Airways have "learned" capabilities far beyond individual knowledge. Lessons are enshrined in acknowledged protocols and shared cultural codes and traditions. At the same time, individuals often learn when systems cannot.

From the late 1980s onward, senior officials in China recognized that the nation was heading in two contradictory directions, promoting capitalism economically while defending communism politically. Behind the scenes, party members began an urgent search for a way to bridge the gap between rival ideologies. Publicly, though, the party tamped down dissent and argued that capitalism was one more sign of socialist progress (Kahn, 2002). Most knew the party was on the road to perdition, but the system obscured that reality.

Several perspectives on organizational learning are exemplified in the work of Peter Senge (1990), Barry Oshry (1995), and Chris Argyris and Donald Schön (1978, 1996). Senge sees a core learning dilemma: "We learn best from our experience, but we never directly experience the consequences of many of our decisions" (p. 23). Learning is relatively easy when the link between cause and effect is clear. But complex systems often sever that connection: causes remote from effects, solutions detached from problems, and feedback delayed or misleading (Cyert and March, 1963; Senge, 1990). At home, you flip a switch and the light goes on. In an organization, you flip the switch and nothing happens—or a toilet may flush in a distant building. You are still in the dark, and the user of the toilet is unpleasantly surprised. To understand what is going on, you need to master the system's circular causality.

Senge emphasizes the value of "system maps" that clarify how a system works. Consider the system created by "Chainsaw Al" Dunlap, CEO of Scott Paper in the mid-1990s. Dunlap was proud of his nickname and his turnaround at Scott. He raised profits and market value substantially by slashing head count and cutting frills such as research and development. But he rarely acknowledged Scott's steady loss of market share (Byrne, 1996). It is one of many examples of actions that look good until long-term costs become apparent. A corresponding systems model might look like Exhibit 2.2. The strategy might be cutting training to improve short-term profitability, drinking martinis to relieve stress, offering rebates to entice customers, or borrowing from a loan shark to cover gambling debts. In each case, what seems to work in the moment creates long-term costs down the line.

Oshry (1995) agrees that system blindness is widespread but highlights causes rooted in troubled relationships between groups that have little grasp of what's above or below their level. Top managers feel overwhelmed by complexity, responsibility, and overwork. They are chronically dissatisfied with subordinates' lack of initiative and creativity. Middle managers, meanwhile, feel trapped between contradictory signals and pressures. The top tells them to take risks but then punishes mistakes. Their subordinates expect them to shape up the boss and improve working conditions. Top and bottom tug in opposite directions, causing those in between to feel pulled apart, confused, and weak. At the bottom, workers feel helpless, unacknowledged, and demoralized. "They give us lousy jobs and pay, and order us around—never telling us what's really going on. Then they wonder why we don't love our work." If you cannot step back and see how system dynamics create these patterns, you muddle along blindly, unaware of better options.

Both Oshry and Senge argue that our failure to read system dynamics traps us in a cycle of blaming and self-defense. Problems are always caused by someone else. Unlike Senge, who sees gaps between cause and effect as primary barriers to learning, Argyris and Schön (1978, 1996) emphasize individuals' fears and defenses. As a result, "the actions we take to promote productive organizational learning actually inhibit deeper learning" (1996, p. 281).

According to Argyris and Schön, our behavior obstructs learning because we avoid undiscussable issues and tiptoe around organizational taboos. Our actions often seem to work in the short run because we avoid conflict and discomfort, but we create a double bind. We can't solve problems without dealing with problems we have tried to hide, but tackling them would expose our cover-up. Facing that double bind, Helen Demarco and her colleagues chose to disguise their scheme. The end result is escalating games of sham and deception. This is what happened at Enron, where desperate maneuvers to obscure the truth made the day of reckoning more catastrophic.

COPING WITH AMBIGUITY AND COMPLEXITY

Organizations deal with a complicated and uncertain environment by trying to make it simpler. One approach to simplification is to develop better systems and technology to collect and process information. Another is to break complex issues into smaller chunks and assign slices to specialized individuals or units. Still another approach is to hire or develop professionals with sophisticated expertise in handling thorny problems. These and other methods are helpful but not always sufficient. Despite the best efforts, unanticipated—and sometimes appalling—events still happen. The key to dealing with these events is developing better mental maps to anticipate complicated and unforeseen problems.

You See What You Expect

On April 14, 1994, three years after the first Gulf War ended, two U.S. F-15C fighter jets took off from a base in Turkey to patrol the no-fly zone in northern Iraq. Their mission was to "clear the area of any hostile aircraft" (Snook, 2000, p. 4). The zone had not been violated in more than two years, but Iraqi antiaircraft fire was a continuing risk, and the media speculated that Saddam Hussein might be moving a large force north. At 10:22 AM, the fighter pilots reported to AWACS (Airborne Warning and Control System) controllers that they had made radar contact with two slow, low-flying aircraft. Unable to identify the aircraft electronically, the pilots descended for visual identification. The lead pilot, Tiger 01, spotted two "Hinds"—Soviet-made helicopters used by the Iraqis. He reported his sighting, and an AWACS controller responded, "Copy, 2 Hinds" (p. 6). The fighters circled back to begin a firing run. They informed AWACS they were "engaged," and, at 10:30 AM, shot down the two helicopters.

Destroying enemy aircraft is the fighter pilots' grail. Only later did the two learn that they had destroyed two American UH-60 Black Hawk helicopters, killing all twenty-six U.N. relief workers aboard. How could experienced, highly trained pilots make such an error? Snook (2000) offers a compelling explanation. The two types of aircraft had different paint colors—Hinds tan, Black Hawks forest green—and the Black Hawks had American flags painted on the fuselage. But the Black Hawks' camouflage made them difficult to see against the terrain, particularly for fighters flying very fast at high altitudes. Visual identification required flying at a dangerously low altitude in a mountain-walled valley. The fighter pilots were eager to get above the mountains as quickly as possible. An extensive postmortem confirmed that the Black Hawks would have been difficult to identify. The pilots did the normal human thing in the face of ambiguous perceptual data: they filled in gaps based on what they knew, what they expected, and what they wanted to see. "By the time Tiger 01 saw the helicopters, he already believed that they were enemy. All that remained was for him to selectively match up incoming scraps of visual data with a reasonable cognitive scheme of an enemy silhouette" (p. 80).

Recall that in Chapter One, we described the "blink" process of rapid cognition. The essence of this process is matching situational cues with a well-learned mental model—a "deeply-held, nonconscious category or pattern" (Dane and Pratt, 2007, p. 37). While necessary and useful, quick judgments are not foolproof. Their accuracy depends on available clues, expectations, and patterns in the decision maker's repertory. All of these presented problems for the fighter pilots. The perceptual data were hard to read, and expectations were prejudiced by a key missing clue—no mention of friendly helicopters. Even though situation analysis plays a pivotal role in their training, pilots lacked adequate diagnostic schemata for distinguishing Hinds from Black Hawks. All this made it easy for them to conclude that they were seeing enemy aircraft.

Making Sense of What's Going On

Some events are so clear and unambiguous that it is easy for people to agree on what is going on. Determining if a train is on schedule, if a plane landed safely, or if a clock is keeping accurate time is straightforward. But most of the important issues confronting managers are not so clear-cut. Solid facts and simple problems in everyday life at work are scarce. Will a reorganization work? Was a meeting successful? Why did a consensual decision backfire? In trying to make sense of complicated and ambiguous situations, we—like the F-15C fighter pilots—depend very much on our frames, or mind-sets, to give us a full reading of what we are up against. But snap judgments work only if we have adequately sized up the situation.

Since our interpretations depend so much on our cognitive repertoires, expectations, beliefs, and values, our internal world is as important as what is outside—sometimes more so. The fuzziness of everyday life makes it easy for people to shape the world to conform to their favored internal schemata. As de Becker notes, "Many experts lose the creativity and imagination of the less informed. They are so intimately familiar with known patterns that they may fail to recognize or respect the importance of a new wrinkle" (1997, p. 30). In such cases, snap judgments work against, rather than for, the person who is trying to figure things out.

Managers regularly face an unending barrage of puzzles or "messes." To act without creating more trouble, they must first grasp an accurate picture of what is happening. Then they must move quickly to a deeper level, asking, "What is really going on here?" That's the main objective of teaching pilots the art of situational analysis. But this important step in reading a situation is often overlooked. As a result, managers too often form superficial analyses and leap on the solutions nearest at hand or most in vogue. Market share declining? Try strategic planning. Customer complaints? Put in a quality program. Profits down? Time to reengineer or downsize.

A better alternative is to think, to probe more deeply into what is really going on, and to develop an accurate diagnosis. The process is more intuitive than analytic: "[It] is in fact a cognitive process, faster than we recognize and far different from the step-by-step thinking we rely on so willingly. We think conscious thought is somehow better, when in fact, intuition is soaring flight compared to the plodding of logic" (de Becker, 1997, p. 28). The ability to size up a situation quickly is at the heart of leadership. Admiral Carlisle Trost, former chief of naval operations, once remarked, "The first responsibility of a leader is to figure out what is going on.... That is never easy to do because situations are rarely black or white, they are a pale shade of gray ... they are seldom neatly packaged."

It all adds up to a simple truth that is easy to overlook. The world we perceive is, for the most part, constructed internally. The ideas, or theories, we hold determine whether a given situation is foggy or clear, mildly interesting or momentous, a paralyzing disaster or a genuine learning experience. Personal theories are essential because of a basic fact about human perception: in any situation, there is simply too much happening for us to attend to everything. To help us understand what is going on and what to do next, well-grounded, deeply ingrained personal theories offer two advantages: they tell us what is important and what can be safely ignored, and they group scattered bits of information into manageable patterns.

The Dilemma of Changing or Conserving

To a nonpilot, a commercial airliner's cockpit is a confusing array of controls, switches, and gauges. Yet an experienced pilot can discern the aircraft's status at a glance. Like other professionals, a pilot learns patterns that cluster seemingly fragmented bits of information into a clear picture. The patterns take many hours to learn, but once learned, they help the pilot size things up with ease, speed, and accuracy. In the same way, an experienced manager can read a situation very rapidly, decide what needs to be done, and make it happen.

The good and bad news is that, right or wrong, our theories shield us from confusion, uncertainty, and anxiety. Tiger 01, for example, knew exactly what to do because he believed what he saw. We rely on the theories we have, and, in the heat of the moment, it's not easy to recognize when we are making a big mistake if we feel confident in our judgment. But, as Gladwell writes: "Our snap judgments and first impressions can be educated and controlled" to shift the odds in our favor (2005, p. 15).

This learning needs to happen before we find ourselves in make-or-break situations. When the stakes are high, we have tried every lens we know, and nothing works, we get anxious and stuck. We are caught in a dilemma: holding on to old patterns is ineffective, but developing new mental models is difficult. It is also risky; it might lead to analysis paralysis and further erosion of our confidence and effectiveness.

This dilemma exists even if we see no flaws in our current mind-set, because our theories are self-sealing filters—they block us from recognizing our errors. Extensive research documents the many ways in which individuals spin reality to protect existing beliefs (see, for example, Garland, 1990; Kühberger, 1995; Staw and Hoang, 1995). This helps to explain why Enron's Ken Lay insisted he had done the right thing, even though his company collapsed. Heath and Gonzalez (1995) found that decision makers rely on others more to strengthen preconceived thinking than to gain new information. Tetlock (2000) showed that managers' judgments of performance were influenced by cognitive preferences and political ideologies. Extensive research on the "framing effect" (Kahneman and Tversky, 1979) shows how powerful subtle cues can be. Relatively modest changes in how a problem or decision is framed can have a dramatic impact on how people respond (Shu and Adams, 1995; Gigerenzer, Hoffrage, and Kleinbölting, 1991). Decision makers, for example, tend to respond more favorably to an option that has a "70 percent chance of success" than one that has a "30 percent chance of failure," even though they are statistically identical.

Many of us recognize that our mental maps influence how we interpret the world. Less widely understood is that what we expect often determines what we get. Rosenthal and Jacobson (1968) studied schoolteachers who were told that certain students in their classes were "spurters"—students who were "about to bloom." The so-called spurters had been randomly selected but still achieved above-average gains on achievement tests. They really did spurt. Somehow the teachers' expectations were communicated to and assimilated by the students. Modern medical science is still trying to understand the power of the placebo effect—the power of sugar pills to make people better. Results are attributed to an unexplained change in the patient's belief system. Patients believe they will get better; therefore they do. Similar effects have been replicated in countless reorganizations, new product launches, and new approaches to performance appraisal. All these examples show how hard it is to disentangle the reality out there from the models in our minds.[5]

In Western cultures, particularly, there is a tendency to embrace one theory or ideology and to try to make the world conform. If it works, we persist in our view. If discrepancies arise, we try to rationalize them away. If people challenge our view, we ignore them or put them in their place. Only poor results over a long period of time call our theories into question. Even then, we often simply entrench ourselves in a new worldview, triggering the cycle again.

Japan has four major religions, each with unique beliefs and assumptions: Buddhism, Confucianism, Shintoism, and Taoism. Though the religions differ greatly in history, traditions, and basic tenets, many Japanese feel no need to choose only one. They use all four, taking advantage of the strengths of each for suitable purposes or occasions. The four frames can play a similar role for managers in modern organizations. Rather than portraying the field of organizational theory as fragmented, we present it as pluralistic. Seen this way, the field offers a rich assortment of lenses for viewing organizations. Each theoretical tradition is helpful. Each has blind spots. Each tells its own story about organizations. The ability to shift nimbly from one to another helps redefine situations so they become understandable and manageable. The ability to reframe is one of the most powerful capacities of great artists. It can be equally powerful for managers. Undergraduates at Vanderbilt University captured this in a class-initiated rap (for best results, rap fans might imagine the rapper Common doing these lines in a neo-soul, hip-hop style):

Reframe, reframe, put a new spin on the mess you're in.
Reframe, reframe, try to play a different game.
Reframe, reframe, when you're in a tangle, shoot another angle; look at things a different way.

SUMMARY

Because organizations are complex, surprising, deceptive, and ambiguous, they are formidably difficult to comprehend and manage. Our preconceived theories and images determine what we see, what we do, and how we judge what we accomplish. Narrow, oversimplified perspectives become fallacies that cloud rather than illuminate managerial action. The world of most managers and administrators is a world of messes: complexity, ambiguity, value dilemmas, political pressures, and multiple constituencies. For managers whose images blind them to important parts of this chaotic reality, it is a world of frustration and failure. For those with better theories and the intuitive capacity to use them with skill and grace, it is a world of excitement and possibility. A mess can be defined as both a troublesome situation and a group of people who eat together. The core challenge of leadership is to move an organization from the former to something more like the latter.

In succeeding chapters, we look at four perspectives, or frames, that have helped managers and leaders find clarity and meaning amid the confusion of organizational life. The frames are grounded in cool-headed management science and tempered by the heat of actual practice. We cannot guarantee your success as a manager or a change agent. We believe, though, that you can improve your odds with an artful appreciation of how to use the four lenses to understand and influence what's really going on.



[4] We used citation analysis (how often a work is referenced in the scholarly literature) to develop a list of "scholars' greatest hits"—the works that organizational scholars rely on most. The Appendix shows our results and discusses how we developed our analysis. At appropriate points in the book (where the ideas are most relevant, as here), we present a brief summary of key ideas from works at the top of our list.

[5] These examples all show thinking influencing reality. A social constructivist perspective goes a step further to say that our thinking constructs social reality. In this view, an organization exists not "out there" but in the minds and actions of its constituents. This idea is illustrated in an old story about a dispute among three baseball umpires. The first says, "Some's balls, and some's strikes, and I calls 'em like they are." The second counters, "No, you got it wrong. Some's balls, and some's strikes, and I calls 'em the way I sees them." The third says, "You guys don't really get it. Some's balls, and some's strikes, but they ain't nothing until I call them." The first umpire is a realist who believes that what he sees is exactly what is. The second recognizes that reality is influenced by his own perception. The third is the social constructivist—his call makes them what they are. This distinction is particularly important in the symbolic frame, which we return to in Chapter Twelve.
