3. Denver International Airport Reconsidered


The city of Denver, Colorado, set out in 1988 to build a new airport to replace the existing one, Stapleton Airport. Stapleton was judged incapable of expansion, inadequate to serve the growing city, and guilty of contributing to ever-more-evident noise- and air-pollution problems. With the new airport, costs would be reduced, pollution and air-traffic delays would be eliminated, and growth would be assured. The new Denver International Airport (DIA) was scheduled to open on October 31, 1993. That was the plan.

Another Fine Mess

Cut to the chase: Everything went fine, except those damn software guys let the side down again. (Sigh, groan, general rolling of the eyes.) On October 31, 1993, every other part of the vast airport complex was ready to go . . . honest it was. Really. Trust us on this. But the software wasn’t ready, so the airport couldn’t open!

Specifically, what wasn’t ready on time was the infamous DIA Automated Baggage Handling System (ABHS). The airport couldn’t open without functional baggage-handling software. Since building the airport had involved a huge capital expenditure, all that capital was tied up while the software guys scrambled around playing catch-up. And time is money. The taxpayers took the hit. This is not a matter subject to elaborate analysis; it is as simple as this:

[Figure: late software, late airport, taxpayers’ money lost]

And it was all the fault of those awful software people.

This kind of dollars-to-dumpster simplification was a feature of newspaper and journal coverage of the DIA troubles from the first sign of delay in early 1993 until the partial opening in 1995. So much blame was laid on the software team that even today, the phrase “DIA Automated Baggage Handling System” is a recognized symbol of incompetent software projects.

An article in Scientific American put responsibility for the DIA disappointment squarely on the software industry and its lax standards and practices:

software engineering discipline remains years—perhaps decades—short of the mature engineering discipline needed to meet the demands of an information-age society.1

1 W. Wayt Gibbs, “Software’s Chronic Crisis,” Scientific American (September 1994), p. 84.

This was a process problem, the article asserted. The delays at DIA might very well have been avoided, the article claimed, if only the project had improved its process to include

1. a higher CMM level

2. more use of formal methods

3. mathematical specification languages like B and VDM

But was it really a process problem?

Beyond the Process

Suppose you had an utterly perfect process for delivering software. Would that remove all uncertainty from your projects? In fact, is the software building process even one of the major sources of uncertainty? We suggest not. Among the more important sources of uncertainty are these:

1. Requirement: What exactly is it that the system has to do?

2. Match: How will the system interact with its human operators and other peer systems?

3. Changing environment: How will needs and goals change during the period of development?

4. Resources: What key human skills will be available (when needed) as the project proceeds?

5. Management: Will management have sufficient talent to set up productive teams, maintain morale, keep turnover low, and coordinate complex sets of interrelated tasks?

6. Supply chain: Will other parties to the development perform as hoped?

7. Politics: What is the effect of using political power to trump reality and impose constraints that are inconsistent with end-project success?

8. Conflict: How do members of a diverse stakeholder community resolve their mutually incompatible goals?

9. Innovation: How will technologies and approaches unique to this project affect the eventual outcome?

10. Scale: How will upscaling volume and scope beyond past experience impact project performance?

Even the most perfect construction process can’t remove uncertainty from a complex systems development project. Where there is uncertainty, there is risk. Where there is risk, there needs to be a conscious and thoughtful effort to manage it. Instead of asking, “How did they go about building their software?” we can gain a lot more insight into what happened at DIA by asking, “How did they go about managing their risks?”

Risk Management at DIA

In our brief summary of the events at DIA, we asked you to swallow the often-repeated claim that the airport was 100-percent ready to open except for the baggage-handling software, and that the airport couldn’t open at all without that software. Let’s go over that premise again in some detail.

First of all, maybe the assertion that all the other subprojects were complete wasn’t true. Maybe the baggage system was not the only late component, merely the most visibly late component. Maybe the whole schedule was hopeless and everybody was late. When this happens, a common ploy is for the heads of the various subprojects to play a little brinkmanship, each asserting complete readiness and hoping that one of the peers will crack first. When someone finally cracks, the others just affect to wrinkle their brows in disappointment and then frantically use the extra time to fix up their own domains. Maybe that’s what happened at DIA. But just for the purposes of this analysis, let’s assume not. Take all the other subproject managers at their word and assume that the airport could indeed have opened but for the failure of the Automated Baggage Handling software. The entire cost of delay—more than $500 million in extra financing—was therefore attributable to the lateness of that one key element.

And now start asking yourself a few key questions:

Q1: Why couldn’t the airport open without the baggage-handling software?

That’s easy: The baggage-handling software was on the overall project’s critical path for the airport’s opening. It was so essential to airport operations that the members of the organization’s governing board knew they couldn’t move passengers through the airport, even for a single day, without that system.

Q2: Why was the ABHS on the critical path?

Well, because there was no other way to move the baggage. The system of tele-carts and bar-code readers and scanning devices and switch points and cart unloaders was the only way to get baggage to and from the planes.

Q3: Are there no alternative ways to move baggage?

Of course. There is, for example, the time-honored method of having big burly guys haul the stuff. There is also the conventional airport approach of small trucks pulling hand-loaded carts, daisy-chained together.

Q4: When the ABHS wasn’t ready on time, why couldn’t DIA open with one of these alternative methods of moving baggage?

Um. Well. (Hem and haw.) The tunnels that were meant to serve the automated tele-cart system were too low for people and couldn’t accommodate the trucks. So the automated system had to work.

Q5: Couldn’t the tunnels have been redesigned so that trucks and hauled carts could go through them?

Yes, but there wasn’t time. By the time it was discovered that the ABHS software would be late, the tunnels were already built. And the time to revamp them was judged to be longer than the time required to perfect the software.

Q6: Couldn’t the revamping of the tunnels have started earlier?

Yes, but that wasn’t judged appropriate. Money and time spent on the tunnels would have been wasted had the software actually been delivered on time, as upper management was then assuring it would be.

Q7: Wasn’t lateness of the ABHS software seen as a potential risk?

Only after it happened. Before that, the software was placed on an aggressive schedule and managed for success.

Q8: Haven’t software projects been late before?

Yes, but this one was supposed to be different.

Q9: Was there any history of prior projects building similar systems?

Yes. The Franz Josef Strauss Airport in Munich had installed a pilot ABHS, designed along the lines of the DIA version.

Q10: Did the DIA team visit the Munich project, and if so, what did it learn?

Members of DIA’s ABHS project did visit Munich. The Munich software team had allowed a full two years for testing and six months of 24-hour operation to tune the system before cut-over. They told the DIA folk to allow that much or more.

Q11: Did DIA management follow this advice?

Since there wasn’t time for such extensive testing and tuning, they elected not to.

Q12: Did the project team give sufficient warning of impending lateness?

First of all, the invisible hand of the marketplace made a significant gesture right at the outset. When the DIA board of governors first put the ABHS out to bid, nobody was willing to submit a bid for the scheduled delivery date. All bidders judged that starting the project off with such a schedule was a sure way to court eventual disaster.

Eventually, the airport engaged BAE Automated Systems to take on the project on a best-efforts basis. During the project, the contractor asserted early and often that the delivery date was in jeopardy and that the project was slipping further behind with each month and each newly introduced change. All parties were made aware that they were trying to do a four-year project in two years, and that such efforts don’t usually come home on time. All of this evidence was ignored.

Risk Management Practices Honored in the Breach

It’s not how risk management was practiced at DIA that sank the project. It’s that there was no effort at risk management at all. Even the most perfunctory risk management effort—probably in the first minute of the first risk-discovery brainstorm—would have listed a delay in the software delivery as a significant risk.

An exposure analysis of this risk would have shown that since the baggage-handling software was on the critical path, any delay would postpone the airport’s opening, at a carrying cost of $33 million per month. (That carrying cost would have been easily calculable from the beginning.) From there, it would have been an obvious conclusion that moving the software off the critical path was a key mitigation strategy. A few million dollars spent early in the effort to make an alternative baggage-handling scheme feasible would have saved half a billion dollars when the software project did not complete on time.
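As a rough back-of-the-envelope check, assume the slip ran roughly sixteen months, from the planned October 31, 1993, opening to the partial opening in 1995:

    16 months × $33 million per month ≈ $528 million

which squares with the more-than-$500-million figure in extra financing cited above.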

At the very end of this book, we list a dozen or so necessary actions that together constitute risk management. As you will see, DIA upper management methodically observed precisely zero of these.

So, Who Blew It?

Since the contractor has already taken so much heat for its failure to deliver DIA’s ABHS on time, it seems only fair to mention here that risk management was not entirely the contractor’s job. If you agree with our assessment that this was a failure of risk management far more than of software process, then it makes no sense to blame the contractor. In fact, the risk of the $500 million of extra financing cost belonged at the next level up. Responsibility for risk management accrues to whichever party will have to pay the price for risks that are ignored.

In this case, all such costs were eventually paid for by the contracting agency, Denver Airport System, an arm of the city government. Thus, the city of Denver was responsible for managing the financing risk, something it made no discernible effort to do.
