Chapter 1. Eating Elephants Is Difficult

 

“Progress is made by lazy men looking for easier ways to do things.”

 
 --Robert Heinlein
Chapter Contents

  • Today’s Delivery Methods

  • Why Do Big Projects Fail?

  • Environmental Complexity

  • Brownfield Sites Must Be Surveyed

  • Endnotes

Information technology can now accomplish immensely complex tasks, but despite the IT industry’s major strides forward, a disturbing statistic remains: Nearly 70 percent of really big IT projects fail.

This book is all about making these kinds of projects succeed.

 

In the past 35 years, the computer has changed so much that it is largely unrecognizable and sometimes invisible. I have a small, quiet computer in my living room that can play DVDs, record two digital television channels simultaneously, display my family photos, and play my videos and CDs.

Computers are powerful and everywhere. When I joined IBM, mainstream PCs struggled to stick a few windows on the screen. If you were lucky, your computer talked to a shared file server and provided you with some cute windows that enabled you to look at the green screens where the real work was done. These days, my computer desktop is a hive of multitasking: documents, virtual worlds, videos, MP3s, e-mail, and instant messaging. I have so many windows doing so many different things that I sometimes think I need another computer to control them all for me.

 
 --R.H.

Today’s Delivery Methods

In the past 35 years, the IT industry has changed pretty much everything about how it delivers projects. Rather than starting from scratch with each IT project, huge numbers of standards and techniques are available to draw from. Computer architectures, spanning both software and hardware, can be formally recorded in precise and unambiguous ways. Requirements, models, and even logic can be described using standard languages and diagrams. Most of what is written in today’s multilayered, complex software is object-oriented. Supplementing all these IT methods are rigorous and prescriptive approaches for running projects and instilling quality from the start.

However, despite all the advances and standards, the big IT projects don’t fail just once in a while: They fail most of the time. Look at the newspapers, or worse still, the trade press: The IT industry has an unenviable reputation for being late, expensive, and inefficient at delivering large projects.

As the Royal Academy of Engineering stated:[1]

The overall U.K. spending on IT is projected to be a monumental £22.6 billion (about $45 billion). Against this background, it is alarming that significant numbers of complex software and IT projects still fail to deliver key benefits on time and to target cost and specification.

This report estimates that the success rate of complex projects has reached an unprecedented high of 30 percent from a previously measured low of 15 percent. The industry can hardly be proud of these figures.

Delivering a big IT project is a huge and complex undertaking. It requires sophisticated coordination and strong leadership. The IT industry often measures large projects in terms of the number of person-years expended to complete them. This book is built from the experience gained on a number of projects that took between 300 and 500 person-years, but the information applies to any large IT project. In this book, we compare such projects to eating an elephant: Eating an elephant is difficult, and the dining experience can be disappointing.

Even when the first helping is complete (or the system actually gets delivered), planning some additional courses is normal. Changing, adapting, or augmenting a delivered system is often far more difficult and more costly than expected. This book also considers how to reliably and efficiently achieve such change in a complex environment.

Why Do Big Projects Fail?

In a supposedly mature IT industry, why do big projects often fail, whether through cancellations, missed deadlines, cost overruns, or compromised goals?

Let’s assume that the generally well-paid people who execute these IT projects are not stupid. Let’s also assume that they know the proven methods and tools of their job area and are using them properly. This might not be true in all cases, but more experienced staff tend to be assigned to the large and risky projects, so it’s not an unreasonable assumption to make.

If people know what they are meant to be doing, perhaps the industry as a whole is immature. Perhaps its best practices are flawed. After all, the IT industry is new compared to older engineering practices, such as building construction.

Still, other modern industries seem to have few of the continuing engineering problems that the IT industry does. Within 35 years of the invention of a new technology, the industries it spawned are generally working wonders with it. In 1903, the first powered, heavier-than-air airplane took off; by the late 1930s, liquid-fuel rockets and helicopters were being tested and a regular commercial transatlantic airmail service was operating. Thirty-five years is plenty of time to make an entirely new, complex, and very risky technology commercially reliable.

Indeed, taking into account all the improvements made to software engineering over the past 35 years, it is difficult to claim that the IT industry is immature. Something other than ineptitude and immaturity must be causing the problems.

What can this be? Well, a number of risk areas are known to contribute to project failure, including these:

  • Globalization

  • Organization and planning

  • Project reporting

  • Change management

  • Induced complexity

  • Requirements definition

Demands of Global IT Systems

The world is getting flatter and smaller. IT systems are becoming more global and, thus, must meet new demands. For example, they must cope with vast numbers of users, deal with unpredictable peaks of activity, be available around the clock, and work simultaneously in five languages and a dozen currencies. Meeting these new kinds of demand is challenging, but if these capabilities are identified early enough, they are rarely the core cause of a project’s failure.

Organization and Planning

Thanks to politics and commercial constraints, projects are still regularly structured in the wrong way. When this happens, the impact is devastating.

For example, any sizeable project needs a single powerful sponsor or champion who is an interested stakeholder in the success of the delivered project. When multiple people are in charge, the consequences can be disastrous.

 

On one memorable occasion, a $2 billion project I worked on was buffeted around by two equally powerful stakeholders from two different organizations for nearly two years. It was a nightmare of conflicting deadlines and requirements. When it came time to make the critical decisions about what the project should do next, the two sponsors could not agree. Finally, the project was given to a single stakeholder from a third organization. This single stakeholder was empowered to make decisions, yet he had an entirely different perspective than the previous two. The result was chaos and lawsuits.

 
 --R.H.

In addition to a single strong sponsor, those who run the project (from commercial, project, and technical perspectives) need to have clear governance arrangements and be empowered to do their jobs. Decision making must be quick and authoritative, but it also must consider all stakeholders who are affected by the decision. Planning must be robust and well engineered, with sensibly sized phases and projects.

Many mistakes can be made in this area. Fortunately, Frederick P. Brooks, Jr. wrote a book about it in 1975 called The Mythical Man-Month.[2] We recommend that everyone leading a big IT project read it.

Project Reporting

Even if the project is structured correctly, human nature and poor execution can get in the way.

Large organizations are often populated with “bad news diodes”[3] who ensure that senior people hear only what they want to hear. In an electric circuit, a diode allows current to flow in only one direction. In most large organizations and big, expensive projects, human bad news diodes treat news in the same way (see Figure 1.1). Good news flows upward to senior management without resistance, but bad news simply can’t get through; it can only sink downward.


Figure 1.1. The bad news diode ensures that bad news sinks without a trace, whereas good news immediately reaches the ears of the highest management.

This is not surprising. Given the size and overheads of big projects, bad news usually means big costs. People often try to cover up and fix any adverse situation until it is too late. No one wants to be associated with millions of dollars’ worth of cost overruns.

Effective managers use project-reporting measures that are difficult to fake, instill a “no blame” culture, and actually walk around project locations talking to all levels of staff, to ensure that all news, good or bad, is flowing around the project.

Change Management

Even if you’ve managed to get these aspects of your project just right, big projects still have one other inherent problem. Big projects tend to be long. Given the pace of change in today’s businesses, the perfectly formed project is likely to find itself squeezed, distorted, and warped by all kinds of external influences—new business directions, technological updates, or simply newly discovered requirements.

Indeed, as you begin to interfere and interact with the problem you are trying to solve, you will change it. For example, talking to a user about what the system currently does might change that user’s perception about what it needs to do. Interacting with the system during testing might overturn those initial perceptions again. Introducing the solution into the environment might cause additional issues.

In IT systems, these unanticipated changes often arise from a lack of understanding that installing a new IT system changes its surroundings. Subsequent changes also might need to be made to existing business procedures and best practices that are not directly part of the solution. This might change people’s jobs, their interaction with their customers, or the skills they require.

In the worst case, the business environment into which a system is intended to fit can change during the lifetime of the project or program. The nature of big projects and the time it takes to deliver them could mean that the original problem changes or even disappears if the business changes direction during the solving process. As a result, IT might end up solving last year’s business problem, not the current ones.

How project managers analyze the impact of those changes and decide first whether to accept them, and then when and how to absorb them, is a crucial factor in the success or failure of a big project. At the very least, however, the project leaders must be aware of the changes happening around them—and, quite often, this is not the case.

Induced Complexity

Advances in technology mean that the technology itself is rarely the cause of a major IT project failure. However, how that technology is put together can sometimes cause problems. Unfortunately, the easy availability of configurable products and off-the-shelf components, combined with the increased tendency to layer software upon more software, can generate unnecessary complexity. This kind of complexity is called induced complexity.

Self-discipline within—or a watchful eye over—the technical designers or IT architects is sometimes required, especially in the high-level design stages. At this point, the ease of drawing boxes, clouds, and arrows, and a good dose of brand-new best practices can result in theoretically elegant and flexible architectures that, in reality, can never efficiently be run, operated, or maintained.

We have resurrected several projects that have suffered at the hands of well-meaning but ultimately misguided “fluffy cloud” architects who often have an insatiable desire to apply the latest technologies and ideas. Innovation is good, but it’s really not a good idea to innovate everywhere at once.

Instead, IT architects would do well to employ Occam’s Razor. Named after the fourteenth-century English logician and Franciscan friar William of Ockham, Occam’s Razor holds that the explanation of any problem should make as few assumptions as possible. Assumptions that make no difference to the predictive power of the theory should be shaved off. In other words, the simplest solution that fits the facts is usually the right one.

When applied to IT architecture, this means that any design should be pared to the minimum level of complexity that is required to meet its requirements. Simple is good.

It’s always worth asking the following questions of any architecture:

  • Am I writing things that I could take off the shelf?

  • Do we need that level of abstraction?

  • I know product A isn’t quite as good as product B at doing that, but because we have to use product A elsewhere anyway, couldn’t we just use product A?

  • Wouldn’t it be simpler if we had only one way of doing that?

 

On one project, I needed to connect the computer system I was designing to another system in an external organization. The strategic mechanism proposed by the “fluffy cloud” enterprise architects was for the information from our system to be sent in and out of a rabbit warren of connections before the information could be received at its destination. Because of some limitations on the intervening systems, the reliability of the communication could not be guaranteed without a significant amount of extra design and coding. After a minimal amount of investigation, we determined that the two systems that needed to talk were exactly the same type and could be far more securely and efficiently connected directly together.

 
 --R.H.

Requirements Definition

IT project requirements are often divided into two categories: functional and nonfunctional. Functional requirements describe what the system must do—for example, a user must be able to withdraw money from an account. Functional requirements are often expressed in business process diagrams or use cases, which describe how a user interacts with the system to complete tasks or meet business goals.

Unfortunately, nonfunctional requirements are often overlooked. These requirements provide information about the desired characteristics of the system(s) to be built—for example, the system must be available 24 hours a day, 7 days a week. Nonfunctional requirements include response times, performance, availability, manageability, maintainability, and so on. In general, the approach for nonfunctional requirements is less mature and standardized across the IT industry.

The IT industry generally assumes that these two types of requirements encompass all requirements. If they are well documented and managed as described earlier, all is assumed to be well. However, we have observed a third kind of requirement: constraints. Despite being more numerous than the other requirements, constraints are often ignored—until it is too late. We return to constraints in more detail in subsequent chapters.
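To make the three categories concrete, consider a minimal sketch in Python. It is purely illustrative (the class names, identifiers, and sample requirements are our own inventions, not an industry standard); it simply shows a project cataloging its requirements by kind, so that the usually invisible constraints become visible and countable:

from collections import Counter
from dataclasses import dataclass
from enum import Enum, auto

class Kind(Enum):
    FUNCTIONAL = auto()      # what the system must do
    NONFUNCTIONAL = auto()   # desired characteristics: speed, availability, and so on
    CONSTRAINT = auto()      # facts about the environment the system must live in

@dataclass
class Requirement:
    ident: str
    kind: Kind
    text: str

catalog = [
    Requirement("F-001", Kind.FUNCTIONAL, "A user can withdraw money from an account."),
    Requirement("N-001", Kind.NONFUNCTIONAL, "The system is available 24 hours a day, 7 days a week."),
    Requirement("C-001", Kind.CONSTRAINT, "Must interface with the existing billing system."),
    Requirement("C-002", Kind.CONSTRAINT, "Must run on the incumbent database version."),
]

# On real projects, the CONSTRAINT count dwarfs the other two put together.
print(Counter(r.kind.name for r in catalog))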

To illustrate the point, let’s eavesdrop on a discussion between a client who is in a hurry and the architect who is designing his dream family home....

LANDOWNER: Thanks for coming over. I’m finding it hard to visualize my new home from those fantastic sketches you sent me, so I thought a site visit would help. The construction firm is lined up to start work next week, so I want to make sure we’ve got everything right. Otherwise, we won’t be able to move in for Christmas.

ARCHITECT: Yes, great idea. I’m glad you liked the drawings. I think the granite facades and the single-story idea will make the most of this view over the valley. A clean and elegant build is just what this site needs. Mind you, it looks like quite a steep path to the site. What’s the best way down?

LANDOWNER: No, this is the site. I said it was on the top of a hill.

ARCHITECT: But we’re on a 45-degree slope. You didn’t mention that the site wasn’t flat! I think I’m going to have to make some adjustments to the plans.

[The architect speedily makes adjustments to his drawing, dropping his pencil. Stooping to pick it up, he sees a number of holes leading into the hillside.]

ARCHITECT: Hmm, looks like there are some old mine workings, too. Are they safe?

LANDOWNER: Well, they’re a couple hundred years old, and they haven’t flooded or collapsed yet. They do creak quite a bit, though. I’ve just discovered a colony of protected lesser-eared cave bats living in one of them, so we’re going to have to work around them. To save costs, I think the best thing to do is to move the utility areas of the house down into the unoccupied caverns.

ARCHITECT: Right, okay. I’m going to need a map of the workings to incorporate into the detailed ground plans.

LANDOWNER: No problem. I’ve got a couple of maps, one from the council’s surveyors and the other from the subsidence board.

[The landowner hands over some very old-looking maps, both of which have tears and holes in them.]

ARCHITECT: Er, these maps look completely different. Which one’s more accurate?

LANDOWNER: No idea—they kept on digging for another 20 years after those were done.

ARCHITECT: Look, these changes are going to require me to alter the design somewhat. I’ll come back next week.

LANDOWNER: Okay, no problem.

[One week later.]

ARCHITECT: I’ve worked night and day to put together these plans for the new design. I had to go straight to blueprints, as the building contractors start tomorrow. The house is now on three stories down the hillside, and I’ve included an underground swimming pool in the largest of the three caverns.

LANDOWNER: Wow! That sounds great! I’ve never been very good with engineering drawings, though. The roof looks a bit shallow.

ARCHITECT: That’s the side view, and you’re looking at it upside down.

LANDOWNER: Ah, okay. I don’t like the look of those windows, either.

ARCHITECT: That’s the stairwell, seen from above.

LANDOWNER: Right. I think the walls would look better without all those arrows and numbers, too. It’s all a little avant-garde.

ARCHITECT: No, those are the dimensions of the building. The walls are still granite.

LANDOWNER: Well, I’m sure it will look great. Now, you know we’re in a hurry, so I guess we’d better start. My mother-in-law is coming down for Halloween, so we might need to make some changes if she doesn’t like where you’ve put the guest room....

You just know that this build is not going to be a happy experience. The site is a nightmare of subsidence and slopes. The mother-in-law is bound not to like the orientation of her bedroom, and it’s pretty clear that the landowner hasn’t got the faintest clue what he’s been shown. With the construction company arriving next week, it’s a safe bet that the building will go over budget and probably will not be ready even for the Christmas after next.

Environmental Complexity

The architect’s initial design had little correlation to the environment in which it was going to sit. Beautiful? Yes. Practical? No. A single-story structure would have been hugely expensive to realize, requiring either major terracing of the hillside or a reengineering of the building by putting it on stilts. Either way, a multistory house built into the hillside was likely a much better solution.

For any complex problem, some degree of analysis and investigation is essential to frame the requirements of the solution and understand its context. In this case, a thorough site survey would have prevented the architect from furiously having to rework his previous plans.

Analysis takes time. Indeed, when building IT systems, analysis typically takes almost as long as the build itself. Even so, IT architects seldom survey a site as thoroughly as building architects do; in IT, relatively little of that analysis time is spent on the equivalent of a site survey.

This is despite the fact that very few IT projects are delivered on “Greenfield” sites anymore. Most businesses already have a significant and complex IT environment. Some are so complex that they could be considered contaminated or polluted by their complexity. We call such environments Brownfield sites.

A site survey enables you to understand this complexity, described in terms of the third kind of requirement we introduced earlier: constraints. These requirements do not specify what the solution must do or how fast it needs to go; they simply define the environment in which it must exist. Constraints massively affect the final solution, but the early requirements phase of any project rarely captures them. This is despite the fact (or perhaps because of the fact) that they far outnumber all the other requirements put together, as shown in Figure 1.2.


Figure 1.2. The height of the constraints for a big project in terms of paper would be more than 3 feet high. This makes the other requirements look rather inadequate by comparison.

 

I recently led a project that is fairly typical of the elephantine programs we are discussing. The functional requirements were captured in a 250-page document. The nonfunctional requirements were captured in an 80-page document. The constraints were primarily interface constraints, each captured in a typical 40-page document. Now, that doesn’t sound too bad—what were we worried about? Well, the problem was that a separate constraint document existed for each interface, and there were 250 of them!

 
 --K.J.

All too often, constraints are ignored until the detailed design starts or until the system is being tested for its compatibility with the existing environment. The accumulated effect of these missed requirements is rework, overruns, and, in the worst case—and often in the most complex and expensive situations—project failures.

Boehm’s Software Engineering Economics[4] provides some startling evidence on how much more it costs to fix such a problem at different stages of the lifecycle. Let’s consider, as an example, a complex system-to-system interface that was overlooked as a requirement because no thorough site survey was performed.

If the omission is caught at the requirements stage, it might take a few days of work and, say, $1,000 to make the update. If it isn’t caught until the systems are tested together, Boehm’s figures suggest that it will cost $50,000 to fix. This is not surprising when you consider the cumulative effects of such a late change. A great deal of documentation would need to be revised at this stage, and a lot of retesting would be required. The project almost certainly would slip.

In the worst of all worlds, the omission might not be noticed until the system went live and things started going awry with customers. At that point, the cost of fixing the requirements defect, Boehm suggests, would be $82,000—and that doesn’t include the loss of reputation or consequential damage to the business.
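To see how sharply those costs escalate, here is a minimal sketch of the arithmetic. The multipliers are simply the dollar figures quoted above divided by the $1,000 baseline; they are rounded illustrations, not a reproduction of Boehm’s tables:

# Cost multipliers relative to fixing the defect at the requirements stage,
# derived from the figures quoted above ($1,000 / $50,000 / $82,000).
MULTIPLIERS = {
    "requirements": 1,      # a few days of rework
    "system test": 50,      # documentation revised, retesting, schedule slip
    "live operation": 82,   # excludes reputational and consequential damage
}

def fix_cost(base_cost: float, stage: str) -> float:
    """Estimated cost of fixing a defect discovered at the given stage."""
    return base_cost * MULTIPLIERS[stage]

for stage, factor in MULTIPLIERS.items():
    print(f"{stage:>15}: ${fix_cost(1_000, stage):>9,.0f} ({factor}x)")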

All in all, it’s good practice and good economics to catch defects as early as possible, especially when the defect lies in the requirements themselves. The lack of an IT equivalent of a site survey causes many IT projects to fail.

Unfortunately, no cost-effective IT industry best practice addresses the existing business and IT environmental complexity that surrounds the solution being delivered. The complexity of the existing IT environment is both unavoidable and very difficult to manage successfully.

Complexity Is Everywhere

Environmental complexity is the almost inevitable end result of many years of IT investment. It is the accumulation of complexity caused by many years of creating, adding, deleting, and updating interconnected and overlapping systems.

This is an almost universal phenomenon, but the IT industry’s current best practices, tools, and techniques largely do not recognize it. Almost nothing in the industry toolbox deals with the risks and problems it generates.

This book specifically addresses the problems environmental complexity causes, by introducing a new way of delivering systems specifically designed to deal with both system and environmental complexity.

How Complex Is Complex?

Arguably, environmental complexity started accumulating when organizations stopped throwing away everything whenever they bought a new computer. Although the practice sounds bizarre today, before 1964, moving to a new computer meant discarding the previous investment in hardware and software and starting again. The IBM System/360, released in 1964, changed all that by creating a family of computers and attached devices that were compatible with one another.

In the 40 years or so since the System/360™ and its lookalikes were introduced, most large organizations have steadily invested in and maintained their information technology, accumulating mountains of systems that each serve an isolated business need, known as “stove-pipe” systems. After installation, these systems were maintained, extended, used, and abused until their code base resembled years of solidified lava flows built up on the sides of a volcano. Just as with a real, active volcano, such a system is all but impossible to shift, no one really likes standing next to one, and only mad people want to poke their heads inside.

The average large business is endowed with a lot of complex computing. The environmental complexity isn’t caused by the numbers of computers, the power of those computers, or the size of their data stores—it’s caused by the complexity of the code they run, the functional size of the systems.

Function point analysis (now less elegantly known as the IFPUG method[5]), originally proposed in 1979 by Allan Albrecht[6] of IBM, measures how much functionality a system provides to an end user. It also takes into account some of the characteristics (the nonfunctional requirements) of the system. The industry norm for the creation of function points is 10 function points per person-month—that is, a project of 20 people working for a year should result in 2,400 function points.

Surveys of entire organizations can be done to estimate the number of function points present in their system portfolios. How much IT complexity might a bank or government organization have accumulated over the past 40 years? Well, the typical portfolio of applications for a sizeable bank or other large enterprise has been measured at around 500,000 function points. Using the industry norm function point productivity figure[7] means that such a portfolio represents more than 4,000 person-years of effort. It doesn’t take much imagination to see that something that took this long to build is likely to be exceptionally complex.
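The back-of-the-envelope arithmetic behind these figures is worth making explicit. Here is a short sketch, assuming only the industry-norm productivity figure quoted above:

FP_PER_PERSON_MONTH = 10  # industry-norm productivity figure cited above

def function_points(team_size: int, months: int) -> int:
    """Function points a team should deliver at the industry norm."""
    return team_size * months * FP_PER_PERSON_MONTH

def person_years_embodied(portfolio_fp: int) -> float:
    """Effort embodied in an existing portfolio, in person-years."""
    return portfolio_fp / FP_PER_PERSON_MONTH / 12

print(function_points(20, 12))          # 2,400 FP from 20 people in a year
print(person_years_embodied(500_000))   # roughly 4,167 person-years for a typical bank portfolio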

Indeed, if you compared the size in person-years of a really large IT project to the size of the business’s investment in existing IT complexity, the hugely complex project would be dwarfed, as shown in Figure 1.3.


Figure 1.3. A 500 person-year project tends to fill a building, but if we visualized the business’s existing IT investment in the same way, it would fill a skyscraper.

A detailed understanding of both is beyond any individual’s comprehension, yet precisely this level of understanding is necessary to ensure that the project will succeed within its environment.

The Effects of Environmental Complexity

The accumulation of complexity in today’s IT environments has other impacts besides the failure of large IT projects. The cost of operating and maintaining a high level of IT complexity—in essence, just standing still—becomes more expensive as systems grow larger and older. As a result, only a small part of the spending in an enterprise’s IT budget is now devoted to new functionality. In general, IT spending can be split into three main areas:

  • Steady state—The cost of keeping the existing system going and the maintenance necessary to support the ongoing operations

  • Regulatory compliance—The changes that are enforced in a business to satisfy new laws and mandatory requirements within an industry

  • Innovation capacity—The introduction of new capabilities and new technologies into the business

Changes required by legislation are a necessary burden to stay in business. These costs have been rising in recent years partly because of new legislative demands (such as Sarbanes-Oxley compliance), but also because these changes must take place within increasingly complex and difficult-to-maintain systems. Each legislative change makes the IT environment more complex, so future changes and maintenance will cost even more. It is a vicious spiral, as shown in Figure 1.4.


Figure 1.4. Gartner’s IT Budget Analysis (March 2006) shows a significant yearly decline in the amount of IT budget available for business innovation.[8]

To break this downward spiral and allocate more of the IT budget to new business capabilities, the IT industry must find a better way to maintain and change these large and complex environments.

The Ripple Effect

The most disturbing effect of this environmental complexity is a phenomenon known as the ripple effect. This is experienced when a business updates its software or hardware. The business might want a new business function that is offered by the latest software version, or the product the business is using might be going out of support.

Although the change might seem simple at first, there is rarely such a thing as a nondisruptive change in any nontrivial environment. As the application middleware, database, or operating system version changes, a small but significant ripple is sent out around the environment. Changing one element might require another part of the IT environment to change, to ensure compatibility. Ultimately, these ripples can hit business applications and result in retesting, application changes, or even the need to reintegrate them with their surroundings. Figure 1.5 shows this ripple effect in action.


Figure 1.5. A simple change in one area of a complex IT environment can affect other areas, ultimately resulting in a substantial change that requires a business function retest. Such changes can take a long time.
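One way to see why the ripple spreads so far is to treat the environment as a dependency graph: every component that depends, directly or transitively, on the changed element is potentially affected. The following sketch is illustrative only; the component names and dependency edges are invented:

from collections import deque

# "X depends on Y" edges, so a change to Y may ripple back up to X.
depends_on = {
    "billing app": ["app server", "database"],
    "reporting app": ["database"],
    "app server": ["operating system"],
    "database": ["operating system"],
}

def affected_by(changed: str) -> set[str]:
    """All components that transitively depend on the changed element."""
    # Invert the edges: changed component -> its direct dependents.
    dependents: dict[str, list[str]] = {}
    for component, deps in depends_on.items():
        for dep in deps:
            dependents.setdefault(dep, []).append(component)
    hit: set[str] = set()
    queue = deque([changed])
    while queue:
        for nxt in dependents.get(queue.popleft(), []):
            if nxt not in hit:
                hit.add(nxt)
                queue.append(nxt)
    return hit

# An operating system upgrade ripples out to everything built on top of it.
print(sorted(affected_by("operating system")))
# ['app server', 'billing app', 'database', 'reporting app']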

What started as an apparently small change is suddenly a wide and broad change for many parts of the IT organization. In a typical large enterprise, this could take 6 to 18 months (or more) to complete. In addition, the increasing movement toward globalization and continuous operations limits the time and opportunity to make such changes.

The increasing complexity of existing business, application, and infrastructure environments is thus beginning to slow an organization’s capability to change and adapt. All these supposedly independent elements are, in reality, deeply and strongly connected. The ripples are felt hardest when these environmental constraints are poorly defined or captured at the start of the project, and the result is delays and overruns.

Brownfield Sites Must Be Surveyed

This chapter looked at the frequent causes of failure for large projects. The chances that a big IT project will succeed increase if businesses follow the advice and best practices given in this chapter. However, we have also identified that environmental complexity is the remaining major inhibitor to the success of major IT projects, and this has no obvious answer.

A site survey might enable a business to identify missing or defective requirements early, when they are inexpensive to correct. It could also help a business understand and plan for the ripple effect. Ultimately, a really thorough site survey might help an organization work out how to simplify its IT environment, reducing maintenance costs and making it easier to keep up with current legislation. Unfortunately, site surveys almost never happen: using standard IT analysis techniques to survey an environment that took nearly 4,000 person-years to build would likely be prohibitively expensive, and as soon as such a survey was completed, it would be out of date.

The industry needs a new way to conduct major IT projects that takes environmental complexity into account. Such an approach would need to make site surveys cost-effective, practical, and capable of quickly reflecting changes to the environment.

This book is about such an approach. We call it Brownfield to contrast it with the traditional Greenfield development approaches that tend to ignore this complexity. We’ve borrowed these terms from the construction industry: Brownfield sites are those in which redevelopment or reuse of the site is complicated by existing contaminants. Greenfield sites are clean, previously undeveloped land. Few IT Greenfield sites exist today. Brownfield is not, therefore, a technology or a product, but a new way of executing big IT projects. It is a new way to eat elephants.

In the next chapter, we consider why environmental complexity is such a problem for the IT industry and how we might begin to overcome it.

Endnotes

1. The Challenges of Complex IT Projects. The Royal Academy of Engineering and The British Computer Society, April 2004. The Royal Academy of Engineering, London, UK. http://www.bcs.org/server.php?show=conWebDoc.1167

2. Brooks, Frederick P. The Mythical Man-Month. Addison-Wesley, 1995.

3. Birman, Ken. Integrating the e-Business. IFIP/ACM International Conference on Distributed Systems Platforms and Open Distributed Processing, New York, NY, USA, 2000.

4. Boehm, B.W. Software Engineering Economics. Prentice-Hall, Englewood Cliffs, NJ, 1981.

5. Function Points Counting Practices Manual, Release 4.1. International Function Point Users Group (IFPUG), Princeton Junction, NJ, USA, 1999.

6. Albrecht, Allan J. Measuring Application Development Productivity. Proceedings of the IBM Applications Development Symposium, CA, USA, 1979.

7. Jones, Capers. Applied Software Measurement, 2nd ed. McGraw-Hill, New York, NY, USA, 1996.

8. Gartner Executive Programs (EXP). Gartner CIO Survey. Gartner Inc., Stamford, CT, USA, 2006.

 
