Chapter 4

Promoting Value Through Simplicity


 

 

 

Everything should be made as simple as possible,
but not simpler.

Albert Einstein

 

The stage is now set for connecting the concepts of value, described in Chapter 3, with the ideas of simplicity introduced in section 1.2. As a matter of fact, there is no single obvious way to make this connection and we shall therefore have to justify our approach. One thing to be aware of is that the concepts of value and those of simplicity introduced so far are not truly defined on the same conceptual level. On the one hand, we have concepts of value, which were defined specifically in the context of IT: Use Value, Sustainability Value, and Strategic Value. On the other hand, we have concepts of simplicity, which were defined at a much more general level: Reduction, Hiding, Organization, Learning, Time Saving, and Trust.

Part of the difficulty in linking value and simplicity lies in this discrepancy: we have concepts of value specific to IT while we have general and fundamental concepts of simplicity and complexity.

The most straightforward path would be to identify each concept of value from Chapter 3 as a function of one or more simplicity or complexity parameters. Actually, this is what we did, implicitly, in Chapter 3 when we related the Use Value and the Sustainability Value with some fundamental concepts of simplicity. Although this approach looks quite appealing, it appears too naive and does not immediately lend itself to a pragmatic framework for enhancing value, especially because of the above discrepancy.

Another strategy could be to attempt to define simplicity or complexity concepts that are truly specific to IT and then link those to the various values. Even assuming these ad hoc concepts exist, their relation with our earlier concepts could be quite obscure and thus of little practical use. Again, this approach would hardly lead to an effective framework for value enhancement.

For these reasons, we choose a more pragmatic approach in this chapter. First, we identify and logically organize the main sources of uncontrolled increase of complexity in IT. This probably introduces some arbitrariness in the approach, as other classification schemes could be considered equally legitimate. This seems, however, an inevitable price to pay to move from simplicity principles to simplicity actions. To keep things manageable, we chose to identify three main sources of uncontrolled increase in IT complexity, hoping that these will connect with the reader's own experience once we have defined them precisely. Explicitly, these main sources are growing technical heterogeneity, changing requirements, and a restricted set of human factors. We next identify primary causes that in turn are at the origin of each main source. Distinguishing primary causes from main sources is merely a matter of organizing, as logically as possible, a range of causes that often overlap or even have circular implications. Finally, the concepts of simplicity will enter the analysis in two ways:

1) Simplicity principles will be associated with actions to mitigate the primary causes of complexity and hence also the main sources.

2) Simplicity actions will be identified as opportunities to generate different forms of value for the information system (IS).

This is pictured in Figure 4.1.

Figure 4.1. Generation of uncontrolled complexity has three main sources: growing technological heterogeneity, changing requirements, and a number of human factors. Primary causes will be identified, which contribute to those main sources. Simplicity principles will be applied directly to those primary causes to mitigate their effects. We shall argue how these simplicity actions translate into creation of value, in relation to the sources of complexity on which they act

ch4-fig4.1.jpg

These reflections also underlie the practical simplicity framework described in Chapter 5, where we examine how simplicity actions translate practically when we focus on specific architecture layers of an IS.

4.1. Growing technical heterogeneity

Among the many causes of uncontrolled IT complexity, growing technical heterogeneity is probably the most obvious. Put simply, technical heterogeneity refers to the fact that different technologies or tools are used jointly for similar purposes. This heterogeneity may concern the operating systems, business applications, or programming languages used for specific developments.

Here are a few common examples:

– Running different operating systems such as Windows, Linux, and Mac OS is perhaps the most common source of IT heterogeneity: each comes with its own file system, security model, and way of installing new software. The Java language, with its famous motto “write once, run anywhere”, was indeed originally created to alleviate this problem for specific developments.

– Using different programming languages such as Java, .NET, or Cobol for specific developments is another instance of technological heterogeneity. Even though each language is in principle universal, their coexistence within a given IS will usually imply numerous gateways, marshalling and un-marshalling operations as well as code wrapping. But from a business-operation point of view, this code is really do-nothing code.

– Having different collaborative office tools within one company, say Microsoft Exchange, Google Apps, and Lotus Notes, is another instance of technological heterogeneity. Each previously mentioned solution provides more or less equivalent mail and calendar services. But each solution comes with its specificities, which make the different tools not fully interoperable, hence the complexity.

– Even within a given layer of software architecture, there can be a great deal of heterogeneity. This is because many APIs and frameworks are available for each technical task. It is thus not uncommon to find several O/R1 frameworks or MVC mechanisms used within an IS or even within a single application.

– Middleware and infrastructure software such as application servers, ESBs,2 relational databases, or enterprise directories from different vendors also largely contributes to technical heterogeneity.

– Perhaps one of the trickiest forms of uncontrolled complexity is that which results from the coexistence of several versions of the same product, language, OS, or application, as differences in behavior are usually very subtle and hard to track and fix.

For more clarity, we distinguish two kinds of macroscopic complexities (not to be confused with the information theory complexities) related to the plethora of technologies that coexist in an IS:

Horizontal complexity: we use this expression to refer to situations in which several equivalent technologies coexist to perform the same tasks. Tasks could be either technical or business-related. A typical example is one in which several brands of application server are running different applications.

Vertical complexity: we use this expression to refer to situations where technologies are nested like Russian Matryoshka dolls. Such a situation usually occurs as the consequence of an iterative use of the “simplicity through hiding” principle: a new layer of architecture wraps the previous one in an attempt to hide its complexity.

As an example, consider handling data in a relational database from within Java code. The first layer of architecture is the so-called driver that converts Java queries into queries specific to the database at hand. The Java API (JDBC) that defines how Java code interacts with an RDBMS can be considered the second layer. An O/R mapping framework on top of JDBC is yet another layer. As working with an O/R framework often involves performing much configuration and writing large amounts of boilerplate code, many development environments currently offer wizards that automatically generate most of this code and parameterization.

Still other mechanisms take charge of the automatic creation of unit tests or the continuous-integration tasks during development and deployment. All of these nested layers of technology contribute to what we term “vertical complexity”.
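To make this layering concrete, here is a minimal sketch of the bottom of the stack just described: a query issued through plain JDBC. The connection URL and the table and column names are hypothetical.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PlainJdbcExample {
    public static void main(String[] args) throws Exception {
        // Layer 1: the vendor driver behind this URL translates JDBC calls
        // into the database's native protocol.
        String url = "jdbc:postgresql://localhost:5432/crm";
        try (Connection con = DriverManager.getConnection(url, "app", "secret");
             // Layer 2: the JDBC API defines how Java talks to any RDBMS.
             PreparedStatement ps = con.prepareStatement(
                     "SELECT name FROM customer WHERE id = ?")) {
            ps.setLong(1, 42L);
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    System.out.println(rs.getString("name"));
                }
            }
        }
        // Layers 3 and above (O/R mapping, generated configuration, wizards,
        // build and continuous-integration automations) wrap this code and
        // progressively hide it from the developer.
    }
}

Each additional layer hides the one below it; the price is that someone must still understand the full stack whenever its abstractions leak.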

Technical heterogeneity has several worrisome consequences:

It increases the range of technical skills needed to maintain an IS in operational condition.

It increases the amount of purely technical code to perform conversions and wrapping tasks that have no business value.

It increases IT workloads by duplicating maintenance tasks.

It complicates the interdependence between subsystems, making the overall system less predictable and more difficult to maintain.

Figure 4.2. There are three primary causes of complexity due to technical heterogeneity: openness, quick obsolescence of technology, and absence of technological leadership

ch4-fig4.2.jpg

Impact on value

Generally speaking, significant heterogeneity strongly decreases sustainability value, as maintenance costs increase with heterogeneity. In extreme but not uncommon cases, a very high level of heterogeneity may even make maintenance impossible, as too many IT skills are required.

Heterogeneity usually brings with it unpredictability and unreliability, which may negatively impact use value regarding performance and availability of the system.

Let us now move to the primary causes, which explain why heterogeneity grows.

4.1.1. Openness

ISs do not exist in a vacuum; they are fundamentally open systems that must adapt to changing conditions. Most of them need to connect one way or another to the external world, whether it is to reach customers, to perform B2B transactions with suppliers, to exchange data with partners, or to take advantage of remote services. Roughly speaking, we can distinguish two modes of integration. One can be termed tight integration and the other loose integration.

Tight integration means that a service or application is integrated into the existing IS by adapting the code. Usually, custom code must be written that complies with some API, depending on the subsystem into which the new service is being integrated. The main advantage of this type of integration is that such tight interaction allows dealing with complex transactional and security contexts, which would otherwise be very hard to achieve by merely wrapping the new system in a set of stateless services.

The Java Connector Architecture (JCA) provides an example of such a tight-integration mechanism. It defines various contracts for sharing security and transactional settings as well as handling connection pools.

Loose integration, on the other hand, means that new services to be integrated are first wrapped and exposed as stateless services using only ubiquitous protocols and standards such as HTTP and XML.

As an example, the so-called REST architecture style provides a way to define services in which the HTTP protocol supplies the basic operations (GET, PUT, POST and DELETE) for manipulating remote data.
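To illustrate, here is a minimal sketch of a loose-integration call in the REST style, using the standard Java HTTP client; the endpoint URL and resource path are hypothetical.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LooseIntegrationExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // The HTTP verb itself (GET) carries the operation semantics; no
        // vendor-specific protocol or shared transaction context is required.
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://partner.example.com/api/customers/42"))
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + ": " + response.body());
    }
}

Because only ubiquitous standards are involved, the remote service can be replaced without touching the transport side of the integration.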

The primary advantage of this type of integration is its straightforwardness. When many services are involved, it will become necessary to define a pivot format of business objects, usually in XML, in order to maintain coherence, interoperability, and flexibility among the different services.

4.1.1.1. Why complexity increases

Let us summarize the various ways in which uncontrolled complexity can creep in, due to increased openness of the IS, and favor technological heterogeneity.

– If we are not careful, the IS might become dependent on the inner workings of the new services to which it is now coupled. Any functional or technical change in the newly included services could impact the existing infrastructure and require appropriate changes.

– Connecting to new systems can also imply performing additional or enhanced security checks to protect from malware or from possibly inconsistent data manipulation.

– Most often, encoding and decoding operations are required, because the new system does not use the same vocabulary or business objects as the existing ones.

4.1.1.2. Implementing simplicity

Correspondingly, here is a list of general simplicity countermeasures that can be applied to mitigate the increase of complexity due to opening up the IS.

– To avoid creating dependencies on new systems, the classic solution is to define a set of appropriate interfaces. These interfaces are contracts that both sides, the existing IS and the new system, must comply with when exchanging data. This way, even if the implementation of the service changes over time, the contracts are stable and will protect the existing infrastructure from creeping changes. Of course, substantial functional modifications will still require changing these interfaces. Defining interfaces is really an instance of achieving simplicity through hiding complexity (see the sketch after this list). It is also an instance of good abstraction, as already discussed in section 2.3.2.

– When many services are added and required to exchange data, it is often very useful to define an XML pivot format, which specifies an official vocabulary that all services must share. Defining a pivot format will simplify subsequent integration of new services. The price tag can be quite high, however. Major harmonizing tasks unfortunately do not happen overnight and require many experts, often with different technical backgrounds, to collaborate over long periods. The endeavor is so significant that, whenever business-object models, XSD schemas, or generic business processes have already been defined for an industry, they should be used or extended rather than developed from scratch. Defining a pivot format implies different forms of simplicity: reducing duplicated or overlapping concepts, organizing the remaining ones, and finally learning these concepts to implement them efficiently in new services.

– Avoid exotic solutions as much as possible in favor of straightforward ones based on industry standards such as XML. The REST architecture style is a good example of a robust and simple architecture. These solutions will be far easier to maintain, because skilled employees will be easier to find and because they reduce horizontal complexity.

This is really simplicity through reduction.
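As announced above, here is a minimal sketch of such an interface contract; all names (PaymentGateway, RestPaymentGateway) are hypothetical.

public interface PaymentGateway {
    // The contract that both the existing IS and the new system comply with.
    boolean charge(String accountId, long amountInCents);
}

// Today's implementation wraps a partner's REST service; a future one could
// wrap a different provider without any change to the calling code, which
// depends on the contract alone.
class RestPaymentGateway implements PaymentGateway {
    @Override
    public boolean charge(String accountId, long amountInCents) {
        // ... call the remote service and map its response ...
        return true; // placeholder result
    }
}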

4.1.2. Rapid obsolescence of IT

The pace of technological progress in hardware performance has been steady for about half a century now and it currently shows no sign of slowing down. It seems that Moore's law3 will still be around in the years to come. Inevitably, this fast pace of evolution favors quick obsolescence of IT, which in turn favors heterogeneity as versions of technology accumulate in layers over the years.

Let us review the three main mechanisms at work in this continuous obsolescence.

Obsolescence driven by performance

New technologies replace older ones, pushed by the availability of more resources, by their compliance to newer standards and by algorithmic enhancements.

Newer versions of a technology typically offer more features, better performance, and support for more of the emerging standards than their predecessors, sometimes even for less effort and a smaller investment.

The successive versions of the Java Enterprise platform illustrate quite well this path to obsolescence: the number of APIs has progressively increased to encompass nearly all aspects of enterprise computing and all industry standards and protocols. Some Java APIs are officially marked as “deprecated”.

This essentially contributes to horizontal complexity of IT as successive technologies usually perform equivalent tasks.

Obsolescence driven by automation

Repetitive tasks with low added value are progressively automated.

As more computing resources become available, more and more tasks can be automated. Repetitive tasks with low human added value are automated first. Here are a few examples:

There are generators or wizards, which generate much of the so-called boilerplate code in a software solution. This can include GUI generation and event handling, O/R tools or unit tests.

There are also scripting mechanisms, which automate tasks related to packaging applications in executable archives. Within the Java platform, Ant has been one of the favorite tools among developers, notably because its scripts use the XML standard.

There are tools which take care of many of the tasks related to project supervision and continuous integration. Dependencies between various code archives are handled automatically, project supervision websites are generated to share information with all members of a team, depending on their responsibility, etc. Within the Java platform, Maven remains the archetype of this kind of project tool.

Low-level tasks (writing HTML code, writing SQL queries, building a GUI, or creating project archives) are increasingly delegated to a set of automations. This progressive automation is indeed nothing but a form of simplicity through hiding. Arguably, we are thus witnessing obsolescence by hiding in the sense that the primitive mechanisms still exist but they are not used explicitly anymore. They disappear at the bottom of the technology stack.

The automation of low-level tasks thus essentially contributes to vertical complexity of IT.

Obsolescence driven by abstraction

Low-level complexity is progressively wrapped within new abstractions.

Partly related to the above progressive automation process, there is a progressive wrapping of complexity in new abstractions.

Consider, for instance, the basic task of creating a dynamic web page using the Java platform. At the bottom of the API stack, there is the possibility to write an HTML page explicitly as a plain string of characters and send it to an HTTP stream. As this quickly becomes tedious, the Java platform provides an abstraction of the request-response paradigm that underlies any web server: the servlet API. But using this API still entails manipulating low-level HTTP concepts, such as writing to output streams and managing sessions. Therefore, on top of the servlet API, we have the JSP mechanism, which eases the writing of dynamic HTML pages a bit. Building sophisticated pages with a great deal of interactivity remains, however, a complex task, even with JSP available. Therefore, still another layer has been added, namely JSF, which is a standard for creating graphical components. Well… unfortunately writing JSF code is still no easy matter, thus there are wizards that generate part of this code… and to benefit from Ajax4 features, add to this the ICEFaces framework!
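A minimal sketch of the servlet layer shows why further abstractions kept being piled on top of it: the page is still assembled by hand and written to the raw HTTP response stream. The class below is a plausible but hypothetical example.

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        // The servlet API abstracts the request-response cycle, but the HTML
        // must still be emitted by hand; JSP, and later JSF, hide this step.
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<html><body>");
        out.println("<h1>Hello, " + request.getParameter("name") + "</h1>");
        out.println("</body></html>");
    }
}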

Here again, obsolescence corresponds to a progressive phasing out of the low-level technologies in favor of these higher abstractions.

Just as for automation, nesting of concepts into abstraction essentially contributes to vertical complexity.

4.1.2.1. Why complexity increases

The exponential increase in resources and the correlated quick obsolescence of technologies have obviously directly benefited users in various ways: response times have decreased and graphical user interfaces have steadily become richer and slicker. Applications are moreover continuously scaling to accommodate ever-larger numbers of simultaneous users. Definitely, IT complexity has not been completely useless!

But on the other hand, let us face it; this seemingly limitless access to resources has also favored a kind of IT obesity and even IT anarchy. The reason for this is simple: as computing resources (CPU and RAM) now appear essentially limitless, some developers tend to lose any sense of economy and sobriety in their designs. As a consequence, the complexity of solutions has often grown totally out of proportion with the problems that had to be solved (or coded).

As an extreme example, consider an MDA tool to generate code for fetching a single row of data from a database. This code will most likely include calls to an O/R framework which, in turn, uses a large array of APIs whose code, in turn, runs on a virtual machine, which itself runs on top of a virtual operating system. The same result could be achieved by issuing an elementary, manually written, SQL request directly to the database!

Let us review the various other ways in which quick obsolescence generates growing heterogeneity and, hence, more complexity:

– First, the time span between successive versions of a given technology (say Java EE 5 and Java EE 6) or between two equivalent technologies (say Java and Ruby) is usually too short to allow for a systematic and global update of all IT within any company of significant size. Inevitably, the successive versions of a given technology and equivalent technologies will accumulate over the years and progressively increase technological heterogeneity.

– A thorough mastery of technologies by IT engineers often proves elusive. Technologies are so numerous and change at such a fast pace that building genuine technological expertise, not to mention a technological culture, is most often practically impossible. True conceptual understanding and professional mastery are replaced by a sloppy, trial-and-error approach, because there is just no other option.

Within the Java platform, the EJB technology (a specification for distributed business components), especially its first two versions, was notoriously so intricate that only a few developers ever mastered it. Many even avoided using it altogether. The same seems to have happened more recently with JSF (a specification for building graphical components) and its various extensions. The lifecycle of these objects is so complex and subtle that many developers wonder whether learning this technology is really worthwhile and often prefer to stick with more straightforward solutions with which they have at least gained some experience (e.g. Struts).

This remark is truer still for the tools supporting these APIs, which inherit their instability. Many tools, helpers, wizards, and templates were initially designed to enhance productivity but never had the opportunity to mature to become reliable and productive. Doing things the pedestrian way often proves faster.

In the worst case, which is unfortunately by no means uncommon, “freshman” developers use technologies, especially frameworks, without a clear understanding of their rationale. They use a framework mainly because it is already there, because some acronyms have reached the ears of decision makers, or because some manager, who read the latest hype published in the IT press, has told them to do so.

The authors have several times met developers who used the Java Spring framework on a daily basis without knowing what dependency injection5 is really good for!
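For the record, here is a minimal sketch of what dependency injection is good for; the names (TaxCalculator, InvoiceService) are hypothetical and the wiring is done by hand rather than by Spring.

interface TaxCalculator {
    double taxFor(double amount);
}

class FlatRateTaxCalculator implements TaxCalculator {
    public double taxFor(double amount) { return amount * 0.2; }
}

class InvoiceService {
    private final TaxCalculator calculator;

    // The dependency is injected rather than hard-wired with
    // "new FlatRateTaxCalculator()", so implementations can be swapped
    // (e.g. a mock in unit tests) without changing this class.
    InvoiceService(TaxCalculator calculator) {
        this.calculator = calculator;
    }

    double totalWithTax(double amount) {
        return amount + calculator.taxFor(amount);
    }
}

public class DependencyInjectionExample {
    public static void main(String[] args) {
        InvoiceService service = new InvoiceService(new FlatRateTaxCalculator());
        System.out.println(service.totalWithTax(100.0)); // prints 120.0
    }
}

A container such as Spring merely automates this wiring; using it without grasping the pattern is precisely the kind of dilettantism described above.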

The massive use of contract developers from software engineering companies makes this issue even more acute. Typically, these developers will face a new technological environment every 6 months or so, when they hop from one project to another. Their management expects them to adapt to their environment and to be quickly productive, which leaves little room for any deep understanding or professional mastery of anything. Hence, forced dilettantism is the rule more often than not.

Paraphrasing our earlier discussion on simplicity through learning, we can summarize the above remarks as the contrapositive statement: we observe complexity through absence of understanding.

– Closely related to the above is the impossibility of making an educated choice among IT solutions or tools. Quite often, solutions are roughly equivalent, and pinpointing the subtle differences among a range of solutions would require an effort that is beyond what most IT departments can afford. Under such circumstances, making rational choices becomes nearly impossible. The most common substitutes for reason are well-known: playing political games and/or practicing divination.

– Finally, and for the sake of completeness, let us recall the complexity that stems from the illusion of limitless resources that we mentioned earlier.

4.1.2.2. Implementing simplicity

Paralleling the previous remarks, let us see what simplicity has to offer:

– There is no obvious way to bypass the need to regularly upgrade IT to new versions and standards. Nevertheless, it is good to be aware that the SaaS6 model now provides a new kind of solution for some applications and services. Recall that SaaS is an emerging model for buying and consuming computing resources. Rather than buying licenses for software packages from vendors or designing custom software and running it in-house, the SaaS model proposes to buy ready-to-use services from a service provider. The provider is in charge of maintaining and scaling up the technical infrastructure that hosts the services, while customers have strictly no installation and no maintenance tasks under their responsibility. Currently, most applications and services that are likely to be migrated to the SaaS model are commodities. They include services like email, agendas, collaborative tools, backup tools, or tools for workflow management. Critical business applications remain for the moment in-house for obvious security reasons.

The prominent players in the SaaS marketplace are currently Google, Salesforce, Amazon, and Microsoft. They mostly propose online office suites and an array of collaborative tools, such as Google Apps. Business process management tools are available as well.

There are also intermediate solutions between the classical in-house hosting and a full-fledged SaaS model. A provider can, for instance, propose a hosting infrastructure on which customers can run their applications. In this model, customers are basically just renting some quantity of CPU horsepower and some storage space. The maintenance of the hardware and all low-level IT services, such as web servers, is the responsibility of the provider, while the maintenance of middleware and applications is the responsibility of the customers. Still another possibility is that a provider offers a complete hosting platform, including application servers and databases.

All these SaaS outsourcing variants are potential ways to mitigate uncontrolled complexity generated by IT obsolescence.

From the point of view of SaaS customers, this new kind of outsourcing is nothing but a specific form of simplicity through hiding complexity far away in the cloud!

– To fight IT dilettantism there is no miracle solution either. There are basically only two ways we can suggest. The first concerns the hiring process for IT professionals. The second concerns keeping proven IT professionals once they have been recognized.

Many job offers begin with a list of products and technologies the new employee is expected to master, something like: “the candidate must have working experience with Java EE 6, PHP 5.3.5, MySQL 5.5.9, LIFERAY 6.0.5, JBoss AS 5 Development…”

In other words, emphasis is wrongly placed on the most volatile kind of knowledge, namely knowledge of IT products, while more robust skills, such as conceptual thinking or deep knowledge of basic computing principles, are overlooked. And yet, basic knowledge would be the most valuable for developing intuitions on how to maintain a good balance between performance and maintainability in an IS. There is no deep mystery here. Filtering candidates on the basis of a list of products and then evaluating them on their degree of compliance with a given corporate culture requires fewer skills, from a recruiter, than evaluating a candidate's abilities at sound conceptual thinking.

One of the authors remembers a joke that, a few years ago, could be read on the signboards of the mathematics department at Princeton University: “First rank people hire first rank people. Second rank people hire third rank people…” This slightly elitist joke, expressed using the metaphor of a mathematical sequence, illustrates quite well, we believe, what is really at stake in any serious recruitment process based on real skills…

Assuming these precious and rare skilled employees have been hired, they should now be kept for a long enough period to favor the emergence of a genuine enterprise tech culture. This entails, in particular, encouraging longer technical careers (through the classical incentives) rather than encouraging everybody to become a manager or to shift to supposedly nobler business activities. Unfortunately, many IT departments now look more like a hodgepodge than like structures dedicated to maintaining high levels of competence and intellectual integrity.

No doubt, some will consider this as a symptom of an overly idealistic worldview. We believe, however, that these remarks deserve at least to be meditated upon a little, because they are at the core of much of the nonsense in today's IT world. Referring once more to our simplicity principles, we summarize the above remarks in the following motto:

The IT organization and the hiring process should focus on making simplicity through learning possible at all.

– Let us address the impossibility to make an educated choice among different IT solutions. Our advice here is simple: avoid altogether any long and expensive comparative studies between tools or software packages. Experience shows that, most of the time, these solutions are by now nearly equivalent.

What matters most is developing true mastery and expertise of the tools, rather than comparing their feature sets.

Favoring simplicity through learning, as advocated in the previous point, and systematically choosing the most standard and ubiquitous solutions, rather than the fanciest ones, is often your best bet. In the long run, knowing the details of the limitations of a product often proves more profitable than using fancy solutions that are only partly mastered. This is an example of simplicity through reduction (of the number of things to compare and to learn).

– One basic practice that is perhaps worth recalling to limit obsolescence is the following:

Limit all specific development to business code. Do not attempt to reinvent the wheel by developing technical frameworks in-house.

Technical frameworks (Spring, Hibernate, Struts, MVC implementations, etc.) are sophisticated systems that require narrow specialization in algorithms and months of dedicated development. Such endeavors are way beyond what most IT departments can afford. Frameworks developed in-house will soon be totally outdated by more reliable products from the open-source community, and will only make heterogeneity worse.

– Complexity that is a consequence of the illusion of limitless resources was discussed earlier in sections 2.2.4 and 2.3.2. For convenience, we recall here what the main conclusion was:

In many instances, encapsulating a lot of complexity can be traded for a moderate amount of learning.

4.1.3. Absence of technological vision and leadership

The last cause for growing heterogeneity we want to examine, after openness and rapid obsolescence of IT, is the absence of technological vision. Often, the absence of technological vision and leadership is itself a consequence of the fast pace at which IT evolves. Nevertheless, for the sake of clarity, we prefer here to consider it as a separate cause.

4.1.3.1. Why complexity increases

The context we have in mind here is one where a do-it-yourself culture dominates the IT department. Developers, even beginners, are left without guidance and coordination; they are free to choose the technologies they like. In such circumstances, technology standards are often nonexistent or, when they are available, they are not implemented, as there is no authority to enforce them. IT choices are made opportunistically, project by project. Finally, there is no anticipation, and problems are solved in a mostly reactive mode.

When tech gurus are in charge of making choices, they are often tempted to follow the latest hype, which they associate with a new playground.

A recent example of tech hype occurred in 2009-2010, when SharePoint was presented as an ambitious portal solution. Just one year later, however, many customers realized they had been fooled by marketing. They are now looking for a better-adapted solution.

This tech-hype situation is, however, increasingly rare.

Conversely, when managers with no IT background are making technological choices, they tend to favor short-term or purely political choices, without much consideration for building a sustainable IS.

When technology leadership is absent, the belief often develops that sophisticated project-management tools and processes can be a substitute for technical competence and responsibility. In other words, there is a “hide-behind-processes” kind of irresponsibility that develops, because making IT choices obviously requires both competence and responsibility! Let us face it: in many contexts, making decisions and assuming responsibilities does not actually favor climbing the professional ladder, especially if you are in a hurry.

Processes are no substitute for technological competence and a sense of responsibility.

In the end, absence of technological vision only favors more heterogeneity and prevents the emergence of a genuine technical culture in the long run. Absence of technological leadership is especially harmful when coordination is required across several departments or business sectors.

As an example, implementing SOA architectures requires harmonizing a number of things, such as creating a shared model of business objects, defining a policy for versioning successive releases of services, and setting up a mechanism for publishing new services in a shared directory. All of these need a system-architecture group that has the authority to make choices and to enforce them across all projects. Many SOA endeavors failed in recent years because such a transverse coordinating structure was missing, not because of any technological immaturity of web services standards.

Appendix 3 discusses more thoroughly the reasons for the failure of many service-oriented architecture (SOA) projects.

4.1.3.2. Implementing simplicity

Solutions for creating and maintaining a healthy tech culture are basically the same as those needed to prevent IT dilettantism, discussed in section 4.1.2. They concern the hiring process and incentive mechanisms. Promoting IT leadership implies acknowledging the importance of establishing a sound hierarchy within the IT department that is based primarily on technological competence and the ability to share it with others. Again, good processes will never be a substitute for the latter. Simplicity through collective learning is at stake here. It can only be developed in-house, and therefore the use of external skills should be limited to the most specialized tasks.

The overemphasis on processes amounts to confusing simplicity through organization, which is no doubt necessary, with simplicity through learning, which, let us emphasize it once more, has no substitute.

To promote long-term thinking, we should probably relate the incentives of IT management to long-term results rather than to the success of single projects, which notoriously encourages short-term thinking and narrow-mindedness. Simplicity through time saving should be evaluated on significant timescales, which exceed a single project's lifetime.

4.2. Changing requirements

For decades, delivering software that matches users' needs, on time and within budget, has been the Holy Grail of the entire software industry. It is probably fair to say that, until now, this fundamental issue has not received a satisfactory answer, and things do not look much better for the foreseeable future. Unsurprisingly, we will not pretend here that we have solved this longstanding issue and hope our readers will not be too disappointed about it! In this section, we more modestly focus on the issue of producing software that meets users' requirements. We examine this question keeping in mind the growth of complexity that results when these requirements change over time.

Figure 4.3. Changing requirements, during the project and after an application has been developed, are a major source of uncontrolled complexity when they are not anticipated

ch4-fig4.3.jpg

The problem with changing user requirements is twofold:

Applications must adapt to changing market conditions. This is true for functional requirements, that is new features are needed, as well as for non-functional requirements, that is, the application must scale to accommodate more users. Such changes usually occur over a period of several months or even years. We define flexibility as the ability to cope with such market-driven changes.

Specifications of applications change because they were not understood properly in the first place. This is more related to the difficulty in formalizing requirements in an unambiguous way. Changes of this nature occur over much shorter periods of time. They typically span the lifetime of a project. We define agility as the ability to cope with unclear requirements during the design of the system.

Numerous answers, both technical and organizational, have been proposed to address these fundamental issues. Before we review these answers, let us briefly describe the impact on value and let us examine why changing requirements increase complexity.

Impact on value

Flexibility, understood as the ability to adapt to changing market conditions, is quite straightforwardly related to strategic value. More precisely, it is related to maintaining a high strategic value when conditions will change in the future.

Flexibility will also help maintain user satisfaction when new features are needed. Again, flexibility will guarantee that use value remains high when conditions change. Agility is also, in some sense, related to user satisfaction, as this will guarantee their needs will really be taken into account starting with the design of applications.

Finally, flexibility is also, of course, an attribute of the sustainability value.

4.2.1. Why complexity increases

Quite generally, and beyond the scope of ISs alone, we should realize that a tension is involved when we try to build systems that are both optimized and, at the same time, adaptable to changing requirements. The reason is simple:

Systems that are optimal are usually not flexible.

Totally optimized systems are usually not adaptable. There is thus a subtle balance to be found between optimization and adaptability, where the contribution of a system to innovation (which then strongly contributes to the strategic value) reaches its maximum. This line of thought was originally developed by a group of theoretical ecologists around Robert E. Ulanowicz and was later applied to various domains, such as monetary crises, the Internet, and even civilizations. As far as we know, it has not been applied to IT yet, but it seems reasonable to speculate that ISs that fail to find this subtle balance point between optimization (of performance, of number of features) and adaptability (to varying demands) probably contain much unwanted complexity. We will not pursue this line of thought here, as it would require a deeper analysis; instead, we refer the interested reader to Ulanowicz's seminal work [ULA 09].

Coming back to our daily experience with IT, we can recognize two causes of growing complexity. Both follow as we acknowledge that flexibility, when needed, should be anticipated during software design:

– Uncontrolled complexity is generated when flexibility appears to be needed but was not anticipated during design. Accommodating more flexibility then often turns out to be synonymous with increasing customization. The parameter space and data types that must be handled by the software become larger than initially expected. New exception-handling mechanisms must be implemented to ensure that submitted data are valid and that their integrity is preserved when they are processed. Lack of anticipation most often translates into an unmanageable “combinatorial explosion” of cases that must be handled. Unanticipated new data types also require new conversion mechanisms that progressively accumulate until they become unmanageable. All of these, in turn, introduce unpredictability into the system.

– Needless complexity is also generated, in a more insidious way, when flexibility was wrongly anticipated while it was actually not needed! Implementing flexibility definitely requires additional design effort and more abstraction layers. When flexibility was not needed, this work is useless and, worse, the layers of abstractions that were created lead to software whose complexity is incommensurate with its original purpose. Such situations have occurred frequently in recent years, partly because of extensive hype around SOA and flexibility, which both became almost an IT dogma (see Appendix 3 for a more detailed analysis).

We can tie in here with the analysis provided in the alignment trap paper that we mentioned in section 3.3.3 by making the following identification:

Companies which attempt to achieve flexibility in their business processes before they have even identified stable components and a stable business semantic are precisely those which are stuck in the alignment trap.

Finally, let us come to the complexity stemming from the need for agility. Recall that agility refers to coping with unclear or incomplete requirements while software is being designed. In this case, complexity is generated when it is not acknowledged that classical, linear project planning is not suitable anymore.

Later, we shall review the technical and organizational answers that have been traditionally given to these agility and flexibility issues and interpret them as simplicity principles.

4.2.2. Implementing simplicity

4.2.2.1. Technical answers

Technical answers primarily address the flexibility issue. Basically, there is just one useful idea that helps cope with changing requirements, namely identify things that do not change!7 Roughly speaking, changing requirements can be addressed by recombining existing, stable components with minimal additional design.

This is the idea of reuse that we shall now examine. There are many different kinds of reuse that can be useful in shaping a flexible IS. We will classify them according to the layer of the IS architecture where they occur.

4.2.2.1.1. Reuse in software architecture

At this level, reuse takes the form of libraries and frameworks that encapsulate expert knowledge in design patterns and algorithm implementation. More explicitly, this form of reuse implies some pieces of code being used several times in different places.

The Spring framework for instance, which implements the IoC design pattern, can be reused several times in different business applications.

This kind of reuse occurs at the design stage. In a sense, it is a static form of reuse. It is an application of simplicity through hiding (encapsulation of expert knowledge) and of simplicity through reduction (the same solution is factorized to be used in many places).

Promoting systematic reuse of this kind is the responsibility of a global architecture team that should be granted sufficient authority by the CIO to actually enforce a clear set of rules.

4.2.2.1.2. Reuse in functional architecture

It is not uncommon that different business applications have to use the same services. They all need an authentication service, for example. The authentication service, in turn, works with a corporate directory of users, which is also used to define access rights for business applications. Services that manage lists of customers or products, as well as billing services, are also profitably shared.

When these kinds of services are shared, we speak of reuse at the application-architecture level. The most advanced form of this type of reuse is actually described by SOA principles, which have received a lot of attention in recent years. Yet, blind application of these SOA principles has led to much disillusionment. We defer critical examination of SOA architectures to Appendix 3 because this requires the introduction of a new concept, the operating model of a company, which would be somewhat off-topic here.

When looking for stable elements in an IS, we should also look for stable data, stable business rules, and stable business processes. Identifying and managing referential data in a company is the subject known as MDM8. Master data are opposed to transactional data because they are usually managed using different policies and different tools.

The IS Rating Tool promoted by Pierre Bonnet [BON 11] is a framework that aims to measure what he defines as the intrinsic value of an IS. The basic idea is to rate the knowledge, the governance, and the technical quality of three types of repositories: MDM, business-rule, and business-process repositories9. For this, he enumerates an extensive list of measurement points.

4.2.2.1.3. Reuse on the semantic level

This is perhaps a less familiar form of reuse. It is especially useful when setting up SOA architecture, which requires defining a pivot format for data handled by the services. Such a format defines the semantics and syntax of data that are shared among the services. For larger companies, the business vocabulary to be formalized can include hundreds of entities. Experience shows that the task of organizing such a large number of concepts in a coherent whole is often beyond what even the largest IT department can afford. Establishing such reusable models of entities and processes is really an R&D task that must be organized at the level of an industry: banking, insurance, car industry, pharmacy, railroad industry, telecommunications, and so on.

The eTOM10 is such a model: it defines the most widely used and accepted standard for business processes in the telecommunications industry. It is probably not just by chance that one of the fastest-moving industries was also one of the first to promote this new form of capitalization.

Whenever such semantic models exist, they should be used, even if this implies extending them to match a company's specificities. We believe there is a huge source of savings in using this type of industry-wide business object and/or process models. Unfortunately, the potential of this type of semantic and syntactic capitalization has not yet been widely acknowledged. The reason is probably that companies that own such a model consider it a hard-won competitive advantage that they are not eager to share.

To conclude this section on flexibility, note that there is actually a close parallel with section 4.1.2, namely:

Changes in technologies are best addressed by emphasizing what is stable in computing: basic concepts, patterns, and intuition on balancing performance with maintainability.

Changes in requirements are best addressed by emphasizing what is stable in the IS: software components and the semantic of business objects and processes.

The best way to cope with change is to clearly identify what is stable.

4.2.2.2. Organizational answers

Organizational answers address the need for agility during design time. They can also help identify situations in which building flexibility can be avoided, when writing quick-and-dirty throwaway code turns out to be the best solution. As these solutions are only loosely related to our simplicity principles, we list them here for the sake of completeness and coherence, but we will not go into any details.

4.2.2.2.1. Achieving agility

Many project-management methods have been designed in recent years to achieve agility in software design: RUP11, XP12, and Scrum13, just to name a few. Countless books have been written on the subject, but these methods more or less all share a common set of features or best practices:

– They are iterative and incremental, meaning that software is designed in steps rather than planned in advance. It is fair to say that the implementation of an iterative process needs dedicated tools for performing continuous integration (Maven, Ant, etc.) of the project. The answer to agility is thus partly technical as well.

– They involve the users much more than more traditional project-management methods. In a sense, they all acknowledge that customers do not precisely know what they want at the beginning of a project and that they often will change their minds. Each increment or partial release of the system is meant to provoke remarks from customers, which in turn will provide a better understanding of what their needs really are.

Loosely speaking, these best practices can be related to simplicity through learning, because they all assume that learning what the users want is essential and that this takes an incompressible amount of time. XP explicitly advocates building a system in the simplest possible way but with no further guidance as to what exactly simplicity means.

Using continuous-integration tools is an instance of simplicity through hiding: low-level tasks of building the systems are handled by automations because humans are far too error-prone for such recurrent, low-level tasks.

Collective code ownership, promoted by XP, means that responsibility is shared among all developers in a team. One XP practice explicitly requires each developer to consider his colleagues as equally competent to change the code when this is required. Thus, we have an instance of simplicity through trust.

4.2.2.2.2. Deciding when writing throwaway code is the best option

There are indeed cases when writing throwaway code turns out to be the best solution. There are no strict rules to follow here, as this will depend mostly on the expected lifetime of a piece of code and the likelihood of its future reuse. Nevertheless, a little classification of applications can probably help in making this decision:

– On the one hand, we have critical business applications for business users. These are usually complex systems, which will probably require change over time when market opportunities vary but which are stable enough to consider building sustainable software.

– On the other hand, we have applications for customers, which depend on much more haphazard events. For reasons of image or immediate competitiveness, it may be preferable for these applications to use the latest technologies, without worrying too much about sustainability.

In the finance sector, there are many situations that can warrant producing quick-and-dirty code. Portfolio managers, for instance, often quickly need some new service, which will help them perform risk or profitability analysis and will be used only for a very limited period of time.

These are good candidates for throwaway code.

Proponents of agile methods (see Lean Software Development, discussed in section 2.2) usually consider that emphasizing flexibility too much is a mistake. They claim that the amount of capitalization work involved in achieving flexibility is unrealistically large. We consider such a point of view excessive and prefer to distinguish between the two categories of applications above: one deserves capitalization, while the other does not.

4.3. Human factors

From the beginning, we emphasized that evaluating the complexity or the value(s) of an IS is a difficult and ambiguous task, because the technical aspects of IT are deeply entangled with human factors. By human factors, we mean such diverse elements as team coordination, commitment management, lifelong learning of technologies, or relations with customers. The range of social and cognitive skills that play a significant role in shaping the evolution of an IS is broad. Analyzing them could probably warrant a book in itself. For this reason, we will not seek completeness here, but rather focus on a limited number of topics that play a clear role regarding complexity issues. We restrict ourselves to human factors directly witnessed in the course of IT projects in which we were involved as IT consultants. Some of the issues addressed here will partly overlap those discussed earlier. Nevertheless, we consider it useful to look at them anew, this time from a more social and cognitive perspective.

Figure 4.4. Three important human factors that can significantly contribute to an uncontrolled increase in complexity: multidisciplinarity of skills, demotivation of individuals, and the fact that the global interest usually differs from local interests

ch4-fig4.4.jpg

The impact on value of these human factors will be discussed separately in sections 4.3.1 to 4.3.3.

4.3.1. Multidisciplinarity

4.3.1.1. Why complexity increases

The task of building and maintaining an IS in operational condition requires the collaboration of many different skills, probably more than in most other professions. Uncontrolled complexity can result from the many ambiguities and misunderstandings that occur when these people with different skills, vocabularies, and backgrounds must exchange information. Moreover, individuals, especially IT specialists, are commonly expected to master many different technologies. When the number of technologies grows too large, it will promote dilettantism, which in turn will favor disorder and unpredictability.

As an extreme example, consider the skills expected from a good IT project manager. He/she needs to communicate with three different kinds of stakeholders with very different backgrounds:

  – The development team, in charge of implementing the project. As a leader, the project manager is expected to understand at least the basics of the technologies that will be used in the project. This is necessary to be able to make sound decisions and arbitrate among various IT solutions. A good technical background will also contribute to his/her credibility with the team.

  – The business management, which has mandated the implementation of the system. As a privileged interlocutor of the business management, the project manager should be able to grasp the basic ins and outs of the business processes and also master the associated business vocabulary.

  – The CFO who is responsible for controlling costs and making sure that the project stays within budget. The project manager will need to negotiate with him/her any possible budget overrun.

In the end, the IT project manager will have to decide what is technically feasible or not under the given constraints of time, skills and budget.

The role of IT project manager is probably the extreme example of the need for multiple competences and for the ability to communicate on different levels. While most other stakeholders can (and often do) hide behind specialization in an attempt to limit their responsibilities (to technical, financial, or organizational matters), the IT project manager is exposed to the technical, organizational, and financial fronts simultaneously. Ideally, he or she is an IT specialist and a diplomatic negotiation expert. This is a tall order for a single individual.

Even within IT, the range of skills needed in an average project is usually quite large: developers, for instance, need to master many programming languages, IDEs14, and integration tools. This is related to what we termed horizontal complexity in section 4.1.

A high horizontal complexity increases the number of technologies that each IT specialist should master.

Vertical complexity, which we recall is related to the different levels of abstraction present in an IS, is somewhat different.

A high vertical complexity increases the need for specialists in different IT skills to exchange information and knowledge across their domain of expertise.

As language and vocabulary usually depend on the domain of expertise, this raises the chances for ambiguities or misunderstanding.

When a business specialist refers to a “business object”, a developer might quickly associate this concept with an OOP “class”, while a relational database specialist will associate it with a “table” in a schema with primary and foreign keys. Although similar, these concepts are different and must not be conflated.
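A minimal sketch of this divergence, with hypothetical names: the developer navigates by object reference, while the database specialist thinks in tables and foreign keys.

// The developer's view: a "business object" as an OOP class.
class Address {
    long id;
    String city;
}

class Customer {
    long id;
    String name;
    Address address; // navigation by direct object reference
}

// The relational specialist's view of the same "business object":
//   CREATE TABLE address  (id BIGINT PRIMARY KEY, city VARCHAR(100));
//   CREATE TABLE customer (id BIGINT PRIMARY KEY, name VARCHAR(100),
//                          address_id BIGINT REFERENCES address(id));
// Similar concepts, but with different notions of identity, navigation,
// and nullability, which is exactly where misunderstandings creep in.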

When many different skills are needed to accomplish a task, their respective specialists will first need to learn how to understand each other.

4.3.1.2. Implementing simplicity

Let us quickly recall the possible way to mitigate the above issues:

– Responsibilities should be shared among all stakeholders. Situations in which a single role (as the project manager described above) assumes most of the responsibilities should be avoided (simplicity through trust… that everybody will assume part of the responsibilities).

– Horizontal complexity is mitigated by defining and enforcing technological standards and avoiding a generalized DIY culture (simplicity through reduction… of the number of technologies).

– Vertical complexity is mitigated by:

‐ limiting the number of levels of abstraction in the architecture (simplicity through reduction). Recall that this was discussed in depth in the subsection “Good abstraction is a form of simplicity!” under section 2.3.2.

‐ requiring each IT or business-process specialist to learn the basics of the vocabulary and concepts of his or her co-workers (simplicity through learning).

Everyone should learn part of others' jobs!

‐ promoting, in the long run, a uniform and companywide communication culture, based on a consistent use of standard notations such as UML and BPMN.

‐ setting up incentives that favor communication and skill sharing at least as much as the development of advanced technical skills.

Impact on value

Perhaps the most direct relation of multidisciplinarity to value concerns the use value. Understanding users' needs, documenting them precisely, and judging what is reasonably feasible requires people who combine technical skills with a deep understanding of the business processes, so as to see how these could benefit from IT.

On the purely technical side, limiting software complexity (horizontal and vertical) via the simplicity principles above can only improve the sustainability value.

4.3.2. Disempowerment of IT Skills

4.3.2.1. Why complexity increases

There is a vicious circle that generates complexity, which was implicit in several of our earlier remarks. Let us state it more explicitly here. Consider an IS with a large overall vertical complexity (i.e. many nested abstraction levels or scales) and also a large K-complexity (see section 2.1.5) in most of its architecture layers (i.e. models are extensive and detailed). This complexity makes it difficult for stakeholders to build a coherent mental overview of the system as a whole. Under such circumstances, decisions regarding technical or functional changes in the IS are made in partial blindness. Nobody, even within the IT department, has a clear idea of the interdependence of the various components, and opportunities for reuse are often missed. As a result, needless complexity and randomness can only increase further. End of loop!

Being aware of this vicious circle, we might imagine that the ultimate solution would be to maintain a consistent multiscale set of models for the whole system: the hardware infrastructure, the application architecture, the software architecture, the physical deployment, the business processes, and so on should each have an up-to-date model, and these should be available companywide. Actually, even this would not suffice, because the lack of a global overview has a deeper and more insidious cause, namely the demotivation and disempowerment of IT stakeholders that result from a biased idea of rationalization. The point is that building an overall view of a complex system requires an incompressible amount of intellectual effort to which only self-motivated individuals will consent. No matter how much modeling is done, it will not help much if this basic motivation is absent from a majority of IT stakeholders. We are, here, at the true junction of technical and human issues.

Mastering complexity, in the end, can only be accomplished by self-motivated individuals.

As inelegant as it may sound, proletarianization15 is the term that best captures what we have in mind. The term has traditionally been associated with the working class, where it refers to people who are progressively disempowered because their initial expertise has been replaced by machines as a consequence of a rationalization effort. As the pressure for rationalization keeps increasing, the computer industry is now facing proletarianization as well, and it does so at all levels of responsibility:

– Business users now see more and more of their original tasks being automated in software packages. Parts of their expertise and their creativity are considered obsolete and counterproductive.

– IT users and developers use an increasing number of tools to automate low-level tasks. Continuous-integration tools, mentioned earlier, now perform sophisticated deployment tasks on clustered architectures. IDEs offer wizards to quickly set up the skeleton of applications. Software frameworks encapsulate sophisticated algorithms and pattern implementations. Thus, progressively, the low-level workings are hidden. The danger is that this basic knowledge also gets progressively lost, and with it the intuition that is necessary to achieve the subtle balance between performance and maintainability. Yet, this intuition remains essential for designing sustainable systems.

– System administrators of mail systems, databases, and enterprise directories could disappear from many IT departments as the SaaS model spreads (see section 4.1.2).

These examples are really different instances of complexity encapsulation: encapsulation of business procedures, of IT tasks, and of administration tasks, respectively. Recall that in section 2.2.2 we identified hiding complexity as one form of simplicity. However, we have also seen that the concept of simplicity is much richer than hiding alone, hence our definition of simplicity as a combination of six different aspects. As a consequence, mitigating complexity cannot be achieved naively by just maximizing hiding.
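As a concrete illustration of such encapsulation, consider what a developer typically writes with an O/R framework versus the low-level plumbing that the framework silently performs. The sketch below uses standard JPA annotations; the class name is hypothetical and the SQL shown in comments is indicative only:

    import javax.persistence.Entity;
    import javax.persistence.Id;

    // What the developer writes: a plain annotated class. The framework
    // derives the schema, generates the SQL, and manages the object's
    // life cycle.
    @Entity
    public class Invoice {
        @Id
        private Long id;
        private String customerName;
        private double amount;
        // getters and setters omitted for brevity
    }

    // What the framework does behind the scenes (roughly):
    //
    //   INSERT INTO Invoice (id, customerName, amount) VALUES (?, ?, ?)
    //
    // plus connection handling, statement caching, dirty checking, and
    // transaction management. A developer who has never written this
    // plumbing by hand may lose the intuition needed to diagnose, say,
    // an N+1 query problem that the abstraction cannot hide.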

An excessive faith in the possibility to encapsulate any kind of complexity can lead to disempowerment and loss of motivation.

Let us now make the relationship between proletarianization and uncontrolled complexity more explicit. Proletarianization, understood as an excess of complexity encapsulation and standardization, encourages excessive specialization of skills. This overspecialization conflicts, in turn, with the ability to build a coherent mental picture of a complex system, which is essential to maintaining it in operational condition in the long run.

In IT departments, it is thus not uncommon to see people, especially young and inexperienced developers, being overwhelmed by the vertical complexity of their IT environment: there are tens of APIs to master, the IDE, the various continuous-integration tools, the entanglement of frameworks, not to mention the intricacies of business processes. In these situations, when no expert advice is available to guide them through the local tech-jungle, many people spontaneously develop an immunity to nonsense: they simply let their critical thinking atrophy and accept acting in absurd ways.

Modern IDEs all offer sophisticated debugging tools that allow pinpointing errors and exceptions in code efficiently, by setting conditional breakpoints or by running code stepwise and inspecting the content of variables. We have often met younger developers who avoided such debugging tools altogether and preferred the old-fashioned way, namely painstakingly writing explicit messages to the console. The explanation is simple: overwhelmed by tens of tools, APIs, and frameworks, they gave up even on the basic task of setting up and mastering their daily work environment.
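To make the contrast concrete, here is a small hypothetical sketch in Java. The same faulty input can be tracked down either by sprinkling console output through the code or, far more efficiently, with a single conditional breakpoint in the debugger:

    public class PriceCheck {

        static double applyDiscount(double price, double rate) {
            return price * (1.0 - rate);
        }

        public static void main(String[] args) {
            // The negative price plays the role of the hypothetical bug.
            double[] prices = {10.0, 25.0, -3.0, 40.0};

            for (double p : prices) {
                // The "old-fashioned way": sprinkle console output through
                // the code, re-run, and remember to remove it afterwards.
                System.out.println("DEBUG price=" + p);

                double discounted = applyDiscount(p, 0.1);
                System.out.println("discounted=" + discounted);

                // With a debugger, a single conditional breakpoint on the
                // line above, with the condition p < 0, stops exactly on
                // the faulty element and lets every variable in scope be
                // inspected, with no code changes and no redeployment.
            }
        }
    }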

Demotivated people end up concluding that “there is no serious hope in such a technical and organizational mess”. Getting used to nonsense eventually appears as the most reasonable and the least painful way to survive. In other words, proletarianization can eventually destroy the meaning of any kind of human activity. When critical thinking goes, so do the sense of initiative and the sense of responsibility.

Pushed to its logical limit, proletarianization amounts to considering people as simple resources, on the same footing as machines. Human minds end up behaving like machines, only in a less reliable way.

The example of the demotivated developer mentioned above is probably extreme, but proletarianization definitely exists to varying degrees in most IT departments. It could even gain momentum in the near future with the advent of the SaaS model, which will inevitably redistribute responsibilities within IT organizations.

4.3.2.2. Implementing simplicity

Sociological studies16 have shown, time and again, that classical economic incentives are not suited to developing the kind of motivation that is needed to master complex systems. Unfortunately, this well-established finding does not yet seem to have made its way into much of the corporate environment.

Thus, the first step in implementing simplicity, against proletarianization, is perhaps acknowledging the following:

Proletarianization is the result of a simple mistake: forgetting the specifics of the human brain, whose primary fuel, when working on complex systems, is the feeling that the work makes sense.

The opposite of proletarianization within an IT department is unfortunately difficult to characterize. There is nevertheless a concrete working model for it, provided by the open-source software (OSS) community. The OSS community has achieved a tremendous amount of high-quality work, building software infrastructure that still underlies much of the modern web: the Linux OS, the Apache web server, the MySQL database, and the Hibernate O/R framework, to name just a few. The point here is that all of these were achieved using values and incentives that connect directly with self-motivation: working on things that make sense (because they are widely used), creating and sharing knowledge, and being recognized by peers17.

Figure 4.5. The fine line between IT chaos and demotivation


There is certainly no easy and obvious answer for implementing such a contribution economy locally, within an IT department. Nevertheless, keeping the OSS community in mind as a living example of collective efficiency cannot hurt.

The core of the difficulty lies in avoiding the following two antagonistic pitfalls:

– Avoiding proletarianization implies setting up a contributive organization in which individuals can take initiatives and be creative. This will promote individual responsibility and personal involvement. The “Empower the team” principle of Lean Software Development addresses precisely this issue of proletarianization by asking managers to listen to the developers first, rather than the other way around, so that their suggestions will be better targeted. The idea is also to hire people who have enough autonomy to get the job done without requiring extensive management.

– Avoiding IT chaos implies setting up an organization from which generalized DIY is banished and where standards are defined and enforced. Recall that this aspect of things was discussed extensively in section 4.1.3.

The fine line lies in between, and the only way the two assertions above can be reconciled is for different rules to apply to different IT populations.

We conclude this section on disempowerment with the following summary:

This book is best read as an invitation to deproletarianize IT. In other words, we claim that it is a good idea to stop believing that everything in IT can be encapsulated in rules, algorithms, and templates. Many of the trickiest complexity issues are indeed caused by a progressive loss of both common sense and a few simplicity intuitions.

Impact on value

Fighting disempowerment will clearly contribute to the sustainability value, which is strongly dependent on IT skills being able to respond to unforeseen technical changes in a creative way.

In specific cases, excessive automation of business processes can also prevent creative answers to changing market opportunities, answers that can only come from imagination and initiative. In this sense, fighting disempowerment is related to the strategic value as well.

4.3.3. Local interest is not global interest

4.3.3.1. Why complexity increases

Finding the best strategy to balance global interest against individual interest is a generic social issue faced by all human groups, whatever their size: countries, tribes, or companies. IT departments are, of course, no exception. As clearly neither politics nor sociology is the core subject of this book, we shall limit ourselves to a few simple remarks, which summarize situations that we have repeatedly witnessed on IT projects and that pertain directly to complexity.

The two common ways in which individual interest differs from global interest are the “resistance to change” syndrome, on the one hand, and the “playground” syndrome, on the other:

– The former attitude, conservatism, is the natural reaction of individuals who have adapted to an existing situation and who perceive change mainly as a threat to their privileges, comfort, or influence. Changes in IT are by no means neutral, because they often induce a redistribution of responsibilities. When an organization changes, for instance with the advent of SaaS, some specific expertise may no longer be needed because it is now outsourced. Similarly, a younger generation sometimes has new skills that the previous generation did not bother to acquire.

When Java emerged, ten years ago, as a generic platform for enterprise computing, tension was often observed between the “old” COBOL mainframe experts and the younger OOP generation.

Complexity then accumulates as a consequence of this inertia:

Simple or simpler solutions are not implemented, for fear of disturbing some influential stakeholders in their beloved habits.

– At the other extreme of the psychological spectrum, we find tech-hype, which pushes some minds to believe that the latest technologies are necessarily faster, better, or more reliable. For them, changing technology or tools is a matter for rejoicing: a new playground has just opened, and not trying it is simply not an option. Complexity then accumulates as a consequence of the quick obsolescence of most of these one-day-hype technologies, as discussed in section 4.1.3.

Sadly, patterns in which personal interest differs from global interest are nearly infinite.

Capitalizing on IT expertise, which is really simplicity through collective learning in action, often requires substantial discipline, as it implies additional work once a project is finished. This work is not always perceived as useful by the individual in charge of it, and it is thus yet another example where a company's interest and individual interests do not obviously coincide.

Besides the aforementioned examples, there are also many cases in which global interest is forgotten altogether because a group of stakeholders believes that it has an interest in the failure of some project. In the tense psychological and political situations that characterize many IT departments, these motivations for failure can actually become the dominant force in the evolution of an IS, whether the management acknowledges it or not. Chaos is then not very far away.

4.3.3.2. Implementing simplicity

We are aware of few general solutions to these issues. Two of them may sound trivial but are still worth mentioning:

– The first is to devise a set of incentives to align, as much as possible, individual and global interests. Apart from traditional incentives, the best solution within an IT department is to use technology in a more contributive way, as discussed in section 4.3.2. This can mean performing technology intelligence, publishing expert information on corporate blogs and at seminars, building prototypes, or getting involved in training beginners. Maintaining tech-enthusiasm and professional self-esteem at healthy levels is certainly a good way to align individual and global interests.

– Second, as this is a non-ideal world, it should be acknowledged that it is simply impossible to always match local or personal interest with the global, companywide interest.

Consider the aim of achieving companywide flexibility in business processes, say, to maximize the strategic value of the IS. Most of the time, this goal will require defining companywide standards for how processes are executed. Defining reusable services typically also involves defining new integrated data structures. These tasks will often disrupt the local decision processes and habits within the various business units. The local management may lose part of its autonomy in favor of the company management. Moreover, some local processes or data structures that had so far been considered perfectly efficient may need to be replaced with less efficient ones, only to comply with the companywide standards.
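As a minimal, hypothetical sketch of what such a companywide standard can look like at the code level, consider a shared service contract built around a single integrated data structure that every business unit must adopt, even where a leaner local representation used to suffice (all names are invented for illustration):

    import java.util.Optional;

    // A hypothetical companywide data structure: every business unit must
    // use it, including units whose local model was simpler or leaner.
    class CustomerRecord {
        final String globalId;    // the companywide identifier, not a local one
        final String legalName;
        final String countryCode;

        CustomerRecord(String globalId, String legalName, String countryCode) {
            this.globalId = globalId;
            this.legalName = legalName;
            this.countryCode = countryCode;
        }
    }

    // The reusable service contract shared across business units. Local
    // units gain reuse but lose the freedom to shape the interface to
    // their own needs: exactly the trade-off discussed above.
    interface CustomerDirectory {
        Optional<CustomerRecord> findByGlobalId(String globalId);
        void register(CustomerRecord customer);
    }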

It thus looks important to acknowledge the following fact of life:

Achieving companywide flexibility can require decreasing the flexibility of local processes and the local decision autonomy.

Increasing global flexibility usually involves a stage during which local flexibility first decreases. It is during this transitory period that global and local interests do not coincide. Such changes, of course, do not happen overnight. This is yet another illustration that simplicity through learning definitely needs time.

Impact on value

There is no obvious preferred link to any of the three concepts of value here, as the global interest could increase any of them. If we consider, however, that achieving companywide flexibility (as opposed to local flexibility) is the main goal of IT, then the link is most naturally to the strategic value.

 

 

1 O/R stands for Object-Relational mapping. This is the software layer in charge of converting the relational data model used in databases to the clusters of objects used in object-oriented languages.

2 ESB stands for Enterprise Service Bus. It is a piece of software, which enables the implementation of a service-oriented architecture. It supplies low-level services such as routing logic, security, and monitoring.

3 The most common version of this “law” states that computing power doubles every 18 months.

4 Ajax is a web-development method to create highly interactive web pages that do not need to be reloaded completely when data are updated, thus providing a much smoother user experience.

5 Dependency injection is an OOP design pattern ensuring maximal decoupling of components and thus a high maintainability of software. Spring supplies the basic software infrastructure for just this.

6 SaaS stands for Software as a Service.

7 Incidentally, it is interesting to note that physicists, who are interested in the dynamics of physical systems, use exactly the same strategy; they first look for constants of motion that are consequences of sets of symmetries.

8 MDM = Master Data Management is a set of methods that consistently define and manage the non-transactional data entities of an organization.

9 A thorough presentation of Pierre Bonnet's ideas is given in his book [BON 11]. A detailed table of contents is available at: http://www.sustainableitarchitecture.com/bookisrating.html.

10 eTOM = enhanced Telecom Operations Map.

11 RUP = Rational Unified Process, a process created by the Rational Software Corporation, since acquired by IBM. It is based on UP (the Unified Process), which complements the UML language.

12 XP = Extreme Programming advocates frequent releases and strong collaboration with customers.

13 Scrum is not an acronym for anything!

14 IDE = Integrated Development Environment.

15 The concept of proletarianization has been recently promoted by Ars industrialis, an international association created and led by French philosopher Bernard Stiegler, who advocates a “new industrial policy for mind technologies”, see http://arsindustrialis.org/.

16 See, for example, career analyst Dan Pink on the “science of motivation” and references therein: http://www.ted.com/talks/dan_pink_on_motivation.html.

17 This is, by the way, also the primary incentive within the scientific community.
