Chapter 5

Simplicity Best Practices


In seeking the unattainable, simplicity only gets in the way.

Alan Jay Perlis — Epigrams on Programming

 

5.1. Putting simplicity principles into practice

In the previous chapter, we categorized the various sources of uncontrolled growth in complexity for information systems and discussed how general simplicity principles can favor the creation of value. Now, to turn these principles into a more practical framework, we first describe a generic information system in terms of architecture layers, applications, and physical and logical dependencies. For each architecture layer, we then suggest concrete simplicity measures to mitigate the three main sources of unwanted complexity, namely growing technical heterogeneity, changing requirements, and the selection of human factors we decided to consider.

5.2. Defining a generic IS

The diversity of ISs is obviously huge, which makes it rather hard to define a sensible generic architecture. We therefore prefer to take the point of view of IS architects here and define three standard architecture levels:

The physical architecture: This includes infrastructure components such as hardware, network, security tools, archiving tools, and administration and monitoring tools that underlie an IS.

The software architecture: This includes, broadly speaking, any piece of code, modules, libraries, and frameworks that are used in the applications that are part of the IS. We further distinguish three sub-layers in the software architecture that are commonly referred to in three-tiered architectures.

Data-access layer: This includes all code that directly manages data, irrespective of the format: relational databases, XML files, or flat files. This code might be written by hand or might be generated. This layer also includes all O/R (object-relational) mapping code when OOP languages are used.

Service layer: This layer includes both technical and business services. Technical services, for instance, are in charge of providing access to resources such as databases, CICS1 transaction servers, document generators, mail services, or LDAP directories. Business services, for their part, provide a coherent view of business data such as customers and suppliers. They can either implement business rules or coordinate the invocation of other business services. Business services can rely on technical services.

User interface: This layer includes all code that supports the graphical user interface, whether it is a web interface or a rich client for stand-alone applications. It also includes portal technologies that aggregate various information sources.

The functional architecture: This covers the features of the IS essentially as they appear in functional specifications. If UML notation is used, this would include all Use Case diagrams, together with Sequence Diagrams and Activity Diagrams that complement them. Managing a customer contract or tracking of product shipping are examples of such Use Cases.

Figure 5.1. The three architecture levels to which we can apply simplicity actions: functional architecture, software architecture, physical architecture. The Software architecture is subdivided, in turn, into the usual three tiers: the user interface, the service layer, and the data access layer

ch5-fig5.1.jpg

An application in the IS usually spans these three layers, in the sense that it has relevant descriptions in each of them. Especially important for our complexity considerations are the dependencies between applications. There can be a physical dependency between two applications, which means that they explicitly exchange data at one or more architectural levels. There can also be a logical dependency, which means that data from one application may be used indirectly as input to another, in a business process that is not entirely automated.

5.3. A simplicity framework

This section should be regarded as a best-practice guide. It is not necessarily meant to be read in sequence. The table below lists some key sources of complexity within the different architecture layers; the items refer to topics covered in detail in the next five subsections.

[Table: key sources of complexity for each architecture layer; image not reproduced]

The next five subsections are organized as follows: for each of the above five architecture layers or sub-layers, we discuss the specifics of three main causes of complexity. We then discuss how this complexity can be evaluated and mitigated using simplicity actions.

5.3.1. Simplicity in hardware

In this section, we analyze how the growth of complexity in the hardware layer can be mitigated. The perspective is that of IT production teams, whose responsibility is to operate and maintain the hardware in operational condition. IT production and hardware accumulate most of the complexity factors that we identified in the last chapter and therefore make an appropriate starting point for describing a simplicity framework.

5.3.1.1. Growing technical heterogeneity

Heterogeneity can be quite pronounced in IT production departments and it is not uncommon that large IT departments still have mainframe AS/400 systems running in parallel with Windows and UNIX systems. For the sake of simplicity, we conventionally include the operating system when describing hardware.

Technical heterogeneity is often a consequence of the emergency mode in which much of the work is done: “Just get the darn thing working!” is the typical response. As a matter of fact, emergency often supersedes compliance with predefined standards and rules. Finding elegant solutions and using the latest technologies is considered even less important. In IT production departments, pragmatism is often pushed to its extreme and, once things run smoothly, the motto is: “Don’t touch it!” Under such circumstances, provisional solutions and patches will accumulate without anybody in particular worrying about the overall coherence and homogeneity of the system. As the vision of the mutual dependencies among the different pieces of hardware is progressively lost, reliability inevitably deteriorates.

When two companies and their respective IT departments merge, the heterogeneity can even increase in dramatic or unmanageable proportions.

5.3.1.1.1. Evaluation

Evaluating the heterogeneity of hardware should take into account both the diversity of systems and their age. Older technologies should certainly be penalized more than recent technologies.

A quick evaluation of the CAPEX and the OPEX2 can help determine whether moving to a new technology will prove profitable. This will provide an opportunity not only to mitigate heterogeneity, but also to help decrease maintenance costs.

5.3.1.1.2. Simplicity actions

As an application of the simplicity by reduction principle, the blueprint of each company should clearly define a default operating system for new systems and define clear rules for possible exceptions.

Applying the SaaS model, wherever it makes sense, is probably the best option for reducing technical heterogeneity in hardware, by suppressing it altogether. This is yet another application of simplicity by hiding, as the hardware is actually not “suppressed” but only transferred to the service provider. The decision to move to the SaaS model should not be taken lightly, though. Facilities-management tasks should be taken into account, as well as how much loss in IT skills the SaaS model might induce (see section 5.3.1.3).

Replacing all equipment to homogenize technologies is rarely a realistic option. A specific form of simplicity through hiding and abstraction that can be applied here is virtualization3.

5.3.1.2. Changing requirements

Hardware is relatively stable in the long run and stays largely unaffected by changing requirements, at least when compared with software architecture, which changes on much shorter timescales.

5.3.1.2.1. Evaluation

Try to determine the proportion of recurrent requests from internal customers (IT users, developers, etc.) that would be better addressed through more formal processes.

5.3.1.2.2. Simplicity actions

To cope with changing requirements using simplicity by reduction, a good solution for IT production is to provide a catalogue of solutions to their internal “customers” (research department, etc.). This restricted set of solutions should then be streamlined.

5.3.1.3. Human factors

IT operations are traditionally subdivided into many specialized tasks and thus, quite naturally, face a multidisciplinarity issue. These tasks usually do not require much creativity or deep understanding of IT, though. Experience shows that problems occur more because of a lack of communication between specialists than because of language ambiguities.

While IT operations can certainly appear quite inflexible at times to project teams, they are, by contrast, very flexible internally. The risk of proletarianization is thus not really significant, as IT-operation tasks remain largely a craft. The advent of the SaaS model could, however, change things in the near future. Clearly, some knowledge (administration of servers, databases, etc.) will appear less and less useful in this new context and could then progressively be lost. This could later prove penalizing when that knowledge is actually required. Some vigilance will also be required to make sure that the SaaS model does not destroy IT teams’ sense of responsibility by offering them the possibility to blame SaaS providers for all performance and availability issues.

IT operations people are often in close connection with business management and they are the ones who are woken up at night when something goes wrong. Their interests thus coincide rather naturally with the global interest of the company.

5.3.1.3.1. Evaluation

One obvious metric to use, to evaluate multidisciplinarity, is to count the number of people or skills necessary to solve each kind of issue.

5.3.1.3.2. Simplicity actions

One solution is to provide training to improve the diversity of skills mastered by each team member, which will thus decrease the number of people involved in solving a problem.

5.3.2. Simplicity in software – data access

5.3.2.1. Growing technical heterogeneity

Languages and tools for handling data are countless in this layer and they largely contribute to horizontal complexity when no technological leadership is available and no clear standards have been enacted. It is fair to say that this layer currently often remains quite anarchic. Over the last 10 years, technologies and APIs have appeared at such a fast pace that one version could rarely be fully mastered before the next arrived. Obsolescence has thus been relatively rapid, even if probably not as quick as in the user-interface layer. Some tools, such as ETL4 tools, can also favor increasing heterogeneity, as they allow conversion from one format to another, thus delaying the painstaking task of designing more consistent data models.

5.3.2.1.1. Evaluation

Taking inventory of the number of technologies, languages, and tools is a first step. Most of the time, this comes down to mapping out existing data structures, protocols, and libraries.

The quality and availability of technical documentation will also provide a good insight into the maturity of the data-access layer. If standards have been defined at all, one should evaluate how well they are enforced on each new project.

5.3.2.1.2. Simplicity actions

The first measure is to define and enforce a set of consistent standards that will apply to any new project. The responsibility for this should be assigned to a global team in charge of defining data structures. More precisely, for each technology (Java, PHP, .NET, etc.) and each type of repository (database, LDAP directory, mainframe, etc.) a custom data-access library should be defined and its use made mandatory. These libraries should provide customized data access for the data structures specific to the company. Usage rules should be enacted as to which technology should be used in which case and when exceptions to the standard rules can be tolerated. A technical expert should be appointed for each technology. The expert will be the privileged interlocutor for project teams when they need assistance or advice on that technology. Code reviews should include a rigorous check that the standards have actually been implemented. Up-to-date documentation should be maintained under the responsibility of the global team in charge of the design of the data-access libraries. This documentation should be readily available to all IT teams and training provided when necessary.
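To make this concrete, here is a minimal sketch of what such a mandatory data-access library might look like. All names (CustomerRepository, InMemoryCustomerRepository) are hypothetical illustrations, not prescriptions; a real library would target the company's actual repositories:

```python
from abc import ABC, abstractmethod

class CustomerRepository(ABC):
    """Company-wide contract for accessing customer data.

    Project teams code against this interface only; the concrete
    backend (relational database, LDAP directory, mainframe) is
    selected and maintained by the global data-access team.
    """

    @abstractmethod
    def find_by_id(self, customer_id):
        ...

    @abstractmethod
    def save(self, customer):
        ...

class InMemoryCustomerRepository(CustomerRepository):
    """Trivial backend used here for illustration only; a real
    library would ship, e.g., a database- or LDAP-based version."""

    def __init__(self):
        self._store = {}

    def find_by_id(self, customer_id):
        return self._store.get(customer_id)

    def save(self, customer):
        self._store[customer["id"]] = customer
```

A project that codes only against CustomerRepository can later be switched to another backend without touching business code, which is precisely what makes such a standard enforceable in code reviews.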

Palliative actions often come down to centralizing things that were scattered over time. Preventive actions, on the other hand, should aim to limit the allowed technologies (databases, middleware, directories, etc.).

5.3.2.2. Changing requirements

Data structures are notoriously difficult and expensive to change because they have a global impact on many applications and services. The data elements that are expected to need flexibility in the future should be identified early on. Otherwise, unexpected changes will always increase the chances of future inconsistencies.

5.3.2.2.1. Evaluation

The aim is mostly to identify accidental redundancies, inconsistencies, and ambiguities that have accumulated over time because of previous changes in requirements. These might be the future sources of uncontrolled complexity. This is really the topic of MDM. See also the evaluation in section 5.3.2.1.

5.3.2.2.2. Simplicity actions

Simplicity actions more or less all imply various forms of reuse. One important form of reuse in the context of data access is semantic and syntactic reuse, that is, the reuse of existing data models.

Whenever a data model has been defined by an industry sector, as an XML schema or set of UML classes, it should be systematically used, either “as is” or in an extended form.

Finding out whether such a model exists may require conducting a little investigation among partners or competitors in the same industry. Such models indeed embody a huge amount of accumulated experience and using them will strongly mitigate the risk of extensive changes to the data structure in the future. On the other hand, when no such model exists, creating pivot formats, which ease the exchange of data between services, follows the same logic. Because of the extensive amount of work this usually implies, such a pivot format should, however, focus only on the core business activities and the essential business applications and services.
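The value of a pivot format can be sketched in a few lines. This is a toy illustration with invented field names: with N formats to interconnect, each application needs only two converters, to and from the pivot, instead of one converter per other format.

```python
# Hypothetical pivot ("canonical") order structure: every exchange
# transits through it, so adding a new format means writing two
# converters rather than one for each pre-existing format.

def legacy_to_pivot(record):
    """Convert from a fictional legacy ERP format to the pivot."""
    return {"order_id": record["ORDNO"], "amount": float(record["AMT"])}

def pivot_to_webshop(order):
    """Convert from the pivot to a fictional web-shop format."""
    return {"id": order["order_id"], "total": order["amount"]}

def legacy_to_webshop(record):
    # No direct legacy-to-webshop converter exists: the pivot
    # is the single mandatory intermediate representation.
    return pivot_to_webshop(legacy_to_pivot(record))
```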

Another form of reuse is the definition of fine-grained CRUD5 services that can be used by more coarse-grained services.
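This layering can be sketched as follows, with hypothetical names: the fine-grained CRUD service knows only how to store and retrieve one entity, while the coarse-grained business service composes it to implement an actual business rule.

```python
class CustomerCrud:
    """Fine-grained CRUD service: one entity, no business logic."""

    def __init__(self):
        self._rows = {}

    def create(self, cid, data):
        self._rows[cid] = dict(data)

    def read(self, cid):
        return self._rows.get(cid)

    def update(self, cid, data):
        self._rows[cid].update(data)

    def delete(self, cid):
        self._rows.pop(cid, None)

    def ids(self):
        return list(self._rows)

def deactivate_dormant_customers(crud, cutoff_year):
    """Coarse-grained business service: it adds the business rule
    but reuses the CRUD service instead of touching storage itself."""
    for cid in crud.ids():
        if crud.read(cid).get("last_order_year", 0) < cutoff_year:
            crud.update(cid, {"active": False})
```

Because the business rule never manipulates storage directly, changing the persistence technology affects only the CRUD service, not the services built on top of it.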

In this context, all tasks pertaining to MDM are relevant: mapping out data structures, identifying duplicated data, removing obsolete or useless data, etc. They are all aimed at making the system more flexible.

5.3.2.3. Human factors

Some multidisciplinarity is unavoidable for the developers in charge of the data-access layer. Besides their favorite programming language, they should have at least a rudimentary knowledge of databases or LDAP directories, even when APIs exist that provide a unified access to those. It should be acknowledged, though, that true expertise in one domain, for example, databases, cannot be converted quickly into another expertise, for example, OOP programming, as management sometimes ingenuously seems to believe.

Proletarianization can play a significant role in the data-access layer when frameworks are used. The basic knowledge and intuition, which should guide the design of an efficient and maintainable data-access layer, is progressively lost when developers blindly use frameworks as an all-in-one solution for all persistence issues.

5.3.2.3.1. Evaluation

The only evaluation that seems practical in this context is to count the number of data-access technologies that are being used. This will, in turn, determine the number of technology experts that will be needed to provide reliable assistance on each.

5.3.2.3.2. Simplicity actions

Technological choices should be the responsibility of an architecture committee and must not be left to individual developers. The project manager must be wary of individual initiatives that deviate from the chosen standard.

Appropriate hiring, coaching, and training of IT skills are essential for mitigating the risks of disempowerment.

5.3.3. Simplicity in software – services

5.3.3.1. Growing technical heterogeneity

Technical heterogeneity can be quite high in this layer because middleware technologies and protocols abound and obsolescence is particularly quick. Physical dependencies among applications and services have a strong impact on complexity because heterogeneity will imply implementing many conversion mechanisms.

5.3.3.1.1. Evaluation

The number of redundant technologies is an obvious indicator of heterogeneity. The intricacy of the physical dependencies is also important to evaluate, as it is directly related to the quantity of protocol- or format-conversion operations that are required in a highly heterogeneous environment. Service activity monitoring tools6 can be useful for this diagnostic.

5.3.3.1.2. Simplicity actions

As for the data-access layer, a small team of architects should enact a number of technological and design choices and be invested with sufficient authority to have these enforced on projects. The architect team should be able to arbitrate technological choices when required.

Be wary of the promises of so-called magic tools that claim to eradicate heterogeneity.

A palliative action, which can help mitigate complexity without actually decreasing heterogeneity, is to use a service orchestrator (or ESB) that coordinates the invocation of many services while supporting all the conversion tasks needed to cope with the diversity of protocols. Such a choice should be made with care, though. It will be justified only when a significant number of services or applications have to be coordinated, say more than five as a rule of thumb. For fewer applications, it is very likely that the complexity introduced by the bus will exceed the complexity of using explicit conversion mechanisms.

5.3.3.2. Changing requirements

The service layer is on the front line when dealing with changing requirements. If well designed, it should be able to absorb most of them without excessive rework or expense. Reusability and modularity are the keys here. The SOA paradigm, which has emerged in recent years, is an attempt to address this demand for flexibility. Appendix 3 gives a critical review of recent SOA experiences from this perspective. For the moment, we only emphasize that services should not be defined merely to comply with some fashionable architecture dogma but to address actual needs: reusing existing features, opening parts of the system to the external world, or, more ambitiously, introducing more flexibility in business processes.

Recall also that much needless complexity can be generated when the flexibility is designed into a system that never needs it! This has been analyzed in section 4.2.

5.3.3.2.1. Evaluation

Reusability and modularity should be evaluated even though there is no obvious metric for these. Both are often related to the quality of business-object and business-process modeling and to how well this information is shared across the company.

5.3.3.2.2. Simplicity actions

A global coordination entity, with appropriate power, should be created to harmonize and enforce strict semantics for business objects and processes, to prevent ambiguities and to enable reuse across projects. This is an application of simplicity through trust, as minimizing ambiguities really amounts to trusting that business terms have the same meaning for everybody and that their semantics were carefully defined by qualified people.

Changing requirements are best handled when reuse is promoted by defining stable, fine-grained services on which more coarse-grained services are based. These fine-grained services should be under the responsibility of the coordination entity, too.

Managing business rules and business processes in a dedicated repository is another measure that can prove useful. Using business-rule engines may help centralize essential business rules and make their modification faster and more reliable. These are really applications of simplicity through organization.
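As a rough sketch of the idea (not a real rule engine, and all names are invented), centralizing rules in one shared repository means that a rule change happens in exactly one place, while every service keeps invoking the rule by name:

```python
# Hypothetical central repository of named business rules.
RULES = {}

def rule(name):
    """Register a named business rule in the shared repository."""
    def register(fn):
        RULES[name] = fn
        return fn
    return register

@rule("discount_rate")
def discount_rate(order):
    # The rule itself lives here and only here: modifying this
    # threshold is immediately picked up by every calling service.
    return 0.10 if order["amount"] > 1000 else 0.0

def apply_rule(name, *args):
    """How services invoke a rule, by name, without owning its logic."""
    return RULES[name](*args)
```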

Setting up a systematic and consistent modeling of business rules and business processes, which are shared and accessible, is particularly important.

Use an iterative approach to capture the most complex business rules: not all at once, but by presenting users with successive partial implementations of applications, to which they can react to clarify their needs.

5.3.3.3. Human factors

Multidisciplinarity plays a key role when designing services, because it is in the service layer that most business requirements are translated into code. People in charge of the design of the service layer should understand the problems as they are expressed by expert business users. Simultaneously, they must also be aware of the technical constraints.

Disempowerment and proletarianization can play a significant role in the service layer when frameworks and sophisticated automation tools are used. This topic is discussed at length in section 4.3.2. The basic knowledge and intuition that should guide the design of an efficient and maintainable service layer could be lost when developers use these tools blindly. Excessive regulation, with rigid software-architecture principles, can also result in the same kind of disempowerment.

A common disempowerment situation is one in which strict rules have been enacted to ensure decoupling of layers. These usually include rules that prescribe which kind of services can call other services. In some cases, these rules may result in coding a large number of pass-through services that really do nothing, except increase the number of lines of code. Some common sense is really needed to evaluate when strict compliance with a set of rules makes sense. Common sense and experience, fortunately, cannot always be formalized.

Local interest has many different forms, the most common form being developers using their own favorite technology rather than complying with the chosen enterprise standards.

5.3.3.3.1. Evaluation

Regarding multidisciplinarity, the question to ask is simply: Is there at least one individual who has both IT and business skills? Measuring disempowerment without resorting to a psychiatrist is surely not an easy matter. The best thing is probably to try to assess, one way or another, whether people understand what they are doing and whether they understand the purpose of the software productivity tools they use. Finally, regarding the global interest, ask whether there exists a respected coordination entity.

5.3.3.3.2. Simplicity actions

Regarding the need to understand both technical and business issues, there should be, for each project, one individual (at least) who is able to record or model user requirements in a way that developers can understand. His or her job is to bridge the two worlds of business users and pure IT experts.

Regarding disempowerment, hiring and training policies are again essential. Another point worth emphasizing is that the coding and design rules should really be implemented. A large catalogue of sophisticated rules is of little use if these are not applied. Experience shows that excessive formalization, with too many rules and conventions for coding and design, systematically results in such rules not being applied. Thus, rules and best practices should be kept to a strict minimum. This minimum, however, should be known to all and should be strictly enforced.

The global team in charge of defining and checking the architecture rules is often perceived as annoying, as it tends to slow down projects. But this is nothing more than the normal price to pay for imposing the general interest over local interests (those of individuals or of projects).

5.3.4. Simplicity in software – user interface

5.3.4.1. Growing technical heterogeneity

The domain of graphical user interfaces is probably the domain that offers the largest array of technologies: PHP, Swing, GWT, applets, JSP, JSF, Adobe Flash, Adobe Flex, Adobe Air, HTML5, Silverlight, JavaScript, Ajax, etc. to name only a few.

It is also the layer where technology is in direct contact with end users. Economic issues related to this layer are numerous, simply because it is visible. This is where the hype and fashion exist; this is where demands fluctuate most rapidly. In this layer, changing requirements really create obsolescence, which in turn progressively increases heterogeneity. This is very different from the data layer, for instance, where obsolescence is mostly generated because of changing standards and by the need for increased performance.

Heterogeneity is a fact that should be accepted, because it is nearly unavoidable. It is quite unrealistic to assume that it will be possible, in the near future, to homogenize these technologies. Thus, focus should be more on decoupling to prevent interchangeable presentation technologies from impacting other layers of the software architecture.

5.3.4.1.1. Evaluation

Evaluate the number of technologies used for each type of application: web applications, mobile applications, and desktop applications. Are there any component libraries available for each?

5.3.4.1.2. Simplicity actions

A transverse organization should define and enforce standards and rules for which technologies are allowed under what circumstances and for which type of applications (or channel).

5.3.4.2. Changing requirements

Changing requirements are the rule rather than the exception in this domain. They may concern superficial aspects, such as color or layout, or more complex aspects such as changing the logic of navigation between a set of pages.

5.3.4.2.1. Evaluation

Evaluate the level of reuse of components or widgets. Perform code reviews to evaluate how well the presentation layer is decoupled from the services and from the data layer. This will directly impact the complexity generated by repeated changes to the user interface. To predict how much time will be necessary to perform changes, it is useful to categorize screens of applications as easy, standard, and complex. Each category should be assigned a maximal duration for its design and implementation, say 2 hours, half day, and a full day, respectively. This will provide rough estimates of the complexity of the presentation layer.
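Using the indicative durations above (2 hours, half a day, a full day, assuming an 8-hour working day), the estimate reduces to simple arithmetic, sketched here:

```python
# Indicative hour budgets per screen category, taken from the text
# (half day = 4 h, full day = 8 h under an 8-hour-day assumption).
HOURS = {"easy": 2, "standard": 4, "complex": 8}

def estimate_hours(screens):
    """screens: list of category names, one entry per screen."""
    return sum(HOURS[category] for category in screens)
```

An application with two easy screens, one standard screen, and one complex screen would thus be budgeted at 16 hours.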

5.3.4.2.2. Simplicity actions

The key words for simplicity actions in this domain are decoupling and reuse.

Decoupling means that graphical user elements should be decoupled from the rest of the architecture. This will prevent changes to user-interface features from propagating to the service layer and the other way around. This principle is actually manifested in the well-known MVC7 design pattern.
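A minimal sketch of this separation, with purely illustrative class names: the view only renders, the controller only reacts to user actions and updates the model, so swapping the view (web page, rich client) leaves the other two pieces intact.

```python
class Model:
    """M: the data being displayed."""
    def __init__(self):
        self.count = 0

class View:
    """V: rendering only; knows nothing about business logic."""
    def render(self, model):
        return f"count = {model.count}"

class Controller:
    """C: translates user actions into model updates, then asks
    the view to render; it is the only piece coupling M and V."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def increment(self):  # a user action, e.g. a button click
        self.model.count += 1
        return self.view.render(self.model)
```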

Reuse implies defining component libraries for widgets8 and for portlets9 to be reused and combined as a means to match at least part of the changing requirements. Widgets are meant for developers, while portlets are for users who can customize and reposition them to fit their needs. Defining a reusable layout mechanism is another simplicity action.

But reuse should certainly not be applied in a dogmatic way, as writing disposable code could sometimes be the most sensible option. If the prescribed time limits to implement a screen or a page are exceeded too often, we should consider changing either the technology or the skills used.

We have simplicity through reduction and organization at work here.

5.3.4.3. Human factors

At times, proletarianization can play a role during design. Implementing a user interface is indeed a very technical topic. It is therefore essential to resort to resources who have specialized skills in one technology and who have a thorough working knowledge of the appropriate design tools. Developing screens quickly and reliably does not allow for dilettantism and manual coding should be forbidden.

For obvious productivity reasons, much of the graphical user-interface design is currently generated automatically by wizards of modern IDEs. This is where the danger for disempowerment lies. This generated code cannot be trusted blindly. Its often intricate structure must be understood by developers so that they can modify it when needed. Poor understanding will quickly generate code that is hard to maintain and increase needless complexity.

Creativity definitely has its place in the presentation layer.

5.3.4.3.1. Evaluation

Evaluate the range of skills and technologies that are necessary to develop user interfaces reliably for all types of clients and channels.

5.3.4.3.2. Simplicity actions

Technological choices should be the responsibility of an architecture committee and must not be left to individual developers. Project managers should be wary of individual initiatives that deviate from the chosen standard, just for fun.

Using appropriate productivity tools is essential in the presentation layer. For each user-interface technology (PHP, JSF, Swing, Struts, Silverlight, Flash, etc.), one development environment should be chosen and imposed on all developers. These tools will quickly generate the skeleton of the code, which then has to be modified manually.

5.3.5. Simplicity in functional architecture

5.3.5.1. Growing technical heterogeneity

Technical heterogeneity is of no direct relevance for the functional architecture, as technical infrastructure is not the primary concern here. It could nevertheless play a limited role when different modeling tools are used for specifying the business processes, requirements, or use cases that comprise the functional architecture.

5.3.5.1.1. Evaluation

How many different notations and tools are used to specify the functional architecture?

5.3.5.1.2. Simplicity actions

Choose a consistent and universal notation to define the functional architecture. Choose appropriate tools. Enforce these choices.

5.3.5.2. Changing requirements

Changing requirements impact, by definition, the functional architecture. Useless complexity can be generated, not at the technical level, but rather at the semantic level, because of ambiguities and needless logical dependencies created over time.

5.3.5.2.1. Evaluation

How much redundancy and ambiguity are present in business terms and in business processes? How much needless logical dependence has been created due to changing requirements? Are there shared repositories of business objects, business rules, and business processes?10 If so, are these known to all concerned stakeholders? Are they used systematically? How well are changes in application features synchronized with these models? How far can previous functional changes be traced back?

5.3.5.2.2. Simplicity actions

Define a role or a team in charge of defining a consistent set of business concepts, rules, and processes. Make sure these definitions are readily available to all relevant stakeholders and that they are used effectively on projects.

Experience shows that big-bang approaches for capturing user requirements are usually doomed to failure. Iterative procedures are usually better suited for capturing a complex and unclear set of requirements. One way to achieve this is to present users with successive partial implementations of an application to which they can react to clarify their needs.

If available, reuse existing industry-wide standard models for business objects, rules, and processes. This can require some interaction with partners or competitors in the same field but is usually well worth the effort, as it is likely to mitigate the impact of future changes.

5.3.5.3. Human factors

Multidisciplinarity is inherent to the definition of a functional architecture, which more often than not will involve different skills. The reliability of communication between experts of different domains is essential to avoid generating needless logical complexity in the functional architecture. Technical issues cannot be neglected altogether, as requirements can never be defined totally independently of technical constraints. In a way, this situation is the mirror image of the service layer, where the focus was mostly on technical issues.

Proletarianization of business users will occur when automation is imposed for tasks that would otherwise benefit from the imagination and creativity of experienced users. Excessive automation through rigid BPM tools can be counterproductive and should be avoided.

Global interest versus local interest is at stake, for instance, when retro-modeling is required after some functionality has changed. Such modeling normally takes place once a project is finished and is therefore often perceived as having no immediate consequences. This knowledge capitalization, although it is an investment in future knowledge, is often perceived as a burden and a waste of time.

5.3.5.3.1. Evaluation

Regarding multidisciplinarity, ask whether domain specialists have a reasonable understanding of their colleagues’ expertise.

Regarding disempowerment, try to evaluate the proportion of business processes for which human creativity and expertise clearly outweigh the advantages of automation.

Finally, regarding global interest related to modeling (business objects, rules, and processes) ask the following: what is the degree of synchronization between models and deployed applications, assuming such models exist?

5.3.5.3.2. Simplicity actions

To face multidisciplinarity, train specialists in each other’s domain of expertise to enhance communication. Try to understand why some tools (modeling tools, reporting tools, collaborative tools), which could be useful to master complexity, are not used.

To avoid demotivation and disempowerment, identify those processes that need the most creativity and imagination or fine-grained expertise and avoid automating those in rigid BPM.

Finally, regarding the need for retro-modeling, which pertains to the global interest, there are no truly magical solutions. One solution is to establish rituals, which will progressively become part of the corporate technical culture. Such rituals should be supported and sponsored by management.

 

 

1 CICS stands for Customer Information Control System. It is a transaction server that runs on IBM mainframes.

2 CAPEX = capital expenditures, i.e. expenditures creating future benefits. OPEX = operating expenditures, i.e. the ongoing costs of running a system.

3 Virtualization is the creation of a virtual version of some computing resource, such as an operating system, a server, a storage device, and network resources.

4 ETL = Extract Transform Load.

5 CRUD = Create, Read, Update, Delete designates a fine-grained service to handle small pieces of data.

6 BAM = Business Activity Monitoring software usually includes this kind of tools.

7 MVC stands for Model View Controller. It is a design pattern meant to decouple the pieces of code that are responsible for the display (V), for the business logic (C), and the data (M) being displayed.

8 A widget is a simple reusable element of a graphical user interface.

9 A portlet is a pluggable frame that is managed and displayed within a web portal.

10 This is closely related to what can be measured with the IS Rating Tool from the Sustainable IT Architecture organization, founded by Pierre Bonnet.
