Chapter 19. Software Architecture in the Future

Prediction is very difficult, especially about the future.

—Niels Bohr

The history of programming can be viewed as a succession of ever-increasing facilities for expressing complex functionality. In the beginning, assembly language offered the most elementary of abstractions: exactly where in physical memory things resided (relative to the address in some base register) and the machine code necessary to perform primitive arithmetic and move operations. Even in this primitive environment, programs exhibited architectures: elements were blocks of code connected by physical proximity to one another, knitted together by branching statements, or joined by subroutines whose connectors were of branch-and-return construction. Early programming languages institutionalized these constructs, with the connectors being the semicolon, the goto statement, and the parameterized function call. The 1960s was the decade of the subroutine.

The 1970s saw a concern with structuring programs to achieve qualities beyond correct function. Data-flow analysis, entity-relation diagrams, information hiding, and other principles or techniques formed the bases of myriad design methodologies, each of which led to the creation of subroutines, or collections of them, whose functionality could be rationalized in terms of developmental qualities. These elements were usually called modules. The connectors remained the same, but module-based programming languages became available to enhance the programmer's ability to create them. The abstractions embedded in these modules became more sophisticated and substantial, and for the first time reusable modules were packaged so that their inner workings could, in theory, be ignored. The 1970s was the decade of the module.

In the 1980s, module-based programming languages, information hiding, and associated methodologies crystallized into the concept of objects. Objects became the components du jour, with inheritance adding a new kind of (non-runtime) connector.

In the 1990s, standard object-based architectures, in the form of frameworks, started appearing. Objects have given us a standard vocabulary for elements and have led to new infrastructures for wiring collections of elements together. Abstractions have grown more powerful along the way; we now have computing platforms in our homes that let us treat complex entities, such as spreadsheets, documents, graphical images, audio clips, and databases, as interchangeable black-box objects that can be blithely inserted into instances of each other.

Architecture shifts the emphasis away from individual elements and onto the arrangement of those elements and their interactions. It is this kind of abstraction, away from the focus on individual elements, that makes such breathtaking interoperability possible.

In the current decade, we see the rise of middleware and IT architecture as a standard platform. Purchased elements have security, reliability, and performance support services that a decade ago had to be added by individual project developers. We summarize this discussion in Figure 19.1.

Figure 19.1. Growth in the types of abstraction available over time


This is where we are today. There is no reason to think that the trend toward larger and more powerful abstractions will not continue. Already there are early generators for systems as complex and demanding as database management and avionics, and a generator for a domain is the first sign that the spiral of programming language power for that domain is about to start another upward cycle. The phrase systems of systems is starting to be heard more commonly, suggesting an emphasis on system interoperability and signaling another jump in abstraction power.

In this chapter, we will revisit the topics covered in the book. Heeding Niels Bohr, our vision will be not so much predictive as hopeful: We will examine areas of software architecture where things are not as we would wish and point out areas where the research community has some work to do.

We begin by recapping what we have learned about the Architecture Business Cycle (ABC) and then discuss the process of creating an architecture, how architecture fits within the life cycle, and how we see components and component frameworks changing the tasks of an architect.

19.1 The Architecture Business Cycle Revisited

In Chapter 1, we introduced the ABC as the unifying theme of this book. We exemplified and elaborated this cycle throughout the book and have tried to convey some of the principles of architectural creation, representation, evaluation, and development along the way. If the study of software architecture is to have staying power, there must be areas of research that mature the field, with results that can be transitioned into practice. In this context, we can identify four versions of the ABC that appear particularly promising as subjects of future research:

• The simplest case, in which a single organization creates a single architecture for a single system

• One in which a business creates not just a single system from an architecture but an entire product line of systems that are related by a common architecture and a common asset base

• One in which, through a community-wide effort, a standard architecture or reference architecture is created from which large numbers of systems flow

• One in which the architecture becomes so pervasive that the developing organization effectively becomes the world, as in the case of the World Wide Web

Each of these ABCs contains the same elements as the original: stakeholders, a technical environment, an existing experience base, a set of requirements to be achieved, an architect or architects, an architecture or architectures, and a system or systems. Different versions of the ABC result from the business environment, the size of the market, and the goals pursued.

We believe that future software cost and benefit models, of which CBAM is an early version, will incorporate all of these versions of the ABC. In particular, they will take into account the upfront cost that architecture-based development usually entails, and they will be able to predict the quantitative benefits that architectures yield.
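
To make the idea concrete, the following is a minimal sketch, in the spirit of the CBAM but not the method itself, of how such a model might weigh the up-front cost of an architectural strategy against its predicted benefits. The strategies, costs, benefit streams, and discount rate are all invented for illustration.

```python
# Hypothetical sketch of an architecture cost/benefit comparison in
# the spirit of the CBAM; the strategies and all figures are invented.

def net_benefit(upfront_cost, annual_benefit, years, discount_rate):
    """Net present value of an architectural strategy."""
    present_value = sum(annual_benefit / (1 + discount_rate) ** t
                        for t in range(1, years + 1))
    return present_value - upfront_cost

strategies = {
    "product line architecture":  net_benefit(500_000, 220_000, 5, 0.10),
    "single-system architecture": net_benefit(150_000, 60_000, 5, 0.10),
}

for name, value in strategies.items():
    print(f"{name}: net benefit ${value:,.0f}")
```

A real model would, of course, have to supply the predicted benefits rather than assume them; that prediction is precisely the open problem.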

19.2 Creating an Architecture

In all of our case studies, we emphasized the quality requirements for the system being built, the tactics used by the architect, and how those tactics were manifested in the architecture. Yet the process of moving from quality requirements to architectural designs remains an area where much fruitful research can be done. Design remains an art, and introducing more science into the process will pay large dividends.

Answers to the following questions will improve the design process:

Are the lists of quality attribute scenarios and tactics complete? We presented lists for six different quality attributes. Almost certainly they should be augmented with additional tactics and scenarios. Also, additional attributes should have scenarios and tactics created for them. Interoperability and buildability are two quality attributes that may be as important as the six we wrote about.

How are scenarios and tactics coupled? With what we have presented, the coupling is at the attribute level. That is, a scenario is generated according to the generation table for a particular attribute—performance, say. Then the tactics are examined to determine those most likely to yield the desired result. Surely, we can do better. Consider the performance scenario from our garage door opener example in Chapter 7: Halt the door in 0.1 second when an obstacle is detected. A series of questions can be asked that yield more insight into the choice of tactics. Can the obstacle be detected and the door halted in 0.1 second if there is nothing else going on in the system? If the answer is no, the tactic “increase computational efficiency” should be applied to the obstacle-detection algorithm. If the answer is yes, other questions regarding contention can be asked that should lead to the type of scheduler we choose in our design. Finding a systematic method for coupling scenarios and possible tactics would be an important research result.
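
Read that way, the question sequence is a small decision procedure. The sketch below makes that reading explicit; the latency figures and the particular scheduling tactic chosen are hypothetical, and a real coupling method would be far richer.

```python
# Hypothetical sketch: choosing performance tactics for the scenario
# "halt the door in 0.1 second when an obstacle is detected."

DEADLINE = 0.1  # seconds

def choose_tactics(standalone_latency, contending_latency):
    """Walk the question sequence from the text and collect tactics."""
    tactics = []
    if standalone_latency > DEADLINE:
        # Even with nothing else running the deadline is missed:
        # the detection algorithm itself must get faster.
        tactics.append("increase computational efficiency")
    elif contending_latency > DEADLINE:
        # Fast enough in isolation, so contention is the problem;
        # choose a scheduling tactic instead.
        tactics.append("use preemptive, priority-based scheduling")
    return tactics or ["no performance tactic needed"]

print(choose_tactics(standalone_latency=0.15, contending_latency=0.30))
print(choose_tactics(standalone_latency=0.04, contending_latency=0.18))
```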

How can the results of applying a tactic be predicted? A holy grail of the software engineering community is the ability to predict the qualities of a system prior to its construction. One approach to this problem is to predict the effect of applying a single tactic. Tactics are motivated by analytic models (formal and informal) of various attributes, and for some it is possible to predict the results of applying them. For example, a modifiability tactic is to use a configuration file managed by the end user. From a modifiability perspective, applying that tactic reduces the time to change and deploy a configuration item from, essentially, a full developer-performed redeployment to near zero (in the worst case, the time to reboot the system). This is a predictable result. Developing the same type of predictions for other tactics (and understanding the parameters to which a prediction applies) is a large step toward constructing systems with predictable qualities.
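
To make that example concrete, here is a minimal sketch of the configuration-file tactic; the file name opener.ini, the [door] section, and the halt_threshold_seconds parameter are all hypothetical.

```python
# Hypothetical sketch of the "configuration file" modifiability tactic:
# the halt threshold is read from a file the end user can edit, so
# changing it costs at most a restart, not a developer redeployment.

import configparser

def load_halt_threshold(path="opener.ini"):
    config = configparser.ConfigParser()
    config.read(path)  # a missing file is silently ignored
    return config.getfloat("door", "halt_threshold_seconds", fallback=0.1)

threshold = load_halt_threshold()
print(f"door will be halted within {threshold} seconds of detection")
```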

How are tactics combined into patterns? In our garage door example, tactics were chosen and then, almost magically, combined into a pattern. Again, there should be a systematic method for this combination, one that also maintains the predictability of the quality responses. Since each tactic is associated with a predictable change in a quality attribute, tradeoffs among quality attributes can be considered within patterns. How these predictions are represented and combined once tactics are assembled into patterns is an open research question.
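
One can at least sketch the bookkeeping such a method implies. In the toy model below, each tactic carries a predicted change to the response of each quality attribute it affects, and a pattern simply aggregates the predictions of the tactics it combines, making the tradeoffs visible; the tactic names and deltas are invented.

```python
# Hypothetical sketch: combining per-tactic quality-attribute
# predictions within a pattern. The deltas are invented placeholders
# (positive means the attribute response improves).

from collections import defaultdict

TACTIC_PREDICTIONS = {
    "increase computational efficiency": {"performance": +0.3, "modifiability": -0.1},
    "use an intermediary":               {"performance": -0.1, "modifiability": +0.4},
}

def predict_pattern(tactics):
    """Aggregate the predicted attribute changes of a set of tactics."""
    totals = defaultdict(float)
    for tactic in tactics:
        for attribute, delta in TACTIC_PREDICTIONS[tactic].items():
            totals[attribute] += delta
    return {attribute: round(total, 2) for attribute, total in totals.items()}

print(predict_pattern(["increase computational efficiency", "use an intermediary"]))
# {'performance': 0.2, 'modifiability': 0.3}: the tradeoff is visible at a glance.
```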

What kind of tool support can assist in the design process? We are forecasting a world of larger building blocks with progressively more functionality and associated quality attributes. What are the implications for tool support? Can tactics and their combination into patterns be embedded in an expert design assistant, for example?

Can tactics be “woven” into systems? Aspect-oriented software development is an effort to develop methods and tools to deal with so-called “cross-cutting” requirements. A cross-cutting requirement applies to a variety of objects. Supporting diagnosis in an automobile, for example, is a requirement that applies to all of the automobile components and thus cross-cuts the requirements for the individual components. Quality attributes provide cross-cutting requirements, and tactics are methods for achieving particular responses. Can tactics, then, be treated as other cross-cutting requirements, and will the methods and tools developed by the aspect-oriented community apply?
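
Python, our sketch language here, has no aspect weaver, but a decorator conveys the flavor: a diagnosis concern is woven into otherwise unrelated components without touching their logic. The component classes and the logging policy are hypothetical.

```python
# Hypothetical sketch: "weaving" a cross-cutting diagnosis concern
# into unrelated components with a decorator, in the spirit of
# aspect-oriented development (Python has no true weaver).

import functools
import logging

logging.basicConfig(level=logging.INFO)

def diagnosable(func):
    """Record every call and failure for later diagnosis."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logging.info("diagnosis: entering %s", func.__qualname__)
        try:
            return func(*args, **kwargs)
        except Exception:
            logging.exception("diagnosis: fault in %s", func.__qualname__)
            raise
    return wrapper

class BrakeController:          # hypothetical component
    @diagnosable
    def apply(self, pressure):
        return f"braking at {pressure}"

class ClimateControl:           # hypothetical, unrelated component
    @diagnosable
    def set_temperature(self, celsius):
        return f"target {celsius} C"

print(BrakeController().apply(0.8))
print(ClimateControl().set_temperature(21))
```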

19.3 Architecture within the Life Cycle

Although we have argued that architecture is the central artifact within the life cycle, the fact remains that a life cycle for a particular system comprises far more than architecture development. We see several areas ripe for research about architecture within the life cycle:

Documentation within a tool environment. In Chapter 9, we discussed architecture documentation but not how that documentation is generated. Ideally, knowledge of a system's architecture is embedded in a tool, from which documentation can be generated automatically or semi-automatically. Generating documentation from a tool assumes that the tool has knowledge of architectural constructs. Not only must the tool have this knowledge, it must also provide a method for moving from one view to another. This, in turn, assumes that there is a method for specifying the mapping between views.

The mapping between views comes with problems of its own: maintaining consistency across views (a change made in one view is automatically reflected in the others) and maintaining constraints both within and across views. For example, you should be able to specify that a process has no more than three threads (a constraint within a view) and that particular modules must be bound into the same process (a constraint across views).
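
A minimal sketch of such constraint checking, over a toy model with a process view, a module-to-process allocation, and entirely hypothetical names, might look like this:

```python
# Hypothetical sketch: checking one constraint within a view and one
# across views, over a toy two-view model of a system.

processes = {  # process view: process -> threads it contains
    "controller": ["sensor_poll", "motor_drive", "watchdog"],
    "ui":         ["event_loop"],
}
allocation = {  # mapping between views: module -> process
    "obstacle_detection": "controller",
    "motor_control":      "controller",
    "display":            "ui",
}

def check_within_view(max_threads=3):
    """Constraint within the process view: thread count per process."""
    return [p for p, threads in processes.items() if len(threads) > max_threads]

def check_across_views(module_a, module_b):
    """Constraint across views: two modules must share a process."""
    return allocation[module_a] == allocation[module_b]

assert check_within_view() == []            # no process exceeds 3 threads
assert check_across_views("obstacle_detection", "motor_control")
print("all architectural constraints hold")
```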

Software architecture within configuration management systems. One reason software architecture reconstruction exists is to determine whether the as-built architecture conforms to the as-designed architecture. Suppose instead that the configuration management system knows about the designed architecture and can verify conformance to it whenever a new or revised code module is checked in. Then there is no need for after-the-fact conformance testing, since conformance is guaranteed by the configuration management system, and one motivation for architecture reconstruction disappears.
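
A check-in hook along the following lines could perform that verification; the allowed-dependency table is a stand-in for the designed architecture, the module names are invented, and real conformance checking would need far more than import analysis.

```python
# Hypothetical sketch of a check-in hook that verifies a module's
# imports against the dependencies permitted by the designed
# architecture. Real conformance checking needs much more than this.

import ast

ALLOWED_DEPENDENCIES = {  # from the as-designed architecture
    "motor_control": {"hardware_io", "logging"},
    "display":       {"ui_toolkit", "logging"},
}

def conforms(module_name, source_code):
    """Reject the check-in if the module imports outside its allowance."""
    allowed = ALLOWED_DEPENDENCIES.get(module_name, set())
    for node in ast.walk(ast.parse(source_code)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            names = [node.module] if node.module else []
        else:
            continue
        for name in names:
            if name.split(".")[0] not in allowed:
                return False
    return True

print(conforms("motor_control", "import hardware_io\nimport logging"))  # True
print(conforms("motor_control", "import ui_toolkit"))                   # False
```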

Moving from architecture to code. Whenever there are multiple representations of a system (design models, architecture, code), there is the problem of keeping them consistent. The representation that is actively maintained becomes the authoritative one, and the others degrade over time. If there is no tight coupling between the architecture and the code within some tool environment, two problems arise. The first is moving from an architectural specification to code, since architecture design precedes coding. The second is maintaining the architecture in the face of system evolution, since code, not architecture, typically becomes the representation that is kept up to date.

19.4 The Impact of Commercial Components

As we said in Chapter 18, the capabilities and availability of commercial components are growing rapidly. So too is the availability of domain-specific architectures and of the frameworks that support them, including J2EE for information technology architectures. The day is coming when domain-specific architectures and frameworks will be available for many of today's common domains. As a result, architects will be concerned as much with the constraints imposed by the chosen framework as with green-field design.

Not even the availability of components with extensive functionality will free the architect from the problems of design, however. The first thing the architect must do is determine the properties of the components being used. Components reflect architectural assumptions, and it becomes the architect's job to identify those assumptions and assess their impact on the system being designed. This requires a rich collection of attribute models, extensive laboratory work, or both. Consumers of components will want a trusted validation agency, such as Underwriters Laboratories, to stand behind the predictions.

Determining the quality characteristics of components and their associated framework is essential when designing with externally constructed components. We discussed a number of options with J2EE/EJB in Chapter 16 and the performance impact of each. How will the architect know the effect of the options the framework provides and, even more difficult, the qualities achieved when the architect has no options? We need a method of enumerating the architectural assumptions of components and understanding the consequences of a particular choice.
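
Even a simple machine-readable enumeration of a component's architectural assumptions would help. The sketch below, with invented components and property values, shows the kind of record and consequence check we have in mind.

```python
# Hypothetical sketch: enumerating the architectural assumptions of
# candidate components and checking them against system constraints.
# All components and property values are invented.

from dataclasses import dataclass

@dataclass
class ComponentAssumptions:
    name: str
    single_threaded: bool      # assumes it owns the thread of control?
    max_latency_ms: float      # validated worst-case response time
    requires_container: str    # framework it must run inside

CANDIDATES = [
    ComponentAssumptions("FastQueue", False, 5.0,  "none"),
    ComponentAssumptions("EasyQueue", True,  40.0, "AcmeAppServer"),
]

def acceptable(c, deadline_ms, container):
    """Consequences of choosing c under the system's constraints."""
    return (c.max_latency_ms <= deadline_ms
            and c.requires_container in ("none", container))

for c in CANDIDATES:
    print(c.name, "acceptable:", acceptable(c, deadline_ms=10.0, container="J2EE"))
```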

Finally, components and their associated frameworks must themselves be produced, and that production must be designed to achieve the desired qualities. Their designers must consider an industry-wide set of stakeholders rather than those of a single company. Furthermore, the quality attribute requirements that come from the many stakeholders in an industry will likely vary more widely than those that come from the stakeholders of a single company.

19.5 Summary

Where are the study and practice of software architecture going? Our clairvoyance is no more powerful than anyone else's, but, like everyone else, we cannot resist indulging in some predictions. In addition to foreseeing more powerful design techniques, an evolution of life-cycle tools to include more architectural information, and more sophisticated component building blocks for systems, we offer one further prediction where architecture is concerned.

Fred Brooks was once asked what made his book, The Mythical Man-Month, so timeless. He replied that it was not a book about computers but rather a book about people. Software engineering is like that. Dave Parnas says that the difference between programming and software engineering is that programming is all you need for single-person, single-version software, but if you expect other people to ever look at your system (or expect to look at it yourself later on), you need to employ the discipline of software engineering. Architecture is like that, as well. If all we cared about was computing the right answer, a trivial monolithic architecture would suffice. Architecture is brought to bear when the people issues are exposed: making the system perform well, building the system within cost constraints, achieving desired benefits, letting teams work cooperatively to build the system, helping the maintainers succeed, letting all the stakeholders understand the system.

With this in mind, we can offer our safest prediction. Architecture will continue to be important as long as people are involved in the design and development of software.
