Chapter 12. Designing for Non-Functional Properties

Engineering software systems so that they satisfy all their myriad functional requirements is difficult. As we have seen so far, software architectures can help in that task through effective compositions of well-defined components and connectors, which have been verified to adhere to the necessary structural, behavioral, and interaction constraints. While satisfying functional requirements is essential, unfortunately it is not sufficient. Software developers must also provide for non-functional properties (NFPs) of software systems.

Definition. A non-functional property (NFP) of a software system is a constraint on the manner in which the system implements and delivers its functionality.

Important NFPs of software systems include security, reliability, availability, efficiency, scalability, and fault-tolerance. For example, in the case of the Lunar Lander system discussed throughout this book, the efficiency NFP may be characterized as the constraint that the system must respond to navigation commands within a specific, very short time span; the reliability NFP may be characterized as the constraint that the system may not be in a specified failure state longer than a specific time span.

Software engineering methodology stresses doing the right things from the very beginning of the software development process in order to curb development and maintenance costs. In particular, the emergence of the study of software architecture has marked an important trend in software system design—one that in principle allows designers and developers to codify NFPs such as availability, security, and fault-tolerance early on in the architectural models and to maintain their traceability across the system's artifacts and throughout its lifespan.

There are, however, still many domains in which certain critical NFPs have not been explicitly identified or dealt with early in development. This is most frequently evidenced by experiences with Web-based systems and desktop operating systems. At least in part, developers of these systems have dealt with issues such as security and dependability as an afterthought. For example, many security patches for Microsoft Windows appear only after a security hole is exposed by a third party. Additionally, these existing systems frequently provide no explicit, active description of the system's architecture to be used as a basis for adding capabilities in support of their dependability. Even if they did, however, it is not always clear how those architectural models would be used. Existing literature says almost nothing about the relationship of key architectural elements (software components, connectors, and configurations) to a system's dependability.

An added difficulty is that many NFPs tend to be qualitative rather than quantitative and are often multidimensional. Thus, it is fundamentally hard to measure an NFP precisely and prove or disprove the extent to which it has been addressed in a given system. One such property is, in fact, dependability. Dependability has become an NFP of prime importance in modern distributed and decentralized systems. In an idealized scenario, a dependable system is one on which the users can completely rely. Clearly, this definition is inherently qualitative and can lend itself to subjective interpretations. Dependability can be described more precisely in terms of its different dimensions: A dependable system should be secure, reliable, available, and robust, and at the same time it should be able to deal with failures and malicious tampering such as denial-of-service attacks. However, even if there were well-defined measures for each of the dimensions, quantifying dependability would still present a challenge. For example, it may be difficult to decide in a general case which system is more dependable: one that is highly reliable but not as secure or one that is highly secure but less reliable.

Another source of pressure on software architects in ensuring NFPs has been nontechnical issues. Traditionally, NFPs such as security or reliability have taken a backseat to the more lucrative attractions of time-to-market, as well as the pressing concerns of functional requirements. In these scenarios, security has been shoehorned into a software system, sometimes as an afterthought, while reliability is addressed with often problematic implementation-level solutions. In the case of security, these practices are further complicated by the fact that security is hard to implement, so malicious individuals have almost always had an advantage over software security practitioners.

While the picture painted may seem bleak, software architectures can significantly improve the prospects for achieving NFP goals. From the perspective of architectural design, employing good design practices can aid the achievement of these properties, just as poor design practices can harm a system. In this chapter, we provide a set of design guidelines that a software architect can follow in pursuing several common NFPs:

  • Efficiency

  • Complexity

  • Scalability

  • Adaptability

  • Dependability

The guidelines are just that: guidelines. They are not infallible rules that can be mindlessly applied and yet yield good results. Various caveats and exceptions will be discussed. The job of the architect is to think through the multiple issues and determine an appropriate strategy to pursue.

For each of the above NFPs, we will provide a definition of the NFP, discuss its impact on a software system's architecture, and, conversely, discuss the role of different architectural elements—which embody the architectural design decisions—as well as architectural styles—the guidelines for arriving at and constraints on those design decisions—in satisfying the NFP. We will consider both the characteristics that make an architecture more likely to exhibit a given property and the characteristics that may hamper an architecture's support for an NFP. Whenever appropriate, we will illustrate the discussion with concrete examples from existing architectures and architectural styles.

Note that the above list omits security, an NFP of great, and growing, importance. Because of its importance and significant architectural implications, all of Chapter 13 is dedicated to that topic.

Outline of Chapter 12

  • 12. Designing for Non-Functional Properties

    • 12.1 Efficiency

      • 12.1.1 Software Components and Efficiency

      • 12.1.2 Software Connectors and Efficiency

      • 12.1.3 Architectural Configurations and Efficiency

    • 12.2 Complexity

      • 12.2.1 Software Components and Complexity

      • 12.2.2 Software Connectors and Complexity

      • 12.2.3 Architectural Configurations and Complexity

    • 12.3 Scalability and Heterogeneity

      • 12.3.1 Software Components and Scalability

      • 12.3.2 Software Connectors and Scalability

      • 12.3.3 Architectural Configurations and Scalability

    • 12.4 Adaptability

      • 12.4.1 Software Components and Adaptability

      • 12.4.2 Software Connectors and Adaptability

      • 12.4.3 Architectural Configurations and Adaptability

    • 12.5 Dependability

      • 12.5.1 Software Components and Dependability

      • 12.5.2 Software Connectors and Dependability

      • 12.5.3 Architectural Configurations and Dependability

    • 12.6 End Matter

    • 12.7 Review Questions

    • 12.8 Exercises

    • 12.9 Further Reading

EFFICIENCY

Definition. Efficiency is a quality that reflects a software system's ability to meet its performance requirements while minimizing its usage of the resources in its computing environment. In other words, efficiency is a measure of a system's resource usage economy.

Note that this definition does not directly address the issue of system correctness. Instead, it implicitly assumes that the system will function as required. In other words, a system cannot be considered efficient if it implements the wrong functionality.

A common misconception is that software architecture has little to say about efficiency since architecture is a design-time entity while efficiency is a run time property. This is wrong. Selecting appropriate styles or patterns and instantiating them with appropriate components and connectors will have a direct and critical impact on the system's performance. While it is difficult to enumerate all of the possible performance requirements and related architectural decisions, we will outline certain general guidelines. The reader should be able to use these guidelines as a starting point for a more complete list that will be amassed over time in the reader's personal arsenal.

Software Components and Efficiency

Keep Components Small

For efficiency, ideally each component should satisfy a single need and serve a single purpose in the system. This helps avoid employing components when the majority of their services will not be used.

This guideline can directly impact a component's reusability: Many off-the-shelf components are very large and by design include significant functionality that may not be needed in a particular system. Put another way, constructing reusable components can be an impediment to constructing efficient systems.

As with all the guidelines introduced in this chapter, this one should be applied with appropriate caution. Clearly, there are off-the-shelf components whose memory footprint and/or run time performance have been optimized over time; such components will likely outperform one-of-a-kind components built from scratch. Another, quite common exception is a direct by-product of caching: Caching data locally for later use can result in components with larger memory footprints, but it also can result in faster systems.

Keep Component Interfaces Simple and Compact

A software component is accessed only via its public interface. The interface should expose only those component services that are intended to be visible from the outside. Similarly, in general, a component should never expose its internal state other than via the operations intended to modify that state.

If a component's interface is cumbersome, generalized for a broad set of usage scenarios, or geared to a broad class of potential clients, the component's efficiency may be compromised. For example, the component may require different types of adaptors or wrappers to specialize it for use in specific contexts. Alternatively, the component may internally convert the parameters to or from a lowest common denominator form.

Conversely, an interface stripped to its bare bones can also negatively impact the efficiency of a component. An example is a Unix pipe-and-filter-based system, in which the components (filters) rely on untyped ASCII data streams to maximize their reusability. However, this may require constant conversion of such streams to typed data for more advanced processing, and for improving system reliability, as will be discussed further below.

Note that insisting on compact interfaces for the sake of efficiency may negatively impact other desirable properties of the component, such as its reusability, support for heterogeneity, and even scalability. This will be discussed in the context of the next guideline, and will be revisited later in the chapter.
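
To make the trade-off concrete, consider the following minimal Python sketch; all names are invented for illustration. It contrasts a compact, typed interface with an untyped, filter-style interface of the kind used by Unix filters: the untyped variant is maximally reusable, but every client pays a conversion cost on each use.

    from typing import Iterable

    class TypedTelemetryFeed:
        """Compact, typed interface: clients receive data ready to use."""
        def readings(self) -> Iterable[float]:
            yield from (42.0, 17.5, 3.9)  # stand-in for real telemetry

    class TextTelemetryFeed:
        """Untyped, filter-style interface: reusable by any text-processing
        client, but the stream must be re-parsed into typed data on every use."""
        def lines(self) -> Iterable[str]:
            yield from ("42.0\n", "17.5\n", "3.9\n")

    def average(feed: TextTelemetryFeed) -> float:
        values = [float(line) for line in feed.lines()]  # per-use conversion cost
        return sum(values) / len(values)

    print(average(TextTelemetryFeed()))  # pays the parsing cost each call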

Allow Multiple Interfaces to the Same Functionality

Software components are typically built to be usable in multiple run time contexts. Even within a single system, they may need to provide services to multiple client components executing on different platforms, implemented in different programming languages, or encoding their data using different standards. They may also need to enforce, for example, different protocols of interaction, transaction support, or persistency rules, as required by the different clients. In such situations, architects may choose to provide the needed services via multiple components that essentially replicate each other's functionality. Clearly, that would hurt the system's efficiency (unless the system is distributed in such a manner that a physically closer copy of the component in question is provided with the appropriate locally needed interface).

Another option is to wrap the component using an adaptor connector that performs all of the needed data conversions. Such a solution is more flexible as the component itself remains unchanged and maintains its memory footprint and run time performance properties. At the same time, the adaptor will likely introduce some amount of run time overhead.

Since the adaptor-based approach may result in a narrowly applicable component, a third option is to construct the component such that it natively exports multiple interfaces to its functionality. This solution will likely be more efficient than both of the above options. At the same time, this solution carries the danger that the component will be bloated in situations where only one of its interfaces may be used. The three options are illustrated in Figure 12-1.
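
As a rough illustration of the second and third options, consider this Python sketch; the component and its operations are hypothetical. An adaptor connector wraps the single-interface component and converts on every call, while the multi-interface variant exports both forms natively at the cost of a larger component.

    class Thermometer:
        """Core component: exports a single, Celsius-based interface."""
        def read_celsius(self) -> float:
            return 21.0  # stand-in for a real measurement

    class FahrenheitAdaptor:
        """Option 2: an adaptor connector converts on every invocation,
        leaving the component untouched but adding run time overhead."""
        def __init__(self, inner: Thermometer):
            self._inner = inner

        def read_fahrenheit(self) -> float:
            return self._inner.read_celsius() * 9 / 5 + 32

    class MultiInterfaceThermometer(Thermometer):
        """Option 3: the component natively exports both interfaces; no
        adaptor overhead, but a larger component even in systems where
        only one of the interfaces is actually used."""
        def read_fahrenheit(self) -> float:
            return self.read_celsius() * 9 / 5 + 32

    print(FahrenheitAdaptor(Thermometer()).read_fahrenheit())  # 69.8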

Separate Processing Components from Data

Modeling data separately from processing has several potential efficiency benefits. First, it allows the data's internal representation to be fine-tuned or altered, based on local design objectives and without affecting the processing components. Similarly, this allows processing algorithms to be optimized without affecting the data representation, which may be used by multiple components in the system. Separating processing from data also allows architects to ensure, in the architecture, that the appropriate data is at the disposal of the appropriate processing components, thus aiding system correctness arguments.

Separate Data from Meta-Data

In many distributed, heterogeneous, data-intensive systems, system data is frequently separated from the meta-data. Meta-data is "the data about the data." In other words, in such systems it may be unclear a priori how the data is structured and intended to be used. The data may, for example, arrive at run time in unpredictable forms. Meta-data can be used as a way of describing the data so that the processing components can discover at run time how to process the data. Meta-data is heavily used to describe Web content elements and the content of e-mail messages.

Separating data from meta-data makes the data smaller, reducing the system's run time memory footprint; every time a data packet is sent from one processing component to another, it need not be accompanied by header information fully describing that data. Thus, if a component is already hardwired to process data of a particular type, it may do so directly. However, keeping the meta-data around and using it to interpret every data packet will, over time, induce a run time performance penalty in the general case.
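
The idea can be sketched in a few lines of Python; the packet layout and field names are invented for illustration. The meta-data describing the layout is exchanged once, so subsequent packets travel as compact, header-free payloads that a hardwired component can decode directly.

    import struct

    # Meta-data, exchanged once: describes how every packet is laid out.
    PACKET_META = {"fields": ("timestamp", "altitude"), "format": "<df"}

    def encode(timestamp: float, altitude: float) -> bytes:
        # Packets carry only raw values, no self-describing header.
        return struct.pack(PACKET_META["format"], timestamp, altitude)

    def decode(packet: bytes) -> dict:
        # A component hardwired to this type decodes directly; a generic
        # component would first consult PACKET_META to learn the layout.
        values = struct.unpack(PACKET_META["format"], packet)
        return dict(zip(PACKET_META["fields"], values))

    print(decode(encode(12.5, 3047.0)))  # {'timestamp': 12.5, 'altitude': 3047.0}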

Figure 12-1. Alternatives for allowing a component to export multiple interfaces to the same functionality. Multiple copies of the same component exporting different interfaces (top); a wrapper exporting the different interfaces (middle); the component itself exporting the three interfaces (bottom).

Software Connectors and Efficiency

Carefully Select Connectors

As argued in Chapter 5, software connectors are first-class entities in a software architecture. They encapsulate all of the interaction facilities in a system. Especially in large, complex, and distributed software systems, the interactions among the system components become the determinants of key system properties, including efficiency. Careful selection of connectors is thus critical.

While many different types of connectors may provide the minimum level of service required by a given set of components in a system, some connectors will be a better fit than others, and thus may improve system efficiency. For example, the architect may select a message broadcasting connector, such as the connectors in the C2-style Lunar Lander architecture from Chapter 4, because that connector has been used in a number of systems, can accommodate varying numbers of interacting components (that is, its cardinality is N), and has proven to be highly dependable. However, if the particular context in which the connector is to be used involves only a pair of interacting components, then a more specialized, though less flexible, point-to-point connector (with cardinality 2) will likely be a more efficient choice.

Similarly, if the system under construction is safety-critical and has hard real-time computing or data delivery requirements, a starting point for the architect would likely be synchronous connectors with exactly-once data delivery semantics. On the other hand, if the system being constructed is a weather satellite ground station, a data stream connector with at-most-once data delivery semantics may be a better choice. It is highly unlikely that a loss of weather data packets over a short time span—even a few minutes—will severely impact the system or its users.

Use Broadcast Connectors with Caution

A system may be more flexible if the connectors used in it are capable of broadcasting data to multiple interested components. For example, new components can attach to such a connector seamlessly and begin observing the relayed information. However, that flexibility can come at the expense of other system properties, such as security and efficiency.

It is possible that a single interaction may involve multiple components, and the temptation may thus be to rely on broadcast. The downside occurs if some of the components receiving the information do not actually need it. In that case, the system's performance is impacted, not only because data is unnecessarily routed through the system, but also because the recipient components have to devote some of their processing time to establish whether the data is relevant to them.

An alternative to broadcast connectors is multicast connectors, which maintain an explicit mapping between interacting components. Another possibility is to rely on a publish-subscribe mechanism to establish such a mapping during run time.
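
A minimal Python sketch of the publish-subscribe alternative follows; the API shown is invented. The connector maintains an explicit topic-to-subscriber mapping established at run time, so data is routed only to components that declared interest, rather than broadcast to all.

    from collections import defaultdict
    from typing import Callable

    class PubSubConnector:
        def __init__(self):
            self._subscribers = defaultdict(list)  # topic -> interested components

        def subscribe(self, topic: str, handler: Callable[[object], None]) -> None:
            self._subscribers[topic].append(handler)

        def publish(self, topic: str, data: object) -> None:
            # Unlike broadcast, only components that registered interest in
            # this topic receive (and spend processing time on) the data.
            for handler in self._subscribers[topic]:
                handler(data)

    bus = PubSubConnector()
    bus.subscribe("telemetry", lambda d: print("display got", d))
    bus.publish("telemetry", {"altitude": 3047.0})  # delivered to one subscriber
    bus.publish("diagnostics", "checksum ok")       # no subscribers: no work done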

Make Use of Asynchronous Interaction Whenever Possible

In a highly distributed and possibly decentralized setting, it may be difficult for multiple components to synchronize their processing so that their interactions take place at times that are ideal for all of the involved components. If this does not happen and the connectors servicing the components support only synchronous interaction, then, in essence, the slowest component will drag down the performance of the entire system and may force multiple components to wait idly until an interaction can be completed.

In such situations, asynchronous interaction is preferable, where a component is able to initiate the interaction via the connector and then continue with its processing until a later time when it receives a response. Likewise, each invoked component will respond to the incoming service requests as its availability and processing load allow; after it services the request, the component will send its reply to the connector and immediately be able to continue with its processing.

Note that this will not always be possible (single-threaded systems being the simplest example), and that it does not come without a performance cost. If the interaction among a set of components is asynchronous, it will be the connector's responsibility to ensure that the service requests and replies are properly associated with one another. This means that the connector will have to do additional processing to maintain many such mappings and determine the ordering of requests and replies as well as their destinations. Furthermore, the nature of a given component, or of a given request, may be such that the component has to suspend its processing and wait for a response, in which case the benefit of employing an asynchronous connector will be lost.
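
The bookkeeping the connector must perform can be sketched as follows in Python; the connector and its operations are illustrative only. Each request is tagged with a correlation identifier so that replies arriving in any order can be matched to the component that initiated them.

    import itertools

    class AsyncConnector:
        """Pairs asynchronous replies with the requests that caused them."""
        def __init__(self):
            self._ids = itertools.count()
            self._pending = {}  # correlation id -> callback of the requester

        def request(self, payload, on_reply):
            corr_id = next(self._ids)
            self._pending[corr_id] = on_reply
            return corr_id, payload  # would be handed to the invoked component

        def reply(self, corr_id, result):
            # Replies may arrive in any order; the mapping restores the pairing.
            self._pending.pop(corr_id)(result)

    conn = AsyncConnector()
    rid, msg = conn.request("compute trajectory", lambda r: print("got", r))
    # ... the requester continues processing here instead of blocking ...
    conn.reply(rid, "trajectory ready")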

Use Location Transparency Judiciously

Location transparency, also referred to as distribution transparency, shields a distributed system's components from the details of their deployment. In principle, this allows components to be designed as if all of their interactions are local, that is, as if they are co-located on one host with all of the components with which they need to interact. In turn, this allows easy system adaptation, enabling redeployment of some of the components across hosts without affecting the remaining components; it is the task of the system's connectors to ensure this transparency. An illustration is provided in Figure 12-2.

In practice, however, complete location transparency is difficult to achieve. Remote interactions, for example, are many times slower than local interactions. By some measurements they may be slower by a factor of forty or more. Thus both the interacting components and their users may quickly notice the difference. This means that any distributed system with specific performance requirements and the goal of location transparency may have to assume the worst-case scenario—that each component is located on a separate host. A preferred alternative is to clearly distinguish remote from local connectors. This may impact the system's adaptability but ensure appropriate performance.
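
The distinction can be sketched with two hypothetical Python connector classes. Both satisfy the same calling convention, preserving transparency for the components, yet the remote one is a distinct type, so its much higher cost remains a visible design decision in the architecture.

    import json
    import urllib.request

    class LocalConnector:
        def __init__(self, component):
            self._component = component

        def invoke(self, operation: str, **args):
            # In-process interaction: the cost is a procedure call.
            return getattr(self._component, operation)(**args)

    class RemoteConnector:
        """Same calling convention, but every invocation crosses the network
        and may be orders of magnitude slower; modeling it as a distinct
        connector type keeps that cost explicit to the architect."""
        def __init__(self, base_url: str):
            self._base_url = base_url

        def invoke(self, operation: str, **args):
            request = urllib.request.Request(
                f"{self._base_url}/{operation}",
                data=json.dumps(args).encode(),
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(request) as response:
                return json.load(response)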

Architectural Configurations and Efficiency

Keep Frequently Interacting Components Close

Every indirection through which two or more components communicate hampers that interaction's efficiency. This is especially true for systems with real-time performance requirements. Architectural styles or configurations that place many points of indirection between such components will be a poor fit. For example, if a component A naturally fits two or more layers above component B in a layered architecture, yet the two components need to interact frequently in a short time frame, a strictly layered architecture is not a good choice for the system: Every interaction will need to be forwarded by the intermediate layers.

An example of a layered architecture in which the distance between interacting layers proves to be a problem is that of the mobile robot system discussed by Mary Shaw and David Garlan (Shaw and Garlan 1996), and depicted in Figure 12-3. The Robot Control component observes the environment and invokes the Sensor Interpretation component in response to events of interest. In turn, the Sensor Interpretation component may need to invoke the component (that is, layer) above it, Sensor Integration, for a more meaningful, nonlocal interpretation of the sensed events. Likewise, the Sensor Integration component may require the help of the component above it, and so on, until eventually the highest-level component, Supervisor, is reached. Once a component in a given layer is capable of servicing the request it received, it returns the results to the component immediately below it; this sequence continues until the Robot Control component receives the necessary information to respond to the sensed event.

Figure 12-2. Distribution transparency. Conceptual view of the architecture (a) and deployment view of the same architecture (b). The system's components are unaware of the deployment profile. The connectors render that profile transparent. If the same connectors are used for local and distributed scenarios, the system's efficiency likely will be compromised.

Figure 12-3. A layered architecture for a mobile robot system in which the strict layering may induce performance penalties.

In this architecture, it is possible that a request from Robot Control must be propagated to as many as seven other components, and returned via them, before it is serviced. For example, if a given class of events from Robot Control pertains to the robot's Navigation, these events are still propagated, and the responses to them returned, via the three intermediate components. Clearly, this is not an efficient solution. Shaw and Garlan recognize this and discuss several alternative architectures for the system.

A common way of keeping frequently interacting components close is to cache, hoard, or prefetch data needed for their interaction. All three of these techniques try to anticipate the remote information that will be needed by a given component, and try to deliver that information at a convenient time, that is, when the impact on the system's overall performance will be minimal, before the information is actually needed. This is one way of mimicking location transparency. Caching, hoarding, and prefetching are part of a software connector's duties. Note that such advanced facilities will invariably add to a connector's complexity, which is discussed further below.
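
A caching connector can be sketched in a few lines of Python; all names are assumptions for illustration. It mimics location transparency by answering repeated requests locally, paying the remote cost only on a cache miss, and can prefetch before the data is actually needed.

    class CachingConnector:
        """Wraps a (possibly remote) provider and services repeat
        requests from a local cache, trading memory for latency."""
        def __init__(self, fetch_remote):
            self._fetch_remote = fetch_remote  # the expensive remote interaction
            self._cache = {}

        def get(self, key):
            if key not in self._cache:           # miss: pay the remote cost once
                self._cache[key] = self._fetch_remote(key)
            return self._cache[key]              # hit: local and fast

        def prefetch(self, keys):
            # Invoked at a convenient time, before the data is actually needed.
            for key in keys:
                self.get(key)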

Carefully Select and Place Connectors in the Architecture

Any large system is likely to comprise components with heterogeneous interaction requirements. As discussed in Chapter 5, it is possible to create large connectors that are capable of simultaneously servicing multiple components in different ways. Multiple such connectors could then be used in the system. However, each connector would support a superset of the needed interaction capabilities, meaning that, in the average case, many of the connector's features would remain unused, yet still consume resources. Furthermore, it will be difficult to optimize the larger, general-purpose connectors.

For this reason, it may be preferable for an architect to choose multiple connectors, each geared to the given subset of components' specific interaction needs, as opposed to fewer but larger connectors that can service more components in the system and their multiple needs. An obvious example is the procedure-call connector. Procedure calls are simple and efficient. By themselves, they may be unable to provide more advanced interaction facilities, such as asynchronous invocation, yet they are able to satisfy a majority of today's systems' needs, even in distributed settings. The higher-order connector facilities can, of course, still be selected and added to the system, but only if and when they are needed.

Likewise, if a given software component has multiple performance requirements, several options are at the architect's disposal. If the component exports multiple interfaces, the architect can employ multiple connectors to service the component, one per interface type. Alternatively, the architect can try to separate the multifaceted component into multiple separate components.

Consider the Efficiency Impact of Selected Architectural Styles and Patterns

Some styles are not a good match for certain types of problems. Examples abound; consider a few below.

  • Asynchronous interactions, such as those in publish-subscribe systems, cannot be used effectively in systems with real-time requirements.

  • Large repository-based systems, such as those adhering to the blackboard architectural style, may make it difficult to satisfy stringent memory constraints.

  • Systems that are required to process continuous data streams can be designed using event-based architectures, but transforming a stream into discrete events and possibly recomposing the stream after the transfer carries a computational penalty.

  • If data needs to be delivered to its user incrementally, a batch sequential system will be a poor fit. Similarly, a pipe-and-filter system may be a poor fit as well, unless the employed variant of the pipe-and-filter style supports incremental data delivery.

An analogous argument holds for architectural patterns. For example, the model-view-controller pattern may work well in a centralized setting, but it may not be a good fit for highly distributed and decentralized environments because of the frequent tight interaction between the components.

It also should be noted that, while adaptability in certain cases—such as supporting location transparency—may harm a system's efficiency, in other cases system adaptation can be used as a tool to address performance requirements. For example, a style that supports redeployment in a distributed architecture may be able to support certain performance requirements more effectively.

COMPLEXITY

Software complexity can be thought of from different perspectives, and thus defined in different ways. For example, the IEEE definition of complexity (IEEE 1991) is as follows.

Definition. Complexity is the degree to which a software system or one of its components has a design or implementation that is difficult to understand and verify.

This definition ties complexity to the specific perspectives of system understanding and verification. The definition does not say how complexity may be manifested in a software system or how it would impact other objectives, such as satisfying performance requirements. Perhaps worst of all, the determination of the degree to which a system is complex depends on the particular individual who is asked how "difficult to understand" the system is.

In order to understand the impact of software architecture on complexity, the traits of the system itself must be taken into account. A different and somewhat more useful definition follows.

Definition. Complexity is a software system's property that is proportional to the size of the system, the number of its constituent elements, the size and internal structure of each element, and the number and nature of the elements' interdependencies.

To make this definition more precise, we would also need to define "size," "internal structure," and "nature of interdependencies." Still, the reader should have an intuitive understanding of these terms from introductory programming classes. For example, the size of a software system can be measured in terms of source lines of code, number of modules, or number of packages. The reader should also be familiar with an element's internal structure from the discussions of architectural models and with different types of module interdependencies from the discussion of software connectors in Chapter 5.

There are several observations that emerge from the second definition of complexity that also agree with our general intuitions about software systems. These observations help us discuss the architectural implications of complexity and formulate some guidelines to help reduce or mitigate its presence.

Software Components and Complexity

Separate Concerns into Different Components

Conventional software engineering wisdom suggests that each of the various types of tasks performed by a system should be supported by a different component or set of components. This guideline may be obvious to any software engineer, for it stems from the application of the fundamental software development principles of abstraction, modularity, separation of concerns, and isolation of change. At the same time, this guideline needs to be clarified with respect to a related, perhaps obvious, observation: All other things being equal, a software system with a greater number of components is more complex than a system with a smaller number of components.

At first blush, this would suggest that architects should strive to minimize the number of components in their system. In other words, architects should try to co-locate multiple concerns in a single component in order to reduce the number of components. In fact, the guideline (that different system concerns should be separated into different components) and the observation (that larger numbers of components result in added complexity) are not inconsistent. In an architecture, the sheer number of components may not be as relevant to the system's complexity as the number of component types. Components of the same type can be easier to understand and analyze, may be more naturally composable, and can help abstract away the details of their instances. Conversely, a system with relatively few component instances but many different component types may, in fact, be more complex overall.

The notion of types used here is broader than that typically employed in the areas of programming languages or formal methods. For the purpose of this discussion, a component type may refer, for instance, to a similar structure, behavior, application domain, organization that developed the components, or standardized API. In other words, a type in this context refers to any set of component features that may ultimately lessen the effort required for the encompassing system's construction, composition, integration, verification, or evolution. For example, a system developed entirely in CORBA may be easier to understand than the same system that is developed using a collection of heterogeneous technologies. The reason is that both systems may be composed of the same number of element instances, but the CORBA system is composed of a smaller number of element types.

Also of note is that reducing the number of components in a system does not guarantee lower complexity if it results in increasing the complexity of each individual component. In other words, a system with larger constituent components is typically more complex than a system with smaller components. Put a slightly different way, a system with more complex components is itself more complex. Again, this is a relative measure and it will depend on the number of components in question. For example, a small number of complex components, belonging to a small number of component types, need not necessarily result in a complex system.

One final observation, applicable both to components and to connectors, is that from the architectural perspective, individual components are often "black boxes." As long as they can be treated as such—that is, as long as the engineers can avoid examining the components' details when establishing a system's property of interest—their internal complexity does not matter. In other words, this is the very type of complexity that is abstracted away by good architectural design.

Of course, it is a fair question to ask how realistic it is that engineers will not have to consider a component's internal details. In fact, while component-based software development has aided engineers in some significant ways, experience to date suggests that the really insidious problems involve the interplay of multiple components and often require the consideration of both their interactions as well as their internal structure and behavior. This is why an architecture-based approach to software development allows the inclusion of many concerns, including intracomponent concerns, in the set of a system's principal design decisions.

Keep Only the Functionality Inside Components—Not Interaction

While components in a software system contain application-specific functionality and data, they must also interact with other parts of the system, often in intricate and heterogeneous ways. In most commercial software systems, a component is encumbered with at least some of its own interaction responsibilities. The most prevalent examples, of course, are the use of procedure calls and shared memory access. Components typically support one of these two mechanisms in order to integrate and interact with external components.

While there may be legitimate reasons for coupling computation or data with interaction (for example, in order to improve the component's efficiency in a given system), it ultimately violates the basic software engineering principle of separation of concerns and results in more complex components. Furthermore, a decision to place the interaction facilities inside a component may hamper the component's reusability: The interaction facilities will be integrated into the component, yet they may be a poor fit for future systems. For example, if a component incorporates elements enabling it to communicate synchronously via remote procedure calls, that component will be suited only for certain distributed settings. Such a design will ultimately harm the efficiency of the component, as well as that of its encompassing systems, if the component needs to be used in an alternative setting in which its interactions need to be asynchronous or local, or both. In such cases, adaptor connectors will need to be used to integrate the component with the rest of the system.

Placing interaction facilities inside a component also violates the principle of separation of concerns from the point of view of the connector: The interaction facilities housed within the component will need to be isolated to make them reusable, optimize them, or improve their reliability over time. This will be discussed further below.

Keep Components Cohesive

This guideline is similar to the "Keep Components Small" guideline from Section 12.1.1; refer back to that section for the essence of the argument. At the same time, one reality of software development cannot be avoided: Regardless of how principled or clever architectural design decisions are, complex problems frequently require complex solutions. Software architecture can play a role in controlling that complexity, but architecture cannot eliminate the complexity. Therefore, a component may in fact need to be complex if the functionality it provides is complex.

Nevertheless, that complexity will be easier to tackle if the component has a clear focus, that is, if it addresses a specific, well-defined need in the system and does not incorporate solutions for multiple, disparate system requirements. One example of components that are not cohesive was discussed in the previous guideline: Coupling functionality with interaction inside a component will ultimately increase the component's complexity, and may result in several other undesirable characteristics. Another potential such class of components will be discussed next.

Be Aware of the Impact of Off-the-Shelf Components on Complexity

In general, off-the-shelf reuse has many well-documented benefits. Among them are the potential to reduce development effort, time, and cost, and to improve system dependability. At the same time, off-the-shelf components are often very large and complex systems in their own right. They may demand a great deal of careful study to understand them, and intricate techniques for integration with other components.

Therefore, off-the-shelf components may impact a system's complexity in two ways: (1) as a by-product of their own internal complexity, and (2) by requiring that complex connectors be used in the system. The first source of complexity can be avoided as long as the component can be treated as a black box. However, as soon as the engineer has to "peek inside" the component, the complexity of the system in question may increase significantly.

The second source of complexity arises from a common misconception in off-the-shelf reuse-based development. That misconception is that clean, well-documented APIs will necessarily simplify the integration of a component into a new system. While such APIs are certainly needed to effectively use the component, they can still mask many idiosyncrasies in the way the component is actually intended to be used. For example:

  • The API typically will not provide guidance on how the component should be configured in a given environment and initialized for use.

  • The API may not give any hints about the assumptions made by the component. For instance, does the component assume that it will control the system's main thread of execution?

  • The API may not clearly indicate which operations are synchronous and which are asynchronous.

  • The order in which operations can be legally invoked may be unclear in the API.

  • The component state or mode of operation assumed for invoking a given element of the API may be unclear.

  • Any side-effects of operations may be unstated.

The connectors required to integrate off-the-shelf components have to account for such details. They will therefore be a very real source of complexity, and one of which a software architect must be keenly aware.

Insulate Processing Components from Changes in Data Format

Processing components that need access to system data will likely rely on that data being presented in a particular format. Should the data format change—which is possible, even likely, in long-lived, distributed, and decentralized systems—the components themselves may need to change. Indeed, it is conceivable that a simple change in the data may affect a large portion of a system. Furthermore, the impact of data changes may not be easily foreseen as many architecture modeling notations do not take into account data access dependencies.

To avoid such undesirable situations, architects should employ techniques to control the complexity of processing-to-data and data-to-data dependencies. One such solution is to use dedicated meta-level components to address data resource discovery and location tracking. Another solution is to make use of explicit adaptor connectors that enable the processing components to rely on the same data "interface." Finally, as discussed in the case of efficiency above, data should be separated from meta-data, allowing components to dynamically interpret the data they are accessing.
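
One hedged illustration of the adaptor-based technique in Python follows; all names are invented. An adaptor layer gives processing components a stable data "interface," so a change from one wire format to another is absorbed in a single place rather than rippling through the system.

    import csv
    import io
    import json

    class RecordSource:
        """The stable data interface that processing components depend on."""
        def records(self) -> list:
            raise NotImplementedError

    class JsonSource(RecordSource):
        def __init__(self, text: str):
            self._text = text

        def records(self) -> list:
            return json.loads(self._text)

    class CsvSource(RecordSource):
        """Added when the upstream data format changed; no processing
        component had to be modified."""
        def __init__(self, text: str):
            self._text = text

        def records(self) -> list:
            return list(csv.DictReader(io.StringIO(self._text)))

    def mean_altitude(source: RecordSource) -> float:
        rows = source.records()
        return sum(float(r["altitude"]) for r in rows) / len(rows)

    print(mean_altitude(JsonSource('[{"altitude": 3047.0}, {"altitude": 2951.5}]')))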

Software Connectors and Complexity

Treat Connectors Explicitly

The impact of coupling functionality, data, and interaction on a system's complexity has been discussed above, in the guidelines for designing, constructing, and selecting components. One of the key contributions of software development from an explicit architectural perspective is that application-specific functionality and data should be separated from the application-independent interaction. That interaction should be housed in explicit software connectors. Chapter 5 discusses the many arguments in favor of treating connectors explicitly.

Keep Only Interaction Facilities Inside Connectors

Connectors are in charge of the application-independent interaction facilities in a given software system. The task of a connector is to support the communication and coordination needs of two or more components, and to provide the needed conversion and facilitation capabilities to improve that communication and coordination.

Nonetheless, just as the temptation may exist to place advanced interaction facilities inside components, it may be similarly tempting to place application-specific functionality or data inside a connector. There are seldom any good reasons to do this—it will always be possible to add a new component to house any such functionality—and there are many reasons not to do it. Therefore, as a general guideline, making a connector responsible for providing application-specific functionality or data should always be avoided.

Separate Interaction Concerns into Different Connectors

Interaction concerns in a given system could be simply the general connector roles discussed in Chapter 5, such as separating communication from facilitation. Those concerns could also be more application-specific, such as decoupling the exchange of data between two specific distributed components from the compression of that data to enable more efficient transmission across the network.

The basic argument underlying this guideline is analogous to the one applied to software components in Section 12.2.1. In principle, each connector should have a single, specific, well-defined responsibility. This allows for connectors to be updated, even at system run time. It also allows the architect to clearly assess the components interacting through those connectors, as well as the impact of any modifications of those components. Any issues with the system will be easier to isolate and address as well.
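
The compression example can be sketched as two single-purpose connectors composed in Python; the code is illustrative only. Because compression lives in its own connector, it can be replaced or removed, even at run time, without touching the transfer connector or the interacting components.

    import zlib

    class TransferConnector:
        """Concern 1: moving bytes between two components."""
        def __init__(self, deliver):
            self._deliver = deliver

        def send(self, data: bytes) -> None:
            self._deliver(data)

    class CompressionConnector:
        """Concern 2: shrinking the payload; knows nothing about transfer."""
        def __init__(self, downstream: TransferConnector):
            self._downstream = downstream

        def send(self, data: bytes) -> None:
            self._downstream.send(zlib.compress(data))

    # Composition: swap CompressionConnector in or out without changing either side.
    receiver = lambda payload: print(len(payload), "bytes received")
    pipeline = CompressionConnector(TransferConnector(receiver))
    pipeline.send(b"altitude=3047.0 " * 100)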

Restrict Interactions Facilitated by Each Connector

In distributed software systems, interaction will often surpass computation as the dominant factor for determining a given system's key characteristics. For example, the study of battery power consumption in some distributed systems has shown that communication costs typically dwarf computational costs; because of this, a system's computational energy costs are, in some cases, considered to be little more than noise, and thus are ignored in the system's overall energy cost assessment.

Similar arguments can be made about other properties, including efficiency, adaptability, and, in particular, complexity. If a connector unnecessarily involves components in interactions in which they are not interested (for example, by sending them data), the system's overall complexity will increase. In the best-case scenario, the interaction attempt will be ignored. In certain situations, components may be forced to make an explicit effort to ignore the unwanted interaction overtures by the connector, that is, some processing will be expended unnecessarily, in addition to the unneeded data traffic. In the worst case, the components may accidentally engage in the interaction and erroneously affect the system's functionality and state. In such situations, discovering which interaction paths—intended or accidental—caused the system defect may be very difficult.

The simplest rule of thumb is to use direct point-to-point interaction whenever possible. Indirect interaction mechanisms, such as event-based or publish-subscribe, are very elegant means for ensuring many properties in distributed systems, such as adaptability, heterogeneity, or decentralization. However, the specific interactions that take place in the system and that may be causing run time defects can be difficult to track down when such mechanisms are used. Likewise, synchronous interaction lends itself better than asynchronous interaction to determining precisely the interaction paths among the system's components.

Put another way, while certain styles of interaction and system composition have become prevalent in large, distributed, and decentralized software systems, and are direct enablers of several desirable system properties, they tend to negatively impact the system's complexity. It is an engineering trade-off.

Be Aware of the Impact of Off-the-Shelf Connectors on Complexity

Connectors provide application-independent services, so they would appear to be a natural target for attempting reuse. However, there are a number of pitfalls that an engineer should avoid. Most of these are similar to the pitfalls discussed in the analogous guideline applied to components in Section 12.2.1. A simple example is reusing a connector that possesses far more features and capabilities than a given situation requires.

One additional observation, specific to off-the-shelf connectors, is that it typically will be more difficult to ensure the proper communication paths in the given system since the engineer will likely have a lower degree of control as compared to custom-built connectors.

Architectural Configurations and Complexity

Eliminate Unnecessary Dependencies

Large software systems are typically more complex than small ones. This means, by extension, that systems with larger software architectures also tend to be more complex. The size of an architecture in this sense can be measured in terms of the size of the architectural model as indicated, for example, by the number of statements or diagram elements in the modeling notation. It can also be measured in terms of the number of constituent components, connectors, and, perhaps most significantly, interaction paths in the architectural configuration.

Figure 12-4. The architecture of the Linux operating system as documented. Adapted from Bowman et al. (Bowman, Holt, and Brewster 1999) © 1999 ACM, Inc. Reprinted by permission.

Most frequently, a system with more interdependencies among its modules is more complex than a system with fewer interdependencies. The reasons are at least two-fold. First, there is a greater number of possible interaction paths in such a system. Second, it is more difficult to control the behavior and predict the properties of such a system, precisely because of the added interaction paths.

Consider, for example, the architecture of the Linux operating system, as studied by Ivan Bowman and colleagues and shown in Figure 12-4 and Figure 12-5. Figure 12-4 shows the "as-documented" architecture of Linux, which the authors extracted from the available Linux literature. Figure 12-5 shows the "as-implemented" architecture, which the authors extracted from the Linux source code. For the purpose of this discussion, we will treat each identified subsystem as a single component in the architecture.

The two diagrams are quite dissimilar: The as-documented architecture is very clean, with a small number of unidirectional dependencies. On the other hand, the as-implemented architecture depicted in Figure 12-5 is a fully connected graph in which almost all of the dependencies are bidirectional. Any modifications to this architecture will be significantly more difficult to effect correctly because of this added complexity.

For example, the Network Interface component in Figure 12-4 depends only on Process Scheduler, and is depended upon by File System. On the other hand, in Figure 12-5 the same component both depends on and is depended upon by nearly every other component in the system; the only exception is the Initialization module, with which Network Interface has a unidirectional dependency.

Figure 12-5. The architecture of the Linux operating system as implemented. Adapted from Bowman et al. (Bowman, Holt, and Brewster 1999) © 1999 ACM, Inc. Reprinted by permission.

The main questions in this situation are: Why are there so many component interdependencies, and are they all necessary? Even if they are indeed necessary, an architectural configuration that is revealed to be a fully connected graph of components and connectors very likely indicates a poor design.

In such a case, the architect needs to carefully reconsider the design decisions, and may also need to reconsider the selected architectural styles and patterns. While a style or pattern will be unable to eliminate the inherent complexity in a system, a given style or pattern will be appropriate for some classes of problems, but may be inappropriate for others.

Manage All Dependencies Explicitly

For illustration, we will continue referring to the example of Figure 12-4 and Figure 12-5. However, the reader should note that this example is by no means atypical. Cases in which an architecture degrades badly over time abound. The Linux developers had clearly chosen to call out, and likely rely on, the documented architecture (Figure 12-4) as the correct one. This decision may have been accidental, that is, a product of a long succession of small, undocumented violations to the architecture of which most Linux developers were unaware. On the other hand, the decision may also have been deliberate, as the number of dependencies that an engineer would have to consider—in case a new component had to be added, an existing one modified, or a defect in the system remedied—was much smaller and the architecture easier to justify than would be the case with the architecture in Figure 12-5.

At the same time, the documented architecture clearly omitted a majority of the dependencies. Hiding those dependencies from the engineers did not make their modification tasks any easier. Quite the contrary: Engineers may have had to discover the missing dependencies on their own, after finding that system adaptations that should have behaved in a given way in fact did not work properly. In such situations the engineers would ultimately be forced to study the source code, after realizing that the documentation was not only incomplete, but also misleading.

It is because of these pitfalls that the real complexity of a system should not, and cannot, be hidden, and that all dependencies should be managed explicitly in the architecture.

Use Hierarchical (De)Composition

Hierarchical decomposition of a software system's architecture, and hierarchical composition of a system's components, are important tools for managing a given system's complexity. The components of a given conceptual unit are grouped together into a larger, more complex component; that component may also be grouped with other like components into even larger components. Through the use of appropriate interfaces the underlying complexity is masked, allowing the system's architecture at its highest level to be more readily understandable.

For example, the Linux architecture is decomposed into seven key components (that is, Linux subsystems) and their dependencies. Each of these components exports an interface to the rest of the system, and thereby abstracts away the component's internal architecture. This is also frequently the case with off-the-shelf components discussed above, which may be systems in their own right. In principle, a hierarchically (de)composed system allows the architect to isolate the parts of a system that are relevant for a required adaptation.
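
A minimal Python sketch of hierarchical composition follows; the components are invented for illustration. The composite component exports a single interface while its internal sub-architecture stays hidden, so the top-level architecture need not reflect that internal complexity.

    class Parser:
        def parse(self, raw: str) -> list:
            return raw.split()

    class Analyzer:
        def analyze(self, tokens: list) -> dict:
            return {"tokens": len(tokens)}

    class TextProcessor:
        """Composite component: internally an architecture of two
        sub-components, externally a single interface."""
        def __init__(self):
            self._parser = Parser()
            self._analyzer = Analyzer()

        def process(self, raw: str) -> dict:
            return self._analyzer.analyze(self._parser.parse(raw))

    print(TextProcessor().process("lunar lander telemetry"))  # {'tokens': 3}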

SCALABILITY AND HETEROGENEITY

Definition. Scalability is the capability of a software system to be adapted to meet new requirements of size and scope.

Even though a system's scalability refers to its ability to be grown or shrunk to meet changes in the size of the problem, traditionally the difficulty faced by software engineers has been in supporting larger and thus more complex systems. Therefore, we will restrict our discussion to scaling software systems up.

Software architecture plays a critical role in supporting scalability. There are several dimensions to scalability, which we will discuss along with the role software architecture plays in supporting each. It should be noted that scalability can be achieved in an arbitrary case, but at a potentially exorbitant cost. The objective of this section is to provide guidelines that will allow software architects and engineers to improve a system's scalability without prohibitively increasing its complexity or degrading its performance. In other words, a software system can be said to scale well if its complexity does not grow faster than the system itself.

Two other NFPs deal with system aspects related to scalability: heterogeneity and portability.

Definition. Heterogeneity is the quality of a software system consisting of multiple disparate constituents or functioning in multiple disparate computing environments.

We will often want to speak of a system easily accommodating heterogeneous elements, that is, accommodating the incorporation of disparate components and connectors into its structure; thus we will also use heterogeneity to refer to an ability.

Definition. Heterogeneity is a software system's ability to consist of multiple disparate constituents or function in multiple disparate computing environments.

Definition. Portability is a software system's ability to execute on multiple platforms (hardware or software) with minimal modifications and without significant degradation in functional or non-functional characteristics.

Portability can thus be viewed as a specialization of heterogeneity.

Like scalability, heterogeneity and portability are system properties reflective of the system's ability to accommodate change and difference. They refer to increasing numbers of types of execution environments, in the case of both portability and heterogeneity, as well as types of software elements and users, in the case of heterogeneity.

Heterogeneity can be looked at from two perspectives.

  • Internal heterogeneity: A system's ability to accommodate multiple types of components and connectors, possibly written in different programming languages, by different developer teams, and even different organizations.

  • External heterogeneity: A system's ability to adjust to and leverage different hardware platforms, networking protocols, operating systems, middleware infrastructures, and so on. This view of heterogeneity thus encompasses portability.

In the discussion below, for ease of exposition, we will focus on scalability as the overarching property.

Software Components and Scalability

A software system's architecture may need to support the addition of new components. The components may be added as new requirements emerge during the system's life span. Alternatively, existing components may be replicated to improve system efficiency. In either case, architects should follow some general guidelines to ensure scalability. Again, the reader should remember that these are only guidelines. In general, one should adhere to them whenever possible, yet the reader will probably be familiar with, or will be able to conjure, software development scenarios that would argue against following these guidelines.

In addition to considering the impact of functional component design on scalability, this section also explicitly considers the impact of the system's data design. Many systems are highly data-intensive. Presently, that means storing, accessing, distributing, and processing terabytes and even petabytes of data. The World Wide Web can be thought of as such a system. Other examples include scientific applications deployed on a data grid [for example, Globus (Foster, Kesselman, and Tuecke 2001) and OODT (Mattmann et al. 2006)]; such systems are discussed in Chapter 11. In addition to the sheer overall size of the data used by a system, many systems must process that data within specific time constraints. We refer to the amount of data processed in a given amount of time as data volume (expressed in bytes per second). Architectural decisions will directly impact a system's ability to scale up to large data volumes. This section also identifies several heuristics architects should keep in mind when striving for data scalability.

Give Each Component a Single, Clearly Defined Purpose

This guideline has analogs in the cases of efficiency and complexity. An architect should avoid placing too much responsibility on any one component. Failing to adhere to this guideline will typically result in large, internally complex components with many dependencies on other components in the system. Such components may also lack architectural integrity because they encapsulate multiple concerns. They may thus become single points of failure or performance bottlenecks. Scaling up a system that comprises such components will be a challenge because there is an increased chance that any newly added components will also need to rely on the overburdened components, further adding to their workload. An example of such a component is Linux's Process Scheduler, shown in Figure 12-5.

Give Each Component a Simple, Understandable Interface

A component should be easy to identify and understand, use and reuse, deploy and redeploy. A component with a simple interface will have few and clear dependencies on other components in the system. Adding new components will have a minimal impact on such a component. Connectors can be more easily adapted, or new ones introduced, to support interactions with such a component. Finally, such a component will be easier to replicate and distribute across multiple hardware hosts if needed.

Do Not Burden Components with Interaction Responsibilities

This is a common pitfall. Simply put, adding interaction facilities to a component violates the component's conceptual integrity. Furthermore, it decreases the component's reusability potential since reuse becomes an all-or-nothing proposition. As discussed previously, it also increases a component's size and complexity. This has a deleterious impact on scalability. For example, scaling up a system by replicating such a component may become an issue: It may be less clear how that component should interact with the rest of the system since it encapsulates interaction design decisions, which should be public and external to the component.

Avoid Unnecessary Heterogeneity

Component incompatibilities can be overcome, but typically at a price. While reusing heterogeneous (typically off-the-shelf) components may in some cases significantly reduce development costs and effort, it should be approached judiciously. Components that are not carefully tailored to work together can cause architectural mismatches. Many examples exist in which needed functionality embodied in different components could not be integrated because of discrepancies in the components' interfaces, assumptions, and constraints.

This, in turn, has a direct impact on scalability: A system cannot scale up effectively if adding a single component can fundamentally alter the system's properties or, worse, break the system. Relying too much on sophisticated connectors is not the answer in such situations either; engineers may manage to integrate the needed functionality, but possibly at the expense of other properties, such as efficiency, adaptability, or dependability. This is why architectural analysis is an indispensable aid to system integration.

Distribute the Data Sources

It is possible to imagine a situation in which a system employs a single powerful database that is capable of storing all of its data. Such a decision may be justified for reasons of architectural simplicity or in order to decrease the project's costs. However, if the data needs to be accessed concurrently by many other components in the system, modified, and then stored again, the centralized database may not be able to support the needed data volume, especially if the system needs to grow. In other words, the data source component becomes the system's bottleneck. Furthermore, it becomes a single point of failure.

It is preferable in such situations to distribute the data sources, and possibly task each host with a specific, well-defined subset of the data and data consumer components. This enables multiple system components to access the needed data more efficiently and reliably. The added load on the data storage components that results from adding more data processing (that is, client) components will be distributed more evenly across the system. A significant growth in the size of stored data will be amortized across multiple components. Finally, even if one data source were to fail in such a situation, the remainder of the system could still function in a degraded mode. An example of this approach is BitTorrent (Cohen 2003), which is discussed in Chapter 11.
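
To make this concrete, here is a minimal sketch in Python of one way to distribute a data source; the names (DataStore, PartitionedDataSource) and the key scheme are hypothetical, and a production design would typically use consistent hashing so that adding a store relocates only a fraction of the keys.

    import zlib

    class DataStore:
        """One of several independent data source components."""
        def __init__(self, name):
            self.name = name
            self._data = {}

        def put(self, key, value):
            self._data[key] = value

        def get(self, key):
            return self._data.get(key)

    class PartitionedDataSource:
        """Routes each key to the store that owns its partition."""
        def __init__(self, stores):
            self._stores = stores

        def _store_for(self, key):
            # A stable hash, so a key always maps to the same store.
            return self._stores[zlib.crc32(key.encode()) % len(self._stores)]

        def put(self, key, value):
            self._store_for(key).put(key, value)

        def get(self, key):
            return self._store_for(key).get(key)

    source = PartitionedDataSource([DataStore("A"), DataStore("B"), DataStore("C")])
    source.put("telemetry/42", {"altitude": 120.5})
    print(source.get("telemetry/42"))

Each store now carries roughly a third of the load, and the failure of any one store leaves the remaining two thirds of the data reachable, which is the degraded mode described above.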

Replicate Data When Necessary

A common technique used to ensure scalable access to system data in distributed systems is data replication. Replication can help support growing numbers of data consumers; they do not all have to go to the same source. In many distributed systems data replication for local consumption—by a single software component or on a single host—is achieved by caching. Replication can also help the system's fault tolerance: If one of the components containing a copy of the needed data fails, requests for that data can be rerouted to another one of its copies.

Data replication must be approached with appropriate caution, however. Engineers must distinguish between mutable and immutable data. Immutable ("read only") data can typically be replicated with few concerns, except when the data is sensitive and access to it needs to be restricted. On the other hand, replicating mutable data requires that all replicas be synchronized. Constant synchronization of distributed copies of the same information can be very expensive in terms of performance, while stale (that is, unsynchronized) data can cause incorrect system behavior and may be unacceptable to the system's users.
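
The trade-off can be seen in a small read-through cache, sketched below in Python with hypothetical names: immutable entries are replicated once and never re-validated, while mutable entries carry a version number and must be checked against the authoritative source on every read, which is precisely the synchronization cost just described.

    class Source:
        """Authoritative data source; versions track mutable entries."""
        def __init__(self):
            self._values = {}
            self._versions = {}

        def write(self, key, value):
            self._values[key] = value
            self._versions[key] = self._versions.get(key, 0) + 1

        def read(self, key):
            return self._values[key], self._versions[key]

        def version(self, key):
            return self._versions[key]

    class ReadThroughCache:
        """A local replica: cheap for immutable data, costly for mutable."""
        def __init__(self, source, immutable_keys=()):
            self._source = source
            self._immutable = set(immutable_keys)
            self._cache = {}                    # key -> (value, version)

        def read(self, key):
            if key in self._cache:
                value, version = self._cache[key]
                if key in self._immutable:
                    return value                # replicated once, kept forever
                if version == self._source.version(key):
                    return value                # replica is still fresh
            value, version = self._source.read(key)
            self._cache[key] = (value, version)
            return value

Note that even this validation step costs a round trip to the source on every read of mutable data; real systems soften that cost with leases or expiration times, trading freshness for performance.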

Software Connectors and Scalability

As new functionality is added to a software system, the system will likely also need to grow in the number of interaction mechanisms, that is, connector types. New connector instances may also be added to improve the system's performance, for example, by reducing the load on existing connectors. As in the above case, we can identify some general guidelines that can help to improve a system's scalability in terms of interaction. Again, keep in mind that these are general guidelines and that it is possible to encounter software development scenarios that may not allow one to adhere to these guidelines.

Use Explicit Connectors

This guideline may appear obvious by now, but it needs to be stressed because the choice to adhere (or not adhere) to it will have significant implications on the given system's scalability. Connectors remove the burden of interaction from components. They are the natural points of scaling in a system. Even when a given connector, such as a remote procedure call, is unable to support the system's scaling up, architects have the opportunity to replace that connector with a more appropriate one, such as an event-passing or data-caching connector.

System adaptations such as adding new components, extending the system's distribution to new hosts, or increasing the amount of data and the number of data types, can be directly aided by the chosen connectors and, at the same time, should have minimal or no impact on the system's individual components. For example, a component should not need to be aware of the number of other components in the system with which it is interacting.

As discussed in Chapters 4 and 11, software architectures and architectural styles that result in highly scalable systems, such as publish-subscribe or REST, employ explicit, first-class connectors to achieve that scalability.
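
The following sketch, in Python with hypothetical names, shows the essence of an explicit publish-subscribe connector: components address the connector rather than one another, so publishers and subscribers can be added, removed, or replicated without modifying any existing component.

    from collections import defaultdict

    class EventBus:
        """An explicit, first-class publish-subscribe connector."""
        def __init__(self):
            self._subscribers = defaultdict(list)

        def subscribe(self, topic, handler):
            self._subscribers[topic].append(handler)

        def publish(self, topic, event):
            # The publisher neither knows nor cares how many components,
            # if any, receive the event.
            for handler in list(self._subscribers[topic]):
                handler(event)

    bus = EventBus()
    bus.subscribe("nav.command", lambda e: print("guidance received:", e))
    bus.subscribe("nav.command", lambda e: print("logger received:", e))
    bus.publish("nav.command", {"burn_duration": 3.5})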

Give Each Connector a Clearly Defined Responsibility

If a connector is overburdened with supporting multiple interaction facets in a given system, large numbers of components, or large amounts of data, that connector may not be able to support adequately the system's further growth in size. There are scenarios in which a heavy burden on a connector clearly cannot be avoided. In other situations, a connector may be treated by architects as a software bus that should handle all of the interactions in a system or subsystem. An example is a message-oriented middleware product.

In such situations, using a larger number of connector instances of the same type is a simple remedy. Each individual connector thus ends up being simpler and responsible for a smaller portion of the overall system's interactions. As such, it will have the potential to support new interactions as needed in the future and hence to aid the system's scalability. Furthermore, adding new connectors becomes easier because their responsibilities are clearly delineated and component interaction points are clearly defined.

Decisions such as the above—to delegate interaction responsibilities to multiple connectors—may be only conceptual design aids, but they will also likely manifest themselves in system implementations.

Choose the Simplest Connector Suited for the Task

An architect will often have multiple interaction choices at his disposal. For example, the architect may be able to choose between an RPC connector and a publish-subscribe connector. A decision to go with the latter may be a result of envisioned future system growth, and the impulse may be to go with a far-reaching, more comprehensive solution.

This is not necessarily a wise strategy. Unnecessary complexity usually affects system performance. It is very likely that the system will, in fact, evolve. However, that evolution could be in a direction different from the one anticipated by the architect. Even if the architect turns out to be correct, future adaptations (in this case, future additions of new items such as components, users, points of distribution, and data) should be addressed by introducing the more complex connectors when they are needed. In the meantime, selecting the most appropriate connectors for the system as it currently stands helps to preserve the system's conceptual integrity and likely will speed implementation.

Be Aware of Differences Between Direct and Indirect Dependencies

Direct, explicit dependencies between components in a system, such as those captured by synchronous procedure call connectors, can aid architects with controlling the system's complexity and ensuring that its performance requirements are met. However, scaling up such a system may be nontrivial, precisely because the connectors ensure a tight fit among the system's current components.

The alternative is to use indirect, implicit dependencies, possibly simultaneously among multiple components, and possibly characterized by asynchronous interaction. In addition to better supporting scalability, such connectors can also aid system adaptability. Examples discussed previously are event broadcasting connectors and shared data access connectors. The advantages of connectors that support loose component coupling come at a cost, however. Such connectors can negatively impact both the system's complexity and performance. For example, how does one discover the sources of system defects when the component interactions are hidden? How does one ensure system efficiency when, in principle, multiple components can respond to a given request at arbitrary times and in arbitrary order?

A software architect must be aware of this fundamental trade-off and select connectors judiciously.

Do Not Place Application Functionality Inside Connectors

Placing application-specific functionality inside the ostensibly application-independent interaction facilities may be tempting for several reasons. For example, co-locating a connector's caching capability with some in situ data processing may accrue certain performance gains. However, doing so will violate the separation of concerns principle, and will result in an increase in the connector's complexity.

Such a decision will also impact the system's scalability. It likely will be more difficult for a connector to service an increasing number of components or route increasing data volumes if the connector is also encumbered with processing responsibilities. Furthermore, the processing done inside the connector may result in additional dependencies with the components, and possible architectural mismatches. Ironically, the ultimate epilogue may well be that the very property used to justify placing application processing inside the connector—improved system efficiency—may end up undermined.

Leverage Explicit Connectors to Support Data Scalability

There are certain types of application-independent data processing that are naturally housed inside a connector. These are techniques for bringing data closer to its consumers and serving it more efficiently or fluidly, and include buffering, caching, hoarding, and prefetching. Such services may need to be tailored to an application. For example: Should data be cached after the initial request or prefetched in anticipation of future requests? What is the volume of data that needs to be buffered before it is served to a component? Under what circumstances should a copy of the data be stored locally?

These services do not alter a system's functionality, hence the system's components should be completely insulated from them. On the other hand, these services can have a significant impact on the system's non-functional characteristics, in particular, its efficiency and scalability. Therefore, making connectors explicit, first-class entities in the system's architecture is a precondition to providing these data scalability services. Recall that this is a major lesson from the REST style (Chapters 1 and 11), which had to enable an unprecedented degree of data scalability in the REST-compliant systems, most notably the World Wide Web.
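
As an illustration, a caching-and-prefetching connector might be sketched as follows (in Python; the names and the page-oriented prefetch hint are hypothetical). Components call get() exactly as they would on the raw data source and remain completely insulated from the caching policy.

    class CachingConnector:
        """Caching and prefetching housed inside the connector."""
        def __init__(self, fetch, prefetch_hint=None):
            self._fetch = fetch                  # the underlying data source
            self._prefetch_hint = prefetch_hint  # key -> keys likely needed next
            self._cache = {}

        def get(self, key):
            if key not in self._cache:
                self._cache[key] = self._fetch(key)
                # Warm the cache in anticipation of future requests.
                if self._prefetch_hint is not None:
                    for nxt in self._prefetch_hint(key):
                        self._cache.setdefault(nxt, self._fetch(nxt))
            return self._cache[key]

    # Example: reading page n of a data set usually leads to page n + 1.
    pages = {n: f"contents of page {n}" for n in range(10)}
    connector = CachingConnector(
        fetch=lambda n: pages[n],
        prefetch_hint=lambda n: [n + 1] if n + 1 in pages else [])
    print(connector.get(3))   # fetches pages 3 and 4; page 4 is now local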

Architectural Configurations and Scalability

Avoid System Bottlenecks

If a large and growing number of components depend on services provided by a single other component in a system, that component may eventually become unable to provide its services efficiently or reliably to all of its clients. Similarly, if a large and growing number of components interact through a single connector, at some point that connector may become unable to satisfy the components' interaction needs. In such situations, the overburdened individual components and connectors may have a significant impact on the overall system's performance and may preclude further system growth. In other words, they may become system bottlenecks.

An explicit architectural model can aid with identifying and avoiding bottlenecks. Even a casual glance at the architectural configuration of certain systems can indicate possible bottlenecks. For example, Figure 12-4 suggests that the Library component is a potential bottleneck because all other major Linux components invoke it. In order to establish whether Library is indeed a bottleneck, additional information is needed: How frequently is it called? What is the latency of servicing requests? Are any requests dropped because of the component's inability to service them quickly enough? Is the performance of other system components significantly impacted by this component's performance?

If the component or connector is established as being a bottleneck in the system, the system may need to be redesigned to eliminate this problem. For example, a replica of the overburdened component may be introduced to service a portion of the requests; likewise, a new connector may be inserted to off-load the original connector by servicing a subset of the interacting components. In distributed systems literature, such techniques are commonly referred to as load balancing.
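
A load balancer of the kind just described can itself be expressed as a connector. The sketch below (in Python; the names are hypothetical) dispatches requests to replicas round-robin, which presumes the replicas are stateless or share replicated state.

    import itertools

    class LoadBalancingConnector:
        """Off-loads a bottleneck by spreading requests over replicas."""
        def __init__(self, replicas):
            self._next = itertools.cycle(replicas)

        def request(self, *args, **kwargs):
            return next(self._next)(*args, **kwargs)

    # Two replicas of the same (stateless) service.
    replica_a = lambda q: f"replica A answered {q}"
    replica_b = lambda q: f"replica B answered {q}"
    balancer = LoadBalancingConnector([replica_a, replica_b])
    print(balancer.request("lookup"))   # handled by replica A
    print(balancer.request("lookup"))   # handled by replica B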

Make Use of Parallel Processing Capabilities

Certain types of problems lend themselves naturally to processing in multiple parallel threads. Examples are many scientific computing applications. In such cases, the system's scale in the amount of computation performed or the volume of data processed will depend upon the number of concurrently executing modules, each of which is likely executing on a separate physical processor. In other words, scalability is achieved through sheer distribution.

One limitation, of course, is that not all problems can be easily parallelized in this manner. Unless the problem is naturally, easily parallelized, increasing the scale can happen at the significant expense of efficiency.
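
For a problem that does parallelize naturally, the scaling can be as simple as the following Python sketch: the data is split into independent chunks, each chunk is processed on a separate processor, and a trivial final step combines the partial results. The chunk size and the sum-of-squares computation are placeholders for real per-chunk work.

    from concurrent.futures import ProcessPoolExecutor

    def process_chunk(chunk):
        # Placeholder for real per-chunk scientific computation.
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        chunk_size = 100_000
        chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
        with ProcessPoolExecutor() as pool:          # one worker per processor
            partial_results = list(pool.map(process_chunk, chunks))
        print(sum(partial_results))                  # trivial combination step

A problem with dependencies between chunks would require coordination among the workers, and that coordination is exactly what erodes the efficiency of this approach.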

Place the Data Sources Close to the Data Consumers

If components that need to access some data are distributed across a network, the decision to have a single, remotely located data storage component will result in a lot of network traffic. This, in turn, will affect the system's performance and its ability to accommodate additional client components that may yet further increase the data traffic. One solution is to always keep the data sources close to data consumers, minimizing the resulting network traffic, and even co-locating them on the same host whenever possible.

Clearly, this guideline will be difficult to apply in many distributed systems. Moving the data closer to the processing components can be done virtually through techniques such as caching and prefetching, as discussed earlier. However, in such cases the system needs to ensure that multiple copies of the same data are synchronized, which may add to the system's processing and communications load beyond the savings incurred by making data access local.

Another possibility is to co-locate the processing components. However, the application's location constraints may limit the architect's options. Furthermore, even if a given component could be redeployed to another host (that is, closer to the data), its interactions with other parts of the system may suffer as a result.

Try to Make Distribution Transparent

Relocating either processing or data to improve a system's performance or scalability in the manner discussed above requires some degree of location transparency. We have discussed the potential negative impact of a component's location transparency on efficiency. At the same time, location transparency can aid a system's scalability.

In a distributed setting, scaling up a system will often result in changing the deployment view of the system's architecture. For example, new users may need to be allowed to access the system via new hardware hosts. Processing or data components may need to be added in specific places in the architecture, or they may need to be relocated across hosts, to enable the new users. The outcome will be increased processing and data traffic in the system.

Redeployment may cause significant disturbances to a system. However, if connectors are capable of handling both local and remote interactions in a manner that is transparent to the involved components, then the system can be reconfigured in several ways, such as by redeploying consumer components closer to the needed data sources, replicating remote functionality locally for performance, or off-loading components to capacious remote hosts.

The impact of such adaptations always needs to be assessed carefully before they are effected.

Use Appropriate Architectural Styles

The architectural styles selected for a given system will have a significant influence on that system's scalability. As has been the case with other system properties, even if we do not take into account specific details of the system under development, certain architectural styles will be more appropriate for achieving scalability than others. Thus, for example, publish-subscribe and event-based architectures have been demonstrated to scale to very large numbers of components, users, devices, and data volumes. Other styles typically result in systems that scale well in some dimensions but not in others. For example, interpreter-based systems cannot easily accommodate growing numbers of users, but can usually handle addition of new functional operators. Likewise, pipe-and-filter may be unable to support increasingly large volumes of data, but can support increasing numbers of components, since arbitrarily long pipelines can be constructed.

ADAPTABILITY

Definition. Adaptability is a software system's ability to satisfy new requirements and adjust to new operating conditions during its lifetime.

Adaptability can be manual or automated. A software system's architecture has an impact on either type of adaptability. Chapter 14 is dedicated in its entirety to architectural adaptation, so we will make only brief general observations here.

Software Components and Adaptability

Architecturally relevant adaptability occurs at the level of system components, their interfaces, and their composition. In other words, if adaptation is required entirely within an individual component or connector, that adaptation is not considered to be architectural. Such adaptations can still be effected with the aid of the system's architecture, for example, by replacing an entire component with its newer version. This observation informs several guidelines for designing for adaptability.

Give Each Component a Single, Clearly Defined Purpose

This guideline has been discussed above—twice—in the context of efficiency and complexity. Since architecture-level adaptation occurs at the level of entire components, it is imperative to separate different system concerns into multiple components. This allows the architects and engineers to minimize the amount of degradation the system experiences during adaptation.

Minimize Component Interdependencies

Adapting a complex system is difficult. Each modification may impact multiple parts of the system. For example, Figure 12-5 indicates that modifying any major subsystem of Linux will, in principle, have an effect on every other subsystem. Defining the system's components to have simple interfaces, in a manner that precludes unnecessary interdependencies, can help to control the effects of adaptation.

Avoid Burdening Components with Interaction Responsibilities

Again, this guideline was discussed previously in the context of other NFPs. In the context of adaptability, the objective is to separate the system's functionality and data from interaction.

Separate Processing from Data

Adaptations to the system's processing components should be handled independently of adaptations to its data.

Separate Data from Meta-Data

Changes to the data in a large, long-lived software system will occur regularly. If the data is separated from the meta-data (that is, the data about the data), then in principle each can be adapted independently of the other. Furthermore, the processing components will be able to adapt more easily to the changes in both the data and the meta-data.
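
A small, hypothetical illustration of the payoff: if processing components interpret records through a separately stored schema (the meta-data), then adding a field or changing its units means updating the schema, not every component that touches the data.

    # Meta-data: describes the data; stored and versioned separately.
    schema = {
        "fields": ["altitude", "velocity", "fuel"],
        "units":  {"altitude": "m", "velocity": "m/s", "fuel": "kg"},
    }

    # Data: compact records with no self-description.
    record = [120.5, -3.2, 87.0]

    def as_labeled(record, schema):
        """Processing components interpret data through the meta-data."""
        return dict(zip(schema["fields"], record))

    print(as_labeled(record, schema))
    # {'altitude': 120.5, 'velocity': -3.2, 'fuel': 87.0}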

Software Connectors and Adaptability

Connectors are the key enablers of architectural adaptability. Components should be insulated from their particular context to the greatest extent possible in order to appropriately separate system concerns and maximize their reuse potential. It is the task of connectors to provide the necessary facilities that enable a given component to operate appropriately in its environment. Several guidelines for adaptability stem from this observation.

Give Each Connector a Clearly Defined Responsibility

If a connector is in charge of enabling the interactions of a given type among a specific set of components, it will be easier for architects and engineers to manage the required adaptations to those components or their interactions. However, this requires one additional property of connectors, discussed next.

Make the Connectors Flexible

At the least, connectors must be able to support different numbers of components, and possibly component types. If a connector is unable to do so by itself, composing it with other connectors may produce the desired effect, as elaborated next.

Support Connector Composability

To enable interactions among heterogeneous components, which may be added to a system during run time, connectors must be composable with other connectors. For example, Figure 12-6 shows a connector composed from three separate object request brokers (ORBs)—recall the discussion on CORBA from Chapter 4. In the architecture depicted in Figure 12-6, each component exchanges information only with the connector to which it is directly attached; in turn, the connector will (re)package that information and deliver it to its recipients using one or more middleware technologies—whatever is necessary to achieve the desired communication. Each such middleware-enabled connector exposes the interface expected by its attached components. The connector may change the underlying mechanism for marshaling and delivering messages, but externally appears unchanged.

Such a composite connector has the added advantage that it can also preserve the topological and stylistic constraints of the application. For example, the application in Figure 12-6 was designed according to the C2 style (recall Chapter 4). This means that, for example, Component A and Component B can interact with one another, while Component A and Component C cannot. Using a single ORB to connect the components—even if it were possible—would potentially violate these stylistic constraints.

Figure 12-6. Distributed components interact through a single conceptual connector, which is actually composed from multiple interacting ORBs. Each ORB exports the interfaces expected by its attached components. To do so, the ORB may need to include an adaptor, depicted as a highlighted ORB edge.

Another advantage of such composite connectors is that they are independent of other connectors in the system, and can optimize aspects of communication in a homogeneous setting (for instance, if the interaction between Component A and Component B is local). Maintaining a connector's interface and semantics, coupled with its composability, allows much more flexibility in application development and deployment. For instance, if it were later decided that Component C and Component D should indeed run in the same process, it would be possible to make this change by simply reconfiguring the composite connector. The component code would not have to be changed at all.
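
The essential mechanism can be sketched in a few lines of Python; the transport and component names below are hypothetical stand-ins for the ORBs of Figure 12-6. Components see a single send() interface, while the composite connector routes each message over a local or a remote transport, so co-locating two components later is a connector reconfiguration rather than a component change.

    class LocalTransport:
        """In-process delivery, for co-located components."""
        def deliver(self, recipient, message):
            recipient.receive(message)

    class RemoteTransport:
        """Stand-in for an ORB or other middleware transport."""
        def deliver(self, recipient, message):
            print(f"marshaling {message!r} for remote delivery to {recipient.name}")

    class CompositeConnector:
        """One conceptual connector composed of several transports."""
        def __init__(self):
            self._routes = {}        # component name -> (component, transport)

        def attach(self, component, transport):
            self._routes[component.name] = (component, transport)

        def send(self, recipient_name, message):
            component, transport = self._routes[recipient_name]
            transport.deliver(component, message)

    class Component:
        def __init__(self, name):
            self.name = name
        def receive(self, message):
            print(f"{self.name} received {message!r}")

    connector = CompositeConnector()
    c, d = Component("C"), Component("D")
    connector.attach(c, LocalTransport())
    connector.attach(d, RemoteTransport())
    connector.send("C", "notification")   # delivered in-process
    connector.send("D", "notification")   # marshaled for the network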

Be Aware of Differences Between Direct and Indirect Dependencies

This guideline has been discussed above in the context of scalability. Much of the argument is applicable for adaptability. In principle, direct and explicit dependencies result in more efficient systems, while indirect and implicit dependencies allow the given system to adapt more easily.

Architectural Configurations and Adaptability

Leverage Explicit Connectors

A software system in which the connectors are implicit (that is, in which components are endowed with interaction capabilities) will be difficult to adapt since individual concerns may be distributed across multiple system elements.

Try to Make Distribution Transparent

As in the case of scalability, distribution transparency has its advantages when adaptability is concerned: Making modifications to a system is much easier if components are oblivious to the system's deployment profile. For example, the system or one of its parts may be redeployed without requiring changes to the system's components. Of course, the efficiency impact of such adaptations must be carefully taken into account.

Use Appropriate Architectural Styles

As with other NFPs, architectural patterns and styles can directly impact adaptability. Simply put, certain styles are better suited to support adaptability than others. For example, styles that support event-based interaction, such as publish-subscribe and implicit invocation, naturally and effectively support adaptability. On the other hand, styles that require direct dependencies among the components, such as virtual machines or distributed objects, can hamper adaptability.

DEPENDABILITY

Software dependability researchers Bev Littlewood and Lorenzo Strigini define dependability informally as a collection of system properties that allows one to rely on a system functioning as required (Littlewood and Strigini 2000). Dependability is a composite NFP that encompasses several other, related NFPs: reliability, availability, robustness, fault-tolerance, survivability, safety, and security. Security is discussed in detail in Chapter 13; here, we focus on the design guidelines required to ensure the remaining facets of dependability in a system's software architecture. The following definitions illustrate the interrelated nature of these different facets of dependability. The role of software architecture in ensuring a software system's dependability is then discussed.

Definition. A software system's reliability is the probability that the system will perform its intended functionality under specified design limits, without failure, over a given time period.

Definition. A software system's availability is the probability that the system is operational at a particular time.

Unlike reliability, which is a statistical measure of the system's overall health over a continuous period of time, availability represents the system's health at discrete snapshots in time.

Definition. A software system is robust if it is able to respond adequately to unanticipated run time conditions.

Robustness deals with those run time circumstances that have not been captured in a system's requirements specification. It can be said that responding to conditions specified in the requirements is a matter of system correctness, while responding to all other conditions is a matter of robustness. The adequacy of the response will vary depending on different factors, such as application domain, user, and execution context.

Definition. A software system is fault-tolerant if it is able to respond gracefully to failures at run time.

The failures affecting a system can be outside the system itself, whether caused by hardware malfunctions or system software defects, such as network or device driver failures. Alternatively, the system's own components may fail due to internal defects. What constitutes a graceful reaction to failure will depend on the system context. For example, one system may continue operating in a degraded mode; another one may introduce, possibly only temporarily, new copies of the failed components; yet another may offer the "blue screen of death" to the consternation of its users, requiring a reboot of the entire computer, after which the user might be able to continue with normal operation.

From a software architectural perspective, faults can be classified into the following.

  1. Faults in the system's environment (that is, outside the software architecture).

  2. Faults in components.

  3. Faults in connectors.

  4. Component-connector mismatches.

An NFP closely related to fault-tolerance is survivability.

Definition. Survivability is a software system's ability to resist, recognize, recover from, and adapt to mission-compromising threats.

Survivability must address three basic kinds of threats, cited below.

  1. Attacks, examples of which are intrusions, probes, and denials of service.

  2. Failures, which can be due to system deficiencies or defects in external elements on which the system depends.

  3. Accidents, which are randomly occurring but potentially damaging events such as natural disasters.

Therefore, fault tolerance and survivability are related because the goal of both is to effectively combat threats, and system faults can be viewed as one kind of threat.

It should be noted that the distinctions among the three categories of threats may not matter in the context of recovery from the threats. The key to survivability is to try to recover from these threats as gracefully as possible. In order to make a system survivable, the basic activities that can be performed are the following.

  1. Resistance to threats.

  2. Recognition of threat and extent of damage.

  3. Recovery of full and essential services after a threat.

  4. Adaptation and evolution to reduce the impact of future attacks.

The final facet of dependability that will be discussed here is software system safety. Several definitions of software safety are currently in use, and they usually are variations on the definition provided by software safety expert Nancy Leveson (Leveson 1995). We provide a similar definition.

Definition. Safety denotes the ability of a software system to avoid failures that will result in loss of life, injury, significant damage to property, or destruction of property.

It should be noted that different degrees of property damage and destruction, and even human injury, may be considered acceptable in the context of different systems and/or system usage scenarios. The elaboration of such circumstances is outside the scope of this book, however.

Software Components and Dependability

More dependable software components will, on the average, result in more dependable software systems—though it should be noted that dependable components need not always result in dependable systems, and that highly dependable systems may comprise undependable individual components (recall the "Honey-Baked Ham" sidebar in Chapter 8). Practice has shown that it is seldom possible to develop components that are completely reliable, robust, and fault-tolerant. However, engineers can follow certain practices to help with achieving these properties. In particular, the guidelines below are targeted at the outwardly visible facets of a component.

Carefully Control External Component Interdependencies

Changes in the behavior of a given component, including anomalous behavior and failures, should have a minimal impact on the remaining components in the system. This can be achieved by properly insulating components from one another. One specific guideline is to restrict all intercomponent dependencies to be explicit and only at the level of the components' public interfaces. Recall from the discussion earlier in this chapter that following this guideline may compromise a system's scalability and adaptability. Another guideline is to minimize, or completely disallow, side effects from component operations.

Provide Reflection Capabilities in Components

While a given component in a system may be unable to provide certain desired dependability guarantees, it should be possible to partially mitigate that by enabling querying of the internal state of the component. This will allow other parts of the system to assess the health of the component at times that are deemed important.
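
A hypothetical sketch of such a reflection capability in Python: alongside its functional interface, the component exports a read-only view of its internal health that other architectural elements can poll at the times they deem important.

    import time

    class TelemetryProcessor:
        """A component with a functional interface plus reflection."""
        def __init__(self):
            self._started = time.time()
            self._processed = 0
            self._errors = 0

        def process(self, sample):           # functional interface
            try:
                result = sample * 2          # placeholder for real work
                self._processed += 1
                return result
            except Exception:
                self._errors += 1
                raise

        def health(self):                    # reflection interface
            return {
                "uptime_s": round(time.time() - self._started, 3),
                "processed": self._processed,
                "errors": self._errors,
            }

    component = TelemetryProcessor()
    component.process(21)
    print(component.health())   # e.g. {'uptime_s': 0.001, 'processed': 1, 'errors': 0}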

Provide Suitable Exception Handling Mechanisms

When an individual component fails, the rest of the system may be able to adjust to that failure. In order for that adjustment to happen quickly and gracefully, the failing component should be able to expose the necessary information to the rest of the system. One way of doing this is by providing exception reporting mechanisms via the component's public interface. This will allow the other components and connectors in the system to adjust their operation accordingly.

Specify the Components' Key State Invariants

Architects can explicitly state conditions that must hold at different times during the component's execution. Such invariants will allow the component's users to establish best-case, normative, and worst-case guarantees when interacting with that component. Different actions taken in response to that information will then have the potential to improve the overall system's dependability.

Software Connectors and Dependability

Employ Connectors that Strictly Control Component Dependencies

Unintended or implicit dependencies among a system's components will likely harm a system's dependability. Explicit, first-class connectors can be used to manage all dependencies among system components. If necessary, connectors can completely insulate components from one another. It should be noted that widely used connectors such as shared memory and procedure calls give developers too much leeway in creating component interdependencies. Many distributed systems employ connectors that, by necessity, strictly enforce component boundaries and control remote interactions. However, recall from Chapter 5 that even within a single address space it is possible to leverage connectors, such as event buses, that will enforce any component interaction requirements.

Provide Appropriate Component Interaction Guarantees

In certain scenarios, it is imperative that a component receive the data sent to it, even if that means sending the information multiple times. In other scenarios, the system's hardware resources may be overtaxed to the point that, regardless of other needs, the system cannot afford to have the same data transmitted or processed more than once. In yet other circumstances, the recipient components may end up processing multiple copies of the same event; in that case, the system's state may erroneously change and the results produced may be wrong. This could have disastrous consequences in a safety-critical system: Consider receiving, multiple times, a command to change the pitch, roll, or yaw of an aircraft by a specified value.

In large, distributed systems—especially those composed with heterogeneous, possibly off-the-shelf components—the individual components may, for instance, make different assumptions about each other, the desired interaction profiles, or the state of the hardware resources. In such situations, it is the responsibility of connectors to ensure that all interaction guarantees (for example, at least once, at most once, or exactly once) are provided, and possibly adapted in response to changing circumstances during the system's execution.
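
As one concrete example, sketched below in Python with hypothetical names, a connector can enforce an at-most-once guarantee by tagging every message with a unique identifier and discarding re-sends, so a retransmitted change-of-pitch command is delivered to the recipient component only once.

    class AtMostOnceConnector:
        """Discards duplicate messages before they reach the component."""
        def __init__(self, deliver):
            self._deliver = deliver      # the recipient component's handler
            self._seen = set()           # production systems bound this window

        def send(self, message_id, payload):
            if message_id in self._seen:
                return                   # duplicate: never deliver twice
            self._seen.add(message_id)
            self._deliver(payload)

    connector = AtMostOnceConnector(lambda cmd: print("executing:", cmd))
    connector.send("cmd-17", {"pitch_delta": 1.5})
    connector.send("cmd-17", {"pitch_delta": 1.5})   # retransmission, dropped

An exactly-once guarantee would combine retransmission (to ensure delivery at least once) with this kind of duplicate suppression.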

Support Dependability Techniques via Advanced Connectors

A number of dependability concerns and needs in a system cannot be met simply by constructing solid components, properly composing them, and ensuring that they interact only in the prescribed ways. Large, complex, distributed, embedded, and mobile systems need to respond to many external stimuli, and can suffer failures induced by human users, hardware malfunctions, or interference—whether accidental or malicious—from other software systems.

To deal with such situations, connectors may need to support advanced interaction facilities. Examples include providing replicas of failing or failed components on the fly, run time replacement of components (also referred to as "hot swaps"), and support for multiple versions of the same functionality, for example, to ensure the correctness of calculations. These facilities will often need to be provided such that the system's existing, still correctly functioning components are unaware of them. In other words, connectors should support seamless dependability.

Architectural Configurations and Dependability

Avoid Single Points of Failure

In many large systems, a significant portion of the system may depend on the services provided by a single component or by a small number of components. If those services are critical to the system's mission, the failure of the component or components providing them will significantly or completely incapacitate the system.

There are numerous examples of such systems. For instance, any data-intensive system with a centralized repository cannot afford to lose the repository. Likewise, a spacecraft with a centralized controller will be lost if the controller fails. Yet another example is a swarm of robots relying on a centralized planning engine; the robots may end up roaming aimlessly if the planner malfunctions.

There are several techniques that can be employed to deal with such concerns, including replicating the component in question, clearly separating into multiple components the different concerns it may embody, and employing connectors with advanced capabilities such as those discussed immediately above. At the same time, it should be acknowledged that in some situations it may be impossible to avoid creating a design that contains one or more single points of failure. Regardless of the circumstances (that is, whether the single point of failure can be avoided or not), software architects need to be aware of this issue throughout the design process.

Provide Back-Ups of Critical Functionality and Data

This guideline should be obvious. It also is not universally applicable. For example, it is possible that a lack of system resources or hard real-time constraints effectively prevent back-ups. However, as with one's personal computer or PDA, back-ups are ultimately the best way of ensuring that critical functionality and data are not lost. Different techniques may be used to implement this guideline, including employing advanced connectors such as those discussed above.

Support Nonintrusive System Health Monitoring

It may be too late to try to deal with system malfunctions once they have already occurred. Frequently, however, a system will exhibit anomalous behavior and give hints that something is wrong for some time before it, or a part of it, actually fails. Periodically monitoring the system for certain events that indicate its health status can help engineers to anticipate, and possibly eliminate, any unexpected behavior.

Nonintrusive monitoring can be achieved in a number of different ways. One possibility is to employ a component model that supports the inclusion of explicit monitors at the level of application components. Another, even less intrusive possibility is to include monitoring facilities within the system's connectors. Component containers used in many middleware platforms present another possible target for placing system health monitors.
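
A minimal sketch of the connector-based option, in Python with hypothetical names: the connector records the latency and outcome of every interaction it mediates, while the components on either side remain entirely unaware of the monitor.

    import time

    class MonitoringConnector:
        """Mediates calls while collecting health data as a side effect."""
        def __init__(self, target):
            self._target = target        # the component operation mediated
            self.latencies = []          # polled by a health-assessment element
            self.failures = 0

        def request(self, *args, **kwargs):
            start = time.time()
            try:
                return self._target(*args, **kwargs)
            except Exception:
                self.failures += 1
                raise
            finally:
                self.latencies.append(time.time() - start)

    connector = MonitoringConnector(lambda x: x + 1)
    connector.request(41)
    print(len(connector.latencies), connector.failures)   # 1 0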

Support Dynamic Adaptation

A dynamically adaptable system is able to respond to events at run time; a static system may not. Dynamically adaptable systems are able to support the addition, removal, replacement, and reconnection of both components and connectors while the system is running. In decentralized settings they frequently also support dynamic discovery of available services. A thorough treatment of the role of software architecture in dynamic adaptation is provided in Chapter 14.

It should be noted that dynamic adaptation can harm a system's dependability, at least temporarily. Recall the discussion of mobility in Chapter 10 and, in particular, the diagram in Figure 10-12. In addition to rendering the system at least temporarily undependable during the dynamic adaptation, it is possible that the result of adaptation will be a system whose critical properties, including its dependability, are compromised. This is a reason why the stakeholders of many highly safety-critical systems are leery of dynamically modifying their systems. In other words, while dynamic adaptation can be a very useful tool at an architect's disposal, it should be applied with caution.

END MATTER

Engineering a software system to provide the required functionality is often quite challenging. This is why historically a majority of the software design and implementation techniques have focused on supporting and improving a system's functional capabilities. Ultimately, however, good engineers will usually manage to produce almost any functionality, no matter how complex. What they will still struggle with are the non-functional aspects of their systems.

Recall the discussion of the "better-cheaper-faster — pick any two" software engineering maxim from Chapter 10. It suggests that, so long as the system stakeholders are willing to sacrifice one or more of the system's non-functional characteristics, such as schedule, cost, or performance, developers will be able to produce the desired functional capabilities.

Producing those capabilities along with the desired non-functional system characteristics is significantly more challenging. There is much less guidance available, in particular to software system architects, for how such characteristics are to be "designed into" the system in question. One reason is that many of these characteristics are loosely defined and qualitative in nature. The challenge becomes even more daunting when system stakeholders have to consider multiple, possibly clashing non-functional characteristics simultaneously.

This chapter has distilled a large body of experience and good software engineering practices into a set of guidelines [or "tactics," as they have been referred to elsewhere in literature, for instance, (Rozanski and Woods 2005)] that software architects can follow in achieving specific non-functional properties. The guidelines should work most of the time, but they should be applied with caution. Whenever appropriate, each of the guidelines in this chapter has been coupled with specific admonitions that reflect the difficulties a software architect may encounter in certain development scenarios.

Both the guidelines and the accompanying admonitions should be kept in mind when designing software systems. The reader should also realize that this chapter did not, and could not, provide a complete enumeration of all possible architectural design scenarios, or all possible trade-offs among the choices made in achieving the desired non-functional characteristics. Chapter 13, which follows, is a companion to this chapter, in that it provides in-depth treatment of security, an NFP of tremendous importance in modern software systems. Together, these two chapters are intended to serve as a rich foundation for software designers targeting NFPs.

REVIEW QUESTIONS

  1. Define the following NFPs:

    1. Efficiency

    2. Complexity

    3. Adaptability

    4. Scalability

  2. What are the different dimensions of dependability?

  3. How is each of the above NFPs manifested in software architectures?

  4. How can each of the above NFPs be addressed at the architectural level?

  5. Identify at least three guidelines for achieving each of the above NFPs. Elaborate on the manner in which each of those guidelines supports the NFP in question.

  6. Can multiple NFPs be achieved simultaneously? If so, under what conditions? If not, why not?

EXERCISES

  1. It is often difficult to simultaneously satisfy multiple non-functional properties (NFPs). Discuss the trade-offs between the following NFPs with respect to their impact on a system's architecture. For each trade-off, provide the answer in three parts: Discuss how your architectural choices can help to maximize each of the two individual properties (possibly at the expense of the other property); then discuss architectural choices that can help you maximize both properties in tandem. Make sure to provide your answer at least in terms of the role and specific characteristics of components and connectors that impact the NFPs.

    1. Performance versus complexity

    2. Safety versus efficiency

    3. Reliability versus adaptability

  2. Recall the example Lunar Lander architectures discussed in Chapter 4. Select any two of the architectures from Chapter 4. For each NFP discussed in this chapter, analyze what can and cannot be determined about the selected architectures. Can you recognize any of the NFP design guidelines in either of the two architectures? What is your assessment of the architectures' respective support for the NFP under consideration?

  3. Now consider an ADL model of Lunar Lander from Chapter 6. Does the model aid or hamper your ability to answer the questions from the preceding exercise? Be sure to justify your answer.

  4. Discuss how one of the architectures selected in the two preceding exercises can be adapted to improve:

    1. Efficiency

    2. Scalability

    3. Adaptability

  5. The list of guidelines for achieving the different NFPs provided in this chapter is incomplete. Add at least one additional guideline for each existing NFP.

  6. Devise a set of design guidelines for achieving additional properties, such as:

    1. Heterogeneity

    2. Compositionality

    3. Security

    If necessary, you should locate the definitions and any other necessary explanation of these properties. Note that security is discussed in Chapter 13.

  7. Analyze the pairwise trade-offs between one of the properties you devised in the previous exercise and those introduced in this chapter.

FURTHER READING

Several software engineering textbooks provide useful overviews of non-functional properties (NFPs). Carlo Ghezzi, Mehdi Jazayeri, and Dino Mandrioli (Ghezzi, Jazayeri, and Mandrioli 2003) are particularly thorough in their treatment of NFPs. They divide the NFPs into internal (relevant to architects and developers) and external (relevant to customers and users). Furthermore, they divide NFPs into those relevant to the developed product and those relevant to the development process. The NFPs studied in this chapter are the internal product NFPs.

Ghezzi et al. do not focus on the design guidelines for accomplishing the various NFPs. A recent book by Nick Rozanski and Eoin Woods (Rozanski and Woods 2005) attempts to do just that. The authors identify a set of NFPs relevant to software architects. These include performance, scalability, security, and availability. They also propose a number of guidelines, called tactics, targeted at achieving the NFPs. Several of their tactics are relatively general (such as "capture the availability requirements"), and are not targeted separately at different architectural facets such as components, connectors, and configurations.

A volume on the Future of Software Engineering (FoSE) (Finkelstein 2000), accompanying the proceedings of the 22nd International Conference on Software Engineering (ICSE 2000), provided a set of useful overviews and research roadmaps for several NFPs, including reliability and dependability (Littlewood and Strigini 2000), performance (Pooley 2000), and safety (Lutz 2000). These topics have been revisited in the FoSE volume (Briand and Wolf 2007) accompanying the 29th International Conference on Software Engineering (ICSE 2007) by Michael Lyu on reliability (Lyu 2007), Murray Woodside et al. on performance (Woodside, Franks, and Petriu 2007), and Mats Heimdahl on safety (Heimdahl 2007).

Many of the architectural guidelines advocated in this chapter for accomplishing NFPs emerged over time from general software engineering principles. For example, modularity and separation of concerns were articulated by David Parnas more than thirty years ago (Parnas 1972). More recently, Robert DeLine has argued for a decoupling of a component's "essence" from its "packaging" (DeLine 2001). Robert Allen, David Garlan, and John Ockerbloom have shared very useful experience on the effects of off-the-shelf component integration on a system's NFPs, and on the inherent architectural causes (Garlan, Allen, and Ockerbloom 1995).



[17] In many systems, this may be accomplished by having the components themselves maintain the correspondence between requests and replies, for example, by associating and exchanging special identifiers ("tokens") with the request and reply messages. While such solutions may work in practice, this association is ultimately an interaction issue, and as such belongs in the connector.
