Chapter 13. Security and Trust

The preceding chapter introduced several non-functional properties, such as efficiency, scalability, and dependability, and described architectural design strategies to help achieve those properties. Security is another non-functional property; its critical and growing importance warrants the separate, in-depth treatment this chapter provides. As with many other non-functional properties, it is most effectively addressed while designing a system's architecture.

Consider the example of building architectures introduced in Chapter 1. A building is designed with various structural properties and the owner's requirements in mind. If such requirements and design do not encompass security needs, problems can arise. For instance, if a building has windows or doors that are easy to access from the outside, or its structure prevents the installation of security alarms, the building may be vulnerable to unwanted visitors. If these considerations are addressed during the building's design, however, a secure structure at a reasonable price is achievable.

If the building is not designed from the outset with security in mind, it may still be possible to add external reinforcements to improve security when such demands later occur. For example, thin walls can be reinforced by adding extra layers; doors and windows that represent potential points of entry can be safeguarded using suitable lock mechanisms. In some cases, a building may not be securable by itself but it might be housed within a gated enclosure that can safeguard an entire community against external intruders.

The caveat of adding security afterward, though, is that it is generally more expensive than taking the proper measures from the beginning. Imagine opening up the walls, installing extra wires and cameras, then closing the walls. This is more expensive than envisioning the requirements from the start and designing the building around those requirements. The same is true for software. It is therefore imperative that security be considered and addressed when developing a system's architecture. Using a software architecture-based approach for security allows developers to leverage experience and achieve desired security properties. Software architecture also provides a sound basis for reasoning about security properties.

Note that while external reinforcements can be used to provide a certain degree of security post-development, doing so still requires that the software be designed to allow the addition of such reinforcements without compromising required functionality. This makes reasoning about security at the architectural level even more important. Further, since software systems often go through an extended series of releases as new functionalities are added, a security-based architectural approach can provide guidance through the various software evolution cycles and help ensure that essential security properties continue to be achieved through each release.

Security, as important as it is, is only a part of the overall system. It has to be balanced with other non-functional properties. For example, encryption is generally used in software systems to keep data secret, but using encryption can be computationally expensive, and the performance of the system might be adversely impacted by such operations. Even more importantly, security, along with other non-functional properties, must be balanced against a system's general functional requirements. For example, a browser that displays and executes all types of content would provide the richest experience for users, but such indiscriminate execution is almost doomed to bring malicious software into a user's computer. When facing such choices, uninformed stakeholders, such as end-users and product-planning teams, might choose functionality over other critical properties. A software architecture approach can ameliorate this condition by providing the necessary abstraction and tools that will help stakeholders to make sound decisions.

The chapter begins with an introduction to the different aspects of security, including confidentiality, integrity, and availability. Section 13.2 discusses several general design principles for security. These principles have been developed by theoreticians and practitioners over the years and have been applied in many systems. We illustrate how these principles can be applied to software architectures. In some contrast to the preceding chapter, however, the design guidance provided is not as straightforward. Properties such as efficiency and dependability have been important to software designers since the beginning of software engineering, hence it is not surprising that the preceding chapter was replete with crisp techniques for achieving them. Security is of more recent prominence, and is arguably more subtle, so a broad understanding and application of general design principles is needed. In Section 13.3, a technique for architectural access control is presented that complements other design techniques with capabilities to specify and regulate intercomponent communication. The chapter concludes, in Section 13.4, with an architectural approach for constructing trust-enabled decentralized applications. These types of applications play an important role in the emerging collaborative Web world, where autonomous users communicate and collaborate in a community environment and require trust management to protect themselves from malicious users.

Outline of Chapter 13

  • 13. Security and Trust

    • 13.1 Security

    • 13.2 Design Principles

    • 13.3 Architectural Access Control

      • 13.3.1 Access Control Models

      • 13.3.2 Connector-centric Architectural Access Control

    • 13.4 Trust Management

      • 13.4.1 Trust

      • 13.4.2 Trust Model

      • 13.4.3 Reputation-Based Systems

      • 13.4.4 Architectural Approach to Decentralized Trust Management

    • 13.5 End Matter

    • 13.6 Review Questions

    • 13.7 Exercises

    • 13.8 Further Reading

SECURITY

The National Institute of Standards and Technology defines computer security as, "The protection afforded to an automated information system in order to attain the applicable objectives of preserving the integrity, availability and confidentiality of information system resources (includes hardware, software, firmware, information/data, and telecommunications)" (Guttman and Roback 1995). According to this definition, there are three main aspects of security: confidentiality, integrity, and availability. We briefly introduce them here; for a comprehensive treatment, see the references in the "Further Reading" section at the end of the chapter.

Confidentiality

Preserving the confidentiality of information means preventing unauthorized parties from accessing the information or perhaps even being aware of the existence of the information. Confidentiality is also referred to as secrecy.

This concept is as applicable in the domain of building architectures as it is to computer systems. For example, in a large office complex having two buildings, if the office management does not want others to know when certain items are moved between the two buildings, the management could build a covered passage between them so that others outside the building are unable to see what is moved within the passage. Even better, the passage could be built underground so people would not even be aware of the existence of the passage.

Applying this concept to software architectures, software systems should take proper measures while exchanging information to protect confidential information from being intercepted by rogue parties. Likewise, systems should store sensitive data in a secure way so unauthorized users cannot discover the content or even the existence of such data.

Integrity

Maintaining the integrity of information means that only authorized parties can manipulate the information and do so only in authorized ways. For example, in a building protected by a door that can be opened by entering an access code on a numeric panel, only the owner should be able to change the code. Moreover, when the code is changed, it should only be changeable to another numeric code—not to an alphanumeric one.

Analogous constructs exist at the programming language level, such as access modifiers of member fields and functions. For example, in Java, member fields and functions are given access levels of private, package (the default), protected, or public, such that only certain parts of the class hierarchy can access those members. A private field of a class can only be accessed by the methods of the same class, whereas a package-level method can be called by all classes in the same package. Likewise, in C++, const pointers can be defined that enforce the condition that data accessed through those pointers cannot be changed.
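
To make these access levels concrete, consider the following minimal Java sketch; the class and member names are purely illustrative:

    package bank;

    // Illustrative Java access levels; all names are hypothetical.
    public class Account {
        private long balance;        // accessible only by methods of Account
        long auditTag;               // package level (the default): visible within package bank
        protected String ownerName;  // visible to subclasses and within the package
        public String accountId;     // visible to all classes

        // A public operation mediates all changes to the private field.
        public void deposit(long amount) {
            if (amount > 0) {
                balance += amount;
            }
        }
    }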

At the software component and architectural level, similar protective mechanisms can be applied to the interfaces of components. For example, if a particular interface of a component changes the most critical information belonging to that component, invocation of that interface should be ensured to be limited to only authorized components. Therefore, such an interface should be designated and separated from the others and receive more scrutiny during design.

To establish the identity of a user, and hence to determine whether a user is authorized, an authentication process is used to verify that the user is really who the user claims to be. The most common form of authentication is the user name/password pair: If a user can correctly supply the password associated with a user name, then the user is authenticated as that specific user. This is a form of authentication that relies on what a user knows; other forms of authentication include checking who the user is (for example, by scanning the iris of the user and comparing it to a set of authorized irises) and what the user has (for example, a security token that can generate the correct number at the correct time).

Depending on security requirements, different levels of authentication may be used for software components and connectors. For example, in the Microsoft DCOM middleware technology, authentication may be bypassed completely. However, if needed, authentication can be performed at the beginning of a communication session, for each method, or even for each communication packet. The most secure authentication level is, of course, the packet authentication level, but it is also the most computationally expensive. The authentication requirements of a communicating client and server determine what the DCOM middleware connector must do. In particular, the middleware connector ensures that both parties operate on the chosen authentication level.

To deter potential intruders, a software system can maintain an audit trail that records important historical information. By analogy, the security guard of a gated housing community can record every visitor's name, license plate, and visiting time, so any security incident could be correlated to possible suspects. Likewise, security cameras can be deployed to record the activities of residents. (Of course, such measures have to be balanced against privacy requirements!) Correspondingly, in the case of software components, audit trails can be maintained internally; that is, a component may log requests and responses from an authenticated user and then produce an audit trail of those requests and responses at a later time. Connectors may also be used to log component invocations that pass through the connector. Further, since the architecture provides a systemwide view of the configuration of components and connectors, an audit trail can be captured recording patterns of access through the system.
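
As a sketch of how a connector might log component invocations that pass through it, consider the following hedged Java example; the Service interface and class names are assumptions for illustration, not part of any particular middleware:

    import java.time.Instant;
    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical service interface; any component behind the connector implements it.
    interface Service {
        String handle(String request);
    }

    // A logging connector: forwards calls to the wrapped component and records an audit trail.
    class AuditingConnector implements Service {
        private final Service target;
        private final List<String> auditTrail = new ArrayList<>();

        AuditingConnector(Service target) {
            this.target = target;
        }

        @Override
        public String handle(String request) {
            String response = target.handle(request);
            auditTrail.add(Instant.now() + " request=" + request + " response=" + response);
            return response;
        }

        List<String> auditTrail() {
            return auditTrail;
        }
    }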

Availability

Resources are available if they are accessible by authorized parties on all appropriate occasions. In contrast, if a system cannot deliver its services to its authorized users because of the activities of malicious users, then its services are unavailable; it is said to be the victim of a denial of service (DoS) attack.

Applications that are distributed across a network, such as the Internet, may be susceptible to a distributed denial of service (DDoS) attack. Such attacks try to bring down the many distributed elements of an application. For example, the Domain Name System (DNS), which is in charge of resolving domain names to IP addresses and is distributed across different levels of operation, has occasionally been the target of such attacks. When such attacks succeed, access to Web resources, for example, may be denied.

DESIGN PRINCIPLES

Security aspects of software systems should be considered from a project's start. During system conception the security requirements should be identified and corresponding security measures designed. Patching security problems after a system is built can be prohibitively expensive, if not technically infeasible. Security requirements also evolve with other requirements. Thus, an architect should anticipate possible changes and design flexibility into the security architecture. The architecture of the system is the place for software developers to identify the security requirements, design the security solutions, and design to accommodate future changes.

This section highlights several design principles that help guide the design of secure software. These principles emerged from the research community and have since been applied in many commercial software systems. Such principles are by no means sufficient by themselves for the design of secure software, but do play an important role in guiding designers and architects through possible alternatives and choosing an appropriate solution.

Principle of Least Privilege

The principle of least privilege states that a subject should be given only those privileges it needs to complete its task. The rationale is that even if a subject is compromised, the attacker has access only to a limited set of privileges, which limits the damage to certain specific parts of the system.

Currently, many less-informed Windows users browse the Internet using an account with many administrative privileges. This is not only unnecessary for the simple task of browsing the Web but is potentially dangerous since it opens paths for malicious software to take control of the user's computer. This practice owes its origin to early versions of Windows that were generally shipped with only one account, the administrator account. Based on the principle of least privilege, a minimally privileged account should be used for daily simple activities such as browsing and e-mail. Embodying this principle, Internet Explorer 7, shipped in late 2006, can lower its privileges during execution to below those of the launching user's privileges.

Software architecture makes it easier to determine the least privileges components should have, since explicit models of the architecture enable analysis of communication and control paths to determine the privileges each component actually needs. A component should not be given more privileges than are necessary for it to interact with other appropriate components.

Principle of Fail-Safe Defaults

The principle of fail-safe defaults states that unless a subject is granted explicit access to an object, it should be denied access to that object. This scheme might deny some safe requests that otherwise would have been granted, but it assures that each granted access is a safe access.

A simple illustration of this principle is the case of Internet browsers requesting a resource ("GETing a URL") on behalf of a user. Fetching and displaying a resource is a form of granting permissions based on the user-selected URL. Since a URL can be expressed and encoded in different forms (such as using absolute paths versus relative paths), it is not always straightforward to list and reject all invalid URLs. Thus, this rule suggests that accesses to all URLs should be denied unless their form can be verified as belonging to a known, valid kind. Based on this principle, a connector connecting two components should only allow the specific communications that satisfy some approval criterion, rejecting all others.
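
A minimal Java sketch of this default-deny stance follows; the allowlist and the validity criterion are illustrative assumptions:

    import java.net.URI;
    import java.util.Set;

    // Fail-safe default: deny unless the URL can be positively verified.
    class UrlGate {
        private static final Set<String> ALLOWED_SCHEMES = Set.of("http", "https");
        private final Set<String> allowedHosts;

        UrlGate(Set<String> allowedHosts) {
            this.allowedHosts = allowedHosts;
        }

        boolean isAllowed(String url) {
            try {
                URI uri = new URI(url).normalize();   // canonicalize before checking
                return uri.getScheme() != null
                        && ALLOWED_SCHEMES.contains(uri.getScheme().toLowerCase())
                        && uri.getHost() != null
                        && allowedHosts.contains(uri.getHost().toLowerCase());
            } catch (Exception e) {
                return false;   // anything unparseable is denied by default
            }
        }
    }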

Principle of Economy of Mechanism

The principle of economy of mechanism states that security mechanisms should be as simple as possible; this is also referred to as the KISS principle (Keep it Simple and Small). While this rule generally is useful with any type of design, it is especially important for security systems. Complexity is the enemy of security because complex interactions make verifying the security of software systems more difficult and hence can lead to security breaches.

One way to apply this principle is to isolate, consolidate, and minimize security controls. Redundant security mechanisms should be simplified. For example, in Internet Explorer prior to Version 7, there were multiple places where URLs were analyzed and results of these analyses were used to make decisions. Such redundancy and inconsistency led to security vulnerabilities. This issue was corrected in Internet Explorer Version 7 by centralizing the handling of URLs. An architecture description provides a suitable abstraction to apply this principle more generally. It allows architects to analyze the locations of security controls, identify potential redundancy, and evaluate alternatives to choose a suitable place for the control.

Principle of Complete Mediation

The principle of complete mediation requires that all accesses to entities be checked to ensure that they are allowed, irrespective of who is accessing what. The check should also ensure that the attempted access does not violate any security properties.

Applying this principle to a software system requires all communication to be checked thoroughly. Such an inspection is greatly facilitated through the systematic view of the system provided by an accurate architectural model. A security architect can evaluate each possible interaction among the components in all types of configurations to make sure that none of the interactions and configurations violate the intended security rules.

The principle of economy of mechanism helps achieve complete mediation. When there are only a limited number of security control mechanisms, it is easier to apply security control and to verify that each access actually goes through these mechanisms.

Principle of Open Design

The principle of open design states that the security of a mechanism should not depend upon the secrecy of its design or implementation. While secrecy is a desired security property, secrecy itself should not be used as a mechanism. A secure design should not rely on the fact that an intruder does not know the internal operations of the software system. While keeping the internals secret might initially make it more difficult for an attacker to break into a system, simply relying on such secrecy is unreliable. It is inevitable that such information will be discovered by malicious users in a world where many different types of information and computational resources are available to attackers. Trivially, employees could leak the secret, either intentionally or unintentionally. Among other options, the attacker can also try clever reverse engineering or simple brute force attacks.

Revealing the internals of a system can actually increase its security. In early stages of design other security reviewers can inspect and evaluate the design and provide insights. Further, during its operation and evolution phases, the system can be studied and refined accordingly to make it more secure. For instance, a system's security should not rely upon a software connector implementing a proprietary (secret) communication protocol. The likelihood of that idiosyncratic communication protocol having flaws is very high. Rather, using a protocol that has passed extensive external scrutiny is far more likely to provide the security desired.

Principle of Separation of Privilege

The principle of separation of privilege states that a system should not grant permission based on a single condition. It suggests that sensitive operations should require the cooperation of more than one key party. For example, a purchase order request generally should not be approvable solely by the requestor; otherwise, an unethical employee could keep requesting and approving inappropriate purchase orders without immediate detection by others.

Software architecture descriptions facilitate the checking of this principle. If an architect discovers that some component possesses multiple privileges that should be separated, the architect should redesign the system and the component so that the privileges are partitioned amongst multiple components.

Principle of Least Common Mechanism

The principle of least common mechanism states that mechanisms used to access separate resources should not be shared. The objective of the principle is to avoid the situation where errors or compromises of the mechanism while accessing one resource allow compromise of all resources accessible by the mechanism. For instance, use of separate machines, separate networks, or virtual machines can help fulfill this principle and avoid cross-contamination.

In the context of software architectures, this implies the need for careful scrutiny when certain software architectural styles are used. For example, in the case of the blackboard style, where all data is maintained on the shared blackboard and access to it is mediated by the blackboard component, the architect must ensure that the existence of the shared store and common mechanism does not introduce unintended security problems.

Principle of Psychological Acceptability

The principle of psychological acceptability states that security mechanisms should not make the resource more difficult to access for legitimate users than if the security mechanisms were not present. Likewise, the human interface of security mechanisms should be designed to match the mental model of the users and should be easily usable. Otherwise, the users will either attempt to bypass the security measure because it is too difficult to use, or use it incorrectly because the user interface is error-prone.

This principle did not receive much attention in the past, but now that software has become a mainstream phenomenon and most computer users are not technically savvy, it is increasingly important to design security mechanisms keeping users' psychological acceptability in mind.

By analogy, a building may have several security capabilities to safeguard it, such as specially designed doors and security alarms. Yet if the building owner does not use them because they are too cumbersome or error-prone, then essentially the building becomes as vulnerable to potential threats as if the safeguards did not exist. With regard to software systems, an application may support security techniques such as digital authentication and cryptography, but if the end users do not use those techniques because they do not understand the mechanisms or cannot use the mechanisms correctly, the resulting system may become vulnerable to security attacks such as impersonation and repudiation.

Principle of Defense in Depth

The principle of defense in depth states that a system should have multiple defensive countermeasures to discourage potential attackers. Since an attacker must break through each of these countermeasures, multiple layers of defense increase the likelihood of identifying and preventing an attack before it succeeds.

This principle requires each component in a path that leads to a critical component to implement proper security measures in its own context. This ensures that the security of the whole system will not be violated just because of one component's failure to implement proper security control.

A good example is the way Microsoft Internet Information Services (IIS) Version 6 (a Web server) handles WebDAV requests. Through a re-architecting effort that utilized the underlying support provided by the operating system and applied appropriate security measures at multiple points along the access path, IIS became a far more secure system than its previous versions. The different mechanisms applied by different components along the WebDAV access path are shown in Figure 13-1.

Figure 13-1. Security for Microsoft IIS. Table data from Table 1 in (Wing 2003) © IEEE 2003.

This principle does not contradict the principle of economy of mechanism because it does not duplicate identical security checks, or worse, implement similar but inconsistent checks. Instead, each component provides unique security safeguards that are most appropriate in its local context and thus helps to collectively form a more secure system.

ARCHITECTURAL ACCESS CONTROL

Having introduced the design principles for building secure software, we now present one technique, architectural access control, to demonstrate how software architects can follow the above-described principles in designing secure software systems. We define the basic access control models in security, illustrate how these models can be applied during architectural design, introduce software tools that facilitate the utilization of these models, and, through examples, show how these concepts and techniques can be practiced.

Access Control Models

The most basic security mechanism used to enforce secure access is a reference monitor. A reference monitor controls access to protected resources and decides whether access should be granted or denied. The reference monitor must intercept every possible access from external subjects to the secured resources and ensure that the access does not violate any policy. Widely accepted practices require a reference monitor to be tamper-proof, non-bypassable, and small. A reference monitor should be tamper-proof so that it cannot be altered. It should be non-bypassable so that each access is mediated by the reference monitor. It should be small so that it can be thoroughly verified.
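
The following minimal Java sketch captures the reference monitor idea: one small, non-bypassable check mediates every access. All names are illustrative:

    // Minimal reference monitor sketch: every access is mediated by one small check.
    interface ReferenceMonitor {
        boolean isAllowed(String subject, String action, String resource);
    }

    class GuardedResource {
        private final ReferenceMonitor monitor;
        private final String name;

        GuardedResource(ReferenceMonitor monitor, String name) {
            this.monitor = monitor;
            this.name = name;
        }

        // Non-bypassable in this sketch: the only way to read the resource is through this method.
        String read(String subject) {
            if (!monitor.isAllowed(subject, "read", name)) {
                throw new SecurityException(subject + " may not read " + name);
            }
            return "contents of " + name;
        }
    }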

Two dominant types of access control models are discretionary access control (DAC) models and mandatory access control (MAC) models. In a discretionary model, access is based on the identity of the requestor, the accessed resource, and whether the requestor has permission to access the resource. This permission can be granted or revoked at the resource owner's discretion. In contrast, in a mandatory model, the access decision is made according to a policy specified by a central authority.

Classic Discretionary Access Control

The Access Matrix Model is the most commonly used discretionary access control model. It was first proposed by Butler Lampson (Lampson 1974) and later formalized by Michael Harrison, Walter Ruzzo, and Jeffrey Ullman (Harrison, Ruzzo, and Ullman 1976). In this model, a system contains a set of subjects (also called principals) that have privileges (also called permissions) and a set of objects on which these privileges can be exercised. An access matrix specifies the privileges a subject has on a particular object. The rows of the matrix correspond to the subjects, the columns correspond to the objects, and each cell lists the allowed privileges that the subject has on the object. The access matrix can be implemented directly, resulting in an authorization table. More commonly, it is implemented as an access control list (ACL), where the matrix is stored by column: each object has one column that specifies the privileges each subject has over the object. A less common implementation is a capability system, where the access matrix is stored by row: each subject has a row that specifies the privileges (capabilities) that the subject has over all objects.
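
As a hedged sketch, the ACL organization of the access matrix might look like the following Java fragment, with plain strings standing in for subjects, objects, and privileges:

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    // Access matrix stored by column: an access control list per object.
    class AccessControlList {
        // object -> (subject -> privileges)
        private final Map<String, Map<String, Set<String>>> acl = new HashMap<>();

        void grant(String subject, String object, String privilege) {
            acl.computeIfAbsent(object, o -> new HashMap<>())
               .computeIfAbsent(subject, s -> new HashSet<>())
               .add(privilege);
        }

        void revoke(String subject, String object, String privilege) {
            Map<String, Set<String>> column = acl.get(object);
            if (column != null && column.containsKey(subject)) {
                column.get(subject).remove(privilege);
            }
        }

        boolean check(String subject, String object, String privilege) {
            return acl.getOrDefault(object, Map.of())
                      .getOrDefault(subject, Set.of())
                      .contains(privilege);
        }
    }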

Role-Based Access Control

A role-based access control (RBAC) model is a more recent extension of the classic access control model. In this model, an extra level of indirection, called a role, is introduced. Roles become the entities that are authorized with permissions. Instead of authorizing a user's access to an object directly, the authorization is expressed as a role's permissions to an object, and the user can be assigned to the corresponding role. RBAC allows roles to form a hierarchy. In such a hierarchical RBAC model, a senior role can inherit from a junior role. Every user that takes the senior role can also take the junior role, thus obtaining all the permissions associated with the junior role. The RBAC model thus eases management of access control in large-scale organizations. Instead of granting and revoking permissions individually for many users, all relevant users can be assigned a single role, and the permissions can be granted to and revoked from this role. Role-based access control also allows a clear specification of roles that cannot be held simultaneously by a single user.
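
A minimal Java sketch of hierarchical RBAC follows; the data structures are illustrative, and the authorization check walks the role hierarchy so that a senior role obtains its juniors' permissions:

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    // Hierarchical RBAC sketch: permissions attach to roles; senior roles inherit from juniors.
    class Rbac {
        private final Map<String, Set<String>> userRoles = new HashMap<>();
        private final Map<String, Set<String>> rolePermissions = new HashMap<>();
        private final Map<String, Set<String>> juniorsOf = new HashMap<>(); // senior -> junior roles

        void assignRole(String user, String role) {
            userRoles.computeIfAbsent(user, u -> new HashSet<>()).add(role);
        }

        void grantPermission(String role, String permission) {
            rolePermissions.computeIfAbsent(role, r -> new HashSet<>()).add(permission);
        }

        void addInheritance(String senior, String junior) {
            juniorsOf.computeIfAbsent(senior, r -> new HashSet<>()).add(junior);
        }

        boolean isAuthorized(String user, String permission) {
            // Walk the hierarchy: a senior role carries all of its junior roles' permissions.
            Deque<String> toVisit = new ArrayDeque<>(userRoles.getOrDefault(user, Set.of()));
            Set<String> seen = new HashSet<>();
            while (!toVisit.isEmpty()) {
                String role = toVisit.pop();
                if (!seen.add(role)) continue;
                if (rolePermissions.getOrDefault(role, Set.of()).contains(permission)) return true;
                toVisit.addAll(juniorsOf.getOrDefault(role, Set.of()));
            }
            return false;
        }
    }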

Mandatory Access Control

Mandatory access control models are less common and more stringent than discretionary models. They can prevent both direct and indirect inappropriate access to a resource. The most common types of mandatory models work in a multilevel security (MLS) environment, which is typical in a military setting. In that environment, each subject (denoting a user) and each object are assigned a security label. These labels have a dominance relationship between them. For example, the top-secret label dominates the classified information label. A subject can only access information whose label is dominated by the label of the subject. Thus, a subject with only classified information clearance cannot access top secret information, but a subject with top secret clearance is able to access content that is labeled classified information.
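
The dominance check can be sketched in a few lines of Java. Real MLS labels typically form a lattice of levels and category sets; this sketch assumes a simple total order for illustration:

    // Multilevel security sketch: a totally ordered set of labels with a dominance check.
    enum SecurityLabel {
        UNCLASSIFIED, CLASSIFIED, SECRET, TOP_SECRET;

        // A label dominates another if it is at least as high in the ordering.
        boolean dominates(SecurityLabel other) {
            return this.ordinal() >= other.ordinal();
        }
    }

    class MlsMonitor {
        // Read access is granted only if the subject's label dominates the object's label.
        static boolean canRead(SecurityLabel subject, SecurityLabel object) {
            return subject.dominates(object);
        }
    }

Here MlsMonitor.canRead(SecurityLabel.TOP_SECRET, SecurityLabel.CLASSIFIED) yields true, while the reverse check yields false.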

Connector-Centric Architectural Access Control

This section presents a connector-centric approach that describes one way in which the above-described access control models can be applied and enforced at the architectural level. Specifically, we describe how an architectural description can be extended to model security and how the resultant description can be checked to examine whether the architecture successfully addresses the security needs of the system.

Basic Concepts

The core concepts that are necessary to model access control at the architecture level are subject, principal, resource, privilege, safeguard, and policy.

Subject. A subject is the user on whose behalf a piece of software executes. The concept of subject is key in security, but is typically missing from software architectural models. Many software architectures assume that (a) all of its components and connectors execute under the same subject, (b) this subject can be determined at design-time, (c) the subject generally will not change during run time, either inadvertently or intentionally, and (d) even if there is a change, it will have no impact on the software architecture. As a result, there is typically no modeling facility to capture the allowed subjects of architectural components and connectors. Consequently, the allowed subjects cannot be checked against actual subjects at execution time to ensure security conformance. In order to address these needs for architectural access control, basic component and connector constructs must be extended with the subject for which they perform, thus enabling architectural design and analysis based on different security subjects.

Principal. A subject can take on multiple principals. Essentially, principals encapsulate the credentials that a subject possesses to acquire permissions. There are different types of credentials. In the classic access control model, the principal is synonymous with the subject and directly denotes the identity of the subject. But there exist other types of principals that provide the indirection and abstraction necessary for more advanced access control models. In a role-based access control model, each principal can denote one role that the user adopts. The results of accessing resources will vary depending on the different principals a subject possesses.

Resource. A resource is an entity for which access should be protected. Example resources and access controls on them are files that should be read-only, password databases that should only be modified by administrators, and ports that should only be opened by the root user. Traditionally, resources are passive and accessed by active software components operating for different subjects. However, in the case of software architecture, resources can also be active. Specifically, software components and connectors may also be considered resources, access to which should be protected. Such an active view is lacking in traditional architectural modeling. Explicitly enabling this view can give architects more analysis and design power to improve security assurance.

Permission, Privilege, and Safeguard. Permissions describe operations on a resource that a component may perform. A privilege describes what permissions a component possesses depending upon the executing subject. Privilege is an important security concept that is missing from traditional architecture description languages. Most current modeling approaches take a maximum privilege route wherein a component's interfaces list all the privileges that the component could possibly need. This can become a source of privilege escalation vulnerabilities, which arise when a less privileged component is given more privileges than it properly should be granted in a particular usage context. A more disciplined modeling of privileges is therefore needed to reduce such vulnerabilities.

There are two types of privileges corresponding to the two types of resources. The first type handles passive resources and enumerates, for instance, which subject has read/write access to which files. The second type deals with active resources. These privileges include architecturally important privileges such as instantiation and destruction of architectural elements, connection of components with connectors, execution through message routing or procedure invocation, and reading and writing architecturally critical information. These privileges are pivotal in ensuring secure execution of software systems.

A notion corresponding to privilege is safeguard, which describes conditions that are required to access the interfaces of protected components and connectors. A safeguard attached to a component or a connector specifies the privileges that other components and connectors must possess before they can access the protected component or connector.

Policy. A policy ties together the concepts above. It specifies what privileges a subject, with a given set of principals, should have in order to access resources that are protected by safeguards. It is the foundation needed by the architectural elements to make access control decisions. Components and connectors consult the policy to decide whether an architectural access should be granted or denied.

The Central Role of Architectural Connectors

Architectural access control is centered on connectors because connectors propagate privileges that are necessary for access control decisions. They regulate communication between components and can also support secure message routing.

Components: Supply Security Contract. A security contract specifies the privileges and safeguards of an architectural element.

In the ensuing discussion, for purposes of specific illustration, we will utilize and refer to modeling architectures using the xADL language (Dashofy, van der Hoek, and Taylor 2005), as presented in Chapter 6. For component types, the above modeling constructs are modeled as extensions to the base xADL types. The extended security modeling constructs describe the subject the component type acts for, the principals this component type can take, and the privileges the component type possesses.

The base xADL component type supplies interface signatures that describe the basic functionality of components of this type. These signatures become the active resources that should be protected. Thus, each interface signature is augmented with safeguards that specify the necessary privileges an accessing component must possess before the interfaces can be accessed.

Connectors: Regulate and Enforce Contract. Connectors play a key role in regulating and enforcing the security contract specified by components. They can determine the subjects for which the connected components are executing. For example, in a normal SSL (secure socket layer) connector, the server authenticates itself to the client, thus the client knows the executing subject of the server. A stronger SSL connector can also require client authentication, thus both the server component and the client component know the executing subjects of each other.

Connectors also determine whether components have sufficient privileges to communicate through the connectors. For example, a connector can use the information about the privileges of connected components to decide whether a component executing under a certain subject can deliver a request to the serving component. This regulation is subject to the policy specification of the connector. The recent version of DCOM, for example, introduces such regulation on local and remote connections.

Connectors can also potentially serve to provide secure interaction between insecure components. Since many components in component-based software engineering can only be used "as is" and many of them do not have corresponding security descriptions, a connector is a suitable place to assure appropriate security. A connector decides which communications are secure and should be allowed, which are dangerous and should be rejected, and which are potentially insecure and require close monitoring.

A Secure Architecture Description Language: Secure xADL

Secure xADL is a software architecture description language that describes security properties of a software architecture. Secure xADL combines the xADL language with the architectural access control concepts defined in the preceding paragraphs. Figure 13-2 depicts the core syntax of Secure xADL. The central construct is SecurityPropertyType, which is a collection of the subject, the principals, the privileges, and the policies of an architectural element. The SecurityPropertyType can be attached to component and connector types in xADL. Figure 13-2 illustrates that it is attached to a connector type to make a secure connector type. The SecurityPropertyType can also be attached to components and connectors, making them secure components and connectors. Finally, the SecurityPropertyType can also be attached to the specifications of subarchitectures and the description of the global software architecture.

An access control policy describes what access control requests should be permitted or denied. The policies for Secure xADL are embedded in the xADL syntax and written with the eXtensible Access Control Markup Language (XACML) (OASIS 2005). XACML is an open standard from OASIS (Organization for the Advancement of Structured Information Standards) to describe access control policies for different types of applications. It is utilized in an environment where a policy enforcement point (PEP) asks a policy decision point (PDP) whether a request, expressed in XACML, should be permitted. The PDP consults its policy, also expressed in XACML, and makes a decision. The decision can be one of the following: permit, deny, not applicable (when the PDP cannot find a policy that clearly gives a permit or a deny answer), or indeterminate (when the PDP encounters other errors).

Figure 13-2. Secure xADL schema.

The core XACML is based on the classic discretionary access control model, in which a request by a subject to perform an action on an object is permitted or denied. In XACML, an object is termed a resource. Syntactically, a PDP has a PolicySet, which consists of a set of Policy elements. Each Policy in turn consists of a set of Rule elements. Each Rule decides whether a request from a subject to perform an action on a resource should be permitted or denied. When a PDP receives a request that contains attributes of the requesting subject, action, and resource, it tries to find a matching Rule, whose attributes match those of the request, from the Policy and PolicySet, and uses the matching rule to decide whether to permit or deny access to the resource.
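
The decision structure can be approximated in plain Java as follows. This is a hedged model of the concepts only, not the actual XACML XML syntax or any XACML library API; the indeterminate outcome is omitted for brevity:

    import java.util.List;
    import java.util.Optional;

    // Plain-Java model of the XACML decision structure described above (not the XML syntax).
    enum Decision { PERMIT, DENY, NOT_APPLICABLE }

    record Request(String subject, String action, String resource) {}

    record Rule(String subject, String action, String resource, Decision effect) {
        boolean matches(Request r) {
            return subject.equals(r.subject())
                    && action.equals(r.action())
                    && resource.equals(r.resource());
        }
    }

    class PolicyDecisionPoint {
        private final List<Rule> rules;   // a flattened PolicySet -> Policy -> Rule hierarchy

        PolicyDecisionPoint(List<Rule> rules) {
            this.rules = rules;
        }

        Decision evaluate(Request request) {
            Optional<Rule> match = rules.stream().filter(r -> r.matches(request)).findFirst();
            // NOT_APPLICABLE when no rule clearly permits or denies the request.
            return match.map(Rule::effect).orElse(Decision.NOT_APPLICABLE);
        }
    }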

An Algorithm to Check Architectural Access Control

In xADL, each component and connector has a set of interfaces that represent externally accessible functionalities. An interface can be either an incoming interface, denoting functionality the element provides, or an outgoing interface, denoting functionality that the element requires. Each incoming interface can be protected by a set of safeguards that specify the permissions that components or connectors must possess before they can access that interface. Each outgoing interface can also possess a set of privileges that is generally the same as those of the owning element, that is, the privileges of the element having that outgoing interface.

The interfaces are connected to form a complete architecture topology. A pair of connected interfaces has one outgoing interface and one incoming interface. Such a connection specifies that the element with the outgoing interface accesses the element at the incoming interface. Each such connection defines an architectural access. For example, in the C2 architecture style, a component sends a notification from its bottom interface to a top interface of a connector if the component has sufficient privileges. Architectural access is not limited to direct connections between interfaces. Two components could be connected through a connector. Thus, a meaningful architectural access might involve two components that only indirectly communicate through a connector.

At the architecture level, the decision of concern is whether an architectural access in a software architecture description should be granted or denied. More precisely, given a software architecture description written in Secure xADL, for a pair of components (A, B), should A be allowed to access B? Answering this question can help an architect design secure software from two different perspectives. First, the answer helps the architect decide whether the given architecture allows intended access. If some access that is intended by the architect is not allowed by the description, the description should be changed to accommodate the access. Second, the answer can help the architect decide whether there are architectural vulnerabilities that introduce undesired access. If some undesired access is allowed, then the architect must modify the architecture and architectural description to eliminate such vulnerabilities.

From an architectural modeling viewpoint, the security-related decisions made by components and connectors might be based on factors, or contexts, other than the decision maker and the protected resource. The four most common types of contexts that can affect access control decisions are the neighboring components and connectors, the type of components and connectors, the subarchitecture containing components and connectors, and the global architecture.

Given knowledge of the executing subjects, an algorithm can be used to decide whether the outgoing interface of an accessing component carries sufficient privileges to satisfy the safeguards of the incoming interface of an accessed component. The accessing component can acquire privileges from multiple sources. The component may itself possess some privileges. It can also get privileges from its type, the containing subarchitecture, and the complete architectural model. Further, privileges can also propagate to the accessing component through connected components and connectors, subject to the privilege propagation capability of the connectors. The accessed components can acquire safeguards from similar sources. One notable difference in acquiring safeguards is that this process does not involve the connected element context, and thus does not go through a propagation process.

The simplest approach to deciding whether to allow such an access is to check whether the accumulated privileges of the accessing element cover the accumulated safeguards of the accessed element. However, the accessed element can choose to use a different policy, and that policy can come from the accessed element itself, the type of the element, the subarchitecture containing the element, or the complete architecture.

Figure 13-3. Access control check algorithm.

A simple architectural access control check algorithm is sketched in Figure 13-3. The algorithm first checks whether the accessing interface and the accessed interface are connected in the architecture topology. If not, the algorithm denies the architectural access. However, if they are connected, the algorithm proceeds to find the interface in the path that is nearest to the accessed interface, namely the direct accessing interface. If the accessing interface and the accessed interface are directly connected, this direct accessing interface is the same as the accessing interface. Then, the privileges of the direct accessing interface are accumulated using various contexts. Similarly, the safeguards and policies of the accessed interface are also collected. If a policy is explicitly specified by the architect, then the policy is consulted to decide whether the accumulated privileges are sufficient for the access. If there is no explicit policy, then the access is granted if the accumulated privileges contain the accumulated safeguards as a subset. This simple algorithm assumes a known, fixed assignment of subjects on whose behalf the architecture operates. Changing subjects requires re-analysis. Dynamic contexts require a more sophisticated approach.
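
The following Java sketch mirrors the steps just described at a high level. The Model and Policy interfaces are hypothetical stand-ins for a real Secure xADL model and are assumptions for illustration, not the actual implementation:

    import java.util.Set;

    // High-level sketch of the check in Figure 13-3; model types are hypothetical stand-ins.
    class ArchitecturalAccessChecker {

        interface Model {
            boolean connected(String accessingIface, String accessedIface);
            String directAccessor(String accessingIface, String accessedIface); // nearest iface on path
            Set<String> accumulatePrivileges(String iface);  // element, type, subarchitecture, global, propagation
            Set<String> accumulateSafeguards(String iface);  // same sources, minus propagation
            Policy explicitPolicy(String iface);             // null if the architect specified none
        }

        interface Policy {
            boolean permits(Set<String> privileges, Set<String> safeguards);
        }

        static boolean checkAccess(Model model, String accessingIface, String accessedIface) {
            if (!model.connected(accessingIface, accessedIface)) {
                return false;                                  // no topological path: deny
            }
            String direct = model.directAccessor(accessingIface, accessedIface);
            Set<String> privileges = model.accumulatePrivileges(direct);
            Set<String> safeguards = model.accumulateSafeguards(accessedIface);
            Policy policy = model.explicitPolicy(accessedIface);
            if (policy != null) {
                return policy.permits(privileges, safeguards); // explicit policy takes precedence
            }
            return privileges.containsAll(safeguards);         // default: privileges must cover safeguards
        }
    }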

The algorithm in Figure 13-3 checks architectural access control for a pair of interfaces. Extending it to the global system architecture can be achieved by enumerating each pair of interfaces and then applying the algorithm to each pair. If the global architecture contains subarchitectures, then a completely flattened architecture graph, where containers' privileges are propagated to the contained elements, is first constructed. Afterward, the algorithm is used to check architectural access control between relevant pairs of interfaces belonging to this architecture graph.

We now examine how the models and techniques for architectural access control can be applied to two applications: a notional application requiring secure cooperation, and the Firefox Web browser.

Example: Secure Cooperation

The first example of architectural access control is a simplistic, notional application that requires secure cooperation between its participants. The software architecture of the application is expressed in the C2 architectural style. The application allows two parties to share data with each other, but these two parties do not necessarily fully trust each other; thus, the data shared must be subject to the control of each party.

The two parties participating in this hypothetical application are an insurance company and a hospital. Each can operate independently and display the messages it receives from its own information sources. For example, the insurance company may internally exchange messages about an insured person's policy status, and the hospital may send a patient's medical history among its departments. The two parties also need to share some messages so the insurance company can pay for the service the hospital provides to patients. To accomplish this, the hospital sends a message to the insurance company, including the patient's name and the service performed. After verifying the policy, the insurance company sends a message back to the hospital, authorizing payment of a certain amount from a certain account. While the two parties need to exchange information, such sharing is limited to certain types of messages. Governing laws, such as the United States' Health Insurance Portability and Accountability Act (HIPAA), might prohibit one party from sending certain information to the other; for example, the hospital cannot send a person's full medical report to the insurance company. Moreover, maintaining business competitiveness also requires each party not to disclose unnecessary information.

Figure 13-4 depicts the application architecture that uses a secure connector on each side that securely routes messages between the insurance company and the hospital. When the insurance-to-hospital connector receives a notification message, it inspects the message, and if the message can be delivered to both the company and the hospital, such as a payment authorization message, then the message is forwarded to both sides. Otherwise, the message is only transferred within the insurance company. The hospital-to-insurance connector operates in a similar fashion.

The data sharing can be controlled in a number of ways by setting different policies on the connectors. For instance, each of the connectors can be denied instantiation, which prevents any sharing from occurring. Even if both connectors are instantiated, the connections with other components and connectors can still be rejected to prevent message delivery and data sharing. When the connectors are instantiated and properly connected with other elements, each of them can use its own policy on internal message routing to control the messages that can be delivered to its own side and to the other side.
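
A policy-driven routing connector of this kind might be sketched in Java as follows; the message type and the sharing predicate are illustrative assumptions rather than the actual example's code:

    import java.util.function.Predicate;

    // Sketch of a policy-driven routing connector in the spirit of Figure 13-4.
    record Notification(String type, String payload) {}

    interface Sink {
        void deliver(Notification n);
    }

    class SecureRoutingConnector {
        private final Sink localSide;        // for example, the insurance company's components
        private final Sink remoteSide;       // for example, the hospital's components
        private final Predicate<Notification> shareable;   // the connector's routing policy

        SecureRoutingConnector(Sink localSide, Sink remoteSide, Predicate<Notification> shareable) {
            this.localSide = localSide;
            this.remoteSide = remoteSide;
            this.shareable = shareable;
        }

        void route(Notification n) {
            localSide.deliver(n);            // local delivery always proceeds
            if (shareable.test(n)) {
                remoteSide.deliver(n);       // cross-organization delivery only if the policy allows it
            }
        }
    }

    // Usage sketch: only payment authorizations cross the organizational boundary.
    // new SecureRoutingConnector(insurance, hospital, n -> "PAYMENT_AUTHORIZATION".equals(n.type()));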

This architecture also promotes understanding and reuse: Only two secure connectors are used; these connectors perform a single task of secure message routing, and they can be used in other cases by adopting a different policy.

Example: Firefox

Firefox is an open-source Web browser first released in November 2004. It uses three key platform technologies: XPCOM, a cross-platform component model; JavaScript, the Web development programming language that is also used to develop front-end components of Firefox; and XPConnect, the bidirectional connection between native XPCOM components and JavaScript objects.

Figure 13-4. Insurance hospital interorganization information routing.

Trust Boundary Between Chrome and Content. When a user uses the Firefox browser to browse the Web, the visible window contains two areas. The chrome, which consists of decorations of the browser window, such as the menu bar, the status bar, and the dialogs, is controlled by the browser. The browser is trusted to perform arbitrary actions to accomplish the intended task. Borrowing the term chrome, which originally refers to these user interface elements, the browser's code is called the chrome code. Such code can perform arbitrary actions. Any installed third-party extensions also become a part of the chrome code.

The other area, the content area, is contained within the browser chrome. The content area contains content coming from different sources that are not necessarily trustworthy. Some of this content may contain active JavaScript code. Such content code should not be allowed to perform arbitrary actions unconditionally and must be restricted accordingly. Otherwise, such code could abuse privileges to cause damage or harm to the users. This boundary between the chrome code and the content code is the most important trust boundary in Firefox.

Because of the architectural choice of using XPCOM, JavaScript, and XPConnect to develop the Firefox browser and extensions, both chrome code and content code written in JavaScript can use XPConnect to access interfaces of XPCOM components that interact with the underlying operating system services. The XPCOM components are represented as the global Components collection in JavaScript.

XPConnect, as the connector between the possibly untrustworthy accessing code and the accessed XPCOM components, should protect the XPCOM interfaces and decide whether the access to those interfaces should be permitted.

Trust Boundary Between Contents from Different Origins. Another trust boundary is between contents having different origins. The origin of content is determined by the protocol, the host name, and the port used to retrieve the content. Contents differing in protocol, host name, or port are considered to have different origins. Users may browse many different sites, and any page can load content from different origins. The content coming from one source should only be able to read or write content originating from the same source. This is called the same-origin policy. Without it, a malicious page from one source could use cross-domain access to retrieve or modify sensitive information from another origin, such as the password that the user uses for authentication with the other origin. This is an architectural access control process in which interfaces of a content component from one origin should not be inappropriately accessed by a content component from another origin.
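
The origin comparison can be sketched in a few lines of Java. This is illustrative only and does not reproduce Firefox's internal implementation:

    import java.net.URI;
    import java.net.URISyntaxException;

    // Same-origin sketch: two URLs share an origin only if scheme, host, and port all match.
    class Origin {
        final String scheme;
        final String host;
        final int port;

        Origin(String url) throws URISyntaxException {
            URI uri = new URI(url);
            scheme = uri.getScheme();
            host = uri.getHost();
            // -1 means "no explicit port"; a full implementation would substitute the scheme's default.
            port = uri.getPort();
        }

        boolean sameOriginAs(Origin other) {
            return scheme.equalsIgnoreCase(other.scheme)
                    && host.equalsIgnoreCase(other.host)
                    && port == other.port;
        }
    }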

Principals. Since the JavaScript language does not specify how security should be handled, the Firefox JavaScript implementation defines a principal-based security infrastructure to support enforcing the trust boundaries. There are two types of principals. When a script is accessing an object, the executing script has a subject principal and the object being accessed has an object principal.

Firefox uses principals to identify code and content coming from different origins. Each unique origin is represented by a unique principal. The principal in Firefox corresponds to the Subject construct in Secure xADL. Such Subjects are used to regulate architectural access control.

XPConnect: Secure Connector. The security manager within the XPConnect architectural connector coordinates critical architectural operations. It regulates the access by scripts running as one principal to objects owned by another principal (if the subject principal is not the system principal, then both principals must be the same for the access to be allowed). It also decides whether a native service can be created, obtained, and wrapped (one type of architectural instantiation operation), and it arbitrates whether a URL can be loaded into a window (another type of architectural instantiation operation).
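
The parenthesized access rule can be sketched as follows; the class and principal names are illustrative, not Firefox's actual API:

    // Sketch of the access rule described above: a system (chrome) principal may access
    // anything; otherwise subject and object principals must match.
    class ScriptSecurityManagerSketch {
        static final String SYSTEM_PRINCIPAL = "system";

        static boolean canAccess(String subjectPrincipal, String objectPrincipal) {
            if (SYSTEM_PRINCIPAL.equals(subjectPrincipal)) {
                return true;   // chrome code may perform arbitrary actions
            }
            return subjectPrincipal.equals(objectPrincipal);   // same-origin requirement for content
        }
    }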

Figure 13-5 depicts the Firefox component security architecture. Interfaces of the native XPCOM components executing with the chrome role are accessible from other chrome components but should be protected from other content components. The XPConnect connector maintains this boundary between content code and chrome code. The content components from one origin, including the containing window or frame and the DOM nodes contained within them, form a subarchitecture. Their interfaces can be manipulated by chrome components, but should be protected from content components from other origins. The XPConnect connector maintains this boundary of same origin and helps achieve the needed protection.

Figure 13-5. Firefox component security architecture.

To briefly summarize this section, we first defined the access control models needed to enforce security and illustrated how they can be applied at the architecture level through the use of concepts such as subject, principal, resource, privilege, safeguard, and policy. We next demonstrated how these concepts can be incorporated into an architecture description using the Secure xADL architecture description language as an example. The resulting architecture description can be checked to verify that architectural accesses occur only as intended. The concepts, languages, and algorithms allow an architect to evaluate the security properties of alternative architectures and choose designs that satisfy security requirements. We also discussed two example applications to illustrate these benefits.

In the next section, we discuss another security notion—trust—and show how an architecture approach can be successfully used to integrate trust management within environments where participants are decentralized and make independent trust decisions in the absence of a centralized authority.

TRUST MANAGEMENT

Trust management concerns how entities establish and maintain trust relationships with each other. Trust management plays a significant role in decentralized applications (such as the peer-to-peer architectures discussed in Chapter 11) where entities do not have complete information about each other and thus must make local decisions autonomously. Entities must account for the possibility that other entities may be malicious and may indulge in attacks with an intention to subvert the system. In the absence of a centralized authority that can screen the entry of entities in the system, manage and coordinate the peers, and implement suitable trust mechanisms, it becomes the responsibility of each decentralized entity to adopt suitable measures to safeguard itself. Trust relationships, in such applications, help entities gauge the trustworthiness of other entities and thus make informed decisions to protect themselves from potential attacks.

It is therefore critical to choose an appropriate trust management scheme for a decentralized application. This by itself is not enough, however. Consider the analogy of a house, access to which is restricted by a lock on the front door. The owner may be worried that the lock may be easily picked by thieves and so may explore new kinds of locks that are harder to break; however, the owner may not realize that the windows are unsecured and can be easily penetrated. Thus, it is important to focus on the lock as well as the windows to ensure the integrity of the house. Similarly, it is important to focus not only on a reliable trust management scheme but also on fortifying each entity in a decentralized application. This is possible through a software architecture approach that guides the integration of a suitable trust model within the structure of each entity and includes additional security technologies to address the pervasive concerns of security and trust.

Our focus in this section is primarily on incorporating reputation-based trust management systems within the architecture of decentralized entities. Reputation-based trust management systems are those that use an entity's past reputation to determine its trustworthiness. However, before delving deeper into reputation-based systems, we present a brief introduction to the concepts of trust and reputation.

Trust

The concept of trust is not new to humans, nor is it limited to electronic entities. Trust is an integral part of our social existence: Our interactions in society are influenced by the perceived trustworthiness of other entities. Thus, in addition to computer scientists, researchers from other fields such as sociology, history, economics, and philosophy have devoted significant attention to the issue of trust (Marsh 1994). Given that trust is a multidisciplinary concept, several definitions of trust exist in the literature. However, since the discussion here is in the context of software development, we adopt a definition of trust coined by Diego Gambetta that has been widely used by computer scientists. He defines trust as

... a particular level of the subjective probability with which an agent assesses that another agent or group of agents will perform a particular action, both before he can monitor such action (or independently of his capacity ever to be able to monitor it) and in a context in which it affects his own action. (Gambetta 2000)

This definition notes that trust is subjective and depends upon the view of the individual; the perception of trustworthiness may vary from person to person. Further, trust can be multidimensional and depends upon the context in which trust is being evaluated. For example, A may trust B completely to repair electronic devices but may not trust B to repair cars. The concept of context is thus critical since it can influence the nature of trust relationships significantly.

Gambetta also introduced the concept of using values for trust. These values may express trust in several different ways. For example, trust may be expressed as a set of continuous real values, binary values, or a set of discrete values. The representation and expression of trust relationships depends upon the application requirements. For example, binary values for trust may be used in an application that needs only to establish whether an entity can be trusted. If instead the application requires entities to compare trustworthiness of several entities, a richer expression of trust values is required, motivating the need for continuous trust values.

Trust is conditionally transitive. This means that if an entity A trusts entity B and entity B trusts entity C, it may not necessarily follow that entity A can trust entity C. There are a number of parameters that influence whether entity A can trust entity C. For example, entity A may trust entity C only if certain possibly application-specific conditions are met, or if the context of trust is the same.
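
The following toy sketch illustrates conditional transitivity: a recommendation is accepted only when the contexts match and both trust links are sufficiently strong. The function name and the 0.5 threshold are arbitrary illustrations, not part of any published model.

    def transitive_trust(a_trusts_b: float, b_trusts_c: float,
                         same_context: bool, threshold: float = 0.5) -> bool:
        """A accepts C via B only if the contexts match and both links
        are at least as strong as the (application-specific) threshold."""
        return same_context and a_trusts_b >= threshold and b_trusts_c >= threshold

    print(transitive_trust(0.9, 0.8, same_context=True))    # True
    print(transitive_trust(0.9, 0.8, same_context=False))   # False: contexts differ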

Trust Model

A trust model describes the trust relationships between entities. Realizing the immense value of managing trust relationships between entities, a number of trust models have been designed. These models are geared toward different objectives and targeted at specific applications, and hence embody different definitions of "trust model." For some, the model may mean just a trust algorithm and a way of combining different trust information to compute a single trust value; for others, a trust model may also encompass a trust-specific protocol to gather trust information from other entities. Yet others may want a trust model to also specify how and where trust data is stored.

Definition. A trust model describes the trust information that is used to establish trust relationships, how that trust information is obtained, how that trust information is combined to determine trustworthiness, and how that trust information is modified in response to personal and reported experiences.

This definition identifies three important components of a trust model. The first component specifies the nature of the trust information used and the protocol used to gather that information. The second component dictates how the gathered information is analyzed to compute a trust value. The third component determines not only how an entity's experiences can be communicated to other entities but also how they can be incorporated back into the trust model.
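
The three components can be rendered as a skeletal interface, as in the Python sketch below. This is one hypothetical rendering, not a prescribed API; each abstract method corresponds to one component of the definition.

    from abc import ABC, abstractmethod

    class TrustModel(ABC):
        @abstractmethod
        def gather(self, peer_id: str) -> list:
            """Component 1: collect trust information about a peer,
            both direct observations and reports from other peers."""

        @abstractmethod
        def evaluate(self, evidence: list) -> float:
            """Component 2: combine gathered evidence into a trust value."""

        @abstractmethod
        def update(self, peer_id: str, experience: float) -> None:
            """Component 3: fold a new personal or reported experience
            back into the stored trust information."""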

Reputation-Based Systems

Related to trust is the concept of reputation. Alfarez Abdul-Rahman and Stephen Hailes (Abdul-Rahman and Hailes 2000) define reputation as an expectation about an individual's behavior based on information about, or observations of, its past behavior. In online communities, where an individual may have little information with which to determine the trustworthiness of others, reputation information is typically used to determine the extent to which they can be trusted. An individual with a better reputation is generally considered to be more trustworthy.

Reputation may be determined in several ways. For example, a person may rely on his direct experiences, the experiences of other people, or a combination of both to determine the reputation of another person. Trust management systems that use reputation to determine the trustworthiness of an entity are termed reputation-based systems. There are several applications, such as Amazon.com and eBay, that employ such reputation-based systems.

Reputation-based systems can be either centralized or decentralized. A decentralized reputation-based system is one where every entity directly evaluates other entities, maintains those evaluations locally, and interacts directly with other entities to exchange trust information. A centralized reputation-based system, on the other hand, relies on a single centralized authority either to facilitate evaluations and interactions between entities or to store relevant trust information. Amazon.com and eBay provide a central repository to store reputation information provided by their users, while XREP, a trust model for P2P file-sharing applications, is an example of a decentralized reputation-based system. Next, we take a deeper look at eBay and XREP.

eBay

eBay is an electronic marketplace where diverse users sell and buy goods. Sellers advertise items and buyers place bids for those items. After an auction ends, the winning bidder pays the seller for the item. Both buyers and sellers rate each other after the completion of a transaction. A positive outcome results in a +1 rating and a negative outcome results in a −1 rating. These ratings form the reputation of buyers and sellers. This reputation information is stored and maintained by eBay instead of by its users. eBay is, thus, not a purely decentralized reputation system. If eBay's centralized data stores were to become unavailable, eBay users would have no access to the trust information of other users.

eBay allows this trust information to be viewed through feedback profiles. A user can click on the feedback profile of buyers or sellers to view their past interaction histories and trust information. The profile includes the total number of interactions a user has been involved in, along with his total trust score. This score, called the feedback score, is computed as follows: A positive rating increases the feedback score by 1, a negative rating decreases it by 1, and a neutral rating leaves it unaffected. A user can only affect another user's feedback score by one point per week. For example, if one user were to leave three positive ratings for another user, the feedback score would only increase by 1; similarly, even if a user were to leave five negative ratings and two positive ratings, the feedback score would only decrease by 1. The profile also lists the user's total numbers of positive, negative, and neutral ratings, and displays aggregates of the ratings received over the last month, six months, and twelve months. A user viewing a profile can also choose to read all the comments written about a particular buyer or seller.
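
The scoring rule just described is simple enough to capture in a few lines. The following Python sketch is a simplified model of that rule, not eBay's actual implementation; it clamps each rater's net weekly contribution to one point.

    from collections import defaultdict

    def feedback_score(ratings):
        """ratings: iterable of (rater_id, week, value), value in {-1, 0, +1}.
        Each rater's net contribution per week is clamped to [-1, +1]."""
        net = defaultdict(int)
        for rater, week, value in ratings:
            net[(rater, week)] += value
        return sum(max(-1, min(1, n)) for n in net.values())

    # Three positives from one rater in one week count as +1 ...
    print(feedback_score([("u1", 12, +1)] * 3))                         # 1
    # ... and five negatives plus two positives count as -1.
    print(feedback_score([("u2", 12, -1)] * 5 + [("u2", 12, +1)] * 2))  # -1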

Such a system can be manipulated to defeat its purpose, of course. The case of eBay in 2000 is a classic example of a set of peers engaging in fraudulent actions (Dingledine et al. 2003). A number of peers first engaged in multiple successful eBay auctions (to establish strong trust ratings). Once their trust ratings were sufficiently high to engage in high-value deals, they used their reputations to start auctions for high-priced items, received payment for those items, and then disappeared, leaving the buyers defrauded.

XREP

XREP, proposed by Ernesto Damiani and colleagues (Damiani et al. 2002), is a trust model for decentralized peer-to-peer (P2P) file-sharing applications. Development of trust models is an active area of research and XREP is chosen here to serve as an example. P2P file-sharing applications consist of a distributed collection of entities, also called peers, that interact and exchange resources, such as documents and media, directly with other peers in the system. A decentralized P2P file-sharing application, such as one based on Gnutella, is characterized by the absence of a centralized authority that coordinates interactions and resource exchanges among peers. Instead, each peer directly queries its neighboring peers for files and this query subsequently is forwarded to other peers in the system. Each queried peer responds positively if it has the requested resource. Upon receiving these responses, the query originator can choose to download the resource from one of the responding peers.

While such decentralized file-sharing applications offer significant benefits, such as no single point of failure and increased robustness, in addition to allowing users at the edge of the network to share files directly with each other, they are also prone to several attacks by malicious peers. This is because such decentralized P2P file-sharing applications are also open, implying that anyone can join and leave the system at any time without restriction. Peers with malicious intent may offer tampered files or may even disguise Trojan horses and viruses as legitimate files and make them available for download. In the January 2004 issue of Wired magazine, Kim Zetter's article "Kazaa Delivers More Than Tunes" (Zetter 2004) cites a study reporting that 45 percent of 4,778 executable files downloaded through the Kazaa file-sharing application contained malicious code such as viruses and Trojan horses. When unsuspecting users download such files, they may not only harm their own computers but also unknowingly spread the malicious files to other users.

Clearly, there is a need for mechanisms that will help determine the trustworthiness of both peers and the resources offered by them. Decentralized reputation-based trust schemes offer a potential solution to this problem by using reputation to determine the trustworthiness of peers and resources. XREP is an example of such a reputation-based scheme for decentralized file-sharing applications. XREP includes a distributed polling algorithm to allow reputation values to be shared among peers, so that a peer requesting a resource can assess the reliability of both the peer and the resource offered by a peer.

The XREP distributed protocol consists of the following phases: resource searching, resource selection and vote polling, vote evaluation, best servent check, and resource downloading, as illustrated in Figure 13-6. Resource searching is similar to that in Gnutella and involves a servent (that is, a Gnutella peer; servent is a neologism formed from server and client) broadcasting to all its neighbors a Query message containing search keywords. When a servent receives a Query message, it responds with a QueryHit message. In the next phase, upon receiving QueryHit messages, the originator selects the best matching resource among all possible resources offered. At this point, the originator polls other peers using a Poll message to ask their opinions about the resource or the servent offering the resource. Upon receiving a Poll message, each peer may respond by communicating its votes on the resource and servents using a PollReply message. These messages help distinguish reliable from unreliable resources and trustworthy from fraudulent servents.

In the third phase, the originator collects a set of votes on the queried resources and their corresponding servents. It then begins a detailed checking process that includes verifying the authenticity of the PollReply messages, guarding against groups of malicious peers acting in tandem by using cluster computation, and sending TrueVote messages to selected voters to confirm the votes received from them. At the end of this checking process, based on the trust votes received, the peer may decide to download a particular resource. However, since multiple servents may offer the same resource, the peer still needs to select a reliable servent. This is done in the fourth phase, when the servent with the best reputation is contacted to confirm that it actually offers the resource. Upon receiving the servent's reply, the originator requests the resource from it. It also updates its repositories with its opinion on the downloaded resource and the servent that offered it.
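
The vote-evaluation and servent-selection steps (phases three and four) can be sketched as follows. The Python below is illustrative: authenticity verification, cluster computation, and TrueVote confirmation are collapsed into a single verify stub, and the data shapes are assumptions.

    from collections import defaultdict

    def select_servent(poll_replies, verify=lambda reply: True):
        """poll_replies: list of (voter_id, {servent_id: +1 or -1}).
        Tallies votes from verified replies and returns the servent
        with the best net reputation, or None if no votes survive."""
        tally = defaultdict(int)
        for reply in poll_replies:
            if not verify(reply):        # discard unauthenticated replies
                continue
            voter, votes = reply
            for servent, vote in votes.items():
                tally[servent] += vote
        return max(tally, key=tally.get) if tally else None

    replies = [("p1", {"sA": +1, "sB": -1}),
               ("p2", {"sA": +1}),
               ("p3", {"sB": +1})]
    print(select_servent(replies))       # "sA": best aggregate reputation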

Figure 13-6. Phases in XREP.

Architectural Approach to Decentralized Trust Management

The nature of decentralized systems and their susceptibility to various types of attacks makes it critical to design such decentralized systems carefully. Software architecture provides an excellent basis to reason about these trust properties and can serve to provide comprehensive guidance on how to build such systems. In particular, it provides guidance on how to design and build each decentralized entity so that it can protect itself against attacks, as well as retain its independence to make local autonomous decisions. There are three main steps involved in such an architectural approach: understanding and assessing the real threats to a system, designing countermeasures against these threats, and incorporating guidelines corresponding to these countermeasures into an architectural style.

Threats to Decentralized Systems

Impersonation. Malicious peers may attempt to conceal their identities by portraying themselves as other users. They may do so to capitalize on the preexisting trust relationships between the impersonated identities and the targets of the deception. Therefore, the targets of the deception need the ability to detect these incidents.

Fraudulent Actions. It is also possible for malicious peers to act in bad faith without actively misrepresenting themselves or their relationships with others. Users can indicate that they have a particular service available even when they knowingly do not have it. Therefore, the system should attempt to minimize the effects of bad faith.

Misrepresentation. Malicious users may also misrepresent their trust relationships with other peers in order to mislead others. This deception could intentionally inflate or deflate the malicious user's trust relationships. Peers could publish that they do not trust an individual they know to be trustworthy, or they could claim to trust a user they know to be dishonest. Both possibilities must be taken into consideration.

Collusion. A group of malicious users may also join together to actively subvert the system. This group may decide to collude in order to inflate their own trust values and deflate trust values for peers that are not in the collective. Therefore, a certain level of resistance needs to be in place to limit the effect of malicious collectives.

Denial of Service. In an open architecture, malicious peers may launch an attack on individuals or groups of peers. The primary goal of these attacks is to disable the system or make it impossible for normal operation to occur. These attacks may flood peers with well-formed or ill-formed messages. In order to compensate, the system requires the ability to contain the effects of denial of service attacks.

Addition of Unknowns. In an open architecture, the cold start situation arises: Upon initialization, a peer does not know anything about anyone else on the system. Without any trust information present, there may not be enough knowledge to form relationships until a sufficient body of experience is established. Therefore, the ability to bootstrap relationships when no prior relationships exist is essential.

Deciding Whom to Trust. In a large-scale system, certain domain-specific behaviors may indicate the trustworthiness of a user. Trust relationships generally should improve when good behavior is perceived of a particular peer. Similarly, when dishonest behavior is perceived, trust relationships should be downgraded accordingly.

Out-of-Band Knowledge. Out-of-band knowledge occurs when there is data not communicated through normal channels. While trust is assigned based on visible in-band interactions, there may also exist important invisible interactions that have an impact on trust. For example, Alice could indicate in person to Bob the degree to which she trusts Carol. Bob may then want to update his system to adjust for Alice's out-of-band perception of Carol. Therefore, ensuring the consideration of out-of-band trust information is essential.

Measures to Address Threats

Use of Authentication. To prevent impersonation attacks, it is essential to use some form of authentication so that message senders can be uniquely identified. For instance, entities sign outgoing messages and receiving entities verify those signatures to validate the authenticity of those messages. Signature-based authentication, such as that discussed at the beginning of this chapter, also helps protect against potential repudiation attacks—attacks where an entity may falsely claim that it never sent the message.

Separation of Internal Beliefs and Externally Reported Information. In a decentralized system, each entity has its own individual goals, which may conflict with those of other entities. It is therefore important to model externally reported information separately from internal beliefs. This separation helps resolve conflicts between externally reported information and internal perceptions. For example, a peer may favor information it has perceived directly and believes to be accurate over information reported by others. A peer may also not want to disclose sensitive data, so it must have the ability to report information that differs from what it actually believes. ("What I've heard is ...")
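
A minimal sketch of this separation appears below. The BeliefStore class and its policy of favoring direct observation over hearsay are hypothetical illustrations, not part of any published trust model.

    class BeliefStore:
        def __init__(self):
            self._internal = {}   # peer_id -> value held from direct experience
            self._external = {}   # (reporter_id, peer_id) -> value "I've heard"

        def observe(self, peer, value):
            self._internal[peer] = value              # direct perception

        def record_report(self, reporter, peer, value):
            self._external[(reporter, peer)] = value  # kept separate, never merged blindly

        def effective_trust(self, peer):
            """Favor direct perception over hearsay when both exist."""
            if peer in self._internal:
                return self._internal[peer]
            reports = [v for (r, p), v in self._external.items() if p == peer]
            return sum(reports) / len(reports) if reports else None

        def published_trust(self, peer):
            """What is disclosed may differ from what is believed;
            a policy hook here could redact or coarsen the value."""
            return self._internal.get(peer)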

Making Trust Relationships Explicit. Without a controlling authority that governs the trust process, peers require information to decide whether or not to trust what they perceive. Active collaboration between peers may provide enough knowledge for peers to reach their local decisions. Thus it is important that information about trust relationships be explicit and exchangeable between peers. Exposed trust information could conceivably be misused by malicious peers to take advantage of certain peers; however, it should be remembered that exchanged information need not truly reflect the trust perceptions of the entities.

Comparable Trust. Ideally, published trust values should be syntactically and semantically comparable; that is, equivalent representations in one implementation should have the same structure and meaning in another. If the same value has different meanings across implementations, then accurate comparisons across peers cannot be made.

Corresponding Guidelines to Incorporate into an Architectural Style

Digital Identities. Without the ability to associate identity with published information, it is a challenge to develop meaningful trust relationships. Thus, the concept of identities, both physical and digital, is necessary to facilitate meaningful relationships. However, it is important to understand the limitations of digital identities with respect to physical identities.

There may not be a one-to-one mapping between digital and physical identities as one person may utilize multiple digital identities or multiple people may share the same digital identity. Additionally, anonymous users may be present who resist digital identification. Therefore, it is not always possible to tie a digital identity to one physical individual and make accurate evaluations of a person. Instead, a critical criterion of trust relationships in decentralized applications should be the actions performed by digital identities, not by physical identities. The architectural style should therefore consider trust relationships only between digital identities.

Separation of Internal and External Data. Explicit separation of internal and external data supports the separation of internal beliefs from externally reported information within a peer. Therefore, the architectural style should adopt the explicit separation of internal and external data.

Making Trust Visible. Trust information received externally from entities is used within the peer architecture to make local decisions. In order to process this trust information internally across the architecture, trust cannot be localized to only one component. Each component responsible for making local decisions needs the ability to take advantage of this perceived trust. If the perceived trust is not visible, then accurate assessments may not be made. Therefore, the architectural style should require trust relationships to be visible to the components in the peer's architecture as well as be published externally to other peers.

Expression of Trust. There is no clear consensus in the trust literature as to which trust semantics provide the best fit for applications; therefore, indiscriminately enforcing a constraint at the architectural level to use a particular trust semantic is inappropriate. While trust values should ideally be semantically comparable, a generic architectural style might impose only the constraint that trust values must at least be syntactically comparable. For example, this can be done by requiring that trust values be represented numerically.
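
One way to meet the syntactic-comparability constraint is to map each implementation's native representation onto a common numeric scale before comparison, as sketched below. The discrete labels and their numeric mappings are assumptions for illustration.

    def normalize(value) -> float:
        """Coerce binary, discrete, or continuous trust values to [0.0, 1.0]."""
        if isinstance(value, bool):                  # binary trust
            return 1.0 if value else 0.0
        if isinstance(value, str):                   # discrete labels
            return {"distrust": 0.0, "unknown": 0.5, "trust": 1.0}[value]
        return max(0.0, min(1.0, float(value)))      # continuous, clamped

    print(normalize(True), normalize("unknown"), normalize(0.73))   # 1.0 0.5 0.73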

Resultant Architectural Style

The principles and constraints identified above can be combined to create an architectural style for decentralized trust management. In addition to these constraints, based on the common elements of trust models, four functional units of a decentralized entity are first identified: Communication, Information, Trust, and Application. The Communication unit handles interaction with other entities; the Information unit is responsible for persistently storing trust and application-specific information; the Trust unit is responsible for computing trustworthiness and guides trust-related decisions; and the Application unit includes application-specific functionality and is responsible for enabling local decision making. The Communication unit does not depend upon any other unit, while the Information unit depends upon information received from other entities and thus upon the Communication unit. The Trust unit depends upon the Communication and Information units, and the Application unit builds upon all three of the other units.

Given this interplay between the four units, adopting a layered architectural style enables a natural structuring of these units according to their interdependencies and also offers several benefits, such as reusability of components. Since decentralized entities are autonomous, they have the privilege of refusing to respond to requests from other entities. As a result, decentralized applications typically employ asynchronous event-based communication protocols. To reflect this communication paradigm within the internal architecture of an entity, an event-based architectural style is a natural choice. Moreover, event-based architectural styles are known to facilitate loose coupling among components. This can, for instance, allow for the replacement of trust models and protocol handlers in the architecture.

C2 is one such event-based layered architectural style. As discussed in Chapter 4, C2 includes specific visibility rules: Components belonging to a layer are aware only of components in layers above them and are unaware of components below them. C2 thus naturally fits the constraints of a trust-centric architectural style. Further, C2 has existing tool support that can be leveraged by an architectural style based on it. Therefore, the PACE (Practical Architectural Style for Composing Egocentric applications) architectural style, described next, extends the C2 style.
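
The message discipline implied by this layering can be sketched as follows. This is a toy rendering of C2-style routing, with requests flowing upward and notifications flowing downward; it is not the ArchStudio or PACE implementation, and the Layer class is a hypothetical simplification.

    class Layer:
        def __init__(self, name, above=None):
            self.name, self.above, self.below = name, above, None
            if above:
                above.below = self

        def request(self, msg):    # requests travel upward, toward Communication
            print(f"{self.name}: request '{msg}'")
            if self.above:
                self.above.request(msg)

        def notify(self, msg):     # notifications travel downward, toward Application
            print(f"{self.name}: notification '{msg}'")
            if self.below:
                self.below.notify(msg)

    comm = Layer("Communication")
    info = Layer("Information", above=comm)
    trust = Layer("Trust", above=info)
    app = Layer("Application", above=trust)

    app.request("send Query")         # rises through Trust and Information
    comm.notify("QueryHit arrived")   # descends back down to the Application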

PACE Architectural Style

The PACE architectural style includes all the above-described guidelines and constraints and provides guidance on the components that must be included within the architecture of an entity and how they should interact with each other. The style is described here to illustrate one way of combining insights from the preceding discussion into a coherent architecture. Corresponding to the four functional units, PACE divides the architecture of a decentralized entity into four layers: Communication, Information, Trust, and Application. Each of these layers, along with their components, is illustrated in Figure 13-7.

The Communication layer is responsible for handling communication with other peers in the system. It consists of several components designed to support various standard communication protocols, the Communications Manager, and the Signature Manager. The communication protocol components are responsible for translating internal events to external communications. The Communications Manager instantiates the protocol components while the Signature Manager signs requests and verifies notifications.

To separate the internal trust beliefs of a peer from those received from other peers, the Information layer consists of two components: the Internal Information component that stores self-originating messages and the External Information component, which stores messages received from others.

The Trust layer incorporates the components that enable trust management. This layer consists of the Key Manager, which generates the local PKI keypair; the Credential Manager, which manages the credentials of other peers; and the Trust Manager, which computes trust values for messages received from other peers.

The Application layer encapsulates all application-specific components. The Application Trust Rules component encapsulates the chosen rules for assigning trust values based on application-specific semantic meanings of messages, and supports different dimensions of trust relationships. The Application subarchitecture represents the local behavior of a peer. While components in the other layers can be reused across different applications, components in the Application layer are application-dependent and hence not reusable across domains. The application developer is thus expected to implement components for this layer depending upon the application's needs; its internal architecture may be in an entirely different style. All external communication must go through the PACE stack, however.

Figure 13-7. PACE components.

PACE-Induced Benefits

PACE's guiding principles induce properties that act as countermeasures against threats to a decentralized system. Some of these properties are effected by the PACE architectural style and canonical implementations of its standard components; others are application specific and so involve the Application layer. We now take a look at some of the common threats and the way PACE helps address them. It should be noted that it is not mandatory for all peers participating in a decentralized system to be built in the PACE architectural style in order to function and interact with PACE-based peers. However, those peers cannot avail themselves of the benefits of the PACE style.

Impersonation. Impersonation refers to the threat caused by a malicious peer posing as another in order to misuse that peer's privileges or reputation. PACE addresses this threat through the use of digital signatures and message authentication. All external communication in the PACE architecture is constrained to the Communication layer, which thus offers a single point where impersonation can be detected. A malicious peer that tries to impersonate a user without the correct private key, or that does not digitally sign its messages, can be detected by verifying signatures. Additionally, if a private key has been compromised, a revocation for that key can be transmitted. PACE components can then refuse to assign trust values to revoked public keys.
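
The Signature Manager's gatekeeping role can be sketched as below. PACE calls for public-key signatures; this self-contained sketch substitutes an HMAC as a stand-in, so the revocation check operates on shared keys rather than public keys, and all names are illustrative.

    import hashlib
    import hmac

    def sign(key: bytes, payload: bytes) -> bytes:
        return hmac.new(key, payload, hashlib.sha256).digest()

    def accept(key: bytes, payload: bytes, signature: bytes,
               revoked: set) -> bool:
        """Drop messages signed with revoked keys or bearing bad signatures."""
        if key in revoked:
            return False
        return hmac.compare_digest(sign(key, payload), signature)

    key = b"alice-key"
    msg = b"QueryHit: song.mp3"
    print(accept(key, msg, sign(key, msg), revoked=set()))   # True
    print(accept(key, msg, b"forged", revoked=set()))        # False: bad signature
    print(accept(key, msg, sign(key, msg), revoked={key}))   # False: key revoked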

Fraudulent Actions. Malicious peers may engage in fraudulent behavior including advertising false resources or services and not fulfilling commitments. Since PACE is designed for open, decentralized architectures, there is little that may be done to prevent the entry of malicious peers. However, malicious actions may be detected by the user or through the Application layer. Explicit warnings can then be issued concerning those malicious peers, which may help others in their evaluations of these peers.

Misrepresenting Trust. A malicious peer may misrepresent its trust with another in order to positively or negatively influence opinion of a specific peer. Since PACE facilitates explicit communication of comparable trust values, a peer can incorporate trust relationships of others. By using a transitive trust model in the Trust Manager, if Alice publishes that she distrusts Bob, then Carol can use that information to determine if she should trust Bob's published trust relationships.

Collusion. Collusion refers to the threat caused by a group of malicious peers that work in concert to actively subvert the system. It is thus of greater concern than a single peer misrepresenting trust. It has been proven that explicitly signed communication between peers can overcome a malicious collective in a distributed setting. Adapting such results, combining them with efficient schemes for identifying noncooperative groups in a decentralized setting, such as NICE (Lee, Sherwood, and Bhattacharjee 2003), and leveraging PACE's ability to detect impersonation allows collusion to be addressed.

Denial of Service. Malicious peers may also launch attacks against peers by flooding them with well-formed or ill-formed messages. The separation of the Communication layer allows the effects of such denial-of-service attacks to be isolated and countered. Incorrectly formed messages can be disposed of by the protocol handlers. The Communications Manager can also compensate for well-formed message floods by introducing rate limiting or by interacting with neighboring firewalls to prevent further flooding.
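
One plausible realization of such rate limiting is a per-sender token bucket inside the Communications Manager, sketched below. The rate and capacity parameters, and the admit helper, are illustrative assumptions.

    import time

    class TokenBucket:
        def __init__(self, rate: float, capacity: float):
            self.rate, self.capacity = rate, capacity    # tokens/second, burst size
            self.tokens, self.last = capacity, time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False            # flood detected: drop or defer the message

    buckets = {}
    def admit(sender_id: str) -> bool:
        bucket = buckets.setdefault(sender_id, TokenBucket(rate=5, capacity=10))
        return bucket.allow()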

Addition of Unknowns. When the system is first initialized, there can be a cold-start problem because there are no existing trust relationships. Even though a peer may not have previously interacted with another peer or a message may be known to be forged, PACE's Application layer can still receive these events. Without enough information to make an evaluation, the message will not be assigned a trust value by the Trust Manager.

However, the user can still make the final decision to trust the contents of the message based on out-of-band knowledge that is not captured explicitly.

Deciding Whom to Trust. In a large-scale system, certain domain-specific behaviors may indicate a user's trustworthiness. Trust relationships should generally improve when good behavior is perceived of a particular peer, and vice versa. In PACE, the Application Trust Rules component allows for automated identification of such application-dependent patterns. The detection of good or bad behavior by this component can cause the trust level of the corresponding peer to be increased or decreased, respectively, along a particular trust dimension.
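
A sketch of such an Application Trust Rules component follows. The behaviors, the trust dimension, and the adjustment values are hypothetical; in a real application they would encode domain-specific semantics.

    from collections import defaultdict

    trust = defaultdict(lambda: defaultdict(float))   # peer -> dimension -> value

    RULES = {   # behavior pattern -> (trust dimension, adjustment)
        "delivered_authentic_file": ("file-sharing", +0.1),
        "delivered_tampered_file":  ("file-sharing", -0.5),
    }

    def apply_rule(peer_id: str, behavior: str) -> None:
        dimension, delta = RULES[behavior]
        trust[peer_id][dimension] += delta

    apply_rule("peer42", "delivered_authentic_file")
    apply_rule("peer42", "delivered_tampered_file")
    print(dict(trust["peer42"]))   # {'file-sharing': -0.4}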

Out-of-Band Knowledge. It is essential to ensure that out-of-band information is also considered in establishing trust relationships. While PACE confines all electronic communication to the Communication layer, out-of-band trust information can originate as requests from the user through the Application layer.

Building a PACE-Based Trust-Enabled Decentralized File-Sharing Application

In this section, we present a walk-through of how PACE can be used to design and construct applications. Specifically, we explore how the PACE architectural style can be used to guide the construction of a trust-enabled decentralized file-sharing application. Since the XREP trust model for file-sharing applications was presented earlier, it will be used as the candidate trust model for integration within the PACE style.

The first step in designing an appropriate architecture for each file-sharing entity is to identify the components in the four layers. Since the PACE architectural style already specifies components for the Communication, Information, and Trust layers, the main task here is to identify the components of the Application layer. For the file-sharing application, the Application layer can be decomposed, for example, into eight different components organized into three sublayers as shown in Figure 13-8.

The top sublayer contains only the Application Trust Rules component, while the bottom sublayer comprises the User Interface. The middle sublayer consists of six components: Library, Search Manager, Poll Manager, File Exchanger, Evaluator, and Preferences. The Library component maintains the list of files that have been downloaded and that can be shared with other peers; the files themselves are persistently stored in the Internal Information component. The Search Manager component is responsible for issuing Query, Poll, and TrueVote messages and displaying received responses to those messages through the user interface.

The Poll Manager component responds to Poll messages by sending PollReply messages. The File Exchanger component is responsible for uploading and downloading files, displaying uploaded and downloaded files to the user interface, saving downloaded files to the Internal Information storage, and deleting files from the Internal Information. The Evaluator component is responsible for checking the authenticity of PollReply messages and analyzing peer votes received about resources. The Preferences component manages login information for the user and enables the user to specify preferences including whether to automatically connect to the P2P network, the number of hops, the number of permissible uploads and downloads, and the destination for the library folder.

Once the components of the Application layer are identified, it is important to determine the interactions between the components in order to identify the relevant request and notification messages that traverse the architecture. This includes modeling the different kinds of trust messages exchanged between peers as dictated by the XREP trust model so the relevant components can appropriately react to them.
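
For instance, the Poll Manager's reaction to a Poll message might be modeled as below. The event bus callback, the vote repository, and the message encoding are hypothetical; only the component and message names follow the text.

    class PollManager:
        def __init__(self, vote_repository, emit):
            self.votes = vote_repository   # (servent_id, resource_id) -> +1 or -1
            self.emit = emit               # callback into the peer's event bus

        def on_poll(self, poll):
            """poll: dict with 'resource' and 'servents' keys."""
            relevant = {(s, r): v for (s, r), v in self.votes.items()
                        if r == poll["resource"] and s in poll["servents"]}
            if relevant:
                self.emit({"type": "PollReply", "votes": relevant})

    sent = []
    pm = PollManager({("sA", "song.mp3"): +1, ("sB", "song.mp3"): -1},
                     emit=sent.append)
    pm.on_poll({"resource": "song.mp3", "servents": ["sA", "sB"]})
    print(sent)   # one PollReply carrying the locally stored votes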

Figure 13-8. XREP-based architecture of file-sharing peer.

In the next step, components belonging to each layer must be appropriately implemented. If implementations of any of these components already exist, the application developer can choose to reuse them as long as the PACE style is not violated. Since PACE prototypes in other application domains already exist, it is possible to reuse all components from the Communication, Information, and Trust layers without modification. The only exception is the Trust Manager component, which must be modified to enable the evaluation of poll results within each XREP-based file-sharing peer.

Finally, the above-described architecture of a file-sharing peer may be described in a suitable architecture description language such as xADL. The ArchStudio tool suite can be used both to describe the architecture and to instantiate it. Each instantiation corresponds to a particular peer; thus, the same architectural description may be instantiated repeatedly to create multiple trust-enabled file-sharing peers. The resulting application can then be subjected to threat scenarios to evaluate how well XREP-enabled peers built in the PACE style counter the threats typical of file-sharing applications.

END MATTER

This chapter presented principles, concepts, and techniques relevant to the design and construction of secure software systems. The emphasis has been on showing how security concerns may be assessed and addressed as part of a system's architecture. Use of explicit connectors offers one key way for directly addressing some security needs. Connectors can be used to examine all message traffic and help ensure that no unauthorized information exchange occurs. The latter portion of the chapter explored how the use of a particular architectural style, attuned to security needs in open decentralized applications, can help mitigate risks. Trust models are central to this approach.

Making a system secure may, however, compromise other non-functional requirements. For example, enforcing secure interactions may make the architecture less flexible. In particular, while preventing a component from interacting with unknown components may discourage security attacks, it also implies that the capabilities offered by the untrusted components cannot be leveraged.

Another example is the impact on performance resulting from the use of SPKI (Simple Public Key Infrastructure) for digitally authenticating messages. This mechanism involves signing transmitted messages with keys and verifying the signatures at the receiving end. Message content itself may be encrypted at the sending end and decrypted at the receiving end to ensure confidentiality. Since the algorithms used for authentication and encryption are computationally intensive, their use may negatively affect the performance of an application. This is likely to be particularly troublesome when message exchange dominates application behavior.

A third example relates to the trade-off between security and usability. If the security mechanism in a software system requires a user to perform cumbersome or repetitive actions (for instance, by showing redundant or unnecessary dialog prompts repeatedly), the user may choose to completely turn off the security mechanism.

These examples illustrate the importance of properly considering such trade-offs. It is critical for software architects and application developers to apply careful analysis while designing and constructing secure software systems.

Chapter 12 began the exploration of architecture-based techniques for achieving a variety of non-functional properties. This chapter continued that theme, focusing on the specific NFPs of security and trust. Chapter 14 completes the theme by focusing in depth on the NFP of adaptability.

REVIEW QUESTIONS

  1. What is security? What are the properties that a software system must exhibit in order to be called secure?

  2. List and briefly describe each design principle explained in this chapter.

  3. What challenges to security are introduced by decentralized applications? What is trust management?

EXERCISES

  1. Identify and describe any trade-offs between the design principles explained in the chapter.

  2. What are the security benefits and the security risks associated with the following architectural styles from Chapter 4?

    1. Pipe-and-filter

    2. Blackboard

    3. Event-based

    4. C2

    In your answer, consider how malicious actions in one component can or cannot affect the actions of other components.

  3. Choose any software application, such as Firefox, and use UML sequence diagrams to show how the security mechanisms within the application operate.

  4. Describe at least two applications, other than online auctions and file sharing, where trust or reputation management may prove useful, and explain the rationale for your choices.

  5. Evaluate whether the PACE architectural style could be used to build the participants in the above applications.

  6. Study the architecture of Internet Explorer 7 and evaluate whether IE 7 has any architectural security deficiencies. If so, how can these vulnerabilities be addressed? Use XACML and Secure xADL to propose a more secure architecture for IE 7.

FURTHER READING

An excellent introduction to the field of computer security is given by Bruce Schneier in (Schneier 2000). This highly readable book explores the interplay of technical and nontechnical issues in achieving system security. Matt Bishop's text (Bishop 2003) provides details of the technologies involved. Pfleeger and Pfleeger's book (Pfleeger and Pfleeger 2003) is another comprehensive reference.

Research frontiers for software engineering for security are presented in Devanbu and Stubblebine's roadmap paper (Devanbu and Stubblebine 2000) from the 2000 International Conference on Software Engineering.

Further details on the PACE architectural style can be found in (Suryanarayana et al. 2006; Suryanarayana, Erenkrantz, and Taylor 2005). Further details on the connector-centric approach to security are available in (Ren et al. 2005; Ren and Taylor 2005).

More recently, new results have been presented at the International Workshop on Software Engineering for Secure Systems, beginning with the 2005 workshop.
