6.4. Dependability, Security, and Their Attributes

The original definition of dependability is the ability to deliver a service that can be justifiably trusted. The alternative definition is the ability to avoid service failures that are more frequent and more severe than is acceptable. The concept of trust can be defined as accepted dependence, and dependability encompasses the following attributes [7]:

  • Availability: Readiness for correct service. Correct service is delivered when the service implements the system function.

  • Reliability: Continuity of correct service.

  • Safety: Absence of catastrophic consequences on the users and the environment.

  • Integrity: Absence of improper system alterations.

  • Maintainability: Ability to undergo modifications and repairs.

Security has the attributes of confidentiality, integrity, and availability. In this chapter, it is assumed that security is a concern for the system under consideration and that reasonable security mechanisms are in place. Confidentiality, however, is absent from the above interpretation of dependability. Interestingly, other attributes, such as authenticity and nonrepudiation, are not considered in the previous work. Avizienis et al. [7] merged the attributes of dependability and security, as shown in Figure 6.2. Similarly, the above attributes can be reframed as follows under the proposed framework shown in Figure 6.3:

  • Availability: Readiness for correct service. Correct service is defined as delivered system behavior that stays within the error tolerance boundary (see the sketch following this list).

  • Reliability: Continuity of correct service. This is the same as the conventional definition.

  • Safety: Absence of catastrophic consequences on the users and the environment. This is the same as the conventional definition.

  • Integrity: Absence of malicious external disturbance that drives a system's output away from its desired service.

  • Maintainability: Ability to undergo modifications and repairs. This is the same as the conventional definition.

  • Confidentiality: Property that data or information are not made available to unauthorized persons or processes. In the proposed framework, it refers to the property that unauthorized persons or processes either do not receive system output or are blocked by the filter.

  • Authenticity: Ability to provide services with provable origin. In other words, the output can be verifiably linked to a system.

  • Nonrepudiation: Services provided cannot be disclaimed later. In our model, once the system has provided an output, it cannot later deny having done so.
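
The error tolerance boundary referenced in the availability definition above can be made concrete with a minimal sketch. This is an illustrative model only, assuming a scalar output, a known desired value, and a fixed symmetric tolerance; the names are ours, not the framework's.

```python
# Minimal sketch of the error tolerance boundary (illustrative assumptions:
# scalar output, known desired value, fixed symmetric tolerance).

def is_correct_service(output: float, desired: float, tolerance: float) -> bool:
    """Delivered behavior counts as correct service while inside the boundary."""
    return abs(output - desired) <= tolerance

# A small deviation is still correct service; a larger one is an error.
print(is_correct_service(10.3, desired=10.0, tolerance=0.5))  # True
print(is_correct_service(11.0, desired=10.0, tolerance=0.5))  # False
```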

Note that authenticity and nonrepudiation have not been addressed before in the security and dependability framework [7, 11, 12]. These two attributes do not fit into the conventional security attributes of availability, confidentiality, and integrity (ACI). It seems difficult to include authenticity as part of ACI. However, we observe that an authenticity issue can lead to any of the availability, confidentiality, and integrity problems. Hence, it is more appropriate to express authenticity as an intermediate event toward security faults and also as a means of achieving security and dependability. Similarly, it is difficult to include nonrepudiation as part of ACI. Unlike authenticity issues, however, nonrepudiation problems do not necessarily lead to problems of availability, confidentiality, or integrity. Therefore, it seems appropriate to classify nonrepudiation as an independent attribute. An expanded security and dependability tree is given in Figure 6.4.

Figure 6.4. Expanded dependability and security tree.


6.4.1. Taxonomy of Faults

In the conventional framework, a fault is defined as a cause of an error. Under the proposed approach, a failure is linked to an error that falls outside the error tolerance boundary and is caused by a fault. As for the classification of faults, the most popular method is to categorize them as either malicious or nonmalicious [7].

According to the conventional definition, malicious faults have the objective of altering the functioning of a system during use [7, 9]. Hence an "exploit" is classified as an operational, external, human-made, software, malicious interaction fault. Intrusion attempts are also considered faults. This approach has several flaws. For instance, people often exploit their own system's security vulnerabilities in order to identify security loopholes, an activity that carries no "malicious objective." Exploit events are not always faults. Some harmless intrusions that are designed just for fun do not damage a system and have no malicious objective to interfere with its normal operation. Even if we consider such a fun exercise malicious, it does not affect the correct service or cause a service error, so calling it a fault does not fit the definition of a fault.

Avizienis et al. [7] have also proposed eight elementary fault classes. However, combinations of these elementary fault classes can generate faults that do not exist. To address this problem, three major, partially overlapping groupings, namely development faults, physical faults, and interaction faults, are introduced in Avizienis et al. [7]. That framework still suffers from classifying nonmalicious or error-free activities as malicious faults. We attempt to provide a set of elementary fault classes with minimal overlap, and an intuitive choice is to start with two classes that do not overlap at all: human-made faults (HMF) and nonhuman-made faults (NHMF).

Human-made faults (HMF)

Human-made faults result from human actions. They include the absence of actions when actions should be performed (omission faults); performing wrong actions leads to commission faults. Avizienis et al. [7] have categorized human-made faults into two basic classes, malicious faults and nonmalicious faults, distinguished by the objective of the developer or of the humans interacting with the system during its use. Under that scheme, an exploit activity is classified as a malicious fault. As mentioned above, this classification originated in the fault-analysis community and does not integrate well with security. We propose the following new definitions and classifications: HMFs are categorized into two basic classes, faults with unauthorized access (FUA) and other faults (NFUA).

Faults with unauthorized access (FUA)

This class attempts to cover traditional security issues. We investigate FUA from the perspective of availability, integrity, and confidentiality. Nonrepudiation events normally involve authorized access and hence do not fit in the FUA category.

FUA and confidentiality

Confidentiality refers to the property that information or data are not made available to unauthorized persons or processes, or that unauthorized access to a system's output will be blocked by the system's filter. Clearly, confidentiality faults fit FUA and can be regarded as a subclass of it. Confidentiality faults are mainly caused by access control problems, which originate in cryptographic faults, security policy faults, hardware faults, and software faults. Cryptographic faults can originate from encryption algorithm faults, decryption algorithm faults, and faulty key distribution methods. Security policy faults are normally management problems and can appear in different forms (e.g., as contradicting security policy statements).
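
The cause hierarchy just described can be restated in nested form. This is only a summary of the prose above; the leaf lists under hardware and software faults are left empty because the text does not enumerate them.

```python
# Confidentiality-fault cause hierarchy, as described in the text.
CONFIDENTIALITY_FAULT_CAUSES = {
    "access control problems": {
        "cryptographic faults": [
            "encryption algorithm faults",
            "decryption algorithm faults",
            "faulty key distribution methods",
        ],
        "security policy faults": [
            "contradicting security policy statements",  # one example form
        ],
        "hardware faults": [],  # not enumerated in the text
        "software faults": [],  # not enumerated in the text
    },
}
```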

FUA, integrity, and authenticity

Integrity refers to the absence of malicious external disturbance that causes a system to produce incorrect output. The deviated output can be the result of component failure, but it can also be linked to unauthorized access. An integrity problem can arise if, for instance, internal data are tampered with and the produced output relies on the correctness of those data. Integrity problems are related to but different from authenticity problems: in the latter case, output produced elsewhere is attributed to the system, regardless of its correctness. As an example, a person-in-the-middle attack can produce integrity and authenticity faults by altering a message or by producing a totally new one. A confidentiality fault can also occur if the person in the middle gains access to confidential information. This example illustrates that one incident can result in different types of faults.

FUA and availability

Availability refers to a system's readiness to provide correct service. Availability faults can be human-made or nonhuman-made. A typical cause of such faults is some sort of denial-of-service (DoS) attack that can, for example, use some type of flooding (SYN, ICMP, UDP) to prevent a system from producing correct output. The perpetrator in this case has gained access to the system, albeit a very limited one, and this access is sufficient to introduce a fault. Most viruses and worms also interfere with availability when executing. Some malware that is activated remotely might turn a system into a zombie or sleeping agent. System availability is reduced, sometimes to zero, when these zombies are activated by a perpetrator; at other times the system is normally available. While availability is affected only temporarily, the fault (i.e., the malware) is continuously present in the system.
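
The distinction drawn here, that the fault is continuously present while the availability error manifests only during activation, can be sketched as follows. The class and flags are an illustrative model of ours, not part of the framework.

```python
# Illustrative model: a dormant fault is always present, but the
# availability error appears only while a perpetrator activates it.

class ZombieHost:
    def __init__(self) -> None:
        self.malware_present = True  # the fault: continuously in the system
        self.activated = False       # set when the perpetrator triggers it

    def available(self) -> bool:
        # Service degrades (here: to zero) only during activation.
        return not (self.malware_present and self.activated)

host = ZombieHost()
print(host.malware_present, host.available())  # True True: fault present, no error yet
host.activated = True
print(host.malware_present, host.available())  # True False: same fault, now an error
```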

Many FUA faults aim at making system output deviate from its desired trajectory beyond the tolerance boundary. At other times, the fault is unintentional (e.g., the result of an operator error). To make a clear distinction between these two cases, we introduce a new concept not discussed elsewhere: the malicious attempt fault.

A malicious attempt fault has the objective of damaging a system. A real fault is produced only when this attempt is combined with other system faults. From the perspective of the elementary security attributes (availability, confidentiality, integrity, and nonrepudiation), we classify malicious attempt faults according to their aims as:

  1. Intention to disrupt service (e.g., DoS attack).

  2. Attempt to access confidential information.

  3. Intention to improperly modify a system.

  4. Intention to deny having gained services.

Note that a malicious attempt fault is not a real fault unless it is combined with other faults.
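
The rule in this note can be stated as a predicate: a malicious attempt on its own never qualifies as a realized fault; it must combine with at least one other system fault. The function name and fault labels below are illustrative.

```python
# A malicious attempt fault is realized only in combination with other faults.

def realized_malicious_fault(malicious_attempt: bool, other_faults: set[str]) -> bool:
    return malicious_attempt and len(other_faults) > 0

print(realized_malicious_fault(True, set()))                 # False: attempt alone
print(realized_malicious_fault(True, {"buffer_overflow"}))   # True: attempt + software fault
print(realized_malicious_fault(False, {"buffer_overflow"}))  # False: no malicious attempt
```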

Other faults (NFUA)

There are human-made faults that do not belong to FUA. Most such faults are introduced unintentionally, through configuration problems, incompetence, accidents, and so on. Fault detection activity, including penetration testing, is not considered a fault itself, as it does not cause system output to deviate from its desired trajectory. Nonrepudiation faults also belong to the NFUA category, as they normally involve authorized access.

Nonhuman-made faults (NHMF)

NHMF refers to faults caused by natural phenomena without human participation. These are physical faults caused by a system's internal natural processes (e.g., physical deterioration of cables or circuitry) or by external natural processes. The latter originate outside the system but cross system boundaries and affect the hardware either directly, such as radiation, or via user interfaces, such as input noise [7]. Communication faults are an important part of the picture, and they too can be caused by natural phenomena. For example, in communication systems a radio transmission message can be destroyed by an outer-space radiation burst, which results in a system fault but has nothing to do with system hardware or software faults. Such faults have not been discussed in the existing literature.

From the above discussion, we propose the elementary fault classes shown in Figure 6.5. From these elementary fault classes, we can construct a tree representation of various faults, as shown in Figure 6.6 (a rough encoding of this structure in code follows the figure captions).

Figure 6.5. Elementary fault classes.


Figure 6.6. Tree representation of faults.
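
As promised above, the class structure can be held in a small nested mapping. Since Figures 6.5 and 6.6 are not reproduced here, only the splits argued for in the text (HMF into FUA and NFUA, plus NHMF) are encoded; the F1.x and F7 labels follow their later use in Figures 6.7 through 6.9, and everything else is an assumption for illustration.

```python
# Hedged sketch of the elementary fault classes (structure from the text only;
# the figures themselves are not reproduced, so unstated details are omitted).

ELEMENTARY_FAULT_CLASSES = {
    "HMF": {                 # human-made faults
        "FUA": [             # faults with unauthorized access
            "F1.1",          # malicious attempt: disrupt service (availability)
            "F1.2",          # malicious attempt: improperly modify the system
            "F1.3",          # malicious attempt: access confidential information
        ],
        "NFUA": [            # other faults: unintentional errors, nonrepudiation
        ],
    },
    "NHMF": {                # nonhuman-made faults
        "natural": ["F7"],   # natural faults (internal or external processes)
    },
}
```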


Figure 6.7 shows different types of availability faults. The Boolean operation block performs either "Or" or "And" operations, or both, on the inputs. We provide several examples to explain this structure, considering the case where the Boolean operation box performs "Or" operations. F1.1 (a malicious attempt fault with intent to damage availability) combined with software faults will cause an availability fault. A typical example is the Zotob worm, which can shut down the Windows operating system. It gains access to the system via a software fault (a buffer overflow) in Microsoft's Plug and Play service and attempts to establish permanent access to the system (a back door). F1.1 in combination with hardware faults can also cause an availability fault. F7 (natural faults) can cause an availability fault on its own. F1.1 combined with F8 (networking protocol faults) can cause a denial-of-service fault. Figure 6.8 shows the types of integrity faults.

Figure 6.7. Detailed structure of S1.


Figure 6.8. Detailed structure of S2.


The interpretation of S2 is similar to that of S1. The combination of F1.2 and F2 can alter the function of the software and generate an integrity fault. Combining F1.2 and F4 can generate a person-in-the-middle attack and so on. Figure 6.9 shows types of confidentiality faults.

Figure 6.9. Detailed structure of S3.


The interpretation of S3 is very similar to those of S1 and S2. A combination of F1.3 and F2 can generate a spying type of virus that steals users' logins and passwords. Other combinations are easy to deduce.
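
Taken together, S1 through S3 amount to Boolean combinations over the active elementary faults. The sketch below encodes the "Or" case using the labels stated in the text (F1.1 through F1.3, F7 for natural faults, F8 for networking protocol faults); the labels F2 for software faults, F3 for hardware faults, and F4 for communication faults are our assumptions, inferred from the examples.

```python
# Boolean-combination sketch of S1 (availability), S2 (integrity), and
# S3 (confidentiality) faults, "Or" case. F2 = software, F3 = hardware,
# F4 = communication faults are assumed labels; the rest follow the text.

def availability_fault(active: set[str]) -> bool:     # S1
    combined = "F1.1" in active and bool(active & {"F2", "F3", "F8"})
    return combined or "F7" in active                  # natural faults act alone

def integrity_fault(active: set[str]) -> bool:        # S2
    return "F1.2" in active and bool(active & {"F2", "F4"})

def confidentiality_fault(active: set[str]) -> bool:  # S3
    return "F1.3" in active and "F2" in active

# Zotob-style case: a malicious attempt (F1.1) plus a software buffer
# overflow (F2) yields an availability fault; the attempt alone does not.
print(availability_fault({"F1.1", "F2"}))  # True
print(availability_fault({"F1.1"}))        # False
print(integrity_fault({"F1.2", "F4"}))     # True: person-in-the-middle case
```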

Now let us look at the complex case of a Trojan horse. A Trojan horse may remain quiet for a long time, or even forever, and so it will not cause a service failure during the quiet period. This is hard to model in conventional frameworks. Within our framework, classification requires observing two factors. The first factor is the consequence of introducing the Trojan horse, that is, whether it causes a fault or a combination of faults, such as availability, integrity, and confidentiality faults. If there is no consequence (i.e., no service deviation error) after introducing it, then it is not considered a fault. This conforms to the basic definition of faults. The second factor is whether the intrusion constitutes a malicious attempt. Clearly, a network scan by the system administrator is not considered a fault. When the objective of a Trojan horse is not malicious and it never affects system service, it is not considered a fault in our framework. Such scenarios have not been addressed properly in many other frameworks, where exploit-type activities are characterized as faults even though they may never cause service deviation. If, however, a Trojan horse carries a malicious attempt fault and does cause service deviation, then it is considered a fault, classified by the S1, S2, and S3 components.
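
This two-factor test can be written as a small decision procedure. The return labels are ours; in particular, classifying a nonmalicious deviation as NFUA is our assumption, based on the earlier discussion of unintentional human-made faults.

```python
# Two-factor Trojan horse classification: (1) does it cause a service
# deviation? (2) is it a malicious attempt? Labels are illustrative.

def classify_intrusion(causes_service_deviation: bool, malicious_attempt: bool) -> str:
    if not causes_service_deviation:
        # No consequence, hence no fault (e.g., an administrator's network
        # scan, or a Trojan horse that stays quiet forever).
        return "not a fault"
    if not malicious_attempt:
        return "nonmalicious human-made fault (NFUA)"  # assumed classification
    return "fault, classified via the S1/S2/S3 components"

print(classify_intrusion(False, True))  # quiet Trojan horse: not a fault
print(classify_intrusion(True, True))   # active malicious Trojan horse
```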

Because service failures are mainly due to faults, we concentrate our discussion in this chapter on faults and on the means to attain fault prevention, fault tolerance, fault detection, and fault removal.
