8.1. Introduction

Information and communication technologies (ICT) are tightly interwoven with every aspect of modern society. The paradigm of ubiquitous computing and communication touches upon every activity in our everyday lives. High-capacity networks and web services have turned the Internet into today's main arena for information interchange and electronic commerce, and the Internet plays an increasingly important role in supporting critical applications. In short, we have to trust ICT-based systems for the economic, social, and physical well-being of individuals and organizations. Hence, being able to analyze and quantify the trustworthiness of ICT systems is of utmost importance.

As indicated in Figure 8.1, trustworthiness is determined by the combined dependability and security of a system. Although the interrelation between these properties has long been recognized [1, 2], dependability and security have in the past been analyzed separately and by different means. There are several reasons why this situation should change in favor of a combined approach:

  • It is desirable to have some overall quantification of the trustworthiness of a system. Some attributes are also common to the dependability and security domains (Figure 8.1). We are interested in the overall quality (here, lack of vulnerability) of a system.

  • Many of the threats are common. It is too simplistic to regard random faults as threats toward dependability and intentional faults as threats toward security. This is illustrated in Figure 8.2.

  • In order to have a model that sufficiently reflects reality, it is necessary to include the combined effect of various types of intentional and random faults, system management actions, automatic restoration actions, and the current state of the system.

Figure 8.1. Dependability and security attributes. From Avizienis et al. [1].


Figure 8.2. Trustworthiness subject to random and intentional threats.


The objective pursued in this chapter is to establish a modeling methodology that enables us to take into account both intentional and unintentional threats to a system, and to deal with the interrelated effect of these combined threats and of actions related to system operation. Proceeding toward this objective, we recognize that, in spite of all efforts to make a system secure, it is widely accepted that an ICT system will never be totally secure, due to the unavoidable presence of (undetected) vulnerabilities, design faults, and administrative errors. The threats making a system less than totally secure have a probabilistic nature. As pointed out in Greenwald et al. [3], present security evaluation tools and methodologies are adequate only for securing systems on a small scale. For example, cryptography is one of the most well-studied and rigorously modeled aspects of the security field. Still, cryptography alone is not sufficient to secure a system. Most security breaches are caused by faulty software that can be exploited by, for example, buffer overflows, which unfortunately cannot be prevented by cryptographic techniques. As a consequence, 100% security is very difficult, if not impossible, to achieve.

To be able to rely on the service that an ICT system provides, its users need to know to what extent it can be trusted. There is therefore an urgent need for quantitative modeling methods that can be used to analyze and evaluate the trustworthiness of systems. Today, there exist several methods for assessing the qualitative security level of a system, one of the most well known being the Common Criteria [4]. However, even though such methods give an indication of the quality of the security achieved during design and implementation, they say nothing about how a system will actually behave when operating in a particular threat environment. To be able to measure security, a new approach for quantitative evaluation is needed.

For these reasons, we seek to interpret and assess the security of a system in a probabilistic manner, with the goal of providing a supplement to the established methods for security and dependability assessment. Probabilistic modeling and analysis to determine the dependability attributes of a system, or more precisely, modeling and analysis under the assumption of randomly occurring faults, is well established. In this chapter, we show how dependability modeling and analysis by means of a continuous-time Markov chain (CTMC) can be extended to include intentional faults (called attacks in the rest of the chapter) and used to obtain quantitative probabilistic measures of system security as well as of system trustworthiness.
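To make this idea concrete, the following minimal sketch shows a three-state CTMC in which a system can leave its good state either through a random fault or through a successful attack, with attacks arriving as a Poisson process. The state space, the rate values, and the NumPy-based solution are illustrative assumptions, not the chapter's actual model; the steady-state distribution is obtained by solving πQ = 0 together with the normalization Σπ = 1.

```python
import numpy as np

# Hypothetical rates (per hour); the values are purely illustrative.
lam_f = 1e-3   # rate of random (unintentional) failures
lam_a = 5e-3   # rate of successful attacks (attack process assumed Poissonian)
mu_f  = 1e-1   # repair rate after a random failure
mu_c  = 5e-2   # restoration rate after a successful attack

# Generator matrix Q for the states: 0 = good, 1 = failed, 2 = compromised.
Q = np.array([
    [-(lam_f + lam_a), lam_f,  lam_a],
    [ mu_f,           -mu_f,   0.0  ],
    [ mu_c,            0.0,   -mu_c ],
])

# The steady-state distribution pi solves pi @ Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print("steady-state probabilities:", pi)
print("probability of the good state:", pi[0])
```

In such a model, a dependability measure (e.g., the probability of the failed state), a security measure (e.g., the probability of the compromised state), and an overall trustworthiness measure (e.g., the probability of the good state) are all obtained from the same state transition model.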

8.1.1. Previous Work

Challenges in obtaining quantitative measures of system security and trustworthiness have been recognized by other researchers. In the paper by Littlewood et al. [5], a first step toward operational measures of computer security is discussed. The authors point out the lack of quantitative measures for determining operational security and relate security assessment to the reliability domain. Quantitative measures, such as the mean effort to security breach, are defined and discussed. Ortalo et al. [6] present a quantitative model that measures known Unix security vulnerabilities using a privilege graph, which is transformed into a Markov chain. The model allows the characterization of operational security expressed as the mean effort to security failure, as proposed by Littlewood et al. [5]. Furthermore, Madan et al. and Wang et al. [7-9] use traditional stochastic modeling techniques to capture attacker behavior and a system's response to attacks and intrusions. A quantitative security analysis is carried out for the steady-state behavior of the system. In Singh et al. [10], an approach for probabilistic validation of an intrusion-tolerant replication system is described. The authors provide a hierarchical model using stochastic activity networks (SANs), which can be used to validate intrusion-tolerant systems and evaluate the merits of various design choices. Finally, the paper by Nicol et al. [11] provides a survey of existing model-based techniques for dependability evaluation and summarizes how they are being extended to evaluate security.

The major challenge in establishing probabilistic models that properly reflect reality is to include attacks. The novelty of the approach introduced in this chapter is the recognition that a networked system is under continuous threat from an infinite number of attackers; hence, the potential attack process may be assumed to be Poissonian. In each state of a system, the actions that attackers actually take should ideally be represented as a probability distribution over the possible attack actions. Therefore, we define and make use of attacker strategies as a part of the transition probabilities between states. To compute the expected attacker strategies, we use stochastic game theory. Game theory has been applied in security-related contexts before. A decision and control framework for a distributed intrusion detection system (IDS), in which game theory is used to model and analyze attacker and IDS behavior, is proposed in Alpcan and Basar [12]. In Liu and Zang [13], a preliminary framework for modeling attacker intent, objectives, and strategies (AIOS) is presented. In Lye and Wing [14], a game-theoretic method for analyzing the security of computer networks is described; the interactions between an attacker and an administrator are modeled as a two-player stochastic game for which best-response strategies (Nash equilibria) are computed. The approach presented in this chapter has been developed in a series of papers [15-18]. Stochastic game theory is used to compute the expected attacker behavior, and the resulting optimal strategies are introduced as part of the transition probabilities in the state transition models. The applied game-theoretic method is heavily inspired by the work in Lye and Wing [14]. However, our main focus is on predicting attacker behavior rather than on finding optimal defense strategies for system administrators. Hence, we use a zero-sum game rather than a general-sum model. Moreover, we model the outcome of the game elements as the possible consequences of an attacker's actions being detected or not.
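As a simple illustration of this game-theoretic ingredient, the sketch below computes an attacker's optimal mixed strategy for a single two-player zero-sum game element by linear programming. The 2x2 payoff matrix, the action labels, and the use of SciPy are assumptions made only for this example and are not taken from the chapter's models; the resulting probability distribution over attack actions is the kind of quantity that would enter the transition probabilities of the state transition model.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical payoff matrix of a single zero-sum game element:
# rows are attacker actions, columns are defender responses, and the
# entries are payoffs to the attacker. All values are illustrative.
A = np.array([
    [ 2.0, -1.0],   # attacker action 1
    [-1.0,  1.0],   # attacker action 2
])
m, n = A.shape

# Maximin LP: maximize v subject to (A^T x)_j >= v for every column j,
# sum(x) = 1, x >= 0.  Decision vector z = [x_1, ..., x_m, v];
# linprog minimizes, so the objective is -v.
c = np.zeros(m + 1)
c[-1] = -1.0
A_ub = np.hstack([-A.T, np.ones((n, 1))])   # encodes v - (A^T x)_j <= 0
b_ub = np.zeros(n)
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
b_eq = np.array([1.0])
bounds = [(0, None)] * m + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
attacker_strategy, game_value = res.x[:m], res.x[-1]
print("expected attacker strategy:", attacker_strategy)  # [0.4, 0.6] for this matrix
print("game value:", game_value)                          # 0.2
```

In the chapter's setting, the payoffs of a game element would reflect the consequences of an attack action being detected or not, and the computed mixed strategy would serve as the expected attacker behavior in the corresponding system state.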

8.1.2. Outline

Modeling and analysis of systems subject to random failures, and of the handling of such failures, is a well-established discipline, and it is assumed that the reader is familiar with dependability modeling by continuous-time Markov chains (CTMCs). In this chapter, the focus is on including intrusions (intentional faults) and the detection and handling of these. Modeling attacks as a stochastic process and the construction of CTMCs for combined dependability and security evaluation are discussed in Section 8.2. Attacker behavior is predicted by means of game theory; the procedure for doing so is detailed in Sections 8.3 and 8.4. Section 8.5 discusses how attackers can be expected to behave depending on their risk awareness. A small case study of how our model can be used to evaluate the trustworthiness of a DNS system is presented in Section 8.6.
