Chapter 35

Risk Management

Sokratis K. Katsikas, University of Piraeus

Integrating security measures with the operational framework of an organization is neither a trivial nor an easy task. This explains to a large extent the low degree of security that information systems operating in contemporary businesses and organizations enjoy. Some of the most important difficulties that security professionals face when confronted with the task of introducing security measures in businesses and organizations are:

• The difficulty of justifying the cost of the security measures

• The difficulty of establishing communication between technical and administrative personnel

• The difficulty of ensuring the active participation of users in the effort to secure the information system and of committing higher management to continuously supporting the effort

• The widely held but erroneous perception of information systems security as a purely technical issue

• The difficulty of developing an integrated, efficient, and effective information systems security plan

• The difficulty of identifying and assessing the organizational impact that the implementation of a security plan entails

The difficulty in justifying the cost of the security measures, particularly those of a procedural and administrative nature, stems from the very nature of security itself. Indeed, the need for a security measure can only be demonstrated “after the (unfortunate) event,” whereas, at the same time, there is no way to prove that already implemented measures can adequately cope with a potential new threat. Moreover, this cost pertains not only to acquiring and installing protection mechanisms and tools; it also includes the cost of human resources, the cost of educating users and raising their awareness, and the cost of carrying out tasks and procedures relevant to security.

The difficulty in expressing the cost of the security measures in monetary terms is one of the fundamental factors that make communication between technical and administrative personnel difficult. An immediate consequence is the difficulty of securing the continuous commitment of higher management to supporting the security enhancement effort. This becomes even harder when organizational and procedural security measures are proposed: both management and users are concerned about the impact of these measures on their usual practice, particularly when the widely accepted notion that security is purely a technical issue is called into doubt.

Moreover, protecting an information system calls for an integrated, holistic study that will answer questions such as: Which elements of the information system do we want to protect? Which, among these, are the most important ones? What threats is the information system facing? What are its vulnerabilities? What security measures must be put in place? Answering these questions gives a good picture of the current state of the information system with regard to its security. As research1 has shown, developing techniques and measures for security is not enough, since the most vulnerable point in any information system is the human user, operator, designer, or other human. Therefore, the development and operation of secure information systems must equally consider and take account of both technical and human factors. At the same time, the threats that an information system faces are characterized by variety, diversity, complexity, and continuous variation. As the technological and societal environment continuously evolves, threats change and evolve, too. Furthermore, both information systems and threats against them are dynamic; hence the need for continuous monitoring and managing of the information system security plan.

The most widely used methodology that aims at dealing with these issues is the information systems risk management methodology. This methodology adopts the concept of risk that originates in financial management, and substitutes the unachievable and immeasurable goal of fully securing the information system with the achievable and measurable goal of reducing the risk that the information system faces to within acceptable limits.

1. The Concept of Risk

The concept of risk originated in the 17th century with the mathematics associated with gambling. At that time, risk referred to a combination of probability and magnitude of potential gains and losses. During the 18th century, risk, seen as a neutral concept, still considered both gains and losses and was employed in the marine insurance business. In the 19th century, risk emerged in the study of economics; seen more negatively, it caused entrepreneurs to call for special incentives to take the risk involved in investment. By the 20th century, risk carried a wholly negative connotation when referring to outcomes in engineering and science, with particular reference to the hazards posed by modern technological developments.2,3

Within the field of IT security, the risk R is calculated as the product of P, the probability of an exposure occurring a given number of times per year, and C, the cost or loss attributed to such an exposure; that is, R = P x C.4
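
To make this quantitative view concrete, the following minimal sketch computes an annualized risk figure as the product of an assumed yearly exposure rate and an assumed loss per exposure. The figures and the function name are invented for illustration only.

```python
# Minimal sketch of the quantitative view R = P x C, with invented figures.

def annualized_risk(exposures_per_year: float, cost_per_exposure: float) -> float:
    """Return the expected yearly loss for a single type of exposure."""
    return exposures_per_year * cost_per_exposure

# Hypothetical example: a laptop theft expected once every four years,
# costing 3,000 (in local currency units) per incident.
print(annualized_risk(exposures_per_year=0.25, cost_per_exposure=3000))  # 750.0
```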

The most recent standardized definition of risk comes from the ISO,5 voted for on April 19, 2008, where the information security risk is defined as “the potential that a given threat will exploit vulnerabilities of an asset or group of assets and thereby cause harm to the organization.”

To complete this definition, definitions of the terms threat, vulnerability, and asset are in order. These are as follows: A threat is “a potential cause of an incident, that may result in harm to system or organization.” A vulnerability is “a weakness of an asset or group of assets that can be exploited by one or more threats.” An asset is “anything that has value to the organization, its business operations and their continuity, including information resources that support the organization’s mission.”6 Additionally, harm results in impact, which is “an adverse change to the level of business objectives achieved.”7 The relationships among these basic concepts are pictorially depicted in Figure 35.1.


Figure 35.1 Risk and its related concepts.

2. Expressing and Measuring Risk

As noted, risk “is measured in terms of a combination of the likelihood of an event and its consequence.”8 Because we are interested in events related to information security, we define an information security event as “an identified occurrence of a system, service or network state indicating a possible breach of information security policy or failure of safeguards, or a previously unknown situation that may be security relevant.”9 Additionally, an information security incident is “indicated by a single or a series of unwanted information security events that have a significant probability of compromising business operations and threatening information security.”10 These definitions actually invert the investment assessment model, where an investment is considered worth making when its cost is less than the product of the expected profit times the likelihood of the profit occurring. In our case, the risk R is defined as the product of the likelihood L of a security incident occurring times the impact I that the organization will incur due to the incident, that is, R = L x I.11

To measure risk, we adopt the fundamental principles and the scientific background of statistics and probability theory, particularly of the area known as Bayesian statistics, after the mathematician Thomas Bayes (1702–1761), who formalized the namesake theorem. Bayesian statistics is based on the view that the likelihood of an event happening in the future is measurable. This likelihood can be calculated if the factors affecting it are analyzed. For example, we can compute the probability that our data will be stolen as a function of the probability that an intruder will attempt to break into our system and the probability that the attempt will succeed. In risk analysis terms, the former probability corresponds to the likelihood of the threat occurring and the latter corresponds to the likelihood of the vulnerability being successfully exploited. Thus, risk analysis assesses the likelihood that a security incident will happen by analyzing and assessing the factors that are related to its occurrence, that is, the threats and the vulnerabilities. Subsequently, it combines this likelihood with the impact resulting from the incident occurring to calculate the system risk. Risk analysis is a necessary prerequisite for subsequently treating risk. Risk treatment pertains to controlling the risk so that it remains within acceptable levels. Risk can be reduced by applying security measures; it can be transferred, perhaps by insuring; it can be avoided; or it can be accepted, in the sense that the organization accepts the likely impact of a security incident.

The likelihood of a security incident occurring is a function of the likelihood that a threat appears and of the likelihood that the threat can successfully exploit the relevant system vulnerabilities. The consequences of the occurrence of a security incident are a function of the likely impact that the incident will have on the organization as a result of the harm that the organization’s assets will sustain. Harm, in turn, is a function of the value of the assets to the organization. Thus, the risk R is a function of four elements: (a) A, the value of the assets; (b) T, the severity and likelihood of appearance of the threats; (c) V, the nature and the extent of the vulnerabilities and the likelihood that a threat can successfully exploit them; and (d) I, the likely impact of the harm should the threat succeed; that is, R = f(A, T, V, I).

If the impact is expressed in monetary terms, the likelihood being dimensionless, then risk can be also expressed in monetary terms. This approach has the advantage of making the risk directly comparable to the cost of acquiring and installing security measures. Since security is often one of several competing alternatives for capital investment, the existence of a cost/benefit analysis that would offer proof that security will produce benefits that equal or exceed its cost is of great interest to the management of the organization. Of even more interest to management is the analysis of the investment opportunity costs, that is, its comparison to other capital investment options.12 However, expressing risk in monetary terms is not always possible or desirable, since harm to some kinds of assets (e.g., human life) cannot (and should not) be assessed in monetary terms. This is why risk is usually expressed in nonmonetary terms, on a simple dimensionless scale.

Assets in an organization are usually quite diverse. Because of this diversity, it is likely that some assets that have a known monetary value (e.g., hardware) can be valued in the local currency, whereas others of a more qualitative nature (e.g., data or information) may be assigned a numerical value based on the organization’s perception of their value. This value is assessed in terms of the assets’ importance to the organization or their potential value in different business opportunities. The legal and business requirements are also taken into account, as are the impacts to the asset itself and to the related business interests resulting from a loss of one or more of the information security attributes (confidentiality, integrity, availability). One way to express asset values is to use the business impacts that unwanted incidents, such as disclosure, modification, nonavailability, and/or destruction, would have to the asset and the related business interests that would be directly or indirectly damaged. An information security incident can impact more than one asset or only a part of an asset. Impact is related to the degree of success of the incident. Impact is considered as having either an immediate (operational) effect or a future (business) effect that includes financial and market consequences. Immediate (operational) impact is either direct or indirect.

Direct impact may result from the financial replacement value of a lost asset (or part of one); the cost of acquiring, configuring, and installing the new asset or a backup; or the cost of operations suspended due to the incident until the service provided by the asset(s) is restored. Indirect impact may result because financial resources needed to replace or repair an asset would have been used elsewhere (opportunity cost), from the cost of interrupted operations, from potential misuse of information obtained through a security breach, or from violation of statutory or regulatory obligations or of ethical codes of conduct.13

These considerations should be reflected in the asset values. This is why asset valuation (particularly of intangible assets) is usually done through impact assessment. Thus, impact valuation is not performed separately but is rather embedded within the asset valuation process.

The responsibility for identifying a suitable asset valuation scale lies with the organization. Usually, a three-value scale (low, medium, and high) or a five-value scale (negligible, low, medium, high, and very high) is used.14

Threats can be classified as deliberate or accidental. The likelihood of deliberate threats depends on the motivation, knowledge, capacity, and resources available to possible attackers and on the attractiveness of assets to sophisticated attacks. On the other hand, the likelihood of accidental threats can be estimated using statistics and experience. The likelihood of these threats might also be related to the organization’s proximity to sources of danger, such as major roads or rail routes, and factories dealing with dangerous material such as chemicals or petroleum. The organization’s geographical location will also affect the possibility of extreme weather conditions. The likelihood of human errors (one of the most common accidental threats) and equipment malfunction should also be estimated.15 As already noted, the responsibility for identifying a suitable threat valuation scale lies with the organization. What is important here is that the interpretation of the levels is consistent throughout the organization and clearly conveys the differences between the levels to those responsible for providing input to the threat valuation process. For example, if a three-value scale is used, the value low can be interpreted to mean that it is not likely that the threat will occur; there are no incidents, statistics, or motives that indicate that this is likely to happen. The value medium can be interpreted to mean that it is possible that the threat will occur; there have been incidents in the past, or statistics or other information indicate that this or similar threats have occurred before, or there is an indication that an attacker might have some reason to carry out such an action. Finally, the value high can be interpreted to mean that the threat is expected to occur; there are incidents, statistics, or other information that indicate that the threat is likely to occur, or there might be strong reasons or motives for an attacker to carry out such an action.16

Vulnerabilities can be related to the physical environment of the system; to the personnel, management, and administration procedures and security measures within the organization; to the business operations and service delivery; or to the hardware, software, or communications equipment and facilities. Vulnerabilities are reduced by installed security measures. The nature and extent, as well as the likelihood of a threat successfully exploiting the first three classes of vulnerabilities, can be estimated based on information on past incidents, on new developments and trends, and on experience. The nature and extent, as well as the likelihood of a threat successfully exploiting the last class, often termed technical vulnerabilities, can be estimated using automated vulnerability-scanning tools, security testing and evaluation, penetration testing, or code review.17 As in the case of threats, the responsibility for identifying a suitable vulnerability valuation scale lies with the organization. If a three-value scale is used, the value low can be interpreted to mean that the vulnerability is hard to exploit and the protection in place is good. The value medium can be interpreted to mean that the vulnerability might be exploited but some protection is in place. The value high can be interpreted to mean that it is easy to exploit the vulnerability and there is little or no protection in place.18
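
As a minimal sketch of how the three-value scales described above might be encoded, the snippet below pairs each level with an interpretation paraphrased from the text. The enum and dictionary names are illustrative choices, not part of any standard.

```python
from enum import Enum

class Level(Enum):
    """Three-value qualitative scale used for both threats and vulnerabilities."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3

# Illustrative interpretations paraphrasing the text above; an organization
# would agree on and document its own wording.
THREAT_INTERPRETATION = {
    Level.LOW: "Not likely to occur; no incidents, statistics, or motives suggest it.",
    Level.MEDIUM: "Possible; similar incidents have occurred or some motive exists.",
    Level.HIGH: "Expected to occur; incidents, statistics, or strong motives indicate it.",
}

VULNERABILITY_INTERPRETATION = {
    Level.LOW: "Hard to exploit; good protection is in place.",
    Level.MEDIUM: "Might be exploited; some protection is in place.",
    Level.HIGH: "Easy to exploit; little or no protection is in place.",
}
```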

3. The Risk Management Methodology

The term methodology means an organized set of principles and rules that drives action in a particular field of knowledge. A method is a systematic and orderly procedure or process for attaining some objective. A tool is any instrument or apparatus that is necessary to the performance of some task. Thus, a methodology is the study or description of methods.19 A methodology is instantiated and materialized by a set of methods, techniques, and tools. A methodology does not describe specific methods; nevertheless, it does specify several processes that need to be followed. These processes constitute a generic framework. They may be broken down into subprocesses, they may be combined, or their sequence may change. However, every risk management exercise must carry out these processes in some form or another.

Risk management consists of six processes, namely context establishment, risk assessment, risk treatment, risk acceptance, risk communication, and risk monitoring and review.20 This is more or less in line with the approach where four processes are identified as the constituents of risk management, namely, putting information security risks in the organizational context, risk assessment, risk treatment, and management decision-making and ongoing risk management activities.21 Alternatively, risk management is seen to comprise three processes, namely risk assessment, risk mitigation, and evaluation and assessment.22 Table 35.1 depicts the relationships among these processes.23,24

Table 35.1

Risk management constituent processes

ISO/IEC FDIS 27005:2008 (E) | BS 7799-3:2006                                | SP 800-30
Context establishment       | Organizational context                        |
Risk assessment             | Risk assessment                               | Risk assessment
Risk treatment              | Risk treatment and management decision making | Risk mitigation
Risk acceptance             |                                               |
Risk communication          | Ongoing risk management activities            |
Risk monitoring and review  |                                               | Evaluation and assessment

Context Establishment

The context establishment process receives as input all relevant information about the organization. Establishing the context for information security risk management determines the purpose of the process. It involves setting the basic criteria to be used in the process, defining the scope and boundaries of the process, and establishing an appropriate organization to operate the process. The output of the context establishment process is the specification of these parameters.

The purpose may be to support an information security management system (ISMS); to comply with legal requirements and to provide evidence of due diligence; to prepare for a business continuity plan; to prepare for an incident reporting plan; or to describe the information security requirements for a product, a service, or a mechanism. Combinations of these purposes are also possible.

The basic criteria include risk evaluation criteria, impact criteria, and risk acceptance criteria. When setting risk evaluation criteria, the organization should consider the strategic value of the business information process; the criticality of the information assets involved; legal and regulatory requirements and contractual obligations; the operational and business importance of the attributes of information security; and stakeholders’ expectations and perceptions, together with negative consequences for goodwill and reputation. The impact criteria specify the degree of damage or costs to the organization caused by an information security event. Developing impact criteria involves considering the level of classification of the impacted information asset; breaches of information security; impaired operations; loss of business and financial value; disruption of plans and deadlines; damage to reputation; and breaches of legal, regulatory, or contractual requirements. The risk acceptance criteria depend on the organization’s policies, goals, and objectives and on the interests of its stakeholders. When developing risk acceptance criteria, the organization should consider business criteria; legal and regulatory aspects; operations; technology; finance; and social and humanitarian factors.25

The scope of the process needs to be defined to ensure that all relevant assets are taken into account in the subsequent risk assessment. Any exclusion from the scope needs to be justified. Additionally, the boundaries need to be identified to address those risks that might arise through these boundaries. When defining the scope and boundaries, the organization needs to consider its strategic business objectives, strategies, and policies; its business processes; its functions and structure; applicable legal, regulatory, and contractual requirements; its information security policy; its overall approach to risk management; its information assets; its locations and their geographical characteristics; constraints that affect it; expectations of its stakeholders; its socio-cultural environment; and its information exchange with its environment. This involves studying the organization (i.e., its main purpose, its business; its mission; its values; its structure; its organizational chart; and its strategy). It also involves identifying its constraints. These may be of a political, cultural, or strategic nature; they may be territorial, organizational, structural, functional, personnel, budgetary, technical, or environmental constraints; or they could be constraints arising from preexisting processes. Finally, it entails identifying legislation, regulations, and contracts.26

Setting up and maintaining the organization for information security risk management fulfills part of the requirement to determine and provide the resources needed to establish, implement, operate, monitor, review, maintain, and improve an ISMS.27 The organization to be developed will bear responsibility for the development of the information security risk management process suitable for the organization; for the identification and analysis of the stakeholders; for the definition of roles and responsibilities of all parties, both external and internal to the organization; for the establishment of the required relationships between the organization and stakeholders, interfaces to the organization’s high-level risk management functions, as well as interfaces to other relevant projects or activities; for the definition of decision escalation paths; and for the specification of records to be kept. Key roles in this organization are the senior management; the chief information officer (CIO); the system and information owners; the business and functional managers; the information systems security officers (ISSO); the IT security practitioners; and the security awareness trainers (security/subject matter professionals).28 Additional roles that can be explicitly defined are those of the risk assessor and of the security risk manager.29

Risk Assessment

This process comprises two subprocesses, namely risk analysis and risk evaluation. Risk analysis, in turn, comprises risk identification and risk estimation. The process receives as input the output of the context establishment process. It identifies risks, quantifies or qualitatively describes them, and prioritizes them against the risk evaluation criteria established during the context establishment process and according to objectives relevant to the organization. It is often conducted in more than one iteration: the first is a high-level assessment aiming at identifying potentially high risks that warrant further assessment, whereas the second and possibly subsequent iterations entail further in-depth examination of the potentially high risks revealed in the first iteration. The output of the process is a list of assessed risks prioritized according to risk evaluation criteria.30

Risk identification seeks to determine what could happen to cause a potential loss and to gain insight into how, where, and why the loss might happen. It involves a number of steps, namely identification of assets; identification of threats; identification of existing security measures; identification of vulnerabilities; and identification of consequences. Input to the subprocess is the scope and boundaries for the risk assessment to be conducted, an asset inventory, information on possible threats, documentation of existing security measures, possibly preexisting risk treatment implementation plans, and the list of business processes. The output of the subprocess is a list of assets to be risk-managed together with a list of business processes related to these assets; a list of threats on these assets; a list of existing and planned security measures, their implementation and usage status; a list of vulnerabilities related to assets, threats and already installed security measures; a list of vulnerabilities that do not relate to any identified threat; and a list of incident scenarios with their consequences, related to assets and business processes.31

Two kinds of assets can be distinguished, namely primary assets, which include business processes and activities and information, and supporting assets, which include hardware, software, network, personnel, site, and the organization’s structure. Hardware assets comprise data-processing equipment (transportable and fixed), peripherals, and media. Software assets comprise the operating system; service, maintenance or administration software; and application software. Network assets comprise medium and supports, passive or active relays, and communication interfaces. Personnel assets comprise decision makers, users, operation/maintenance staff, and developers. The site assets comprise the location (and its external environment, premises, zone, essential services, communication and utilities characteristics) and the organization (and its authorities, structure, the project or system organization and its subcontractors, suppliers and manufacturers).32
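
A minimal data-structure sketch of this asset classification is shown below; the class names, fields, and sample entries are hypothetical and would be replaced by the organization's own inventory schema.

```python
from dataclasses import dataclass
from enum import Enum

class AssetCategory(Enum):
    # Primary assets
    BUSINESS_PROCESS = "business process or activity"
    INFORMATION = "information"
    # Supporting assets
    HARDWARE = "hardware"
    SOFTWARE = "software"
    NETWORK = "network"
    PERSONNEL = "personnel"
    SITE = "site"
    ORGANIZATION = "organization structure"

@dataclass
class Asset:
    name: str
    category: AssetCategory
    owner: str                    # person or unit accountable for the asset
    value: int                    # e.g., 0-4 on the organization's valuation scale
    related_processes: list[str]  # business processes that depend on this asset

# Hypothetical inventory entries, for illustration only.
inventory = [
    Asset("Customer database", AssetCategory.INFORMATION, "Sales", 4, ["order handling"]),
    Asset("Mail server", AssetCategory.HARDWARE, "IT", 2, ["internal communication"]),
]
```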

Threats are classified according to their type and to their origin. Threat types are physical damage (e.g., fire, water, pollution); natural events (e.g., climatic phenomenon, seismic phenomenon, volcanic phenomenon); loss of essential services (e.g., failure of air-conditioning, loss of power supply, failure of telecommunication equipment); disturbance due to radiation (electromagnetic radiation, thermal radiation, electromagnetic pulses); compromise of information (eavesdropping, theft of media or documents, retrieval of discarded or recycled media); technical failures (equipment failure, software malfunction, saturation of the information system); unauthorized actions (fraudulent copying of software, corruption of data, unauthorized use of equipment); and compromise of functions (error in use, abuse of rights, denial of actions).33 Threats are classified according to origin into deliberate, accidental or environmental. A deliberate threat is an action aiming at information assets (e.g., remote spying, illegal processing of data); an accidental threat is an action that can accidentally damage information assets (equipment failure, software malfunction); and an environmental threat is any threat that is not based on human action (a natural event, loss of power supply). Note that a threat type may have multiple origins.

Vulnerabilities are classified according to the asset class they relate to. Therefore, vulnerabilities are classified as hardware (e.g., susceptibility to humidity, dust, soiling; unprotected storage); software (no or insufficient software testing, lack of audit trail); network (unprotected communication lines, insecure network architecture); personnel (inadequate recruitment processes, lack of security awareness); site (location in an area susceptible to flood, unstable power grid); and organization (lack of regular audits, lack of continuity plans).34

Risk estimation is done either quantitatively or qualitatively. Qualitative estimation uses a scale of qualifying attributes to describe the magnitude of potential consequences (e.g., low, medium or high) and the likelihood that these consequences will occur. Quantitative estimation uses a scale with numerical values for both consequences and likelihood. In practice, qualitative estimation is used first, to obtain a general indication of the level of risk and to reveal the major risks. It is then followed by a quantitative estimation on the major risks identified.

Risk estimation involves a number of steps, namely assessment of consequences (through valuation of assets); assessment of incident likelihood (through threat and vulnerability valuation); and assigning values to the likelihood and the consequences of a risk. We discussed valuation of assets, threats, and vulnerabilities in an earlier section. Input to the subprocess is the output of the risk identification subprocess. Its output is a list of risks with value levels assigned.

Having assigned values to assets, threats, and vulnerabilities, we would be able to calculate the resulting risk if the function relating these elements to risk were known. Establishing an analytic function for this purpose is probably impossible and certainly ineffective. This is why, in practice, an empirical matrix is used instead. Such a matrix, an example of which is shown in Table 35.2, links asset values and threat and vulnerability levels to the resulting risk. In this example, asset values are expressed on a 0–4 scale, whereas threat and vulnerability levels are expressed on a Low-Medium-High scale. The risk values are expressed on a scale of 1 to 8. When linking the asset values and the threats and vulnerabilities, consideration needs to be given to whether the threat/vulnerability combination could cause problems to confidentiality, integrity, and/or availability. Depending on the results of these considerations, the appropriate asset value should be chosen, that is, the one that has been selected to express the impact of a loss of confidentiality, of integrity, or of availability. Using this method can lead to multiple risks for each asset, depending on the particular threat/vulnerability combination considered.35 An illustrative sketch of such a matrix lookup follows Table 35.2.

Table 35.2

Example Risk Calculation Matrix36

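Because the cell values of Table 35.2 are agreed on by each organization, the sketch below does not reproduce them; instead it uses an invented, monotone scoring rule (asset value plus threat and vulnerability indices, clamped to the 1–8 scale mentioned above) purely to show how such a matrix lookup could be implemented. A real implementation would embed the organization's agreed matrix verbatim.

```python
# Illustrative only: combines an asset value (0-4 scale) with qualitative
# threat and vulnerability levels (Low/Medium/High) into a risk score on the
# 1-8 scale used in the example. The scoring rule is invented, not taken from
# Table 35.2; an organization would hard-code its own agreed matrix instead.
LEVEL_INDEX = {"low": 0, "medium": 1, "high": 2}

def risk_score(asset_value: int, threat: str, vulnerability: str) -> int:
    if not 0 <= asset_value <= 4:
        raise ValueError("asset value must be on the 0-4 scale")
    score = asset_value + LEVEL_INDEX[threat.lower()] + LEVEL_INDEX[vulnerability.lower()]
    return max(1, min(8, score))  # clamp to the 1-8 risk scale

print(risk_score(3, "medium", "high"))  # 6 under this invented rule
```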

Finally, the risk evaluation process receives as input the output of the risk analysis process. It compares the levels of risk against the risk evaluation criteria and risk acceptance criteria that were established within the context establishment process. The process uses the understanding of risk obtained by the risk assessment process to make decisions about future actions. These decisions include whether an activity should be undertaken and setting priorities for risk treatment. The output of the process is a list of risks prioritized according to the risk evaluation criteria, in relation to the incident scenarios that lead to those risks.

Risk Treatment

When the risk has been calculated, the risk assessment process finishes. However, the ultimate goal is treating the risk. The risk treatment process aims at selecting security measures to reduce, retain, avoid, or transfer the risks and at defining a risk treatment plan. The process receives as input the output of the risk assessment process and produces as output the risk treatment plan and the residual risks, subject to the acceptance decision by the management of the organization.

The options available to treat risk are to reduce it, to accept it, to avoid it, or to transfer it. Combinations of these options are also possible. The factors that might influence the decision are the cost each time the incident related to the risk happens; how frequently it is expected to happen; the organization’s attitude toward risk; the ease of implementation of the security measures required to treat the risk; the resources available; the current business/technology priorities; and organizational and management politics.37

For all those risks where the option to reduce the risk has been chosen, appropriate security measures should be implemented to reduce the risks to the level that has been identified as acceptable, or at least as much as is feasible toward that level. These questions then arise: How much can we reduce the risk? Is it possible to achieve zero risk?

Zero risk is possible when either the cost of an incident is zero or the likelihood of the incident occurring is zero. The cost of an incident is zero when the value of the implicated asset is zero or when the impact to the organization is zero. Therefore, if one or more of these conditions are found to hold during the risk assessment process, it is meaningless to take security measures. On the other hand, a zero likelihood of an incident occurring is not attainable, because the threats faced by an open system operating in a dynamic, hence highly variable, environment (as contemporary information systems do) and the causes that generate them are extremely complex; because human behavior, which is extremely difficult to predict and model, plays a very important role in securing information systems; and because the resources that a business or organization has at its disposal are finite.

When faced with a nonzero risk, our interest focuses on reducing the risk to acceptable levels. Because risk is a nondecreasing function in all its constituents, security measures can reduce it by reducing these constituents. Since the asset value cannot be directly reduced,38 it is possible to reduce risk by reducing the likelihood of the threat occurring, the likelihood of the vulnerability being successfully exploited, or the impact should the threat succeed. Which of these ways (or which combination of them) an organization chooses to adopt to protect its assets is a business decision and depends on the business requirements, the environment, and the circumstances in which the organization needs to operate. There is no universal or common approach to the selection of security measures. One possibility, sketched below, is to assign numerical values to the efficiency of each security measure, on a scale that matches the one in which risks are expressed, and to select all security measures that are relevant to the particular risk and have an efficiency score at least equal to the value of the risk. Several sources provide lists of potential security measures.39,40 Documenting the selected security measures is important in supporting certification and enables the organization to track the implementation of the selected security measures.
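
The selection rule just described (keep every relevant measure whose efficiency score is at least the risk value) could be prototyped roughly as follows; the data structures, names, and scores are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SecurityMeasure:
    name: str
    relevant_risks: set[str]  # identifiers of the risks this measure addresses
    efficiency: int           # scored on the same scale as the risks

def select_measures(risk_id: str, risk_value: int,
                    catalogue: list[SecurityMeasure]) -> list[SecurityMeasure]:
    """Return every catalogued measure relevant to the risk whose efficiency
    score is at least equal to the risk value."""
    return [m for m in catalogue
            if risk_id in m.relevant_risks and m.efficiency >= risk_value]

# Hypothetical catalogue and risk, for illustration only.
catalogue = [
    SecurityMeasure("Full-disk encryption", {"laptop-theft"}, 6),
    SecurityMeasure("Clear-desk policy", {"laptop-theft", "data-disclosure"}, 3),
]
print([m.name for m in select_measures("laptop-theft", 5, catalogue)])
# ['Full-disk encryption'] under these hypothetical scores
```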

When considering reducing a risk, several constraints may appear. These may be related to the timeframe; to financial or technical issues; to the way the organization operates or to its culture; to the environment within which the organization operates; to the applicable legal framework or to ethics; to the ease of use of the appropriate security measures; to the availability and suitability of personnel; or to the difficulties of integrating new and existing security measures. Due to the existence of these constraints, it is likely that some risks will exist for which either the organization cannot install appropriate security measures or for which the cost of implementing appropriate measures outweighs the potential loss through the incident related to the risk occurring. In these cases, a decision may be made to accept the risk and live with the consequences if the incident related to the risk occurs. These decisions must be documented so that management is aware of its risk position and can knowingly accept the risk. The importance of this documentation has led risk acceptance to be identified as a separate process.41 A special case where particular attention must be paid is when an incident related to a risk is deemed to be highly unlikely to occur but, if it occurred, the organization would not survive. If such a risk is deemed to be unacceptable but too costly to reduce, the organization could decide to transfer it.

Risk transfer is an option when it is difficult for the organization to reduce the risk to an acceptable level or when the risk can be more economically transferred to a third party. Risks can be transferred using insurance; in this case, the question of what constitutes a fair premium arises.42 Another possibility is to use third parties or outsourcing partners to handle critical business assets or processes, provided they are suitably equipped to do so. Combining both options is also possible; in this case, the fair premium may be determined.43

Risk avoidance describes any action whereby the business activities, or the ways in which business is conducted, are changed so that a risk no longer arises. For example, risk avoidance can be achieved by not conducting certain business activities, by moving assets away from an area of risk, or by deciding not to process particularly sensitive information. In choosing avoidance, the organization acknowledges the impact that an incident would be likely to have but, rather than installing the security measures required to reduce the risk, elects not to undertake the activity that gives rise to it. This option is exercised in several cases, particularly when the required measures contradict the culture and/or the policy of the organization.

After the risk treatment decision(s) have been taken, there will always be risks remaining. These are called residual risks. Residual risks can be difficult to assess, but at least an estimate should be made to ensure that sufficient protection is achieved. If the residual risk is unacceptable, the risk treatment process may be repeated.

Once the risk treatment decisions have been taken, the activities to implement these decisions need to be identified and planned. The risk treatment plan needs to identify limiting factors and dependencies, priorities, deadlines and milestones, resources, including any necessary approvals for their allocation, and the critical path of the implementation.

Risk Communication

Risk communication is a horizontal process that interacts bidirectionally with all other processes of risk management. Its purpose is to establish a common understanding of all aspects of risk among all the organization’s stakeholders. Common understanding does not come automatically, since it is likely that perceptions of risk vary widely due to differences in assumptions, needs, concepts, and concerns. Establishing a common understanding is important, since it influences decisions to be taken and the ways in which such decisions are implemented. Risk communication must be made according to a well-defined plan that should include provisions for risk communication under both normal and emergency conditions.

Risk Monitoring and Review

Risk management is an ongoing, never-ending process that is assigned to an individual, a team, or an outsourced third party, depending on the organization’s size and operational characteristics. Within this process, implemented security measures are regularly monitored and reviewed to ensure that they function correctly and effectively and that changes in the environment have not rendered them ineffective. Because over time there is a tendency for the performance of any service or mechanism to deteriorate, monitoring is intended to detect this deterioration and initiate corrective action. Maintenance of security measures should be planned and performed on a regular, scheduled basis.

The results from an original security risk assessment exercise need to be regularly reviewed for change, since there are several factors that could change the originally assessed risks. Such factors may be the introduction of new business functions, a change in business objectives and/or processes, a review of the correctness and effectiveness of the implemented security measures, the appearance of new or changed threats and/or vulnerabilities, or changes external to the organization. After all these different changes have been taken into account, the risk should be recalculated and necessary changes to the risk treatment decisions and security measures identified and documented.

Regular internal audits should be scheduled and should be conducted by an independent party that does not need to be from outside the organization. Internal auditors should not be under the supervision or control of those responsible for the implementation or daily management of the ISMS. Additionally, audits by an external body are not only useful, they are essential for certification.

Finally, complete, accessible, and correct documentation and a controlled process to manage documents are necessary to support the ISMS, although the scope and detail will vary from organization to organization. Aligning these documentation details with the documentation requirements of other management systems, such as ISO 9001, is certainly possible and constitutes good practice. Figure 35.2 pictorially summarizes the different processes within the risk management methodology, as we discussed earlier.


Figure 35.2 The risk management methodology.

Integrating Risk Management into the System Development Life Cycle

Risk management must be fully integrated into the system development life cycle. This cycle consists of five phases: initiation; development or acquisition; implementation; operation or maintenance; and disposal. Within the initiation phase, identified risks are used to support the development of the system requirements, including security requirements and a security concept of operations. In the development or acquisition phase, the risks identified can be used to support the security analyses of the system, which may lead to architecture and design tradeoffs during system development. In the implementation phase, the risk management process supports the assessment of the system implementation against its requirements and within its modeled operational environment. In the operation or maintenance phase, risk management activities are performed whenever major changes are made to a system in its operational environment. Finally, in the disposal phase, risk management activities are performed for system components that will be disposed of or replaced, to ensure that the hardware and software are properly disposed of, that residual data is appropriately handled, and that system migration is conducted in a secure and systematic manner.44

Critique of Risk Management as a Methodology

Risk management as a scientific methodology has been criticized as being shallow. The main reason for this rather strong and probably unfair criticism is that risk management does not provide for feedback on the results of the selected security measures or of the risk treatment decisions. In most cases, even though the trend has already changed, information systems security is a low-priority project for management, until some security incident happens. Then, and only then, does management seriously engage in an effort to improve security measures. However, after a while, the problem stops being the center of interest, and what remains is a number of security measures, some specialized hardware and software, and an operationally more complex system. Unless an incident happens again, there is no way for management to know whether their efforts were really worthwhile. After all, in many cases the information system had operated for years without problems, even without the security improvements that the security professionals recommended.

The risk management methodology, as has already been stated, is based on the scientific foundations of statistical decision making. Bayes’ theorem, on which the theory is based, pertains to the statistical revision of a priori probabilities into a posteriori probabilities, and it is applied when a decision must be made on the basis of imperfect information. In risk management, the quantification of an event may be a function of factors other than the probability of the event itself occurring. For example, the probability of a riot occurring may be related to the stability of the political system; thus, the calculation of this probability should involve quantified information relevant to the stability of the political system. The overall model accepts the possibility that any piece of information (e.g., “The political system is stable”) may be inaccurate. In comparison to this formal framework, risk management as applied by security professionals is simplistic. Indeed, by avoiding the complexity that accompanies the formal probabilistic modeling of risks and uncertainty, risk management looks more like a process that attempts to guess the future rather than formally predict it on the basis of statistical evidence.
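
To make the contrast with formal Bayesian revision concrete, here is a minimal prior-to-posterior update in the spirit of the riot example above; all the numbers are invented for illustration.

```python
# Minimal Bayes' theorem illustration with invented numbers: revise the prior
# probability of a riot given a report that the political system is unstable,
# allowing for the possibility that the report itself is inaccurate.
p_riot = 0.05                  # a priori P(riot)
p_report_given_riot = 0.70     # P(report of instability | riot will occur)
p_report_given_no_riot = 0.20  # P(report of instability | no riot)

p_report = (p_report_given_riot * p_riot
            + p_report_given_no_riot * (1 - p_riot))
p_riot_given_report = p_report_given_riot * p_riot / p_report

print(round(p_riot_given_report, 3))  # ~0.156: the a posteriori probability
```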

Finally, the risk management methodology is highly subjective in assessing the value of assets, the likelihood of threats occurring, the likelihood of vulnerabilities being successfully exploited by threats, and the significance of the impact. This subjectivity is frequently obscured by the formality of the underlying mathematical-probabilistic models, the systematic way in which most risk analysis methods work, and the objectivity of the tools that support these methods.

If the skepticism about the scientific soundness of the risk management methodology is indeed justified, then a crucial question arises: Why has risk management as a practice survived for so long? There are several answers to this question.

Risk management is a very important instrument in designing, implementing, and operating secure information systems, because it systematically classifies and drives the process of deciding how to treat risks. In doing so, it facilitates better understanding of the nature and the operation of the information system, thus constituting a means for documenting and analyzing the system. Therefore, it is necessary for supporting the efforts of the organization’s management to design, implement, and operate secure information systems.

Traditionally, risk management has been seen by security professionals as a means to justify to management the cost of security measures. Nowadays it not only does that but also helps fulfill legislative and/or regulatory provisions that exist in several countries and that demand that information systems be protected in a manner commensurate with the threats they face.

Risk management constitutes an efficient means of communication between technical and administrative personnel, as well as management, because it allows us to express the security problem in a language comprehensible by management, by viewing security as an investment that can be assessed in terms of cost/benefit analysis.45 Additionally, it is quite flexible, so it can fit in several scientific frameworks and be applied either by itself or in combination with other methodologies. It is the most widely used methodology for designing and managing information systems security and has been successfully applied in many cases.

Finally, an answer frequently offered by security professionals is that there simply is no other efficient way to carry out the tasks that risk management does. Indeed, it has been proved in practice that by simply using methods of management science, law, and accounting, it is not possible to reach conclusions that can adequately justify risk treatment decisions.

Risk Management Methods

Many methods for risk management are available today. Most of them are supported by software tools. Selecting the most suitable method for a specific business environment and the needs of a specific organization is very important, albeit quite difficult, for a number of reasons46:

• There is a lack of a complete inventory of all available methods, with all their individual characteristics.

• There exists no commonly accepted set of evaluation criteria for risk management methods.

• Some methods only cover parts of the whole risk management process. For example, some methods only calculate the risk, without covering the risk treatment process. Some others focus on a small part of the whole process (e.g., disaster recovery planning). Some focus on auditing the security measures, and so on.

• Risk management methods differ widely in the analysis level that they use. Some use high-level descriptions of the information system under study; others call for detailed descriptions.

• Some methods are not freely available to the market, a fact that makes their evaluation very difficult, if at all possible.

The National Institute of Standards and Technology (NIST) compiled, in 1991, a comprehensive report on risk management methods and tools.47 The European Commission, recognizing the need for a homogenized and software-supported risk management methodology for use by European businesses and organizations, assigned, in 1993, a similar project to a group of companies. Part of this project’s results were the creation of an inventory and the evaluation of all risk management methods available at the time.48 In 2006 the European Network and Information Security Agency (ENISA) repeated the endeavor. Even though preliminary results toward an inventory of risk management/risk assessment methods have been made available,49 the process is still ongoing.50 Some of the most widely used risk management methods51 are briefly described in the sequel.

CRAMM (CCTA Risk Analysis and Management Methodology)52 is a method developed by the British government organization CCTA (Central Communication and Telecommunication Agency), now renamed the Office of Government Commerce (OGC). CRAMM was first released in 1985. At present CRAMM is the U.K. government’s preferred risk analysis method, but CRAMM is also used in many countries outside the U.K. CRAMM is especially appropriate for large organizations, such as government bodies and industry. CRAMM provides a staged and disciplined approach embracing both technical and nontechnical aspects of security. To assess these components, CRAMM is divided into three stages: asset identification and valuation; threat and vulnerability assessment; and countermeasure selection and recommendation. CRAMM enables the reviewer to identify the physical, software, data, and location assets that make up the information system. Each of these assets can be valued. Physical assets are valued in terms of their replacement cost. Data and software assets are valued in terms of the impact that would result if the information were to be unavailable, destroyed, disclosed, or modified. CRAMM covers the full range of deliberate and accidental threats that may affect information systems. This stage concludes by calculating the level of risk. CRAMM contains a very large countermeasure library consisting of over 3000 detailed countermeasures organized into over 70 logical groupings. The CRAMM software, developed by Insight Consulting,53 uses the measures of risks determined during the previous stage and compares them against the security level (a threshold level associated with each countermeasure) to identify whether the risks are sufficiently great to justify the installation of a particular countermeasure. CRAMM provides a series of help facilities, including backtracking, what-if scenarios, prioritization functions, and reporting tools, to assist with the implementation of countermeasures and the active management of the identified risks. CRAMM is ISO/IEC 17799, Gramm-Leach-Bliley Act (GLBA), and Health Insurance Portability and Accountability Act (HIPAA) compliant.

The methodological approach offered by EBIOS (Expression des Besoins et Identification des Objectifs de Sécurité)54 provides a global and consistent view of information systems security. It was first released in 1995. The method takes into account all technical and nontechnical entities. It allows all personnel using the information system to be involved in security issues and offers a dynamic approach that encourages interaction among the organization’s various jobs and functions by examining the complete life cycle of the system. Promoted by the DCSSI (Direction Centrale de la Sécurité des Systèmes d’Information) of the French government and recognized by the French administrations, EBIOS is also a reference in the private sector and abroad. It is compliant with major IT security standards. The EBIOS approach consists of five phases. Phase 1 deals with context analysis in terms of global business process dependency on the information system. Security needs analysis and threat analysis are conducted in Phases 2 and 3. Phases 4 and 5 yield an objective diagnosis of risks; the necessary and sufficient security objectives (and further security requirements) are then stated, proof of coverage is furnished, and residual risks are made explicit. Local standard bases (e.g., the German IT-Grundschutz) are easily added to its internal knowledge bases and catalogues of best practices. EBIOS is supported by a software tool developed by the Central Information Systems Security Division (France). The tool helps the user produce all risk analysis and management steps according to the EBIOS method and allows all the study results to be recorded and the required summary documents to be produced. EBIOS is compliant with ISO/IEC 27001, ISO/IEC 13335 (GMITS), ISO/IEC 15408 (Common Criteria), ISO/IEC 17799, and ISO/IEC 21827.

The Information Security Forum’s (ISF) Standard of Good Practice55 provides a set of high-level principles and objectives for information security together with associated statements of good practice. The Standard of Good Practice is split into five distinct aspects, each of which covers a particular type of environment: security management; critical business applications; computer installations; networks; and systems development. FIRM (Fundamental Information Risk Management) is a detailed method for monitoring and controlling information risk at the enterprise level. It has been developed as a practical approach to monitoring the effectiveness of information security and, as such, enables information risk to be managed systematically across enterprises of all sizes. It includes comprehensive implementation guidelines, which explain how to gain support for the approach and get it up and running. The Information Risk Scorecard is an integral part of FIRM; the Scorecard is a form used to collect a range of important details about a particular information resource, such as the name of the owner, criticality, level of threat, business impact, and vulnerability. The ISF’s Information Security Status Survey is a comprehensive risk management tool that evaluates a wide range of security measures used by organizations to control the business risks associated with their IT-based information systems. SARA (Simple to Apply Risk Analysis) is a detailed method for analyzing information risk in critical information systems. SPRINT (Simplified Process for Risk Identification) is a relatively quick and easy-to-use method for assessing business impact and for analyzing information risk in important but not critical information systems. The full SPRINT method is intended for application to important, but not critical, systems; it complements the SARA method, which is better suited to analyzing the risks associated with critical business systems. SPRINT first helps decide the level of risk associated with a system. After the risks are fully understood, SPRINT helps determine how to proceed and, if the SPRINT process continues, culminates in the production of an agreed plan of action for keeping risks within acceptable limits. SPRINT can help identify the vulnerabilities of existing systems and the safeguards needed to protect against them, and define the security requirements for systems under development and the security measures needed to satisfy them. The method is compliant with ISO/IEC 17799.

IT-Grundschutz56 provides a method for an organization to establish an ISMS. It was first released in 1994. It comprises both generic IT security recommendations for establishing an applicable IT security process and detailed technical recommendations to achieve the necessary IT security level for a specific domain. The IT security process suggested by IT-Grundschutz consists of the following steps: initialization of the process; definition of IT security goals and business environment; establishment of an organizational structure for IT security; provision of necessary resources; creation of the IT security concept; IT structure analysis; assessment of protection requirements; modeling; IT security check; supplementary security analysis; implementation planning and fulfillment; maintenance, monitoring, and improvement of the process; and IT-Grundschutz Certification (optional). The key approach in IT-Grundschutz is to provide a framework for IT security management, offering information for commonly used IT components (modules). IT-Grundschutz modules include lists of relevant threats and required countermeasures in a relatively technical level. These elements can be expanded, complemented, or adapted to the needs of an organization. IT-Grundschutz is supported by a software tool named Gstool that has been developed by the Federal Office for Information Security (BSI). The method is compliant with ISO/IEC 17799 and ISO/IEC 27001.

MEHARI (Méthode Harmonisée d’Analyse de Risques Informatiques)57 is a method designed by security experts of CLUSIF (Club de la Sécurité Informatique Français) that replaced the earlier CLUSIF-sponsored MARION and MELISA methods. It was first released in 1996. It proposes an approach for defining risk reduction measures suited to the organization’s objectives. MEHARI provides a risk assessment model as well as modular components and processes. It enhances the ability to discover vulnerabilities through audit and to analyze risk situations. MEHARI includes formulas that facilitate threat identification, threat characterization, and the optimal selection of corrective actions. It also provides concrete guidance for building security plans, based on a complete list of vulnerability control points and an accurate monitoring process embedded in a continual improvement cycle. It is compliant with ISO/IEC 17799 and ISO/IEC 13335.
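
The flavor of such formulas can be illustrated with a simple grid that maps a qualitative likelihood level and impact level to a risk seriousness level; the grid values below are invented for illustration and are not MEHARI’s published evaluation grids.

# Generic sketch of a grid-based risk formula: a qualitative likelihood level
# and impact level are combined into a seriousness level via a lookup table.
# The grid values are invented and do not reproduce MEHARI's own grids.
SERIOUSNESS_GRID = {
    # (likelihood, impact) -> seriousness
    (1, 1): 1, (1, 2): 1, (1, 3): 2, (1, 4): 2,
    (2, 1): 1, (2, 2): 2, (2, 3): 3, (2, 4): 3,
    (3, 1): 2, (3, 2): 3, (3, 3): 3, (3, 4): 4,
    (4, 1): 2, (4, 2): 3, (4, 3): 4, (4, 4): 4,
}

def seriousness(likelihood: int, impact: int) -> int:
    """Return a 1 (negligible) to 4 (intolerable) seriousness level."""
    return SERIOUSNESS_GRID[(likelihood, impact)]

# A scenario with high likelihood and high impact lands in the top band,
# flagging it for priority treatment in the security plan.
print(seriousness(4, 4))  # -> 4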

The OCTAVE (Operationally Critical Threat, Asset, and Vulnerability Evaluation)58 method, developed by the Software Engineering Institute of Carnegie-Mellon University, defines a risk-based strategic assessment and planning technique for security. It was first released in 1999. OCTAVE is self-directed in the sense that a small team of people from the operational (or business) units and the IT department work together to address the security needs of the organization. The team draws on the knowledge of many employees to define the current state of security, identify risks to critical assets, and set a security strategy. OCTAVE is different from typical technology-focused assessments in that it focuses on organizational risk and strategic, practice-related issues, balancing operational risk, security practices, and technology. The OCTAVE method is driven by operational risk and security practices; technology is examined only in relation to security practices. OCTAVE-S is a variation of the method tailored to the limited means and unique constraints typically found in small organizations (fewer than 100 people). OCTAVE Allegro is tailored to organizations that focus on information assets and want a streamlined approach. The OCTAVE Automated Tool has been implemented by the Advanced Technology Institute (ATI) to help users with the implementation of the OCTAVE method.

Callio Secura 1779959 is a product from Callio Technologies. It was first released in 2001. It is a multiuser Web application with database support that lets the user implement and certify an ISMS and guides the user through each of the steps leading to ISO 27001 / 17799 compliance and BS 7799-2 certification. Moreover, it provides document management functionality as well as customization of the tool’s databases. It also allows carrying out audits for other standards, such as COBIT, HIPAA, and Sarbanes-Oxley, by importing the user’s own questionnaires. Callio Secura is compliant with ISO/IEC 17799 and ISO/IEC 27001.

COBRA60 is a standalone application for risk management from C&A Systems Security. It is a questionnaire-based Windows PC tool that uses expert system principles and a set of extensive knowledge bases. It also offers optional functionality for delivering other security services, such as checking compliance with the ISO 17799 security standard or with an organization’s own security policies. It can be used to identify threats and vulnerabilities; it measures the degree of actual risk for each area or aspect of a system and directly links this to the potential business impact. It offers detailed solutions and recommendations to reduce the risks and provides business as well as technical reports. It is compliant with ISO/IEC 17799.

Allion’s product CounterMeasures61 performs risk management based on the US NIST 800 series standards and OMB Circular A-130. The user standardizes the evaluation criteria and, using a “tailor-made” assessment checklist, the software provides objective evaluation criteria for determining security posture and/or compliance. CounterMeasures is available in both networked and desktop configurations. It is compliant with the NIST 800 series and OMB Circular A-130 standards.

Proteus62 is a product suite from InfoGov. It was first released in 1999. Through its components the user can perform gap analysis against standards such as ISO 17799 or create and manage an ISMS according to ISO 27001 (BS 7799-2). Proteus Enterprise is a fully integrated, fully scalable Web-based information risk management, compliance, and security solution. Using Proteus Enterprise, companies can perform any number of online compliance audits against any standard and compare the results. They can then assess how deficiencies in security measures affect the company both financially and operationally by mapping them onto its critical business processes. Proteus then identifies risks, mitigates them by formulating a work plan, and maintains a current and demonstrable compliance status for regulators and senior management alike. The system works with the company’s existing infrastructure and uses RiskView to bridge the gap between the technical/regulatory community and senior management. Proteus is a comprehensive system that includes online compliance and gap analysis, business impact, risk assessment, business continuity, incident management, asset management, organization roles, a policy repository, and action plans. Its compliance engine supports any standard (international, industry, and corporate specific) and is supplied with a choice of comprehensive template questionnaires. The system scales from a single user up to the largest of multinational organizations. The product maintains a full audit trail and can perform online audits of both internal departments and external suppliers. It is compliant with ISO/IEC 17799 and ISO/IEC 27001.
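
The core of such a gap analysis can be pictured as comparing questionnaire answers against the controls a standard expects; the sketch below uses invented control identifiers and answers, and is not Proteus questionnaire content.

# Minimal sketch of an online gap analysis: questionnaire answers are compared
# against the controls a standard expects, yielding a compliance percentage
# and a list of gaps to feed the work plan. Identifiers and answers are
# invented for illustration.
required_controls = {
    "A.5.1": "Information security policy document",
    "A.9.1": "Physical security perimeter",
    "A.10.5": "Information back-up",
    "A.11.2": "User access management",
}

answers = {"A.5.1": True, "A.9.1": False, "A.10.5": True, "A.11.2": False}

implemented = [c for c, ok in answers.items() if ok]
gaps = [c for c in required_controls if not answers.get(c, False)]

compliance = 100 * len(implemented) / len(required_controls)
print(f"Compliance: {compliance:.0f}%")
for control in gaps:
    print("Gap:", control, "-", required_controls[control])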

RA2 art of risk63 is the new risk assessment tool from AEXIS, the originators of the RA Software Tool. It was first released in 2000. It is designed to help businesses develop an ISMS in compliance with ISO/IEC 27001:2005 (previously BS 7799 Part 2:2002) and the code of practice ISO/IEC 27002. It covers a number of security processes that direct businesses toward designing and implementing an ISMS. RA2 art of risk can be customized to meet the requirements of the organization. This includes the assessment of assets, threats, and vulnerabilities applicable to the organization, and the possibility of including security measures in addition to those in ISO/IEC 27002 in the assessment. It also includes a set of editable questions that can be used to assess compliance with ISO/IEC 27002. The RA2 Information Collection Device, a component distributed along with the tool, can be installed anywhere in the organization as needed to collect information and feed it back into the risk assessment process. It is compliant with ISO/IEC 17799 and ISO/IEC 27001.

RiskWatch for Information Systems & ISO 1779964 is the RiskWatch company’s solution for information system risk management. Other relevant products in the same suite are RiskWatch for Financial Institutions, RiskWatch for HIPAA Security, RiskWatch for Physical & Homeland Security, RiskWatch for University and School Security, and RiskWatch for NERC (North American Electric Reliability Corporation) and C-TPAT Supply Chain. The RiskWatch for Information Systems & ISO 17799 tool conducts automated risk analysis and vulnerability assessments of information systems. All RiskWatch software is fully customizable by the user. It can be tailored to reflect any corporate or government policy, including incorporation of unique standards, incident report data, penetration test data, observations, and country-specific threat data. Every product covers both information security and physical security. Project plans and a simple workflow make it easy to create accurate and supportable risk assessments. The tool includes security measures from the ISO 17799 and US NIST 800-26 standards, with which it is compliant.

The SBA (Security by Analysis) method65 is a concept that has existed since the beginning of the 1980s. It is more a way of looking at analysis and security work in computerized businesses than a fully developed method. It could be called the “human model” of risk and vulnerability analysis. The human model places very strong confidence in the knowledge of staff and individuals within the analyzed business or organization. It is based on the premise that those who work with the everyday problems, regardless of position, are best placed to pinpoint the most important problems and to suggest solutions. SBA is supported by three software tools. Each tool has its own specific method, but all are based on the same concept: gathering a group of people who represent the necessary breadth of knowledge. SBA Check is primarily a tool for anyone working with or responsible for information security issues. The role of the analysis leader is central to the use of SBA Check. The analysis leader is in charge of ensuring that the analysis participants’ knowledge of the operation is brought to bear during the analysis process in a way that is relevant, so that the description of the current situation and the opportunities for improvement retain their consistent quality. SBA Scenario is a tool that helps evaluate business risks methodically through quantitative risk analysis; it also helps evaluate, through risk management, which actions are appropriate and financially motivated. SBA Project is an IT support tool and method that helps identify conceivable problems in a project and provides suggestions for measures to deal with those problems. The analysis participants’ views and knowledge are used as a basis for providing a good picture of the risk in the project.
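
The cost/benefit reasoning behind such quantitative analysis can be sketched with an annualized loss expectancy comparison; the figures below are invented, and the calculation is a generic illustration rather than SBA Scenario’s actual computation.

# Generic sketch of the cost/benefit reasoning behind quantitative risk
# analysis: compare the expected annual loss with and without a proposed
# action against the action's annual cost. Figures are invented.
def annual_loss_expectancy(single_loss: float, annual_rate: float) -> float:
    """ALE = single loss expectancy x annualized rate of occurrence."""
    return single_loss * annual_rate

ale_before = annual_loss_expectancy(single_loss=40_000, annual_rate=0.5)  # 20,000
ale_after = annual_loss_expectancy(single_loss=40_000, annual_rate=0.1)   # 4,000
measure_cost = 7_000

net_benefit = (ale_before - ale_after) - measure_cost
print(f"Net annual benefit of the measure: {net_benefit:,.0f}")
# A positive net benefit suggests the action is financially motivated;
# a negative one argues for accepting or transferring the risk instead.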

4. Risk Management Laws and Regulations

Many nations have adopted laws and regulations containing clauses that, directly or indirectly, pertain to aspects of information systems risk management. Similarly, a large number of international laws and regulations exist. In the following, a brief description of such documents with an international scope, directly relevant to information systems risk management,66 is given.

The “Regulation (EC) No 45/2001 of the European Parliament and of the Council of 18 December 2000 on the protection of individuals with regard to the processing of personal data by the Community institutions and bodies and on the free movement of such data”67 requires that any personal data processing activity by Community institutions undergo a prior risk analysis to determine the privacy implications of the activity and to determine the appropriate legal, technical, and organizational measures to protect such activities. It also stipulates that such activity be effectively protected by measures, which must be state of the art, taking into account the sensitivity and privacy implications of the activity. When a third party is charged with the processing task, its activities must be governed by suitable and enforced agreements. Furthermore, the regulation requires the European Union’s (EU) institutions and bodies to take similar precautions with regard to their telecommunications infrastructure, and to properly inform the users of any specific risks of security breaches.68

The European Commission’s Directive on Data Protection went into effect in October 1998 and prohibits the transfer of personal data to non-EU nations that do not meet the European “adequacy” standard for privacy protection. The United States takes a different approach to privacy from that taken by the EU; it uses a sectoral approach that relies on a mix of legislation, regulation, and self-regulation. The EU, however, relies on comprehensive legislation that, for example, requires the creation of government data protection agencies, registration of databases with those agencies, and in some instances prior approval before personal data processing may begin. The Safe Harbor Privacy Principles69 aim at bridging this gap by allowing a U.S.-based entity to self-certify its compliance with them, thereby enabling transfers of personal data from the EU.

The “Commission Decision of 15 June 2001 on standard contractual clauses for the transfer of personal data to third countries, under Directive 95/46/EC” and the “Commission Decision of 27 December 2004 amending Decision 2001/497/EC as regards the introduction of an alternative set of standard contractual clauses for the transfer of personal data to third countries”70 provide a set of voluntary model clauses that can be used to export personal data from a data controller who is subject to EU data protection rules to a data processor outside the EU who is not subject to these rules or to a similar set of adequate rules. Upon acceptance of the model clauses, the data controller must warrant that the appropriate legal, technical, and organizational measures to ensure the protection of the personal data are taken. Furthermore, the data processor must agree to permit auditing of its security practices to ensure compliance with applicable European data protection rules.71

The Health Insurance Portability and Accountability Act (HIPAA) of 199672 is a U.S. law addressing health insurance coverage, electronic health care transactions, and requirements for the security and privacy of health data. Title II of HIPAA, known as the Administrative Simplification (AS) provisions, requires the establishment of national standards for electronic health care transactions and national identifiers for providers, health-insurance plans, and employers. Per the requirements of Title II, the Department of Health and Human Services has promulgated five rules regarding Administrative Simplification: the Privacy Rule, the Transactions and Code Sets Rule, the Security Rule, the Unique Identifiers Rule, and the Enforcement Rule. The standards are meant to improve the efficiency and effectiveness of the U.S. health care system by encouraging the widespread use of electronic data interchange.

The “Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications)”73 requires that any provider of publicly available electronic communications services take the appropriate legal, technical, and organizational measures to ensure the security of its services; inform its subscribers of any particular risks of security breaches; take the necessary measures to prevent such breaches; and indicate the likely costs of such breaches to the subscribers.74

The “Directive 2006/24/EC of the European Parliament and of the Council of 15 March 2006 on the retention of data generated or processed in connection with the provision of publicly available electronic communications services or of public communications networks and amending Directive 2002/58/EC”75 requires the affected providers of publicly accessible electronic telecommunications networks to retain certain communications data to be specified in their national regulations, for a specific amount of time, under secured circumstances in compliance with applicable privacy regulations; to provide access to this data to competent national authorities; to ensure data quality and security through appropriate technical and organizational measures, shielding it from access by unauthorized individuals; to ensure its destruction when it is no longer required; and to ensure that stored data can be promptly delivered on request from the competent authorities.76

The “Regulation (EC) No 1907/2006 of the European Parliament and of the Council of 18 December 2006 concerning the Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH), establishing a European Chemicals Agency, amending Directive 1999/45/EC and repealing Council Regulation (EEC) No 793/93 and Commission Regulation (EC) No 1488/94 as well as Council Directive 76/769/EEC and Commission Directives 91/155/EEC, 93/67/EEC, 93/105/EC and 2000/21/EC”77 introduces risk management obligations by imposing a reporting obligation on producers and importers of articles covered by the regulation with regard to the qualities of certain chemical substances, which includes a risk assessment and an obligation to examine how such risks can be managed. This information is to be registered in a central database. The regulation also establishes a Committee for Risk Assessment within the European Chemicals Agency that it creates, and requires that the information provided be kept up to date with regard to potential risks to human health or the environment, and that such risks be adequately managed.78

The “Council Framework Decision 2005/222/JHA of 24 February 2005 on attacks against information systems”79 contains the conditions under which legal liability can be imposed on legal entities for the conduct of certain natural persons of authority within the legal entity. The Framework Decision thus requires that the conduct of such figures within an organization be adequately monitored, not least because it states that a legal entity can be held liable for acts of omission in this regard. Additionally, the decision defines a series of criteria under which jurisdictional competence can be established. These include the competence of a jurisdiction when a criminal act is conducted against an information system within its borders.80

The “OECD Guidelines for the Security of Information Systems and Networks: Towards a Culture of Security” (25 July 2002)81 aim to promote a culture of security; to raise awareness about the risk to information systems and networks (including the policies, practices, measures, and procedures available to address those risks and the need for their adoption and implementation); to foster greater confidence in information systems and networks and the way in which they are provided and used; to create a general frame of reference; to promote cooperation and information sharing; and to promote the consideration of security as an important objective. The guidelines state nine basic principles underpinning risk management and information security practices. No part of the text is legally binding, but noncompliance with any of the principles is indicative of a breach of risk management good practices that can potentially incur liability.82

The “Basel Committee on Banking Supervision—Risk Management Principles for Electronic Banking”83 identifies 14 Risk Management Principles for Electronic Banking to help banking institutions expand their existing risk oversight policies and processes to cover their ebanking activities. The Risk Management Principles fall into three broad, and often overlapping, categories of issues that are grouped to provide clarity: board and management oversight; security controls; and legal and reputational risk management. The Risk Management Principles are not put forth as absolute requirements or even “best practice,” nor do they attempt to set specific technical solutions or standards relating to ebanking. Consequently, the Risk Management Principles and sound practices are expected to be used as tools by national supervisors and to be implemented with adaptations to reflect specific national requirements and individual risk profiles where necessary.

The “Commission Recommendation 87/598/EEC of 8 December 1987, concerning a European code of conduct relating to electronic payments”84 provides a number of general nonbinding recommendations, including an obligation to ensure that privacy is respected and that the system is transparent with regard to potential security or confidentiality risks, which must obviously be mitigated by all reasonable means.85

The “Public Company Accounting Reform and Investor Protection Act of 30 July 2002” (commonly referred to as Sarbanes-Oxley and often abbreviated to SOX or Sarbox),86 even though indirectly relevant to risk management, is discussed here due to its importance. The Act is a U.S. federal law passed in response to a number of major corporate and accounting scandals including those affecting Enron, Tyco International, and WorldCom (now MCI). These scandals resulted in a decline of public trust in accounting and reporting practices. The legislation is wide ranging and establishes new or enhanced standards for all U.S. public company boards, management, and public accounting firms. Its provisions range from additional Corporate Board responsibilities to criminal penalties, and require the Securities and Exchange Commission (SEC) to implement rulings on requirements to comply with the new law. The first and most important part of the Act establishes a new quasi-public agency, the Public Company Accounting Oversight Board (www.pcaobus.org), which is charged with overseeing, regulating, inspecting, and disciplining accounting firms in their roles as auditors of public companies. The Act also covers issues such as auditor independence, corporate governance and enhanced financial disclosure.

The “Office of the Comptroller of the Currency (OCC) – Electronic Banking Guidance”87 is fairly high level and should be seen as indicative of the subject matter to be analyzed and assessed by banking institutions, rather than as a yardstick to identify actual problems.88

The “Payment Card Industry (PCI) Security Standards Council—Data Security Standard (DSS)”89 provides central guidance allowing financial service providers relying on payment cards to implement the necessary policies, procedures, and infrastructure to adequately safeguard their customer account data. PCI DSS has no formal binding legal power. Nevertheless, considering its origins and the key participants, it holds significant moral authority, and noncompliance with the PCI DSS by a payment card service provider may be indicative of inadequate risk management practices.90

The “Directive 2002/65/EC of the European Parliament and of the Council of 23 September 2002 concerning the distance marketing of consumer financial services and amending Council Directive 90/619/EEC and Directives 97/7/EC and 98/27/EC (the ‘Financial Distance Marketing Directive’)”91 requires that, as a part of the minimum information to be provided to a consumer prior to concluding a distance financial services contract, the consumer must be clearly and comprehensibly informed of any specific risks related to the service concerned.92

5. Risk Management Standards

Various national and international, de jure, and de facto standards exist that are related, directly or indirectly, to information systems risk management. In the following, we briefly describe the most important international standards that are directly related to risk management.

The “ISO/IEC 13335-1:2004—Information technology—Security techniques—Management of information and communications technology security—Part 1: Concepts and models for information and communications technology security management” standard93 defines and describes the concepts associated with the management of IT security. It also identifies the relationships between the management of IT security and management of IT in general. Further, it presents several models, which can be used to explain IT security, and provides general guidance on the management of IT security. It is a commonly used code of practice and serves as a resource for the implementation of security management practices and as a yardstick for auditing such practices.94

The “BS 25999-1:2006—Business continuity management—Part 1: Code of Practice” standard95 is a code of practice that takes the form of guidance and recommendations. It establishes the process, principles, and terminology of business continuity management (BCM), providing a basis for understanding, developing, and implementing business continuity within an organization and for giving confidence in business-to-business and business-to-customer dealings. In addition, it provides a comprehensive set of security measures based on BCM best practice and covers the whole BCM life cycle.96

The “ISO/IEC TR 15443-1:2005—Information technology—Security techniques—A framework for IT security assurance”97 technical report introduces security assurance methods and relates and categorizes them against a generic life-cycle model, in a manner that enables an increased level of confidence to be obtained in the security functionality of a deliverable. It allows security professionals to determine a suitable methodology for assessing a security service, product, or environmental factor (a deliverable). Using this technical report, one can determine which level of security assurance a deliverable is intended to meet and whether this threshold is actually met by the deliverable.

The “ISO/IEC 17799:2005— Information technology—Security techniques—Code of practice for information security management” standard98 establishes guidelines and general principles for initiating, implementing, maintaining, and improving information security management in an organization. It provides general guidance on the commonly accepted goals of information security management. The standard contains best practices of security measures in many areas of information security management. These are intended to be implemented to meet the requirements identified by a risk assessment. The standard is intended as a common basis and practical guideline for developing organizational security standards and effective security management practices, and to help build confidence in interorganizational activities.

The “ISO/IEC 18028:2006—Information technology—Security techniques—IT network security” standard99 extends the security management guidelines provided in ISO/IEC TR 13335 and ISO/IEC 17799 by detailing the specific operations and mechanisms needed to implement network security measures in a wider range of network environments, providing a bridge between general IT security management issues and network security technical implementations. The standard provides detailed guidance on the security aspects of the management, operation, and use of IT networks and their interconnections. It defines and describes the concepts associated with network security and provides management guidance on it, including how to identify and analyze the communications-related factors to be taken into account when establishing network security requirements, along with an introduction to the possible areas where security measures can be applied and to the specific technical areas.

The “ISO/IEC 27001:2005—Information technology—Security techniques—Information security management systems—Requirements” standard100 is designed to ensure the selection of adequate and proportionate security measures that protect information assets and give confidence to interested parties. The standard covers all types of organizations (e.g., commercial enterprises, government agencies, not-for-profit organizations) and specifies the requirements for establishing, implementing, operating, monitoring, reviewing, maintaining, and improving a documented ISMS within the context of the organization’s overall business risks. Further, it specifies requirements for the implementation of security measures customized to the needs of individual organizations or parts thereof. Its application in practice is often combined with related standards, such as BS 7799-3:2006, which provides additional guidance to support the requirements given in ISO/IEC 27001:2005.

The “BS 7799-3:2006—Information security management systems—Guidelines for information security risk management” standard101 gives guidance to support the requirements given in BS ISO/IEC 27001:2005 regarding all aspects of an ISMS risk management cycle and is therefore typically applied in conjunction with this standard in risk assessment practices. This includes assessing and evaluating the risks, implementing security measures to treat the risks, monitoring and reviewing the risks, and maintaining and improving the system of risk treatment. The focus of this standard is effective information security through an ongoing program of risk management activities. This focus is targeted at information security in the context of an organization’s business risks.

The “ISO/IEC TR 18044:2004—Information technology—Security techniques—Information security incident management” standard102 provides advice and guidance on information security incident management for information security managers, and information system, service, and network managers. It is a high-level resource introducing basic concepts and considerations in the field of incident response. As such, it is mostly useful as a catalyst to awareness raising initiatives in this regard.

The “ISF Standard of Good Practice” standard103 is a commonly quoted source of good practices and serves as a resource for the implementation of information security policies and as a yardstick for auditing such systems and/or the surrounding practices.104 The standard covers six distinct aspects of information security, each of which relates to a particular type of environment. The standard focuses on how information security supports an organization’s key business processes.

The “ISO/TR 13569:2005—Financial services—Information security guidelines” standard105 provides guidelines on the development of an information security program for institutions in the financial services industry. It includes discussion of the policies, organization and the structural, legal and regulatory components of such a program. Considerations for the selection and implementation of security measures, and the elements required to manage information security risk within a modern financial services institution are discussed. Recommendations are given that are based on consideration of the institution’s business environment, practices, and procedures. Included in this guidance is a discussion of legal and regulatory compliance issues, which should be considered in the design and implementation of the program.

The U.S. General Accounting Office “Information security risk assessment: practices of leading organizations” guide106 is intended to help federal managers implement an ongoing information security risk assessment process by providing examples, or case studies, of practical risk assessment procedures that have been successfully adopted by four organizations known for their efforts to implement good risk assessment practices. More important, it identifies, based on the case studies, factors that are important to the success of any risk assessment program, regardless of the specific methodology employed.

The U.S. NIST SP 800-30 “Risk management guide for information technology systems,”107 developed by NIST in 2002, provides a common foundation for experienced and inexperienced, technical, and nontechnical personnel who support or use the risk management process for their IT systems. Its guidelines are for use by federal organizations that process sensitive information and are consistent with the requirements of OMB Circular A-130, Appendix III. The guidelines may also be used by nongovernmental organizations on a voluntary basis, even though they are not mandatory and binding standards.

Finally, the U.S. NIST SP 800-39 “Managing Risk from Information Systems: An Organizational Perspective” standard108 provides guidelines for managing risk to organizational operations, organizational assets, individuals, other organizations, and the nation resulting from the operation and use of information systems. It offers a structured yet flexible approach for managing that portion of risk resulting from the incorporation of information systems into the mission and business processes of organizations.

6. Summary

The information systems risk management methodology was developed with an eye to guiding the design and the management of security of an information system within the framework of an organization. It aims to analyze and assess the factors that affect risk, to subsequently treat the risk, and to continuously monitor and review the security plan. Central concepts of the methodology are those of the threat, the vulnerability, the asset, the impact, and the risk. The operational relationship of these concepts materializes when a threat exploits one or more vulnerabilities to harm assets, an event that will impact the organization. Once the risks are identified and assessed, they must be treated, that is, reduced, transferred, avoided, or accepted. Treating the risks is done on the basis of a carefully designed security plan, which must be continuously monitored, reviewed, and amended as necessary.

Many methods implementing the whole or parts of the risk management methodology have been developed. Even though most of them follow closely the methodology as described in pertinent international standards, they differ considerably both in their underlying philosophy and in their specific steps. The risk management methodology has been and is being applied internationally with considerable success and enjoys universal acceptance. However, it does suffer several disadvantages that should be seriously considered in the process of applying it. Particular attention must be paid to the subjectivity of its estimates, which is often obscured by the formality of the underlying probabilistic models and by the systematic nature of most of the risk management methods. Subjectivity in applying the methodology is unavoidable and should be accepted and consciously managed. A number of international laws and regulations contain provisions for information system risk management, in addition to national provisions, and the risk management methodology is standardized by international organizations.
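
The central relationship and the treatment choice can be summarized in a short worked sketch; the numeric scales, thresholds, and decision rules below are illustrative assumptions, not those of any particular method or standard.

from dataclasses import dataclass

# A compact sketch of the chapter's central relationship: a risk exists where
# a threat can exploit a vulnerability of an asset, and each identified risk
# is assigned one of the treatment options discussed above. All scales and
# thresholds are invented for illustration.
@dataclass
class Risk:
    asset: str
    threat: str
    vulnerability: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (negligible) .. 5 (severe)

    def level(self) -> int:
        return self.likelihood * self.impact

def treatment(risk: Risk, acceptance_threshold: int = 6) -> str:
    if risk.level() <= acceptance_threshold:
        return "accept"
    if risk.likelihood == 5 and risk.impact == 5:
        return "avoid (discontinue the risky activity)"
    if risk.impact >= 4 and risk.likelihood <= 2:
        return "transfer (e.g., insurance or outsourcing)"
    return "reduce (implement security measures)"

register = [
    Risk("customer database", "SQL injection", "unvalidated input", 4, 5),
    Risk("office laptops", "theft", "no disk encryption", 2, 4),
    Risk("public web pages", "defacement", "weak CMS password", 2, 2),
]
for r in register:
    print(f"{r.asset}: level {r.level()} -> {treatment(r)}")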


1E.A. Kiountouzis and S.A. Kokolakis, “An analyst’s view of information systems security”, in Information systems security: Facing the information society of the 21st century, Katsikas, S. K., and Gritzalis, D. (eds.), Chapman & Hall, 1996.

2M. Gerber and R. von Solms, “Management of risk in the information age,” Computers & Security, Vol. 24, pp. 16–30, 2005.

3M. Douglas, “Risk as a forensic resource,” Daedalus, Vol. 119, Issue 4, pp. 1–17, 1990.

4R. Courtney, “Security risk assessment in electronic data processing,” in the AFIPS Conference Proceedings of the National Computer Conference 46, AFIPS, Arlington, VA, pp. 97–104, 1977.

5ISO/IEC, Information technology – Security techniques-Information security risk management, ISO/IEC FDIS 27005:2008 (E).

6British Standards Institute, Information technology-Security techniques-Management of information and communications technology security—Part 1: Concepts and models for information and communications technology security management, BS ISO/IEC 13335-1:2004.

7ISO/IEC, Information technology – Security techniques-Information security risk management, ISO/IEC FDIS 27005:2008 (E).

8ISO/IEC, Information technology – Security techniques-Information security risk management, ISO/IEC FDIS 27005:2008 (E).

9British Standards Institute, Information technology—Security techniques—Information security incident management, BS ISO/IEC TR 18044:2004.

10British Standards Institute, Information technology—Security techniques—Information security incident management, BS ISO/IEC TR 18044:2004.

11R. Baskerville, “Information Systems Security Design Methods: Implications for Information Systems Development,” ACM Computing Surveys, Vol. 25, No. 4, pp. 375–414, 1993.

12R. Baskerville, “Risk analysis as a source of professional knowledge,” Computers & Security, Vol. 10, pp. 749–764, 1991.

13ISO/IEC, Information technology – Security techniques-Information security risk management, ISO/IEC FDIS 27005:2008 (E).

14British Standards Institute, ISMSs—Part 3: Guidelines for information security risk management, BS 7799-3:2006.

15British Standards Institute, ISMSs—Part 3: Guidelines for information security risk management, BS 7799-3:2006.

16British Standards Institute, ISMSs—Part 3: Guidelines for information security risk management, BS 7799-3:2006.

17ISO/IEC, Information technology – Security techniques-Information security risk management, ISO/IEC FDIS 27005:2008 (E).

18British Standards Institute, ISMSs—Part 3: Guidelines for information security risk management, BS 7799-3:2006.

19R. Baskerville, “Risk analysis as a source of professional knowledge,” Computers & Security, Vol. 10, pp. 749–764, 1991.

20ISO/IEC, Information technology – Security techniques-Information security risk management, ISO/IEC FDIS 27005:2008 (E).

21British Standards Institute, ISMSs—Part 3: Guidelines for information security risk management, BS 7799-3:2006.

22G. Stoneburner, A. Goguen and A. Feringa, Risk Management guide for information technology systems, National Institute of Standards and Technology, Special Publication SP 800-30, 2002.

23ISO/IEC, Information technology – Security techniques-Information security risk management, ISO/IEC FDIS 27005:2008 (E).

24British Standards Institute, ISMSs—Part 3: Guidelines for information security risk management, BS 7799-3:2006.

25ISO/IEC, Information technology – Security techniques-Information security risk management, ISO/IEC FDIS 27005:2008 (E).

26ISO/IEC, Information technology – Security techniques-Information security risk management, ISO/IEC FDIS 27005:2008 (E).

27ISO/IEC, Information Security Management—Specification With Guidance for Use, ISO 27001.

28G. Stoneburner, A. Goguen and A. Feringa, Risk Management guide for information technology systems, National Institute of Standards and Technology, Special Publication SP 800-30, 2002.

29British Standards Institute, ISMSs—Part 3: Guidelines for information security risk management, BS 7799-3:2006.

30ISO/IEC, Information technology – Security techniques-Information security risk management, ISO/IEC FDIS 27005:2008 (E).

31ISO/IEC, Information technology – Security techniques-Information security risk management, ISO/IEC FDIS 27005:2008 (E).

32ISO/IEC, Information technology – Security techniques-Information security risk management, ISO/IEC FDIS 27005:2008 (E).

33ISO/IEC, Information technology – Security techniques-Information security risk management, ISO/IEC FDIS 27005:2008 (E).

34ISO/IEC, Information technology – Security techniques-Information security risk management, ISO/IEC FDIS 27005:2008 (E).

35British Standards Institute, ISMSs—Part 3: Guidelines for information security risk management, BS 7799-3:2006.

36British Standards Institute, ISMSs—Part 3: Guidelines for information security risk management, BS 7799-3:2006.

37ISO/IEC, Information technology – Security techniques-Information security risk management, ISO/IEC FDIS 27005:2008 (E).

38As we will see later, it is possible to indirectly reduce the value of an asset. For example, if sensitive personal data are stored and the cost of protecting them is high, it is possible to decide that such data are too costly to continue storing. This constitutes a form of risk avoidance. As another example, we may decide that the cost for protecting our equipment is too high and to resort to outsourcing. This is a form of risk transfer.

39ISO/IEC, Information Security Management—Specification With Guidance for Use, ISO 27001.

40British Standards Institute, Information technology—Security techniques—Code of practice for information security management, BS ISO/IEC 17799:2005.

41ISO/IEC, Information technology – Security techniques-Information security risk management, ISO/IEC FDIS 27005:2008 (E).

42C. Lambrinoudakis, S. Gritzalis, P. Hatzopoulos, A. N. Yannacopoulos, and S. K. Katsikas, “A Formal Model for Pricing Information Systems Insurance Contracts,” Computer Standards and Interfaces, Vol. 27, pp. 521–532, 2005.

43S. Gritzalis, A.N. Yannacopoulos, C. Lambrinoudakis, P. Hatzopoulos, S. K. Katsikas, “A probabilistic model for optimal insurance contracts against security risks and privacy violation in IT outsourcing environments,” International Journal of Information Security, Vol. 6, pp. 197–211, 2007.

44G. Stoneburner, A. Goguen and A. Feringa, Risk Management guide for information technology systems, National Institute of Standards and Technology, Special Publication SP 800-30, 2002.

45R. Baskerville, “Information Systems Security Design Methods: Implications for Information Systems Development,” ACM Computing Surveys, Vol. 25, No. 4, pp. 375–414, 1993.

46R. Moses, “A European Standard for Risk Analysis”, in Proceedings, 10th World Conference on Computer Security, Audit and Control, Elsevier Advanced Technology, pp. 527–541, 1993.

47NIST, Description of automated risk management packages that NIST/NCSC risk management research laboratory have examined, March 1991, available at http://w2.eff.org/Privacy/Newin/New_nist/risktool.txt, accessed April 28, 2008.

48INFOSEC 1992, Project S2014 Risk Analysis, Risk Analysis Methods Database, January 1993.

49ENISA Technical department (Section Risk Management), Risk Management: Implementation principles and Inventories for Risk Management/Risk Assessment methods and tools, available at www.enisa.europa.eu/rmra/files/D1_Inventory_of_Methods_Risk_Management_Final.pdf, June 2006, accessed April 28, 2008.

50www.enisa.europa.eu/rmra/rm_home_01.html, accessed April 28, 2008.

51ENISA Technical department (Section Risk Management), Risk Management: Implementation principles and Inventories for Risk Management/Risk Assessment methods and tools, available at www.enisa.europa.eu/rmra/files/D1_Inventory_of_Methods_Risk_Management_Final.pdf, June 2006, accessed April 28, 2008.

52www.cramm.com, accessed April 28, 2008.

53www.insight.co.uk/, accessed April 28, 2008.

54www.ssi.gouv.fr/en/confidence/ebiospresentation.html, accessed April 28, 2008.

55Information Security Forum, The standard of good practice for information security, 2007 (available at https://www.isfsecuritystandard.com/SOGP07/index.htm, accessed April 28, 2008).

56www.bsi.de/english/gshb/index.htm, accessed April 28, 2008.

57https://www.clusif.asso.fr/en/production/mehari/, accessed April 28, 2008.

58www.cert.org/octave/, accessed April 28, 2008.

59www.callio.com/secura.php, accessed April 28, 2008.

60www.riskworld.net/method.htm, accessed April 28, 2008.

61www.countermeasures.com, accessed April 28, 2008.

62www.infogov.co.uk/proteus/, accessed April 28, 2008.

63www.aexis.de/RA2ToolPage.htm, accessed April 28, 2008.

64www.riskwatch.com/index.php?option=com_content&task=view&id=22&Itemid=34, accessed April 28, 2008.

65www.thesbamethod.com, accessed April 28, 2008.

66J. Dumortier and H. Graux, “Risk Management / Risk Assessment in European regulation, international guidelines and codes of practice,” ENISA, June 2007 (available at www.enisa.europa.eu/rmra/files/rmra_regulation.pdf).

67http://europa.eu.int/smartapi/cgi/sga_doc?smartapi!celexapi!prod!CELEXnumdoc&lg=EN&numdoc=32001R0045&model=guichett, accessed April 29, 2008.

68J. Dumortier and H. Graux, “Risk Management / Risk Assessment in European regulation, international guidelines and codes of practice,” ENISA, June 2007 (available at www.enisa.europa.eu/rmra/files/rmra_regulation.pdf).

69www.export.gov/safeharbor/SH_Documents.asp, accessed April 29, 2008.

70http://ec.europa.eu/justice_home/fsj/privacy/modelcontracts/index_en.htm, accessed April 29, 2008.

71J. Dumortier and H. Graux, “Risk Management / Risk Assessment in European regulation, international guidelines and codes of practice,” ENISA, June 2007 (available at www.enisa.europa.eu/rmra/files/rmra_regulation.pdf).

72www.legalarchiver.org/hipaa.htm, accessed April 29, 2008.

73http://europa.eu.int/smartapi/cgi/sga_doc?smartapi!celexapi!prod!CELEXnumdoc&lg=en&numdoc=32002L0058&model=guichett, accessed April 29, 2008.

74J. Dumortier and H. Graux, “Risk Management / Risk Assessment in European regulation, international guidelines and codes of practice,” ENISA, June 2007 (available at www.enisa.europa.eu/rmra/files/rmra_regulation.pdf).

75http://eurlex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:32006L0024:EN:NOT, accessed April 29, 2008.

76J. Dumortier and H. Graux, “Risk Management / Risk Assessment in European regulation, international guidelines and codes of practice,” ENISA, June 2007 (available at www.enisa.europa.eu/rmra/files/rmra_regulation.pdf).

77http://eurlex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2006:396:0001:0849:EN:PDF, accessed April 29, 2008.

78J. Dumortier and H. Graux, “Risk Management / Risk Assessment in European regulation, international guidelines and codes of practice,” ENISA, June 2007 (available at www.enisa.europa.eu/rmra/files/rmra_regulation.pdf).

79http://eurlex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:32005F0222:EN:NOT, accessed April 29, 2008.

80J. Dumortier and H. Graux, “Risk Management / Risk Assessment in European regulation, international guidelines and codes of practice,” ENISA, June 2007 (available at www.enisa.europa.eu/rmra/files/rmra_regulation.pdf).

81www.oecd.org/dataoecd/16/22/15582260.pdf, accessed April 29, 2008.

82J. Dumortier and H. Graux, “Risk Management / Risk Assessment in European regulation, international guidelines and codes of practice,” ENISA, June 2007 (available at www.enisa.europa.eu/rmra/files/rmra_regulation.pdf).

83www.bis.org/publ/bcbs98.pdf, accessed April 29, 2008.

84http://eurlex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:31987H0598:EN:HTML, accessed April 29, 2008.

85J. Dumortier and H. Graux, “Risk Management / Risk Assessment in European regulation, international guidelines and codes of practice,” ENISA, June 2007 (available at www.enisa.europa.eu/rmra/files/rmra_regulation.pdf).

86http://frwebgate.access.gpo.gov/cgi-bin/getdoc.cgi?dbname=107_cong_bills&docid=f:h3763enr.tst.pdf, accessed April 29, 2008.

87www.occ.treas.gov/netbank/ebguide.htm, accessed April 29, 2008.

88J. Dumortier and H. Graux, “Risk Management / Risk Assessment in European regulation, international guidelines and codes of practice,” ENISA, June 2007 (available at www.enisa.europa.eu/rmra/files/rmra_regulation.pdf).

89https://www.pcisecuritystandards.org/tech/download_the_pci_dss.htm, accessed April 29, 2008.

90J. Dumortier and H. Graux, “Risk Management / Risk Assessment in European regulation, international guidelines and codes of practice,” ENISA, June 2007 (available at www.enisa.europa.eu/rmra/files/rmra_regulation.pdf).

91http://eurlex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:32002L0065:EN:NOT, accessed April 29, 2008.

92J. Dumortier and H. Graux, “Risk Management / Risk Assessment in European regulation, international guidelines and codes of practice,” ENISA, June 2007 (available at www.enisa.europa.eu/rmra/files/rmra_regulation.pdf).

93British Standards Institute, Information technology-Security techniques-Management of information and communications technology security—Part 1: Concepts and models for information and communications technology security management, BS ISO/IEC 13335-1:2004.

94J. Dumortier and H. Graux, “Risk Management / Risk Assessment in European regulation, international guidelines and codes of practice,” ENISA, June 2007 (available at www.enisa.europa.eu/rmra/files/rmra_regulation.pdf).

95British Standards Institute, Business continuity management—Code of practice, BS 25999-1:2006.

96J. Dumortier and H. Graux, “Risk Management / Risk Assessment in European regulation, international guidelines and codes of practice,” ENISA, June 2007 (available at www.enisa.europa.eu/rmra/files/rmra_regulation.pdf).

97ISO/IEC, Information technology—Security techniques—A framework for IT security assurance—Overview and framework, ISO/IEC TR 15443-1:2005.

98British Standards Institute, Information technology—Security techniques—Code of practice for information security management, BS ISO/IEC 17799:2005.

99ISO/IEC, Information technology—Security techniques— IT network security—Part 1: Network security management, ISO/IEC TR 18028:2006.

100ISO/IEC, Information Security Management—Specification With Guidance for Use, ISO 27001.

101British Standards Institute, ISMSs—Part 3: Guidelines for information security risk management, BS 7799-3:2006.

102British Standards Institute, Information technology—Security techniques—Information security incident management, BS ISO/IEC TR 18044:2004.

103Information Security Forum, The standard of good practice for information security, 2007 (available at https://www.isfsecuritystandard.com/SOGP07/index.htm, accessed April 28, 2008).

104J. Dumortier and H. Graux, “Risk Management / Risk Assessment in European regulation, international guidelines and codes of practice,” ENISA, June 2007 (available at www.enisa.europa.eu/rmra/files/rmra_regulation.pdf).

105ISO, Financial services-Information security guidelines, ISO/TR 13569:2005.

106U.S. General Accounting Office, Information security risk assessment: practices of leading organizations, 1999.

107G. Stoneburner, A. Goguen and A. Feringa, Risk Management guide for information technology systems, National Institute of Standards and Technology, Special Publication SP 800-30, 2002.

108R. Ross, S. Katzke, A. Johnson, M. Swanson and G. Stoneburner, Managing Risk from Information Systems: An Organizational Perspective, US NIST SP 800-39, 2008 (available at http://csrc.nist.gov/publications/PubsDrafts.html#SP-800-39, accessed April 29, 2008).
