CHAPTER 5:
PRE-AUTHORIZATION ACTIVITIES – THE FUNDAMENTALS

Success is neither magical nor mysterious. Success is the natural consequence of consistently applying the basic fundamentals.48

Jim Rohn, Motivational Speaker, Philosopher and Entrepreneur

In this chapter:

Establishing the authorization team
Categorizing the information system
Defining the information system boundary
Establishing a risk management process

The primary objective of the pre-authorization activities is to set the stage for the authorization activities to follow. Certain activities, executed early, will minimize effort later and facilitate the authorization process. These include:

Establish the authorization team.

Train the authorization team.

Define the information system.

Define the accreditation boundary, which includes identifying the approving authority.

Conduct the risk assessment.

Align with the system life cycle.

It is important to note that these activities do not necessarily have to occur in sequence. Some may pre-exist the initiation of an authorization process, such as the authorization team or the overall accreditation boundary. Others may be conducted by external entities, such as the organizational risk assessment.

Establish the authorization team

In Chapter 3, we introduced the potential roles that should be part of a successful information systems security program. Here, we reiterate those that are essential to the authorization process.

Basic rule: the composition of the authorization team depends on the size and complexity of the system under examination. Recognizing that organizations have widely varying missions, sizes, and organizational structures, there will be differences in how specific responsibilities are allocated among organizational personnel (e.g. multiple individuals filling a single role or one individual filling multiple roles).49 However, the basic functions remain the same.

49 Caution should be exercised when one individual fills multiple roles in the security authorization process to ensure that the individual retains an appropriate level of independence and remains free from conflicts of interest.

Accrediting a small or simple information system will certainly not require an entire team. Larger, complex systems or a large network may require a more robust team to implement the controls, conduct the validation testing, gather and analyze the data, and provide the appropriate information to the authorizing official. Where there are multiple authorizing officials, a more robust team may be necessary to resolve issues that arise and to provide the information needed to form a proper memorandum of agreement (MOA) between the authorizing officials.

At the very least, the following roles must be involved in the authorization process:

Authorizing official (AO)

Certifying authority (CA)

Security control assessor

Information assurance manager (IAM)/information system security manager (ISSM).

There are other roles that may be included in the authorization process, as required:

Program manager (PM) or system owner/steward.

Data owner/steward.

Information assurance officer (IAO)/information system security officer (ISSO).

Information system security engineer (ISSE).

User representative.

Subject matter experts (SME) to implement the required security controls or to provide specific area-specialty information.

Unless they are already permanent members of the security staff, authorization team members should be appointed in writing. This can be done with a simple memorandum or with a form. Regardless of the format selected, the appointment orders should state the role to which the team member has been assigned, the responsibilities associated with that role, and the duration of the appointment.

Often, the authorization team consists of a matrix of individuals from various offices within the organization who are temporarily detailed to assist with the authorization activities. While this allows the integration of specialists who may not be available in-house, it is important to recognize that these individuals may not be available full time to work on the authorization process. Also, they may only have limited experience in performing authorization.

In addition to determining the members of the team, it is important to assign one person as the team lead or primary point of contact for authorization activities.

A successful authorization process starts with the assignment of the right leader. The authorization project leader is often the organization’s information assurance manager (IAM) or information assurance officer (IAO). For new and significantly modified systems or applications, the authorization project leader often works closely with the system development team.

In Chapter 4, we introduced the general roles and responsibilities of the members of the organization’s security program. Let’s now discuss the specific authorization-related roles and how they fit into the authorization of an information system.

Authorization roles by team member

The AO plays a central role in the authorization process. He/she is responsible for all accreditation decisions. This starts with approving the determination of the required security measures and safeguards (e.g. IA controls) and ends with the rendering of a risk-based accreditation decision. The AO is the only individual with the authority to assume the risk of operating an information system.

The CA and the security control assessor also have crucial roles in the C&A activities. The CA has responsibility for conducting certification activities, such as: assisting in determining system and computing environment security requirements and the associated IA controls; conducting verification and validation testing50 to determine the level of compliance; assessing and recommending security countermeasures; identifying residual risks; and – most importantly – making an accreditation recommendation to the designated accrediting authority (DAA). The CA and the controls assessor should be independent from the organization responsible for acquiring and operating the information system. This is because the CA must review the test results objectively, and the security controls assessor must test independently for compliance and make a justified recommendation regarding the information system’s level of security and the associated risks of operation.

The system owner or PM is ultimately responsible for the system acquisition – from concept, to development, to integration of the information system into its target operational environment. The PM is the best individual to represent the interests of the information system during the authorization process. The PM is also one of the best information resources for the DAA, the CA, and the IAM, since the PM often holds much of the documentation required to ascertain the level of information system security compliance, including the system description, system design, security architecture, hardware and software inventories, and core services list.

Frequently, it is the IAM/ISSM who carries much of the burden of addressing the IA controls implementation and developing the essential evidence of compliance. The IAM/ISSM often prepares the authorization package for the CA and AO to review as part of the responsibility for establishing, implementing, and maintaining a system-level information systems security program. The IAM/ISSM is often supported by an IAO/ISSO, who may assist the ISSM in the authorization process.

50 Also known as Security Testing and Evaluation (ST&E).

The user representative has a smaller, but no less important, role in the authorization process. The user representative is the individual and/or organization that represents the user community in the definition of the information system’s operational and security requirements. Input from the user representative is often critical in establishing the essential balance between the utility and the security of the information system.

Data owners are responsible for assisting the DAA and the CA in establishing and verifying the necessary level of data protection. The DAA/AO and the CA do not always have the knowledge needed to fully understand the sensitivity of the data processed by the information system – only the data owner can provide this information – which is an essential element of the risk-based authorization decision.

Last, but not least, is the role played by the SME during the authorization process. The SME can assist throughout the authorization process in the implementation and testing of the security measures and safeguards, whether technical or procedural. SMEs can be varied, ranging from the system administrator, network architect, and firewall expert to the expert in the development and testing of contingency plans.

SMEs from other security disciplines, such as physical security, can also provide value to the authorization process. These individuals may assist in site surveys, administrative security analysis, and countermeasures analysis. Although funding and training organizations are not directly involved in the authorization process, their support can be critical in the authorization effort by providing the required funding and by supporting the training needs.

If the information system requires a high degree of assurance, the authorization team can work closely with the vendor or the logistical support organization to obtain data on mean time between failures in order to determine whether the reliability, maintainability, and availability (RMA) of the system’s components meet the criteria for high assurance. Although the authorization team is not directly responsible for information system configuration management, the team must understand the configuration control process in order to determine its strengths and weaknesses. For most information systems, particularly large ones, configuration management responsibilities are part of the day-to-day IS life cycle and are usually handled by a configuration control board (CCB). This organization can be an important resource for the authorization team.

Training the authorization team should not be an afterthought

In many organizations, the individuals responsible for authorization-related activities and tasks have other job-related duties and responsibilities that they perform on a daily basis. For example, the AO is often one of the senior leaders in the organization with responsibility for the overall mission; the SME may be heavily burdened with the operational necessities of maintaining the network. Often, their involvement with the authorization process may occur only sporadically.

At the same time, authorization related duties may be highly complex and technical, influenced by the ever-increasing amount of legislation and compliance requirements, as well as the increasing complexity of the technology itself.

Consequently, early and comprehensive training is essential to ensure that the participating authorization team members have a solid understanding of the regulatory guidance and the prescribed authorization process. Some of the training may be required by regulation, such as the mandatory DAA/AO training and certification requirements. Other training may be voluntary, such as the certification of an SME in a particular technology.

Many organizations offer training in specific authorization processes, such as the DIACAP or the processes specified in the NIST guidance. Just a small amount of research into authorization training and certification will result in an abundance of training information – in fact, a single Google search came up with 138,000 results. So, before you sign up for training, check out the training provider and the instructors and be sure that they have tangible experience in successfully and cost-effectively executing authorization activities to completion.

The benefits of ensuring a trained authorization team will be readily apparent as soon as the organization begins to execute its first authorization process. In other words, a small investment up front in proper training and certification can result in larger savings during the authorization process itself.

Categorizing the information system

The simplest definition of an information system is “anything that creates, processes, stores, transmits, displays, and disseminates data.” The DOD takes a much broader approach in defining an information system: “A set of information resources organized into an entire infrastructure, organization, personnel, and components composed for the collection, processing, storage, maintenance, use, sharing, transmission, display, and disposition of information.”51

Take a look at both of these definitions. One concept that should be very clear is that an information system is NOT the hardware inventory. And software alone is also NOT an information system. An information system is a combination of the software and hardware – as well as the workstations, servers, services, and processes that run on them. In fact, the very word “system” implies that there are multiple elements that must be combined to make up the whole of an information system. When considering the definition of an information system in the context of the authorization process, consider too that a system also consists of the business functions defined in terms of mission, processes, and personnel.

51 Source: DODD 8500.1

There are many ways to classify information systems. Some define information systems by the business activities they support.

Transaction processing systems automate the handling of data about business activities or transactions.

Management information systems take the information generated by transaction processing systems and convert it into aggregated forms meaningful to managers.

Decision support systems are designed to help organizational decision makers make decisions by providing an interactive environment that uses data and models.

Expert systems represent attempts to codify and manipulate knowledge rather than information by mimicking experts in particular knowledge domains.

In the federal government, however, these categorizations are rarely used in the context of the authorization process. More frequently, information systems are defined by the structure and the nature of the system itself. Information systems can range across diverse computing platforms, from high-end supercomputers to personal digital assistants (PDAs). Information systems can also be highly specialized systems and devices, such as testing and calibration devices, telecommunications systems, weapons systems, command and control systems, and environmental control systems.

Information systems can also be standalone systems, application-based information systems performing one or more specific functions, local area networks (LAN), or large and complex systems consisting of multiple LANs. Information systems can be government owned, or they can be outsourced information systems owned by contractors but supporting essential government functions.

Defining the information system – whether by its function, its data, its size, and/or its environment, or a combination of all of these factors – is an essential first step in establishing the scope of the authorization activities. Improperly establishing the information system type can either lead to too much security and the associated costs or too little security and the associated risks.

NIST SP 800-60 specifies a useful methodology for defining information and information systems as a prerequisite for determining the required IA controls and safeguards and for establishing the accreditation boundary. Let’s first take a look at defining the type of information system.

Identifying the type of information system

The type of information processed by the information system is the primary criterion for determining the level of protection necessary for the information system. Determining the actual type of information system is also a factor in establishing the scope of the authorization and the required protections.

The federal government, specifically NIST, identifies two primary types of information systems:

General support system (GSS): “an interconnected set of information resources under the same direct management control which shares common functionality. A system normally includes hardware, software, information, data, applications, communications, and people. A system can be, for example, a local area network (LAN) including smart terminals that support a branch office, an agency-wide backbone, a communications network, a departmental data processing center including its operating system and utilities, a tactical radio network, or a shared information processing service organization (IPSO).”52

52 Definition from OMB Circular A-130, Appendix III.

Major application (MA): “an application that requires special attention to security due to the risk and magnitude of the harm resulting from the loss, misuse, or unauthorized access to or modification of the information in the application. It involves the use of information resources to satisfy a specific set of user requirements.”53

Typically, the MA is developed and implemented under the support of a program office/manager and possibly deployed through similar configurations in multiple environments.

If an MA, the authorization team should also identify the GSS upon which it resides. Identifying this link will assist with the identification and implementation of the appropriate security controls for both the MA and the GSS. Additionally, due to the existence of this connection, the security categorization of the GSS might have to be rated, at a minimum, at the same level as the highest-rated MA that resides on that GSS.
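This “high water mark” relationship can be illustrated with a short sketch. The following is a minimal example, assuming the common three-level (low/moderate/high) categorization scheme and a simple maximum rule – illustrative only, not a prescribed algorithm:

# Illustrative sketch: a GSS must be categorized at least as high as
# the highest-rated MA it hosts (a "high water mark" rule).
# The three-level scheme and the maximum rule are assumptions.
LEVELS = {"low": 1, "moderate": 2, "high": 3}

def gss_minimum_category(gss_level, hosted_ma_levels):
    """Return the minimum categorization the GSS must carry."""
    candidates = [gss_level] + list(hosted_ma_levels)
    return max(candidates, key=LEVELS.get)

# Example: a "low" GSS hosting a "high" MA must be treated as "high".
print(gss_minimum_category("low", ["moderate", "high"]))  # -> high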

The information system categories of major application or general support system address most of the information system types in the federal government’s inventory.

The DOD has expanded on these definitions and identified two additional types of information systems. These categorizations are used primarily in the US DOD, but they are presented here since they offer special consideration of two additional types of information systems: platform IT and outsourced IT.

53 Definition from OMB Circular A-130, Appendix III.

Figure 5: Information system types

In particular, federal agencies, including DOD, have increased levels of contractor support, either on site or at the contractor’s site. In addition, an increasing amount of federal data processing is being managed or executed by externally contracted organizations. The protection of this information is a critical consideration. As a result, the identification and implementation of security controls for outsourced IT takes on increased importance.

While platform IT will likely remain a type of information system unique to the DOD, the concerns of outsourced IT are not unique to the DOD, or even the federal government. It is a concern of almost all organizations – including commercial entities.

So, included here are the definitions of information system types provided by the DOD.

Enclave

The Enclave54 is the core type of information system. In other publications, the enclave may be referred to as a site. An enclave is essentially a collection of information system environments under the control of a single authority and security policy. The enclave may provide information systems security capabilities for all of the information systems within it, such as boundary defense, incident detection, and certificate management.

54 Enclaves are analogous to the general support system identified in OMB A-130.

The enclave assumes the highest level of protection required by the information systems supported within the enclave. Generally, an enclave will not change its own security mechanisms when connected with other enclaves, but will generally employ a controlled interface between enclaves. Examples of enclaves include local or wide area networks, backbone networks, and data processing centers.

Automated information system (AIS) application

The AIS application55 is usually the product of a specific acquisition or development program. It may be a single software application; multiple software applications integrated to provide a single service (e.g. personnel management); or a combination of hardware, firmware, and software designed to support specific functions across a range of missions or organizations. In earlier publications, an AIS application might have been referred to as a type accreditation.

An AIS application performs clearly defined functions for which there are identifiable security requirements that must be addressed during the system life cycle. An AIS application may be deployed within an enclave and often takes advantage of the information system security services provided by the enclave.56 While the program manager for the AIS application is generally responsible for the integration of security measures within the application, once it is deployed to an enclave for operations, the enclave assumes responsibility for its secure operation.

In order to properly determine the security requirements for the application, program managers for acquisitions of AIS applications should coordinate early in the acquisition process with the enclaves that will potentially host the applications to address operational security risks the system may impose upon the enclave. This also helps in identifying all system security needs that may be more easily addressed by enclave services than by system enhancement – thus reducing both the cost of development and the resources required for the authorization of the application.

55 AIS applications are analogous to the major application identified in OMB A-130.

56 In DOD, this is called “inherited controls”; NIST refers to this as “common controls” or “inheritance.”

The AO responsible for the enclave receiving an AIS application is also responsible for accepting the risks of integrating the AIS application into the enclave. The burden for ensuring the AIS application itself is adequately secured is a shared responsibility of both the AIS application system owner or program manager and the AO for the hosting enclave; however, the responsibility for initiation of this negotiation process lies clearly with the system owner or program manager. To the greatest extent possible, systems owners/program managers should capitalize on the common security safeguards that can be provided by the hosting enclave.

Outsourced IT

Increasingly, organizations outsource major elements of their IT support to outside providers. This raises specific security concerns due to the lack of direct control over the information systems. As a result, federal agencies, including DOD, have identified specific authorization requirements for outsourced IT providers.

Outsourced IT may refer to specific business processes supported by private sector information systems, specific information technologies, or specialized information services.

In the case of outsourced IT, the technical security is the responsibility of the service provider; however, procedural and administrative security requirements are often shared between the government client and the service provider. For example, if a payroll system is operated by a contractor, but part of the system is loaded on an agency’s computers to perform a business function, the contractor is responsible for ensuring the overall security of the information system, but the agency is responsible for ensuring appropriate security controls are in place for that automated information resource on their computer. In the best of all worlds, the security requirements should be addressed during the contracting phase and defined in the statement of work (SOW) and the service level agreement (SLA).

Platform IT

Platform IT, while not limited to the DOD, is a highly specialized category of information system and will generally not be a consideration for most organizations. Nevertheless, it can be useful to understand the definition. Platform IT refers to specialized mission-related information systems, such as weapons, training simulators, diagnostic test and maintenance equipment, calibration equipment, equipment used in the research and development (R&D) of weapons systems, transport vehicles, medical technologies such as radiology systems, and utility distribution systems such as water and electric.

The PMs for the acquisition of platform IT are ultimately responsible for the platform’s overall security requirements. If the platform has an interconnection with the larger network, the system owner/PM is also responsible for identifying the safeguards needed to ensure both the protection of the platform, as well as the interconnecting enclave. The connecting enclaves have responsibility for extending the security services (such as identification and authentication) to ensure a secure interconnection between the platform and the enclave.

Identifying the information

NIST SP 800-60 provides the following methodology for identifying the information processed by an information system:

Identify the fundamental business areas (management and support) or mission areas (mission-based) supported by the system under review.

Identify, for each business or mission area, the operations or lines of business that describe the purpose of the system in functional terms.

Identify the sub-functions necessary to carry out each area of operation or line of business.

Select basic information types associated with the identified sub-functions.

And, where appropriate:

Identify any information type processed by the system that is required by statute, executive order, or agency regulation to receive special handling (e.g. with respect to unauthorized disclosure or dissemination). This information may be used to adjust the information type or system impact level.

Once the type of information has been categorized, the organization should review the information processed by the system to determine if there are other information types that need to be categorized for authorization purposes. Knowing the type of information processed by the information system will guide you in knowing what you need to protect, why you need to protect it, and the best safeguards to put in place to protect it. This process will be discussed in greater detail in Chapter 6, since the identification of the information protection requirements is directly linked to the selection of security controls. However, assigning the information to one or more of the above listed categories is generally sufficient to support the identification of the accreditation boundary.
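To make the categorization flow concrete, here is a minimal sketch of the SP 800-60 approach: assign each identified information type a provisional impact level for confidentiality, integrity, and availability, then take the per-objective maximum across all of the types the system processes. The information types and levels shown are hypothetical examples; SP 800-60 itself supplies the authoritative provisional levels:

# Minimal sketch of the SP 800-60 categorization flow: each identified
# information type gets a provisional impact level per security objective
# (confidentiality, integrity, availability); the system categorization
# is the per-objective maximum across all types. The types and levels
# below are hypothetical; SP 800-60 supplies the authoritative values.
LEVELS = {"low": 1, "moderate": 2, "high": 3}

info_types = {
    # information type: (confidentiality, integrity, availability)
    "personnel records": ("moderate", "moderate", "low"),
    "public web content": ("low", "moderate", "low"),
    "payroll data": ("moderate", "moderate", "moderate"),
}

def system_categorization(types):
    per_objective = zip(*types.values())  # regroup levels by C, I, A
    return tuple(max(levels, key=LEVELS.get) for levels in per_objective)

c, i, a = system_categorization(info_types)
print(f"C={c}, I={i}, A={a}")  # -> C=moderate, I=moderate, A=moderate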

Defining the nature of the information system and identifying the information processed and the associated protection requirements provides the foundation for the next step – defining the accreditation boundary.

Defining the boundary ensures manageable and measurable authorization

Let’s say this once more, because it is very important: the goal of defining the accreditation boundary57 is to ensure that the authorization process is manageable and measurable.

But how difficult is it to define a system boundary and why? Defining the accreditation boundary is one of the most difficult and challenging determinations facing authorizing officials and those responsible for executing the authorization process. The primary reason – system boundary definition is largely a subjective process.

But defining the accreditation boundary helps in defining the scope of protection for information systems (i.e. what the organization agrees to protect under its direct control or within the scope of its responsibilities) and identifying the people, processes, and technologies that are part of the systems supporting the organization’s missions and business processes. Organizations also need to establish the accreditation boundary before they can determine the security categorization and develop any system security plans.

Organizations have a great deal of flexibility in determining what constitutes an information system and the accreditation boundary associated with that system. The difficulty of defining a system/accreditation boundary is influenced by the complexity of the information system, as well as the environment in which it operates.

So, exactly what does defining an accreditation boundary really mean? It is the “unique assignment of information resources to an information system for the purpose of executing C&A.”58 It is important because it will influence the scope of the accreditation activities – as well as the level of effort and cost.

57 Will be referred to as the “authorization boundary” in upcoming Federal legislation and guidance.

58 NIST SP 800-60.

Accreditation boundaries which are unnecessarily expansive (i.e. including too many hardware, software, and firmware components or other elements) can make the authorization process unwieldy and complex. Boundaries which are too limited or narrow can actually increase the number of authorization activities that need to be conducted and drive up the total security costs for the organization.

There are some very basic guidelines for establishing the accreditation boundary:

There is some form of direct management control.

The information systems have the same function or mission objective and essentially the same operating characteristics and information security needs.

The information systems reside in the same general operating environment. In the case of geographically distributed information systems, they should have similar operating environments even if they reside in various locations.

You may be one of the fortunate ones – your organization may have already defined your accreditation boundary for you. But if you are not so lucky, what are some of the criteria in determining an accreditation boundary?

First, begin by getting the answers to several important questions about the system itself. These include:

What is the primary mission of the information system?

Is it a standalone, a local network, or does it include all of the network domains in a building or a location?

Is the information system distributed across multiple buildings or even multiple geographic locations?

Does it process sensitive or classified information?

Are information systems from multiple data owners and with different accrediting authorities (AOs or DAAs) interconnected within the same network?

Let’s take a detailed look at the primary criteria for making the accreditation boundary determinations. These may be used individually or in a combination of multiple factors.

Network topology

The network topology refers to the technical components of the information system, including both the physical and logical features. The physical components consist of the hardware, software and firmware. These include the firewalls, routers, intrusion detection systems, and other boundary protection devices. The logical features of the network topology include IP addresses, network protocols, domains, virtual private networks (VPNs), and trust relationships. Some accreditation boundaries are defined by the topology of the network.

Organization

It is possible to determine the accreditation boundary based on the organization using the information system(s). There are two primary considerations when using organization to determine accreditation boundaries: ownership (who owns it?); and operations (who uses it?). While this seems like a simple determination, it is frequently not quite so easy to define. In many cases, there may be information systems from multiple system owners within a single organization. In this case, the organization may require formal agreements to ensure that the individual information systems comply with the organization’s unique requirements.

Mission

The mission of the information system(s) can be a useful criterion for determining the accreditation boundary. Information about the system mission can be acquired from many of the documents generated during the system life cycle, such as the mission need statement (MNS) or statement of need, the mission impact statement (MIS), the operational requirements document (ORD), the system security policy (SSP), and the information system concept of operations (CONOPS). Some of the missions executed by information systems include operational support, administrative office functions, or tactical operations. Part of the mission determination is also identifying how critical the information system is to the overall mission of the organization. Criticality can be based on factors such as:

loss of life or injury;

inability to execute the organization’s overall mission;

damage to organizational resources (physical and/or logical);

damage to the organization’s reputation;

damage to national security.

Location

Some information systems and their accreditation boundaries can be easily defined along geographic boundaries. Information system components can be confined to a single floor, building or region and can be evaluated within these obvious boundaries.

Location may also refer to an operational requirement, such as a remotely deployed element of the organization. For example, mobile or fielded elements of the information system require security safeguards and should be considered part of the accreditation boundary. If the organization is geographically dispersed, each dispersed element will likely have a local area network or enclave that is connected virtually to the larger organizational network. For accreditation purposes, these dispersed elements may also be considered part of the larger boundary.

Data sensitivity or classification

Data sensitivity or security classification is also a defining factor in determining the accreditation boundary. As a common rule, unclassified (public), proprietary, and classified networks are defined as separate systems. Systems processing information at varying levels of sensitivity or classification may be resident within a single organization and may be included within a single accreditation boundary. In this case, it might be useful to decompose the network into the subsystems with individual accreditations which will contribute to the overall authorization of the enclave.
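Because a boundary determination typically draws on several of these criteria at once, it can help to record them together. The sketch below shows one hypothetical way to document a candidate accreditation boundary; the fields simply mirror the criteria discussed above and do not represent any mandated template:

# Hypothetical record of a candidate accreditation boundary, with one
# field per criterion discussed above (topology, organization, mission,
# location, data sensitivity). Illustrative only - not a mandated format.
from dataclasses import dataclass, field

@dataclass
class AccreditationBoundary:
    name: str
    managing_authority: str      # who exercises direct management control
    mission: str                 # primary mission supported
    locations: list[str]         # physical sites in scope
    network_segments: list[str]  # e.g. IP ranges, domains, VPNs
    data_sensitivity: str        # e.g. "public", "proprietary", "classified"
    interconnections: list[str] = field(default_factory=list)  # external systems

branch_lan = AccreditationBoundary(
    name="Branch office LAN",
    managing_authority="Regional IT directorate",
    mission="Administrative office functions",
    locations=["Building 4, floors 1-2"],
    network_segments=["10.20.0.0/16", "branch.example.gov domain"],
    data_sensitivity="proprietary",
    interconnections=["Agency backbone network"],
)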

Boundary considerations: too narrow or too broad

When defining an accreditation boundary, there are still two looming questions: how narrow is too narrow? And how broad is too broad?

If the accreditation boundary is too narrowly defined, the authorization team may exert great effort and use a lot of resources to achieve a limited objective. The interfaces, both internal and external to the accreditation boundary, may be difficult to define, since the boundary stops short of addressing all of the applicable components “touched” by the information system. There may also be gaps where relevant devices are not accounted for, leaving the potential for missing significant risks to the information system and the organization.

If the accreditation boundary is too broadly defined, the authorization team may be faced with an overwhelming task. A boundary that is overly expansive will necessitate the evaluation and testing of a multitude of components and information processes. The result: there may not be time for a thorough analysis of the test results. Further, there is increased likelihood of frequent re-accreditation simply because there is an increased chance that something will change due to the massive number of components included in the boundary.

System boundaries may also overlap, causing redundant authorization efforts and an increased possibility for inter-organizational disputes (e.g. “turf battles”). This accreditation boundary error presents the potential for conflicting results stemming from duplicate testing on the same equipment. Conflicting regulatory requirements may also cause problems in determining which one has precedence and where to focus efforts.

Helpful hints

Here are a few ideas that may help in mastering the difficult task of determining the accreditation boundary.

An information system will have at least one system administrator designated in writing.

It is often simpler to design in security based on function, so begin with that criterion.

If your system boundaries extend beyond your location, plan for security for the information systems under your control. Coordinate security on those which you cannot control.

The accreditor should have some type of configuration control over the information systems within the accreditation boundary.

Ensure an accurate system definition to avoid disagreements regarding information system boundaries.

All parties involved in the authorization process must agree on the security requirements prior to the onset of the authorization effort.

All personnel associated with the authorization effort, especially management, must agree on the accreditation boundary, level of effort, schedule, and security requirements.

Establishing a risk management process

“The first step in the risk management process is to acknowledge the reality of risk. Denial is a common tactic that substitutes deliberate ignorance for thoughtful planning.”

Charles Tremper

Security personnel often complain that “leadership just doesn’t get it” when they try to discuss risk. An often-cited truism in information systems security is that the only truly secure computer is one isolated in a concrete bunker, without power, and with no connection to any network. This may be true – because any exposure opens an information system to potential compromise. An information system like the one described above may indeed be secure, but it certainly doesn’t help your organization accomplish its mission. Inevitably, the security of any useful information systems environment will be less than perfect, and that has to be factored into the security planning process.

Any environment involving information systems and information technology includes an element of risk. Even the most rigorously planned information environments contain uncertainties. Unfortunately, in the real world, unexpected events can and often do occur. Planning for and managing these unexpected events is a fundamental element to a secure information environment.

Our experience with organizations has demonstrated that, far more often than not, leaders DO get it. They understand that risk cannot be eliminated, it can only be managed. In fact, leaders are intelligent, sharp individuals who live and breathe risk management as a fundamental element of what they do on a daily basis. Leaders think “risk management”, while security people often tend to think “avoidance” – approaches that are subtly, but critically, different.

Risk avoidance and risk management are two approaches to dealing with an uncertain environment. Risk avoidance involves implementing all of the necessary countermeasures to eliminate every specter of risk. The question then becomes: when does the expense of eliminating risk overwhelm any potential return on investment?

Risk management is more of a process for selecting and implementing appropriate countermeasures to arrive at an acceptable level of risk at an acceptable cost to the organization. In fact, the risk management model can provide big wins for an enterprise because it directs information systems security spending where it is needed most, often resulting in a stronger security posture. Risk management attempts to answer the question: “What is the best way to invest my constrained, available resources, considering the variety of alternative options, to best accomplish my assigned mission in a potentially hostile threat environment?”

When making decisions that affect our personal lives, we usually have an intuitive understanding of risk concepts without requiring formal definitions or complex analyses.

Here’s an example of the risk management decision process based on a highly unlikely event.

We intuitively understand that the destruction caused by a supernova of the sun would be devastating – not only to our way of life, but to our very existence. But, we also intuitively know that this is unlikely to occur in our own lifetime. As we consider both the potential impact and the probability of the event actually happening, we have our own method of determining our personal level of concern for that combined state of impact and probability. What we do about it is dependent upon:

Our personal fear of the harm that may result.

Our ability to influence the probability and/or the impact caused by the event.

Our willingness to invest in influencing either the probability and/or the impact.

We have some choices in this situation:

Accept the risk and just not worry about it.

Accept the risk, but continue to worry about it.

Try to prevent the event from occurring.

Invest in countermeasures to change the probability and/or the impact.

The choice would most likely be based upon our own individual concern for the risk posed by this event, the costs and benefits of each of the alternatives, and the resources available. The intuitive risk management decision process in this situation might look like this:

Risk management process example

OPERATIONAL OBJECTIVE: Provide a long, safe, and prosperous life for ourselves and our children.

EVENT: A supernova of the sun.

IMPACT: Annihilation of the earth, our way of life, and our existence!

PROBABILITY: Probably won’t happen in our lifetime.

LEVEL OF CONCERN: There are probably many other things that have a greater impact than this.

RISK RESULT: This is below the intuitive threshold of concern, so you probably wouldn’t invest much, if anything.

ALTERNATIVES:

Do nothing – accept the risk and not worry about it.

Worry – accept the risk but continue to worry about it.

Supernova legislation – establish a policy prohibiting supernovas.

“Flail at the wind” – invest wildly in various countermeasures to try to change the probability and/or impact.

COST/BENEFIT ANALYSIS:

Do nothing – no current or future costs; will not waste time or energy on something that probably can’t be influenced anyway; does nothing about impact or probability; would be unprepared if the event occurs.

Worry – no current or future direct expenditure required, but wastes time and energy on something that probably won’t occur; does nothing about impact or probability; would be unprepared if the event occurs.

Supernova legislation – no current direct expenditures. May require future expenditure to enforce the established policy. Enforcement is probably futile with current technology. Since it is unenforceable, any expenditure would be wasteful at this time. When and if the policy is enforceable, establishing the policy may prove to be beneficial. Currently this would waste time, energy and resources on something that probably can’t be influenced anyway.

Flail at the wind – would expend current and long term maintenance resources. Would invest in countermeasures that cannot really do anything about the impact and/or probability. Wastes time, energy and resources on something that probably can’t be influenced anyway.

DECISION/SELECTION: Probably alternative 1: Do nothing. By choosing this option, a level of risk is being “accepted,” although the possible consequences may be undesirable and may not meet a standard of “acceptability.” But it is simply the best of the available alternatives at this time.

DECISION RATIONALE: Alternatives 2, 3 and 4 expend resources or energy on an impossible attempt to either reduce the likelihood of a supernova or the consequences if it does occur. There is virtually nothing that can be done to change this event. Since it is futile to develop countermeasures or prohibit the harmful acts, and because the event is so improbable, the decision is to not expend resources on an event that cannot be controlled or prevented.

However, we still might want to be able to choose a more proactive response and invest in a level of research and development to look at future options. These future options may provide the capability to meet a more desired state of affairs.
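The intuition that this event falls below the threshold of concern can be made concrete with a toy expected-loss calculation. All of the numbers below are invented purely for illustration; the point is only that a near-zero probability drives the expected loss toward zero, no matter how large the impact:

# Toy illustration of why the supernova risk falls below the threshold
# of concern: expected loss = probability x impact. All numbers are
# invented for illustration only.
impact = 10**15             # an arbitrarily huge loss value
annual_probability = 1e-12  # effectively zero within a human lifetime

expected_annual_loss = annual_probability * impact
countermeasure_cost = 10**6  # hypothetical yearly cost of "flailing at the wind"

print(f"Expected annual loss: {expected_annual_loss:,.0f}")  # 1,000
print(f"Countermeasure cost:  {countermeasure_cost:,.0f}")   # 1,000,000

# The countermeasure costs far more than the expected loss it would
# (ineffectively) address, so "do nothing" wins the cost/benefit test.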

Risk management is a process and it is cyclical in nature. It must adjust and respond to changes – in the system design and configuration, in the operating environment, and to the organization’s mission – since these might result in a change in risk. So, it is necessary to periodically revisit the risks associated with operating within a current and projected environment, and determine if a change in safeguards (e.g. processes, technology, or people) is necessary.

There are six phases of risk management:

Definition – focuses on deriving the security requirements or security policy from the operational need for the information system to provide vital functions and services.

Assessment – focuses on gaining insight into the assets, threats, and vulnerabilities that could or will be incurred based upon system use, design, and the operating environment. This phase includes the vulnerability/attack analysis, threat analysis, and mission impact analysis. These provide guidelines on determining the operational, budgetary, and risk issues that are important to the decision maker.

Selection – focuses on selecting possible courses of action together with the costs and benefits. Options can include “no change,” “shut down,” or various combinations of technical, procedural, and personnel changes intended to mitigate potential threats and/or vulnerabilities or reduce the impact of a successful attack from a threat source.

Decision – focuses on deciding between the possible courses of action. This is a critical step in the risk management process. Up to this point, the whole process is geared toward providing the decision maker(s) with the best possible information about courses of actions available to them. The decision maker must a) have the authority to accept risk on the part of the organization, b) understand the issues and information about the possible courses of action, c) be willing and able to make a decision that reflects the best possible balance between operations and security, and d) have the authority to make sure that the selected course(s) of action will be implemented. The information presented to the decision maker needs to address the decision maker’s critical issues, provide objective analysis, and be presented in a format that is useful to the decision maker. The resulting decisions must be documented and then implemented.

Implementation – effective implementation of the selected measures ensures that the risk decisions have the most effect on reducing the potential risk factors.

Repeat – when there are changes to the information system and/or its environment, the risks also change. Changes may initiate a review of the risk assessment process to determine whether or not the applied safeguards are still providing adequate security to the information system, organization, and mission.

The following diagram portrays a risk management framework. Integrating the risk framework into the overall authorization process will generally involve the functions associated with the risk executive.

This section provides introductory information on basic risk assessment and risk management concepts as they apply to information systems. It is not intended to be a comprehensive treatment of the subject. The selection and implementation of the security controls will be discussed in Chapter 6.

The risk assessment process

Just as there is no single approach to forecasting the weather or predicting the stock market, there is no single approach to conducting a risk assessment. It depends upon a variety of factors, such as:

The availability of the information needed to make a risk decision.

The quality of the data developed during the assessment.

The analytical techniques used to develop the results.

The experience and skills of the individuals conducting the assessment and the analysis.

The validity of the results of the analysis.

The risk mitigation strategy.

The degree of risk tolerance.

The preferences of the decision maker.

Figure 6: Risk management framework

Risk, risk assessment, and risk management are large and complex subjects. So, the authorization team should take advantage of all of the resources at their disposal. Many organizations conduct business impact analyses (BIA) as part of their normal operations, particularly in the corporate world. The BIA has a wealth of information that can contribute to a security-focused risk assessment.

In writing this document, we’ve tried to balance the need to provide enough information so that risk concepts and the overall framework are clear and useful, while keeping the length manageable. As a result, the discussion that follows can best be described as an introduction and primer for conducting a risk assessment.

This process is not intended to be viewed as an exact mathematical equation to use in quantitative determinations of risk level. It is important to keep in mind that assignment of risk and acceptance of risk will always involve a level of subjective decision making involving the leadership of the organization. But, let’s lay a foundation for understanding the language of risk before we proceed.

Ask a hundred information security professionals to define risk and you are likely to get a hundred different responses. Read almost any book on information systems security and you will probably discover that the terms risk, threat, and vulnerability have been used almost interchangeably – and they really aren’t the same thing. One of the recurring problems in the information systems security profession has been the lack of a consistent taxonomy.

In informal discussions among security professionals this may not pose a problem, since we can usually understand what is meant within the context of the conversation. But it can become a negative factor when trying to communicate risk concepts to those outside the security profession – particularly to smart leaders who are very familiar with the fundamental concepts of risk management. Misuse of terms and concepts in these situations can potentially damage our credibility as professionals and certainly reduce the effectiveness of our message.

Before we take a closer look at each of these individual elements, let’s establish a high-level definition of each of the components of the formula as we use them in this text.

Assets – the tangible and intangible components of an organization.

Threat – “any circumstance or event with potential to harm an information system through unauthorized access, destruction, disclosure, modification of data, and/or denial of service.”

Vulnerability – a “weakness in an information system, system security procedures, internal controls, or implementation that could be exploited by a threat.” More importantly for our discussion, a true vulnerability is a weakness for which – after analysis – the capability of the threat agent to exploit it is judged greater than the ability to counter that threat agent effectively.

Cost – the total price tag of the impact of a particular threat exercised upon a vulnerable target. Costs can be tangible and intangible. They can be measured in terms of damages to hardware or software, as well as quantifiable IT staff time and resources spent repairing these damages. “Soft” costs can also be incurred; these might include lost transactions during the downtime, lost employee productivity, loss of reputation, damage control, a decrease in user or public confidence, or other lost business opportunities.

Countermeasures, also known as controls, can be defined as the processes, tools, technologies, procedures, and configurations that might reduce threats and vulnerabilities, thus mitigating risk until it comes closer to the organization’s security acceptance threshold.

And finally, risk refers to the probability, frequency, and scale of future loss. Note the use of the term “probability.” In the final analysis, risk is not a possibility issue (e.g. either something is possible or not), but a probability issue. In other words, risk lies somewhere on the continuum between the absolute certainty of something happening and the impossibility of its occurrence.
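One common informal way to combine these components – and one possible reading of the formula the definitions above refer to – treats risk as proportional to threat, vulnerability, and cost, and inversely proportional to countermeasures. The sketch below illustrates that heuristic; the 1-to-5 scales and the specific arithmetic are assumptions for illustration, not a standard quantitative method:

# Heuristic sketch: risk grows with threat, vulnerability, and cost
# (impact) and shrinks as countermeasures improve. The 1-5 scales and
# the arithmetic are illustrative assumptions, not a standard formula.
def risk_score(threat, vulnerability, cost, countermeasures):
    """All inputs rated 1 (negligible) to 5 (severe/strong)."""
    return (threat * vulnerability * cost) / countermeasures

# Example: a serious threat against a weakly protected, high-value asset.
print(risk_score(threat=4, vulnerability=4, cost=5, countermeasures=2))  # 40.0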

Risk assessments are perishable and should be repeated at discrete time points (e.g. quarterly, once a year, on demand, etc.). After initialization through the results of the risk assessment, risk management becomes an ongoing activity that monitors the implemented safeguards. The role of risk management as part of the ongoing authorization process will be discussed in the subsequent chapters. Here, we are first going to address the risk assessment process.

If you’ve been in the security field for a while, you probably already know that there are about as many risk assessment methodologies as there are risks themselves. For this book, we have selected a very generic approach. For those interested in exploring other methodologies, we have provided a list of some of the more prevalent of these on the accompanying CD.

The risk assessment process

There are eight major steps in the risk assessment process we describe. These steps are generic in their approach, so they may be tailored in terms of length and detail, depending on the size and nature of the environment under consideration.

Table 5: Steps in the risk assessment process

Step                                             Output

1. Prepare and plan                              Risk assessment plan
2. Identify assets                               List of critical assets
3. Perform asset sensitivity analysis            Asset criticality report
4. Conduct a threat analysis                     Threat analysis report
5. Conduct a vulnerability analysis              Vulnerability analysis report
6. Execute cost/impact analysis                  Cost/impact report
7. Finalize risk assessment                      Risk assessment and analysis report
8. Assess residual risk against risk tolerance   Final risk analysis

Let’s look at each of these steps in greater detail.

Step 1: Prepare and plan the risk assessment

1.1 Understand the overall process: The risk assessment process is normally initiated based on an identified need. For example, a concern might have been raised because a recent incident occurred or, in the case of this book, the information system is being submitted for authorization. In any event, the responsible individual(s) will initiate a risk assessment for part or all of a specific IT system. The most important aspect of any risk assessment is knowing exactly what it is that needs to be protected, and why.

Normally, the responsible individual will provide the following preliminary information:

which information system is to be assessed (that is, the boundary of the assessment);

the reason for the assessment; and

the urgency or priority of the assessment.

1.2 Determine the scope of the risk assessment: The scope of the risk assessment should be tailored to the needs of the audience. For example, the assessment might require only a high level analysis to support decisions regarding further action made by senior management or it might be a detailed analysis completed for the level of management directly responsible for the information system and its security or operation.

1.3 Identify the required resources: The resources for the risk assessment can include people, time, and funding. When considering the required resources, identify and record any limitations that might affect the scope of the assessment, along with any assumptions that need to be made because of those limitations.

Examples of limitations that could restrict the scope of an assessment include: departmental policies and standards; available resources; costs; and time limits. Such limitations could greatly affect the focus and the results of the assessment, especially if it is a very complex system. These limitations need to be properly identified for the decision makers as part of the reporting process.

1.4 Identify the boundary of the assessment: Before starting the assessment, it is critical to identify both physical and logical boundaries by clearly outlining, at a high level, what the assessment will include.

The physical boundary should include: physical environment; domains; system components and subcomponents; and connections to other internal and external IT systems.

The logical boundary should include: interfaces with other internal information systems; the information assets that flow between the IT system and other internal systems and through connections to external systems; the methods of transporting these flows; and the end sources (that is, where information originates, as well as its final destination).

1.5 Identify the assessment team: The size of the assessment team depends on the size and complexity of the information system, as well as on the proposed scope and boundary of the assessment. If the magnitude of the risk assessment is such that it requires a larger team, each team member should have a vested interest in the IT system, the business function, the data being processed, or the applications used to process the data.

Assessment team members should know exactly why they are included on the team, and what contributions are expected of them. The more they know before assessment begins, the more likely it is that the assessment will be conducted quickly and successfully.

1.6 Collect the necessary information: Information needed for the risk assessment can include: descriptions of the information system(s), copies of policies and procedures, architecture diagrams and descriptions, organizational charts, and lists of key personnel, among others.

The information available for the risk assessment will also depend on when the assessment is being done in terms of the system life cycle. For an information system still in development, available information may be limited; however, more detailed information should be available for an operational information system. It is important to review all of the information collected in order to determine whether there are any aspects of the information system and its environment that may have been overlooked, but should be included in the assessment. As required, the scope of the assessment might have to be revised.

1.7 Develop the risk assessment plan: This sub-task involves developing the assignments for the assessment team; finalizing any materials needed for the assessment, such as questionnaires; identifying any tools required and personnel to be interviewed; and establishing the approximate schedule for completing the assessment.

Step 2: Identify assets

2.1 Conduct asset inventory: Too often, organizations view their assets only as the “hard” elements of their information technology. A thorough inventory of assets must include people, all types of property, core business operations, information systems, and information. The people inventory can include employees, tenants, guests, vendors, visitors, and any others directly or indirectly connected to or involved with the organization and its mission.

Figure 7: Asset categories

Property includes both tangible assets, such as the facility and other valuables, and intangible assets, such as intellectual property and critical information. Core business operations consist of the primary mission of the organization, including its reputation. Information systems include all systems, infrastructures, and equipment associated with data, telecommunications, and information processing assets. The primary criterion for selection: does the asset add value to the organization?

2.2 Interview key personnel and asset owners: Asset owners are generally the most knowledgeable about which assets require protection and which are the most sensitive and valuable. To gather asset data in an objective manner, it helps to interview those who know the most about each asset. It is also useful to develop a structured interview process and associated checklists detailing the asset subjects to be covered during interviews with site personnel.

2.3 Conduct site visits: Members of the risk assessment team should also conduct site visits as part of the asset identification process. These provide the opportunity for impromptu interviews and may also result in the identification of additional assets requiring protection. They may also provide an opportunity to record any existing countermeasures or safeguards.

2.4 Review results and develop asset list: After all of the asset data gathering has been completed, review the results for completeness and develop the final list of assets. A sample list of assets can be found on the CD accompanying this book.

Step 3: Perform asset sensitivity analysis

In order to implement appropriate security safeguards, it is essential to know not only what critical information and assets exist, but also their respective criticality or sensitivity levels. Analyzing asset sensitivity determines the importance of information assets to the business of an organization by identifying and assigning value to those assets. Valuing the assets allows the organization’s leaders to determine which areas have the highest priority, and consequently where security efforts should be focused.

3.1 Analyze the asset sensitivity: An asset’s sensitivity can be rated in terms of confidentiality, integrity and availability and can be measured both qualitatively (in relative terms) as well as quantitatively (in terms of dollar losses).

When assessing the sensitivity of an asset, the analyst(s) should also consider other factors, such as the loss of prestige, trust or business opportunity (that is, the impact on intangible assets) that would result, as well as the cost of replacement.

The confidentiality impact is that which would result from the deliberate, unauthorized or inadvertent disclosure of the asset.

The integrity impact is that which would result from the deliberate, unauthorized or inadvertent modification of the asset.

The availability impact is that which would result from the deliberate or accidental denial of the asset’s use.

(Note that the confidentiality, integrity, and availability impacts can only be determined if the asset owner has defined the required levels of each for the asset in question.)

Finally, the total financial cost to the organization is that which results from the physical or virtual loss or destruction of the asset.

Table 6: Asset sensitivity rating scale

Rating | Sensitivity description
1 | Exploitation of the asset could result in little or no loss or injury.
2 | Exploitation of the asset could result in minor loss or injury.
3 | Exploitation of the asset could result in serious loss or injury; mission or business processes could be negatively affected.
4 | Exploitation of the asset could result in very serious loss or injury; mission or business processes could fail.
5 | Exploitation of the asset could result in high dollar losses, exceptionally grave loss or injury to the organization and/or individual(s), including loss of life; mission or business processes will fail.

3.2 Review the asset and asset sensitivity list with the asset owners and create the sensitivity report: After the sensitivity of assets has been initially rated in terms of confidentiality, integrity, availability and replacement value, review these with the asset owners to make sure the list is complete and accurate. The final results should be assembled in an asset sensitivity report. (NOTE: this report can take the form of a matrix, listing the asset and the respective assigned level of criticality/sensitivity.)
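To make the rating mechanics concrete, the short Python sketch below scores a single asset on the 1-5 scale of Table 6 for each of confidentiality, integrity and availability, and rolls the three up by taking the highest rating. The high-water-mark roll-up, the field names and the sample values are our illustrative assumptions, not a formula prescribed by this process; a quantitative (dollar-loss) column could be carried alongside in the same way.

# Illustrative sketch only: rates one asset on the 1-5 scale of Table 6.
# The roll-up rule (overall = max of C, I, A) is an assumed "high-water
# mark" convention, not a mandate of the methodology described here.
from dataclasses import dataclass

SENSITIVITY_SCALE = {
    1: "little or no loss or injury",
    2: "minor loss or injury",
    3: "serious loss or injury",
    4: "very serious loss or injury",
    5: "exceptionally grave loss or injury",
}

@dataclass
class AssetSensitivity:
    name: str
    confidentiality: int  # 1-5, per Table 6
    integrity: int        # 1-5
    availability: int     # 1-5

    def overall(self) -> int:
        # The highest single impact drives the overall rating.
        return max(self.confidentiality, self.integrity, self.availability)

asset = AssetSensitivity("payroll database", confidentiality=4, integrity=5, availability=3)
print(asset.name, asset.overall(), SENSITIVITY_SCALE[asset.overall()])
# payroll database 5 exceptionally grave loss or injury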

Step 4: Conduct a threat analysis

“Currently, approximately 19 million people worldwide have skills to mount a cyberattack.”59

Frank G. Cilluffo, Director, Task Force on Information Warfare & Information Assurance

Threat identification results in an inventory of realistic threats consisting of the persons, things, events, or ideas that intentionally or unintentionally pose some danger to the information system resources of an organization. The existence of a threat that may have a potential to exploit system vulnerabilities may compromise the confidentiality, integrity, or availability of the system or its data. This threat inventory is then used to focus the process for identifying vulnerabilities.

4.1 Determine sources of threat data: Understanding the threat(s) also involves an understanding of capabilities, intent, motives, likelihood, and history. Access to threat-related information is often limited due to security classification issues, so unless the threat has been adequately researched or defined, or the information is conclusive, this can be the weakest link in the overall risk assessment process. Information can be obtained from essentially two types of sources: unclassified or “open source” information and classified sources. Depending on the type and depth of the assessment and the access available to the organization, one or both sources may be used.

59 How can 19 million people have this capability? Approximately 95% of hacking attacks are executed by “script kiddies” – individuals without extensive computer/programming or security knowledge, but who are able to exploit vulnerabilities with the assistance of tools readily available on the Internet. Most of them know very little about the scripts they use and the potential results.

The primary source of classified threat information is the organization’s own internal intelligence resources, if applicable. The second source is information published by various elements of the Intelligence Community. Individuals conducting the risk assessment should determine the type of information needed on a continuous basis, and then register with their supporting intelligence organization for regular updates of the information.

In addition, security surveys, prior internal analyses, and security incident reports may often contain useful information about threats that have resulted in incidents or concerns in the past.

Sources of unclassified threat information are too numerous to list in detail; however, several are worth mentioning. These include media, such as newspapers, websites, magazines, and other publications. The US Government Printing Office, official Internet sites, think tanks, and individual US agencies and departments disseminate information on almost every imaginable threat topic.

Official speeches and open testimony can also be a source of evaluated intelligence that is made available to the public. Directors of agencies, such as the FBI or CIA, provide frequent updates to Congress on threat issues, which are often available to the general public.

Special interest groups and professional associations are also a valuable resource for threat information. These include the Carnegie Mellon University Software Engineering Institute’s Computer Emergency Response Team (CERT) and the respective organizational CERTs, such as the Army CERT. Professional associations, such as the Information Systems Security Association (ISSA), may also cover a range of security threat information.

The process outlined will not guarantee that individuals conducting the risk assessment will obtain all the information needed to thoroughly identify and assess the threat(s); however, it does provide a framework for collecting threat information and a process for making judgments about the reality of the threat.

4.2 Identify potential threat agents: We have identified four broad groups of threat agents:

Intentional human

Unintentional human

Nature/natural disasters

Environmental.

Intentional human threats are defined as malicious, destructive exploits executed against an information system by authorized users or intruders. These can include, but are not limited to:

Intrusion or unauthorized access: Involves the act of gaining access to information system resources for malicious (attack) or non-malicious (curiosity) purposes.

Exploitation of known weaknesses/malicious code exploitation: The deliberate act of bypassing security controls for the purpose of gaining information or privileges. The exploited weaknesses could be at the operating system, application, or access control levels of an information system.

Malicious code insertion: Refers to the intentional release of malicious code against an information system and/or a network in order to affect the system. These include viruses, worms, Trojan horses, logic bombs, and others.

Misrepresentation of identity or social engineering: A technique which capitalizes on interpersonal skills to obtain access to unauthorized information and/or access to information systems.

Denial of service or saturation of system resources: Denial of service (DOS) and distributed denial of service (DDOS) are usually concerted, malicious efforts to prevent an information system, network, or service from functioning. Saturation, a common method for DOS and DDOS, involves a condition in which the information system has reached its maximum traffic-handling capacity, creating an unstable environment and potentially resulting in the lack of availability of a system resource.

Tampering: The unauthorized modification of an information system which alters the proper functioning of the equipment, potentially degrading the security functionality or trust in the information system and/or its information.

Eavesdropping: Deliberate efforts to gain access to information by “listening” using electronic bugs, inductive amplifiers on unprotected cables, packet sniffers, and keystroke monitoring.

Espionage: The covert act of obtaining information through various means. It can be conducted by foreign governments through technical means, such as eavesdropping, or through human means, such as recruiting an agent inside the targeted organization. Espionage can also take advantage of legitimate business agreements, such as licensing and on-site contractors, to gain unauthorized access to information.

Terrorism: The deliberate and potentially violent act undertaken by a group or an individual, whose motives extend beyond the act itself, generally expressing some form of social or political statement. Terrorism can be a physical act or can take advantage of all of the above listed mechanisms to achieve a goal within the realm of cyberspace.

Theft, sabotage, vandalism: Deliberate malicious acts that can result in the damage, destruction, or loss of information system assets.

Abuse or fraud by authorized users: Actions by authorized users who abuse assigned access privileges to gain additional information or privileges, or for personal monetary gain.

Procedural violation: The act of not complying with existing procedures or instructions, which could result in an information system weakness.

Unintentional human threats are those that result from human actions without a clear motive or intent. These can include, but are not limited to:

Inadvertent acts or carelessness: Acts that could cause information system damage, performance degradation, loss, or unauthorized access. These can include:

confidentiality breaches, where a user commits an error that allows information access to the wrong individuals or places information in the wrong location;

data deletion, where a user accidentally deletes data or changes system data;

integrity breaches involving user error that introduces erroneous data into the information system or causes erroneous actions by the information system;

system security feature degradation, where a user’s inadvertent actions undermine the system security features;

programming and development errors which can result in unintentional software performance errors or vulnerabilities.

Errors or omissions: Data entry errors or oversights that can result in information inconsistency or other threats to system resources. These include unintentional data entry mistakes, failure to disable or delete unnecessary or old accounts, or failure to recover common access cards, keys, or other access tools from departed or terminated users.

Improper handling of media: Improper marking, handling, and disposal of sensitive media can result in the unintentional exposure of information to unauthorized individuals.

Installation errors: Errors in the implementation of hardware, firmware, and/or software that could result in information system weaknesses or undermine existing security safeguards. Examples include not implementing built-in software security features, incorrect installation or set up of devices, authorizing users to download and install external and uncontrolled programs, and untested installation of patches.

Accidents: Accidents can result from spills, exposure of the information system to hazardous substances, or physical damage to the information system.

Threats from nature and/or natural disasters are those that are not related to human actions or devices. According to the National Security Institute, more information system loss is associated with natural threats than with more widely publicized threats, such as malicious code or unauthorized network attacks. Examples of threats from nature and natural disasters include hurricanes, tornadoes, floods, earthquakes, extreme cold or heat, and lightning.

Environmental threats are those that can be introduced by the conditions in which the information system is operating. These include, but are not limited to:

Environmental conditions: The result of the controlled or uncontrolled environmental conditions in which the information system is operating. Examples include water leaks in server facilities, excess humidity in the network operating center, poor ventilation, or air conditioning failures.

Power fluctuations/failures: A power fluctuation is a short-term disruption in the primary power source, such as a power surge, spike, brownout or blackout, resulting in either insufficient or excessive power. A power failure usually involves a much broader effect, such as an overall utility failure or broad scale power disruption.

4.3 Analyze the threat agent: The process of defining threats does not end with a list of all of the potential and relevant threats. Once information is gathered and an inventory of possible threats is developed, the next step is to characterize the threats and assign a level of probability.

Threat intent is determined most frequently by inference, generally by asking questions such as: Does an adversary have a need for the asset we are trying to protect?

Could an adversary gain by exploiting or destroying an asset?

What are the possible motivations, e.g. political agenda, terrorist activity, criminal gain?

When assessing a threat capability, there are two general considerations. The first is the capability to obtain, damage, or destroy an asset. The second is the adversary’s ability to capitalize on the asset once it has been obtained. Some of the questions to ask include:

Does the adversary know the asset exists and where it is located?

What are the adversary’s demonstrated modes of operation?

A history of a threat being exercised is another predictor of possible future activity. Reviewing incident data is one method for developing historical data. Here it is useful to address the following questions:

Can the source of the incident be identified?

Have there been similar incidents in the past?

Can they be attributed to the same source?

The following tables demonstrate one way to look at threat agent capability in combination with motivation, intent, and history.

Table 7: Threat agent capability with motivation, intent, and history

Capability | Rating | Motivation, intent, history
Little or no capability to mount an attack. | 1 | Little or no motivation or demonstrated intent. No history of attack. Not inclined to act.
Moderate capability: has the knowledge and skills to mount an attack, but lacks some resources; or lacks some knowledge, but has sufficient resources to mount an attack. | 2 | Moderate level of motivation or demonstrated intent. Limited history. Would act if prompted or provoked.
Highly capable: has the knowledge, skills and resources to mount an attack. | 3 | Highly motivated with demonstrated intent. Prior history of attack. Almost certain to attempt an attack.

Table 8: Threat agent rating combination

Table 9: Overall threat agent rating

Rating | Threat agent description
1 | Little or no capability, motivation, intent and history.
2 | Little or no capability with a moderate level of motivation, intent and history; or moderate capability with little or no motivation, intent and history.
3 | High capability with little or no motivation, intent and history; or little or no capability with a high level of motivation, intent and history; or moderate capability with a moderate level of motivation, intent and history.
4 | High capability with moderate motivation, intent and history; or moderate capability with a high level of motivation, intent and history.
5 | High capability with high motivation, intent and history.
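Read together, Tables 7 and 9 suggest a simple additive combination: the 1-3 capability rating and the 1-3 motivation/intent/history rating sum, minus one, to give the overall 1-5 threat agent rating. The sketch below encodes that reading; it reproduces every combination listed in Table 9, but it is our interpretation of the tables, not a formula the methodology states.

# Assumed reading of Tables 7-9: combine two 1-3 ratings into the
# overall 1-5 threat agent rating of Table 9.
def threat_agent_rating(capability: int, motivation: int) -> int:
    """capability and motivation/intent/history are each rated 1-3 (Table 7)."""
    if not (1 <= capability <= 3 and 1 <= motivation <= 3):
        raise ValueError("ratings must be 1, 2 or 3")
    return capability + motivation - 1  # yields 1-5, matching Table 9

# A highly capable agent (3) with moderate motivation (2) rates 4 overall.
print(threat_agent_rating(3, 2))  # 4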

All of these variables, and the subtle interactions between them, must be considered as part of determining likelihood. As a result, threat likelihood is often the most difficult aspect of risk to characterize, due to the linkages between these various pieces of information. The following table provides three possible definitions of likelihood:

Table 10: Likelihood definitions

Likelihood level | Likelihood definition
High | The threat-source is highly motivated and sufficiently capable, and controls to prevent the vulnerability from being exercised are ineffective.
Medium | The threat-source is motivated and capable, but controls are in place that may impede successful exercise of the vulnerability.
Low | The threat-source lacks motivation or capability, or controls are in place to prevent, or at least significantly impede, the vulnerability from being exercised.

4.4 Summarize the threat analyses in the threat report: The last step is to record the results of the threat analysis in a threat report, indicating those which have the highest likelihood of occurrence.

Step 5: Conduct a vulnerability analysis

Vulnerability identification is the next step in the risk management process. Due to the sheer breadth of this subject, the vulnerability discussion presented here cannot be exhaustive. It is intended rather to serve as a guide to spur vigilance and discussion.

Information system weaknesses or vulnerabilities exist everywhere. The mere presence of a vulnerability does not necessarily cause any harm. A vulnerability is merely a condition or set of conditions that might allow an information system and/or its associated activities to be harmed by a threat agent. In other words, a vulnerability for which there is no credible threat does not necessarily require a response by the security processes.

Vulnerabilities can occur whenever systems are not effectively designed, improperly implemented, and/or inadequately protected. The types of vulnerabilities that can be identified may vary depending upon the maturity of the system’s development – its phase within the system life cycle:60

If the information system is in the concept phase or very early in the design phase, the identification of vulnerabilities should focus on the design schematics, planned security controls, and the vendor or developer’s product analyses and the concept documentation.

If the information system is at the production and early implementation phases, vulnerability identification should be expanded to include more specific design information, such as the planned security features described in the security design documentation and the results of certification test and evaluation.

If the information system is already operational, the process of identifying vulnerabilities includes an analysis of the system security features and the security controls and safeguards, technical and procedural, assigned to protect the information system.

5.1 Gather vulnerability information: There are five primary ways to gather vulnerability information:

Questionnaires/checklists

On-site interviews

Document reviews

Observation

Testing.

60 Source: NIST Special Publication 800-30, Risk Management Guide for Information Technology Systems.

Questionnaires and checklists: Risk assessment personnel can use questionnaires and checklists to determine the security controls planned for integration into the information system or already implemented. Questionnaires and checklists of this type will generally be provided to technical personnel, developers, system administrators, and non-technical management personnel responsible for designing and supporting the IT system. Questionnaires and checklists are primarily useful for determining if the information system in question addresses known vulnerabilities or has included pre-determined security safeguards.

There are also many credible industry sources that provide information useful in developing the questionnaires, checklists and preparing for the interviews. Many of these can be found on the Internet or in system information descriptions provided by vendors. These will also often include fixes, service packs, patches, or other mitigations for known vulnerabilities.

There are also “pre-made” checklists available for use. Many of these can be found through the Information Assurance Support Environment (IASE) managed by the Defense Information Systems Agency (DISA), located at http://iase.disa.mil/stigs/stig/index.html. This site hosts Security Technical Implementation Guides (STIGs) for a range of security configurations, such as application security and development, databases, domain name servers, enclaves, instant messaging, networking, personal computers, remote computing, Unix, voice over IP, web servers, Windows® operating systems, and wireless, among others.

Interviews: Interviews with information system, network support, and security management personnel can also enable a risk assessment team to collect useful information about the information system, especially how the information system is operated and managed.

Document reviews: Policies, system documentation (e.g. user guides, system design and requirements documents, acquisition documents), and security related documentation (e.g. audit reports, system test results, security plans) are a good source of information about the security controls planned for and implemented in the information system. It is also useful to review the organization’s mission and asset criticality assessments for information regarding information system and data criticality and sensitivity.

On-site visits: On-site visits by the assessment team will generally result in observations about the physical, environmental, and operational security of the information system. For information systems still in the concept and design phases, on-site visits provide the assessment team with an opportunity to evaluate the viability of the security safeguards and the target physical environment in which the information system will operate.

Testing: Testing using manual methods and automated scanning tools is another efficient way to gather information about system vulnerabilities. For example, an automated vulnerability scanning tool, such as Retina, can identify vulnerabilities and unauthorized services that may be present on an information system. In almost every case, some of the vulnerabilities identified by an automated scanning tool will be false positives, i.e. not real vulnerabilities in the context of the system configuration and/or operational environment. When using these tools, it is important to have a follow-on analysis conducted by technical personnel knowledgeable about the information system’s configuration, operating requirements, and environment.

Security test and evaluation, also called verification and validation, is another method used to identify information system vulnerabilities. Generally, the purpose of this type of testing is to determine the effectiveness of the information system’s security controls as they have been applied to the system and/or its environment. This process will be addressed in greater detail in Chapter 7.

Penetration testing can be used to complement the other methods of information system testing. The objective of penetration testing is to test the security of the information system from the perspective of a threat agent and to identify potential points of failure. NIST Special Publication 800-42, Guideline on Network Security Testing, provides a methodology for information systems testing and for the use of automated tools.

5.2 Assign to vulnerability areas: When assessing an information system and its environment for potential vulnerabilities, there are three primary areas to which these can be assigned: management, operational, and technical.

Management area vulnerabilities: Management vulnerabilities are generally evidenced in the lack of proper or comprehensive policies and procedures. While not directly vulnerabilities themselves, these often enable the existence of other vulnerabilities associated with the information system and/or its operational environment. Management area vulnerabilities are divided into:

administrative policy and procedures;

physical security policy and procedures;

personnel security policy and procedures.

Administrative policy refers to the formal, documented procedures for selecting and implementing security measures and safeguards. Vulnerabilities are often found in weak countermeasures; deficiencies in the development and maintenance of procedures, guidance documents, and definitions of responsibilities; insufficient life cycle management; and a lack of security standards.

Physical security vulnerabilities occur when there are weak countermeasures in the physical layout of, or access to, facilities and environments where information systems are located. These weaknesses include inadequate or ineffective physical access controls or intrusion detection (e.g. badge systems, alarms, cameras) and lack of or deficient security controls for the physical site boundary protection (e.g. perimeter fencing, door locks).

Personnel security vulnerabilities are deficiencies in the controls and procedures that ensure that all personnel have the required information access authorization, including clearances, for access to information and information systems. These can be demonstrated in weak safeguards for screening staff, processing background and security checks, hiring and termination processes, and security training.

Operational area vulnerabilities: Operational vulnerabilities are associated with the security procedures in the operational environment in which the information system is being used. Vulnerabilities in the operational area span many practices and procedures, including, but not limited to:

Security monitoring

Auditing

Media protection

Security documentation

Account management

System backup

Contingency planning

System maintenance

Configuration management

Labeling and data control

Sanitization and disposal.

Technical area vulnerabilities: Technical vulnerabilities are those weaknesses associated with the hardware, firmware, and software, as well as the information system architecture and technical configuration. Most of the focus on addressing vulnerabilities has been centered on technical vulnerabilities and technical solutions, largely because this is one of the most visible and highly publicized areas. Some of the technical vulnerabilities considerations include, but are not limited to:

Account management

Passwords

System access

System integrity monitoring and reporting

Session controls

External and internal connectivity

Telecommunications

Boundary protection, such as firewalls, intrusion detection, proxy servers

Encryption

Anti-virus protection

Audit technology

Remote access.

A sample list of vulnerabilities is provided on the companion CD.

5.3 Rate likelihood of exploitation: The likelihood of a threat agent exploiting a vulnerability is based on:

The exposure level of the vulnerability

The severity level of the vulnerability.

The following tables demonstrate a mechanism for rating vulnerabilities.

Table 11: Vulnerability severity and exposure rating

Severity | Rating | Exposure
Minor: Vulnerability requires significant resources to exploit, with little potential for loss. | 1 | Minor: Asset is not exposed. Effects of vulnerability tightly contained. Does not increase the probability of additional vulnerabilities being exploited.
Moderate: Vulnerability requires significant resources to exploit, with significant potential for loss; or vulnerability requires some resources to exploit, with moderate potential for loss. | 2 | Moderate: Asset has some exposure. Vulnerability can be expected to affect more than one system element or component. Exploitation increases the probability of additional vulnerabilities being exploited.
High: Vulnerability requires few resources to exploit, with significant potential for loss. | 3 | High: Asset is exposed. Vulnerability affects a majority of system components. Exploitation significantly increases the probability of additional vulnerabilities being exploited.

Table 12: Vulnerability rating combination

Table 13: Overall vulnerability rating

Rating | Description
1 | Minor exposure, minor severity.
2 | Minor exposure, moderate severity; or moderate exposure, minor severity.
3 | High exposure, minor severity; or minor exposure, high severity; or moderate exposure, moderate severity.
4 | High exposure, moderate severity; or moderate exposure, high severity.
5 | High exposure, high severity.
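Table 13 follows the same additive pattern as the threat agent rating: the 1-3 severity and 1-3 exposure ratings of Table 11 combine into the overall 1-5 vulnerability rating. As before, the one-line rule in this sketch is our reading of the tables rather than a stated formula.

# Assumed reading of Tables 11-13, mirroring the threat agent rule.
def vulnerability_rating(severity: int, exposure: int) -> int:
    """severity and exposure are each rated 1-3 (Table 11)."""
    if not (1 <= severity <= 3 and 1 <= exposure <= 3):
        raise ValueError("ratings must be 1, 2 or 3")
    return severity + exposure - 1  # yields 1-5, matching Table 13

print(vulnerability_rating(2, 3))  # moderate severity, high exposure -> 4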

5.4 Determine existing countermeasures: In place countermeasures may mitigate the risk to a vulnerability even in the presence of a malevolent and capable threat agent, together with a vulnerability which could potentially be exploited by that threat. All else being equal, more countermeasures can result in less risk, and so countermeasures appear in the denominator to the algorithm presented at the beginning of the section. Countermeasures can reduce the likelihood of a successful attack and so reduce risk.

Information systems security countermeasures can be technical and non-technical in nature. Technical countermeasures are controls that are an integral part of the information system’s hardware, software or firmware, including access control mechanisms, identification and authentication mechanisms, encryption and intrusion detection systems. Non-technical controls can be management and operational processes and procedures, such as personnel and physical security procedures. We will discuss the actual selection and implementation of security controls in greater detail in Chapter 6.

The focus of countermeasure analysis is the determination of how effectively the applied control has addressed the risks identified in the risk assessment process. What remains after the controls have been applied and their effectiveness has been evaluated is termed residual risk.

The Office of Management and Budget (OMB), Circular No. A-130, defines residual risk as the “risk that remains in operation of an information system after all possible, cost-effective threat mitigation measures have been applied.” The level of residual risk presented by system operation is the final output of the risk management process introduced in this section. This residual risk analysis forms the true basis for the determination by the AO to either allow or deny authorization to operate an information system.

Step 6: Execute cost/impact analysis

As part of the risk assessment process, an organization needs to determine the actual costs of theft, modification, or destruction of a critical asset. Often called impact, the cost to an organization can be either tangible or intangible. Costs or impacts are incurred when a threat agent exploits a vulnerability resulting in some effect on an asset.

6.1 Assign costs/impact: The costs to an organization of a successful attack depend greatly on the value of the target. If the cost or impact of a security failure is limited, then the allocation of scarce resources to promote security systems and processes should also be limited. For example, the loss of routine office correspondence might occasion little concern. On the other hand, there are some security failures with exceptionally dire consequences.

For example, a failure of the public switched network that carries telephone and computer communications could be devastating and could even inhibit deployment of military forces, emergency response teams or law enforcement officials. In the extreme case of cyberwarfare – attacks on a nation’s information infrastructure – the results could be serious enough to affect the outcome of a geopolitical crisis without a single shot being fired. Obviously, as the value of the target rises, the impact of a successful attack goes up as well, and so our sense of risk increases. Consequently, cost is also considered a multiplier in the risk algorithm.
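Pulling these threads together: the text treats threat, vulnerability and cost as multipliers, with countermeasures as the denominator, in the risk algorithm referenced at the beginning of this section. A worked example of that relationship, using purely illustrative numbers on an assumed 1-5 scale for each factor, might look like this:

# Worked example of the relationship described above, not a calibrated
# model: risk rises with threat, vulnerability and cost (impact), and
# falls as countermeasure effectiveness (the denominator) increases.
def relative_risk(threat: int, vulnerability: int, cost: int,
                  countermeasures: float) -> float:
    return (threat * vulnerability * cost) / countermeasures

baseline = relative_risk(threat=4, vulnerability=4, cost=5, countermeasures=1.0)
mitigated = relative_risk(threat=4, vulnerability=4, cost=5, countermeasures=4.0)
print(baseline, mitigated)  # 80.0 20.0 -- stronger safeguards, lower residual risk

The exact numbers carry no meaning; only the direction of each term does, which is all a relative ranking needs to convey to the decision maker.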

A key point here is that different organizations will have very different cost (impact) concerns. For example, a government agency will have dramatically different areas of consideration than a financial institution. There are also, however, some common considerations related to confidentiality, integrity, and availability of their information and information systems.

Some of the basic cost considerations are:

How long can we live without access to our information and/or information systems before there is a dramatic impact on mission and operations?

If we lose the confidentiality of our information or our customer’s information, what is the cost to our organization?

If the integrity of sensitive records is questionable, what is the effect on our organization and our reputation?

The actual determination of cost (impact) will depend heavily on the perspective of the organization. A good starting point for making cost determinations is to begin with those considered high – e.g. having a dramatic cost (impact) on the organization. Examples might include:

Loss of life

Inability to execute a mission

Excessive downtime

Major loss of money.

Typically, a medium cost is one that is considered significant by the organization. These might include:

Loss of customer confidence

Significant delay in a mission

Loss of a strategic advantage

Significant loss of money.

Low cost determinations are reserved for those considerations that might have only a limited or lesser cost to the organization. Some examples are:

Limited loss of money

Customer complaints

Limited delay in a mission.

It is not unusual for an organization to feel that any loss, degradation, or delay will present a major cost or impact, so there is a tendency to rate a large number of the possible outcomes of a threat agent exploiting a vulnerability as high. If everything is rated high, however, the ratings provide little value to the organization. In reality, not all assets (tangible and intangible) deserve the same level of protection.

6.2 Prepare cost/impact report: Once the costs/impacts have been determined, assemble the information in a cost or impact report. This does not have to be a formal or separate report; it can simply associate each asset with its assigned cost/impact level.

Step 7: Finalize risk assessment and analysis

7.1 Consolidate the asset, threat, vulnerability, and cost/impact information: This is the step where all of the above data is consolidated: the assets and their respective criticalities have been defined; likely/probable threat agents have been determined; the asset vulnerabilities, exposure and existing countermeasures or safeguards have been identified; and the potential costs/impact of a threat agent compromising an asset have been determined.

7.2 Review existing/planned safeguards: An initial review of existing safeguards was conducted when assessing the vulnerabilities. In this step, these safeguards/countermeasures are once again reviewed in light of all of the collected data. In addition, any planned safeguards will be considered.

The following table provides an example listing of threat agents and their interrelationships with the data described in the preceding sections.

Table 14: Assessment interrelationships
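One lightweight way to hold the interrelationships that Table 14 presents is a record per asset carrying the ratings produced in steps 3 through 6, sorted by a combined score for the report. The field names, sample values and multiplicative sort key below are hypothetical illustrations of how the consolidated data might be organized, not the layout of Table 14 itself.

# Hypothetical consolidation structure for step 7.1; all names are ours.
from dataclasses import dataclass

@dataclass
class RiskEntry:
    asset: str
    sensitivity: int       # 1-5, from step 3
    threat_rating: int     # 1-5, from step 4
    vuln_rating: int       # 1-5, from step 5
    impact: int            # 1-5 cost/impact, from step 6
    safeguards: list[str]  # existing and planned countermeasures

entries = [
    RiskEntry("payroll database", 5, 4, 3, 5, ["access control", "encryption"]),
    RiskEntry("public web pages", 2, 3, 4, 2, ["patching"]),
]

# Prioritize entries for the risk assessment and analysis report.
def score(e: RiskEntry) -> int:
    return e.threat_rating * e.vuln_rating * e.impact

for e in sorted(entries, key=score, reverse=True):
    print(e.asset, score(e))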

7.3 Identify residual risk: Determine what, if any, risk remains after consideration of existing and planned safeguards. Also consider possible constraints that might affect the implementation of safeguards/countermeasures, including:

legal constraints;

contractual constraints (lease agreements, etc.);

collective agreements;

cost;

potential loss of productivity;

operational overhead;

enforceability;

management style.

7.4 Prepare the risk assessment and analysis report: The risk assessment and analysis report should contain a prioritized record of the assets at risk from the identified threat agents, after consideration of all of the data described in paragraphs 7.1 and 7.2. This includes a statement of residual risk.

Step 8: Assess residual risk against risk tolerance

The Committee of Sponsoring Organizations of the Treadway Commission (COSO) has defined risk tolerance or “appetite” as “… the amount of risk, on a broad level, an entity is willing to accept in pursuit of value (and its mission).” Risk appetite is influenced by the organization’s culture, operational strategies, and infrastructure. It is not a constant; risk appetite is influenced by and must be able to adapt to changes in the environment.

Defining the organization’s risk tolerance must be an executive responsibility based on the organization’s goals and objectives. Management assesses the alternatives, sets objectives aligned with strategy, develops business processes to accomplish the plan, and manages any inherent risks. Risk tolerance can be defined as the residual risk the organization is willing to accept upon reaching the state of having determined its risk and implemented its set of risk-mitigation and monitoring processes.
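Reduced to its essentials, this step is a single comparison: does the scored residual risk fall at or below the tolerance the executives have declared? A minimal sketch, assuming both values are expressed on the same scale:

# Minimal sketch: residual risk versus declared risk tolerance,
# assuming both are expressed on the same (arbitrary) scale.
def within_tolerance(residual_risk: float, risk_tolerance: float) -> bool:
    return residual_risk <= risk_tolerance

print(within_tolerance(20.0, 25.0))  # True: acceptable; document and monitor
print(within_tolerance(20.0, 10.0))  # False: mitigate further or revisit the decision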

The full risk assessment: Yes or No?

It is easy to see from the detailed risk assessment and analysis process described above that this can be a long and costly process. Is it really necessary to conduct the full process in order to determine essential security controls and make an authorization to operate decision?

Some form of risk determination is an essential part of the AO’s risk based decision; however, it may not be necessary to exercise the full extent of the process in order to arrive at a reasonable risk determination.

Even though Appendix III of OMB Circular No. A-130 does not require a formal risk assessment such as that described in this section, Appendix III does state that “the need to determine adequate security will require that a risk-based approach be used.” This approach should at least consider the major factors in risk management at a high level: the value of the information and the information system, threats, vulnerabilities, and the effectiveness of current or proposed safeguards.

Ultimately, the risk management process is about making decisions. The cost of a successful attack on an organization’s information infrastructure and the level of risk that is acceptable in any given situation are, by necessity, individual policy decisions. The threat is whatever it is, and while it may be mitigated, controlled or subdued through the selection of appropriate countermeasures, it remains beyond the direct control of the information systems security process.

In order to ensure greater success in managing risk, the process must address weaknesses in the information system’s hardware, firmware, software and architecture during the design, development, fabrication and implementation phases of our facilities, equipment, systems and networks.

Risk assessments and the resulting capability to manage risk may seem inherently complex, but even complex issues can be understood when broken down into simple steps. So, let’s take this entire section and boil it down into the following simple questions:

What assets do you want to protect? While this question may seem self-evident, many organizations do not take the time to understand what is really valuable to them.

What are the threats to these assets and the likelihood that these threats will be exercised? This may never be fully known, but looking at capabilities and history may provide some insight.

What are the vulnerabilities or weaknesses in the system that could be exploited by a willing and capable threat agent? Remember that an asset or a vulnerability may not be visible to a threat agent, so it may not warrant the high cost of certain protections.

What would be the cost of a successful attack? If a vulnerability can be exercised by a threat agent, the cost of protection should be weighed against the potential cost to the organization of a successful attack.

What are the potential countermeasures or security safeguards and how well do they mitigate the risk? Residual risk is what remains of the threat, vulnerability, and cost after the application of safeguards.

How will the above information be analyzed and presented in order to make an appropriate risk management decision? The risk assessment and management process is not a singular activity. Organizations must establish risk monitoring and evaluation activities as part of a continuous process.

This section focused on information system-related security risk. But this is just one component of the larger organizational risk that senior leaders tackle as a routine part of their ongoing management responsibilities. Risk can take many forms, e.g. investment risk, budgetary risk, program management risk, legal liability risk, safety risk, inventory risk, and the risk from information systems.

Effective risk managers know that organizations operate in highly complex and interconnected worlds using information systems to accomplish critical missions and to conduct important business. Organizations recognize that well-informed management decisions are necessary in order to balance the benefits gained from the use of these information systems with the risk to the organization posed by the same systems. The risk assessment is the means to provide leaders with the information they need to make these decisions.

Managing risk, either information system-related security risk or other types of risk, will never be an exact science. It can only represent the best collective judgment of those individuals responsible for ensuring the day-to-day operations of organizations.

Align with the system life cycle (SLC)61

NIST Special Publication 800-64, Security Considerations in the Information System Development Life Cycle, defines the SLC as “the scope of activities associated with a system, encompassing the system’s initiation, development and acquisition, implementation, operation and maintenance, and ultimately its disposal that instigates another system initiation.”

Information systems security, including the authorization process, should be considered throughout the SLC, starting with the preliminary system concept. Identifying IA safeguards early in the SLC will ensure that key elements, such as technical security requirements, scheduling, and cost and funding issues associated with executing requirements for IA and authorization, are addressed and maintained.

The security requirements of information resources must be considered as they are planned to operate when fully functional, not necessarily how they currently operate. Security safeguards should be considered for the data that will be processed by the information system and the planned system configuration, even if that information is not yet being processed and the design is not fully solidified. The data requirements and system configuration may change throughout the life cycle of the information system, but it is important to have accurate classifications at each stage of the life cycle, so that appropriate security controls can be identified and applied. As the need for changes to the information classification and the system configuration surfaces, the system description should be updated to accurately reflect the current state of sensitivity or mission criticality.

61 Many publications refer to the system development life cycle (SDLC) in this same context. But we feel that the integration of information systems security and the information system life cycle goes far beyond development and extends through the full life of the system until it is removed from service.

As a result, the number and nature of suitable security controls will vary depending on the phase within the SLC and acquisition cycle. The relative maturity of an information system’s architecture and design may influence the types of appropriate security controls. The blend of security controls is also dependent upon the mission of the organization and the role of the information system within the organization in supporting that mission. One way to identify the ideal mix of management, operational, and technical security controls is through the risk assessment, analysis, and management process.

We will provide a more exhaustive description of the relationship between the authorization process and the SLC in Chapter 14.

Milestones from the pre-certification and accreditation activities:

Before proceeding to the next phase, the actual initiation of activities for a specific authorization requirement, let’s take a final look at what you should achieve in this preliminary phase.

The authorization team is established and each member is familiar with their role(s).

Each member of the authorization team is trained in their respective specialties, as well as in the authorization processes.

The information and the information system are characterized.

The accreditation boundary is determined and the AO notified of the pending authorization.

The enterprise and system level risk assessment is complete and the risk management process is initiated.

The authorization activities are aligned with the system life cycle and are part of the process of development, deployment, and operations.

Much of the effort you put into this preliminary phase remains in place or “re-usable” for parallel or future authorization efforts. This includes the authorization team, the enterprise risk assessment, and perhaps the accreditation boundary.

Further reading

Alberts, Christopher and Dorofee, Audrey. Managing Information Security Risks: The OCTAVE (SM) Approach. Addison Wesley Professional, 2002.

Calder, Alan and Watkins, Steve G. Information Security Risk Management for ISO27001/ISO17799. IT Governance Publishing, 2007.

OCTAVE (Operationally Critical Threat, Asset and Vulnerability Evaluation). Available at http://www.cert.org/octave/.

Roper, Carl. Risk Management for Security Professionals. Butterworth-Heinemann, 1999.

Schneier, Bruce. Beyond Fear: Thinking About Security in an Uncertain World. Springer, 2006.

References

Barker, William C. Guide for Mapping Types of Information and Information Systems to Security Categories. ITL Bulletin, July 2004.

Department of Defense Instruction 8500.2, Information Assurance Implementation, 2003.

National Institute of Standards and Technology (NIST) Special Publication 800-30, Risk Management Guide for Information Technology Systems.

National Institute of Standards and Technology (NIST) Special Publication 800-59, Guideline for Identifying an Information System as a National Security System.

National Institute of Standards and Technology (NIST) Special Publication 800-60, Guide for Mapping Types of Information and Information Systems to Security Categories.

Peltier, Tom. Information Security Risk Analysis. Auerbach, 2001.
