3

Cyber Security Objectives

Given the complex nature of cyber security technology, and the fact that cyber security threats only escalate, it might be expected that policymakers are constantly confronted with decisions on how to react to the latest threat. However, because it is often the case that decisions concerning cyber security measures are delegated to technologists, a policymaker may not actually see these decisions being made, and thus not have a chance to weigh in on the organizational impact of various alternative approaches. In fact, the cyber security arms race often seems to offer very few alternative options. Almost immediately after cyber security technology is introduced, its usage is declared industry standard by some regulatory body, and this locks organizations into the identified countermeasure approach. For example, if a regulated organization decided to use a cyber security approach that did not make use of firewalls, it would face detailed scrutiny from its regulatory auditors. It seems easier to keep up with the latest security tools and technologies than to rethink an organizational approach to cyber security.

Nevertheless, if there is any lesson in Chapter 2, it is that new paradigms for cyber security are sorely needed. In this chapter, we critically examine the policy objectives that evolved with the history of cyber security as described in Chapter 2. Note that these cyber security policy objectives did not then and do not necessarily now correspond to organizational goals for cyber security. Nevertheless, in this chapter, we also review methods used to determine that cyber security policy goals have been met. We observe that those who set security objectives often mistake achievement of objectives for accomplishing security goals. We conclude that current cyber security metrics do not measure security at all. The chapter ends with three case studies that illustrate how cyber security goals may be established and how cyber security goal achievement may be measured.

3.1 Cyber Security Metrics

Measurement is the process of mapping from the empirical world to the formal, relational world. The measure that results characterizes an attribute of some object under scrutiny. Combinations of measures corresponding to an elusive attribute are considered derived measures and are subject to interpretation in the context of an abstract model of the thing to be measured (ISO/IEC 2007). Metrics is a generic term that refers to the set of measures that characterize a given field. Cyber security is not the direct object of measurement, nor a well-enough-understood attribute of a system to easily define derived measures or metrics. So those engaged in cyber security metrics are measuring other things and drawing conclusions about security goal achievement from them. This challenge has spawned a field of study called security metrics (Jaquith and Geer 2005).

Metrics in physical security traditionally have concentrated on the ability of a system to meet the goal of withstanding a design basis threat (DBT) (Garcia 2008). A DBT describes characteristics of the most powerful and innovative adversary that it is realistic to expect to protect against. In New York City, it may be a terrorist cell equipped with sophisticated communications and explosive devices. In Idaho, it may be a 20-person-strong posse of vigilantes carrying automatic assault weapons on motorcycles. Adopting a DBT approach to security implies that the strength of security protection required by a system should be calculated with respect to a technical specification of how it is likely to be attacked. In physical security, this process is straightforward. If the DBT is a force of 20 people with access to explosives of a given type, then the strength of the physical barriers to unauthorized entry must withstand the amount of force that these 20 people could physically bring into contact with the system. Barrier protection materials are specified, threat delay and response systems are designed, and validation tests are conducted accordingly.

In cyber security, perpetrator, threat, exploit, and vulnerability are terms of the trade; their meanings are distinct but interrelated. As depicted in the systemigram of Figure 3.1, a perpetrator is an individual or entity. A threat is a potential action that may or may not be committed by a perpetrator. An exploit refers to the technical details that comprise an attack. A vulnerability is a system characteristic that allows an exploit to succeed. Thus, the mainstay of the systemigram of Figure 3.1 is read as, “Security thwarts perpetrators who enact threats that exploit system vulnerabilities to cause damage that adversely impacts value” (Bayuk, Barnabe et al. 2010).

Figure 3.1 Security systemigram mainstay.

c03f001

Since the advent of computer systems, DBTs for computer security have considered potential perpetrators such as hackers in the form of joyriders, malicious agents of cyber destruction, and espionage agents. However, unlike a physical security analysis of DBT, the countermeasures designed in response to the threat did not concentrate on the threat actors themselves, and what their latest tactics might be, but on the technology vulnerabilities that were exploited to enact the most recent threat. As each type of system vulnerability reached the stage of security community awareness, a corresponding set of security countermeasure technologies came to the market, and became part of an ever-increasing number of best practice recommendations. Countermeasures were applied to vulnerable system components, and threats to systems were assumed to be covered by the aggregated result of implementing all of them. Figure 3.2 illustrates this approach by adding these concepts and the relationships between them to the systemigram of Figure 3.1. Figure 3.2 shows that cyber security metrics, management approaches, audits, and investigation techniques are based on security tools and techniques. Unfortunately, as described in Chapter 2, they have been derived from the tools and techniques in use rather than specified as system requirements.

Figure 3.2 Full security systemigram.

c03f002

The consensus that security goals are met by countermeasure technology has come at the expense of addressing DBTs as part of the system design itself. Figure 3.3 illustrates the difference between this traditional approach to security architecture and a more holistic, system-level approach. It depicts vulnerable attributes of a system as a subset of system attributes, and perpetrator targets as a subset of the system’s vulnerable attributes. Traditionally, security engineering has attacked this problem with security-specific components, derogatorily referred to as “bolt-ons.” These are often labeled “compensating controls,” a technical term in the audit profession that refers to management controls that are devised because the system itself has no controls that would minimize damage were the vulnerability to be exploited. Bolt-ons are by definition work-arounds that are not part of the system itself, such as the firewalls described in Chapter 2. The lower part of Figure 3.3 illustrates the contrast between a bolt-on approach to solving security problems and a security design approach that instead is expected to alter system-level attributes to eliminate or reduce vulnerability. If this approach is tried first, the number of security-specific compensating controls should be minimal.

Figure 3.3 Bolt-on versus design.

c03f003

Nevertheless, there instead seems to be an almost unconscious adoption of the list of security technologies as described in Chapter 2. The effect is that a typical security goal presentation shows the progress of implementation of those security technologies listed by business area and computer operating system. Figure 3.4 is a typical example. In the analysis that would typically accompany the figure, the fact that the marketing business area does not have as much security as the finance area might be explained with reference to a higher risk tolerance on the part of marketing versus finance. As may be evident from the cycle of threat, countermeasure, threat, countermeasure reviewed in Chapter 2, cyber security professionals have their hands full just getting the business areas that want to reduce risk up to the full measure of security technologies available. It is a reactive approach that leaves little time to evaluate what the threats really are, and thus what overall enterprise goals of security should be (Jaquith 2007). Also, although some surveys indicate this situation may be improving (Loveland and Lobel 2011), cyber security practitioners historically have had relatively little input from the business stakeholders they are protecting. A few good examples are the situations described in Section 2.4 wherein business users used email to send sensitive material to customers as well as to their own home PCs even though these actions were prohibited by security policy. In fact, it would appear that the security staff were at odds with the goals of the business. Cyber security activity to date has been characterized by a heads-down approach, concentrating on applying controls and countermeasures. It is a method of problem solving by reacting to external threats with constraints on operations. A focus on enterprise-level goals for security has been missing.

Figure 3.4 Example of cyber security metrics.

c03f004

Compare this phenomenon to management issues in other complex areas. If a manufacturing line is having trouble keeping the equipment running, do they continue despite its obvious negative effect on the product? In a well-run manufacturer, strategic thinkers prevail, and the manufacturing line is redesigned, perhaps pruned, before returning to operation. If components of a transportation system are chronically under repair and causing service delays, are they patched while they are running? At least in efficiently run organizations, they are pulled out, and perhaps replaced or reconfigured. By contrast, in cyber security, it has often been the case that an examination of the underlying business process is not presented as part of the engineering tradespace. This is particularly true in organizations where technology operations are managed independently from business operations. The business runs in parallel to security measures, not in conjunction. Security practitioners are exhorted not to interfere with systems operation, and security itself is not considered a critical component of system functionality. Hence, its failures often take management by surprise, often with devastating effects.

In nonsecurity areas, where specific goals are included in system requirements, there is always a recognition that the goals may not be achieved, and corresponding contingency plans are made for business operations. If revenues do not meet projections, business expansion plans may be revisited. If a new marketing plan does not increase customer traffic, alternative ad campaigns are made ready. If a new customer service strategy alienates customers, it is immediately revised. If security goals are seen in the same light, then security strategy planning would receive similar scrutiny. For example, the goal of “protect intellectual property” would have a corresponding definition of intellectual property that would allow its protection to be monitored to ensure goal achievement. Such monitoring would include both verification that the plan was properly executed, and validation that the plan achieved its security objectives. Failure in either such verification or validation would trigger remediation measures.

3.2 Security Management Goals

Many executives have no articulated goal for security other than “I want to be secure.” In such cases, there is also an element of the goal that goes without saying, as the full articulation would typically be, “I want to be secure with little or no impact to my organization.” They provide this directive to security professionals the same way they delegate balance sheet management to the accounting staff, saying, “I want the numbers to be accurate.” Putting aside the parallels between the two professions concerning the need for legal and regulatory compliance, the delegation amounts to trusting that the professional to whom the executive delegates understands the issues involved in the assignment and is capable of working closely with all those in the business who are stakeholders in the delegated functions to achieve the executive’s goal.

However, the accounting profession has a well-established, several-thousand-year history supporting its ability to define trust in terms of relationships that involved a combination of circumstances and sanctions (Guinnane 2005). By contrast, the cyber security profession has existed for just a half century or so since the first industry or national security standards, and far less than that since the advent of international security standards (a small sample includes DoD 1985; ISO/IEC 2005a,b; FFIEC 2006; Ross, Katzke et al. 2007; PCI 2008). Moreover, rather than any agreed-upon industry standard, such as accounting’s generally accepted accounting principles (GAAP), there are so many competing standards in cyber security that a business has been established to catalog and compare them (UCF ongoing). The product is delivered in a spreadsheet or other structured data format. It is meant to be imported into a security information management (SIM) system, and it allows a security manager to demonstrate compliance with multiple standards without having to read them all.

Security programs that are motivated by regulatory compliance are not specifically designed to achieve organizational goals for security, but instead are designed to demonstrate compliance with security management standards. Hence, the standards themselves have become de facto security metrics taxonomies that cross organizational borders. Practitioners are often advised to organize their metrics around the requirements in security management standards against which they may expect to be audited (Herrmann 2007; Jaquith 2007). There is even an international standard for using the security management standards to create security metrics (ISO/IEC 2009b).

The disadvantage to this type of approach to security management is that details of standards compliance are seen as isolated technology configurations to be mapped to a pre-established scorecard, as opposed to the scorecard being designed to reflect enterprise goals for security. None of these standards comprise a generally accepted method of directly measuring security in terms of achievement in thwarting threats (King 2010). They are typically used to ensure that management has exercised due diligence in establishing activities that should result in security, not to measure whether those activities have been effective.

Contrast this with the layman’s view of security. For example, individuals who have changed jobs sometimes measure the security at the old and new firms in terms of the degree of difficulty they face in accessing important data and information, both locally and remotely. For example, they may identify the number of passwords they have to use from their desktops at home to access customer data in the office, and decide that the firm that makes them use more authentication factors is more secure. Figure 3.5 shows this type of layered-defense depiction of system security. Such layering is often called defense in depth. The term refers to an architecture where security controls are layered and redundant, so that a vulnerability in one part of the system will be compensated for by a control in another. That is, no one control should present a single point of failure, because at least two controls would have to break for an intruder to get in.

Figure 3.5 A layered defense.

c03f005

Figure 3.5 provides a layered perspective on a typical network of the type in Figure 2.6. It has multiple security “layers,” as described in the central lower part of the diagram. At the top of the diagram, the “Remote Access” user is illustrated as being required to authenticate to a workstation, which may or may not be controlled by the enterprise. The user then authenticates via the Internet to the enterprise network. From the network access point, the remote user can directly authenticate to any of the other layers in the internal network. This is why remote access typically requires a higher level of security, because once on the internal network, there are a variety of choices for platform access.

This remote access path is contrasted with the access path for the Web application in Figure 3.5. In the case of the web application, the existence of the layers does not actually constitute defense in depth. This is because such Internet accessible applications are usually accessible with just one log-in. The web application path shows that Internet users typically authenticate to their own workstations, which are not controlled by the enterprise. A user then can access the application without authenticating to the network because the firewall allows anyone on the Internet to have direct access to the login screen of the application on the web server. There is also no need to authenticate to the operating system of the server itself. Once within the application, the data authentication layer is not presented to the user; the application automatically connects to it on behalf of the user. These conveniences are depicted in the figure as bridges through the layers that the remote user would have to authenticate to pass, but the application user does not. Hence, to apply the term defense in depth to this case would be a misnomer.
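
To make the contrast concrete, the following minimal sketch (in Python, with layer names and access paths assumed from the Figure 3.5 discussion rather than taken from any real configuration) models each access path as the set of layers at which the user must independently authenticate with enterprise-controlled credentials.

    # Illustrative sketch only; layer names and paths are assumptions based on Figure 3.5.
    # Each access path is modeled as the set of layers at which the user must
    # independently authenticate with enterprise-controlled credentials.
    access_paths = {
        "remote_access_user": {"workstation", "network", "platform", "application", "data"},
        "web_application_user": {"application"},  # the firewall bridges straight to the login screen
    }

    def defense_in_depth(path_layers, minimum=2):
        # A path offers defense in depth only if at least `minimum` independent
        # controls must be defeated before an intruder reaches the data.
        return len(path_layers) >= minimum

    for name, layers in access_paths.items():
        print(name, len(layers), "layer(s), defense in depth =", defense_in_depth(layers))

By this simple count, the remote access path is layered while the Internet-facing web application path is not, which is exactly why applying the term defense in depth to the latter is a misnomer.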

Recalling the technology required to fortify these layers as presented in Chapter 2, it is obvious that multiple devices must be configured in coordination to ensure that each lock on each layer is actually closed to those who do not have a key. Hence, in much of the literature on security metrics, the goal is assumed to be correct configuration of all of these layers (Hayden 2010). However, despite this assumption, there is not a standard taxonomy for security metrics. Principles to be used in such classification have been explored by different researchers, and these explorations have produced different results. A survey of security metrics taxonomy efforts was compiled a few years ago and still accurately describes the field from the practitioners’ viewpoint (Savola 2007). It reported that a common theme in the security metrics literature was that taxonomies of security metrics tended to address technical configuration and operational process from the point of view of security management rather than to directly describe business goals for security. Even taxonomies that include governance in addition to management tend to focus on the security management tasks that are evidence of governance, and those metrics could easily be considered part of the management category (CISWG 2005). As illustrated in Figure 3.6, it is recommended that security metrics be raised to consider business-level requirements for security.

Figure 3.6 Security management metrics.

c03f006

However, there is an issue with this approach. It is that there is currently no convergence around a single organizational management structure for security, so there can be no corresponding authoritative business-level security metrics taxonomy. Instead, there has been a great deal of consensus around standards for security process (ISO/IEC 2005a,b; ISACA 2007; ISF 2007; Ross, Katzke et al. 2007).

Yet even within the standards community, there is a debate on what makes a good measure of security. For example, the National Institute of Standards and Technology (NIST) sets standards for creating security metrics (Chew, Swanson et al. 2008), but is also on the record with a report that observes that current systems security measures are inadequate, and has called for research in security metrics (Jansen 2009). This report acknowledges a difference between managing security consistent with some standard and providing effective security. This correctness and effectiveness distinction is analogous to an engineering distinction between verification and validation, which highlights a distinction between the statements, “the system was built right” and “the right system was built” (INCOSE 2011). The former refers to conformance to design specifications and the latter refers to the ability of the design to achieve desired functionality. The NIST report also suggested a classification of security metrics into leading, concurrent, and lagging indicators of security effectiveness. An example of a leading indicator is a positive assessment of the security of a system that is about to be deployed. Concurrent indicators are technical target metrics that show whether security is currently configured correctly or not. Lagging indicators would be discovery of past security incidents due to inadequate security requirements definition, or failures in maintaining specified configurations. If the goal is to know the current state of system security, concurrent indicators would make better metrics. However, as there is no system attribute currently recognized to be security, there is no agreement on what a concurrent security metric looks like. That is, any one organization can judge whether its systems were built “right,” that is, to their specifications. But no organization has reached the holy grail in cyber security, which is to know that the “right” security was built.

Recommendations for security metrics often suggest a hierarchical metrics structure where business process security metrics are at the top, and the next level includes support process metrics like information security management, business risk management, and technology products and services (Savola 2007). As illustrated in Figure 3.6, the supporting processes are expected to achieve security via goal decomposition into more granular measures, perhaps through several decomposed layers until there are only leaf-level measures, that is, considering the hierarchy as a tree, and reading the lowest level at the end of a branch. Each leaf-level measure is combined with its peers to provide an aggregation measure that determines the metric above them in the hierarchy. For example, the leaf in Figure 3.6 labeled “Product Security” would be filled in with the accumulated totals from the graph in Figure 3.4 for the technologies classified as security products. This number would be combined with the Security Service metric to provide an overall Security Technology metric. Assume that Security Logs, Web Security, Operating System Security, and Network Security are considered products and Encryption, Identity Management, and Remote and Wireless are considered services. The average percentage target goals achieved in each subset for the four business areas would be called the “Product Security” and “Service Security” metrics, respectively. The average of those two would be the “Technology Security” metric. This method of measurement is still verification that the design for security was implemented (or not) as planned, rather than validation that the top-level security goals are met via the process of decomposition and measures of leaf performance.
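
The aggregation just described can be sketched in a few lines of Python. The grouping of products and services follows the assumption stated above, and the percentage scores are placeholders rather than data from Figure 3.4.

    # Hypothetical leaf-level scores: average percentage of target achieved per
    # technology across the four business areas (placeholder numbers, not Figure 3.4 data).
    product_scores = {"Security Logs": 80, "Web Security": 70,
                      "Operating System Security": 90, "Network Security": 85}
    service_scores = {"Encryption": 60, "Identity Management": 75,
                      "Remote and Wireless": 65}

    def average(scores):
        return sum(scores.values()) / len(scores)

    product_security = average(product_scores)                        # "Product Security"
    service_security = average(service_scores)                        # "Service Security"
    technology_security = (product_security + service_security) / 2   # "Technology Security"

    print(product_security, service_security, technology_security)

However computed, the rolled-up number remains a verification measure: it reports how much of the planned technology is in place, not whether that technology defeats any threat.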

3.3 Counting Vulnerabilities

A notable exception to the technology management approach to security metrics, though still one that does not directly measure security, is vulnerability and threat focused. This is the enumeration of system vulnerability and misuse techniques. NIST and MITRE encouraged a consortium of security product vendors and practitioners to contribute to an endlessly growing repository of structured data describing known software vulnerabilities in a project known as the National Vulnerability Database (NVD) (MITRE ongoing). The first Common Vulnerability Enumeration (CVE) was published in 1997 (MITRE ongoing). This provided some standard by which security protection efforts would be judged to be effective by providing a “to-fix” list. Starting with the second antivirus vendor, it has been hard for security practitioners to know whether the security software they use protects them from any specific piece of malware. This is because antivirus vendors give names to malware that are different from competitor names for the same malware if they feel they should get credit for being the first to discover it (a product manager from a large antivirus company actually admitted this in a conference panel; Gilliland and Gula 2009). Just listing the vulnerabilities that allowed malware to work did not address the concern that malware had to be identified in order for it to be eradicated, so in 2004, the CVE was followed with a Common Malware Enumeration (CME) that catalogs malware that exploits vulnerabilities. This facilitates the development of automated methods to detect and eradicate malware.

The MITRE NVD data was extended in 2006 to include the Common Weakness Enumeration (CWE), which is a list of software development mistakes that are made frequently and commonly result in vulnerabilities. An example of a specific issue would be the identification of a software security flaw that appears on the “Never-Events” list. The list is a metaphorical reference to the National Quality Forum’s (NQF) medical Never-Events list (Charette 2009). That list includes medical mistakes that are serious, largely preventable, and of concern to both the public and health-care providers for the purpose of public accountability, such as leaving a surgical instrument in a patient. The software integrity version of the Never-Events list is the list of the top 25 mistakes software developers make that introduce security flaws (previously identified as CWEs). SQL Injection in the metric example for this category refers to one of those never-events. An SQL-injection mistake allows database commands to be entered by web page users in such a way that the users have the ability to execute arbitrary database queries that provide them with information that the application is not designed to allow them to access (Thompson and Chase 2005, ch. 21). The metric is the number of applications that allow SQL injection to occur. Measurement would rely on an application inventory to provide the 100% target of SQL-injection-free applications, as well as systematic source code scanning processes run by someone familiar with how system authentication is designed to work.

To cover the possibility that some system access feature may have been intended, but nevertheless introduces a security vulnerability, in 2009, NIST introduced a Common Misuse Scoring System, which provides a method to measure the severity of software “trust” flaws by correlating them with estimates of negative impact (Ruitenbeek and Scarfone 2009).
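
The SQL injection never-event mentioned above is easy to demonstrate. The following self-contained sketch uses a hypothetical customer table in an in-memory SQLite database; the vulnerable pattern of concatenating user input into a query is the mistake that source code scanners look for, and the parameterized query is the repair.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (username TEXT, card_number TEXT)")
    conn.execute("INSERT INTO customers VALUES ('alice', '4111-1111-1111-1111')")
    conn.execute("INSERT INTO customers VALUES ('bob',   '5500-0000-0000-0004')")

    user_input = "nobody' OR '1'='1"  # crafted input arriving from a web form

    # Vulnerable pattern: user input concatenated directly into the SQL statement.
    vulnerable = "SELECT username, card_number FROM customers WHERE username = '%s'" % user_input
    print(conn.execute(vulnerable).fetchall())   # returns every customer's card number

    # Safer pattern: a parameterized query treats the input strictly as data.
    safe = "SELECT username, card_number FROM customers WHERE username = ?"
    print(conn.execute(safe, (user_input,)).fetchall())  # returns an empty list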

All types of vulnerabilities in the NVD are used to create security metrics by using them as a checklist and checking a technology environment to see if they exist. This database is also used by security software vendors to create a set of test cases for vulnerabilities against which security software should be effective. These are not only anti-malware vendors, but also vendors of software vulnerability testing software. Penetration tests of the type used by malicious hackers (also known as “black hats” in reference to old Western movies where the heroes always wore white hats) are designed by cyber security analysts (“white hats”) to exploit any and all of the vulnerabilities in the NVD. They are automated so they can be run from a console. The security metric is usually the inverse of the percentage of machines in inventory that test positive for any of the vulnerabilities in the database.
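
A minimal sketch of that inventory-based metric follows; the host names and findings are hypothetical. As the next paragraph argues, a perfect score demonstrates only the absence of known vulnerabilities.

    # Hypothetical scan results: host -> NVD vulnerability identifiers detected.
    scan_results = {
        "web-01": ["CVE-2011-0001"],
        "web-02": [],
        "db-01":  [],
        "app-01": ["CVE-2010-1234", "CVE-2011-0002"],
    }

    def clean_host_percentage(results):
        # Share of inventoried machines with no known vulnerabilities detected.
        clean = sum(1 for findings in results.values() if not findings)
        return 100.0 * clean / len(results)

    print(clean_host_percentage(scan_results))  # 50.0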

If a stated security goal is to have no known vulnerabilities, this type of test may seem to provide a good cyber security metric. However, in practice, this type of measurement process is fraught with both false positives and negatives due to the difficulty of designing and executing tests in multiple environments (Thompson 2003; Fernandez and Delessy 2006). Moreover, while such vulnerability metrics may be useful to a security practitioner whose goal is to protect only against commonly known attacks, this is a flawed approach to security goal-setting in general. These metrics will necessarily miss the zero-day attack, and so, if a complete technology inventory test for all the known NVD vulnerabilities was passed with flying colors, then this would not mean that the system was secure. It could simply mean that if the system had security bugs and flaws, those bugs and flaws were not yet identified. As one software security expert puts it, they are a badness-ometer (McGraw 2006). As illustrated in Figure 3.7, these types of measures can provide evidence that security is bad, but there is no number on the scale that would show security is good.

Figure 3.7 Security badness-ometer.

Source: McGraw (2006).

c03f007

3.4 Security Frameworks

So far, the usage of cyberspace in this book has generally corresponded to Internet-related technologies and how they have been used by various e-commerce and government constituents. However, this is only one way to view cyberspace. Where cyberspace is connected to something other than a database of sensitive information, the understanding of the impact of any given metric on a goal will change considerably. Cyberspace occupies automobiles, trains, boats, planes, buildings, amusement parks, and industrial control systems (ICSs). At the smaller end of the spectrum, it occupies radio antennas, refrigerators, microwaves, audiovisual systems, and mobile phones. Goals for cyber security, and methods to achieve those goals, will vary considerably with the framework within which cyber components operate.

In this chapter, we describe e-commerce systems generically as a framework in order to contrast them with other types of frameworks. There are as many systems frameworks as there are ways to use electronics, so we first describe e-commerce systems and then follow with two frameworks at opposite ends of the spectrum for illustration purposes: ICSs and personal mobile devices.

3.4.1 e-Commerce Systems

e-Commerce systems are Internet-facing systems that facilitate transactions. The “e” itself is short for the now obvious adjective, “electronic,” as in “electronic commerce.” e-Commerce has matured to the point where many retailers only exist online, and many brands are only available via online stores and businesses. In addition to traditional customer-to-business relationships (C2B), e-Commerce also includes business-to-business (B2B) transactions conducted between manufacturers, suppliers, distributors, and retail stores.

e-Commerce systems are called “Internet facing” because they are designed to be directly reached by any other system on the Internet. In order to be Internet facing, a system must be connected to an Internet service provider (ISP). ISP is a generic term for different types of companies that provide Internet connectivity services. An ISP may be a local cable company, a large telecommunications carrier, a municipal network operator, or a web hosting service provider. The common element of the service is that network traffic between the customer and the Internet traverses the ISP. Figure 3.8 illustrates a few alternate ISP connections in the context of the Internet as a whole. Because of the large numbers of systems that must be represented in any diagram of the Internet, the Internet itself is depicted in network diagrams as a cloud. The cloud symbol has been in use since the 1970s and in no way is meant to refer to the subset of Internet services that today utilize the word “cloud” as a marketing term.

Figure 3.8 e-Commerce system environment.

c03f008

Note that in Figure 3.8, the connection from the customer to the hosting service provider is not itself a direct Internet connection. Rather, it is facilitated by a telephone line, cable, or wireless link that becomes a conduit to the Internet through the hosting provider network. This line is typically leased from a large telecommunications carrier, but that carrier is not the ISP for the customer; the hosting service provider connects the customer to the Internet via their own relationship with a telecommunications carrier. Where a hosting service provider and a client have offices in the same building, they may just arrange for a wire to connect their equipment through a wall or ceiling duct. The diagram is meant to illustrate that there is no single type of company that provides Internet service. Different companies will offer different types of services, including cyber security services, to their customers. Some types of cyber security services, such as denial of service attack mitigation, may only be possible to perform as an add-on to a carrier service. Others, such as mail spam filtering, may only be possible to perform as an add-on to a hosting service. Hence, the way a system connects to the Internet may constrain its options for cyber security.

Once Internet connectivity is established, a typical e-commerce system will follow the general architecture of Figure 3.9. There will be firewalls between the enterprise border and any external network. All computers that face the Internet will be enclosed within an isolated network zone. Any security-critical system will be connected to an internal network zone with no direct routing to external networks. User desktops will also typically be segregated into their own network zone. Various security technologies will be placed at network zone interfaces to facilitate tasks such as remote access to the internal network, intrusion detection, and communications monitoring.

Figure 3.9 e-Commerce system architecture.

c03f009
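
A zoning policy of this kind can be sketched as a simple allow-list of permitted flows between zones. The zone names and rules below are assumptions based on the generic architecture just described, not a real rule set.

    # Illustrative sketch of network zone segmentation; zones and flows are assumed.
    ALLOWED_FLOWS = {
        ("internet", "internet_facing_zone"),       # e-commerce servers face the Internet
        ("internet_facing_zone", "internal_zone"),  # brokered access via the application tier
        ("desktop_zone", "internal_zone"),          # user desktops reach internal services
        ("desktop_zone", "internet_facing_zone"),
    }

    def flow_permitted(src_zone, dst_zone):
        # True if the firewall policy allows traffic from src_zone to dst_zone.
        return (src_zone, dst_zone) in ALLOWED_FLOWS

    print(flow_permitted("internet", "internal_zone"))        # False: no direct routing
    print(flow_permitted("internet", "internet_facing_zone")) # True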

In addition to the in-house architecture, many e-commerce systems will be dependent on fellow e-commerce business partners to complete the user experience for their application. For example, their website may contain a link for directions to their retail stores, or a link to their stock performance, and that link will take the user to a site that specializes in maps and equity analysis, respectively. The map may look like it is part of the original vendor’s site, but the actual image will be delivered by another company with whom it has a business relationship that includes an agreement to provide subsets of the features on the original vendor’s website. Hence, the complete availability of the original site will necessarily be dependent on services that are outside their control. These techniques are used to deliver advertising as well.

It is also the case that providers of frequently used website features, such as store locators or news releases, will allow their software to be used for free in return for being able to advertise to the customers of the original vendor’s site. Scenarios where the user experiences a composite of e-commerce websites are sometimes referred to as mashups. A mashup is a website wherein multiple companies’ e-commerce services are combined into a single web page under the heading of a single e-commerce vendor.

The purpose of an e-commerce system is usually to provide continuous transactions for customers on Internet-facing servers, while simultaneously facilitating the business transactions received from the Internet with robust and reliable transaction execution. Security features that facilitate this purpose include, but are not limited to:

  • System redundancy—if one system goes down, another takes its place.
  • System diversity—if one system is vulnerable to an attack in progress, transactions it supports can be supported with alternative technology.
  • System integrity—systems are not changed unless there is a well-defined and tested plan to maintain service continuity while the system undergoes change.
  • Transaction accountability—counterparties are identified in a manner that does not allow them to repudiate their activity on the e-commerce site.

Note that these four security features, if accomplished, would be sufficient to support an overall goal of transaction security. Each feature may require the integration of multiple technology components. Each feature will have its own set of goals that indicate whether security features have been implemented as designed, that is, that the system was built right. However, security measurements that determine whether security goals are met are validation rather than verification metrics, and answer the question of whether the right system was built. Validation of security goals requires measurements of the system in the context of its operation rather than measures of the system's conformance to security specification. It requires evidence that the purpose of the system will not be adversely impacted by security threats.

It has been our observation that everyone’s first instinct in proposing security validation metrics is to measure successful attacks or intrusions. For example, in the book, How to Measure Anything, the author suggests that security goals be measured by the absence of successful virus attacks (Hubbard 2007). The process described in the book is to start with what you know, structure that knowledge, identify what you would like to know, and use the structured data you have to reduce uncertainty concerning your object of measure. Applied to security, this approach makes sense; however, the suggested metric of “absence of successful virus attacks” suffers the fatal flaw that it measures progress toward a goal by the absence of an event rather than by any positive indicator that the goal is met. Using this approach, a system that is rarely attacked will be judged to be more secure than another simply because its security has not often been tested.

It is therefore common to attempt to bolster the “absence of virus” metric by planning and executing attacks on one’s own system. This combines the absence of viruses with the absence of the vulnerabilities known to be exploited by the set of all currently identified malicious software. This practice is called “penetration testing” and makes use of badness-ometers as described in Section 3.3. As these attacks are fully understood at the time security features to thwart them are designed, this practice demonstrates that a design specification was verified, not that a design goal was validated.

Validation of security goals for an e-commerce system can only be achieved with reference to its purpose in the context of its operation. It requires not just evidence that the latest set of known attacks will fail, but evidence that it is not possible (or at least extremely difficult) to enact security threats that impact system performance. Such a demonstration requires that the system in operation be subject to the types of failures that would be caused by a determined attacker rather than some simulation of any one or more known methods of attack. Hence e-commerce business continuity measures typically include failure mode testing that demonstrates that the failovers among redundant and diverse components are routine and are capable of being conducted without impact on system integrity and transaction accountability and without warning to system operators. However, this does not require a fully automated environment as accidents and false alarms may inadvertently trigger security responses. In these cases, to automate a response would cause unnecessary failover activity. As noted in Chapter 1, systems security includes people, process, and technology working in concert. Note also that validating all security goals requires that system integrity and transaction accountability features are also included in redundant and diverse alternative system configurations. Though no system will ever be 100% secure, there are known technology architecture patterns for design of e-commerce systems that facilitate these capabilities. Validation metrics should show that the system both properly works as designed and that the design thwarts attacks that are known examples of e-commerce crime.

One way to create such metrics is to model criminal activity using attack path analysis techniques. In this approach, attack goals are decomposed into subgoals, and activity required to achieve each subgoal is measured in terms of time, cost, or other quantifiable effort on the part of the attacker. Each path leading to system compromise is then measured in terms of overall capability required to complete all subgoals leading to system compromise. This technique allows for strategic placement of security measures to deter and delay attackers, as well as corresponding incident management processes designed to respond to attacker activity while it is in progress, and before it causes harm. Ideally, the metrics would be used to show that successful system compromise is beyond the capability of any known adversary.
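
A minimal sketch of the attack path arithmetic follows. The subgoals and effort estimates are hypothetical; in practice they would be derived from the DBT and from the delay introduced by each security measure.

    # Hypothetical attack paths: each path is a sequence of (subgoal, estimated effort).
    attack_paths = {
        "steal card data via web application": [("find injectable form", 2),
                                                ("extract database", 1)],
        "steal card data via insider":         [("recruit insider", 30),
                                                ("exfiltrate records", 1)],
        "steal card data via stolen credentials": [("phish administrator", 5),
                                                   ("defeat second factor", 20),
                                                   ("exfiltrate records", 1)],
    }

    def path_effort(subgoals):
        # Total effort required to complete every subgoal on a path to compromise.
        return sum(effort for _, effort in subgoals)

    cheapest = min(attack_paths, key=lambda p: path_effort(attack_paths[p]))
    for name, subgoals in attack_paths.items():
        print(path_effort(subgoals), name)
    print("cheapest path for the attacker:", cheapest)

Security measures are then placed so that the least costly remaining path still exceeds the capability of any known adversary, and incident response is aligned to the delay that each subgoal imposes on the attacker.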

3.4.2 Industrial Control Systems

ICSs operate industrial infrastructures worldwide, including electric power, water, oil/gas, pipelines, chemicals, mining, pharmaceuticals, transportation, and manufacturing. ICSs measure, control, and provide a view of the physical process: they monitor sensors and automatically move physical machinery such as levers, valves, and conveyor belts. When most people think of cyberspace, they think of Internet-enabled applications and corresponding information technology (IT). ICSs also utilize advanced communication capabilities and are networked to improve process efficiency, productivity, regulatory compliance, and safety. This networking can be within a facility or even between facilities continents apart. When an ICS does not operate properly, it can result in impacts ranging from minor to catastrophic. Consequently, there is a critical need to ensure that electronic impacts do not cause, or enable, misoperation of ICSs.

Figure 3.10 is an example of ICS architecture. A typical ICS is composed of a control center that will house the human–machine interface (HMI), that is, the operator displays. These are generally Windows-based workstations. Other typical components of an ICS control center include Supervisory Control and Data Acquisition (SCADA) and Distributed Control Systems (DCSs). The control center communicates to the remote field devices over communication networks using proprietary communication protocols. These protocols may be transmitted in Internet format, but the data still include fields that are unique to control system packets. The packets generally are sent via wired or wireless local area networks (LANs). The control center generally communicates to a remote control device such as a remote terminal unit (RTU) or directly to a controller such as a programmable logic controller (PLC) or an intelligent electronic device (IED, e.g., a smart relay or smart breaker). A PLC or IED is preprogrammed to perform control actions automatically and send information back to the control center. The PLC or IED communicates via serial, Ethernet, microwave, spread spectrum radio, and a variety of other communication protocols. The communication is received by sensors, gathering measurements of pressure, temperature, flow, current, voltage, motor speed, chemical composition, or other physical phenomena, to determine when and if final elements such as valves, motors, and switches need to be actuated if the system requirements change or if the system is out of specification. Generally, these changes are made automatically, with the changes reported back to the operator in the control center. However, it is possible for an ICS to merely report status to an operator, who may make manual changes.

Figure 3.10 Industrial control system framework.

c03f010
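
The control loop just described can be caricatured in a few lines of simulation. The setpoint, tolerance, and device names below are assumptions for illustration; they stand in for the proprietary protocols and field devices of a real ICS.

    # Simulated control loop (illustrative only): read a sensor, compare against the
    # operating band, actuate a final element, and report status to the operator (HMI).
    import random

    SETPOINT = 75.0    # desired pressure, hypothetical units
    TOLERANCE = 5.0    # acceptable deviation before actuation

    def read_pressure_sensor():
        # Stand-in for a measurement arriving over the control network.
        return SETPOINT + random.uniform(-10, 10)

    def actuate_relief_valve(open_valve):
        # Stand-in for a command sent to a PLC-controlled final element.
        print("  relief valve ->", "OPEN" if open_valve else "CLOSED")

    for cycle in range(5):
        pressure = read_pressure_sensor()
        out_of_spec = abs(pressure - SETPOINT) > TOLERANCE
        print("cycle", cycle, "pressure", round(pressure, 1), "out of spec", out_of_spec)
        if out_of_spec:
            actuate_relief_valve(open_valve=pressure > SETPOINT)
        # Status is reported back to the operator display each cycle.

Every element of this loop (the measurement, the automatic command, and the operator display) is a potential point of compromise, which is why ICS security concerns extend well beyond the control center itself.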

There are major differences between the type of information technology that runs e-commerce (IT) and that which is used to run an ICS. In the IT world, major issues concern information content. In the ICS world, major issues are reliability and safety. In the IT world, unintentional incidents are not seen as a major issue; in the ICS world, an unintentional event can be just as bad as a deliberate attack. Security events do not have to have a malicious origin to be of major significance.

Both types of systems include networks and workstations for the HMI. The HMIs of ICSs are generally IT-like systems and may be susceptible to standard IT vulnerabilities and threats. Consequently, they can utilize IT security technologies, and traditional IT education and training can apply (see, e.g., Byres, Karsch et al. 2005). However, ICS field instrumentation and controllers generally do not utilize commercial-off-the-shelf operating systems and are designed to consume the least possible amount of both silicon and energy (Stouffer, Falco et al. 2009). They often use proprietary real-time operating systems (RTOSs) or embedded processors. Due to their unique position in a physical workflow, field instrumentation and controllers often have operating requirements and constraints that IT systems never face, for example, harsh weather conditions and extremely short mean time to repair (MTTR) specifications. These systems can be impacted by cyber threats typical of IT systems and also cyber threats unique to ICSs.

It has long been recognized that a cyber attack against an ICS, such as those that control an electric grid, could be more than just a single attack against a single target, and could also be blended with a physical attack (Schewe 2007). The North American Electric Reliability Corporation (NERC) held a High-Impact, Low-Frequency (HILF) Conference to address those attacks beyond the design basis (NERC 2010).

There are only a limited number of ICS suppliers, and they supply most industrial processes worldwide. Nevertheless, there is significant ambiguity in the industry’s literature on key terms that are used to describe ICS technology and security capabilities. Key terms such as SCADA and field instrumentation carry different meanings to different organizations. For example, the term SCADA can refer to the master station or the entire control loop from the master station to the final field devices. Thus, when these terms are used in security standards, utilities often adopt their own interpretation.

Even within a single industry, security carries many definitions. Though a cyber security definition of security in the energy industry will invariably refer to ICSs, other perceptions of energy industry security range from references to dependence on foreign oil to interties allowing energy to flow from one area to another. Recent NERC regulatory guidance required energy utilities to apply technology security standards to their critical infrastructure. Several of the regulated utilities reported that none of their infrastructure was critical, and hence they did not have to comply with prescribed security standards (Assante 2009). Until we have agreed-upon nomenclature on components of national infrastructure and some common understanding of what it means to be secure, we will continue to have these roadblocks to policy implementation.

The root of the ICS security problem is that ICSs are very different from each other, and there is not one characterization of all possible control configurations that would correspond to any set of definitions that would be valid for all industries (Igure, Laughter et al. 2006). Terms shared between physical security and cyber security also have different meanings and implications for security control implementation. For example, the term intrusion detection systems (IDSs) with respect to physical security implies monitoring algorithms using images from cameras and personnel badge or physical access card readers, while in cyber security, the term IDS refers to host or network monitoring for known malicious software and/or damaging impact to cyberspace resources. Moreover, in security and in control systems environments, there are also many overlapping acronyms that are used much more fluently than the actual words they represent, and so initial conversations among these communities start out disadvantaged. For example, among physical security professionals, the term IED refers to improvised explosive devices. To control systems professionals, it means intelligent electronic devices. (Unfortunately, these may be used in combination to facilitate automated destruction.)

Nevertheless, the limited number of suppliers has the consequence that the ICS cyber security-related differences between industrial facilities are not large and this should allow common ICS cyber security policies and standards. What is different is the domain of industrial operation and corresponding control equipment, sensors, and physical material flowing through the system. Examples of impacts from different industries are shown in Figure 3.11. These differences highlight the different impacts on society of cyber security failure. Cyber security failure impact for a nuclear power plant would obviously be different than cyber security failure impact for a water treatment plant. (Unfortunately, these may be used together to facilitate destruction.)

Figure 3.11 Impacts from ICS cyber incidents (NTSB 2010; Weiss 2010).

c03f011

Note that the worst case impact of a cyber security event in an ICS may not be shutting the system down, but rather corrupting the process which it controls. Consequently, denial of service, though it has dire consequences for an e-commerce system, is not the worst case for an ICS; rather, denial of control or denial of view can be much worse. This can be done either by attacking the process directly or compromising the operator displays with misleading information; this may lead the operator to execute commands intended to resolve an issue that is not present. Note also that the Internet is not necessarily the biggest threat to ICSs, as they generally can operate for long periods of time without direct Internet connectivity. Rather, their biggest threat is the exploit of any access necessary to maintain the operation of the field devices, including physical access.

The goal of an ICS is typically to operate some type of physical process. Environmental sensors provide status information, which is processed by the system using rulesets that may or may not trigger valves or levers to achieve stability in the operational process. Sometimes these triggers are operated by humans, the “wetware” component of the system. At other times, they are triggered automatically. Even with a human in the loop, cyber components of these systems receive and send electronic signals that operate equipment in response to operator commands. Security features that facilitate these goals include, though are not limited to:

  • ICS device (could include sensor, relay, controller) reliability—if one sensor goes down, another takes its place.
  • Sensor diversity—if one sensor is vulnerable to an attack in progress, environment conditions that it monitors can be achieved with alternative technology.
  • Software containment—the extent to which incorrect commands may be automatically entered by software is limited by compensating factors such as range limits or input validation routines (see the sketch following this list).
  • System resiliency—system should continue to operate despite component failures, even if at reduced capacity.
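
The software containment feature can be illustrated with a short sketch, referenced from the list above. The parameters and range limits are assumptions for illustration, not values from any real facility.

    # Illustrative software containment check: commands issued by software are
    # validated against engineering range limits before reaching the final element.
    RANGE_LIMITS = {"valve_position_pct": (0.0, 100.0),
                    "pump_speed_rpm": (0.0, 1800.0)}

    def contain_command(parameter, requested_value):
        # Reject a command that falls outside the configured safe range.
        low, high = RANGE_LIMITS[parameter]
        if not (low <= requested_value <= high):
            raise ValueError("%s=%s outside safe range [%s, %s]"
                             % (parameter, requested_value, low, high))
        return requested_value

    contain_command("pump_speed_rpm", 1200.0)      # accepted
    try:
        contain_command("pump_speed_rpm", 9000.0)  # rejected, whether incorrect or malicious
    except ValueError as err:
        print("blocked:", err)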

Note that these four security features, if accomplished, would be sufficient to support the overall goal of controlling an ICS, which includes preventing its falling under the control of outsiders. Of course, many other manual and business processes are required to support the actual industrial process that the ICS supports. As in the e-commerce example, each security feature may require the integration of multiple technology components. Each feature will have its own set of verification procedures and validation will require evidence that it is not possible (or at least extremely difficult) to enact security threats that impact system performance.

Validation of security goals for an ICS can only be achieved when the system in operation is subject to the types of failures that could be caused by inappropriate actions or by malicious attacks. Failure mode testing should demonstrate that the failure of any one software component cannot adversely impact the operation of the process controlled by the ICS. Unlike the case of e-commerce, there are not well-established architecture patterns for testing such processes, and the risk of deliberately failing an ICS is considerably higher. Hence, validation tests must resort to modeling the impact of the failure of any single component and the cyber interconnections between components. Physical flows through the industrial system should be modeled to the most detailed extent possible in order to ensure that each physical control point is represented and that each cyber component is correctly associated with the physical sensors, electronic switches, or mechanical levers that may be affected by its operation. Models should extend to system interfaces so that potential cascading impact of any one component failure is made transparent.
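
A minimal sketch of such a model follows; the topology is assumed, not drawn from any actual plant. Each physical control point is associated with the cyber components its command path depends on, so that the effect of any single component failure can be read off directly.

    # Illustrative model (assumed topology): which physical control points lose
    # their cyber command path if a single component fails?
    DEPENDS_ON = {
        "relief_valve": ["plc_1", "control_lan", "scada_master"],
        "feed_pump":    ["plc_2", "control_lan", "scada_master"],
        "breaker_7":    ["ied_7", "substation_radio", "scada_master"],
    }

    def impacted_points(failed_component):
        # Control points whose command path includes the failed component.
        return [point for point, path in DEPENDS_ON.items() if failed_component in path]

    for component in ["plc_1", "control_lan", "scada_master"]:
        print("failure of", component, "loses control of", impacted_points(component))

Even this toy model makes cascading impact visible: every control point in the example depends on the SCADA master, so its failure modes deserve the most scrutiny.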

Research is needed to develop ICS cyber forensics, resource-constrained device authentication, and security models for simulation. Yet the cyber security problems of ICS do not require advances in science to be solved, simply determined security engineering. Research into technology architecture patterns for design of secure ICS systems should be able to facilitate these capabilities. Agreement on the goals of failure mode avoidance should allow an associated security policy to be established in support of goals to maintain control over mechanisms. This type of exercise is common in the Nuclear Regulatory environment (Preckshot 1994) but is not prevalent in other industries that support ICS infrastructure.

3.4.3 Personal Mobile Devices

Many people think of mobile personal devices simply as small computers. To some extent this is true, because they are produced using computing technology. But from a security perspective, mobile personal devices are missing many elements that have typically been taken for granted in computer operation: security features for computer operating systems that have been standard specifications since the early 1980s. As described in Chapter 2’s discussion on the Orange Book, these were designed to facilitate administrator control of a machine as well as user operation for data processing in an uninterrupted and confidential control flow. A standard computer is designed to be operated in isolation and has utility for many users whether or not it is connected to the Internet. Yet the design of a mobile operating system does not incorporate most standard operating system security features. Rather, mobile devices are designed to allow the mobile carrier service providers to control the device. Mobile operating systems are in some sense tethered to the mobile carrier and unable to fulfill their purpose without it. This is why the mobile carrier has more interest in ensuring that the configuration of the device can be accessed remotely than in providing the user control over its content.

Some mobile carriers share these device control features with enterprise administrators. For example, some device operating systems may have configurable security settings that allow an administrator to disallow installation of applications, but allow installation of applications from the corporate server. In effect, the corporation plays the role of the mobile phone administrator. Even though phone users may pay the mobile carrier directly for the service, once the device is registered under the corporation’s service contract, the primary customer for the device in the eyes of the mobile carrier becomes the corporation, not the mobile phone user.

Figure 3.12 illustrates mobile phone connectivity. Phones signal cell towers, which relay the signals to equipment that identifies the transmitting device and allocates land-based telecommunications bandwidth to the mobile device based on the tower operator’s agreements with the mobile carrier who administers the phone (of course, the tower operator and mobile carrier may be one and the same company). Where device configuration is administered via the cell service, administration occurs from computers in the mobile carrier’s data centers. They identify the device that is connected and send it data and commands that update the software on the device. Note that this administration process uses part of the same bandwidth that is reserved for cell service itself, and mobile carriers do not charge the customer for the service time spent updating software. This keeps mobile carrier updates to a minimum and thus may actually delay the implementation of security patches if they become available during times of peak mobile service requirements. This is one reason why some mobile carriers require that a device be connected to a computer with an Internet connection in order to download its configuration updates and patches. The device administration process may be run by a corporate enterprise, by the device vendor, or directly by the cellular carrier.

Figure 3.12 Mobile device system framework.


Mobile devices have a wide range of capabilities. Although the devices may also facilitate game play and office utilities like calendars and calculators, these services are not core to the system mission, but rather conveniences that create competitive advantage between devices and associated mobile telecommunications carriers. The common, or core, function of mobile devices is to provide personalized voice and messaging connectivity services via data transmission. Hence, the purpose of a personal mobile device is to facilitate that communication. But a mobile device cannot communicate on its own. As illustrated in Figure 3.12, it must be part of a larger communications system in order to achieve its mission. Currently, this means that it must be a node on a telecommunications network that includes other nodes with which to communicate. A phone by itself has some functionality, but to be used for communication, it requires access to multiple independently operating systems that interface using well-defined protocols. It is one system in a system of systems (SoS). An SoS is characterized by the fact that the full functionality of an individual system is not achievable without the larger SoS in which it participates, and that the larger SoS has functionality that cannot be ascribed to any of its individual component parts, nor is it simply an aggregate of them. Interaction between individual working systems creates emergent properties that constitute the functionality of the SoS. All social networking systems share this characteristic. Individual systems may come and go as the SoS continues to function without interruption.

Security features that facilitate these goals include, but are not limited to:

  • Possession—the phone number associated with the device is not transferable without permission of the owner.
  • Reliability—transmissions sent by one user are received by the specified recipients.
  • Connectivity—the system is available to transmit and receive.
  • Confidentiality—mobile users expect that data transmissions will not be intercepted by parties other than those with whom they specifically choose to communicate.

Note that these four security features, if accomplished, would be sufficient to support the overall goal of mobile transmission security. Each feature may require the integration of multiple technology components. Some, most notably confidentiality, have no current technology implementation but may be accomplished in part by carrier-side features such as encrypted wireless transmission.

Verification that mobile device security features work as designed is complicated by the fact that the owner of the device has limited control over its operation. Security features are constructed by mobile carriers and phone vendors working in concert to serve their own priorities for service provision rather than any expectation of customer security requirements (Barrera and Van Oorschot 2011). All phone vendors have implemented some form of process isolation to separate their own software on the device from applications provided by others. This software may generally be used by the mobile carrier to uninstall software, suspend service, and even erase all the data on the device if it is known to be stolen or maliciously corrupted.

To accommodate user preferences for device use, many vendors have included a permissions file that lists the user-controllable device settings and lets users change them. However, some phones also allow applications, acting with the user’s privileges, to change these settings, in which case the user would be unaware that the settings had changed. At the other end of the spectrum, some vendors restrict all permission settings to the phone administrator, who may be an enterprise customer. Settable permissions may include the ability to read and write files such as the user’s contacts and calendar, the ability to access hardware on the device such as microphones and cameras, and the ability to run applications from a given source.
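
To make the exposure concrete, the following is a minimal Python sketch of a hypothetical permissions manifest in which each setting records which principals may change it. The setting names and principal labels are illustrative assumptions, not drawn from any real mobile operating system; the point is only that a platform which lets an application act with the user’s privileges permits silent changes the owner never sees.

    from dataclasses import dataclass, field

    @dataclass
    class Setting:
        name: str
        value: bool
        changeable_by: set = field(default_factory=set)  # principals: "user", "admin", "app"

    class PermissionsManifest:
        def __init__(self, settings):
            self.settings = {s.name: s for s in settings}

        def change(self, name, value, principal):
            setting = self.settings[name]
            if principal not in setting.changeable_by:
                raise PermissionError(f"{principal} may not change {name}")
            setting.value = value  # note: no notification is sent to the device owner

    manifest = PermissionsManifest([
        Setting("read_contacts", False, changeable_by={"user", "app"}),
        Setting("use_microphone", False, changeable_by={"user"}),
        Setting("install_from_corporate_server", True, changeable_by={"admin"}),
    ])

    # An application acting with the user's privileges silently flips a setting
    # that the owner believes is off.
    manifest.change("read_contacts", True, principal="app")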

The application-level permissions on a mobile device are typically implemented via some form of digital signature on application code, using certificates that work much like the web server certificates discussed in Chapter 2. Each application vendor has its own root certificate that is used to stamp the applications it produces. The signature may be checked against the root at any time by mobile devices programmed to verify the provenance of software before installing it.
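
The provenance check itself can be sketched in a few lines. The following Python example assumes the third-party cryptography package and collapses the certificate chain into a single vendor root key pair; real platforms use full X.509 certificates, but the check-before-install logic is the same in outline.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # Vendor side: a root key (standing in for the root certificate) signs the package.
    vendor_root_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    app_package = b"bytes of the compiled application"
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
    signature = vendor_root_key.sign(app_package, pss, hashes.SHA256())

    # Device side: check provenance against the pinned vendor root before installing.
    def provenance_ok(package: bytes, sig: bytes, root_public_key) -> bool:
        try:
            root_public_key.verify(sig, package, pss, hashes.SHA256())
            return True
        except InvalidSignature:
            return False

    assert provenance_ok(app_package, signature, vendor_root_key.public_key())
    assert not provenance_ok(app_package + b" tampered", signature,
                             vendor_root_key.public_key())

Any change to the package after signing causes verification to fail, which is the property that allows a device to refuse software whose provenance it cannot establish.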

Though not all mobile devices require authentication to operate, many have a feature for password protection. The password unlocks the keyboard and screen of the device, allowing operation. However, the device will not operate unless the device itself can authenticate to the cellular service. This authentication may be built into a chip or entered by a device distributor when provisioning the device for the user. Another PIN or password may be used to secure other network connections supported by the device, such as the close-range protocol Bluetooth. A typical mobile device user is confused by these options, let alone by options for basing decisions about file system access on whether the requesting application is digitally signed (Botha, Furnell et al. 2009).

At this stage, verification that all security features are configured according to user requirements can be accomplished only through extensive user education and forensic analysis of the mobile device software configuration. Such verification will reveal whether or not all device permissions are set as expected, but because design goals are not shared between mobile carriers and their customers, it may still not be possible to verify that the system was built correctly.
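
One hedged illustration of what such forensic analysis might compare appears below: an expected configuration, supplied by the user or enterprise, is checked against the settings actually recovered from the device. Both dictionaries and their setting names are hypothetical placeholders for whatever a forensic tool would extract.

    # Settings the user or enterprise expects; names are hypothetical.
    expected = {
        "read_contacts": False,
        "use_microphone": False,
        "install_from_corporate_server": True,
    }

    # Settings recovered from the device; in practice a forensic tool would
    # extract these from the device configuration. Hard-coded for illustration.
    observed = {
        "read_contacts": True,
        "use_microphone": False,
        "install_from_corporate_server": True,
    }

    deviations = {
        name: (expected[name], observed.get(name))
        for name in expected
        if observed.get(name) != expected[name]
    }

    for name, (want, got) in deviations.items():
        print(f"permission '{name}': expected {want}, found {got}")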

Validation of security goals for mobile personal devices is even less straightforward, both because different users will have different security goals and because carrier and vendor security goals are very different from those of the end user. Carrier goals are focused on service integrity and billing accountability, while end user security requirements for mobile devices need to take into account the cell phone use cases of the owner. Some people may keep valuable client contact lists on mobile devices and thus have confidentiality requirements, while others never store more than nicknames and so do not. Some may use a key stored on the mobile phone as a second factor of authentication for online banking transactions, and so have data confidentiality requirements, while others use the phone for nothing but voice communication, and thus may have voice but no data confidentiality requirements.

In order to identify security validation metrics, a specific purpose for the system must be well articulated, and it is simply not possible to clearly articulate security goals for the SoS that is mobile communications as a whole. Nevertheless, within the larger SoS, a subset of the communicating systems may have a joint goal that can be well articulated even though no specific purpose applies to the entire SoS. Only when both the users and network operators are the same, such as in an enterprise-controlled mobile network, might all stakeholder goals be consistent enough to identify validation measures.

We therefore must reduce the scope of this example in order to identify security validation metrics. Let us say that the system is an enterprise mobile communications system: the cell phones issued by a company, a communications gateway server supported by the company, and the specific cellular carrier service that the company has contracted together comprise the system. The purpose of that system may be to provide confidential communications between internal users while allowing them access to messages from external sources. In this narrower case, it is possible to identify measures by which the security goal of confidentiality may be validated.

Confidentiality is a hard thing to validate because when information is leaked or stolen, the original owner still has it and may not be aware that it has fallen into unauthorized hands. Hence, the only way to validate confidentiality is to identify all the places where the data are authorized to be, and to monitor whether the data stay there. In engineering terms, this means creating a model of the information flow and devising methods to sample whether it has been subverted. In the mobile network case, data communications between mobile user and enterprise should have only preauthorized end points, and no data should be able to travel to external parties without being filtered at a gateway. If all data in the authorized communications flow could be marked with some “internal use only” identifier, it would be easy to see whether any such data made their way out of the authorized path. Presumably, a gateway equipped with a reference monitor that determines whether data are confidential would not let them through. This type of validation test, however, would be difficult to implement in today’s mobile networks because typically the only data that are marked confidential are those that have already been deemed sensitive. Moreover, not all communications channels between the user and external parties traverse the enterprise gateways; the mobile carrier still has a direct link to the device. This approach also acknowledges something well understood among security professionals: security controls fail in the same way that underground economies arise, in that those who are constrained by them develop work-arounds that meet their needs (Nelson, Dinolt et al. 2011). Mechanisms to mark all data confidential allow leaks to be identified by monitoring outside the network for the confidential mark.
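
A rough Python sketch of this information-flow idea follows. The mark name, endpoint names, and message format are assumptions made for illustration; the sketch shows only the two pieces of the validation approach described above, a gateway reference monitor that blocks marked data bound for unauthorized endpoints, and an external probe that looks for the mark where it should never appear.

    INTERNAL_MARK = "X-Internal-Use-Only"   # assumed marking convention
    AUTHORIZED_ENDPOINTS = {
        "mail.example-enterprise.internal",
        "mdm.example-enterprise.internal",
    }

    def is_marked(message: dict) -> bool:
        return message.get("headers", {}).get(INTERNAL_MARK) == "true"

    def gateway_reference_monitor(message: dict, destination: str) -> bool:
        """Return True if the message may leave via this destination."""
        if is_marked(message) and destination not in AUTHORIZED_ENDPOINTS:
            return False  # marked data bound for an unauthorized endpoint is blocked
        return True

    def external_leak_probe(captured_traffic: list) -> list:
        """Validation probe run outside the enterprise: any marked message seen
        here indicates the authorized information flow has been subverted."""
        return [m for m in captured_traffic if is_marked(m)]

    # A marked message bound for an external address is stopped at the gateway.
    msg = {"headers": {INTERNAL_MARK: "true"}, "body": "client contact list"}
    assert not gateway_reference_monitor(msg, "mail.partner.example.com")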

Major additions to mobile technology features would be required to create mechanisms that mark all data confidential and then unmark them if they are allowed out. Nevertheless, the fact that security validation goals are not easy to achieve should not prevent them from being set. Unfortunately, these scenarios often are addressed not by changing the way technology works, but by bolting layers of security overhead onto it (e.g., the intrusion detection mechanisms described in Chapter 2). Goals that are presently untestable due to technology limitations should be viewed as requirements for security features that should be incorporated into products to enable such testing.

3.5 Security Policy Objectives

It is typically taken for granted that you can’t manage what you can’t measure. Unfortunately, this observation calls attention to the fact that there are significant obstacles to managing cyber security. Security policymakers must be aware that selecting a policy that supports a strategy is a simple task compared to validating that the policy is actually effective. The state of the practice in the cyber security profession is to design for security and to verify that designs are correctly implemented, and it seems challenge enough to verify that an implementation is correct, much less that it is effective. This is probably why security standards are so often used as a substitute for customized security objectives. Of course, substituting security standards for objectives invokes another oft-quoted phrase, “metrics drive behavior.” It must be acknowledged that there is no one-size-fits-all strategy that will satisfy every security framework. Although security standards have some utility in ensuring that verification techniques for design decisions are sound, every cyber security system should have customized design goals, set in advance, that form the basis for cyber security validation metrics. That is, if you measure compliance with standards, you will get compliance with standards, but you will not get security goal achievement because you are not measuring security goal achievement.

Security policy statements should always be phrased as goals that are capable of being validated. Even within security frameworks, it is evident that the nuances of a business model will affect the operation of technology, and thus impact the implementation of security standards. Chapter 4 provides security policy guidance for decision makers who are accountable for security strategy.
