6

Cyber Security Policy Catalog

The Cyber Security Policy Catalog is organized according to the taxonomy introduced in Chapter 5. There are five major topics that correspond to sections of the catalog. Each topic has several subtopics. Each topic and subtopic is introduced with some background information, some of which may appear technical, but the level of technical detail required to understand the issues has been purposely kept to a minimum and may be skimmed without loss of continuity. The background information is followed by a table that contains a list of policy issues that are of relevance to the topic. Each table has three columns. Each row in the table begins with an articulation of a cyber security policy statement. The second column in each row is an explanation of the policy statement. The third column contains a representative list of the reasons why the policy statement may be controversial.

The authors recognize that it is easy to confuse collecting policy statements with endorsing them. This chapter contains the policy statements that we collected; we offer them not as endorsements, but as statements that, in our judgment, are good examples. Please do not read the catalog as if it were a policy document. It is not. The catalog exists because participation in policy debate requires recognition of a policy and reasoning about it. As repeated several times in both Chapters 1 and 5, the statements in this catalog are not contrived as endorsements but as examples. None of the statements in this chapter should be mistaken for the opinion of the authors. In fact, some of the statements seem extreme to the authors. All will seem extreme to one set of readers or another. For any given individual, some statements may offend and others may seem banal. It is important to recognize that all policies are controversial. To that end, all have been presented with corresponding reasons for controversy. Many readers will easily be able to elaborate on those reasons, and to add more reasons to the list for each statement. This is an expected response from our readers. No issues have been intentionally left out of the list because they were deemed offensive. To do so would leave the reader unaware that controversy exists.

6.1 Cyber Governance Issues

The Internet began as the Advanced Research Projects Agency Network (ARPANET), a U.S.-military-funded network designed to survive a nuclear attack. It quickly became a tool for sharing information among computer science researchers in the military, its contractors, and its academic collaborators. Those with an idea for a communications protocol would share it via a formal process managed by the Internet Engineering Task Force (IETF). These were published as Requests for Comments, which allowed others to quickly learn the new protocols, as well as extend them (IETF ongoing).

While the vast majority of Internet infrastructure and functions are decentralized (a design goal of the Internet), certain centralized planning and coordination functions are required. The most visible are the allocation of names (e.g., http://www.whitehouse.gov) and numbers (i.e., Internet Protocol, or IP, addresses, the cyberspace equivalent of postal addresses; they are used to find routes to locate computers). These coordination functions were initially performed at Stanford Research Institute (SRI), a U.S. Defense Department contractor. In 1972, these functions were transitioned to the Internet Assigned Numbers Authority (IANA) under the oversight of Jon Postel at the Information Sciences Institute (ISI) at the University of Southern California. As the network evolved from the seeds of its founders, ARPANET was gradually disbanded. In 1995, the last restrictions on commercial Internet traffic were removed.

In 1998, the National Telecommunications and Information Administration (NTIA), an agency of the U.S. Department of Commerce, began a process to create a sustainable governance model for the IANA functions; this process culminated in the creation of the Internet Corporation for Assigned Names and Numbers (ICANN) later that year. On January 30, 1998, the NTIA issued for comment a policy “Green Paper” entitled “A Proposal to Improve the Technical Management of Internet Names and Addresses.” The proposal was widely disseminated to encourage suggestions and comments (NTIA 1998). The resulting ICANN model is a unique “multistakeholder” governance model for the centralized components of the Internet, in which governments participate alongside corporations and individual Internet users to create the policies that govern the Internet in a bottom-up fashion. ICANN technically remained a U.S. government (USG) contractor until the signing of the Affirmation of Commitments (AoC) in 2009, which transitioned ICANN from a USG contractor to a party to what is essentially a Memorandum of Understanding between the USG and ICANN about the principles of multistakeholder Internet governance.

The Internet is a U.S. creation, and the USG has been leery of relinquishing all control over the basic Internet coordination functions. The transition from direct control of the IANA functions through a contractor to the AoC model shows a willingness on the part of the USG to “internationalize” the governance of the Internet, but to what extent the United States wants, and is able, to exert unilateral control over ICANN remains a debatable topic.

It is easy to see an analogy between Internet governance and the global phone system. You can direct-dial internationally because cooperating telecommunications carriers in each country, in pursuit of their individual missions to connect their citizens to the world, shared a global objective of ubiquitous phone communications. Their governments formed the International Telegraph Union, predecessor of the International Telecommunication Union (ITU), in 1865. In 1947, the ITU became a part of the then-new United Nations (UN). The UN/ITU is a top-down, government-driven governance model. In contrast, the ICANN/AoC model is an “international multistakeholder governance model” that favors bottom-up policymaking. World governments participate in the ICANN model through the Governmental Advisory Committee (GAC), just one of several advisory committees that help shape Internet policy within ICANN. Some claim that the ITU/UN model is the correct model for Internet governance, while others claim the ICANN/AoC model is optimal.

The key cyber security policy issue is the Internet governance model and, in particular, the modality of participation by world governments. One of the most unique features of the Internet is that it is shared globally; any Internet-connected machine can talk to any other Internet-connected machine, and typing http://www.cnn.com in Kansas, Singapore, Berlin, or Moscow takes you to the same place. The technical reason for this global interoperability is the existence of the central coordination functions. If governments disagree on the central coordination functions and begin to use different standards and procedures, the Internet may fragment into multiple, or only partially connected, pieces. Some governments prefer this approach for reasons related to censorship, national sovereignty, and countering U.S. dominance.

6.1.1 Net Neutrality

One word that is frequently used by professionals working in a wide spectrum of jobs related to the Internet is content. Content is a generic term for whatever information may be carried by bits and bytes through the wires and disks at any given point in time, without distinction. The ownership, meaning, or origination of content is not assumed unless explicitly used to modify the word, as in “user content” or “voice content.” The design of communications protocols has always been independent of the content of the transmissions sent. From a strictly technical perspective, it is unnecessary for Internet service providers (ISPs) to examine content in order for content to move through networks.

Ever since the advent of commercial ISPs in the mid-1990s, there has been concern that those who manage large portions of the Internet will unfairly prioritize, manipulate, and/or censor communication for economic or political gain, or both. Recent mergers and acquisitions have led to greater vertical integration of content providers, data service providers such as phone and TV service over the Internet, and ISPs, creating opportunities for favoritism at the data transit level. Proponents of Net Neutrality feel that ISPs should be barred—by law—from manipulation and prioritization of their data transit services that advantage their content and/or data services over competing services. Opponents feel that free markets will adequately address the issue and no new regulation is needed.

In order to fully appreciate the scope of net neutrality, it is important to recognize that different policies will have different technical enforcement points. For example, policies on routing between countries can only be enforced by cooperating telecommunications carriers and/or international treaty, while policies concerning domestic-only transit may be enforced through interstate and intrastate commerce regulation.

The policy statements in this section range from establishing responsibility for secure communications protocols to requiring ISPs to offer cyber security services. The reasons for controversy illustrate how cyber security efforts and net neutrality positions often seem to be at odds (Table 6.1.1).

Table 6.1.1 Cyber Security Policy Issues Concerning Net Neutrality


6.1.2 Internet Names and Numbers

As discussed in Section 6.1, within the ICANN registration process, there are two corresponding sets of strings that Internet-connected entities register through ICANN. These are Internet names and Internet numbers. The current communications protocol limits the number of addresses to 2^32, or 4,294,967,296. Addresses are typically communicated by dividing the 32 bits into groups of eight and displaying each 8-bit set in decimal format, for example: A.B.C.D. There is no theoretical limit on the number of names, although currently the number of globally available top-level domain (TLD) names is limited by ICANN. To be found on the Internet, an entity must register at least one of each, then join them together and publish the result in an Internet-accessible Domain Name Service (DNS). Firms that allow registration of names within TLDs are called “Internet Registrars.”
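The 32-bit address arithmetic above can be sketched with Python's standard ipaddress module; the example address is illustrative:

```python
import ipaddress

# The current protocol (IPv4) provides 2**32 possible addresses.
total_addresses = 2 ** 32          # 4,294,967,296

# Dotted-quad notation is just a 32-bit integer split into four 8-bit groups.
addr = ipaddress.IPv4Address("192.168.101.1")
as_int = int(addr)                 # the underlying 32-bit number
round_trip = ipaddress.IPv4Address(as_int)  # back to dotted-quad form
```

Each of the four decimal groups is one 8-bit slice of the same 32-bit number, which is why no group can exceed 255.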

DNS is the technical system that allows human-friendly names, like http://www.whitehouse.gov, that stand for IP addresses, like 209.183.33.23, to function on the Internet. While the DNS is a massive and globally decentralized system, there is one shared global resource that is required for the proper functioning of the Internet on a global basis. This system is called the “DNS Root Server System.” The Root Server System is arguably the single most critical component of Internet infrastructure. Most people are surprised to learn that the physical computer servers that comprise the DNS Root Server System are owned and operated by volunteers without contracts with any party. This is an artifact of the early days of the Internet when it was implemented more as a cooperative than a critical component of the global infrastructure. However, the present model has worked well in that the owners/operators appear to take their responsibility seriously and no major issues with these systems have ever materialized.

For example:

192.168.101.1     www.mycompany.com
192.168.101.4     mail.mycompany.com

With a publication like this, a company establishes that the computer named www.mycompany.com can be found at the Internet address 192.168.101.1. This publication allows other entities to query a DNS server by providing a computer name as input, and receiving the computer’s number as output from the DNS server.
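The query-and-response behavior described above can be sketched as a toy resolver; the records below are the illustrative values from the example, not real hosts:

```python
# Published name-to-number records, mirroring the example above
# (illustrative values only, not real hosts).
RECORDS = {
    "www.mycompany.com": "192.168.101.1",
    "mail.mycompany.com": "192.168.101.4",
}

def resolve(name: str) -> str:
    """Return the published address for a name, as a DNS server would."""
    try:
        return RECORDS[name]
    except KeyError:
        # Real DNS servers answer NXDOMAIN for unknown names.
        raise LookupError(f"NXDOMAIN: {name}")
```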

The numbers in this example are analogous to the telephone numbers that old movies and sitcoms were required to use to make sure that they did not inadvertently broadcast any individual’s personal phone number in a fictional context. Those fictional phone numbers always started with “555,” a prefix not used by any actual phone number. In the Internet, there are a few sets of similar “unallocated” numbers (Rekhter, Moskowitz et al. 1996). They allow companies to number their internal networks in a way that should never be routed on the Internet. Such numbers are also necessary because there are not enough Internet addresses for everyone who wishes to connect. Hence, major companies and ISPs use a technique called Network Address Translation (NAT) to maximize the number of people they can connect with unallocated address space, allowing multiple computers to appear on the Internet using the same address.
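A minimal sketch of the “unallocated” (RFC 1918 private) range check, using Python's standard ipaddress module; the specific addresses are illustrative:

```python
import ipaddress

# The three RFC 1918 ranges reserved for internal networks; packets
# with these addresses should never be routed on the public Internet.
PRIVATE_RANGES = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_internal(ip: str) -> bool:
    """True if the address belongs to one of the reserved ranges."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in PRIVATE_RANGES)
```

NAT works precisely because these addresses are reused inside many organizations at once: the translator rewrites them to a shared public address at the network edge.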

Many also feel that the distributed and volunteer nature of the root server operators presents positive governance features, in that these servers are not under the oversight of any one entity or government (RSTA ongoing). A new version of the network communications protocol, Internet Protocol Version 6 (IPv6), expands the address space to allow 2^128 addresses.

A major concern with Internet names and numbers is that of either accidental or intentional diversion of Internet traffic to unauthorized destinations. For example, the translation from Internet names to Internet numbers can be subverted by a cyber attack called DNS poisoning. DNS poisoning refers to the corruption of a DNS server so that it stores an incorrect address for a given computer name. The incorrect address is usually that of a malicious site designed to look just like the website of the computer named in the query. DNS poisoning allows attackers to divert legitimate user traffic to malicious sites without users’ knowledge, and without touching the user’s computer, simply by attacking the DNS server that the user queries for addresses. The security of both the user desktop and the website of the company whose traffic is diverted may be impeccable, but nevertheless, both experience damage.

DNS was not designed with security in mind and is vulnerable to poisoning, man-in-the-middle attacks in which DNS queries are intercepted prior to reaching the server, and other subversive tactics. The Domain Name System Security Extensions (DNSSEC) were created to address these concerns. DNSSEC uses public-private key cryptography to authenticate DNS records with the authoritative source. Public key cryptography allows data signed with the private key of a DNS server to be verified by anyone with its public key. For DNSSEC to work effectively, a DNS server’s public key must be distributed in such a manner that users can verify its integrity. A DNS server can then sign its responses with its private key, and the public-private key cryptographic algorithms are designed to assure anyone holding the DNS public key that a response whose signature verifies must have come from the server holding the private key. This mechanism is referred to as a digital signature: the key holder signs data with the private key in such a way that the public key can be used to verify the signature. If the signature matches, the data are assumed to have been sealed by the signer. Note that because the public key is known to anyone, digital signatures do not facilitate confidentiality, merely data provenance and integrity.
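The sign-and-verify relationship can be illustrated with textbook RSA. The primes behind the key here (61 and 53) are tiny, insecure demonstration values chosen only for readability; real DNSSEC keys are thousands of bits long and handled by vetted libraries:

```python
import hashlib

# Toy RSA key: n = 61 * 53, with e * d congruent to 1 mod (61-1)*(53-1).
n, e, d = 3233, 17, 2753   # public modulus, public exponent, private exponent

def digest(data: bytes) -> int:
    # Reduce a SHA-256 digest into the toy key's range.
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def sign(data: bytes) -> int:
    # Only the holder of the private exponent d can produce this value.
    return pow(digest(data), d, n)

def verify(data: bytes, signature: int) -> bool:
    # Anyone holding the public pair (n, e) can check the signature.
    return pow(signature, e, n) == digest(data)
```

A DNS record signed this way proves provenance and integrity, but not confidentiality: the record and its signature travel in the clear.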

As illustrated in Figure 6.1, DNSSEC takes advantage of the hierarchical nature of Internet domain names to distribute public keys. Although it is not required by software, technically all domain names end in “.”. That is, “.com” is really “.com.” by design, so the “.” is the root of the key hierarchy. Enterprises are assumed to be able to get a good copy of the root key and store it safely on their own DNS server (the protocol refers to this as a “trust anchor”). ICANN holds the public key for “.” and for each top-level domain (TLD) issued on the Internet. These keys may be used to verify records at the next level of the hierarchy. Private keys for each server are used with a cryptographic algorithm to create a verification record for a given domain in the level below it, and the verification record is stored in the DNS record for the domain’s DNS server. Output from a DNSSEC verification algorithm can be “yes, the address is verified,” “the address is bad,” or “it cannot be verified because some keys are not available.” An authoritative DNS server that is DNSSEC-enabled will produce standard DNS records and DNSSEC cryptographic signatures for each record. The parent DNS server (.com in the case of example.com) will provide a cryptographic DNSSEC record to identify the real “example.com” thus eliminating the possibility that someone could be running a fraudulent “example.com” domain. The requestor can verify the integrity of a DNS record, say, www.example.com, by requesting DNSSEC records from example.com, .com, and the root (“.”) and cryptographically verifying the entire chain of signatures.

Figure 6.1 Message sequence diagram for DNSSEC.

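The chain-of-trust walk described above can be sketched structurally. The zone names mirror the example, and vouch() is a stand-in hash that models only the parent-child attestation, not real DNSSEC cryptography:

```python
import hashlib

def vouch(parent_key: str, child_key: str) -> str:
    # Stand-in for the parent zone signing the child zone's key.
    return hashlib.sha256((parent_key + child_key).encode()).hexdigest()

# Each zone's (illustrative) key material, from the root down.
ZONES = {".": "root-key", "com.": "com-key", "example.com.": "example-key"}

# Attestations published by each parent for its child, as in DS records.
ATTESTATIONS = {
    "com.": vouch(ZONES["."], ZONES["com."]),
    "example.com.": vouch(ZONES["com."], ZONES["example.com."]),
}

def chain_valid(name: str, parents: list) -> bool:
    """Walk from the trust anchor (the root) down to `name`,
    re-checking every parent-to-child attestation along the way."""
    path = parents + [name]
    return all(
        ATTESTATIONS[child] == vouch(ZONES[parent], ZONES[child])
        for parent, child in zip(path, path[1:])
    )
```

A resolver that trusts only the root key can thereby reject a fraudulent “example.com” whose key was never vouched for by “.com”.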

Despite its obvious utility, DNSSEC is a recent development and is not yet widely used. The DNS root was signed in 2010, and the largest TLD (“.com”) allowed publication of signatures for child domains (e.g., “example.com”) starting in September 2010. Cryptography adds processing overhead, and many sites have not established keys. Moreover, even if DNSSEC were widely used, the process would still rely on operating system and software security, so there would still be many malicious ways to bypass or subvert it.

Cyber security policy issues with Internet names and numbers center on the adoption of technology to combat the bypass of DNS and routing protocols, and to enforce agreements between ICANN, TLD registries, and registrars. The first few statements concern DNS itself; the next few address issues related to routing traffic to the correct Internet numbers once those numbers have been identified through DNS. The table ends with a few policies on potential regulation (Table 6.1.2).

Table 6.1.2 Cyber Security Policy Issues Concerning Internet Names and Numbers


6.1.3 Copyrights and Trademarks

Even if all Internet name-to-number routing were always accurate, there would still be users who are directed to websites that do not belong to the company they intended to visit. This occurs when companies do not register all the possible Internet domain names that seem to be straightforward representations of the company name. For example, a company named “product” may have registered product.com but not product.net. A competitor may register product.net and purchase the search term “product.” Then when users search for the word “product,” they see the result “product.net.” They assume that they have found the company they are looking for and proceed to the product.net website. Though perhaps unethical, such practices are not always illegal. Even if they are illegal, there is no guarantee that the perpetrator will be caught or prosecuted. If caught, they may just be banned from using that search term and proceed to prey on a different competitor’s customers. In extreme cases, companies that were late to the Internet may have had their company names intentionally registered by “domain squatters,” a derogatory term for people who register domain names for the purpose of selling them to the highest bidder. ICANN has extensive policies and mechanisms to address domain name-related disputes.

Another form of domain squatting is to register domain names that are very similar to the takeover target, such as misspellings of the domain name, or names with numbers or seemingly innocuous community-of-interest identifiers appended. For example, a competitor or criminal trying to lure a company’s customers may register “prodoct.com,” “product1.com,” or “product-ny.com” in an attempt to make their site appear legitimate. Where this type of domain squatting is conducted with criminal intent, the typical pattern is to make the site look just like the login page of the target site, and to use this page to collect the names and passwords of customers who mistake the criminal site for the real one. Such sites are typically also guilty of copyright violation, as they display logos and other proprietary trademarks from the target domain. Competitors may also falsely advertise their own products under a logo belonging to a competitor.
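Flagging such lookalike registrations can be sketched with a simple string-similarity check from Python's standard library. The threshold and the domain names are illustrative; real brand-protection services apply far richer heuristics:

```python
import difflib

def looks_like(candidate: str, protected: str, threshold: float = 0.8) -> bool:
    """Flag a candidate domain that is suspiciously close to,
    but not identical to, a protected name."""
    similarity = difflib.SequenceMatcher(None, candidate, protected).ratio()
    return candidate != protected and similarity >= threshold
```

A registrar or monitoring service could run each new registration past a list of protected names and escalate close matches for human review.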

Security technology for the purpose of warning a user that a site may be a counterfeit has been around since the late 1990s. As described in Chapter 1, it started when the browser vendors changed the message delivered to the user when a root certificate could not be found for a given web server, warning that the site was not secure and that the user was taking a risk by visiting it. The user still had the ability to add root certificates to their browsers, but the security warning scared customers, so most companies gradually gave in to the pressure of client concern and gave up running their own certificate authorities. The certificates purchased from security vendors would periodically expire and leave sites unable to encrypt traffic, creating emergencies for company web server administrators. This customer service issue prompted certificate vendors to create processes by which certificates could be quickly and easily administered and delivered. These processes are often infiltrated by Internet hackers to generate and/or steal both root and server certificates, which allow them to impersonate company web servers in the SSL mutual identification process. Even where impostor sites do not perfectly imitate a site, users get so many pop-up warnings from legitimate Internet sites that many are inclined to accept any and all warning messages simply to get their jobs done (Herley 2009).

Copyright and trademark infringement is a cyber security threat not just because of lost business, but because the business transacted at counterfeit sites is mistaken for the legitimate company’s own. Companies sometimes become aware of a counterfeit site only after being sued for product liability and finding that the supposed customer had purchased a counterfeit product. Very popular companies have established divisions in their law departments with the sole mission of addressing such Internet fraud. Cyber security services advertise the ability to find a company logo wherever it appears on the Internet in order to combat such fraud.

Note that instances of name space squatting are not limited to domain names. In public addresses since leaving office, Colin Powell has cautioned that the “Colin Powell” entry on social networking sites is often not him, and urged his audience to register themselves on all currently popular sites simply to ensure that no one else takes their name (Powell 2009). The large communities that both trust and are loyal to social networking name spaces have made those name spaces a target for squatters, both competitive and criminal. They have the same power to mislead as domain names themselves.

The cyber security policy statements in this section start with domain name issues. These are followed by content-related statements. The last few describe social networking concerns (Table 6.1.3).

Table 6.1.3 Cyber Security Policy Issues Concerning Copyright and Trade Issues


6.1.4 Email and Messaging

Company impersonation has never been as blatant as it is in email. Even though the Morris worm exposed just how insecure the email protocol was, there was little concern that email servers would be impersonated. An actual exchange between two email servers is displayed in Figure 6.2; the content is clear text, and no authentication is required. The protocol allows the information to be typed into a command line, so it is not even necessary to have email server software to impersonate an email server using this protocol. Although some servers may require the presentation of a key for authentication or may restrict connections to a prespecified IP address, as long as any one server in the email relay between a sender and receiver supports a text-only command string as illustrated in Figure 6.2, then any individual on the Internet can be spoofed. Although email impersonation may happen from a person’s own inbox due to malicious software running on their computer, simply having a person’s email address is enough to enable an impostor to impersonate them to an email server, as illustrated in the example below. This ease of impersonation is why a person may occasionally be contacted by friends who say, “you sent me an email about X,” when the supposed sender has no clue what they are talking about.

Figure 6.2 Example email server communication protocol.

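The clear-text exchange described above can be sketched as the command sequence an impostor would type at a listening mail server. All names and addresses here are illustrative; note that nothing in the dialog verifies the claimed sender:

```python
def smtp_transcript(mail_from: str, rcpt_to: str, body: str) -> list:
    """Build the client side of a minimal SMTP session. The MAIL FROM
    value is simply asserted by the client and never authenticated."""
    return [
        "HELO impostor.example",          # any host name is accepted
        f"MAIL FROM:<{mail_from}>",       # claimed sender: unverified
        f"RCPT TO:<{rcpt_to}>",
        "DATA",
        body,
        ".",                              # a lone dot ends the message
        "QUIT",
    ]
```

Typing these seven lines into an open connection on port 25 is all the “email server software” an impostor needs.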

Advertisers embrace the openness of email server communications because they can identify customers using these open protocols. For example, if a company email server answers commands as in Figure 6.2, then an advertiser’s automated program can attempt to send email to every user at that company, eventually reaching the whole population. It can do this by replacing the word “unsuspecting” with every possible spelling of a user name at that company. When errors occur, it simply ceases that attempt and moves to the next guess at a name. Moreover, the openness allows an advertiser to approach potential customers using “From” addresses with creative domain names that catch attention, with no need to actually register them. When advertisers send email to a large quantity of potential customers without discriminating which of the potential recipients may actually have interest in their product, this is called “spam.” Spam is a canned product that can include any variety of meat. It was highlighted in an old comedy sketch as the only thing on the menu, despite the fact that it occupied multiple distinct menu items (Monty Python 1970). In the early days of the Internet, users would use the word “spam” to describe content they had no wish to see, and excessive unwanted multiple postings elicited “spam” as the reply from angry users. The term spam now generically refers to any unwanted email content (Furr 1990). Both for-profit and not-for-profit Internet watchdogs keep records of spam in order to identify perpetrators with the goal of reducing unwanted noise (Spamhaus ongoing) but, as any Internet user knows, these efforts are largely unsuccessful.

Another category of unwanted email is called phishing, a phonetic play on the word fishing. It refers to baiting, or luring, Internet users to click on links that take them to malicious websites. The malicious sites may be domain-squatting look-alikes that collect user names and passwords. They may download malware. They may be fraudulent scams to trick users into transferring money from their bank accounts. When specific individuals or organizations are targeted with tailored phishing email, it is called spear-phishing; when the targets are high-net-worth individuals or executives, the practice is sometimes called whaling. There are as many types of phishing attacks as there are Internet criminals.

Hence, both legitimate and illegitimate businesses routinely send excessive unwanted email, and the blatant ability to spoof email communication has been tolerated by the Internet community. There is very little incentive among e-commerce-related vendors to restrict it and no ability for a company or individual to do anything about it without cutting themselves off from potential customer or friend email communications. Most companies pay for Internet services in units of bandwidth, the number of ones and zeros that can traverse a telecommunications line at the same time. The more email traversing the line, the more bandwidth a company needs. Telecommunications equipment providers also charge more for higher bandwidth. So if a company expects to need 100 GB rather than 10 GB of simultaneous bandwidth, both the ISP and the router vendor make more money. Hence, there has not been a great deal of effort among Internet vendors to cut down on unwanted or even criminally motivated phishing.

However, as spam is also used by criminals, and identity theft is rampant, some consumer rights organizations have provided incentive to track and shut down known spammers (Spamhaus ongoing). In 2008, a company in the spam business was investigated by security researchers and eventually closed, with the immediate result of a 40% decrease in the number of unwanted emails worldwide (Vijayan 2008). Subsequently, the U.S. Federal Trade Commission has taken action against spam. The spam business is nevertheless still thriving, and there has been no systematic attempt, public or private, to improve the quality of email security going forward. There are, however, some technologies available to companies that wish to secure the email communications under their own control (BITS 2007). One is the Sender ID Framework (SIDF), which uses DNS to identify the authorized email servers for a domain and rejects email claiming to come from that domain unless the sending server is identified in the domain’s DNS records. DomainKeys Identified Mail (DKIM) goes a step further and allows an encryption key to be stored in DNS, so companies can set rules to permit, reject, delete, or tag unsigned or improperly signed messages from a given business partner when a valid signature is expected. The third is Transport Layer Security (TLS), called an opportunistic protocol because it can be set to require the highest level of security available on the server with which it communicates. At the lowest level, it does not authenticate the sender and does not require communication to be encrypted; at the highest level, it authenticates the sender and encrypts the communications so they cannot be observed by third parties eavesdropping on Internet traffic between two email servers.
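The SIDF idea, publishing the authorized sending servers in DNS so receivers can check the connecting address, can be sketched as follows. The record uses a simplified SPF-style syntax, and the domain and addresses are illustrative:

```python
import ipaddress

# Simplified SPF-style TXT records: each domain lists the network(s)
# allowed to send its mail; "-all" means reject everything else.
TXT_RECORDS = {"mycompany.com": "v=spf1 ip4:192.168.101.0/24 -all"}

def sender_permitted(domain: str, sending_ip: str) -> bool:
    """Check a connecting server's address against the domain's record."""
    record = TXT_RECORDS.get(domain, "")
    for term in record.split():
        if term.startswith("ip4:"):
            network = ipaddress.ip_network(term[len("ip4:"):])
            if ipaddress.ip_address(sending_ip) in network:
                return True
    return False  # unlisted senders fail the check
```

A receiving server that performed this check would refuse the spoofed “MAIL FROM” commands described earlier, because the impostor’s address is not in the claimed domain’s record.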

Like the word content, the word messaging is a technical term of the Internet trade. Any security techniques that apply to email can generally be applied to text messaging or Internet chat capabilities, and these capabilities are referred to colloquially as messaging. Messaging technologies rely on protocols between sender and receiver that rarely authenticate, but simply identify the sender via a “user name” string presented as part of the message stream itself.

The cyber security policies listed for email and messaging begin with the common requirements that email and messaging be recognized as under the umbrella of enterprise or mission security. These are followed by more systemic issues related to spam and accountability in general (Table 6.1.4).

Table 6.1.4 Cyber Security Policy Issues Concerning Email and Messaging


6.2 Cyber User Issues

To connect to a network is to be a user of cyberspace. Approximately 30% of the world’s population is Internet-connected (Miniwatts ongoing). In addition to traditional business relationships now moved online as described in Chapter 3, the Internet has spawned new e-commerce business models over the past two decades. These include Internet-only storefronts that are separate from traditional brick-and-mortar sales locations, Internet sales wherein customers pick up merchandise from a physical store, and the use of targeted advertising to mobile shoppers who are price-comparing online while still shopping in traditional business locations. Although e-commerce advertising originally only mirrored pre-Internet public relations and marketing activities, several new marketing models have also emerged that did not exist prior to Internet ubiquity. These are information services that gather information from one corner of cyberspace and sell it to another. Sometimes referred to as “the user is not the customer” models, these range from online surveys to large networks of monitoring systems designed to track user habits covering everything from food preferences to political beliefs. The primary customer for this information is the advertising industry.

Security issues for cyber users have mostly arisen from unintended side effects of the e-commerce race to participate in new markets (Khusial and McKegney 2005). E-commerce transactions flow between the shopper, the shopper’s computer, the network connection between shopper and e-commerce web server, the e-commerce web server, the e-commerce vendor’s internal network, and the connections between the e-commerce vendor and the service providers it needs to close the transaction, such as a credit card payment clearing company. All of these connections are created using software, and any of that software may have a bug or a flaw that allows an intruder to observe cyber user data flow or disrupt the e-commerce transaction. At many of these points of connectivity, observation of data flow provides information that may be used for later attacks, such as observed user names and passwords being used for impersonation and identity theft.

From a security perspective, there are four major players in the e-commerce environment: the customer; the retailer; the product vendor, wholesaler, or manufacturer; and the attacker. It is the attacker’s goal to exploit one or more of the three other players for illegal gains. Using vulnerabilities in software, application configurations, hardware, and even user habits, an attacker will seek to turn these weaknesses to the attacker’s advantage. e-Commerce attacks are constantly occurring. However, major media reporting on cyber security issues is confined to high-profile issues. Only the most interesting cases of fraud with the most severe consequences for the victims ever make it to the front page. Nevertheless, there is as much day-to-day activity in the information security cyber criminal circuit as there is in the drug circuit. In the book Zero Day Threat, two USA Today reporters describe the phenomenon as the product of three archetypes: exploiters, enablers, and expeditors (Acohido and Swartz 2008). Exploiters carry out data theft and fraud. Enablers are businesses whose practices allow it to happen. Expeditors are technologists who identify the root cause from a technical point of view, though they may be attackers or defenders. The book is full of vignettes about organized crime “exploiters” systematically stealing data from unwitting consumers by impersonating the consumer at “enabler” banks. The exploiters not only exploit the consumer, an identity theft victim, but also exploit low-level social misfits, such as meth addicts. They enlist the social misfits to withdraw unwitting consumers’ cash out of automatic teller machines or to order luxuries on the unwitting consumers’ credit cards. The stories sporadically include tales of victories of law enforcement “expeditors” who figure out how the exploiters did it.
The moral of every sad story is that the enabler did not sufficiently protect data within its custody, while an evil genius controlling three or more layers of organized criminal structure above the social misfits is never actually caught. The consumer is left with damaged credit, as well as loss of time and money, while the enabler claims that “adequate” risk measures are in place to secure the enterprise.

This section divides cyber user security issues into six subsections: malvertising, impersonation, appropriate use, cyber crime, geographic location (“geolocation”), and privacy. Malvertising is a portmanteau of the words “malicious” and “advertising.” Impersonation deals with various types of impostors on the Internet, from anonymous postings to account hijacking. Appropriate use addresses common Internet behaviors that some deem antisocial, and which may not be criminal simply because they have not yet been formally considered by legislators. Cyber crime addresses the organized criminal activity that is pervasive in e-commerce. Geolocation of Internet users, both consumers and criminals, is very difficult to determine, and presents its own special set of policy issues. Privacy is one of the concerns that fuels debates on cyber security geolocation policies, but privacy is a much broader set of issues, and so it has its own subsection.

6.2.1 Malvertising

e-Commerce businesses that rely on advertising typically utilize “mash-ups” to integrate multiple software sources (e.g., maps and coupons) onto a single page. The common element is that they are designed to attract consumers in a desired demographic, the advertising “target.” One method of reaching the target is to identify web pages frequented by the target and purchase ads directly on those web pages. The web page owner/seller may require that the ad be provided to them for placement, or they may simply link to a site provided by the ad buyer and direct the user’s browser to access the buyer’s web content directly. This easy access to the Internet consumer has attracted criminals seeking to install malware. Like any ad buyer, they purchase Internet advertising from media networks and exchanges.

Malware is easy to distribute because numerous websites require Internet users to accept a wide variety of downloads in order to operate, and advertising software frequently continues to run in the background, connecting back to the source site to send user tracking information. Malware does the same thing, and thus appears to the user like any other nuisance advertising process running on their computer. Malware that allows a computer to be remotely administered by the malware operator is referred to as a “bot,” which is short for “robot.” The correct interpretation of the analogy is that the person who unwittingly installed a bot on their computer has turned that computer into an instrument of the criminal operator. A collection of bots administered by the same malware operator is called a “botnet.” Criminals use botnets as soldiers in cyber attacks.

Another type of cyber criminal lurking in the advertising community is engaged in click fraud, which is an automated way to impersonate a user clicking on an advertising link. Internet content providers typically charge advertisers based on the number of users who visit their websites and click on the advertiser’s link. The content provider receives the click, records it in their billing records, and forwards the user’s browser to the address of the advertiser’s site, including a code in the forwarded uniform resource locator (URL) that specifies which site the user came from. The advertiser’s web server receives the user request to display a web page that is associated with the content provider’s code. Both sides count the number of these clicks, and the advertiser pays the content provider based on the volume of user traffic sent from the content provider to the advertiser site. In click fraud, an automated program imitates the activity of an end user, simulating clicks on the advertiser’s site from multiple Internet locations. The advertiser cannot tell the difference between the automated program and a real user, so it pays the content provider for the clicks. Savvy advertisers examine the browsing habits of users arriving from different content provider sites and are sometimes able to pinpoint click fraud, but it is very hard to prove definitively.
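The billing flow just described can be sketched in a few lines, along with one simple heuristic of the kind a savvy advertiser might use: clicks that arrive with essentially no follow-on engagement are consistent with automation. The record format, engagement metric, and threshold below are invented for illustration, not drawn from any real fraud detection product.

```python
# Hypothetical click records: each pairs the content-provider code from
# the forwarded URL with the seconds the user spent on the advertiser's
# site after the click (both fields are illustrative assumptions).
from collections import defaultdict

clicks = [
    ("site-A", 45), ("site-A", 120), ("site-A", 30),   # plausible humans
    ("site-B", 0), ("site-B", 1), ("site-B", 0),       # near-zero engagement
]

def flag_suspect_providers(click_log, min_avg_seconds=5):
    """Return provider codes whose average post-click engagement is so
    low that the traffic looks automated rather than human."""
    by_provider = defaultdict(list)
    for code, seconds in click_log:
        by_provider[code].append(seconds)
    return sorted(code for code, times in by_provider.items()
                  if sum(times) / len(times) < min_avg_seconds)

print(flag_suspect_providers(clicks))  # ['site-B']
```

Real detection is far harder than this sketch suggests: sophisticated click fraud spreads requests across many Internet locations and simulates engagement, which is why, as noted above, it is very hard to prove definitively.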

A less frequently reported but still significantly profitable e-commerce criminal activity that comes under the heading of malvertising is coupon fraud. Online offers for coupons generally include security codes and individual identification information intended to ensure that coupons are requested only by legitimate consumers. However, criminals often copy or modify coupons to increase values, decrease purchase requirements, defeat or eliminate security codes, extend or eliminate expiration dates, and/or alter disclaimers, terms, and conditions. They also sometimes create complete fake coupons from scratch. These counterfeits are then sold on the Internet for less than face value.

The policy statements in this section begin with malware issues that not only impact the consumer, but may also impact the advertiser from the perspective of reputation. These are followed by click and coupon fraud issues, which impact only the advertising community or their direct customers (Table 6.2.1).

Table 6.2.1 Cyber Security Policy Issues Concerning Malvertising


6.2.2 Impersonation

Impersonation on the Internet is easy not just because it is easy to register a domain name and email address that bear no relation to any name you are known by, but because it is very difficult, if not impossible, for others to trace where you actually are. The ability of the Internet to obscure the origination of traffic is taken advantage of by criminals to cloak their activities in the guise of authorized use. In this age of routine business travel, authorized users have patterns of access from different cities on a daily basis. The communications from such users will vary with the business purpose of the specific visit.

It is very hard for some people to distinguish between an Internet user and a person. A person has an identity. A philosophical treatment of the concept may call it a “self,” “soul,” or “mind.” A more practical concept is human placement in society in relationship to others: born to a mother, residing in a locality, responding to a name, and holding various documents bearing that name. Assuming that we agree on the definition of a person, and call that identity, we call identity in cyberspace digital identity. Digital identity is a completely different concept from identity. At its core, digital identity is a string in a computer database, made up of 1s and 0s. It may or may not be the same string that an individual uses to log in to a computer, the string colloquially referred to as “login” or “user ID.” That digital identity is stored in a database so that it can be automatically associated with other strings. One of these other strings is often a password. A password is not identity; it is a method by which identity may be verified, or authenticated. In the early days of computer security, it became obvious that passwords could be shared, and simple possession of a digital identity did not always correspond to the identity of the individual behind the keyboard. Stronger forms of authentication were developed and classified into three factors:

  • What you know
  • What you have
  • What you are.

A password is something you know, and if you know a password, this lets you into most computer systems. But some systems require a second or third factor of authentication: what you have, which may be a handheld token such as a smart card, or what you are, which is a biometric measurement like a fingerprint or a retina pattern. The more factors of authentication a system requires, the stronger the authentication. Most systems admit users with only the lowest possible factor of authentication, so the strength of correlation between the digital identity on the computer and a real person’s identity is very low. A login string that identifies a user, in combination with an authentication factor, is generically referred to as “credentials.” When viewed in that context, it seems more obvious that credentials are things that may be used to impersonate people, and that some types of credentials make such an impersonation attempt harder than others.
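As a concrete sketch of two-factor authentication, the following combines a password (what you know) with a time-based one-time code generated by a token device (what you have). The code generator is a simplified version of the TOTP scheme standardized in RFC 6238; the secret, password, and parameter choices are illustrative only.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step=30, digits=6):
    """Derive a short one-time code from a shared secret and the current
    30-second time window (a simplified RFC 6238-style construction)."""
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(password, code, stored_hash, secret, for_time=None):
    """Two-factor check: the password (what you know) plus a one-time
    code from a token holding the secret (what you have)."""
    knows = hashlib.sha256(password.encode()).hexdigest() == stored_hash
    has = hmac.compare_digest(code, totp(secret, for_time))
    return knows and has

secret = b"shared-token-secret"                   # provisioned on the token
stored = hashlib.sha256(b"hunter2").hexdigest()   # server stores only a hash
now = time.time()
print(verify("hunter2", totp(secret, now), stored, secret, now))  # True
print(verify("guessed", totp(secret, now), stored, secret, now))  # False
```

Requiring both factors means a shared or stolen password alone no longer suffices: the impersonator must also hold the physical token that knows the secret.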

Prior to the use of the Internet for e-commerce, companies that required consumer agreement to a transaction demonstrated that agreement via a written signature. When these transactions were originally converted to the Internet, transaction information would be entered in an online form that would be printed and faxed to the counterparty. Security software companies anticipated requirements for digital signatures to authenticate Internet transactions that required a signature. The most promising of these technologies was a cryptographic technique described in Section 6.2 as public key cryptography. Split keys would be created for each user using public key cryptography. The user’s public key would be placed in a directory available to anyone who wished to verify a signature. The private key would be kept by the individual, their “digital pen” for use with a digital signing algorithm. The technology allows documents signed with the private key to be verified with the public key. The act of using a private key file in conjunction with the algorithm was called “digitally signing” a document. In many implementations of digital signature for email, private keys are kept in a file on the owner’s desktop. This provides something more than what you know, but is still dependent on a file that is sharable, so it does not actually count as a second factor of authentication. That is, two people could have the same file at the same time, so one could still impersonate another. Of course, one may forge a handwritten signature as well.
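The split-key idea can be illustrated with textbook RSA and deliberately tiny numbers. This is a sketch of the sign-with-private-key, verify-with-public-key relationship only; the key pair below is a toy, and real digital signatures use vetted libraries, full-length keys, and padding schemes.

```python
import hashlib

# Toy RSA key pair: n = 61 * 53 = 3233, public exponent e, private
# exponent d with e*d = 1 (mod 3120). Far too small for real use.
n, e, d = 3233, 17, 2753

def digest(message: bytes) -> int:
    """Reduce the message to a number the toy key can sign."""
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    """Apply the private key: the signer's 'digital pen'."""
    return pow(digest(message), d, n)

def verify(message: bytes, signature: int) -> bool:
    """Anyone holding the public key (n, e) can check the signature."""
    return pow(signature, e, n) == digest(message)

sig = sign(b"I agree to these terms")
print(verify(b"I agree to these terms", sig))             # True
print(verify(b"I agree to these terms", (sig + 1) % n))   # False (tampered)
```

Note that anyone who copies the file holding d can produce the same signature, which is why, as the text observes, a desktop private key file by itself does not constitute a second factor of authentication.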

However, when the Digital Signature Act of 1999 and the Electronic Signatures in Global and National Commerce Act were passed in 2000, neither required any proof of identity over and above simple logins and passwords, and so the pressure for cryptographic algorithms requiring a private key and/or a second factor of authentication was somewhat abated. This opened the door for a wide variety of online e-commerce transactions. It also lowered the bar for e-commerce transaction impersonation. Identity theft has been the number one complaint received by the U.S. Federal Trade Commission since 1999 (FTC 2011).

Another complication of impersonation concerns the age-old practice of slander. Slander on the Internet is so prevalent that it has given rise to new business models for e-commerce reputation maintenance and recovery. There is no accountability for slander cloaked in Internet anonymity or false identity. There are no negative consequences, and there may even be customers to gain, from posting false accusations that are difficult to disprove.

As digital identity is just a string in a computer, there does not even have to be a person associated with it. In fact, most technology comes out of the box with a digital identity built into it. This is typically a default administrative user, but may also be a user specifically configured to demonstrate the features of a product. These out-of-the-box digital identities are called “generic IDs” because they do not belong to any one person. Often, generic IDs remain configured with the default password supplied by the technology vendor for the entire lifetime of the product. These IDs are well known to criminal elements and are often used to impersonate technology administrators (Table 6.2.2).

Table 6.2.2 Cyber Security Policy Issues Concerning Impersonation


6.2.3 Appropriate Use

In the software industry, end-user license agreements (EULAs) are used to specify the terms and conditions under which software is licensed to those who purchase it. These agreements typically limit the authority of the user to copy the software and limit the liability of the vendor for any faults in software operation. They are typically presented in an automated fashion while a user is installing software. Their terms are vague, and sometimes one of those terms is that the vendor can change the terms at any time while the user remains bound to them (Hoglund and McGraw 2008). Where possible, software vendors try to enforce these EULAs with automated techniques for license verification.

One common method of software license verification is for the software to “phone home,” a colloquial expression that refers to the capability of software to access the software vendor’s website. Phone home features check attributes of the software installation against the vendor’s records of purchase. For example, if a purchaser has installed the software on more machines than permitted via the EULA, the software may disable itself. Phone home features are also used to check for patches and updates, in which case the software may automatically update itself, or prompt the user to update the software. A more insidious use of phone home features is made by spyware to upload data observed on the user’s computer. Phone home features are not limited to traditional computers and servers; they are standard operating procedure for mobile devices, and are typically incorporated into software that supports industrial control systems.
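A phone home license check can be sketched as follows. Everything here (the license-key format, the vendor’s record structure, and the install limit) is invented for illustration; real products use proprietary network protocols rather than a local table.

```python
# Hypothetical vendor-side purchase records, keyed by license.
vendor_records = {"LIC-1234": {"max_installs": 2, "seen_machines": set()}}

def phone_home(license_key, machine_id, records=vendor_records):
    """Register this installation with the vendor. Returns False,
    signaling the software to disable itself, when the EULA's
    install limit is exceeded or the license is unknown."""
    record = records.get(license_key)
    if record is None:
        return False
    record["seen_machines"].add(machine_id)
    return len(record["seen_machines"]) <= record["max_installs"]

print(phone_home("LIC-1234", "machine-A"))  # True: first install
print(phone_home("LIC-1234", "machine-B"))  # True: second install
print(phone_home("LIC-1234", "machine-C"))  # False: over the EULA limit
```

The policy concern is the channel itself, not this particular check: the same connection that verifies a license can carry patch queries or, in the spyware case, data observed on the user’s computer.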

The opposite of a phone home feature is a command and control feature, which allows a central administrator to control software on multiple computers. Each controlled computer is configured to listen to the network; network listening is a technique that software uses to be alerted to Internet queries. Network listening features combine the Internet address of a computer with a subaddress, or port, that can be assigned by a computer operating system to a software process. A typical computer has some 65,000 ports that can be distributed among software processes, and the controlled software will select one that is not used by any common programs. Malware command and control features are sometimes referred to as RATs, an acronym for remote access tool that conveys their malicious purpose.
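The network listening behavior described above can be sketched with a few lines of socket code. The port number is an arbitrary choice from the high, normally unused range a controlled program might select; this listener accepts a single connection and returns whatever instruction arrives, much as a command and control agent waits for its controller.

```python
import socket

def listen_once(port=50007):
    """Bind an arbitrary high port, wait for one inbound connection,
    and return the bytes sent by the remote peer."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", port))   # claim the port from the OS
        srv.listen(1)                   # start listening for queries
        conn, _addr = srv.accept()      # block until a peer connects
        with conn:
            return conn.recv(1024)      # the instruction sent over the wire
```

Legitimate servers and malicious RATs alike rely on exactly this operating system facility; the only difference is who is on the other end of the connection and whether the computer’s owner knows the listener is running.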

These features are described here to emphasize that the ability to install phone home or command and control features on an individual’s computer without their knowledge presents a policy issue under the heading of appropriate use. These features are installed not only by software vendors whose software was purchased by the computer owner, but also by advertisers, e-commerce vendors, and industrial control system integrators. As these programs are often installed without the user’s knowledge and/or run automatically on users’ machines, the circumstance presents an issue of unauthorized use of computing resources. So even if personal data are not collected by phone home or command and control software, there are cyber security policy issues to consider separate from the data privacy issues presented by these features.

Other issues in this section concern appropriate use policies within an enterprise, given that it seems appropriate to expect that computers owned and operated by an organization should, in some sense, serve the enterprise mission. Appropriate use is a technology-neutral term, but its interpretation may need to adjust over time. Some nations—and not just those that censor the Internet—may draw a finer line between appropriate use and cyber crime than others. In this section and the next section on cyber crime, we draw the line according to mainstream U.S. culture, where political speech is legal, though it may sometimes be inappropriate, bordering on illegal, for example, if it incites discrimination or violence. By contrast, pornographic content depicting children invariably indicates cyber crime, so it is covered in the next section (Table 6.2.3).

Table 6.2.3 Cyber Security Policy Issues Concerning Appropriate Use


6.2.4 Cyber Crime

Cyber crime refers to any criminal act conducted in cyberspace. This includes infringement of both personal and property rights. Personal rights violated via cyber crime typically involve freedom of speech or religion, invasion of privacy, or the luring of minors. As discussed in the previous section on appropriate use, there is a dividing line between cyber debate and cyber bullying that not everyone draws the same way. Inappropriate use can be seen as a spectrum: at one end is the woman who impersonated a teenager to harass a rival of her teenage daughter, in the middle are the college students harassing their gay friend, and at the other end is an outspoken radio show host disparaging minorities. Though all are generally thought to have crossed the line of inappropriate use, there is no universal agreement that all cases deserve criminal prosecution. Because the law lags behind the myriad ways that criminal acts may be conducted in cyberspace, an act does not necessarily have to be illegal to count as cyber crime. Cyber criminal acts may be illegal in some jurisdictions but not in others. While these issues evolve, case law and community involvement will help to define cyber crime against persons.

Cyber crime against property includes, but is not limited to, disabling, destroying, disrupting, or appropriating assets. However, not all cyber criminal acts are new kinds of crime. They may just be traditional crimes that are enabled by, or made more effective by, automation. For example, credit card theft originally described the physical act of stealing a credit card and using the stolen card in a physical retail establishment. Today, credit card theft is typically accomplished by stealing the data associated with an individual card, and using that data to make online purchases. The only physical object that changes hands is the drop shipment of the merchandise purchased with the stolen “card.”

When crimes such as card theft are conducted using specialized software that provides economies of scale in mass thievery, it is organized cyber crime. Specialists in the steps required to conduct crime provide services for hire, creating an underground economy. For example, see Figure 6.3, which describes the relationships between various players and products in the organized cyber crime industry (BITS 2011). Figure 6.4 provides some perspective on each player. They are not all taking equal risks from participation in cyber crime activities. Many may claim to be legitimate merchants, such as gun dealers who are not accountable for the crimes committed by their clients. The zero-day vulnerability market is fueled by hackers looking for security bugs and flaws in software that the software owners are not yet aware of. They sell those vulnerabilities to people who design software that can exploit the vulnerabilities to break into systems. Each exploit is a single malware unit, and these units are combined into kits that allow criminals to infect computers to create botnets. Those botnets are rented out for criminal activities that have earned the acronym CAAS, which stands for Crime as a Service. The services include everything from password harvesting to denial of service attacks.

Figure 6.3 Crimeware marketplace.


Figure 6.4 Crimeware risk-profit tradespace.


Organized cyber crime may also generally refer to any situation where automation is used to facilitate Internet fraud. According to some experts, online gambling games of chance are more typically rigged than not, and online gambling companies would rather pay trivial fines when caught than stop raking in the guaranteed profits generated by their software’s interaction with overly gullible online gamblers (Menn 2010). Even online games of skill can be defrauded by players who reverse engineer the software and reap rewards that were earned not via skill but rather from cunning. Those who reverse engineer software used to run games can often use this knowledge to cheat as effectively as, or more effectively than, counting cards in a poker game (Hoglund and McGraw 2008) (Table 6.2.4).

Table 6.2.4 Cyber Security Policy Issues Concerning Cyber Crime


6.2.5 Geolocation

A major inhibitor to successful investigation of cyber crime is the inability to identify the physical location of an individual user. Though it is clear that user activity on a given computer is associated with an Internet address to which the computer is connected, the source address of an attack is rarely a computer that is physically located in the same place as the human attacker. Cyber criminals cloak their activity by obscuring their physical location. Perhaps by tacit analogy with physical crime, wherein the owner of the location or weapon is not responsible for criminal actions within it, ISPs and hosting service providers are not held accountable for computer crime within their networks. If the analogy extended to aiding and abetting, different kinds of users might be accountable for the same crime. Enforcing accountability for consumers, software developers, network administrators, and social networking identities requires different forensic capabilities. These include, but are not limited to, the ability to identify the source of a network connection at both the user and computer level, the ability to determine what physical path supports a network connection, the ability to know the provenance of software updates arriving from the network, and the ability to determine what changes software may effect on a given computer (Landwehr 2009). None of these capabilities is in place on the Internet, and they are enforced only with great difficulty even on highly critical private networks.

There are so many vulnerable computers on the Internet that attackers keep catalogs of them, as a salesperson would keep a client contact list. By maintaining credentials for multiple vulnerable computers, an attacker can change the path by which they launch an attack every time they launch one. Figure 6.5 provides an example. In order to trace the attacker using the path in Figure 6.5, the victim would first have to gain access to at least one of the machines in the botnet and hope to find the botnet control server. Depending on how the botnet was configured, an investigator working on behalf of the victim may have to observe network traffic going into and out of the machine, reverse engineer the botnet software and find its configuration files, or use operating system logs (normally not configured on vulnerable machines) to see evidence of past connections from the botnet controller. From the botnet controller machine, an investigator would have to perform similar steps to determine that the attacker had accessed that computer from a network device belonging to a hosting service provider. Depending on the relationship between the attack victim and the service provider, an investigator working on behalf of the victim may have to call a lawyer to file a subpoena to produce the records of activity on the network device for the time of the attack. Even if the hosting service provider is friendly and knowledgeable, once it is evident that their own records show that an attack originated on one of their servers, which had been compromised, they may be reluctant to share this information unless compelled by a court order. They in turn would identify the source as a much larger corporate enterprise. As this enterprise was vulnerable at several network interfaces, both internal and external, it is not likely that they have the expertise or the security logs required to track down a multiple-hop network connection within their border. Even if they do, they are just as unlikely as the service provider to admit it, and the investigator working on behalf of the victim may have to issue another subpoena. The information provided by the corporate enterprise would identify the source as a bot controlled by the attacker, which itself is unlikely to have logs and would require reverse engineering.

Figure 6.5 BotNet attack path.


If the ultimate source of the attack ends up being a wireless user, the difficulty of identifying a physical location is either increased or decreased, depending on the capability of the user’s mobile device. If the device is a laptop, the source would typically be a wireless access point, which is a device that communicates via wireless protocols with end users and connects them to the Internet via a land line. An investigator working on behalf of the victim would have to go to the location of the wireless access point and eavesdrop on the connections, examining the wireless signals emanating from all computers in the area until one correlated with the network address of the attack source, and then home in on the attacker’s location. However, if the user is on a mobile phone, the device is often equipped with a global positioning system (GPS) receiver. GPS is a satellite-based service that allows the device to query for its geospatial latitude and longitude coordinates. These are often automatically queried and stored on the device in order to be available to applications that require such information, such as applications that provide maps and driving directions. The only difficulty in identifying the physical location of a user with GPS services enabled is the accessibility of these records remotely, plus the fact that the user is mobile.

Cyber attackers count on the difficulty and complexity of such investigations to cloak their Internet activities. Even if such an investigation was successful, by the time it concludes, the attacker could have completely packed up and moved operations to a different physical location (Table 6.2.5).

Table 6.2.5 Cyber Security Policy Issues Concerning Geolocation


6.2.6 Privacy

In order to use Internet services, information must pass in both directions between the user and the service provider. The technical mechanisms that provide the data exchange pick up certain types of information by default from both sides. Internet and mobile application service providers thus get some of the information they process “for free.” As this information was not requested from the user, but provided without their knowledge, it has spawned a new type of e-commerce business model, one wherein the customers are not the users.

Privacy is the ability of individuals to protect information about themselves and to release it selectively. Information security is the protection of information from theft, unauthorized change, or denial to authorized users (i.e., confidentiality, integrity, and availability). The discussion of cyber crime in this section illustrates that this ability may be critically important to prevent identity theft and stalking, but that is not the only reason an individual may seek control over his or her own data. For example, data concerning personal spending or browsing activity may present evidence of habits or personality traits that may subject an individual to discrimination. Even if individual behavior is not evident in an individual’s data, one person’s data are often correlated with the social networking groups that they frequent, and this association may introduce cause for discrimination or embarrassment. Smart Grid technology with inadequately secured Smart Meters that record behavior in the home environment exacerbates privacy concerns. Some security professionals and advertising executives are comfortable repeating the phrase, “privacy is for pornsters and mobsters.” But this terse dismissal of the need for privacy is insensitive to the plain fact that people do not always openly discuss everything about their personal lives, and the collection of extremely detailed data on one’s cyber activity is equivalent to engaging in that level of open discussion. The advent of e-commerce businesses where the user is not the customer has motivated large-scale data collection services that not only gather correct information about individuals, but use heuristic algorithms to make informed guesses about attributes of an individual (Cleland and Brodsky 2011).
Many service providers engage in detailed attempts to personalize user web browsing experiences by collecting information about user behavior and using it to determine what information to display. In so doing, they not only collect enough information about an individual to constitute a privacy violation, but they also use it to tailor a user’s experience on the site, limiting their view of system features to those that the site programmers have determined an individual with that kind of attribute profile is most interested in (Pariser 2011). When these personalized menus are laced with advertisements, the practice is referred to as online behavioral advertising. People are becoming increasingly dependent on services that are not commercially marketed to end users, such as search and global social networking. Hence, they are limited in their use of the Internet unless they submit to the collection of personal information. Such sites are increasingly bold about lifting data that has little to do with the service provided, as when Twitter was caught uploading contact lists (Sarno 2012).

The policy issues in this section thus center on transparency and accountability for the handling of personal data, both identifiable and not. It starts by addressing privacy issues at the nation-state level and ends with issues of individual choice that present privacy trade-offs (Table 6.2.6).

Table 6.2.6 Cyber Security Policy Issues Concerning Privacy


6.3 Cyber Conflict Issues

Cyber conflict is a generic label for conflicts and coercion in cyberspace where software, computers, and networks are the means and/or the targets. It covers a broader scope than cyber warfare, encompassing all conflicts and coercion between nations and groups for strategic purposes in cyberspace, including nation-states actively contending with each other for national security purposes. Not all cyber conflicts, such as large-scale cyber espionage, rise to the level of armed force. Cyber conflict is not restricted to nations and businesses, but may occur between individuals, loosely connected social networking groups, and organizations of all shapes and sizes. Where people engage in cyber conflict for political purposes or to defend ethical beliefs, this is called hactivism. A key point to remember in any discussion of cyber conflict is that it is not a discussion about computers, but about people.

Cyber conflict is often conducted for strategic purposes, as when nation-states actively conduct missions in cyberspace in order to contend for technical superiority (Adair, Deibert et al. 2010). These conflicts may or may not rise to the level of armed force, ranging from large-scale cyber espionage to cyber war. The term cyber conflict allows a broader discussion of how nation-states and other organized groups with large cyberspace operations contend in cyberspace while reserving the term “warfare” for only the most significant attacks between nation-states. The term helps simplify concepts of warfare, espionage, and other attacks, as it is broad enough to include many other hostility-motivated activities, but still specific enough to allow room for growth and discussion of the essence of violence conducted within or assisted by cyberspace. A legacy term for cyber conflict is electronic warfare, which was more restrictive as it was typically used to refer only to situations where cyberspace was both the means and the target of attack.

This section covers one of the key drivers of cyber conflict—claims to intellectual property in cyberspace. Conflicts over intellectual property may be overt or covert, in which case they are classified as cyber espionage. The most extreme form of cyber conflict is cyber war.

6.3.1 Intellectual Property Theft

Although the copyright and trademark issues discussed in Section 6.1.4 are issues concerning intellectual property, those are closely related to a company’s Internet presence and thus are issues on par with Internet names and numbers. Also, many aspects of cyber crime as discussed in Section 6.2.4 relate to theft of intellectual property. In this section, we consider threats to intellectual property used for competitive advantage such as patents and trade secrets.

The term advanced persistent threat (APT) refers to an organization that is well equipped to study a cyber infrastructure in multiple dimensions, including network, application, human, and physical, with the ultimate aim of identifying and extracting information and/or undermining critical aspects of a mission, program, or organization. As described by the National Institute of Standards and Technology (NIST), “The advanced persistent threat: (1) pursues its objectives repeatedly over an extended period of time; (2) adapts to defenders’ efforts to resist it; and (3) is determined to maintain the level of interaction needed to execute its objectives” (NIST 2011). Recent headlines show that many large global firms are subject to these attacks (Jacobs and Helft 2010; Drew 2011; Schwartz and Drew 2011; Markoff 2012). Several major cases have been thoroughly investigated, revealing that significant digital assets have been misappropriated and used for either commercial gain or subsequent attack planning (Alperovitch 2011). Yet there has been no undisputed attribution or successful prosecution that would indicate justice has been served in these cases. Rather, we are left to conclude that hackers in our midst regularly harvest intellectual property with the purpose of duplicating manufacturing lines, profiting from the distribution of stolen entertainment, damaging data integrity, and/or damaging physical equipment.

Many APT attacks begin with social engineering, that is, the act of persuading knowledgeable staff members to divulge information about how to access enterprise networks (FS-ISAC 2011). Social engineers working on behalf of APTs contact staff via social networks and impersonate friends, family, and coworkers, as well as assume false identities such as customers trying to test passwords. They may also engage in in-person social engineering, meeting a staff member on a business trip or in another public place, and pretending to be a friend and confidant. This stage of the attack, also referred to as reconnaissance, is the first of a pattern of seven distinct stages. The complete pattern, as seen by security analysts, is (Cloppert 2010):

1. Reconnaissance—social engineering and network scanning, infiltration with phone home malware designed to gather enough information to complete steps 2–4.
2. Weaponization—selection and placement of malware designed to evade security controls identified in step 1.
3. Delivery—propagation of weapon package to attack target identified in step 1, for example, via phishing email.
4. Exploit—execution of code that takes advantage of vulnerabilities identified in step 1 to plant weapon on target.
5. Installation—use weapon to install command and control malware.
6. Command and control—malware connects to malware operator website to retrieve commands.
7. Actions on intent—malware performs actions directed by the malware operator.
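The seven stages above can be sketched as a simple data structure that pairs each stage with a defensive measure a security analyst might apply at that point. The stage names follow the list above; the example countermeasures are the present illustration's assumptions, not part of the cited model.

```python
# A minimal sketch of the seven-stage APT attack pattern described above.
# The stage names follow the list; the example defensive measures paired
# with them are illustrative assumptions, not part of the cited model.
APT_STAGES = [
    ("Reconnaissance",      "monitor for network scanning and social engineering attempts"),
    ("Weaponization",       "keep security controls and their configurations confidential"),
    ("Delivery",            "filter phishing email and malicious attachments"),
    ("Exploit",             "patch known vulnerabilities promptly"),
    ("Installation",        "restrict software installation privileges"),
    ("Command and control", "block or log outbound connections to unknown sites"),
    ("Actions on intent",   "audit access to sensitive data stores"),
]

def stage_defense(stage_name):
    """Return the illustrative defensive measure for a named stage."""
    for name, defense in APT_STAGES:
        if name.lower() == stage_name.lower():
            return defense
    raise KeyError(stage_name)
```

A mapping of this kind underlies the "intelligence-driven defense" approach: because the stages occur in order, a defender who disrupts any one stage interrupts the whole attack.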

The stages of the attack can iterate and evolve continually. APT attacks, when discovered at one location, have been traced in logs going back over 18 months. Because the malware “weapon” is specifically designed to evade existing security controls, there is little chance that enough activity logs will have been gathered to make it possible to determine exactly what intellectual property was accessed by the attackers over the course of their site penetration.

Policy issues related to intellectual property overlap with all e-commerce policy issues described in Section 6.2. Impersonation is used to facilitate social engineering, malvertising is used for delivery, appropriate use addresses the weaponization that often makes use of crimeware, and geolocation issues make investigation extremely difficult. Additional policy issues concerning intellectual property have to do with more systemic issues that create a cyberspace environment where intellectual property theft resists forensics analysis and therefore prosecution. Policy issues range from nation-state objectives for technical superiority to enterprise objectives for awareness (Table 6.3.1).

Table 6.3.1 Cyber Security Policy Issues Concerning Intellectual Property


6.3.2 Cyber Espionage

When copying and pirating are motivated by nation-state goals for dominance, the definition of the activity morphs from intellectual property theft to espionage. “Cyber espionage,” like other kinds of espionage, is typically considered a legitimate activity for a sovereign state. Spying has long been an activity for states to conduct against one another. All issues relating to protecting intellectual property would apply to cyber espionage, whether or not the intellectual property in question belongs to a nation-state. This is because, given the size of global corporations, and the dependency of governments on their private sector for critical infrastructure and services, as well as considerable tax income, attacking a private sector company may indeed be part of a nation-state cyber espionage campaign (Table 6.3.2).

Table 6.3.2 Cyber Security Policy Issues Concerning Cyber Espionage


6.3.3 Cyber Sabotage

Sabotage describes actions that are often bounded in time (during war), actor (by nonstate guerrillas or state commandos), and result (destruction rather than espionage or crime). Cyber sabotage is a phrase that reflects the damage potential from cyberspace terrorists. Any kind of enterprise may be targeted by saboteurs, from individuals to nation-states. It is not uncommon for disagreements among hackers to evolve into the cyber equivalent of gang wars, wherein rivals destroy each other’s information. Such activity may even escalate from the cyber to the physical world, as hackers ransack each other’s homes in acts of vengeance for cyber attacks. Cyber gangs also stalk unsuspecting victims, as when groups of hackers with similar political viewpoints join forces to destroy or defame enterprises that conduct activities to which they are opposed, or that simply publish opinions that oppose their views.

When cyber attackers bond over similar political or ethical causes, they are classified as hactivists. Objects of hactivist attacks may be corporations or not-for-profits. They may even be individuals who are targeted on the basis of involvement in activities related to their job function, as when business partners of pharmaceutical firms were targeted by an animal rights group because the firms conducted product safety tests using animals (Kocieniewski 2006).

Where the target of cyber sabotage is a nation-state, hactivists and nation-state military cyber warriors may be indistinguishable. During a denial of service attack against the country of Estonia in 2007, hactivists rallied to the cause of increasing the strength of botnets used to deliver the denial of service attack (Clarke and Knake 2010). In this case, e-commerce in the nation was virtually shut down for a week, and although many hactivists claimed to be joining the attack for patriotic reasons, no nation-state took responsibility for the overall effort. In this case, only e-commerce was the target, but nation-state threats aiming to exploit cyberspace vulnerabilities may target any component of the national infrastructure, including, but not limited to, the operation of industrial control systems, the integrity of banking transactions, or the readiness of military equipment. As described in Chapter 3, potential damage from sabotage of cyber components of these systems may include physical harm because of the extent to which industrial control systems control physical processes. The policy statements in the following table concern nation-state cyber sabotage issues (Table 6.3.3).

Table 6.3.3 Cyber Security Policy Issues Concerning Cyber Sabotage


6.3.4 Cyber Warfare

Military interest in cyber security predates the “cyber” era, as it is rooted in earlier doctrines like automation, intelligence and counterintelligence, operational security, computer security, and electronic warfare. Hence, cyber security is still deeply entwined with all of these military topics, which have confused practitioners and theorists for decades. One of the first reports to highlight the possible risks of computer automation was the Ware Report of 1970, at the dawn of the age of computing, when remote terminals allowed people to access computers even though they were not in the same room (Ware 1970). Most of the threats and vulnerabilities with which the military was concerned in 1970 remain valid concerns: accidental disclosure, deliberate penetration, passive and active infiltration, physical attack, logical trap doors, supply chain intervention, software and hardware leakage points, and malicious users or maintenance people.

Military attitudes toward “information warfare” developed rapidly after successful technology-enabled offensive strategies, driving new doctrine, strategies, and theories, especially in the United States, where the shift was termed the “revolution in military affairs” (RMA) (Rattray 2001). Though the strategic objective to capitalize on cyberspace in general is clear, U.S. Department of Defense views on cyber security changed several times over the subsequent 15 years. The doctrinal concepts thrashed from information warfare to information operations and information assurance (all of which generally treat cyberspace as one dimension of the information realm), offensive and defensive counterinformation, computer security (a more traditional view), and network operations and network security. Cyber-related security forces of varying strategic roles were scattered throughout the military’s existing structures until the order for their consolidation into a single four-star command in 2010. Now there is a central U.S. Cyber Command, part of the U.S. Strategic Command. “Cyber war” today means war that happens to be conducted using weaponry found only in cyberspace. This definition reserves the term “warfare” for the way it is traditionally used to refer to conflicts where force can be legitimately used by sovereign states. Nevertheless, the connection between cyber war and traditional war is increasingly obvious, and cyber warfare is in the process of being merged with the larger body of understanding, concepts, and laws. This definition does not match the public use of the term “cyber war,” which has been used to cover everything from online juvenile hooliganism to acts of organized crime to espionage. It is, however, in line with the current general consensus among international lawyers, although it is acknowledged that this consensus may change very quickly after a devastating cyber attack, especially one inflicted by one nation on another.

The Department of Defense focus on cyber war predominantly considers “cyber” as networks. Its doctrines sort through the differences between traditional battles and cyber battles. For example, in cyber battles, preemptive first strikes using overwhelming force do not necessarily remove adversaries; trying to use cyber counterattacks to disable attacks in progress is complicated by the difficulty of identifying targets; and the topology of the battlefield may change mid-conflict (Denmark and Mulvenon 2010). As a result, a more recent military strategy aspires to these goals (Alexander 2011):

  • Treat cyberspace as a domain for the purposes of organizing, training, and equipping, so that DoD can take full advantage of cyberspace’s potential in military, intelligence, and business operations.
  • Employ new defense operating concepts, including active cyber defenses, such as screening traffic, to protect DoD networks and systems.
  • Partner closely with other U.S. government departments and agencies and the private sector to enable a whole-of-government strategy and an integrated national approach to cyber security.
  • Build robust relationships with U.S. allies and international partners to enable information sharing and strengthen collective cyber security.
  • Leverage the nation’s ingenuity by recruiting and retaining an exceptional cyber workforce and enabling rapid technological innovation.

The policy statements in this section therefore range from strategic partnership initiatives to the decision by an individual country to launch a cyber attack. The first several policy statements describe policy cooperation issues. These are followed by cyber military operations issues. These are followed by policy issues with respect to the use of military force (Table 6.3.4).

Table 6.3.4 Cyber Security Policy Issues Concerning Cyber Warfare


6.4 Cyber Management Issues

Even if the military has unmitigated success in arranging its resources around its mission, the best laid plans to establish military cyber defense may be laid low by its unexpected dependence on civil infrastructure (Lynn 2010). The namespaces and numbering systems that provide the infrastructure for both public and private telecommunications are managed by private industry. The practice of technology as a field of professional discipline is quite young compared to other fields. Software architects do not have a guild or apprenticeship system as do architects of physical facilities. Technology consultants are not required to learn their trade through a series of peer-administered exams as are medical consultants. Buyer beware is the rule of the day. The field of technology practice has therefore, not unexpectedly, yielded a field of technology malpractice. Technology malpractice investigations are motivated by suspicion of management neglect of security issues (Rohmeyer 2010). For example, such investigations provide evidence in legal cases of negligence brought by the U.S. Federal Trade Commission (Wolf 2008).

Nevertheless, a half century of practicing security professionals has seen the accumulation of a large body of knowledge in cyber security. Shared experience of similar technology architecture and operational processes has yielded both best practices and rules of thumb that should not be cast aside simply because there is no scientific basis for their universal acceptance. This section explores some of the policy issues that routinely arise in managing cyber security. Cyber security has long been used to control assets tracked by computer systems, and so cyber security management is accustomed to applying checks and balances to ensure that its fiduciary responsibility for asset management is met. Cyber security management often begins with research into both technology capabilities and system requirements. It is dependent on the capability of an organization to buy, build, or outsource technology components, and so supply chain management is a critical requirement for success in technology practice. Often, cyber security management will attempt to delegate security functions to the areas of cyberspace management most closely associated with the assets to be protected. However, these delegation attempts sometimes fail due to a lack of security skill sets in the delegated area. An often suggested solution to this problem is some type of certification and/or accreditation for security professionals. These requirements extend to suppliers of services and equipment that are incorporated into an enterprise cyberspace infrastructure. Checks and balances are required to hedge against cyber security risk. There is a large body of research in cyber security practices that has enabled successful security solutions and led professionals to adopt principles that provide guidance for security design and operation. However, as discussed in Chapter 3, more research and development is needed to cover both existing and emerging cyberspace usage scenarios.

6.4.1 Fiduciary Responsibility

Operations is a generic term in many technology and systems-based organizations for the staff that maintains and monitors business processes. In heavily technology-supported businesses, technology operations and business processes are inextricably intertwined. Even where two separate departments maintain and monitor the technology-enabled processes and business-level processes independently, the Operations department is supported by screens and programs that are information-rich views of the same technology whose byte-flow and electronic circuits are monitored by the information technology department. For example, the technology department may configure employees to use systems while the business department will be responsible for configuring customer users. Operations, or “ops,” as it is colloquially called, also generally includes technology services support organizations like desktop software installation and help desk. Of course, there are always exceptions, and this depiction of mainstream technology operations is not necessarily applicable to industrial control systems (ICSs).

Nevertheless, as in any community where sizable assets are maintained by a few privileged and trusted people, operations administrators routinely face ethical dilemmas. In addition to controlling user access to systems, Operations is the caretaker of the assets themselves. In large systems-oriented organizations, large databases of personally identifiable information (PII) and information repositories of trade secrets are handled according to preset routine, in the same perfunctory fashion as systems containing cafeteria menus are handled. However, in a secure organization, the access control settings and monitoring processes for the sensitive information are more rigorous than the technologies and procedures implemented to support the menus.

Cyber operations in any sizable enterprise is typically a round-the-clock endeavor. Even where global marketplaces do not demand active support, automated system processes may be required to devote considerable computer resources in off-hours to crunch numbers to produce data for start-of-day consumption. The 7 × 24 nature of operations makes ops the obvious first point of contact for any message or alert that may indicate potential business interruption. Hence, security incident identification and response procedures are a routine part of operational process, even in organizations that do not consider themselves responsible for security (Kim, Love et al. 2008).

The policy statements listed in this section all address issues that arise in accepting or performing information caretaker responsibilities. The first few fiduciary responsibility issues concern the establishment of management processes that are required to demonstrate that due diligence is exercised in the caretaker function. These are followed by specific expectations that data owners typically have of data caretakers. The remaining issues address the role of nation-states in establishing conditions for the smooth functioning of a technology industry requiring demonstration of fiduciary responsibility (Table 6.4.1).

Table 6.4.1 Cyber Security Policy Issues Concerning Fiduciary Responsibility


6.4.2 Risk Management

Risk management applies to any kind of risk. Typically, a risk management officer or division will focus on credit risk, market risk, and operations risk. Technology risk is a subset of operations risk, and cyber security risk is typically viewed as a subset of technology risk. The human element in operations is considered more of a risk than the technology itself because, despite all of the software flaws in computers, they are still typically more reliable than people at performing a job repeatedly and consistently. Even for systems under development, it is far more common for software engineers to sabotage a system or a project by intentionally exercising the authority in their own job function than by thwarting security measures (Rost and Glass 2011). Given its low relative position in the hierarchy of the things risk managers care about, security risk is often absent from any centralized enterprise risk management process. If any formal cyber security management process occurs at all, it is typically performed by those responsible for technology management.

There are not many guidelines on how to perform cyberspace risk assessments, but there has been substantial work performed under the heading of information security risk assessment. Where information is considered an asset, information security risk assessment determines the potential loss due to damage to information. Damage to information is typically portrayed as loss or degradation of information confidentiality, integrity, or availability, though some have suggested that information security attributes be extended to encompass attributes that refer more directly to its value, such as utility and possession. Although there are many economic analysis methods available to a cyber security manager making risk assessment decisions, in its most basic form, the cost of a security measure is compared to the expected loss avoidance; if the measure costs less than the loss it is expected to avoid, it is recommended for implementation (Gordon and Loeb 2005). The hard part of this type of analysis is not the math, but actually knowing what the risks are, and knowing that the suggested measure, whose cost can be quantified, will actually perform as expected once it is installed. Security standards provide little to no guidance on this part of the process.
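The basic cost-benefit comparison just described can be expressed in a few lines. The following sketch uses the common annualized loss expectancy framing; the function names and the figures in the example are hypothetical, chosen only for illustration.

```python
def expected_loss_avoidance(annual_rate, loss_per_incident, risk_reduction):
    """Annualized loss a control is expected to avoid.

    annual_rate       -- expected incidents per year without the control
    loss_per_incident -- estimated loss per incident
    risk_reduction    -- fraction of that loss the control is expected to avoid
    """
    return annual_rate * loss_per_incident * risk_reduction

def recommend(control_cost, annual_rate, loss_per_incident, risk_reduction):
    """Recommend the control if its cost is less than the expected loss avoidance."""
    return control_cost < expected_loss_avoidance(
        annual_rate, loss_per_incident, risk_reduction
    )

# Hypothetical figures: 2 incidents/year at $50,000 each; the control is
# assumed to avoid 60% of that loss, so expected loss avoidance is $60,000/year.
# A $40,000/year control would be recommended; an $80,000/year one would not.
```

As the surrounding text notes, the arithmetic is the easy part; the inputs (incident rates, loss magnitudes, and the control's actual effectiveness) are where the real uncertainty lies.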

It is important to distinguish risk assessment as a management tool from either risk management or security management. After risk assessments are done, decisions are made based on the results. Where strategy is involved in the security decision-making process and the outcomes of those strategies are monitored, this is risk management. Where the programs, processes, and projects are created to act on risk management decisions, this is security management. Risk management results in objectives and guidance for security management. As such, risk management is at the heart of many debates on security policy issues. These debates include discussion of cyber security strategy, policy, and implementation, and include risk assessment, risk decisions, concepts for mitigation such as transfer, as well as measuring effectiveness and monitoring evolution.

Organizations in the critical infrastructure sectors are typically held to a higher standard of risk management, with systemically critical organizations being held to the highest standards of maintaining best security practices. This includes systems and networks whether they are connected to the Internet, or are completely privately operated networks for a limited number of identified parties, or proprietary networks within one organization, or industrial control systems which may have very limited network capabilities. Cyber security policy issues in risk management include organizational responsibility to understand and evaluate cyber security risk, segregation of duties utilized in risk and security management, and the government’s role in assuring risk management practices for the critical infrastructure upon which communities depend for both cyber and physical services (Table 6.4.2).

Table 6.4.2 Cyber Security Policy Issues Concerning Risk Management


6.4.3 Professional Certification

The process of certifying information security professionals is a growing and dynamic field. There are literally thousands of certifications available, ranging from hands-on examinations of product-specific knowledge, to subject area certification, to broad information security certifications. None of the popular cyber security certifications carry any form of liability or bonding beyond an expected adherence to a common code of ethics and conduct, nor are they equivalent to professional registration regimes. While the term “engineer” is often used in this career field (“software engineer” and “network engineer” are common examples), it is not in the same context as a registered or licensed engineer who is subject to a given government’s regulations of the profession.

Normally, companies and organizations will train and certify their cyber security employees to some standard acceptable to the broader career field. But when internal employees are not used exclusively for cyber security operations, organizations are not relieved of responsibility for regulatory compliance by outsourcing technology operations. Hence, they must find ways to demonstrate that the vendors with whom they have contracted are capable of meeting cyber security requirements. This requirement has spawned a plethora of checklists used by companies to determine whether a vendor’s security posture is capable of delivering a secure operational process. For example, the DoD has established a certification program in response to an audit finding that DoD contractors were performing security work without the requisite background. The director of the program maintains that any certification is better than none, as it gives the government a tool for oversight that can be improved going forward (e.g., DoD 2005). However, the certification required to perform the job function of a security engineer is one that can be achieved by passing an exam of technology facts and requires no demonstration of security engineering experience. Nevertheless, a high school dropout who gains this certification on the job will, by DoD policy, be favored for a security engineering job over a successfully practicing engineer with 20 years’ experience and advanced degrees in cyber security. One reason given for upholding the DoD standard is that certifications require continuous learning while advanced degrees are not evidence of continuing education in security.
However, the authors include holders of the Certified Information Security Manager (CISM), Certified Information Systems Security Professional (CISSP), and Certified Information Systems Auditor (CISA) certifications, and well understand that one can earn ongoing education credit hours from the organizations that support these certifications by attending vendor-advertisement presentations, reading magazine articles, or watching news-oriented podcasts. Moreover, there are many fields within cyber security, such as secure software engineering, where staff require additional training but no certifications currently exist. None of these certifications requires that continuing education be related to the job function one is currently performing, and there is little by way of audit. Nor are we aware of anyone who actually had a certification revoked for lack of ongoing education.

On the other hand, a certification can expire if one does not pay renewal fees, and this is why policy should support companies that may be trying to get more value out of their education and training dollars than simply paying for certification tests. It makes more sense for a large enterprise to invest more in security staff up front, with thorough technology education, and then keep people trained for the jobs in which they are placed.

The policies in this section include professional certification standards issued at the individual, organizational, and national levels (Table 6.4.3).

Table 6.4.3 Cyber Security Policy Issues Concerning Professional Certification


6.4.4 Supply Chain

In the cyber security supply chain, the most visible exposure to threat is often seen as external, such as an ISP, reference data source, or cloud computing application. The enterprise-to-enterprise communication that is required to run a technology operation in cyberspace has surfaced many issues with respect to organizational representation of information upon which others must depend to operate in harmony. It has also highlighted the lack of formal accountability for the veracity and integrity of that information. However, the supply chain also includes everything that technology practitioners do to support infrastructure and applications internal to the enterprise.

The depth and breadth of the cyberspace supply chain is difficult to quantify. It will differ depending on the type of system contemplated. It will always include some kind of software, but may also include software developers themselves. The types of hardware it may include range from mainframe computers to programmable chips. Almost all elements of the cyberspace supply chain have experienced known incidents of counterfeit or sabotage, and it is often hard to tell the difference, as a counterfeit part may malfunction and create unintended sabotage (DSB 2005).

Another highly visible but often overlooked part of an organization’s supply chain is the organization’s own IT department. This department is often not fully integrated with the enterprise, but integrates itself with a suite of technology suppliers that it assumes responsibility to operate on behalf of the business. Weaknesses in the internal supply chain, such as delays in onboarding new staff, account for many negative audit findings due to workarounds by staff who need computers to get their jobs done. Given a choice between violating security policy and being cited for poor performance, performance wins every time.

Moreover, technology managers are routinely plagued by software vendors who do not consider security requirements and usually disclaim accountability for how the software works (Rice 2008). This places a large burden on technology managers, who must choose among insecure software products and integrate them into a technology infrastructure whose quality of service they are responsible for maintaining.

This section starts with policy statements concerning software security quality that are typically encountered in the context of enterprise acquisitions. It then covers cyber security supply chain policy issues of national importance and builds on prior statements concerning Cyber Conflict in Section 6.3. These policy statements are followed by more general issues of supply chain effects on infrastructure (Table 6.4.4).

Table 6.4.4 Cyber Security Policy Issues Concerning Supply Chain

6.4.5 Security Principles

Over years of security management practice, several studies have attempted to classify security technology practice into general security principles (Neumann 2004). The result is a common body of knowledge of cyber security architecture patterns that, if observed in the requirements stages of technology engineering, serve to suggest well-known solutions to well-known security problems. Security principles are generic descriptions of security features that provide solutions to cyber security problems that are both common and well understood. One example is the principle of least resource, which dictates that users should have at their disposal the minimum amount of shared computing resources they need to complete their tasks, and no more. Many of these principles were derived by the information systems audit profession, and have their origins in the service of the accounting profession, whose early assignments with respect to computers were to ensure that computer-generated records could properly account for corporate assets (Bayuk 2005). Some evolved alongside, and consistent with, government standards for security such as the Orange Book and its successors (DoD 1985; ISO/IEC 2009a,b). Others emerged from the study of cryptography in computer science (Denning 1982).

Many of these principles have been codified by information systems auditors, some as early as 1977 (Singleton 1994). They have been continually updated by the Information Systems Audit and Control Association (ISACA), the global certification authority for information systems auditors, in its Control Objectives for Information Technology (COBIT) (ISACA 2007). For example, ISACA defines segregation of duties as a basic internal control that prevents or detects errors and irregularities by assigning separate individuals responsibility for different steps in a multistep process for initiating and recording transactions that result in changes of asset custody. This technique is commonly used in large IT organizations for software deployment processes so that no single person is in a position to introduce fraudulent or malicious code without detection. It is also commonly applied to secure financial transactions, and is used in high-security settings such as missile launch scenarios. The same technique could be applied to any operation that controls an asset or process critical to enterprise mission or purpose. This is what makes it a security principle.

A key contribution from the accounting profession is the principle of segregation of duties, which dictates that, in situations where users control valuable assets, no individual should be able to change ownership of those assets without the cooperation of others, a principle designed to deter insider fraud. This requires automated processes that transfer assets to be broken down into subprocesses, with no one person given permission to execute every step in the subprocess. A pure technology derivation of this type of accounting principle is the principle of least privilege, which dictates that users should have the minimum access they need to perform a technology task and no more. Segregation of duties applies not just to technology processes, but also to management processes. The most significant of these is the process by which security is managed. Managing security is a two-part process: risk management and operations. Once security risks have been identified, management makes decisions on whether, and if so, how, to reduce security vulnerabilities. These vulnerability reduction programs should then be treated just as any other set of technology projects. Projects, by definition, are not persistent, and so any management of security measures that requires day-to-day oversight, such as user administration, is an operations rather than a risk management process. Where management has responsibility for risk management, and also for security projects and/or operations, there is temptation to accept risk rather than spend resources to reduce vulnerabilities or verify that processes are working. On the verification side, this is obvious, and teams of auditors are normally deployed to ensure that security operations are well managed in critical systems. However, on the risk management versus vulnerability reduction side, it is common to see both functions assigned to the same individual.
Hence, formal risk acceptance processes for security policy violations are common, even if the most senior managers in the firm have endorsed security policy.
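
The segregation-of-duties principle described above lends itself to simple automated enforcement. The following is a minimal sketch, assuming a hypothetical multistep workflow (such as software deployment) in which each step records the person who performed it; the step names and individuals are illustrative only:

```python
# Hypothetical segregation-of-duties check for a controlled multistep
# process. Step names and people below are illustrative, not from any
# real system.

class SegregationOfDutiesError(Exception):
    """Raised when one person performs more than one step in a controlled process."""

def check_segregation(steps: dict[str, str]) -> None:
    """steps maps step name -> the person who performed it.

    Every step must be performed by a distinct individual, so that no
    single person can complete the whole transaction alone; committing
    fraud would then require collusion among several people.
    """
    people = list(steps.values())
    if len(set(people)) != len(people):
        raise SegregationOfDutiesError(
            "one individual holds multiple steps: " + repr(steps))

# A compliant deployment: author, reviewer, and deployer are distinct.
check_segregation({"author": "alice", "review": "bob", "deploy": "carol"})
```

A check like this only detects violations within the recorded workflow; it is the surrounding access controls that prevent one person from holding multiple roles in the first place.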

System security features based on tried and true security principles are not accomplished by technology alone, but by combinations of people, process, and technology conjoined with security-aware management practices. This section includes policy statements drawn from security principles to illustrate the issues concerning their adoption. From the variety of examples, it is clear that the application of security principles is system and implementation specific. Principles that apply to one situation may not necessarily apply to another.

Cyber security policies listed in this section are based on management, technology, and operations principles, in that order, although it is clear that these are interdependent. These policies are stated generically to apply to any system, for example, e-commerce, industrial control systems, or mobile device frameworks (Table 6.4.5).

Table 6.4.5 Cyber Security Policy Issues Concerning Security Principles

6.4.6 Research and Development

Although often lumped under the same heading, research and development are very different things. Research involves breaking new ground, bringing the latest theories and experiments together to hypothesize about a solution to a problem. The process of research is to formulate experiments that will prove or disprove such hypotheses. Development is about building systems for which there is some basis to believe that engineering processes using existing materials and methods can be specified to meet requirements. Both present hard problems that the U.S. Department of Homeland Security has categorized into a set of laudable but, to date, unattainable goals. These include scalable resilient systems, enterprise-level security metrics, system assurance evaluation life cycle, combating insider threats, combating malware and botnets, global-scale identity management, survivability of time-critical systems, situational understanding and attack attribution, attribution of technology provenance, and privacy-aware and usable security (Maughan 2009).

Research is less immediately useful to businesses and military operations than is development. Hence, cyber security research issues often center on the efforts of academia to contribute to the growing body of knowledge in cyber security. Academic issues necessarily include ways to fund the education of graduate students, who are expected to emerge from academic institutions as experts in cyber security technology. Academia has some very different characteristics from industry and government (Jakobsson 2009). First, the demographics in academia are biased toward younger, more inquisitive, less risk-averse users, who are early adopters of technology. These are users who cannot get fired for negligence, and who resist and question attempts at education aimed at conformity to policy. There is also considerable turnover in this community; every year some existing students leave and new students join ongoing research projects. Finally, controls are more lax in an academic environment. As a result, there is greater risk and less control. Unfortunately, since everything is interconnected, this situation can impact other sites. If academic networks and student machines are attacked and compromised, they can be used to launch cyber attacks. Corrupted computers in academia can be used as proxies and bots. This is the environment where most cyber security research takes place.

Moreover, cyber security research itself is limited to what current academics have identified as hot topics from funding sources. There are few, if any, references in cyber security research to systemic cyber security issues such as those found in industrial control systems. Most cyber security research is conducted in departments of computer science and little, if any, in engineering departments. Control theory as studied in the engineering disciplines does not address security. Fortunately, not all businesses rely on academia to produce research. Many cannot wait for innovative technologies to emerge, so some have cultivated their own research institutions dedicated to studying issues of interest to the enterprise. While it is rare that security issues are included in privately funded research endeavors, it is not completely unheard of (e.g., Bilger, O’Connor et al. 2006).

Development, on the other hand, is a practical necessity in most corporate enterprises. Even where all software code is purchased and customization is outsourced, technology staff is routinely charged with meeting business requirements by engineering solutions composed of existing technology building blocks. As observed in Chapter 3, there are readily accessible security standards which guide security development processes, and these are supported by a wide variety of vendor security products and services. Security issues in development tend to center around the process used by the development organization and whether it considers security requirements (SSE-CMM® 2003). Moreover, there are software development practices that are known to produce vulnerable code, and it is recommended that these be specifically avoided (McGraw 2006).

Policy issues in the practice of security research and development concern government support for research initiatives, both academic and private. The policy statements in the following table begin with high-level nation-state issues, which are followed by statements reflecting concerns for academic and research quality (Table 6.4.6).

Table 6.4.6 Cyber Security Policy Issues Concerning Research and Development

6.5 Cyber Infrastructure Issues

This section contains illustrative examples of cyber infrastructure issues faced by private sector industries. The U.S. Department of Homeland Security’s National Infrastructure Protection Plan (NIPP) acknowledges 18 such examples as the critical infrastructure and key resources (CIKRs) of the nation that are managed by the private sector (DHS 2009). Though some are more active than others, each of these sectors is required by the plan to participate in public–private partnership efforts to secure the national infrastructure. The list of sectors includes food and water systems, agriculture, health-care systems, emergency services, information technology, communications, banking and finance, energy (electrical, nuclear, gas and oil, and dams), transportation (air, highways, rail, ports, and waterways), the chemical and defense industries, postal and shipping entities, and national monuments and icons.

The section includes discussions and examples of information assurance policies in the illustrative domains of financial services, health care, and industrial control systems. Note that industrial control systems do not themselves constitute an industry sector; the term is a generic label for the type of automated equipment used in a wide variety of industry sectors.

6.5.1 Banking and Finance

The banking and finance industry encompasses a wide variety of institutions with the common focus on products and services for managing money. These institutions include banks, credit card issuers, payment processors, insurance companies, securities dealers, investment funds, clearance firms, and government-sponsored lenders. The companies comprising the U.S. banking and finance industry account for more than 8% of the U.S. annual gross domestic product (FBIIC and FSSCC 2007). All other industries now use e-commerce capabilities for online fund transfers, mortgage research and applications, viewing of bank statements, sales of financial advice or guidance, and subscriptions for interactive consulting. As the sector manages money using information technology, it is constantly threatened by cyber attacks. Capable and persistent cyber criminals present increasingly organized and sophisticated approaches to commit theft and fraud.

Security has always been a concern of the banking and finance industry. The industry is also adept at fraud detection and response. These concerns have driven the development of many technical Internet security controls. The industry has a thoroughly documented history of dedication to various public and private forums to provide defenses against attack, enhance resiliency, and sustain public confidence in trusted banking relationships (Abend et al. 2008). These volunteer efforts have proceeded in conjunction with steadily increasing regulatory oversight of the cyber security policy that has always concerned the banking and finance sector (see the regulatory history at http://www.ffiec.gov; FFIEC 2006). Increasingly, there are also legal jurisdictions focusing on financial transactions that had not previously targeted financial services (Smedinghoff 2009). In addition, consumer pressure to respond to the increasingly sophisticated and organized threat landscape has driven the financial industry to set its own cyber security policies to address issues of concern to its customers (Carlson 2009).

Financial audit has long been the basis for best practices in security controls. Communities of information systems auditors were the first to compile standards for enterprise security programs and management strategies (FSSCC 2008). Regulators are likely to continue to focus on whether financial institutions have developed adequate strategies for planning, implementing, and monitoring controls for systems development life cycles. Regulators have developed detailed guidelines on topics such as training software developers, automated and manual code reviews, and penetration testing. For example, in 2008, the Office of the Comptroller of the Currency issued guidance on software application security (OCC 2008). Interface integrity in the service of security is something that physical security professionals refer to as Crime Prevention Through Environmental Design (CPTED) (NCPI 2001). Secure interfaces require adequately secure infrastructure on both sides of the interface. Often, this requires unrelated, independent organizations, as well as manufacturers, to design to specifications.

The financial industry has long been plagued by the cyber security crime of identity theft. Identity theft is not actually a crime against the bank, but against its customers. Banks are affected when customers are deceived in bulk and thereafter impersonated by criminals, who gain access to bank accounts and withdraw funds. Because banks are accustomed to fraud, this activity has been tolerated as a cost of e-commerce. Nevertheless, the pain that bank account takeovers cause consumers has led bank regulators to issue a requirement that banks add a second “factor” of authentication.

However, most second factors chosen by banks were variations on the password theme, in that they are still easily appropriated, either by being guessed by someone who knows certain information about an individual, or by an intruder who has invaded a consumer desktop. Information security practitioners consider authentication strength to increase across three levels, generally characterized as something you know, something you have, and something you are. As described in the discussion on impersonation in Section 6.2.2, something you know is a password. Something you have is a physical component in the possession of an individual that is used to facilitate identity verification. Something you are is a measurement based on physical biology, called a biometric. Examples are fingerprints and retina scans. This policy requires the second of the three levels: something you have that would not be vulnerable to such guessing and eavesdropping threats.

The continuing threat to consumer confidence in financial institutions motivated bank regulators to issue a “red flag” rule. This rule requires a banking institution to monitor for potentially criminal activity on a person’s account, with the goal of detecting fraud in progress and preventing account takeovers. The rule requires that both customers and regulators be notified of fraud attempts thwarted by the bank.
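
At its simplest, a “red flag” style monitor is a statistical check: a transaction that deviates sharply from a customer’s historical pattern is flagged for review. The sketch below is an illustrative simplification under assumed parameters, not a description of any regulator-mandated algorithm:

```python
from statistics import mean, stdev

def is_red_flag(history: list[float], amount: float,
                threshold: float = 3.0) -> bool:
    """Flag a transaction whose amount deviates more than `threshold`
    standard deviations from the customer's historical mean.

    `history` is a list of the customer's past transaction amounts;
    the threshold of 3.0 is an arbitrary illustrative choice.
    """
    if len(history) < 2:
        return False  # not enough history to establish a pattern
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) > threshold * sigma

# A sudden large withdrawal stands out against routine activity.
print(is_red_flag([100.0, 110.0, 90.0, 105.0, 95.0], 10000.0))  # → True
```

Production fraud detection systems combine many such signals (velocity, geography, device fingerprints) rather than a single univariate test, but the principle of comparing current activity against an established baseline is the same.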

Note that all policy statements in Section 6.4 apply to the cyber security policy decision makers of the financial industry. Where financial institutions offer online services, those in Section 6.2 apply as well. The policy statements in this section therefore range from regulatory issues to consumer concerns. They are familiar to the banking and finance industry. The first few concern regulations that apply specifically to the banking and finance sector, but could more broadly apply to any company that is a party to online monetary transactions. The next few concern the banking and finance industry as well as any company that spends a great deal of time and money on security regulatory compliance. The remainder are examples of financial cyber security policy concerning services that banks may or may not include in their own cyber security policy to achieve cyber security goals based on their own risk assessments; these would not be directly influenced by external standards or regulation (Table 6.5.1).

Table 6.5.1 Cyber Security Policy Issues Concerning Banking and Finance

6.5.2 Health Care

The health-care industry encompasses a wide variety of institutions with the common focus on products and services for maintaining health. These institutions include hospitals, doctor’s offices, diagnostic laboratories, medical equipment manufacturers, emergency care specialists, visiting nurses, and a host of other medical community professionals and services. These institutions use typical enterprise support systems such as accounting, administration, collaboration, and advertising. In addition, from the perspective of cyberspace operations, these constituents will utilize two types of mission-critical systems unique to the health-care industry: systems used to administer medical practice and systems used to administer medicine. By administering medical practice, we mean the tools and techniques of doctor’s offices, hospitals, other care providers, pharmacies, pharmaceutical manufacturers, and insurance providers to ensure that medical facilities and supplies are available and medical staff are recruited, trained, and paid. By administering medicine, we mean the process of caring for human patients. We shall call these logistics systems and provider systems, respectively. Logistics and provider systems used by the health-care profession differ in both functionality and data content.

The primary function of logistics systems is to track patients and resources through the maze of organizational workflow that has been created in order to connect patients with health-care providers, facilities, and treatments. The organizational workflow streams from patient home computers through workplace benefits systems, insurance agencies, and diagnostic and treatment facilities. Data content in these systems is the information required by this organizational workflow to function. It includes data that many patients consider private, and information security with respect to such information is regulated by the Health Insurance Portability and Accountability Act (HIPAA) (HIPAA 2003).

The primary function of provider systems is to provide a patient with medical care. These include drug delivery pumps, automated sample chemical or viral analysis, diagnostic imaging tests, remotely monitored electrical implants, and a wide variety of other innovative devices. The information flowing through these systems may begin with an authorization from a logistics system, continue through physician prescriptions, include automated or manual analysis to identify treatment appropriate to given patient conditions, and incorporate test results and automated communication of those results to logistics systems, completing the information life cycle for a simple treatment. Moreover, a single patient who requires any one provider system interface is likely to incur multiple records on a variety of provider systems.

Cyber security issues unique to logistics and provider systems often focus on interoperability. Interoperability is a major goal for the health-care industry because it is seen as an enabler of fast and accurate decision making with respect to patient treatment. Where logistics systems may be rapidly combined with provider systems, patient histories may be automatically factored into expert-system-based diagnostic and prescription algorithms, enabling more accurate and effective treatments. For example, the recently established National Health Information Network (NHIN) dictates information sharing to enable easy exchange of health information over the Internet (HHS 2010). This critical part of the national health information technology agenda will enable health information to follow the consumer, be available for clinical decision making, and support appropriate use of health-care information beyond direct patient care to improve population health. The NHIN is not one organization, but an abstraction defined by the U.S. government as composed of independently operated systems. These include information service gateways; Health Information Organizations (HIOs) operated by an information provider or consumer, such as an emergency medical responder, a laboratory system, or a doctor’s office; and NHIN Operational Infrastructure, a set of web services that stores information about HIOs and their data repositories in order to enable connectivity via security services and provide registry information on user capabilities. In essence, NHIN is a set of specifications for HIOs to query and provide data to each other, plus a repository of information concerning authorized HIOs. Where services for health information already exist, they would also be considered HIOs from the point of view of NHIN. These are referred to in NHIN documentation as Health Information Exchanges or Integrated Delivery Networks.
The system has no data usage restrictions, but relies on HIO compliance with a Data Use and Reciprocal Support Agreement (DURSA) rather than any data-level security features or due diligence requirements to ensure that DURSAs are met with a feasible level of success.

However, such requirements for quick and easy information sharing also introduce at least two types of major security issues: privacy and integrity. If the NHIN concept is truly the next bar to be met in health-care information sharing, then a corresponding bar in cyber security must also be raised. Questions remain with respect to the evidence standard to which health-care organizations should be held accountable when requesting patient information from the system. For example, the question of what information needs to be shared in a disaster situation will vary with the type of event, and different emergency responders will need different information. A physician involved in emergency triage needs different information than the State’s Director of Emergency Management or the U.S. Secretary of Health and Human Services (Toner 2009).

The point of the NHIN plan and others like it is that the health-care industry has not yet taken advantage of the technology revolution. Existing health-care systems and programs that are targets for information sharing that could lead to vast improvements in patient care range from automated chemical agent surveillance systems to voluntary contributions to news sites. In between are patient tracking systems and mandatory reporting requirements, and, for the most part, these are stand-alone systems that are not integrated (Toner 2009). These systems are both publicly and privately held. They include emergency operations and information fusion centers at the local, state, and federal levels whose purpose is to merge the various streams of information. The advantages to the health-care industry of a free flow of information are palpable to the service providers trying to get ahead of the next wave of potential pandemic.

This press for quick and easy information sharing comes up against a losing battle for security controls over the health-care information repositories that already exist. A recent survey showed that more than half of information technology professionals working in health-care organizations do not believe that their organization adequately protects sensitive information, and an even larger majority had experienced data breaches (Ponemon Institute 2009). While these statistics may be explained by the fact that those who answered the survey were most likely security-aware, as they had been targeted by surveyors funded by security companies, it also indicates that even health-care companies that think their IT controls are adequate experience data breaches. Where internal control reports persuade executives that systems are secured to an industry standard that may itself be inadequate, the perception is that there can be no blame for inadequate security, because no one can be expected to exceed industry standards. This type of “not my fault” attitude is easy for a health-care company to assume in a world where even highly technically sophisticated companies are attacked, leaving health-care professionals feeling both helpless and blameless (McMillan 2010).

There is also recognition among technology professionals working in health care that much of the information/communications technology necessary for the realization of integrated systemic solutions to health-care data integrity issues exists. Barriers to information sharing are not currently security issues, but technology interoperability, data dictionary standards, and reliability concerns, as well as training issues at all levels of the health-care system. These and many of the same structural, financial, policy-related (reimbursement schemes, regulation), organizational, and cultural barriers that have impeded the use of systems tools will have to be surmounted to close health care’s wide information/communications technology gap (Proctor 2001). Adding cyber security concerns related to privacy and data provenance significantly increases the complexity facing these professionals.

Nevertheless, policies for interoperability and data sharing should not be confused with standards for privacy and integrity. Interoperability standards (ASTM 2009; MD FIRE ongoing) are meant to facilitate communication, not to control information flow. When it is further recognized that health care also uses industrial control system technology to autodeliver treatments that, if incorrectly administered, may be life-threatening, it is even more important to recognize the distinction and segregate policy decisions accordingly.

The policy statements in this section therefore range from regulatory issues to life and death concerns. They should be familiar to those working in cyber security within the industry. The first few concern what cyber security professionals refer to as “hygiene” issues. They discuss information security standards that have been known to be effective in reducing risk of data breaches when applied consistently to enterprise data. The next few concern cyber security risks introduced by interoperability requirements or lack thereof between various types of health-care data repositories ranging from medical devices to aggregate case databases. The remainder concern information sharing issues and potential interrelationships between policy goals for information sharing and policy goals of privacy and integrity (Table 6.5.2).

Table 6.5.2 Cyber Security Policy Issues Concerning Health Care

6.5.3 Industrial Control Systems

Despite their high reliance on automation, ICSs are not typically designed with access controls, their software is not easily updated, and they have little forensics capability, self-diagnostics, or cyber logging. While the lifetime of the equipment in an IT network typically ranges from 3 to 7 years before anticipated replacement and often does not need to be in constant operation, ICS devices may be 15 to 20 years old, perhaps older, before anticipated replacement, and run 7 × 24 × 365. Moreover, patching or upgrading an ICS has many pitfalls. The field device must be taken out of service, which may require stopping the process being controlled. This in turn may cost many thousands of dollars and impact thousands of people. An important issue is how to protect unpatchable, unsecurable workstations such as those still running Windows NT Service Pack 4, Windows 95, and Windows 98. Many of these older workstations were designed as part of plant equipment and control system packages and cannot be replaced without replacing the large mechanical or electrical systems that accompany them. Additionally, many Windows patches for ICSs are not standard Microsoft patches, but have been modified by the ICS supplier. Implementing a generic Microsoft patch can potentially do more harm than the virus or worm against which it was meant to defend. As an example, in 2003, when the Slammer worm was in the wild, one distributed control system (DCS) supplier sent a letter to its customers stating that the generic Microsoft patch should not be installed, as it would shut down the DCS. Another example was a water utility that patched a system at a water treatment plant with a patch from the operating system vendor. Following the patch, they were able to start pumps, but were unable to stop them (Weiss 2010).

However, as discussed in Chapter 3, the biggest threat to industrial control systems is not necessarily the remote access necessary to maintain the operation of the field devices. An example is the Idaho National Labs Aurora demonstration that physically destroyed a diesel generator by exploiting dial-up modems (Meserve 2007). Another major concern is the number of people who have physical access to the controllers that may change the software on the chip sets that issue machine instructions. For example, the Stuxnet “worm” was an attack that was designed to propagate via a universal serial bus (USB) device. It was installed in nuclear facilities in Iran where there was no Internet connectivity (Zetter 2011). Neither exploit required Internet connectivity to initiate.

All policies in Section 6.4 should also be considered for the ICS domain of digital assets. However, existing standards and security features used to secure IT are not easily transferable. ICS security is a relatively new field and requires development of ICS-specific security verification procedures to enforce even agreed-upon policies (Stamp, Campbell et al. 2003). Even cyber security management standards are not directly applicable, as they specifically address only IT management. Consequently, organizations such as the International Society of Automation (ISA) initiated an effort to develop standards for ICSs: ISA99, Industrial Automation and Control Systems Security. Some of the other organizations developing standards for ICSs include the Institute of Electrical and Electronics Engineers (IEEE), the International Electrotechnical Commission (IEC), the International Council on Large Electric Systems (CIGRE), the North American Electric Reliability Corporation (NERC), the Nuclear Energy Institute (NEI), and the U.S. Nuclear Regulatory Commission (NRC).

The policy statements in this section are related to protecting private critical infrastructure. Table 6.5.3 includes examples of issues related to specific industries that use ICSs to operate critical infrastructure, together with technology control recommendations to minimize the potential for successful execution of cyber sabotage threats at both the technology and process levels. The overall set of issues is intended first to impress upon the reader the breadth of the domain, and then to use that recognition to convey the depth of the potential consequences of inattention to ICS cyber security policy (Table 6.5.3).

Table 6.5.3 Cyber Security Policy Issues Concerning Industrial Control Systems
