2

Cyber Security Evolution

To understand cyber security policy, it is helpful to appreciate how cyber security has evolved. When computers enabled the first automated processes, the main goal in all such projects was the increase in productivity that came with replacing human calculators with automated programs that produced more accurate results. As more software became available, the productivity benefits of computers increased. The introduction of the Internet further enabled productivity by allowing quick and accurate communication of information. This led directly to the ability to process business transactions online, a capability dubbed electronic commerce (e-commerce). By 2000, the economy had become so dependent on e-commerce that it was a frequent target of cyber criminals, and security technology evolved to protect data that could be used to commit fraudulent transactions. Such technologies are generally referred to as countermeasures because they are security measures designed to counter a specific threat. This chapter chronicles the progression of cyber security technology and concludes with observations on the challenges presented by the ongoing cyber arms race, in which countermeasures are falling behind.

2.1 Productivity

The history of cyber security starts in the 1960s with the mainframe. This was the first type of computer that was affordable enough for businesses to see a return on investment from electronic data processing systems. Prior to this time, the word “computer” referred to a person who performed computations, and the word “cyber” was the realm of science fiction. In those days, computers were secured with guards and gates. Physical security procedures were devised to ensure that only people authorized to work on computers had physical access to them. Computers were so large that hundreds of square feet of space would be customized for their operation, with dedicated security staff. A guard function was sometimes combined with the role of computer operator, called a job control technician. People who needed to use the computer would queue up in front of the guard holding their data and programs in stacks of punched cards. The guard would check the user’s authorization to use the computer, receive their stack of cards, and place it into a card reader that would automatically translate the punched holes in the cards into bits and bytes (Schacht 1975). By the late 1960s, remote job entry allowed punched cards to be received from multiple office locations connected via cables to the main computer. Computer security staff then had the added responsibility of tracing these cables under raised floors, and through wall spaces and ceiling ducts to ensure that the authorized person was sitting at the other end.

Managers of these early automated computer systems were acutely aware of security risk, but the confidentiality, integrity, and availability triad was not yet industry standard. Aside from a few installations in the military and intelligence communities, confidentiality was not the major security requirement. Though businesses did want to keep customer lists confidential, immature software was constantly failing, so their major concern was not confidentiality, but integrity. The potential for human error to cause catastrophic data integrity errors has always been evident in computer software development and operations. Software engineering organizations were the first to raise the security alarm because computers were starting to control systems where faulty operation could put lives at risk (Ceruzzi 2003). In addition, computer crime in the form of financial fraud was common by the early 1970s, and made it to mainstream fictional literature and television (McNeil 1978). Even where the human factor could be eliminated from the sphere of security threats, system malfunctions were known to occur without blame, starting with the first actual bug discovered among the vacuum tubes in a computer system (Slater 1987, p. 223).

In the 1970s, punched cards were replaced by electronic input and output via keyboards and terminals. Cables and terminals further extended the range within which authorized users could sit while processing data. Systems security expanded to include following the cables through wall partitions and ceiling ducts to ensure that the cables terminated in offices occupied by authorized computer users. This allowed people in offices far removed from the actual computer to be hooked up to an input–output (IO) port and use it from their desks. The guard in front of the computer room door remained, but mostly to sign in visitors who would tour the computer room, or vendors who performed maintenance. Security of the information was moved to the realm of customized business logic. Users were assigned login names, which were associated with menus that provided the screens they needed to perform their job function. Screens literally screened, or filtered, both data fields and menus. The effect was that most users saw the same basic screen, but different data fields and menu selections were available to different users. The screens were limited by business logic coded into the software. For example, if clerks had a customer service screen, they may be able to view customer records but not change their balance. However, business logic screens often contained overrides. For example, a supervisor observing the customer service clerk could enter a special code to allow a one-time balance change operation through the otherwise limited screen functionality.
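The screen-and-override logic described above can be made concrete with a minimal sketch. The roles, fields, and override code below are invented for illustration and do not come from any historical system; modern role-based access control follows the same basic pattern.

```python
# Hypothetical sketch of 1970s-style "screen" filtering: each login name maps
# to a role, the role determines which data fields and menu actions are
# visible, and a supervisor override code unlocks a single restricted action.

SCREENS = {
    "clerk":      {"fields": ["name", "address", "balance"], "actions": ["view"]},
    "supervisor": {"fields": ["name", "address", "balance"], "actions": ["view", "adjust_balance"]},
}

OVERRIDE_CODES = {"clerk": "74-ADJ"}   # invented one-time override code

def allowed(login_role, action, override_code=None):
    """Return True if the role's screen permits the action."""
    if action in SCREENS.get(login_role, {}).get("actions", []):
        return True
    # Supervisor override: a special code entered at the clerk's screen
    # permits one otherwise-forbidden operation.
    return override_code is not None and override_code == OVERRIDE_CODES.get(login_role)

print(allowed("clerk", "view"))                      # True
print(allowed("clerk", "adjust_balance"))            # False
print(allowed("clerk", "adjust_balance", "74-ADJ"))  # True
```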

Widespread use of computers enabled by keyboard technology drew attention to the issue of confidentiality controls. Military and intelligence computer use had increased. Government-funded research in cryptography had produced a few algorithms that transformed data into unreadable formats using long sequences of bits called “keys” that would both lock and unlock the data. Such cryptographic algorithms are based on diffusion, which disseminates a message into a statistically longer and more obscure format, and confusion, which makes the relationship between an encrypted message and the corresponding key too long and involved to be guessed (Shannon 1949). However, advances in computer power had significantly increased the ability of a determined adversary to identify the relationship between messages and keys. It was easy to envision a day when existing automated cryptography methods would not be complex enough to frustrate automated statistical analysis (Grampp and McIlroy 1989). In addition, automation of records by government agencies, such as the U.S. Social Security Administration and the Internal Revenue Service, fostered recognition that stakeholders in cyberspace included those whose physical lives were closely aligned to the bits and bytes representing them. In recognition of the growing confidentiality requirements, but without any good way to meet them, the U.S. National Bureau of Standards (now the National Institute of Standards and Technology [NIST]) launched an effort to achieve consensus on a national encryption standard. In 1974, the U.S. Privacy Act was the first stake in the ground designed to establish control over information propagation. The act covered only government use of computers and only information that today would be called personally identifiable information (PII). But it firmly established confidentiality, and corresponding efforts to improve encryption technology, as mainstream goals for cyber security.
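Shannon's two principles can be illustrated with a deliberately simplified sketch. The toy transform below is invented for illustration only; it is not a real cipher, is not secure, and is not even guaranteed to be invertible, but it shows confusion (mixing key material into the data) and diffusion (spreading each byte's influence across the block) as separate steps.

```python
# Toy illustration of Shannon's principles (NOT a usable cipher):
# "confusion" mixes key material into the data (XOR with a repeating key);
# "diffusion" spreads each byte's influence across the block over rounds.

def confuse(block: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(block))

def diffuse(block: bytes) -> bytes:
    # each output byte mixes two neighboring input bytes, so after several
    # rounds a change in one byte affects the whole block
    n = len(block)
    return bytes(block[i] ^ block[(i + 1) % n] for i in range(n))

def toy_transform(plaintext: bytes, key: bytes, rounds: int = 4) -> bytes:
    state = plaintext
    for _ in range(rounds):
        state = diffuse(confuse(state, key))
    return state

print(toy_transform(b"ATTACK AT DAWN", b"k3y").hex())
```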

As technology advanced through the 1970s, minicomputers such as the DEC PDP-11 frequently supplemented mainframes in large companies and were rapidly expanding into smaller companies that could now afford them to automate office tasks such as word processing. For those who could not yet afford a computer of any size, technology-savvy entrepreneurs had started services that allowed people to rent time on computers. These were called “timesharing services” because companies in this business would charge their clients based on the amount of computer time they consumed. Once terminal and keyboard technology made it possible to extend IO devices through cables, these companies used ordinary telephone lines to extend the reach of computer terminals beyond the walls of the building using analog modulation-demodulation technology (modems and multiplexors). They began to specialize by industry, developing complicated software such as payroll tax calculations and commercial lease calculations. Such software development was unlikely to fare well in a cost-benefit analysis for a company that was not in the software business, but these were time-consuming manual processes run by many businesses. Time-sharing services allowed departments that were not the mainstream part of the business to benefit from automation, though they had to access someone else’s computer to do it. Today, these services are available over the Internet, though their charging models have changed and they are no longer called “time-sharing” but “cloud computing.”

These timesharing services charged for computing resources based on user activity, so they had to have a way to identify users in order to bill them. Often, this user identification was simply a company name, though passwords were sometimes issued where timesharing services were known to have customers who were competitors. However, from the point of view of the customer user, the user name simply connected them to their information in the computer, and the modem connection did not seem like a security risk. Any company large enough to own a computer at the time was obviously a firm of some wealth and substance, so the timesharing service companies were assumed to have physical security around their computers, and passwords were further evidence of their security due diligence. Allowing customers logical access was considered a risk borne by the timesharing service vendor, and given its wealth and substance, the vendor could be expected to protect its assets accordingly.

Throughout the 1970s into the 1980s, minicomputers became more affordable and eventually allowed people to have an entire computer for their own use. Apple introduced home computers in the late 1970s. These soon made it into the data processing environment and were followed by the IBM personal computer (PC) in 1981. Physical security still was the norm for these small computers, and locked office doors were the primary protection mechanism. Network technologies then allowed desktop computers in the same building to share data with each other, and the names of the computers became important so that people could share information with other computers on the network. The local area network (LAN) cables were protected much like the computer terminals’ connection to the mainframe, except that a new type of network equipment called a “hub” allowed the communication, and hubs had to be kept in a secure area. The hubs that allowed a person to hook his or her computer to the LAN were protected via locked closets.

Until the introduction of LANs, access controls were the exception rather than the norm in computing environments. If login IDs were distributed, they were rarely disabled. They functioned more as a convenient method of labeling data so one knew to whom it belonged than as a means to restrict access to it. However, the LAN-connected computing environments and corresponding plethora of PCs made it very difficult to trace network computer activity to individuals, because they generally logged in only to the machine on their desktop. As LANs grew larger, centralized administration schemes from government research labs were developed for corporate mainframes (Schweitzer 1982, 1983). Mandatory access controls (MAC) allowed management to label computer objects (programs and files) and specify the subjects (users) who could access them. These were supplemented with discretionary access control (DAC) schemes that allowed each user to specify who else could access their files.
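The difference between the two schemes can be sketched in a few lines. The users, clearance levels, labels, and file names below are hypothetical; the point is only that MAC compares centrally administered labels while DAC consults an owner-maintained list.

```python
# Hypothetical sketch of MAC vs. DAC checks. Under MAC, management assigns
# clearance levels to subjects (users) and labels to objects (files), and the
# system enforces the comparison. Under DAC, the file's owner maintains a
# per-file list of other users allowed to read it.

LEVELS = {"public": 0, "internal": 1, "secret": 2}

# MAC: centrally administered labels
user_clearance = {"alice": "secret", "bob": "internal"}
file_label     = {"payroll.dat": "secret", "memo.txt": "internal"}

# DAC: owner-maintained access lists
file_owner  = {"memo.txt": "bob"}
dac_readers = {"memo.txt": {"alice"}}    # bob chose to share with alice

def mac_allows(user, filename):
    return LEVELS[user_clearance[user]] >= LEVELS[file_label[filename]]

def dac_allows(user, filename):
    return user == file_owner.get(filename) or user in dac_readers.get(filename, set())

def can_read(user, filename):
    # a read is permitted only if both policies agree
    return mac_allows(user, filename) and dac_allows(user, filename)

print(can_read("alice", "memo.txt"))   # True: cleared and explicitly granted
print(can_read("bob", "payroll.dat"))  # False: MAC denies regardless of DAC
```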

As many of the LAN computer users already had a mainframe terminal on their desks, it was not long before these computers replaced the terminal functionality, and the LAN was connected to the mainframe. It was this development that made cyber security a hot topic with technology management. Though some of the timesharing-type password technology was employed on the LAN, LAN user names were primarily supported to facilitate directory services rather than to prevent determined attacks. That is, it was helpful to know the name of the person who had written a particular file, or posted a memo on a customer record. Assigning login names to computer users allowed programs to use those names as part of their business logic to provide the correct menus and screens. Prior to this point in cyberspace evolution, transactions on a mainframe could still be traced to an individual terminal, in a given physical location, and subsequent investigation using both physical and digital forensics had a fighting chance to identify a suspect. But the LANs and modems blurred distinctions between users, and it was easy for a criminal to deny or—to use a rapidly proliferating computer security version of the word—to repudiate activity performed from a LAN desktop. Even where passwords were required, they were weak enough to be guessed. There was no concept of network encryption, so anyone with access to the hubs could see passwords travelling on the network. Moreover, many network programs allowed anonymous access, so user names were not available for every connection.

It only took a few cases of insider fraud for management to understand that the status quo carried too much risk to be sustainable. Hence, security technology that had until that point been the topic of military research was hastily implemented by major computer vendors, and applied to mainframe data sets and LAN file resources. These included user identity, authentication in the form of increasingly more difficult passwords, and management authorization for computer access. A complete set of the system features required for secure operation was soon readily available in a U.S. Department of Defense publication called “The Orange Book” after the color of its cover (DoD 1985). The complete set of features included both technical implementation standards and terminology for sophisticated processes to ensure that users could be identified, properly authenticated, and audited. These features were collectively referred to as access control lists (ACLs, pronounced “ak-els”), as they allowed an administrator to specify with some confidence which user could do what on which computers. Encryption was also heralded as an obvious solution to a variety of computer security problems (NRC 1996). But it was a luxury that few outside of the military had enough spare computer processing to afford, so the smaller the computer, the weaker the vendor’s encryption algorithms were likely to be, and encryption was parsimoniously applied to specific data such as passwords in storage.

Although accountability for transaction processing was fast becoming a hot topic at fraud conferences, law enforcement activity in the domain of computer operation was limited. Nevertheless, the early 1980s was also the dawn of the age of digital evidence. Cyberspace presented a new avenue of inquiry for law enforcement investigating traditional crimes. Criminals were caught boasting of their crimes on the social networking sites of the day, which were electronic bulletin board services reached by modems over phone lines. Drug dealers, murderers, and child pornographers were prosecuted using the plans, accounting data, and photographs they had stored on their own computers. Law enforcement partnered with technology vendors to produce software that would recover files that criminals had attempted to delete from computers (Schmidt 2006).

Figure 2.1 illustrates cyberspace architecture as it was typically configured at the dawn of the 1980s. Mainframe, micro, and minicomputers existed side by side, and were not necessarily connected via networks. However, minicomputers were often used to connect to remote computers via the same types of telephone lines that carried voice calls. As the pace of technology innovation was rapid, this situation was constantly evolving, and change was inevitable.

Figure 2.1 Cyberspace in the 1980s.


2.2 Internet

By the late 1980s, communication across city boundaries had achieved the same level of maturity as LANs. Directory services were available that allowed businesses to connect, and be connected to, the restricted military and research Advanced Research Projects Agency (ARPA) network, or ARPANET, whose restrictions and name were relaxed as it evolved into the public Internet. From the point of view of technology management, these Internet connections looked like another modem-like technology service. It was a connection to a large company in the business of connecting the computers of other large companies. The only noticeable by-product of this connection from a management perspective was the ability to send electronic mail. Technology-savvy companies quickly registered their domain names so that they could own their own corner of cyberspace. Only a few researchers were concerned with the potential for system abuse due to the exponential expansion of the number of connected computers.

One of these researchers was Robert Morris at AT&T Bell Laboratories. He was an early computer pioneer, to the extent that he actually had computers at his home long before they were marketed to consumers. His son, Robert Tappan Morris, grew up around these computers and was very familiar with the ways in which they could be used without the permission of their owners (Littman 1990). In 1988, Robert Tappan Morris devised the first Internet worm. The “Morris Worm” accessed computers used as email servers, exploited vulnerabilities to identify all the computers that were known to each email server, and then contacted all of those computers and attempted the same exploits. Within a few hours, most of the Internet had been affected and the damage was severe. Internet communication virtually stopped; computing resources were so overwhelmed by the worm’s activities that they had no processing cycles or network bandwidth left for transaction processing, and business processes were disrupted.

The only organization on the ARPANET that was safe from the Morris worm was AT&T Bell Laboratories. The reason for the safety had nothing to do with Morris but instead was due to an experiment being conducted by some other computer network researchers. They had developed a method of inspecting each individual information packet within a stream of network traffic, which they called a firewall (Cheswick and Bellovin 1994). The firewall was designed to allow network access to only those packets whose source and destination matched those on a previously authorized list. The sources and destinations in the network access rules were formulated using the network addresses of the communicating computers, as well as a port number that serves as the access address for software running on each computer that is expected to be accessed via the network. The Bell Labs firewall was hastily employed to safeguard AT&T’s email servers, and the impact to AT&T from the Morris worm was minimal. Since then, cyber security policy has included management directives to safeguard the network periphery, and the primary implementation strategy of choice has been to deploy firewalls.
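A first-generation packet filter of the kind described can be sketched as an ordered list of allow rules matched against each packet's source, destination, and destination port. The rule format and the addresses below are invented for illustration; real firewalls use richer rule languages.

```python
# Minimal sketch of first-generation firewall logic: each packet is compared
# against a list of previously authorized (source, destination, port) rules;
# anything that does not match an allow rule is dropped (default deny).

from ipaddress import ip_address, ip_network

ALLOW_RULES = [
    # (source network, destination network, destination port)
    ("0.0.0.0/0",       "192.0.2.25/32", 25),   # anyone may reach the mail server
    ("198.51.100.0/24", "192.0.2.0/24",  515),  # partner site may reach print services
]

def permitted(src: str, dst: str, dport: int) -> bool:
    for rule_src, rule_dst, rule_port in ALLOW_RULES:
        if (ip_address(src) in ip_network(rule_src)
                and ip_address(dst) in ip_network(rule_dst)
                and dport == rule_port):
            return True
    return False   # unmatched packets are dropped

print(permitted("203.0.113.7", "192.0.2.25", 25))   # True
print(permitted("203.0.113.7", "192.0.2.30", 23))   # False
```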

The Morris Worm had a profound effect on the Internet community. As ARPA still officially managed the network, it responded by establishing the Computer Emergency Response Team (CERT) to provide technical assistance to those who suffered from cyber security problems (US-CERT ongoing). Detection and recovery had officially joined prevention as standard cyber security controls.

Introspective postmortems following the Morris worm revealed that the same types of vulnerabilities in Internet-facing email servers existed in systems that presented modem interfaces to the public. Hackers would dial every number in the phone book and listen for the tell-tale hum of a computer modem. Once identified, they would call these modems with their own computers and often find little security. Hackers shared the numbers on bulletin boards and met on vulnerable computers to play games or pursue other activities unbeknownst to the systems’ owners. Those who stole computer time only to play games were called joyriders. There had been a few public examples of hackers mining such systems with profit motives, but these had largely been directed at theft of phone service, and phone companies would occasionally partner with law enforcement to make a sting (Sterling 1992).

However, it was not just the phone companies that were targeted; they were just the most visible. One month in 1986, Cliff Stoll, an astrophysics graduate student with a university job as a timesharing services administrator, noticed a billing error of about 75 cents of computer time that was not associated with any of his users. Though neither his management at Lawrence Berkeley National Laboratory nor law enforcement was concerned, he was curious how the error could have occurred on such a deterministic platform as a computer. Stoll ended up tracking the missing cents of computing time to an Eastern European espionage ring. He published an account of his investigation in 1989 in a detective-like tale called The Cuckoo’s Egg (Stoll 1989). The Cuckoo’s Egg set off a large-scale effort among technology managers to identify and lock down access to computers via modems.

No firewall-like technology had been developed for modems, but various combinations of phone-system technology met the requirements. One such combination is caller ID and dial-back. Caller ID is a method of identifying the phone number attempting to connect, and this allows comparison of the caller to a database of home phone numbers of people allowed to connect. However, anyone with customer premises phone equipment can present any number to a receiving phone via the caller-ID protocol, basically impersonating an authorized home phone number, or spoofing an authorized origination. So it is not secure simply to accept the call based on the fact that caller ID presents a known phone number. After verifying that the number is valid, the dialed computer hangs up and dials back the authorized number to make sure it was not spoofed.
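The logic of the check can be sketched as follows. The Modem class and phone numbers are hypothetical placeholders for whatever interface the dial-up equipment actually exposed.

```python
# Sketch of the caller-ID-plus-dial-back check described above. The Modem
# class is a stand-in for the real modem driver; the numbers are invented.

AUTHORIZED_HOME_NUMBERS = {"+1-555-0142": "jsmith", "+1-555-0199": "mlee"}

class Modem:                                  # placeholder hardware interface
    def hang_up(self): print("hang up")
    def dial(self, number): print("dialing", number)

def handle_incoming_call(caller_id, modem):
    user = AUTHORIZED_HOME_NUMBERS.get(caller_id)
    modem.hang_up()              # never trust the inbound leg: caller ID can
                                 # be spoofed by the calling equipment
    if user is None:
        return None              # unknown number: do not call back
    modem.dial(caller_id)        # dial back the number on record, so a
                                 # spoofed caller never receives a session
    return user                  # proceed to password or token login

handle_incoming_call("+1-555-0142", Modem())   # dials back the real number
handle_incoming_call("+1-555-0000", Modem())   # hangs up and stops
```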

Seemingly safe behind firewalls and slightly more complex dial-back modems, organizations allowed their users to dial in and use their networks from home and also to surf the fast-growing Internet, which still mostly consisted of universities and research libraries. The first easy-to-use browser made it simple even for nontechnical people to use the Internet, and it was fast becoming the phonebook of choice for those familiar with it. Small, single purpose servers were becoming more affordable, and many companies had an area of the network dedicated for shared server connectivity, called a server farm. Growing familiarity with both server operation and the Internet led most companies who had their own domain names and email servers to establish web servers as well. These were mostly brochure-ware sites that allowed an Internet user to download a company’s catalog and find its sales phone number.

Figure 2.2 illustrates how these networks were typically connected in the early 1990s. The circles show where physical security is heightened to protect network equipment. The devices represent the logical location of the firewalls and telecommunication line connections to other firms. The telecommunication lines are portrayed as logically segmented spaces where lines to business partners terminate on the internal network. These were, and still are, referred to as “private lines” because there is no other network communication on the lines except that which is transmitted between two physical locations.

Figure 2.2 Cyberspace in the early 1990s.


Unfortunately, all these network periphery controls did not prevent the hackers and joyriders from disrupting computer operations with viruses. Viruses were distributed on floppy disks (i.e., removable media, the 1990s equivalent of universal serial bus [USB] sticks), and they were planted on websites that were advertised to corporate and government Internet users. Virus specimens were analyzed by cyber forensics specialists, who had earned their security credentials helping law enforcement identify digital evidence. They were able to create a “digital signature” for each virus by identifying each file it altered and the types of logs it left behind. They created “antivirus” software, which they sold to industry and government. Antivirus vendors committed to their clients that they would keep their list of signatures up to date with every new virus introduced on the Internet. As there were already thousands of viruses circulating, companies quickly devised the means to install antivirus software on all of the PCs of all of their users.
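In simplified form, signature scanning amounts to searching files for byte patterns extracted by analysts from infected systems. The signatures below are fabricated for illustration; real antivirus engines used far more elaborate matching and unpacking logic.

```python
# Simplified sketch of signature-based virus scanning: each "signature" is a
# byte pattern analysts extracted from files a virus altered or dropped.

import pathlib

SIGNATURES = {
    "EXAMPLE.BOOT.A":  bytes.fromhex("deadbeef0badc0de"),   # fabricated pattern
    "EXAMPLE.MACRO.B": b"AutoOpen:Shell(",                  # fabricated pattern
}

def scan_file(path):
    data = path.read_bytes()
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

def scan_directory(root):
    for path in pathlib.Path(root).rglob("*"):
        if path.is_file():
            hits = scan_file(path)
            if hits:
                print(f"{path}: matched {', '.join(hits)}")

scan_directory(".")   # scan the current directory tree
```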

The antivirus software vendors’ cyber forensics specialists were also usually able to identify the software security bugs or flaws in operating systems or other software that had been exploited by the viruses. As the signature that identified one virus was not tied to the software flaw but to the files deposited by the virus itself, a virus writer could slightly modify his or her code to take advantage of the same software vulnerability and evade detection by antivirus software. It thus became important not only to update antivirus signatures, but also to demand that software vendors correct the security bugs and flaws in the software that allowed viruses to cause damage in the first place. Software companies were under pressure to fill the demand for Internet applications, and a common software business model was to build skeletal applications that were of minimal utility while their graphical user interfaces (GUIs) communicated a vision for more advanced features (Rice 2008). Customer feedback on the initial software release determined which new features would be added and which bugs and flaws would be repaired.

These fixes were known as “patches” to software. The word “patch” is derived from the physical term meaning a localized repair. Its origin in the context of computers referred to a cable plugged into a wall of vacuum tubes that altered the course of electronic processing in an analog computer by physically changing the path of code execution. Now the term patch refers to a few lines of code that repair some bug or flaw in software. Patches are small files that must frequently be installed on complex software in order to prevent an adversary from exploiting vulnerable code and thereby causing damage to systems or information.

The software rush to the Internet marketplace in the mid-1990s heralded a new era of e-commerce, a generic term for the exchange of goods and services using the Internet as a medium. Software replaced the online catalogs and allowed Internet users to purchase goods and execute financial transactions over the network. Vulnerabilities in software became the source of what was then called “the port 80 problem.” Port 80 is the port on a firewall that has to be open in order for external users to access web services. Web application developers recognized this and knew how web server technology could be exploited to gain access to an internal network. A web server program listening on port 80 of an Internet-facing server was designed to accept user commands instructing it to display content, but it would often also accept commands instructing it to receive and execute programs provided by a user. What every web developer knew, every hacker knew, and hackers were using port 80 to attack the web server and use it as a launch point to access the internal network. The immediate result of the port 80 problem analysis was that firewalls were installed not just at the network periphery but in a virtual circle around any machine that faced the Internet.

A Demilitarized Zone (DMZ) network architecture became the new security standard. The term was coined by the Bell Labs researchers who had created the first firewall; a DMZ was an area of the network that allowed Internet access to a well-defined set of specific services. In a DMZ, all computer operating software accessible from the Internet was either “hardened” to ensure that no services other than those explicitly allowed could be accessed, or treated as “sacrificial” systems that were purposely not well secured but closely monitored to see whether attackers were targeting the enterprise (Ramachandran 2002). These sacrificial systems were modeled on a fake national security system that Cliff Stoll had used to lure espionage agents. They were also called “honeypots” in analogy to the practice of trapping flies with honey rather than actively swatting at them.

Like its military counterparts, a cyber DMZ is surrounded by checkpoints on all sides. In the cyber case, the checkpoint includes firewall technology. The design of a DMZ requires that Internet traffic be filtered so packets can only access the servers that have been purposely deployed for public use and are fortified against expected attacks. It further requires that traffic filters be deployed between those servers and the internal network. It became standard procedure that the path to the internal network was opened only with the express approval of a security architect, who was responsible for testing the security controls on all DMZ and internally accessible software. This practice of security review prior to deployment matured into methods of integrating security review within the systems development life cycle and was christened “systems security engineering.” The process has since become an international standard (ISO/IEC 2002, 2009c).

This isolation of the path from the consumer to an e-commerce site soon became a liability. As competitors became aware that rivals were growing their businesses by allowing easy online access to catalogs, some sites attempted to stop the flow of e-commerce to rivals by intentionally consuming all the available bandwidth allowed through the rival’s firewall to its websites. Because these attacks prevented other Internet users from using the web services of the stricken competitor, they were designated “denial of service” attacks. To evade detection, attackers used multiple, geographically dispersed machines to execute such attacks, and this practice was dubbed “distributed denial of service” or “DDOS.” At this time, there was no way to mitigate such attacks other than to increase the bandwidth allocated to Internet services.

As companies realized how hard the Internet boundary was to police, it became apparent that the timesharing systems to which they were directly connected had also established markets in online services. This meant that the Internet was not only outside their firewall, but was also on the other side of telecommunications lines facing service providers. These were connections that had previously been considered secure. In addition, the introduction of easy-to-carry laptop computers had vastly increased the number of people who wanted to dial in from home and also while traveling, so dial-back databases were becoming hard to maintain securely. Caller ID and dial-back were gradually replaced by a new handheld technology that used cryptography to generate one-time passwords, called tokens. Multiple vendors competed to produce the most convenient handheld device that would be able to compute unguessable strings that provided user authentication in addition to passwords.

Security researchers had long anticipated that passwords alone would not be considered secure enough for user authentication. Handheld devices were referred to as a second factor, which, if required during authentication, would make it harder to impersonate a computer user. A third factor, biometric identification, would be even stronger, but was then still in the proof-of-concept stage. So credit card-sized handheld devices capable of generating tokens were issued to remote users. These contained encryption keys that were synchronized with keys on internal servers. Token administration servers supplemented passwords for authenticating user network connectivity.
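The tokens' one-time passwords can be sketched with a time-synchronized code derived from a shared secret. The HMAC-based construction below is only illustrative of the general idea; vendors of the era used their own proprietary algorithms, and the shared secret shown is invented.

```python
# Sketch of time-synchronized one-time passwords: token and authentication
# server share a secret key and a clock, and both compute the same short code
# for the current time window, so a submitted code proves possession of the
# token without the key ever crossing the network.

import hashlib
import hmac
import struct
import time

SHARED_SECRET = b"example-shared-secret"   # provisioned into token and server

def one_time_code(secret, window_seconds=60, t=None):
    counter = int((time.time() if t is None else t) // window_seconds)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"   # 6-digit code

# The token displays the code; the server recomputes it and compares.
submitted = one_time_code(SHARED_SECRET)
print("accepted" if hmac.compare_digest(submitted, one_time_code(SHARED_SECRET)) else "rejected")
```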

Increases in the numbers of remote users exacerbated the virus problem. In addition to installing antivirus software and patches on workstations, companies also enlisted security software vendors to track the spread of viruses on websites so they could block their users from accessing websites that hosted viruses, and thereby reduce the propagation of viruses on their internal networks. In the computer security literature, the term “blacklist” came to refer to the list of uniform resource locators (URLs) of websites known to propagate malicious software (“malware”). Web proxy servers work by intercepting all user traffic headed for the Internet, comparing the content of the communication to a set of communication rules established by an organization, and not letting the intercepted traffic proceed if there is a conflict between the traffic and the rules. A web proxy server blocks a user from accessing sites on the blacklist. The proxy is enforced because browser traffic is not allowed outbound through the network periphery by the firewalls unless it comes from the proxy server, so users have to traverse the proxy service in order to browse. Vendors quickly established businesses to hunt down and sell lists of malicious software sites.
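Blacklist enforcement at a web proxy reduces to a lookup on the destination of each outbound request, as in the following sketch. The blacklist entries are invented examples, and a real proxy would also fetch and relay the permitted pages.

```python
# Sketch of blacklist enforcement at a web proxy: every outbound request is
# intercepted, its destination host is compared to the vendor-supplied list of
# known malware-hosting sites, and blocked requests never leave the network.

from urllib.parse import urlparse

BLACKLIST = {"malware.example.net", "badsite.example.org"}   # vendor-supplied list

def proxy_decision(requested_url):
    host = urlparse(requested_url).hostname or ""
    if host in BLACKLIST or any(host.endswith("." + entry) for entry in BLACKLIST):
        return "BLOCK"     # refuse to fetch; optionally alert security staff
    return "FORWARD"       # fetch the page on the user's behalf

print(proxy_decision("http://malware.example.net/install.exe"))  # BLOCK
print(proxy_decision("http://www.example.com/catalog"))          # FORWARD
```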

As the lists of viruses, patches, and malware sites changed continuously, enterprise security management needed a way to know that all of their computers had in fact been updated with antivirus signatures, patches, and proxy configurations. All too often, a user who had been on vacation during a patch or antivirus update became the source of network disruption by bringing a previously eradicated virus back onto the internal network. Headlines in the mid-1990s repeatedly described the travails of many reputable companies whose computing centers were devastated by the latest Internet viruses and worms. Given the amount of effort that they were expending internally to keep up with the latest security technology, technology managers could estimate the cost burden this would place on their service providers, and they often doubted that those to whom they connected for software services were keeping up. This type of service provider review was often motivated by increasing regulatory scrutiny on handling of personally identifiable data. When an online transaction occurs between a customer and a company, these two entities are considered the first and second party to the transaction, respectively. If the company outsources some of the data handling for the customer to a service provider, this entity is referred to by regulators as a “third party” to the transaction. It did not take much skepticism to guess that technology services vendors were not keeping up with ever-increasing security requirements. This recognition led to a new standard for protecting the network periphery, not just from publicly accessible network connections, but even from trusted business partners. All network connections were now sources of potential threat of intrusion. Firewalls were placed on the internal side of the telecommunications lines that privately connected firms to their third-party service providers. Only expected services were allowed through, and only to the internal users or servers that required the connectivity to operate.

Figure 2.3 depicts a typical network topology in the mid 1990s. The Vs with the lines through them indicate that antivirus software was installed on the types of machines identified underneath them. The Ps stand for patches that were, and still are, frequently required on the associated computers. The shade of gray used to identify security technology is the same throughout the diagram. The dashed line encircles the equipment that is typically found in a DMZ.

Figure 2.3 Cyberspace in the mid-1990s.


2.3 e-Commerce

Despite its complicated appearance, the illustration in Figure 2.3 is dramatically simplified. At the time, LANs were propagating across remote locations; even relatively small companies might have hundreds of PCs and dozens of servers. All of this security software was very difficult to manage, and antivirus vendors came up with antivirus management servers that tracked each PC in a company inventory to make sure it had the most up-to-date signatures. The situation was not comfortable, but seemed controllable, and e-commerce opportunities beckoned. Customers now expected not just to find a catalog or phone number on company websites, but to actually place orders and receive reports. The first such sites were fraught with risk of fraud and threats to confidentiality because of the number of telecommunications devices that suddenly gained unfettered access to customer information, including credit card numbers.

To enable businesses to cloak customer communications in secrecy, a web software company introduced a new encrypted communications protocol called Secure Socket Layer (SSL). This was 1995, and in 1999, the protocol was enhanced by committee and codified under the name Transport Layer Security (TLS) (Rescorla and Dierks 1999). Despite an occasional vulnerability report (Gorman 2012), TLS has been the standard communications encryption mechanism ever since.

The TLS protocol requires web servers to have long identification strings, called certificates. These were technically difficult to generate, so security staff purchased and operated certificate authority software. The software allowed them to create a root certificate for their company, and the root certificate was used to generate server certificates for each company web server. The way the technology worked, a customer visiting the web server would be able to tell it was stamped with the identity of the issuing company by comparing it to the company’s root certificate. For critical applications that facilitated high asset value transactions, certificates could also be generated for each customer, which the SSL protocol referred to as a client. The SSL protocol thus made use of certificates to identify client to server and server to client. Once mutually identified, both sides would use data from the certificates to generate a single new key they both would use for encrypted communication. This allows each web session to look different from the point of view of an observer on the network, even if the same information, such as the same credentials, is transmitted. When a user visited an SSL-enabled site for the first time, the site owner would typically redirect them to a link where they could download the root certificate. Thereafter, their browsers automatically checked the corresponding web server certificates. If client certificates were required, the user would be asked a series of questions that installed the client certificate on their desktop.
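The certificate check described above survives in every modern TLS client. The sketch below uses Python's ssl module as a present-day stand-in for the 1990s SSL implementations; the CA file name and host are hypothetical.

```python
# Sketch of the certificate check described above: the client trusts a root
# certificate and accepts a server only if the server's certificate was issued
# under that root and matches the host name. The handshake also derives a
# fresh session key for encrypting the traffic.

import socket
import ssl

# Trust only the (hypothetical) company root certificate file.
context = ssl.create_default_context(cafile="company-root-ca.pem")

with socket.create_connection(("shop.example.com", 443)) as sock:
    # wrap_socket performs the TLS handshake and certificate verification.
    with context.wrap_socket(sock, server_hostname="shop.example.com") as tls:
        cert = tls.getpeercert()
        print("negotiated", tls.version(), "with", cert["subject"])
```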

But this SSL security configuration was difficult for e-commerce customers to manage, and users were confused by the root certificate downloading process and the questions about certificates. So browser software vendors started to preload their browsers with the root certificates from security software vendors, who, for a price, would sell a company web server certificates that corresponded to a root certificate delivered with the browser. The default behavior of this new version of browser when encountering a web server with a certificate that did not come from one of these preselected certificate vendors was to declare a security alert. This meant that the clients of any company who had invested in a certificate authority rather than buying certificates from a company like Verisign would receive a warning that the certificate was “untrusted.” The alert caused Internet users so much angst that most companies abandoned their own certificate authorities and instead purchased certificates from one of the vendors already installed in browsers, creating a new market in encryption keys. To add insult to injury, the certificate vendors periodically expired the certificates. So those who had previously made their own keys and switched to avoid the “untrusted” warning had to keep track of the date on which each purchased certificate would expire, and repurchase before that day to avoid system failure. The client-side certificates could also be purchased, but due to major variances in customer desktops, these proved so difficult to use that they were abandoned by all but high risk e-commerce financial companies like payroll service vendors.

Even without certificates, dealing with customers over the Internet was hard to manage. Due to the dispersed nature of many sales organizations, customer relationship records had always been difficult to manage centrally, and now login credentials and email addresses had to be associated with customer records. Other than timesharing vendors, companies had rarely issued login credentials to anyone who was not in their own phone directory. Managing external users required specialized software. Identity management systems were developed to ease the administration and integrate customer login information and online activity with existing customer relationship management processes.

This new development of widespread customer access to internally developed software made the software development and deployment process very visible to customers, and thus to management. Software programming errors were routine, and hastily assembled patches often caused as much damage as they were intended to fix. The insider threat to computers, which had previously been focused almost solely on accounting fraud, now turned to the software developer. Security strategies were devised to control and monitor code development, testing, and production environments. Source code control and change detection systems became standard cyber security equipment.

By the late 1990s, most e-commerce companies were highly dependent on their technology workforce for software support and had long been paying for dedicated dial-up lines to workers’ homes. Because so many users now relied on the Internet to perform their job functions, companies started to subsidize Internet access as well. Rather than pay for both, they allowed users to access servers remotely from the Internet. Although it was recognized that the plethora of telecommunications devices that could see this user traffic on the Internet presented the same eavesdropping threat that had recently been solved for customer data by using SSL, most of the people who used this technology were not handling customer data, but rather doing technical support jobs. Moreover, remote access still required two-factor authentication, and this was judged an adequate way to maintain access control, particularly when combined with other safeguards, such as a control that prevents a user from having two simultaneous sessions. However, once the speed of Internet connectivity became superior to that provided by modems, even business users handling customer data wanted to connect over the Internet. To maintain confidentiality of customer information, the entire remote access session would have to be encrypted. Virtual private network (VPN) technology answered this requirement, and also, if so warranted, would allow restrictions on network communication on a home network while a PC was VPNed into the corporate network. The network periphery was also extended to Blackberrys and other smartphones so that remote users could have instant access to their email without connecting via VPN, and this required specialized inbound proxy servers that encrypted all traffic between the handheld devices and the internal network.

While many of these security technologies ran on their own devices, they nevertheless required computer processing cycles on user workstations and servers. Firewalls were constantly challenged by increasing needs for network bandwidth. Innovative security companies sought to relieve workstations from their virus-checking duties by providing network-level intrusion detection systems (IDSs). The idea behind IDS was the same as that behind signature-based antivirus technology, but rather than compare virus signatures to files deposited on a computer, it compared them to what viruses would look like as they traveled across the network. This level of virus-checking was also appealing because it provided more information about where on the Internet a virus had originated. Network IDSs could also identify attacker activity before it resulted in the installation of destructive software by looking for patterns of search activity commonly used by hackers scanning a potential target. An IDS could also spot network-borne attacks such as DDOS.
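One of the simpler patterns a network IDS could flag is a single source probing many ports in a short interval. The sketch below assumes packet capture has already happened and that each packet is summarized as a (timestamp, source, destination port) record; the thresholds are arbitrary illustrative values.

```python
# Sketch of one simple network IDS heuristic: flag a source address that
# probes many different ports within a short window, a pattern typical of an
# attacker scanning for targets. Packet capture itself is out of scope here.

from collections import defaultdict

SCAN_THRESHOLD = 20    # distinct destination ports
WINDOW_SECONDS = 10

def detect_port_scans(packets):
    """packets: iterable of (timestamp, src_ip, dst_port) tuples."""
    seen = defaultdict(list)     # src_ip -> [(timestamp, port), ...]
    alerts = set()
    for ts, src, port in packets:
        # keep only recent observations for this source
        history = [(t, p) for t, p in seen[src] if ts - t <= WINDOW_SECONDS]
        history.append((ts, port))
        seen[src] = history
        if len({p for _, p in history}) >= SCAN_THRESHOLD:
            alerts.add(src)
    return alerts

sample = [(0.1 * i, "203.0.113.9", 1000 + i) for i in range(30)]
print(detect_port_scans(sample))   # {'203.0.113.9'}
```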

Although the set of viruses to be checked by network IDS was the same as that compiled over the years by antivirus vendors, the way the antivirus software checked for the signatures on the desktop required different technology than the way they were checked on the network. Security managers began to notice that the end result was that some viruses were identified by some technologies and not others. Even vendors of the same technology widely differed in their ability to identify viruses, and had different levels of false positives, cases in which software that was not actually a virus was mistakenly identified as such (McHugh 2000). Many companies created new departments called security operations centers (SOCs) to sift through the output of these systems to try to determine the extent to which they may or may not be under attack.

In the early 2000s, network security challenges were exacerbated by wireless. Like the demand for connectivity by traveling users in the mid-1990s, demand for wireless connectivity became irrepressible in the early 2000s. VPNs and handheld tokens were among the technologies commonly enlisted to maintain the confidentiality of those communications, though they were not widely used for wireless access control until researchers demonstrated how easily native wireless security features were broken (Chatzinotas, Karlsson et al. 2008).

Note that, whether these security technologies were newly adopted or redeployed for a new purpose into a company network, their use required installation of a server and specialized software that had to be configured and customized for that use. As described in Chapter 1, Section 1.3.4, these technical configurations such as firewall rule sets, security patch specifications, wireless encryption settings, and password complexity rules were colloquially referred to as “security policy.” As more and more security devices such as firewalls, proxy servers, and token servers had to be replicated to keep up with the escalating scale of technology services, security departments established management servers from which to deploy technology configurations. They did this not only for virus signatures, but also for all of the other security technologies. Security policy servers were established to keep track of which configuration variables were supposed to be on which device; if a device failed or was misconfigured, recreating its policies by hand would have taken too much work. Security policy servers economically and effectively allowed the technology configurations to be centrally monitored and managed.

Despite the best-intentioned management-level security policies supported by technical security policies, cyber security incidents continued to occur. In the course of an incident investigation, security devices were often found to be out of compliance with technology configuration policy. Security managers would have to investigate the root cause of such incidents and often had to track down logs of user activity on multiple machines. These efforts were streamlined by the introduction of security information management (SIM) servers, which were designed to store and query massive numbers of activity logs. Queries were designed in advance for events captured by logs that might indicate that systems were under attack. A SIM server can also verify that logs were in fact retrieved from every device in inventory, so it may serve a dual role for security managers: incident identification and policy compliance.
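A pre-designed SIM query might look like the following sketch, which counts failed logins per account and source address across collected logs. The log line format, field names, and threshold are invented for illustration.

```python
# Sketch of a pre-designed SIM query: collect login failures from many device
# logs and report accounts with repeated failures, a pattern that may indicate
# password guessing. The log format is invented.

import re
from collections import Counter

FAILED_LOGIN = re.compile(r"FAILED LOGIN user=(\w+) from=([\d.]+)")

def failed_login_report(log_lines, threshold=5):
    failures = Counter()
    for line in log_lines:
        match = FAILED_LOGIN.search(line)
        if match:
            failures[(match.group(1), match.group(2))] += 1
    return {key: count for key, count in failures.items() if count >= threshold}

sample = ["2004-03-01 02:11 host7 FAILED LOGIN user=admin from=203.0.113.9"] * 6
print(failed_login_report(sample))   # {('admin', '203.0.113.9'): 6}
```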

Figure 2.4 demonstrates the state of security technology in the early 2000s. e-Commerce security requirements had motivated the start-up of a plethora of security software companies that produced the additional gray security boxes that appear in the figure. The patch management processes had been enhanced to add tripwires to detect and report software changes. Though the word was originally the subject of a Master’s thesis on security, and then the name of a security software company, the generic use of the word tripwire now has the same connotation in software as its original use in physical security: a triggering mechanism (e.g., in physical security, a wire) that detects change in the environment (Kim and Spafford 1994). These internal software change detection mechanisms were also called host intrusion detection systems (HIDSs) to differentiate them from the network intrusion detection systems (IDSs) that were deployed at the network periphery (Amoroso 1999). The feature also reflects the recognition that segregation of technology services and system change controls are safeguards against insider threats and accidental changes as well as external threats. For this reason, the term “zone” has taken on more of the connotation of a local ordinance designating an area for a specified use. Network zones are now designated for isolating critical processes such as payroll from large sets of enterprise users who have no need to see those systems. Hence, many companies have created multiple network zones with different operational security policies of the type described in Section 1.3.3, even where machines do not face the Internet.
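The tripwire idea itself is simple enough to sketch: record a cryptographic hash of each monitored file on a known-good system, then periodically recompute and compare. The monitored paths below are examples only, and a real HIDS also protects its own baseline from tampering.

```python
# Sketch of host-based change detection (the "tripwire" idea): any difference
# between the stored hash and the recomputed hash means the file changed and
# should be investigated.

import hashlib
import json
import pathlib

MONITORED = ["/etc/passwd", "/usr/local/bin/payroll"]   # example paths only
BASELINE_FILE = "baseline.json"

def file_hash(path):
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

def record_baseline():
    baseline = {p: file_hash(p) for p in MONITORED}
    pathlib.Path(BASELINE_FILE).write_text(json.dumps(baseline))

def check_for_changes():
    baseline = json.loads(pathlib.Path(BASELINE_FILE).read_text())
    return [p for p, digest in baseline.items() if file_hash(p) != digest]

# record_baseline() runs once on a known-good system; check_for_changes()
# runs on a schedule and its output feeds the security operations center.
```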

Figure 2.4 Cyberspace in the early 2000s.


2.4 Countermeasures

Notwithstanding these security technology innovations, cyber attacks continued to be successful. Emails that looked like normal communications from financial institutions contained links to malicious look-alike sites that either tricked users into typing their passwords into the malicious sites or into downloading malicious software (“malware”) from them (Skoudis and Zeltser 2004). Cyber criminals attacked the methods used to direct users to Internet addresses and changed the addresses to those of look-alike sites. These attacks were called phishing and pharming, in analogy with casting a hook into the ocean to see who would bite, or planting seeds for later attacks, respectively. One type of malware (“spyware”) logged user keystrokes and sent user names and passwords to criminal data collection websites. Antivirus and intrusion detection vendors still create signatures for the latest spyware and malware, and SOC staff develop routine procedures to eradicate the software once it is identified. Network intrusion detection vendors offered SOC staff a feature that would sever the network connection of any user who was downloading malware, but to take advantage of it, organizations had to replace all of their IDSs with intrusion prevention systems.

The mid-2000s also saw a dramatic increase in organized crime on the Internet, and identity theft was rampant (Acohido and Swartz 2008). There were also many highly publicized incidents of lost laptops and backup tapes that contained large quantities of the type of PII used to commit identity theft. This raised awareness of the habits of remote users, who frequently kept such data on the laptops that they took with them on travel and also used removable media such as USB devices to carry data with them between home and work. While some of the technologies had been configured with the threat of device theft or loss in mind (e.g., smartphones programmed to destroy all stored data if a user enters too many incorrect passwords), many had never even been the subject of security review. Vendors hastily provided methods to encrypt laptop disks and USB devices. Companies adopted standards and procedures for the authorized use of digital media, and restricted access to the devices. It is hard to purchase laptops without USB ports and DVD writers, and security software to control them can be very intrusive, expensive, and hard to monitor. So it is not uncommon to see security staff adopt tactical measures such as applying crazy glue to USB ports and removing DVD writers from laptops before they are delivered to users.

Theft of storage devices extended even into the data center. So many devices were being encrypted that it became difficult for administrators to keep up with procedures to safeguard encryption keys. Simple key management systems such as password-protected key databases had been around since the 1990s, but the rate at which keys needed to be produced to perform technology operations tasks such as recovering a deleted file was rapidly increasing. Security vendors stepped in with automated key storage and retrieval systems. Often keys are stored on special hardware chips physically protected in isolated locations and accessible only by the equipment used to control access to the devices. This way, if the device is stolen without the hardware chip, the storage media itself cannot be decrypted. Unfortunately, it became so hard for users to get the data they needed to work at home on their home PCs that they would email it to themselves in order to bypass the security controls on removable media.

There has been no evolution in email security since the Morris Worm, only patches for known vulnerabilities. Even today, the protocols by which servers communicate and share information are not encrypted without very specialized agreements on both sides of the communication. Email is easy to observe with network equipment and is routinely routed via multiple Internet service providers before landing at its destination. Although there have been some attempts to identify authorized email servers via certificate-like keys, they are often ignored for fear of blocking legitimate email users by accident. Email security vendors created software to assist in the analysis of email content, and many companies who suspected that confidential data such as PII was being sent via email for work-at-home purposes thereby found that many of their business processes routinely emailed such data to customers or service providers. Even those with policies against sending PII in email sometimes had customers who demanded that their reports be delivered via email and were willing to accept the risk of identity theft for the convenience of receiving reports via email. Internal users would bow to customer wishes and ignore security policy. Although this risk acceptance was tolerated in some industries, in others, regulatory requirements prevented its continuation. The security technology response to this issue was content filtering. Patterns were created for identifying sensitive information. These included generalized patterns for social security numbers and for tax identification numbers from other countries. They also included snippets common in internally developed company software, and “internal use only” stamps hidden in proprietary documents. All information sent by users to the Internet, or other publicly accessible networks, is routed through a device that either blocks the information from leaving or silently alerts security staff, who investigate the internal user. Frequent or blatant offenders are often subject to employment or contract termination.
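A content filter of this kind is essentially pattern matching on outbound traffic, as in the sketch below. The two patterns shown are simplified examples, not a production rule set, and real products also handle attachments, encodings, and evasion attempts.

```python
# Sketch of outbound content filtering: outbound messages are matched against
# patterns for sensitive data, such as U.S. social security numbers or an
# "internal use only" stamp, and matching traffic is blocked or reported.

import re

PATTERNS = {
    "us_ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_stamp": re.compile(r"internal use only", re.IGNORECASE),
}

def inspect_outbound(message):
    hits = [name for name, pattern in PATTERNS.items() if pattern.search(message)]
    return ("BLOCK", hits) if hits else ("ALLOW", [])

print(inspect_outbound("Customer SSN is 078-05-1120, report attached."))
print(inspect_outbound("Lunch menu for Friday"))
```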

Still, hackers are finding holes in the network periphery to exploit, and many of these holes are still in vulnerable web servers. The network control of the DMZ does not prevent a web software developer from deploying code that can be used to imitate any network activity that is allowed by the web server itself. This can, of course, include access to sensitive customer data because that is how a customer gets it. Developers innovate by sharing software source code via both public (“open source”) and proprietary development projects. In starting a new project, they typically will try to reuse as much existing code as possible in order to minimize the amount of effort required to build new functionality. They may also use free software (“freeware”) for which no source code is available. Much of this code has known security bugs and flaws. These have been dubbed software security “mistakes” by security software consultants and vendors. Like the lists of viruses and software vulnerabilities, software security mistakes have been cataloged as part of the National Vulnerability Database project (MITRE 2009; MITRE ongoing). Cyber security vendors have created security source code analysis software to be incorporated into source code control systems so these bugs can be found before software is deployed. These work using static software analysis, which reads code as written, or dynamic software analysis, which reads code as it is being executed. Other cyber security vendors have created systems that observe network traffic destined for web server software, as well as the web server response. These devices, called web application firewalls (WAFs), are programmed to detect unsecure software as it is used, and to block attempts to exploit it in real time.
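Static source code analysis, in its simplest form, scans source files for constructs associated with cataloged weaknesses. The rules below are token examples invented for illustration; commercial analyzers parse the code rather than pattern-match lines, which greatly reduces false positives.

```python
# Sketch of the static-analysis idea: scan source files for code constructs
# loosely keyed to well-known weakness categories. Not a real analyzer.

import pathlib
import re

RULES = [
    ("possible command injection", re.compile(r"os\.system\(.*\+")),
    ("hard-coded credential",      re.compile(r"password\s*=\s*['\"]\w+['\"]", re.IGNORECASE)),
    ("weak hash for passwords",    re.compile(r"hashlib\.md5\(")),
]

def scan_source_tree(root="."):
    findings = []
    for path in pathlib.Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for label, pattern in RULES:
                if pattern.search(line):
                    findings.append((str(path), lineno, label))
    return findings

for finding in scan_source_tree():
    print(finding)
```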

Figure 2.5 depicts the state of the practice of cyber security. Encryption mechanisms are deployed on both critical servers and remote devices. Content filters prevent users from sending sensitive information to the Internet. Intrusion prevention devices have replaced intrusion detection devices. Web application firewalls accompany Internet-facing applications. Though Figure 2.5 includes most of the security technologies mentioned so far in this chapter, only the major ones are represented; not all existing security technologies appear in the figure.

Figure 2.5 Cyberspace and cyber security countermeasures.


2.5 Challenges

Note that we now use the term “cyber security” to refer to all of these countermeasures, while the history includes terms like computer security and information security. Though the terminology has morphed over the last half century from computer security to information security to cyber security, the basic concept has remained unchanged. Cyber security policy is concerned with stakeholders in cyberspace. However, the number and type of cyberspace stakeholders far exceeds the scope envisioned when the first computer security legislation was enacted. In a world where computers control financial stability, health-care systems, power grids, and weapons systems, the importance of informed cyber security policy has never been greater, and it is only likely to increase over the next several decades, if not longer.

Threat, countermeasure. Threat, countermeasure. Threat, countermeasure. None of the threats has disappeared; hence all of the countermeasures are still considered best practice. Nevertheless, cyber security breaches continue. Figure 2.6 depicts the paths taken by today’s hackers. They are the same paths that cyberspace engineers have created to allow authorized users into systems. Done correctly, cyber security can keep out the joyriders. In many domains, joyriders are not even perceived as an issue anymore, as the more dangerous threats come from hardened criminals and espionage agents. Note that our description of the evolution of cyber security in no way implies that the way it has evolved is in fact effective, or even appropriate.

Figure 2.6 Cyber crime attack paths.


New paradigms of thinking about cyber security protection are needed to face these challenges. Nevertheless, every one of the security devices in Figure 2.6 (and we have skipped or glossed over dozens of others it would be possible to include) is recommended by current cyber security standards. These standards have been proposed as the subject of legislation, and this is just one of numerous reasons why the history of cyber security presents policy issues. To paraphrase Hubbard, “Ineffective risk management methods that somehow manage to become standard spread vulnerability to everything they touch” (Hubbard 2009).
