3

THE NATURE OF NETWORKS

Knowledge—The First Step Toward Digital Resilience

“When you invent the ship, you also invent the shipwreck,” wrote the French philosopher and cultural theorist Paul Virilio.1 The power of networks is connectivity. The vulnerability of networks is connectivity. But let’s identify our vessel properly. It is not the SS Internet. It is something much older, because networks are much older.

ALL NETWORKS ARE SOCIAL

Back in 1758, the celebrated taxonomist Carl Linnaeus, who laid the basis for the modern classification of biological species, coined a Latin term for human beings: Homo sapiens—“knowing man.” Nicholas A. Christakis and James H. Fowler, authors of Connected: How Your Friends’ Friends’ Friends Affect Everything You Feel, Think, and Do, believe it’s time for an update to a term that more accurately reflects the true evolutionary status of modern humanity. Their suggestion is Homo dictyous. It translates as “network man,” which, they argue, better describes a species that has evolved to care about others through social networks. Christakis and Fowler hold that networks have become so central to human existence that they now constitute “a kind of human superorganism,” which vastly expands the range of human capability. “Just as brains can do things no single neuron can do, so can social networks do things no single person can do.”2

While Christakis and Fowler see the evolution of Homo dictyous from Homo sapiens as relatively recent on the timeline of human evolution, they certainly do not claim any date as recent, say, as 1967, when the plan for the Internet’s ancestor, “ARPANET,” was published under the auspices of ARPA (later renamed DARPA), or 1983, when the Internet’s foundation protocol suite, TCP/IP, was adopted. No, no. Christakis and Fowler see the emergence of Network Man as something that happened some millennia back. Just when is not certain, but it must have come well before the roughly five thousand years of history we have in recorded form. The impulse to record history had to have some motive, and the most compelling motive is a collective desire to share a common past. This desire to share implies the existence of social networks.

Social networks were clearly of tremendous value in our cultural evolution, but, if we take Christakis and Fowler’s Homo dictyous thesis as more than a figure of speech, we must conclude that networks also played a role in our biological evolution as a species. Indeed, they may have served as the nexus at which biological evolution met social evolution. Whatever the stronger driver, biological need or social imperative, humankind’s most foundational inventions have served explicitly to expand humanity’s social networks. For instance, the invention of writing in portable forms, about 3200 BCE in Sumer and (independently) in Egypt, enabled individuals to network with one another, governments to network with their people, and governments and peoples to network with other governments and other peoples.3 In this manner, social networks challenged time and distance in ways that overcame political and even geographical boundaries. Such transcendence of real-world boundaries is, of course, precisely the innovation frequently credited to the ascendancy of the Internet in our own day.

ACTION ITEM

 

The best way to begin to understand digital networks is to understand the nature of social networks. The technology of networking is new. The motives for networking predate recorded history.

One often-overlooked aspect of network technology is its dependence on seemingly unrelated technologies. For instance, we don’t customarily see the development of various vehicles—from the wheel to spacecraft—as a development in network technology; nevertheless, vehicle innovation increased both the distance over which written communication could be conveyed and the speed of its conveyance.4 This, in turn, enhanced humankind’s networks and networking capabilities. The transportation of people and cargoes in any considerable quantity must have attracted the attention of monarchs and their border guards even in the ancient world. Substantial shipments and immigration, readily detected, could be to some degree regulated by central authority. But the transportation of written material was far less obvious. Written records are far more portable than crowds of people or bulk cargoes. Even in the pre-digital world, including the ancient world, the central control of social networks was far from total and often quite difficult.

THE RELATIONSHIP BETWEEN DATA AND NETWORKS

Before the advent of printing (a force to be reckoned with in Asia by the ninth century), hand-copied manuscripts were the only available means of reproducing written works in quantity, and manuscripts remained dominant until the evolution of movable-type printing, beginning about 1040 in China and 1450 in the Europe of Johannes Gutenberg. Because hand-copying was labor-intensive and costly, libraries and scriptoria (where, in medieval Europe, manuscripts were both copied and stored, usually in monasteries) became the chief nodes of the knowledge network. Typically, libraries were the creation of the wealthy and powerful and, also often, were the property of religious authorities or secular officials (high priests, popes, cardinals, as well as kings, queens, and other hereditary heads of state). Both the flow and content of written data were therefore largely controlled by the state and those most deeply invested in the state. Nevertheless, these key network nodes were vulnerable to attack and destruction. The great Library of Alexandria, for instance, was created and maintained under the patronage of Egypt’s Ptolemaic pharaohs. Built in the third century BCE, it became, arguably, the central node of the ancient world’s knowledge network. It was not just the largest ancient library (housing perhaps 400,000 papyrus scrolls at its height), it was also an active, real-time research center to which learned persons could travel to meet with others and discuss ideas.5

While the Alexandrian library had the patronage, sanction, and protection of the state, it was vulnerable to all manner of attack. In fact, it is less famous today for having existed than for having been destroyed—even though historians and archaeologists continue to debate just which of four major assaults on the library proved definitively destructive. During the Great Roman Civil War (Caesar’s Civil War, 49–45 BCE), it suffered a fire in 48 BCE. The third-century Roman emperor Aurelian’s conquest of the Palmyrene Empire in AD 270–275 included the restoration of Egypt to the Roman Empire. In the fight to reclaim Egypt, Aurelian burned the Royal Quarter of Alexandria (the “Brucheion”) to the ground. If the great library, housed in this neighborhood, had not been totally destroyed by Caesar in 48 BCE, it was almost certainly razed in this massive blaze. In AD 391, the Coptic Christian pope Theophilus of Alexandria may have destroyed the library as part of his campaign of destruction against temples and other structures in the city associated with “pagans.” Finally, if the Library of Alexandria had not been obliterated by these three attacks, it may have been (according to at least four Arabic sources) destroyed by order of ’Amr ibn al-’As, commanding Muslim forces in the Muslim conquest of Egypt in 642.6

The destruction of the Library of Alexandria may have been incidental to wars of conquest or may have been the deliberate attempt of one ruling power to disrupt the data network of another and even to obliterate the history of a people in a particular time and place. Wikipedia lists 168 notable instances of mass book and manuscript burnings, beginning with the Destruction of Ebla in 2240 BCE, going through the bibliographical bonfires of Adolf Hitler’s Nazi Party in 1933, and ending with the book burnings and destruction of cultural monuments by ISIS during 2014–2015.7

Indisputably an attack against a network of ideas was the assault by English king Henry VIII against the monasteries during the somewhat euphemistically named Dissolution of the Monasteries. In several sweeps during 1536–1541, forces dispatched by the newly self-minted Protestant monarch set out to erase any lingering governing authority of what had been the Roman Catholic state church of England. Officially, the dissolution was a state seizure of Roman Catholic church property, including much valuable real estate, income, and other assets, and the dismissal of those clergy who did not submit to the new religious order in England. Tragically, many of the seizures were accompanied by acts of wanton destruction, especially of the monastic scriptoria, where manuscripts, including priceless illuminated manuscripts, were simply discarded or destroyed. Call it a network hack, with the emphasis on the word hack in its root meaning.

ACTION ITEM

 

One of history’s greatest lessons is also one of its least heeded. Part one of the lesson is that data is vulnerable. This is evidenced by its frequent loss or outright destruction. In history’s most famous instance of data loss, the destruction of the Library of Alexandria, security was elaborate. The library’s contents were housed in a great building, protected by a great ruler, in the heart of a great imperial city. So, part two of the lesson is that, throughout history, security measures alone have been inadequate to protect data. Heed the lessons of history. Install security, but create resilience on the historically proven assumption that security, while necessary, is almost never sufficient for the preservation of data.

The storage of flammable records in flammable buildings made for an inherently nonresilient network. Add to this the unique nature of at least some of the manuscripts—either one-offs or reproduced in very few copies—and the network at given nodes became even more vulnerable, not just to accident, but to the intentional acts of literal hackers. The advent of printing, particularly printing with movable type, added great resilience to the world’s pre-electronic data and knowledge networks by introducing redundancy in the form of multiple copies of data. The degree of redundancy—the number of copies—was potentially open-ended, provided that at least one copy of a particular work existed from which to set new type and create a new printed edition. Redundancy in and of itself is neither synonymous with resilience nor a substitute for it. It can, however, be an important component of resilience, which is why savvy businesses and individuals continuously back up their files offsite or in the cloud.

Printing made creating redundancy cheaper and faster than making manuscript copies by hand. In turn, the invention of movable type, which allowed setting type using an individual “case” of letters, made printing far less time-consuming, skill-intensive, and labor-intensive than carving entire unique book pages one at a time. Typesetting is also far more fault-tolerant than carving a woodblock. Because of its cost-effectiveness and speed, movable type not only made knowledge networks more resilient, it made them larger, arming more nodes with the same data. It decentralized the data networks, which no longer had to be concentrated in a few great libraries or scriptoria, both vulnerable to regime change, religious reformation, and natural disasters.

By the nineteenth century, two new elements of speed and economy were added to Gutenberg’s original innovation. The linotype machine allowed the very rapid composition of type, cast into metal “slugs,” each of which (typically) created one line of type. The linotype introduced a significant degree of automation into typesetting. Following this were innovations in the speed of the presses themselves, which ran on steam or (later) electric power. These turned out copies of inexpensive books and, especially, newspapers, pamphlets, broadsides, official government publications, and publications by those opposed to governments in great quantity. This raised redundancy to the status of semi-ubiquity, multiplying the resilience of information networks even further while expanding and decentralizing those networks. Of course, it was still possible to attack information networks. For example, on May 21, 1856, during the run-up to the American Civil War, pro-slavery hooligans attacked an abolitionist newspaper office in Lawrence, Kansas Territory, during a time in which territorial residents were voting on whether to enter the Union as a slaveholding state or a free state. The pro-slavery raiders smashed the newspaper’s printing press and threw its stock of type into the Kansas River.8

Knowledge networks based on printed matter may also be vulnerable to the machinations of criminal and civil law, which may be used to enjoin printing press owners and publishers from printing and distributing material some party considers objectionable. In the case of a legal challenge to the operation of a press, resilience may consist of hiring a better lawyer than that of the plaintiff. In the case of the violent sack of Lawrence, abolitionists from other parts of the country came to the aid of the town and new presses were purchased. The availability of such sources of aid constituted the resilience of this particular network.

REAL-TIME TRANSMISSION OF DATA

Samuel F. B. Morse’s commercially viable version of the telegraph, first demonstrated in 1837 and patented in 1840, launched the era of the analog electronic network, replacing ink and paper with the electrical/electronic creation and transmission of written (or verbal) data and its transcription back into an ink and paper form on the receiving end of the transmission. Telegraphy enabled virtually instantaneous communication, potentially in real-time. There were delays created by the time it took to manually transcribe the electrical signals (the “dots” and “dashes” of the Morse Code) into more universally readable alphabetic language and numerals, and there was the time consumed in delivering messages from the telegraph office to the addressee. On the other hand, in certain applications, communication was in real-time and interactive, even conversational. For instance, a military commander at the battlefront could communicate telegraphically with higher headquarters in a rear echelon to report on conditions, ask for instructions, request reinforcements, or respond to orders.

In the American Civil War, telegraph wire was run along the rope that tethered manned observation balloons to the ground. This allowed observers aloft to report in real-time on enemy movements and on the effect of artillery fire on the enemy. Commanders on the ground could adjust their tactics, and artillerists their aim. In a historical study of how President Abraham Lincoln used the telegraph to personally direct some of the major action in the Civil War, author Tom Wheeler calls the sixteenth president “the first national leader to project himself electronically. The command and control by email that the evening news showed being employed in a twenty-first-century war [in Iraq] traces its roots to the nineteenth-century American Civil War.” Wheeler argues that the “telegraph changed the nature of national executive leadership and provided Abraham Lincoln with a tool that helped him win the Civil War” by “eliminating distance as a controlling factor in the exchange of information, thus allowing coordination among disparate forces and between the national leadership and those forces.”9 Indeed, Steven Spielberg’s 2012 film Lincoln includes several scenes of the president “running” the war from a basement telegraph room in the War Department’s offices.10

In addition to providing speed and varying levels of interactivity, telegraphy increased the resilience of information networks by eliminating the need for vulnerable human couriers. There was no Pony Express rider to be ambushed. Furthermore, the interactivity of telegraphy made networks more resilient because errors (garbled messages, ambiguous messages, misunderstood messages, messages with typographical errors) could be readily corrected. The receiving operator or the end recipient could very quickly request clarification, confirmation, retransmission, or correction of doubtful messages—sometimes doing so immediately and interactively.

Within years of its invention, the telegraph expanded into a vast network that one writer on the history of technology dubbed “the Victorian Internet.”11 Writers of the Victorian age were captivated by the notion of communicating via electricity, a force or phenomenon steeped in mystery and perceived to be so ethereal as to defy physical reality. To be sure, using electricity, telegraphy could defy time and space, and when the Victorian Internet was vastly expanded from continent to continent via the submarine Atlantic Cable beginning in 1858, the effect seemed positively miraculous.

The public perception of a technological miracle bears discussion because the perception lingers to this day. Regarding the telegraph, people focused on the marvel wrought by the invisible “medium” of electricity. The fact that this little-understood vehicle required a wire or cable infrastructure was all but ignored by the public in much the same way as the casual user of today’s Internet speaks of “cyberspace,” thereby ignoring a vast complex of physical infrastructure—fiber, copper, semiconducting materials, microprocessors, microcircuits, interconnecting cables and sockets and plugs, switches, and servers.

A digital network does not exist in a vacuum any more than the telegraph or transcontinental cable existed in the absence of conductive materials, connectors, switches, and even wooden telegraph poles. Whether in the context of the “Victorian Internet” or of today’s Internet, to ignore or evade the physical infrastructure by means of metaphors devoted to ethereal electricity or cyberspace is to fail to acknowledge the essential fragilities and vulnerabilities of both networks. Outlaws capable of ambushing a Pony Express rider were even more capable of cutting down telegraph lines and instantly disrupting the network. Those with more sophisticated nefarious purposes could also tap into the line and intercept messages or even send deliberately misleading messages of their own: “Stop the gold train at Banditville!”

When the legendary private detective Allan Pinkerton, hired to protect President-elect Abraham Lincoln on his inaugural rail journey from Springfield, Illinois, to Washington, D.C., caught wind of an assassination plot brewing among Southern sympathizers in Baltimore, he cut the telegraph lines to and from the city. He wanted to prevent conspirators from sending or receiving information on the whereabouts of the presidential train.12 In terms of network security, this incident is revelatory: The deficiency of resilience in the telegraph network was well known to those who had reason to be in the know, even if it did not occupy a prominent place in public awareness.

As for the Atlantic Cable, remarkable though the technological achievement was, the hardware was also notoriously unreliable and subject to failure—as was to be expected with thousands of miles of spliced cable resting uneasily on the sea floor of an often-turbulent Atlantic, a vast body containing a highly corrosive, highly conductive salt solution. Eventually, issues of resilience were addressed by adding redundant wires and cabling (especially in densely populated areas) and hiring linemen to routinely inspect and maintain overland telegraph lines. As for undersea cables, these were, to the degree possible, periodically pulled up, inspected, repaired, and relaid by fleets of specially designed cable-tender ships. In the earliest days of the Atlantic Cable, transmission was extraordinarily slow. Poor reception necessitated the crudest possible form of resilience to eliminate errors: redundancy. Morse Code messages were transmitted character by character. The transmitting operator would send one character and then wait for the receiving operator to transmit it back, to confirm that it had been received without error. This back-and-forth had to be repeated until both sides of the conversation were satisfied that the single character had been properly transmitted and received. In 1858, the average speed of transmission was one character every two minutes, and the first message sent took more than seventeen hours to complete.13 Primitive, yes. But today’s digital error-correction protocols share the basic principle of repeat-confirm-correct-repeat-confirm.
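
For readers who like to see the principle in code, here is a minimal Python sketch of that echo-confirm idea, not of any historical apparatus: a sender transmits one character at a time over a simulated noisy link and accepts it only when the receiver’s echo matches what was sent. The noisy_channel function and its error rate are illustrative assumptions, not a model of the actual cable.

```python
import random

def noisy_channel(char: str, error_rate: float = 0.2) -> str:
    """Simulate a lossy link: occasionally deliver the wrong character."""
    if random.random() < error_rate:
        return random.choice("abcdefghijklmnopqrstuvwxyz ")
    return char

def send_with_echo_confirm(message: str) -> str:
    """Send one character at a time; accept a character only after the
    receiver's echo matches the original, retransmitting otherwise."""
    received = []
    for char in message:
        while True:
            arrived = noisy_channel(char)    # outbound transmission
            echo = noisy_channel(arrived)    # receiver echoes it back
            if echo == char:                 # sender compares echo to original
                received.append(arrived)
                break                        # confirmed; move on to the next character
            # mismatch: repeat the same character until both ends agree
    return "".join(received)

if __name__ == "__main__":
    print(send_with_echo_confirm("stop the gold train"))
```

As on the real cable, a round trip corrupted twice in complementary ways can still slip an error through, which is one reason modern protocols layer checksums on top of the basic repeat-confirm loop.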

EXTENDING THE REACH OF DATA NETWORKING

Despite universal public acceptance of overland and undersea “wired” telegraphic communication—which was supplemented by “wired” voice communication with the emergence of the telephone (patented by Alexander Graham Bell in 1876)—there was growing awareness of the limitations imposed by the necessity of a physical infrastructure in communication networks. In response, late in the nineteenth century, a wireless networking technology began to emerge. In America, it was called radio; in Britain, more simply, wireless. On December 12, 1901, the Irish-Italian inventor Guglielmo Marconi gave a dramatic demonstration of the potential of wireless communication when his team transmitted the first transatlantic radio signal, from Cornwall to Newfoundland.

Not only did wireless communication technology greatly extend the reach of data networking, potentially casting an information net over the entire planet, connecting places beyond the practical reach of a physically interconnective infrastructure, it boded well for security and, therefore, network resilience. There were no wires to cut, tap, or short out. True, analog radio signals were vulnerable to deliberate interception and accidental interference, but Marconi and his associates offered a solution—what Marconi advertised as an ability to “tune [his] instruments so that no other instrument that is not similarly tuned can tap my messages.”14 So-called harmonic telegraphy was already in use for multiplexing wired messages, sending more than one message over a single telegraph wire at the same time by transmitting the messages as pulses of specific audio frequencies. A given receiving device would be tuned to the frequency of one message and not of the others. In “tuning” his wireless transmitter and receiver, Marconi adjusted the wavelength of the electromagnetic waves he produced, not the frequency of any resulting audio signal. The effect, however, was the same. If the transmitter and receiver were tuned to the same wavelength, the transmitted message would be received. If not, the message would not be received and would therefore be (Marconi claimed) secure from prying instruments and ears.

In June 1903, less than two years after the first transatlantic wireless signal, history’s first documented attack on a wireless electronic network occurred. John Ambrose Fleming, a British physicist, was about to demonstrate long-distance radio communication before a distinguished audience at the Royal Institution, London. Three hundred miles away, in Cornwall, Fleming’s employer, none other than Guglielmo Marconi, was preparing to send him a uniquely “tuned”—and therefore utterly secure—wireless signal.

The audience prepared to listen to the Morse Code message, which was acoustically amplified so that it could be heard throughout the auditorium. They waited in anticipation. Suddenly, a transmission echoed through the hall. Arthur Blok, Fleming’s assistant and adept at Morse Code, instantly recognized it as the monosyllable “rats” repeated over and over. A Morse printer, which was connected to the receiver, spelled out the word as well, followed by a string of expletives and some mocking quotations from Shakespeare.

Marconi’s “secure” network was intended to be a network of two and was now, unexpectedly, a network of three. Turns out it had been breached by one Nevil Maskelyne, a highly successful music hall magician by trade, who routinely used short-range wireless Morse Code transmission in his show-stopping mind-reading act. Maskelyne later explained the motive for his attack. It was, he said, an effort to expose security flaws in Marconi’s design.

The Marconi incident was doubtless embarrassing, but it was hardly a deal breaker. For one thing, relatively few messages transmitted by wired or wireless telegraphy were considered matters worthy of high security. People did not routinely transmit, as they do today, personally identifiable information (PII). Business, though discussed, was not routinely executed online. If you wanted to send messages that required a high degree of security, you could always send them in secret cipher. Extremely sophisticated encryption/decryption machines have been available for a surprisingly long time. For instance, the first iteration of the legendary German “Enigma” was patented in 1918 and machines were being marketed commercially beginning in 1923.15 Hacking to steal identities or secrets or financial information or even for fun was not common, let alone epidemic in the pre-digital era. Nevertheless, network exploits—and therefore network security—did surface as a subject of law enforcement as well as popular attention and concern even before the appearance of the personal computer and the Internet.

ACTION ITEM

 

Nevil Maskelyne’s trolling of Marconi—his exposure of a security flaw in radio—did not, of course, kill wireless. The reason was that most people did not envision using the medium to transmit highly sensitive data. This was shortsighted, even though it was an instance of the reasonable prioritization of data. Digital resilience is not a one-size-fits-all approach. On the contrary, it is dynamic and depends, among other things, on reasonably assessing the security priority of different classes of data. Think of resilience as a flexible business solution, not as a moat or a high wall.

THE BIRTH OF “HACKING”

In the 1970s, people—usually young—who called themselves “phone phreaks” began almost routinely breaking into the computer networks of telephone company “exchanges” by using “blue boxes,” homemade devices that synthesized the tones of a telephone operator’s dialing console to do such things as switch long-distance calls.16 Using a blue box—a youthful Steve Jobs and Steve Wozniak began their professional collaboration by marketing one they had crafted in the 1970s—a phone phreak could make free calls to just about any place in the world.17 Truly dedicated phreakers did more than use blue boxes to make free calls—a crime known as toll fraud. They became adept at listening to the network’s signaling tones to figure out how these were used to route calls, they devoured phone company technical literature (sometimes dumpster diving to acquire such material), and they even broke into hardwired telephone equipment to wire in their own phones. A phreaker subculture developed, and some groups covertly used conference call circuits to communicate with one another in a pre-Internet version of a chat room.

During the early 1980s, the major telephone companies migrated to computerized telephone networks, which digitized dialing information and sent it separately from the audio channel via a digital channel that was inaccessible to blue boxes. This migration also signaled the emergence of the era of the personal computer, and a period in which phone phreaking morphed into what became popularly called “hacking.”

To the general public, the world of computer hacking in the early 1980s was mysterious, rarefied, and fascinating. The first hacker group to be portrayed as a serious threat to network security called itself the “414s,” after their Milwaukee, Wisconsin, area code. Members were indicted in 1983 for attacking some sixty mainframe computer networks, including those belonging to Los Alamos National Laboratory and the Memorial Sloan-Kettering Cancer Center.18 The idea that “kids” could penetrate serious government and institutional networks was both upsetting and intriguing. Hackers began to achieve a certain mythic status in pop culture, as evidenced by the 1983 John Badham blockbuster WarGames, about a teenage “hacker,” played by a young Matthew Broderick, who uses his PC and an old-school acoustic modem to connect with a military supercomputer, which he believes has given him free access to a new thermonuclear war simulation game. In fact, he nearly starts World War III.19

Within about three years of the 414s and WarGames, the popular perception of hacking had escalated from a pop-culture phenomenon to an increasingly serious law enforcement concern. Amid a growing frequency and volume of network breaches, Congress passed the Computer Fraud and Abuse Act in 1986, giving law enforcement and the courts jurisdiction over digital attackers. During the late 1980s and throughout the 1990s, breaches and other security incidents became more and more common.

The public often seemed to regard hacking as a form of social expression rather than as a criminal enterprise. Figures like Kevin Mitnick attained the stature of a cyber Billy the Kid. At sixteen, Mitnick had breached the computer system of Digital Equipment Corporation (DEC) and copied advanced DEC operating system software. Nine years later, in 1988, he was convicted for that crime and sentenced to a year in prison followed by three years of supervised release. Before his probation ended, he breached voicemail computers belonging to Pacific Bell, and then went on the lam, breaching dozens of computer networks while he was a fugitive. Arrested early in 1995, he confessed to wire fraud, computer fraud, and intercepting a wire communication, for which he served a total of five years in prison. Presumably responding to the absence of public outrage over Mitnick’s exploits, law enforcement officials argued to the sentencing judge that the hacker was capable of starting a thermonuclear war simply by whistling the launch code for NORAD missiles into a prison payphone. The judge took this seriously, and Mitnick spent eight months of his term in solitary confinement for fear he would get to a phone and start World War III.20

Less well publicized during this period were attacks even more consequential for their impact on network security. In 1988, for example, four men were arrested for breaching the computer system of the First National Bank of Chicago in a foiled attempt to transfer $70 million to Austrian bank accounts.21 This incident certainly roiled the banking and business communities, but it was the emergence that same year of a piece of malware specifically designed to penetrate, compromise, and exploit networks that prompted the federal government to create the Computer Emergency Response Team (CERT) at Pittsburgh’s Carnegie Mellon University. Moreover, CERT operated under the direction of the Defense Advanced Research Projects Agency (DARPA), the very Department of Defense agency that had funded the creation of ARPANET, the direct precursor of the Internet itself.22

The item of malware in question was the so-called Morris worm. Now, in the late 1980s, the public’s hands-on experience with computing and networked computing was still new. What everybody did understand quite clearly, however, was the concept of embezzlement, particularly when the funds at risk were potentially one’s own. The idea of a digital “worm” was harder to grasp. People understood that computers could be broken into, but they had to be educated about the means through which malware was propagated on a computer network so that it could infect any number of machines on that network. Once this notion was explained, however imperfectly, there was considerable fear concerning the stealthy power of a network worm to steal information and other assets like the proverbial thief in the night.

Cornell University student Robert Tappan Morris, the worm’s namesake and creator, never intended to do harm. He was neither a criminal nor even a quasi-recreational hacker. His purpose in creating the worm (he claimed) was to exploit weak passwords and vulnerabilities in widely used Unix operating systems so that he could “worm” his way across the entire Internet as a way of gauging its size—which was not all that vast in the late 1980s. The Morris worm was essentially an early network monitoring—or at least estimating—tool. It was an attempt to create some portion of the picture of Internet topology. The unintended consequence of this exercise in cyber census taking was, in effect, a large-scale distributed denial of service (DDoS) attack. Morris coded his worm in such a way that it could infect the same computer over and over again, each additional process slowing the machine down further. Eventually, networked computers became too slow to function.
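
The flaw is easier to see in code. The sketch below is a simplified, hypothetical model in Python, not the worm’s actual source: according to widely cited analyses of the incident, the worm checked whether a machine was already infected but, roughly one time in seven, reinfected it anyway, so that a faked “already infected” reply could not immunize a host. Each extra copy consumed processor time and memory, which is what ground machines to a halt.

```python
import random

def should_infect(already_infected: bool, reinfect_odds: float = 1 / 7) -> bool:
    """Simplified model of the reinfection decision attributed to the Morris worm:
    always infect a clean host; reinfect an already-infected host with small
    probability so a faked "already infected" answer offers no protection."""
    return (not already_infected) or random.random() < reinfect_odds

def simulate(hosts: int = 50, attempts: int = 2000) -> float:
    """Return the average number of worm processes per host after repeated
    infection attempts; the count only grows, which models the slowdown."""
    processes = [0] * hosts
    for _ in range(attempts):
        target = random.randrange(hosts)
        if should_infect(processes[target] > 0):
            processes[target] += 1
    return sum(processes) / hosts

if __name__ == "__main__":
    print(f"average worm processes per host: {simulate():.1f}")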

The damage caused by the Morris worm was important for both the public and professional perception of the threat to networks during this period. For this exploit was not computer-aided theft, something that is easily comprehended. It was an attack on the functionality of a digital network that people were just beginning to rely on. The Morris worm infected an estimated 6,000 major Unix machines—about 10 percent of the 60,000 computers connected to the Internet at the time. The raw numbers are small because the Internet was small, but the fraction of the network affected was huge.

The lessons of the Morris worm were nevertheless highly useful. First, the creation of CERT demonstrated the government’s understanding that the digital network was becoming an issue of national security and therefore required the resources of the national government to help protect and defend it. Second, the nature of the attack, an exploitation of inherent vulnerabilities in Unix code, revealed that network security depended at some level on software security. This meant engineering software to avoid introducing vulnerabilities in the first place—engineering software to be secure by design. If the Unix vulnerabilities had not existed, Morris could not have created his worm. But developers are human. They will make mistakes or simply fail to recognize all vulnerabilities. Efforts at prevention therefore need to be supplemented by continuous monitoring of networks to detect attacks as well as their impact. Monitoring enables a rapid response to breaches and attempted breaches, and provides information useful for enhancing the security of the underlying software. Resilience is a continuous endeavor.

A BRIEF HISTORY OF NETWORK SECURITY

Threats have always been a part of networks, both analog and digital. Typically, security has lagged behind the emergence of threats. CERT was established to promote the security of the United States’ rapidly growing national digital network. That was admirable. Unfortunately, CERT also established a model of a reactive rather than a preemptive approach to computer and network security. The result of this was the proliferation of viruses and other malware throughout the 1980s and 1990s, each pursued by an antivirus software “cure.”

Indeed, the first wave of “digital security” became largely defined as the detection and removal of malware from computers. A profitable antivirus industry (on the order of $6 billion to $8 billion today) was born, and relatively little effort was devoted either to designing security into software code or to developing effective ways to monitor increasingly large and complex digital networks. Unsurprisingly, the 1990s became a decade of exploits that exposed security gaps in numerous high-profile corporate and government systems, ranging from Griffiss Air Force Base to NASA to the Korean Atomic Research Institute. A crash of the AT&T network, initially suspected to be the work of hackers though ultimately traced to a software flaw, brought down its long-distance service for a time, and the U.S. Department of Defense recorded a quarter-million attacks against its own computers and networks in 1995 alone. That same year, the Computer Security Institute determined that one in five websites had been hacked.23

If government and corporate systems were the targets of choice during the 1990s, the rise of Internet e-commerce introduced a profit motive and a vastly heightened degree of interactive access that brought explosive growth in malicious attacks. In the opening decades of the twenty-first century, computer networks more and more frequently became the targets of the kind of organized crime breaches—sometimes state-sponsored or at least state-sanctioned—described in Chapter 1.

Hyper-connectivity created a new economy and a new threat landscape. Time and again, the vulnerability of code was exposed. Sometimes the weak point was in software; sometimes in the firmware (software embedded in engineered products and systems) of such Internet of Things devices as point-of-sale (POS) credit card readers. The government and industry responded, to be sure, but not always by focusing on building security into software design. True, as early as the 1970s, three researchers at the Massachusetts Institute of Technology (MIT) pioneered a cryptographic solution by creating RSA public-key encryption technology, which later found its way into the likes of Microsoft Windows and such popular applications as the Quicken check writing and banking application.24 Network encryption-based security approaches accompanied the rise of the Internet, most notably the development of virtual private network (VPN) technology, Secure Sockets Layer (SSL), Secure Electronic Transaction (SET), and the Data Encryption Standard (DES). All enhanced network security, as did antivirus software and firewalls. IT professionals operating large networks turned to Intrusion Detection Systems (IDS), content filtering software, the separation of vital components of operating systems, and behavior analysis software as bulwarks to defend their networks.

Yet the network guardians kept playing catch-up. In 2007, then governor of Arizona (and, later, secretary of Homeland Security in the Obama administration) Janet Napolitano scoffed at the idea of building a wall along the U.S.–Mexican border. “I declared a state of emergency and was the first governor to openly advocate for the National Guard at the border,” she told the National Press Club in 2007, “yet, I also have refused to agree that a wall by itself is an answer. As I often say, ‘You show me a 50-foot wall, and I’ll show you a 51-foot ladder.’”25 This quip might well be used to describe the apparent spirit in which attackers view each new defensive IT obstacle—never as a disincentive to attempt a breach, but as a fresh challenge to be overcome. Despite rising IT security budgets, 2016 was a banner year for major—sometimes spectacular—breaches. The nonprofit Identity Theft Resource Center (ITRC) reported 845 total breaches in its banking/credit/financial, business/educational, government/military, and medical/healthcare categories, representing the compromise of 29,765,131 records.26

HOW THE SHIFT FROM DIGITAL SYSTEMS OF RECORD TO SYSTEMS OF ENGAGEMENT OUTRAN CYBERSECURITY

Before the rise of the Internet, computer systems were primarily systems of record. They stored data and made it available for retrieval and interaction on a scale limited to a given network, which had few connections to the world beyond it. For such systems, a Maginot Line approach to security—a digital wall or line of firewall and software fortifications defending and protecting a perimeter—was appropriate as well as reasonably effective. Perimeter protection evolved into the second wave of cybersecurity strategy and survives today primarily as the network firewall. To the extent that the computer was a physical machine isolated from other machines or connected only to trusted machines and trusted users, the main security concerns were protecting the physical equipment and preventing some intruder or a rogue insider from uploading malicious software or downloading valuable data via a floppy disk. This was computer security rather than network security, and it became inadequate with the explosive growth of the Internet, which meant that most computer systems had morphed from systems of record to systems of engagement, decentralized and open to peer interactions. No longer were security strategies modeled on physical-world scenarios sufficient to maintain complete security, although this method of intrusion prevention accounts for about $20 billion in security products sold worldwide.

ACTION ITEM

 

One way to understand the need for resilience in addition to security is to recognize that computers have changed from devices that primarily create and store records into devices that mostly engage with other devices. When computers were used mainly as calculators, financial ledgers, employee information files, and the like, vaultlike security was imperative and sufficient. Now that we use intensively interconnected computers to interact with other intensively connected computers, security is still necessary but by no means sufficient. Today, we must assume that security measures will sooner or later be penetrated, because we are all exposed. For this reason, active measures of resilience are necessary to allow networks to continue to operate while breaches are contained.

The pre-Internet era was analogous to the situation on the Western Front in World War I (1914–1918), which was fought along a demarcated, relatively static line of physical trenches. More recent wars, beginning with the Vietnam War of the 1960s and 1970s, have been fought on multiple and typically fluid “fronts.” In fact, the word and the concept of a front are meaningless in such wars. The web-based, cloud-oriented, intensively interactive networked systems that characterize the modern computing landscape likewise present no single front or perimeter to defend. It is often impossible to distinguish clearly and cleanly between “trusted” and “untrusted” computers. Firewalls are important, but they won’t stop an insider from thoughtlessly clicking on a toxic link in a phishing email. Besides, the value of most networks in an era of engagement is in who they include, not who they exclude. A dynamic security strategy begins with inherently secure software and firmware design.

The resilience of a physical structure depends on the resilience of the individual components with which it is built. The resilience of an organization depends on the individual resilience of its members. The resilience of a digital network begins with software and firmware elements that are inherently designed to be secure. Of course, it is not enough simply to aggregate resilient elements. Resilient elements connected poorly do not create a resilient network, not in a digital system and not in a corporate or institutional system. Even resilient people make mistakes, whether they are building a digital network or designing an organization.

Software security is necessary for network security, but it is not sufficient. There is a wider danger, which may be summed up in a warning issued by the FBI back in 2012. According to Gregory S. Saikin of the Baker Hostetler law firm, the FBI warned users of Internet-based social networking websites that hackers, “ranging from con artists to foreign government spies,” trolled for the purpose of exploiting “the users’ identifying and related personal information.” The FBI report explained that these social networking hackers were “carrying out two general tactics, which are often combined.” They acted as “social engineers,” exploiting personal connections through social networks, and they wrote and manipulated software code “to gain access or install unwanted software on your computer or phone.”

The FBI warned that the “hackers are impersonating social networking users with the intent to target the user’s workplace.” The favored tactic, the FBI pointed out, was “spear phishing,” in which an attacker sends an email that appears to be from a trusted or known source and of interest to the targeted user. The email typically contains a hyperlink or an executable file. When the victim clicks on the link or opens the file, a malware program is installed in the target’s computer. Depending on the nature and function of the malware, the assault might provide the hacker access to the firm’s data, including (for example) trade secrets, security measures, employee files, or, in the case of the attacks on Target Corporation, credit card data.27

Most successful attacks on computer networks use Trojan horses that are introduced into the network through spear phishing emails or messages. No matter how sophisticated the particular malware program is, it is almost always inert—harmless—until a human being opens the phishing email and clicks on whatever link or attachment it offers. Thus, the second stage of a network attack almost always depends on social engineering to get started. That is, a human being has to be persuaded to open the gate, admitting the Trojan horse into walled Troy.

Savvy computer users and those working for savvy networked organizations are often quite aware of and sophisticated about spear phishing emails. They are not easily duped. But, sooner or later, an extremely good spear phishing exploit will overcome the healthy, educated, and practiced skepticism of even the savviest user. Hillary Clinton’s presidential campaign manager John Podesta fell victim to a “spear-phishing hack . . . instigated with an email that purported to come from Google informing him that someone had used his password to try to access his Google account. It included a link to a spoofed Google webpage that asked him to change his password because his current password had been stolen.” What happened next was the collision of human error with digital technology. An aide to Podesta did precisely the right thing, immediately forwarding the suspicious email to the Clinton campaign’s IT staff to ask if it was legitimate. A staffer, Charles Delavan, replied that it was a “legitimate e-mail” and that Podesta should “change his password immediately.” The thing is, Delavan had meant to type “illegitimate,” but had left out the “il.” Thanks to this typo, Podesta gave his password to the Russian-based hacker “Fancy Bear,” and his emails began appearing on WikiLeaks in early October.28 Increasingly adept and persuasive efforts at social engineering combined with dogged persistence, not technical innovation in the craft of malware creation, have made recent attacks particularly damaging. Add the mindset of today’s most dedicated hackers—a mindset that embraces organized crime as a business model—then stir in the inexhaustible abundance of credulous human fallibility, and you have a recipe for network vulnerability, regardless of what level and combination of hardware and software defenses are used. Innovation and the human factor render every network intensely dynamic, with too many moving parts to defend with absolute certainty. The only alternative is to build in resilience that will absorb an attack and contain it while providing a sufficient level of reliable service and uninterrupted connectivity to allow the enterprise to continue operating while under attack.

ACTION ITEMS

 

Spear phishing is an attack that comes via an email that appears to be from an individual or business you know and therefore trust. It is a con that exploits that trust to get credit card and bank account numbers, passwords, and financial information on your computer and computers networked with you. Your network is only as resilient as the least resilient individual connected to it; therefore, educate everyone on your network to do the following:

Think before clicking on any email attachment.

Understand that attackers can persuasively mimic emails from legitimate organizations, corporations, and individuals with whom you have a relationship.

Learn how to use the features of your email client (the program you use to access your email) so that you can see the true origin of an email. Better yet, find out how to set your client to show you this automatically. If “Your Bank” shows up in the sender’s display name, but the actual address uses an unrelated domain, you know that you are being phished (see the sketch following this list).

Skilled spear phishers use social media and other sources to learn your name and to discover companies and organizations with whom you do business. The attacker may even know that you made a recent purchase and may refer to it. This knowledge is often reflected in the content of the email.

Do not be lulled by familiarity. Resist the impulse to click on a button or hyperlink. Instead, use your browser to go directly to the company (bank, partner firm, etc.) the email purports to be from. Check for messages there.

Do not reply to the email. Do not furnish any information requested. Instead, use your browser to go directly to the organization that the email purports to be from.

Report phishing emails to your organization’s IT or digital security manager.

Guard your personally identifiable information (PII). Audit your online presence. What kind of information do you disclose on the social web? Your name? Email address? Friends’ and colleagues’ names? Friends’ and colleagues’ email addresses? What kinds of personal and professional information do you disclose in your posts? Assume an attacker will gain access to all of this.

Audit your passwords. Don’t make them easy to figure out. Don’t use the same password for multiple accounts. Change passwords frequently.
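
For an IT or security manager who wants to automate the “check the true origin” advice above, here is a minimal Python sketch. It assumes you maintain a list of organizations you deal with and the domains they legitimately send from; the organization names and domains below are invented for illustration, and a production system would lean on SPF, DKIM, and DMARC results rather than a hand-kept list.

```python
from email.utils import parseaddr

# Hypothetical mapping of organizations you deal with to the domains they
# legitimately send mail from; the names and domains are illustrative only.
KNOWN_SENDERS = {
    "your bank": {"yourbank.example"},
    "parcel service": {"parcels.example"},
}

def looks_like_spoof(from_header: str) -> bool:
    """Flag a From header whose display name claims a known organization
    while the actual address uses a domain that organization never uses."""
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    claimed = display_name.strip().lower()
    for org, legitimate_domains in KNOWN_SENDERS.items():
        if org in claimed and domain not in legitimate_domains:
            return True   # "Your Bank" in the name, an unrelated domain in the address
    return False

print(looks_like_spoof('"Your Bank Security" <alerts@verify-yourbank.example>'))  # True
print(looks_like_spoof('"Your Bank" <notice@yourbank.example>'))                  # False
```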

HOW DECENTRALIZATION CAN ENHANCE RESILIENCE

Resilience is a necessary strategy, but it does have limits. Resilient network architectures built on firmware and software elements designed with security as a high priority can usually be counted on to reduce the impact of attacks. Scanning tools identify vulnerabilities in software, and modeling tools diagnose weakness in network architecture. Monitoring tools detect unusual activity that may indicate that an attack or breach is in progress. Used in combination, such tools help us to design, build, and maintain resilient digital networks. An internal network may operate with a considerable degree of central control; however, the hallmark of the Internet, the vast external network to which most enterprise systems of engagement are connected, is decentralization. As noted in Chapter 1, security expert Richard A. Clarke points out that the earliest architects of the Internet, back in the ARPANET days, “did not want [the Internet] to be controlled by governments, either singly or collectively, and so they designed a system that placed a higher priority on decentralization than on security.”29 Clarke sees this as an inherent source of weakness, making it difficult to provide adequate protection to a network of nodes that, while interconnected, are also independent of one another. Yet if decentralization is a source of insecurity, it is also a source of resilience in that it denies attackers the opportunity to strike a fatally decapitating blow.

The decentralized Internet has sometimes made non-digital “real-world” networks more resilient. For example, by enabling coordination of individual action while simultaneously allowing decentralized leadership, the Internet has made some popular political movements less vulnerable to a decapitating attack by the forces of a repressive central government. A case-in-point often cited is the 2010 “Jasmine Revolution” in Tunisia, widely credited with igniting the Arab Awakening or Arab Spring. In 2011, journalist Colin Delany reported on a discussion at National Public Radio’s Washington, D.C., headquarters led by a young Tunisian protester named Rim Nour.30 Nour was a Tunisian whose background combined technology and public policy. He told the NPR audience that social media did not foment the Jasmine Revolution, but did accelerate it and, even more important, helped Tunisians organize and maintain it. By 2010, many Tunisians were enthusiastic users of Internet technology, and 85 percent of the population owned cellphones. Some 2 million of the country’s 10 million residents and an additional 2 million Tunisian expats were on Facebook at the start of the Jasmine Revolution.

Although Western journalists sometimes called the Jasmine Revolution the “Twitter Revolution,” Twitter had a tiny footprint in Tunisia during 2010. No more than 500 Twitter accounts were active in the country. Yet those Tunisians who did tweet were avid and skilled political activists quite capable of leveraging their Twitter presence in ways that produced a social impact highly disproportionate to the small number of users. The Tunisian government could do little to suppress cellphone use, but it did exercise censorship control over YouTube and other major Internet social media channels.

Nour explained that the Jasmine Revolution was sparked by the response to the self-immolation of Mohamed Bouazizi, a street merchant in the rural town of Sidi Bouzid. On December 17, 2010, when a policewoman seized the unlicensed vegetable cart by which Bouazizi eked out a living for himself and his family, Bouazizi appealed to local officials. Rebuffed by them, Bouazizi doused himself with an unidentified flammable liquid and set himself ablaze—just outside of the municipal building where he had pleaded his case in vain.

Bouazizi’s act of self-immolation was not captured on cellphone video, but the brutal police response to the subsequent demonstrations was. Activists shared the videos not only within Tunisia, but throughout the region and the world. In the West, the footage was picked up by major broadcast news networks. As protests developed across Tunisia, leaders organized what Western commentators called “smart mobs,” creating a political movement coordinated via cellphones and Facebook. The demonstrations gathered so much momentum that Tunisian president Zine El Abidine Ben Ali fled the country. At this, the revolution threatened to tip into violent anarchy, which was reined in by social media appeals that counteracted rumors and promoted disciplined organization. The entire drama was shared online with the world, and the global response was, in turn, transmitted back to the Tunisian demonstrators via social media.

Belatedly, the Ben Ali government launched a media counter-offensive via the television and radio networks it controlled. This demonstration of central power, however, proved to be no match for decentralized social media, which poked holes in government lies. When, for instance, government TV broadcast a pro-Ben Ali demonstration, Jasmine activists posted on Facebook their video footage revealing how few supporters Ben Ali actually had. The decentralized Internet facilitated the exchange of information among activists and made it possible to respond to real-world social and political developments in real-time. Mobile video technology and the Internet, especially as connected to conventional broadcast media worldwide, gave the Jasmine Revolution global exposure.

Tragically, the Tunisian experience of a successful and productive revolution proved to be the exception during the next few years of Middle East turmoil. The same Internet whose decentralization outmaneuvered the tyranny of a repressive central government enabled organizations like al-Qaeda and the Islamic State (or ISIS) to recruit and motivate, at times even coordinate, the actions of terrorists across the globe. The decentralization capable of defeating a despotic government also defies the efforts of democratic governments to locate the sources of recruitment and direction or to identify individual recruits to terror. Certainly, the individualized, interactive nature of networked communication prevents simply shutting down the network in a bid to interrupt terrorist recruitment and other plotting.

HOW DECENTRALIZATION CAN DIMINISH RESILIENCE

The decentralized architecture of the Internet, together with the ability even relatively unsophisticated hackers possess to use off-the-shelf software and web solutions to spoof Internet addresses, makes tracing threats back across the net extremely difficult. When epidemiologists investigate an outbreak of disease, they always begin by attempting to identify “patient number one,” the initial source of infection. It is a difficult but feasible task. In the case of digital networks, the difficulty of finding the original perpetrator of, say, a breach or a DDoS attack is compounded by the attacker’s ability to connect through large numbers of network nodes, to spoof points of origin, and even to enlist the aid of governments and government-related institutions willing to help mask the source of an attack.

Those whose responsibility it is to serve as guardians of enterprise or government networks continually find themselves outgunned when it comes to preventing attacks. When an attack hits, they do possess certain forensic tools and techniques to determine—or at least guess at—the source of an attack. The breach of the Democratic National Committee in 2016 and the WikiLeaks disclosure of emails and other materials embarrassing to Democratic presidential candidate Hillary Clinton and others bore forensic hallmarks of other attacks by Russian state-sponsored hackers.31 Yet, as of this writing, no direct means of remediation or retaliation precisely targeting a specific set of hackers could be found. The victims and law enforcement have almost no practical recourse. Besides, the damage was done by the time credible theories of the sources of the attacks had even emerged. As a practical matter, the most that could have been done to defend against the breach was to have monitored the DNC’s presumably far-flung networks with tools capable of showing the dynamic topology of the networks and the traffic flow across them. This might well have enabled a timely, agile, and resilient response to minimize the volume of the material exfiltrated. As we know from other major breaches, such as the attack on Target discussed in Chapter 1, both infiltration and exfiltration during an exploit take a considerable amount of time and generate a detectable, if not obvious, level of activity.
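
That detectable level of activity is what simple traffic baselining is designed to catch. Below is a minimal Python sketch, under the assumption that you can collect daily outbound byte counts per host; the host name and numbers are invented for illustration, and real monitoring products use far richer models than a single standard-deviation test.

```python
from statistics import mean, stdev

def exfiltration_alerts(baseline: dict, today: dict, threshold: float = 3.0) -> list:
    """Flag hosts whose outbound traffic today is far above their own history.
    baseline maps host -> past daily outbound byte counts; today maps host ->
    bytes sent so far today. A large deviation may indicate bulk exfiltration."""
    alerts = []
    for host, history in baseline.items():
        if len(history) < 2:
            continue                              # not enough history to judge
        mu, sigma = mean(history), stdev(history)
        observed = today.get(host, 0)
        if sigma > 0 and (observed - mu) / sigma > threshold:
            alerts.append((host, observed, round(mu)))
    return alerts

baseline = {"pos-terminal-17": [120_000, 90_000, 110_000, 95_000]}
today = {"pos-terminal-17": 48_000_000}           # a sudden bulk transfer off the network
print(exfiltration_alerts(baseline, today))
```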

HOW NETWORK KNOWLEDGE BUILDS NETWORK RESILIENCE

Tools that model your network enhance your knowledge of it: they show how the bad guys can get into your systems and where they can go once they have infiltrated them, and they light up the paths to your most critical assets. Software that provides this level of insight can buy you the time you need to contain an intrusion and prevent exfiltration, minimizing its immediate impact so that you can stay in operation even as you work to remove the threat. The better you can prevent an attack, or contain or interrupt an incident, the smaller you make any post-breach data compromise.
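
As a concrete illustration of what “lighting up the paths” means, here is a minimal Python sketch that treats the network as a graph of which hosts can reach which and enumerates the routes from an internet-facing entry point to a critical asset. The host names and reachability rules are invented; real network-modeling tools build this map from firewall rules, routing tables, and vulnerability scans.

```python
from collections import deque

# Toy reachability map: which hosts can open connections to which others.
# The names and rules are illustrative only.
REACHABLE_FROM = {
    "internet": ["web-server"],
    "web-server": ["app-server"],
    "app-server": ["db-server", "file-share"],
    "workstation": ["app-server", "file-share"],
    "db-server": [],
    "file-share": [],
}

def attack_paths(start: str, target: str, graph: dict) -> list:
    """Enumerate simple paths an intruder could follow from an entry point
    (for example, the internet) to a critical asset (for example, the database)."""
    paths, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == target:
            paths.append(path)
            continue
        for neighbor in graph.get(node, []):
            if neighbor not in path:              # avoid revisiting hosts
                queue.append(path + [neighbor])
    return paths

for path in attack_paths("internet", "db-server", REACHABLE_FROM):
    print(" -> ".join(path))                      # the route that needs watching
```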

There is no sovereign strategy against breach or shipwreck except severing connectivity or remaining ashore. In network parlance, that would be “air gapping” the network. No operator of a system of engagement can unplug, any more than a sailor can remain beached. Fortunately, there are practical means, whether you manage a network or handle a ship, of increasing and maintaining a level of resilience sufficient to spare you all or some damage or loss in the event of a mishap. This has always been true of networks and ships. In the case of networks, however, it has never been a more urgent truth than it is today.

TAKEAWAY

Resilience is a matter of reducing the volume and severity of damage and loss as well as staying in business or on mission. In such a reduction is the possibility not only of survival and recovery, but even of continuing to operate without interruption. In an intensively networked civilization “of fibrous, threadlike, wiry, stringy, ropy, capillary character,”32 in which digital and non-digital networks interface at so many points—trillions, perhaps—connectivity is power. The catch is that connectivity is also a threat. The ship allows you to set sail, even as it exposes you to the possibility of shipwreck.
