5

PORTRAIT AND LANDSCAPE

Achieving Resilience in Our Fragile Digital Environment

In the Great Recession of 2007–2008, we all learned a new concept. It was called “too big to fail,” and it described certain business entities—especially financial institutions—that had become so large, so interconnected, and so complex that their failure would be catastrophic to the global economic system. For this reason, governments deemed them “too big to fail” and supported them at virtually any cost.

That was a debatable idea in 2007–2008. Today, it is an impossible idea, because today virtually any business can become too big to fail. Recall that the spectacularly costly cyber breach of Target (Chapter 1), a giant corporation with millions of customers, began when one small HVAC service contractor fell prey to a spear phishing exploit. Small as that contractor was, the connection of its compromised network to Target’s network compromised not only Target’s network but also the millions of nodes beyond the network Target directly controlled, namely the credit cards of at least 40 million customers and the personally identifiable information (PII) of an additional 30 million.

We don’t think of corporate behemoths as inherently fragile structures. In fact, the more connected a corporation is, the less prone to failure it seems. After all, opportunity and profit grow with connection. Yet as networks become increasingly complex, they become increasingly difficult to understand on anything approaching a granular level. The complexity and lack of understanding create instability, multiplying not only the chances of failure, but the magnitude of failure.

What, then, are the consequences of being interconnected times a trillion or more?

QUESTION: WHAT IS A TRILLION? ANSWER: INSTABILITY

Only in relatively recent times have we all become obliged to think about trillions. Consider: As of 8:10 a.m. (U.S. ET), June 6, 2017, the CBO (Congressional Budget Office), OMB (Office of Management and Budget), GOP House Committee on the Budget, and the U.S. Debt Clock can’t quite agree on the amount of the national debt. They give figures ranging from more than $23 trillion (CBO) down to more than $20 trillion (GOP).1

Just how much is a trillion? The easy answer is 10^12, a 1 followed by a dozen zeros. But that still leaves us with a mere number. We could also try the old tricks of visualizing a trillion one-dollar bills . . .

• Stacked = 67,866 miles, or more than one-fourth the distance from the earth to the moon

• Laid as a carpet = 3,992 square miles, enough to cover an area bigger than two Delawares

• Lined up end to end = 96,906,656 miles, greater than the distance from the earth to the sun (92.96 million miles)

If you prefer to think in temporal terms (time is money, after all), spending a trillion dollars at the rate of one dollar per second would take 31,700 years.2

So, multiply any of these illustrations by 20 to 23, and you might gain a (presumably horrified) notion of the magnitude of the U.S. national debt.
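The arithmetic behind these visualizations is easy to check. The sketch below assumes standard U.S. bill dimensions (about 0.0043 inches thick and 6.14 inches long; these dimensions are my assumptions, not figures from the text) and reproduces the stack, end-to-end, and dollar-per-second figures to within rounding:

```python
# Sanity-check the "visualize a trillion dollars" figures.
# Assumed bill dimensions (roughly standard U.S. currency; an assumption,
# not a figure from the text): thickness 0.0043 in, length 6.14 in.
TRILLION = 10**12
INCHES_PER_MILE = 63_360

stack_miles = TRILLION * 0.0043 / INCHES_PER_MILE   # bills stacked flat
line_miles = TRILLION * 6.14 / INCHES_PER_MILE      # bills laid end to end
years = TRILLION / (60 * 60 * 24 * 365.25)          # spending $1 per second

print(round(stack_miles))   # 67866 -- matches the 67,866-mile stack
print(round(line_miles))    # ~96.9 million miles, within rounding of the text
print(round(years))         # 31688 -- the text rounds this to 31,700 years
```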

Or you might apply your new appreciation of “trillions” to the global digital network we call the Internet. In Chapter 2 and elsewhere, I mentioned a book by digital technology design consultants Peter Lucas, Joe Ballay, and Mickey McManus called Trillions: Thriving in the Emerging Information Ecology. Their subject is our “future of unbounded complexity” and how we will either profit from it as a civilization or succumb to the risks it poses. The complexity they study is a product of living and working on a network with connections in unprecedented numbers and human-machine interactions in unprecedented volume.3

In 2012, when the book was published, the authors were already able to report that there were “now more computers in the world than there are people. . . . In fact, there are now more computers, in the form of microprocessors, manufactured each year than there are living people.” Most of these microprocessors do not “find their way into anything that we could recognize as a computer.”4 They are nevertheless nodes on the Internet—more specifically, on that increasingly large fraction of the Internet we call the Internet of Things (IoT). In 2012, the Trillions authors believed we were “arguably on the cusp of a . . . revolution: the age of Trillions” as the number of internetworked nodes reached 10^12. They argued that “pervasive computing” (that is, computing across nodes numbering in the trillions) “represents a profoundly different relationship of people to information” and is destined to be “understood as a distinct epoch in human history.” A “decade in the era of pervasive computing,” they predicted, “will bring unimaginable changes.” What can be imagined about these changes is the emergence of “instability as the status quo.” Those who design and build technology will be forced to create devices increasingly dependent on context and increasingly dynamic. This, in turn, will make dynamic change standard and, therefore, bring to our pervasively networked environment an “inherent and persistent instability.”5

INSTABILITY BECAUSE OF CHANGE IN CONTEXT

Part of the instability will come from the nature of software with functions and security parameters that change depending on the context in which they are used. Not too long ago, the standard instrumentation—what aviators call avionics—on all aircraft consisted of analog dials and toggle switches. These were the hardest of hardwired devices. Each gauge and each switch performed a specific and immutable function.

Beginning in the 1970s, however, digital (sometimes called “electronic”) instrumentation was introduced, and by the end of the twentieth century, this evolved into the so-called glass cockpit, in which the hardwired dials were replaced by dynamic digital displays. They were “dynamic” in the sense that their functions changed, depending on the immediate situation—takeoff versus landing versus straight-and-level flight, for instance.

More recently, in the most advanced aircraft cockpit designs, many of the remaining toggle switches have been replaced by touchscreen controls.6 The traditional—or at least apparent—stability of hardwired analog avionics is increasingly giving way to context-dependent, context-relevant glass cockpit digital avionics with graphical user interfaces (GUI), some of which incorporate touchscreen features.7

With its dynamic avionics, the glass cockpit is intended to increase what aviators call situational awareness: “appreciating all you need to know about what is going on when the full scope of your task—flying, controlling or maintaining an aircraft—is taken into account.”8 Yet the technology also introduces a certain level of that quality the Trillions authors call instability. For instance, by 2008, at least fifty glass cockpit “blackouts” had been reported on the Airbus A320, one of the most prominent among the first generation of commercial airliners to extensively employ a glass cockpit design.9 Today, all commercial aircraft that feature a glass cockpit back up the most critical instrumentation, such as airspeed, compass, and altimeter, with traditional analog instruments.

Whether the glass cockpit enhances or diminishes situational awareness and general safety remains a subject of controversy. Some avionics experts report a negative impact on situational awareness as well as a deterioration in manual flying skills. According to a U.S. National Transportation Safety Board (NTSB) report, the glass cockpit is associated with a generally lower rate of mishaps but a higher rate of fatal accidents. So far, no one has offered an adequate explanation for this disconnect, but a plausible theory cites the effect of “risk homeostasis.” This is a situation in which “pilots will use a safety feature to enhance the aircraft’s utility rather than enjoy the increased level of safety the feature could provide. In other words, pilots use the glass cockpits to fly into conditions that they would otherwise avoid.”10

INSTABILITY INGRAINED INTO OUR INTENSIVELY DIGITIZED ENVIRONMENT

The glass cockpit is an extreme—i.e., life-and-death—example of the kind of instability ingrained into our digital environment. The degree of instability increases as more and more digital devices become nodes on the IoT.

In the case of the glass cockpit, instability may result from such technical failures as blackout as well as the dangerous complacency created by risk homeostasis. Our civilization’s increasing dependence on the Internet presents an intriguingly similar case. Instability may come from the occurrence of technical failures. For instance, “computer problems” may disrupt air travel in any number of ways: air traffic control equipment may fail, airline reservation systems may fail, Immigration and Customs Enforcement (ICE) systems may fail.11 Instability may also result from risk homeostasis. It is this phenomenon that makes phishing exploits successful, as email users, accustomed to clicking on links in official-looking messages, do so confidently and even reflexively. Digital fraud contributes significantly to instability in our digitally connected world. Indeed, it is reasonable to attribute the potentially destabilizing effect of fake news and hoaxes to a risk homeostasis created by widespread unquestioning acceptance of information conveyed via the Internet.

ACTION ITEM

 

Know the limits of your network’s security measures. Avoid the dangerous complacency of assuming that firewalls and antimalware software will not only defeat all attacks, but will do so without human intervention and judgment. Almost all cyberattacks that succeed do so because of human error. A computer user, accustomed to clicking on links in myriad email messages sent by familiar companies, readily becomes complacent. Failing to recognize any risk in reflexively clicking on what appears to be a bona fide link in a bona fide email, he clicks, unwittingly admitting a Trojan or ransomware or other malware into the network. Knowledge, awareness, and judgment—all human attainments—are critically necessary components of a resilient machine-human network. Digital automation engenders complacency but requires heightened vigilance and the exercise of informed judgment.

HOW THE WORLD WIDE WEB CREATES INSTABILITY

Other factors contribute to digitally induced instability. First, there is the sheer volume of data circulating, flowing, and surging over the global Internet. Quite apart from instability nefariously created by cybercriminals, who compromise data and steal identities, intellectual property, state secrets, or money, or who crash networks by means of distributed denial of service (DDoS) attacks, a key feature of the World Wide Web destabilizes our access to data.

Let me explain.

Anyone who went to college before the ascendancy of the Internet remembers reading and writing innumerable research papers that included a bibliography. The word bibliography strongly smells of the pre-online era. Derived from Greek via Latin, bibliography literally means “book writing,” and a bibliography is essentially a list of books, typically books the author of a scholarly essay acknowledges as the sources of the essay. Reading a paper essay with a paper bibliography even years after it was written, one can be reasonably certain of locating all of the bibliographical sources in some library somewhere or, perhaps, some combination of libraries. Today, however, a bibliography is more than “book writing.” Typically, any number of online sources are included. Often, the Internet is the only platform on which the sources are readily available, and the World Wide Web is the only practical means of accessing them. This can be a very great convenience, of course, both for the writer of a scholarly work and for readers who want to consult the listed sources. All you need do is click on a hyperlink in the bibliography (or perhaps in the body of the text) or copy and paste a URL.

But who among us hasn’t had the experience of working with an “old” online document—that is, a document perhaps ten, maybe five, maybe just two years old; maybe even a few months or weeks old—and clicking, in vain, on a dead link? “Fifty years from now, what percentage of [today’s] web references will still be operational?” the authors of Trillions ask. “We will be very surprised if the answer turns out to be greater than zero.”12
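The rot is easy to measure for yourself. As a sketch (using only Python's standard library; the classification thresholds are my own, not anything prescribed here), a minimal citation checker might distinguish live, dead, and uncertain links like this:

```python
import urllib.request
import urllib.error

def classify_status(code):
    """Map an HTTP status code to a rough link-health label."""
    if 200 <= code < 300:
        return "alive"
    if code in (404, 410):
        return "dead"       # Not Found / Gone: the pointer has rotted
    return "uncertain"      # redirects, auth walls, server errors, etc.

def check_link(url, timeout=5):
    """Request a cited URL and classify the result. Network-level
    failures (dead domains, refused connections) also count as rot."""
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return classify_status(resp.status)
    except urllib.error.HTTPError as err:
        return classify_status(err.code)
    except (urllib.error.URLError, OSError):
        return "dead"

# Offline demonstration of the classification logic:
print(classify_status(200), classify_status(404), classify_status(500))
```

Run against a bibliography even a few years old, a checker like this typically turns up a sobering fraction of "dead."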

In Chapter 4, we discussed the creation of the URL—the Uniform Resource Locator—as a great advance for the human-digital network interface. True enough, for it made the World Wide Web a practical possibility. Without the URL, even if your computer were plugged into the global network, you would need to know the numerical IP address of a computer or server on which the data you seek is stored. The URL translates the machine-friendly series of numbers into a human-friendly letter- or word-based pointer to a data location on a computer network and serves as a mechanism for retrieving the sought-after data.

When you click on a hyperlink or type in a URL, you are accessing a node on the Internet. Functionally, however, the clicking feels as if you are directly accessing the one piece of data or the one dataset you want. This is way better than the old-fashioned printed library catalog card, which gave you the author and title of the book you wanted, together with its Library of Congress Control Number (LCCN) or (in some libraries) its Dewey Decimal Classification number, then sent you on your way to retrieve the book on the library shelf. Of course, that took some legwork, and then, even with book in hand, you often had to find the single page—perhaps even the single sentence—you wanted. This is nowhere near as quick and easy and efficient as clicking on a hyperlink, which will take you not just to the “book” or even to the sentence, but right down to the very word you seek.

Unless the link is dead. Then you will go nowhere and find nothing.

In the pre-digital library, it was always possible that the book listed in the card file was not on the shelf at that moment. The library’s copy or copies might all be in the hands of others, or someone might have mis-shelved the item. Nevertheless, because most books were printed in at least comparatively large numbers—and many in truly massive numbers—you could be quite confident that you would find the volume you sought somewhere. It might be inconvenient or difficult to find, or the library’s particular copy might be missing, but the data was almost certainly not lost forever.

This is not always the case with a dead link or a URL that is no longer functioning. “If you quote from and cite, say, Moby-Dick, everyone understands that you are not referencing one particular instance of that book. Any of the millions upon millions of more or less identical replicas of Melville’s words will (for most purposes) do equally well.” In contrast, the URL you cite does not point “to a web page that has been massively replicated like a published book.” It is a unique address that “will remain relevant only as long as the owner of that ‘place’ [on the World Wide Web] possesses the resources and the will to maintain the pointer.” How long will that be? No one knows, but we can safely assume it will be far from forever. “Sooner or later, all links on the World Wide Web will go dead.”13

The Trillions authors quite reasonably judge that this degree of instability is “no way to run a civilization.” Yet we all blithely cling to URLs as “the primary way that knowledge workers around the world document their thinking and research,” even though these pointers generally have “a half-life of only a few years” at most.14 When a data item or dataset loses all connection with an operational URL, it becomes inaccessible. This may not be as kinetically dramatic an instance of wanton data destruction as the burning of the Library of Alexandria or Henry VIII’s sacking of the monasteries (Chapter 3), but when data becomes inaccessible it is effectively lost just the same. Relying on URLs to protect data—“protect” in the sense of preserving the accessibility of the information—is about as resilient a network strategy as building your house on sand.
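That "half-life" framing makes the arithmetic concrete. If links decay exponentially with a half-life of h years, the fraction still working after t years is 0.5 ** (t / h). Assuming a three-year half-life (my illustrative figure; the text says only "a few years"):

```python
def surviving_fraction(years, half_life=3.0):
    """Fraction of links still alive after `years`, given exponential
    decay with the stated half-life (an illustrative assumption)."""
    return 0.5 ** (years / half_life)

print(f"{surviving_fraction(10):.3f}")   # ~0.099: one decade-old link in ten
print(f"{surviving_fraction(50):.1e}")   # fifty years on, effectively zero
```

On this assumption, roughly 90 percent of decade-old links are dead, and a fifty-year-old bibliography is all but unusable.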

CLOUDY WITH A CHANCE OF INSTABILITY

But let us for the moment hold in abeyance further discussion of the alarming instability of the URL-based World Wide Web and turn to today’s most popular digital strategy for data protection: cloud data storage or cloud data backup.

Begin with the very image of a “cloud.” In the context of data backup, storage, and retrieval, cloud is a fiction in the same way that cyberspace is a fiction when used to describe the Internet. More than that, both are dangerous fictions, because they obscure rather than reveal the nature of the things they describe. Both terms evoke something ethereal, heavenly, or celestial. They connote the eternal. Best to stick a pin in this hyperinflated delusion. Lately, I keep hearing sensible people say that “the cloud is just someone else’s computer.” The fuller truth is that both “cyberspace” and “the cloud” are intensively and pervasively physical. Both rely on machines, cables, connections, satellites, transmitters, and receivers. Like all physical systems, these things are far from eternal; on the contrary, they are vulnerable to mechanical failure and breakdown, as well as to nefarious tampering. They are also costly. They are owned, operated, maintained, and controlled, for the most part, by corporate entities with profit motives.

While capitalism may long endure, no corporation is forever. True, some companies have lasted a very long time. For instance, in Japan, more than 20,000 firms are more than a hundred years old, and a few are more than a thousand years old. Nishiyama Onsen Keiunkan, a hotel located in Hayakawa, Yamanashi Prefecture, in the Japanese Alps, has been operating continuously since AD 705, making it the oldest company in the world.15 But the more than 20,000 hundred-year-old-plus companies in Japan, together with the handful that exceed a thousand years, are extreme outliers in global business. “The average lifespan of a company listed in the S&P 500 index of leading U.S. companies has decreased by more than fifty years in the last century, from sixty-seven years in the 1920s to just fifteen years today,” according to Professor Richard Foster from Yale University. Foster says that today’s “rate of change ‘is at a faster pace than ever,’” and “he estimates that by 2020, more than three-quarters of the S&P 500 will be companies that we have not heard of yet.”16

The average lifespan of a Fortune 500 (or equivalent) multinational corporation is forty to fifty years. A third of companies listed in the 1970 Fortune 500 were gone by 1983.17 In other words, the life expectancy of our biggest, most established corporations is little more than half the average lifespan of a human being. And yet it is such corporate entities—or lesser ones—that own the so-called cloud. The “bottom line,” say the authors of Trillions, “is that as long as you choose to trust all of your data to a single commercial entity, those data will remain available to you no longer than the lifetime of that entity and its successors.”18 Blanche DuBois, the impecunious and insecure aging Southern belle of Tennessee Williams’s A Streetcar Named Desire, “always relied on the kindness of strangers.” We grasp an even thinner reed when we rely on strangers whose stewardship of our data ends when they cease to turn a profit.

AS A DIGITAL CIVILIZATION, WE CLING TO INSTABILITY

At the heart of digital resilience is the imperative to protect data. Yet, as a digital civilization, we cling to a highly unstable URL tool for identifying, locating, and retrieving data on our global network, and we rely on profit-driven private-sector corporations to safeguard our most critical data. Industry associations, governments, and global institutions need to create some resilient alternatives to this inherently unstable data infrastructure. It is a matter critical to civilization. If the Library of Alexandria can be burned, a corporate cloud can be switched off. Same devastating effect.

Some have suggested that we return to the essence of the Internet as it was originally conceived in 1969, as a true peer-to-peer (P2P) network, owned by everyone and by no one. In this model, networking is radically distributed and its resilience derives from the very absence of central control. The Trillions authors believe that the architecture of such a network could be designed such that “every time a node appears or disappears, the network automatically reconfigures, and, of course, if properly designed, it scales forever.” Because there is no single central repository, it is a “real cloud,” which means that the network can “withstand attackers,” and because it is not owned, “it can’t be shot down by its own proprietors either.”19 Such a radically distributed architecture could make possible a “true Information Commons,” in which data would not be stored on one or a handful of servers, but would be massively replicated (like those copies of Moby-Dick) on many computer systems accessible peer-to-peer.20 Instead of relying on inherently ephemeral URLs to access data, each unit of data that is intended to be universally available will be entered into the Commons and assigned a “universally unique identifier,”21 unambiguous, readily searchable, and as close to eternal as anything human can be.
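One way to make such identifiers "universally unique" is content addressing, in which the identifier is computed from the data itself rather than from where the data happens to live (this is how systems such as IPFS work; the sketch below is my illustration, not the Trillions authors' design):

```python
import hashlib

def content_id(data: bytes) -> str:
    """Derive an identifier from the data itself. Every faithful replica,
    on any peer, yields the same ID, so the pointer never depends on one
    owner keeping one server alive."""
    return hashlib.sha256(data).hexdigest()

original = content_id(b"Call me Ishmael.")
replica = content_id(b"Call me Ishmael.")   # the same text on another peer
tampered = content_id(b"Call me Ahab.")

print(original == replica)    # True: any copy anywhere satisfies the lookup
print(original == tampered)   # False: the ID also detects alteration
```

Unlike a URL, which dies with its owner's server, a content-derived identifier remains valid as long as any replica survives anywhere on the network.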

ACTION ITEM

 

Recognize that our individual networks, no matter how resilient we may work to make them, are connected to an inherently unstable—and therefore insufficiently resilient—Internet. The URLs we use to access data are impermanent, even ephemeral. Cloud storage, although handy for both accessibility and security, is a network, subject to the same insecurities and instabilities that threaten all networks and subject to the whims of profit-driven operation and ownership. Consider becoming an activist in the promotion of a more resilient Internet infrastructure and a data accessibility solution more durable and resilient than the URL. In the meantime, take proactive steps to ensure the security and permanence of your data storage. Do not take the permanence of “the cloud” for granted.

Of course, the “Commons” alternative to corporately owned clouds and papier-mâché URLs is not a plan. It is at best a call for a plan. But it is hardly without precedent, which is none other than the Internet as originally conceived, the Internet that likewise began life as a call for a plan. If it still seems “visionary”—as in very nice but not very practical—we must ask ourselves just how “practical” it is to continue to put all our precious digital eggs in a basket that is shallow, ragged, and full of holes, a basket that becomes less and less resilient the more we heap into it.

THE PHYSICAL INSTABILITY OF THE INTERNET

Although massively distributing publicly accessible data on a true P2P network would increase the resilience of the Internet, it can never transform the Internet into a true realm of cyberspace, an ethereal region in which information somehow floats free. The necessity of a physical infrastructure will always give the lie to the space half of “cyberspace,” because the reality is that “cyberspace” has what IT experts call a “backbone.” This is the trunk line of the Internet.

There are hundreds of Internet service providers (ISPs) in the United States, but only a handful of so-called Tier 1 ISPs that control the backbone that, in turn, connects directly with the smaller ISPs. The U.S. “Tier 1 Club” includes AT&T, Verizon, Sprint, CenturyLink, Level 3, NTT/Verio, and Cogent. The backbone they own or control consists of hundreds of thousands of miles of fiber optic cable bundles. The overland portion of this backbone network connects at the shoreline to undersea fiber optic cables. As we know, the Internet is massive and complex, a many-branching network. Yet its backbone is relatively simple—and quite vulnerable to both digital and physical attack. If an attacker wishes to take down or to otherwise compromise the Internet, there is no portion of the global network that provides an attacker greater leverage, yields more bang for the buck, than the backbone. As reported in a 2015 article in the MIT Technology Review, “It is disturbingly easy to attack the backbone of the Internet to block access to a major online service like YouTube, or to intercept online communications on a vast scale.”22

Security researchers point to “longstanding weaknesses in the protocol that works out how to route data across the different networks making up the Internet. Almost all the infrastructure running that protocol does not even use a basic security technology that would make it much harder to block or intercept data.” This technology is available, but it is not being used, presumably (as Wim Remes of the security company Rapid7 explains) because there is “limited probability of these attacks”; however, he points out, “the impact once they happen is huge.”23

The significant weakness is in the border gateway protocol (BGP), which is employed by the large routers of the Tier 1 ISPs (among others) “to figure out how to get data [from the backbone] to different places.” BGP lacks “security mechanisms . . . to verify the information they are receiving or the identity of the routers providing it.”24 This is a deficiency that has been known for decades and was “the basis of the hacking group L0pht’s 1998 claim before Congress that they could take down the Internet in thirty minutes.” Indeed, a hitherto unexplained 2008 diversion of U.S. web traffic via Belarus and Iceland may have been the result of an attack on routers at the backbone.25 The security company Qrator Labs has also demonstrated that “BGP could be manipulated to obtain a security certificate in the name of a particular website without permission, making it possible to impersonate [the website] and decrypt secured traffic.”26
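The core weakness is easy to model. In the toy below (an illustration of the missing-verification problem, not real BGP), routers accept any announcement without checking who made it, and the most specific prefix wins, so a bogus announcement quietly captures traffic:

```python
routes = {}

def announce(prefix, origin):
    """Accept a route announcement with no check of whether the announcer
    is entitled to that prefix (the verification BGP lacks)."""
    routes[prefix] = origin

def best_route(address):
    """Longest-prefix match: the most specific announcement wins."""
    matches = [p for p in routes if address.startswith(p)]
    return routes[max(matches, key=len)] if matches else None

announce("203.0.113.", "legitimate-isp")
announce("203.0.113.7", "hijacker")   # a more specific, bogus claim

print(best_route("203.0.113.9"))   # legitimate-isp
print(best_route("203.0.113.7"))   # hijacker: traffic quietly diverted
```

Proposed fixes such as RPKI-based route origin validation amount to adding the missing identity check to the announce step.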

Because the Internet is widely distributed, attacks at one or even many nodes may have significant, possibly dire, consequences, but they are unlikely to bring the whole network down. The closer a hacker gets to the backbone, however, the greater the consequences of a massively successful attack. Among the prime targets for anyone bent on global disruption are the undersea cables, fiber optic bundles that carry 99 percent of all transoceanic digital communication.

On October 25, 2015, Pentagon officials reported concern that Russian submarines and “spy ships” were “aggressively operating near the vital undersea cables.” This raised “concerns among some American military and intelligence officials that the Russians might be planning to attack those lines in times of tension or conflict.”27 The crudest kind of attack would use a submarine to place and detonate an explosive charge near a cable to blast it apart and sever it.

The Russians have already employed disruptive attacks against the Internet during times of political tension and war. In 2007, Russian-based hackers mounted three weeks of massive distributed denial of service attacks against Estonia after “a row that erupted . . . over the Estonians’ removal of the Bronze Soldier Soviet war memorial in central Tallinn.” The “websites of government ministries, political parties, newspapers, banks, and companies” were disrupted.28 To this day, there is no conclusive proof directly connecting the Russian government to the cyberattacks, but circumstantial evidence abounds. Konstantin Goloskokov, a leading member of a pro-Kremlin Russian youth organization called Nashi, admitted that Nashi had been involved. Goloskokov was not only a Nashi activist, he was also assistant to Sergei Markov, a Duma Deputy (the equivalent of a U.S. member of Congress).

A Russian word meaning “ours,” Nashi is the short form of “Youth Democratic Anti-Fascist Movement ‘NASHI.’” The organization is officially funded by private Russian business interests, but its creation on April 15, 2005, was enthusiastically endorsed by the Russian government. And, by 2007, Nashi was receiving the vigorous endorsement of prominent government figures, including Vladislav Surkov, who was then first deputy chief of the presidential staff and subsequently Russia’s deputy prime minister.29 In 2008, Russia invaded the Republic of Georgia (which had become independent from the Soviet Union in 1991) in a successful effort to break off two self-proclaimed republics, South Ossetia and Abkhazia, from Georgia and establish a Russian military presence in them. The invasion and “kinetic” battle were preceded by Russian cyberwarfare attacks.30

As for cable sabotage, it is nothing new and “was common during both World Wars.” During the Cold War, the Soviets were suspected of tampering with the transatlantic cable off Newfoundland, and the U.S. Navy deployed divers from submarines to tap into Soviet military communication in Operation Ivy Bells. But even non-nefarious activities are a menace to transoceanic cable. Such things as “dropped anchors and fishing nets” account for “about 60 percent of cut cable incidents.”31

Operation Ivy Bells, which began in the 1970s, ended abruptly in 1981 when an NSA employee, Ronald Pelton, sold Ivy Bells information to the Soviets for $35,000.32 The betrayal of Ivy Bells did not permanently end tapping of the undersea portion of the Internet’s backbone, however. As revealed by NSA contractor Edward Snowden through material he leaked to journalists, in reports and interviews published in The Guardian, Washington Post, Der Spiegel, and The New York Times, the U.K. GCHQ (Government Communications Headquarters) and the U.S. NSA operated collaboratively to tap virtually all digital data traveling through undersea cables. The volume was tremendous. Just one British program, Tempora, was vacuuming up some 21 million gigabytes every day. Snowden’s revelations concerning the NSA PRISM/US-984XN program exposed a technologically advanced operation for analyzing the raw data acquired through eavesdropping on the cable traffic.33

The actual tapping of the undersea cables, however, is strictly old-school, “extremely secretive, but . . . similar to tapping an old-fashioned, pre-digital telephone line” at locations along some 550,000 miles of cable about the diameter of a garden hose.34 A 2005 Associated Press report published in The New York Times and elsewhere described the USS Jimmy Carter (SSN-23), a nuclear submarine commissioned in February 2005 and fitted out with “a special capability . . . to tap undersea cables,” presumably by deploying Navy SEALs or other specialized divers to “physically place . . . tap[s] . . . along the [cable] route.”35 The locations most vulnerable to physical taps are at “regeneration points,” where devices amplify signals at intervals in what is a very long journey. “At these spots, the fiber optics can be more easily tapped, because they are no longer bundled together, rather laid out individually.”36

Of course, physically tapping cables would be easier to do on dry land than underwater, but this is impossible if the undersea cable makes landfall on the coastline of an unfriendly or unwilling country. Fortunately for the NSA, the U.K. is both friendly and willing, and, because of its Atlantic island location, it is the terminus of many cable routes. Tappers use so-called intercept probes to make the physical taps. These “small devices . . . capture the light being sent across the [fiber optic] cable. The probe bounces the light through a prism, makes a copy of it, and turns it into binary data without disrupting the flow of the original Internet traffic.”37

THE INSTABILITY OF THE HUMAN-MACHINE INTERFACE

The physical infrastructure of the global Internet is massively vulnerable from a security point of view, even in mid-ocean or where ocean meets land. The most numerous network weak spots, however, are the millions, billions, and, yes, trillions of nodes on the global Internet. The places where devices—ranging from desktop, laptop, and tablet computers to smart thermostats to smart valve control devices on oil and natural gas pipelines to smart televisions to personal performance and health devices (such as smart watches)—communicate with digital networks are the entry-exit ports of the Internet. Physical taps are possible at many of these nodes, but they are hardly necessary for eavesdropping. As we saw in the case of the Target breach (Chapter 1), malware programs conveyed by the Internet itself are the most effective means of infiltrating targeted networks and computers. In some cases, the malware is installed purposely by someone onsite—a rogue employee, perhaps, or a covert agent armed with a program on a USB thumb drive. Most of the time, however, the malware is installed remotely by means of “social engineering” confidence tricks that include:

• Pretexting—in which the attacker invents a scenario that deceives an innocent insider into doing or divulging something that gains the attacker access to the targeted network or computer.

• Theft by diversion—in which the attacker persuades an innocent insider to divert data or messages intended for a legitimate recipient to a different target.

• Phishing—the most common means of attack, which typically uses an email that appears to come from a legitimate source (often a bank, credit card company, or business) requesting “verification” of personally identifiable information (PII) to avert some serious and unwanted consequence (usually suspension of an account). Skilled attackers create counterfeit emails that are virtually identical, logo graphics and all, to legitimate emails the company might send. Attackers also tend to counterfeit messages from firms with immense customer bases, so that the chance of the email being read by an actual customer is quite high.

• Spear phishing—a specialized form of phishing, in which the attacker obtains specific information about the target and uses it to customize a counterfeit email. If the information is accurate, the customization greatly increases the odds that the attack will succeed. The 2016 hack of the Democratic National Committee (see Chapter 3) was an instance of spear phishing targeted against a high-level campaign official, Hillary Clinton campaign chairman John Podesta.

• Phone phishing—another phishing subset, sometimes known as IVR phishing. Using an automated interactive voice response (IVR) device, the attacker simulates a call from a legitimate business (typically a bank or credit card company), prompting the recipient to “verify” account and identity information (especially PINs and passwords) by entering them on the telephone touchpad. The verification is urgently requested to avoid account suspension.

Collectively, these social engineering attacks employ age-old confidence tricks adapted to digital technology. Moreover, many phishing exploits, including some of the most common and destructive, require very little crafty persuasion. They simply prey on our natural curiosity or greed. Emails are blasted to tens or hundreds of thousands of addressees, inviting the recipient to click on a link to obtain something free or at an impossibly steep discount, or to see something interesting or new. The recipient who yields to curiosity and clicks the link inadvertently opens an executable file that installs a malware program, which infiltrates and compromises the local network on which the computer is located.
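The telltale signs just described—urgent “verify your account” language and links whose visible text does not match their true destination—lend themselves to simple automated checks. The sketch below is purely illustrative, not the logic of any real spam filter; the phrase list, function names, and example domains are all assumptions for the sake of demonstration:

```python
import re

# Hypothetical phrases of the urgent "verify or lose your account" variety
# described above. Real filters use far richer signals than this.
URGENT_PHRASES = ("verify your account", "account suspension", "act now")

def link_mismatch(display_text, href):
    """True if the link's visible text names a domain that differs
    from the host the href actually points to."""
    shown = re.search(r"([a-z0-9.-]+\.[a-z]{2,})", display_text.lower())
    actual = re.search(r"https?://([a-z0-9.-]+)", href.lower())
    if not shown or not actual:
        return False
    return not actual.group(1).endswith(shown.group(1))

def looks_like_phishing(body, links):
    """links is a list of (display_text, href) pairs parsed from the email.
    Flag the message only when urgency AND a deceptive link co-occur."""
    urgent = any(p in body.lower() for p in URGENT_PHRASES)
    mismatched = any(link_mismatch(text, href) for text, href in links)
    return urgent and mismatched
```

A message reading “Please verify your account now” whose link displays www.mybank.com but resolves to an unrelated host would trip both heuristics; a legitimate notice whose link matches its visible domain would not.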

Because such exploits are so common, many computer users have become sufficiently savvy to avoid clicking on links found in unsolicited emails from unknown senders. No matter. Many malware programs introduced via executable email attachments hijack the user’s email account, using it to send the attacker’s emails—just as if they were being sent from the hijacked user’s account. Moreover, in hijacking the email account, the malware also accesses the targeted user’s address book or contact list, so that the counterfeit emails are sent not only from the targeted user’s address but to recipients who know and presumably trust the targeted user. People too levelheaded to open an attachment associated with an unsolicited email from a stranger often open, without a second thought, an attachment apparently sent from a known and trusted source.

ACTION ITEM


Understand that malware is often used to infiltrate and hijack email accounts, thereby giving spammers and other cyber fraudsters the ability to send phishing emails to everyone in the address book of the infiltrated email account holder. To the recipient, these fraudulent emails appear to come from someone they know. Even spear phishing emails sent from the email account of a friend or colleague are usually fairly easy to spot. Do not click on any attachment in an email from a known sender if the content of the email seems uncharacteristic of the sender. For example, a vendor with whom you have a business relationship is unlikely to send an email that begins “Now is your chance to get that job you really want. . . .” If the content of an email is doubtful, contact the sender to ask about it. You can reduce the likelihood that your own email account will be hijacked by periodically changing your account password.

Network attacks based on social engineering demonstrate that the most vulnerable network nodes—or, more accurately, the most vulnerable components of a given network node—are human. The softest target is the person behind the screen: the one with a finger on the mouse or a pair of thumbs on a smartphone’s virtual keyboard, or the one who, suffering password overload, fails to create a password or change the default password when setting up a new smart thermostat or other device on the IoT. We ourselves are the weakest link in our digital networks, and we ourselves are the reason the hardware and software of our networks must be made resilient. Of course, human beings are educable. Will we heed the lessons? Yes. But will we still make mistakes? Certainly.

INSTABILITY ON THE INTERNET OF THINGS

Whether they are in business for themselves, employed by organized crime, employed by “legitimate” companies, or employed by governments (as civilians, freelancers, or members of the military), hackers are not waiting to find out the answers to these questions. As noted in Chapter 2, in October 2016, Dyn, an Internet performance management company whose products control much of the DNS, the Internet’s vital domain name system, came under two massive distributed denial of service (DDoS) attacks.

DDoS attacks are common, but what made these 2016 attacks newsworthy was their use of the Mirai botnet, which marshaled a very large number of infected devices into an assault that overwhelmed Dyn’s servers. As The Guardian reported, “Unlike other botnets, which are typically made up of computers, the Mirai botnet is largely made up of [far more numerous and less well defended] . . . ‘internet of things’ (IoT) devices.”38 Shortly after the Mirai botnet attacks were revealed, Senator Mark Warner (D-Va.) called for “improved tools to better protect American consumers, manufacturers, retailers, Internet sites and service providers,” and Mark Dufresne, director of threat research at Endgame, a Virginia-based cybersecurity company, warned of “the dangers of this IoT running rampant,” aided and abetted by “bad to middling security [with] nobody . . . knocking it out of the park.”39
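Defenders blunt volumetric floods with many layered techniques; one of the simplest primitives is per-source rate limiting. The token-bucket sketch below is a minimal illustration of that idea—an assumption-laden teaching example, not Dyn’s actual defense:

```python
import time

class TokenBucket:
    """Per-client token bucket: caps how many requests a single source
    may make. Tokens refill at `rate` per second up to `capacity`;
    each allowed request spends one token."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        # Refill in proportion to elapsed time, then try to spend a token.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

The limitation the Mirai attacks exposed is built into this design: when the flood comes from hundreds of thousands of distinct IoT devices, each source can stay comfortably under any reasonable per-client limit while the aggregate traffic still overwhelms the servers.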

Although exploiting the IoT with botnets is a new wrinkle in mounting DDoS attacks, security concerns over the interface between computer software and devices in the “real world” have been around for a long time and actually predate the rise of the Internet. Back in January 1982, “President Ronald Reagan approved a CIA plan to sabotage the economy of the Soviet Union through covert transfers of technology that contained hidden malfunctions,” including software with embedded features—called “logic bombs”—designed to trigger catastrophic equipment failures. When a KGB insider revealed to U.S. government agents that the Soviets were looking to steal advanced SCADA (Supervisory Control And Data Acquisition) software for running natural gas pipeline control apparatus (pumps, turbines, and valves), the CIA persuaded a Canadian firm that designed such software to insert a logic bomb into a program the agency knew the KGB would steal. As Thomas C. Reed, former U.S. Secretary of the Air Force and Reagan administration Director of the National Reconnaissance Office, recounted in his 2004 memoir, At the Abyss: An Insider’s History of the Cold War, the purloined software was duly installed to control critical machinery along Soviet natural gas pipelines. After what Reed called “a decent interval,” the embedded logic bomb caused a reset of “pump speeds and valve settings to produce pressures far beyond those acceptable to pipeline joints and welds. The result was the most monumental non-nuclear explosion and fire ever seen from space.” Reed reported that “there were no physical casualties from the pipeline explosion,” but “there was significant damage to the Soviet economy.”40

The Russians proved themselves remarkably adept at using digital means to tamper with real-world events during the 2007 dispute with Estonia over its removal of “the Bronze Soldier of Tallinn” from a downtown city park to the Defense Forces Cemetery outside of the capital city proper. By 2007, Estonia had earned the sobriquet “e-Stonia” because it was perhaps the most thoroughly wired nation in Europe. Its 1.3 million people were intensively networked. By November 2005, the government had shifted most of its operations entirely to the Internet. All official documents were executed and signed electronically. Cabinet-level meetings were conducted in cyberspace, and Estonians voted entirely online. In 2007, 61 percent of the Estonian population accessed their bank accounts online—exclusively—and, overall, 95 percent of all banking transactions were electronic.41

While ethnic Russians living in Tallinn staged protests against the removal of the Bronze Soldier, the nation was barraged by DDoS attacks, in which botnets consisting of tens of thousands of computers overwhelmed key Estonian websites with log-on requests, thereby disabling them. The botnets were international, networked systems normally used by disreputable e-commerce providers to disseminate spam. During these attacks, Russian online chat rooms buzzed with calls to action and included instructions on how to participate in the DDoS attacks. Estonian “government websites that normally receive 1,000 visits a day reportedly were receiving 2,000 visits every second.” The servers of a government network designed to handle 2 million megabits of traffic per second were flooded with some 200 million megabits per second. In one ten-hour-plus sustained attack, more than 90 million megabits per second of data were unleashed against Estonian targets. These included the websites of the Ministries of Foreign Affairs and Justice, which were beaten into a total shutdown. The Reform Party website was defaced with digital graffiti that included a cookie-duster mustache scrawled, à la Hitler, across the face of Prime Minister Andrus Ansip. And then, on May 3, the botnets turned DDoS attacks against Estonian private-sector websites and servers. This quickly forced most of the country’s banks to shut down, the ripples of the attack reaching well into the international banking community.42
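The jump from 1,000 visits a day to 2,000 visits a second is easy to underestimate; a quick back-of-the-envelope calculation makes the amplification concrete:

```python
# Reported baseline: 1,000 visits per day; reported attack rate: 2,000 per second.
SECONDS_PER_DAY = 24 * 60 * 60                 # 86,400

baseline_per_second = 1000 / SECONDS_PER_DAY   # ~0.0116 requests/second
attack_per_second = 2000

amplification = attack_per_second / baseline_per_second
print(f"{amplification:,.0f}x normal load")    # 172,800x normal load
```

In other words, the targeted sites absorbed well over five orders of magnitude more traffic than they were accustomed to handling.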

Moscow vigorously denied involvement in the attacks, the volume of which peaked on May 9, 2007, the Russian anniversary of the end of World War II. More recently, of course, Russia attacked the 2015–2016 U.S. presidential campaign and election. In a statement eerily reminiscent of the 2007 cyberattacks on Estonia, Russian President Vladimir Putin told reporters at the St. Petersburg Economic Forum that “‘patriotic hackers’ may have meddled in the U.S. election, but insisted that none of their potential activities were state-backed.” He even likened these individuals to “‘artists,’ who could act on behalf of Russia if they felt its interests were being threatened.” Artists, Putin said, “may act on behalf of their country, they wake up in good mood and paint things. Same with hackers, they woke up today, read something about the state-to-state relations. If they are patriotic, they contribute in a way they think is right, to fight against those who say bad things about Russia.”43 Yet the Russian president denied that his government had launched the attacks.

By the time the 2007 attacks on Estonia peaked, the Estonian government had responded to the onslaught by quadrupling the data capacity of its systems, and the attacks began to subside. On May 15, however, Russian hacktivists did manage, albeit briefly, to disable the national toll-free emergency phone number. In 2004, Estonia had joined NATO, which responded to the 2007 DDoS attacks by looking into the possibility of invoking Article 5 of the NATO charter, a provision that obligates all members to respond to an act of war against any member as an act of aggression against itself. NATO quickly backed down, however, because, as the Estonian minister of defense explained, “At present, NATO does not define cyberattacks as a clear military action. This means that the provisions of Article 5 of the North Atlantic Treaty . . . will not automatically be extended to the attacked country.”

Commenting on NATO’s dithering, one historian observed, “Technology had transformed NATO’s ring of steel around its members to a fence of tissue paper.”44 The attack did, however, move NATO to create the Cooperative Cyber Defense Centre of Excellence (CCDCOE) in Tallinn beginning in 2008. Eight years later, it should be observed, in 2016, retired Admiral James Stavridis, who commanded NATO from 2009 to 2013, cited the DNC hacks in his assessment of the current state of U.S. digital networks. “It is the greatest mismatch between the level of threat, very high, and the level of preparation, quite low,” he said on December 15, 2016. “We’re headed toward a cyber Pearl Harbor, and it is going to come at either the grid or the financial sector.”45

CREATING DIGITAL RESILIENCE ONE NODE AT A TIME

In its early evolution from the ARPANET and NSFNet, the Internet incorporated in its architecture and protocols the rudiments of resilience (Chapter 4). As the Internet has grown from four networked university computer systems (at UCLA, Stanford, UC Santa Barbara, and the University of Utah) in 1969 to quite probably a trillion or more nodes in 2016, it has become less rather than more resilient. Senator Warner has called for “improved tools to better protect American consumers, manufacturers, retailers, Internet sites and service providers.” This is laudable, but we need more than tools. We need—and by we, I mean all stakeholders in the Internet, which is very nearly everyone on our intensively networked planet—to create a culture that prioritizes resilience. To be sure, we must give security a high priority, but we also must acknowledge that no practical degree of security will ever be bulletproof.

We need to design our systems to withstand the inevitable shots that manage to find their targets. In the meantime, we must recognize that the Internet is vulnerable, which means that our thoroughly intertwined digital and analog networks—physical, social, political, financial, mechanical, and kinetic—have created an inherently unstable, insecure, and nonresilient environment. This being the case, we must design whatever networks we control to be resilient. Whether those networks consist of a handful of devices connected to the Internet or a vast corporate intranet that connects thousands of nodes to the Internet, we must take measures to ensure their resilience. This is a matter of immediate self-defense and the defense of those who are our customers, employees, and investors. This is also our contribution to the defense of the Internet we all share. Each resilient node and element of resilient architecture we add to the Internet contributes to our collective digital resilience.

TAKEAWAY

In an intensively interconnected world, instability is the status quo. We rely heavily on digitally networked devices with functions and security parameters that change depending on the context in which they are used. Because these dynamic changes are automated, we remain largely unaware of the underlying instability. Indeed, digital automation engenders complacency but actually requires heightened vigilance and the thoughtful exercise of informed judgment. We must act both individually and collaboratively—as members of a digital society and civilization—to increase the resilience of the networks we control directly as well as those to which we connect.
