6

THE MEASURE OF RESILIENCE

Assessing and Improving Your Digital Resilience

Back in high school, I read Herman Melville’s Moby-Dick, one of those datasets formerly known as a book, reproduced in the millions and therefore in little danger of disappearing when some unique URL goes dead. Aside from Ahab and the Whale, one episode has stuck with me through the years, and I know why. It’s about networks. I recently reread it.

It’s Chapter 72, “The Monkey-Rope,” and it describes a bit of whaler’s gear called by that name. Ishmael (the novel’s protagonist and narrator) ties one end of the monkey-rope around his waist and stands on deck while the “harpooneer,” Queequeg, ties the other end around his waist and is lowered down on the half-submerged harpooned whale tied up alongside the ship. Queequeg’s task is to plant a “blubber-hook” into the dead animal, so that it can be cut into transportable pieces that are loaded aboard ship. The sea is rough (in the language of network analysis, it is “unstable”), and hungry sharks, drawn by blood in the water, circle round and round. Queequeg’s life depends on Ishmael, and Ishmael’s on Queequeg. If Ishmael lets Queequeg slip, the harpooneer could be crushed between the whale carcass and the ship’s hull or he could become instant shark food. If Queequeg should fall, he would likely pull Ishmael down with him. “So strongly and metaphysically did I conceive of my situation then,” Ishmael tells us, “that while earnestly watching his [Queequeg’s] motions, I seemed distinctly to perceive that my own individuality was now merged in a joint stock company of two; . . . that another’s mistake or misfortune might plunge innocent me into unmerited disaster and death. . . .” This situation seemed to Ishmael “a gross . . . injustice” aimed squarely at himself.

And yet still further pondering—while I jerked [Queequeg] now and then from between the whale and ship, which would threaten to jam him—still further pondering, I say, I saw that this situation of mine was the precise situation of every mortal that breathes; only, in most cases, he, one way or other, has this Siamese connexion with a plurality of other mortals. If your banker breaks, you snap; if your apothecary by mistake sends you poison in your pills, you die. True, you may say that, by exceeding caution, you may possibly escape these and the multitudinous other evil chances of life. But handle Queequeg’s monkey-rope heedfully as I would, sometimes he jerked it so, that I came very near sliding overboard. Nor could I possibly forget that, do what I would, I only had the management of one end of it.1

For Melville and his Ishmael, the monkey-rope is a metaphor of the essential interconnectedness of life in a civilized society. You may imagine you are independent and entirely self-determined, responsible for your debts and signature only, as the legal phrase goes, but no man is an island and do not ask for whom the bell tolls. In my own metaphysical mood, I see Melville’s monkey-rope as a metaphor of the even more intensively interconnected condition of today’s digital networks. You may imagine that your VPN, your firewalls, your lengthy passwords, your antimalware software, and your rigorously observed safe-computing practices make you independent and entirely self-determined. Let’s even say that yours is so small a business that you personally know and can vouch for every single person on your company LAN.

What could possibly go wrong?

Assuming your modest little cul-de-sac is connected to the Internet, possibly anything, possibly everything. You and your company are tied by the waist to the whole wired-in world. It is a world, as Chapter 5 suggests, that is far more complex than anything Herman Melville could have envisioned in 1851. Ishmael and Queequeg were tied together, one to one. But each of us is tied to potentially trillions of network nodes and, through them, to millions, even billions, of people. How do we even begin to get a handle on such a network?

THE END OF SECURITY—AND WHAT TO DO ABOUT IT

When computers were widely introduced to government and business during the 1960s—it was the era of “big iron,” the age of the mainframe—the principal security issue was physical security. Computer facilities had raised floors and special air conditioning, and they were carefully secured from onsite tampering with keycard locks accessible by “authorized” people only. When people and businesses first went online (but not yet on the Internet) in a big way during the early 1980s, a period that coincided with the rise of the “personal computer” and the gradual retreat of the mainframe, cybersecurity rapidly emerged as an issue. Initially, the problem was “hackers” spreading malware of various sorts, mostly via floppy disk. So, yet again, security was largely an onsite issue. But as local area and wide area networks (LANs and WANs) began to play an increasingly important part in the way businesses and other institutions used computers—and then with the emergence of the Internet in earnest in the mid-1990s—the concern became specifically online security. Malware could be uploaded remotely and systems thereby compromised.

This era saw the proliferation of viruses and, correspondingly, antivirus software as well as such network security systems as hardware and software firewalls. Such “point solutions” worked reasonably well against “point threats,” which were mostly virus-based attacks that were well understood. Because the early viruses had identifiable “signatures,” they were relatively easy to identify and neutralize. For years, a host of antivirus companies diligently collected the latest viruses, quickly developed appropriate antivirus modules, and promptly distributed updates (“patches”) to their installed antivirus products. Provided that users kept their security software patched and therefore up to date, they were reasonably secure against most threats.

ACTION ITEM

 

Application vulnerabilities are continually discovered, and attackers ceaselessly develop new malware. Software patching, therefore, remains essential to cybersecurity. Most of the updates software makers supply on a regular basis are related to patching newly discovered vulnerabilities, meeting new malware threats, or generally improving security. Do not fail to install updates and patches as they are received. Most of the larger software makers offer push or even automatic updates. Enable these. IT managers need to ensure that all devices on their networks are routinely updated and patched. Threats are dynamic. Defending against them must be dynamic as well.

The era of cybersecurity based solely or primarily on antivirus, antimalware, and firewall solutions is over. Don’t get me wrong. These point solutions to point threats are still very necessary, in fact, essential to good cybersecurity practices, but they are no longer sufficient to it. Not by a long shot. Today’s cyber threats are commensurate with today’s trillion-node Internet. They are, in a word, overwhelming—overwhelming in terms of their complexity as well as in their sheer volume. The threats come from everywhere. They are inside, and they are outside. They come by simple emails, through phishing, and via even more sophisticated carefully customized spear phishing attacks. They are directed, typically, at people, all of whom, under the right set of circumstances, are vulnerable to skilled social engineering or to a momentary lapse in prudence and judgment. They come from malware that learns on the fly. They come from malware that seeks patterns of data and that operates completely autonomously until it finds its treasure and then calls home.

Although good cybersecurity, including the application of all security patches and updates, reduces the fraction of attacks that result in a breach, over the long term, both threats and breaches are impossible to prevent. So, what can you do?

With absolute prevention off the table, the goal of always being able to stop, evade, avoid, or otherwise defeat every attack is now unrealistic. It is no longer enough to have the highest, thickest, strongest wall along the perimeter of your network. In fact, we are so intensively and extensively interconnected, and we rely so thoroughly on connectedness, that we cannot afford to cower behind our walls. A security strategy that cuts us off from threats also cuts us off from business itself. What we need is to defend our networks as best we can with good technology, policies, and practices, but we must also be able to identify incidents when they begin and have the knowledge of how a particular incident is attacking us. To gain this intelligence, we need to know our network as intimately as we know the layout of our office or home. This means upping our cyberdefense capability with digital resilience as a core tenet of our cybersecurity strategy.

This is not a choice. It is a necessity. We are in a new era in which the best prevention and protection systems—no matter how extraordinarily sophisticated—are incapable of keeping all the bad guys out. The public sees evidence of this practically every week in news reports. And what the public sees is a mere fraction of what is reported daily in the security industry media. Indeed, many organizations are not legally required to report breaches, and many therefore go unreported. The volume and the effectiveness of the threats are not due to the carelessness of businesses, government, and other institutions (although sometimes these users are very careless). It is not because IT folks do not know how to run their security systems. (They do.) It is not because the security industry produces bad products. (They produce some of the best products that have ever been available.) The success of today’s threats comes from the fact that the bad guys have figured out how to engineer attacks that get inside the network through normally open doors, even when the network has walls, guards, access control, and encryption.

Resilience is a strategy that admits this fact without surrendering to it.

URGENT—MOST BUSINESS NETWORKS LACK RESILIENCE

I have invested a good many words in this book defining and discussing resilience. My purpose has been to show that, far from being a novel concept, resilience is the way people, businesses, governments, nations, institutions, biological organisms, and ecosystems have been surviving and thriving for untold centuries. It is a more recent reality that has me worried. I have observed, over the last several decades, that we have rarely applied resilience strategies to our digital networks.

Our digital networks are under unprecedented threat today. Resilience, as applied to them, can be thought of simply as the capability of containing an incident and taking swift action before the whole network or organization is compromised. While the threat is contained, the network can withstand the attack and continue to operate even as the users of the network go about eliminating the threat. Once the attack is over or the threat has been defeated, a resilience strategy also prescribes a program of recovery that includes learning from the attack and hardening the network accordingly.

Remember, it isn’t as if resilience is an alien concept. Contemporary society is so focused on resilience that our government has an organization like the National Transportation Safety Board (NTSB) to review air and other transportation disasters, discover what went wrong, and give the industry feedback on how to make things better, more resilient, and therefore safer for us all. Resilience is so important to us that we take organizations like the NTSB pretty much for granted. Yet nothing like this exists on the digital network, even though half of all businesses talk to their customers primarily or exclusively through that network. They are, primarily or exclusively, digital businesses.

As I explained in Chapter 2, resilience is even part of the heritage, the DNA, of the Internet. At its inception, Internet architecture incorporated certain inherently resilient features. That historical fact, added to the many examples of resilience we have in the analog world, makes even more flabbergasting and frustrating the fact that most of today’s digital networks are being designed with little or no resilience in mind.

HOW TO ADDRESS THE GREAT WEAKNESS OF TODAY’S BUSINESS NETWORKS

The great weakness of today’s networks is the absence of holistic, systemic thinking in the design of the typical vast and costly corporate network. It is a structure that has usually grown over time and therefore consists of hundreds, thousands, or even hundreds of thousands of different devices from a spectrum of more than a hundred different vendors. Each device has a specific, critical function. The result is an interconnected assemblage that is piecemeal and uncoordinated.

Moreover, most security products are just as atomized as the other elements of today’s typical network. A particular security product may deal with the firewall, for instance, or with a host computer. But what we really need are network components and security products that are designed with the entire network in mind. The old saying about a chain being only as strong as its weakest link is very true of digital networks. Ninety-nine robust components and one vulnerable component do not add up to a network that is 99 percent resilient and just 1 percent vulnerable. The sum of 99 + 1 in this instance is a nonresilient network, period. In fact, the compromise in resilience may be enormous—way beyond what the 99 robust versus 1 vulnerable ratio might suggest. Think of a large, complex network as a big, sophisticated live spreadsheet loaded with formulas. Change something in cell B-25, and it may affect the content of E-18 or D-12 or C-5 or any combination of these and more. But imagine that you didn’t even build this spreadsheet. You have no idea what formulas are connected to what cells. Networks these days have many authors, and many of those original designers are long gone. So, it’s no wonder you don’t know what happens in cell C-5 if you change the value in cell B-25. Or, in the case of a digital network, change a router configuration in Spokane, and it may have an impact elsewhere in the network—in Spokane or Poughkeepsie or Kathmandu.

ACTION ITEM

 

Start treating your network as you would a spreadsheet for a complex and critically important P&L. Know that if you change something in cell B-25, it will affect other cells, other operations, other outcomes, and maybe even the bottom line. It may distort the entire spreadsheet—perhaps catastrophically. But even if the change makes the spreadsheet just a little wrong, is this acceptable?

If you don’t know how that router change will affect your network, you cannot design for resilience. If your heterogeneous collection of components were not each designed to work optimally together in the first place, you cannot now design for resilience. Look, we have already admitted that the Internet is unbelievably complex, its complexity only partly defined by nodes numbered in the trillions. You cannot do anything about the resilience of the Internet. You can, however, introduce resilience into your particular node on that Internet, your system, your network inside your firewall(s). The first step toward this is to understand your network. To gain this understanding, you need a comprehensive, accurate, and up-to-date overview of the entire network. I don’t mean a simple flat plat with symbols connected by lines as on a paper roadmap. I’m talking about a dynamic model, in which the symbols have a unique configuration and the lines represent specific traffic types and flows. You can still think of it as something like a map if you like, but it is more akin to a Google map with the underlying street, lane, directionality, and near real-time traffic information. With a model like this, you can make a meaningful judgment about the resilience of the whole network, not just its disparate parts.
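
To make the idea concrete, here is a minimal Python sketch of what such a dynamic model might look like in code: devices carry their configuration, links carry the traffic they permit, and the model is refreshed as configurations change. The device names, fields, and traffic types are invented for illustration and are not any vendor’s actual data model.

```python
# A minimal sketch of a "dynamic map": nodes carry configuration details,
# edges carry the traffic they permit. All names and fields are illustrative.
from dataclasses import dataclass, field

@dataclass
class Device:
    name: str
    kind: str                       # e.g., "router", "firewall", "host"
    config: dict = field(default_factory=dict)

@dataclass
class Link:
    src: str
    dst: str
    traffic: set                    # traffic types permitted, e.g., {"https", "ssh"}

class NetworkModel:
    def __init__(self):
        self.devices = {}
        self.links = []

    def add_device(self, device: Device):
        self.devices[device.name] = device

    def add_link(self, link: Link):
        self.links.append(link)

    def refresh_device(self, name: str, new_config: dict):
        # A dynamic model is continually re-fed with current configuration data,
        # so the map reflects the network as it exists right now.
        self.devices[name].config.update(new_config)

# Usage: a two-device fragment of what would be a much larger model.
model = NetworkModel()
model.add_device(Device("spokane-rtr-1", "router", {"os": "ios", "acl_count": 14}))
model.add_device(Device("dc-web-01", "host", {"os": "linux"}))
model.add_link(Link("spokane-rtr-1", "dc-web-01", {"https"}))
```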

There are software tools available that provide this kind of complete, current picture of your network. Full disclosure: This is where my own company comes in. At RedSeal we have created enterprise-class software capable of building a network model across even the largest organizations, able to perform diagnostics and to provide measurements telling you where the problems are. Most important, it will tell you if you are making your network better, more resilient, as time passes. RedSeal provides measurements that show you how the heterogeneous elements of your network actually operate together and just how this interoperation produces or fails to produce increasing digital resilience based on your organization’s policies and designs.

No matter how different they may be, the components in your network do have at least one thing in common. Every one of them includes configuration software that allows the component to operate on the network. Software such as RedSeal’s network modeling and risk-scoring platform reads the configuration files from your network devices and applies diagnostics that, based on years of experience and vendor information designed into it, render a judgment about how well or poorly each device is configured to operate in your environment. This configuration data is then loaded into our enterprise modeling software, which builds a network map—or, more precisely, a software model of your network. This gives you an overall view of the network as it exists right now.
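
By way of illustration only, the following toy Python sketch shows the general idea of reading configuration text and applying diagnostic rules to it. The configuration format and the rules are assumptions made up for this example; a real product encodes years of vendor-specific knowledge that a few lines of code cannot.

```python
# Toy illustration: parse a simple "key value" configuration and flag weak settings.
# The format and the rules are invented for the example.
def parse_config(text: str) -> dict:
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            key, _, value = line.partition(" ")
            config[key] = value
    return config

def diagnose(config: dict) -> list:
    findings = []
    if config.get("telnet") == "enabled":
        findings.append("telnet enabled: unencrypted management access")
    if config.get("snmp-community") in ("public", "private"):
        findings.append("default SNMP community string")
    if config.get("ssh-version") == "1":
        findings.append("SSH protocol version 1: deprecated and weak")
    return findings

sample = "hostname spokane-rtr-1\ntelnet enabled\nsnmp-community public\n"
print(diagnose(parse_config(sample)))   # two findings for this (hypothetical) router
```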

WHY YOU NEED TO GET TO KNOW YOUR NETWORK

There is nothing new about mapping a network. Indeed, most companies or institutions have such maps and there are numerous products, many of them free, that give you a “picture” of a network. But pictures are flat, dead flat, in fact. Usually, they depict only how the network was initially built. They do not reflect its inevitable evolution over time as devices are added, subtracted, or otherwise modified and as users join or leave. Further complicating this evolution are changes beyond devices and users, as instances of virtual or cloud networks are accessed, and as technologies such as Amazon Web Services (AWS) are used to set up cloud-based servers on demand. The cloud has brought to many business networks connections that the organization’s IT department may not even know about. What is required is a picture as close to real-time as possible, a network model that is continually refreshed to create a dynamic map for a dynamic system that changes profoundly whenever something is added, changed, or taken off the network.

ACTION ITEM

 

Use network modeling software to create a full, frequently refreshed map of your network and all its connections. Static maps drawn when your network was first designed are as useful in understanding your network today as a map of Caesar’s Rome is useful in navigating the streets of modern Rome.

All organizations have corporate policies pertaining to their networks and their operations. In addition, governments and industry associations impose regulatory policies, such as NERC CIP for the electrical power industry, PCI DSS for payment card data, or HIPAA in healthcare. And there are organizational policies that dictate who gets what from where and how they can reach that information. With a network model, these diverse policies can be tested. If the network proves not to be compliant, a good model will offer suggestions to make it compliant. A really good model will also allow you to test those suggestions for effect and effectiveness. For example, a simple policy barring a classified network from communicating with an unclassified network should be readily testable with a network model. Or consider the PCI DSS policy. Every company that handles customer payment card information on the Internet has to abide by it, and it is complicated. Nevertheless, it can be fully expressed in a model that everyone can understand. And yes, there are policies, including PCI DSS, that require audits. The ability to image your network as it currently exists will help you to ensure that your network policies are in place and performing as they are designed to. A dynamic map will tell you when reality drifts out of alignment with policies, letting you know if you have a problem before the audit begins.
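
For instance, the classified/unclassified rule just mentioned reduces to a reachability question that a model can answer. The Python sketch below assumes the model has already been boiled down to a simple “who can send traffic to whom” map (the segment names are hypothetical) and checks the policy with a breadth-first search.

```python
from collections import deque

# Hypothetical reachability derived from a network model:
# segment -> segments it is permitted to send traffic to.
reachability = {
    "unclassified-lan": ["dmz"],
    "dmz":              ["internet"],
    "classified-lan":   ["classified-file-server"],
}

def can_reach(graph: dict, source: str, target: str) -> bool:
    """Breadth-first search over permitted traffic paths."""
    seen, queue = {source}, deque([source])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Policy: the unclassified network must never reach the classified network.
assert not can_reach(reachability, "unclassified-lan", "classified-lan"), \
    "Policy violation: unclassified can reach classified"
```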

THINK LESS ABOUT YOUR NETWORK, THINK MORE ABOUT YOUR DATA

Today’s networks do not stop at the perimeter of your property. No analytical mapping software is adequate if it fails to extend the network model through the physical network to your virtual network and your cloud-based network. Creating and maintaining resilience requires understanding and controlling connections to the outside and access to the connections on the inside.

The phrase “access to your network” is fairly meaningless. What matters is access to your data—and even this phrase does not convey the whole story. Some data should be accessible to just about anybody who wants it. Other data needs to be protected from casual access by the public, vendors, partners, or customers. Indeed, not every employee connected to your internal network should have access to every category of data. So, the only meaningful way to plan “access” is to define categories of data, prioritize the categories in terms of degree of access that should be allowed, and segment your network accordingly. Just as it makes no sense to grant public access to your most sensitive data, it is also destructive to your business to arbitrarily restrict access to data that your customers need, the general public needs, or most of your employees most of the time need. Security is about security. Resilience is about business. A resilient business provides degrees of access that promote productive access to data crucial to promotion, presentation, and transaction while jealously guarding access to prized intellectual property and sensitive financial data. The structure of the network needs to accommodate such crucial differentiation.
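
One simple way to express that differentiation is as an explicit mapping from data categories to access tiers and network segments, which a resilience review can then check against reality. The categories, tiers, and segment names in the Python sketch below are assumptions chosen only to illustrate the structure.

```python
# Illustrative data categories, each with an access tier and the network
# segment it should live in. All names are assumptions for the sketch.
DATA_CATEGORIES = {
    "product-catalog":       {"access": "public",        "segment": "dmz"},
    "customer-orders":       {"access": "customers",     "segment": "app-tier"},
    "employee-directory":    {"access": "all-employees", "segment": "internal"},
    "financials":            {"access": "finance-only",  "segment": "restricted"},
    "intellectual-property": {"access": "engineering",   "segment": "restricted"},
}

def segment_for(category: str) -> str:
    """Where a given class of data belongs, per the (assumed) policy above."""
    return DATA_CATEGORIES[category]["segment"]

def misplaced(data_stores: dict) -> list:
    """data_stores maps store name -> (category, actual segment); return the misplaced ones."""
    return [name for name, (cat, seg) in data_stores.items() if seg != segment_for(cat)]

stores = {
    "orders-db":  ("customer-orders", "app-tier"),
    "design-svn": ("intellectual-property", "dmz"),   # wrongly exposed
}
print(misplaced(stores))   # ['design-svn']
```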

Dynamic modeling of your network must include a reliable vulnerability scan that will allow you to perform network triage by creating a priority list of vulnerabilities for the whole system, not just for specific devices. It is often the case that a host or computer with a severe vulnerability may be well protected by good network architecture, but a host or computer with a less severe vulnerability is directly connected to an untrusted router. Common sense dictates that you fix the severe vulnerability first. But common sense does not always yield the most effective strategy. Sophisticated modeling and risk-scoring software identifies known software vulnerabilities on the devices in a network. Good management systems score the severity of the vulnerability and determine what kind of data is on a device. The nature of the data is critical to determining its value. (Databases, for instance, are high-value assets.) Traditionally, prioritization has been based on vulnerability severity and on the value of the data asset. This view almost always results in a list of vulnerabilities far too numerous to patch or otherwise address.

The RedSeal model, for example, adds a third dimension to the triage, the element of accessibility. How reachable are the vulnerabilities via untrusted networks? And just what can be reached, should a given device be compromised? By answering these questions, you have the ability to model your network the way a skilled combat commander models a battlefield—not just in one-dimensional terms of simple vulnerability, but in degree of vulnerability, relative value of assets, and accessibility of assets. If you know how and where your network is exposed, and you have a complete and granular understanding of the value consequences of the exposure, you know what areas to harden and what areas to attend to first.
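
The Python sketch below illustrates the principle only, not RedSeal’s actual scoring: it weighs severity, asset value, and reachability together and shows how a less severe but exposed vulnerability can outrank a critical one sheltered by good architecture. The weighting is an assumption made for the example.

```python
# Illustrative triage: rank vulnerabilities by severity, asset value, and
# reachability from untrusted networks. The weighting is an assumption.
def triage_score(severity: float, asset_value: float, reachable: bool) -> float:
    """severity: 0-10 (CVSS-like); asset_value: 0-10; reachable: from untrusted networks."""
    exposure = 1.0 if reachable else 0.2    # well-shielded assets are heavily discounted
    return severity * asset_value * exposure

vulns = [
    {"host": "db-01",  "severity": 9.8, "asset_value": 10, "reachable": False},
    {"host": "web-03", "severity": 6.5, "asset_value": 7,  "reachable": True},
]

ranked = sorted(vulns, reverse=True,
                key=lambda v: triage_score(v["severity"], v["asset_value"], v["reachable"]))
for v in ranked:
    print(v["host"], round(triage_score(v["severity"], v["asset_value"], v["reachable"]), 1))
# web-03 (45.5) outranks db-01 (19.6): the less severe flaw is the more urgent fix
# because it is directly reachable from an untrusted network.
```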

ACTION ITEMS

 

Digital resilience begins with deep knowledge of your data and your networks. Using the best available modeling and scoring tools, build this knowledge by—

1.Verifying that device configurations comply with relevant regulations and industry best practices.

2.Modeling your network as it exists at this moment—in other words, collecting configuration and operation data of your network devices as often as you deem necessary and without burdening your network.

3.Visualizing end-to-end access and path details, so that you see intended as well as unintended access among all the parts of your network. You need to know what access paths exist from the Internet into your network. Are all access paths authorized? Do any—authorized or not—expose privileged or sensitive areas of your network to external threats?

4.Measuring your network resilience. You cannot manage what you cannot measure; therefore, measurement is essential.

5.Identifying hidden areas of your network—the “scary parts” of your network, areas you don’t even know exist. These can be significant security risks.

6.Prioritizing vulnerability patching. A map—even a live and accurate map—of your network is not enough. You need to triage your vulnerabilities so that you can allocate resources to first patch those in most urgent need of attention—based on the network situation, not on the absolute value of the vulnerable asset.

7.Verifying network security policy. You need to know if your security policies are being implemented as specified. This is essential information for assessing the real resilience of your network. A security policy is meaningless unless it is implemented. In fact, it is worse than meaningless, because it is deceptive. This exposes you to both operational and legal liabilities. The most basic knowledge managers need is knowing what they know as well as what they do not know about their networks. No state of ignorance is more potentially destructive than not knowing what you do not know.

8.Prioritizing network change control. You need the capability to assess the security impact of potential or proposed changes to your network. Model the changes you want before you implement them to ensure that you do not cause unintended issues. Perform virtual penetration testing to identify unintended access and other issues. By testing proposed configurations and conducting penetration testing, you get the information you need to optimize your existing cyber investments. (A sketch of this kind of what-if modeling follows this list.)
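
As a rough illustration of item 8, the Python sketch below applies a proposed change to a copy of a simplified reachability model and rechecks exposure before anything touches the production network. The segment names and the proposed rule are hypothetical.

```python
import copy
from collections import deque

def can_reach(graph: dict, source: str, target: str) -> bool:
    """Breadth-first search over permitted traffic paths (same idea as the policy check earlier)."""
    seen, queue = {source}, deque([source])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

def with_change(graph: dict, new_edge: tuple) -> dict:
    """Return a copy of the model with a proposed rule/route added."""
    proposed = copy.deepcopy(graph)
    proposed.setdefault(new_edge[0], []).append(new_edge[1])
    return proposed

# Current (hypothetical) model: the Internet cannot reach the payment segment.
current = {"internet": ["dmz"], "dmz": ["app-tier"], "app-tier": []}
assert not can_reach(current, "internet", "payment-db")

# Proposed change: open app-tier -> payment-db for a new service.
proposed = with_change(current, ("app-tier", "payment-db"))
if can_reach(proposed, "internet", "payment-db"):
    print("Proposed change exposes payment-db to the Internet; revise before rollout.")
```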

WHAT HAPPENS WHEN NOBODY PRIORITIZES THE DATA

Back in 2010, Chelsea Elizabeth Manning was a U.S. Army private named Bradley Manning, an intelligence analyst who, as he explained in a message to threat analyst and self-confessed convicted hacker Adrian Lamo, “deployed to eastern Baghdad, pending discharge for ‘adjustment disorder’ in lieu of [his actual medical complaint,] ‘gender identity disorder.’” Put another way, Manning was a soldier of the lowest possible rank who had been placed in a highly sensitive position even though he was slated for discharge after revealing (as he told Lamo) “my uncertainty over my gender identity.” His security clearance gave him what he described as “free reign [sic] over classified networks for long periods of time.” He told Lamo that he saw “incredible things, awful things . . . things that belonged in the public domain, and not on some server stored in a dark room in Washington DC”—yet things that were fully accessible to him in an outpost in Iraq.2

In April 2010, Manning disclosed to WikiLeaks, the organization that publishes secret information, news leaks, and classified media from anonymous sources, almost three-quarters of a million classified or sensitive military and diplomatic documents. He was subsequently tried for and convicted of (among other offenses) stealing government secrets. (Sentenced to thirty-five years imprisonment in August 2013 for the WikiLeaks disclosures, Manning was released on May 17, 2017, pursuant to a commutation by President Barack Obama.3) In legal terms, the charge of “theft of government documents” may be completely accurate. In practical terms, however, Manning didn’t so much steal the secrets as the U.S. Army left them out for the taking. The army had set itself up for the breach either by failing to understand the end-to-end access points of its own networks or by failing to appreciate the potential consequences of how the network was laid out.

Ask yourself: What need did a remote firebase in Iraq have for full access to confidential information, including diplomatic information, stored in suburban Washington on Pentagon servers? Intelligent, timely network modeling and a prudent, practical access policy might have prevented the Manning-WikiLeaks breach. In this case, the weakness in the network was less a matter of digital technology than of mismanaged—or perhaps unmanaged—clearance policies. A low-ranking and poorly performing soldier in an outlying base should never have had access to Pentagon plums. Yet accurate, dynamic network modeling would very likely have surfaced this error in policy, and a good digital posture of resilience might well have prevented the Manning-WikiLeaks breach.

TIME TO GET YOUR HEAD INTO THE CLOUD

Today’s networks typically integrate with cloud systems. It is vitally important to visualize your network, including its cloud integration, to ensure that the whole makes for a single resilient system that protects you and your customers. You also need to go even further, beyond your own network. You can do little about the security of the Internet, but more and more companies are demanding the capability of assessing and certifying the security of any part of their supply chain that accesses their network. Recall the Target breach discussed in Chapter 1. The origin of the breach was the compromise of an HVAC vendor that served many Target stores. It is true that Target’s corporate network was not properly segmented from its point-of-sale system. That lapse in resilience and security was on Target. However, the HVAC vendor’s network was also insufficiently hardened.

As Target discovered to its dismay as well as that of many of its customers, its network was at least as risky as the riskiest element in its supply chain. When everything is connected, it becomes imperative that companies demand that their suppliers confirm a secure environment. A thorough, accurate, dynamic network analysis can provide that confirmation or indicate what needs to be addressed to achieve a level of security and resilience sufficient to certify adequate compliance.

HOW TO SCORE NETWORK RESILIENCE

“You don’t need a weatherman / To know which way the wind blows,” 2016 Nobel laureate Bob Dylan told us in “Subterranean Homesick Blues” back in 1965.4 Today, in the second decade of the twenty-first century, we don’t need technologists, sociologists, or the authors of Trillions to know that intensive, pervasive, universal digital networking has blown our world into a zone of unprecedented complexity. We work it, and we live it. We cannot evade or escape, let alone deny, the ubiquitous complexity.

A trillion Internet nodes and more is a trillion too many units of complexity to evade, escape, or deny. So maybe Hollywood can help. In the 2015 big-screen blockbuster The Martian, astronaut Mark Watney (played by Matt Damon) finds himself going solo on the surface of Mars after his crew, swept up in the mother of all dust storms, leaves him for dead and blasts off for home 33.9 million miles away. As an article in Business Insider (yes, Business Insider) asks, “How do you survive on an inhospitable planet when you’re stranded there alone?” Well, you don’t try to escape, evade, or deny reality. Business Insider quotes the marooned astronaut himself: “In the face of overwhelming odds, I’m left with only one option. I’m going to have to science the shit out of this.”5

Ideally, software tools built to map and analyze your network will not only accurately represent the complexity of the structure and all its devices and connections, but will also “science the shit out of it” with the object of reducing an incredibly complex analysis to something readily comprehensible—not just to IT experts, but to nontechnical C-suite leadership as well. While other companies use different scoring methods, RedSeal uses a numerical score modeled on the familiar “credit score” measure of consumer credit risk.

Let me explain. In the pre-digital days of consumer credit lending, banks and businesses evaluated creditworthiness on “The Three Cs,” namely Character, Capital, and Capacity. Capital (real estate, personal property, investments, savings, etc.) and Capacity (ability to repay, namely income) are directly calculable using arithmetical methods, even if it’s, say, 1930, and all you have is pencil, paper, or maybe one of those adding machines with a big-handled pull lever.

But the very first consideration, the one given pride of place among those Three Cs, Character, is not readily calculable. Doubtless, some banks and lenders made attempts to score a credit applicant’s Character by assigning points for certain actions, virtues, and vices revealed in her personal history. Nevertheless, in the end, the evaluation of Character is a human decision, which, like all human decisions, is largely subjective and, therefore, complex. The decision process must often have been subject to wrangling within the lending firm. Unquestionably, some applicants were disappointed, dismayed, or just plain pissed concerning the outcome. Likely, they let their displeasure be known. The lender’s rationale for a given credit decision could be presented as a combination of numbers (in the case of the second two Cs) and human judgment (in the case of the first C). There was demonstrable transparency in the second two Cs, but it would be difficult to rule out the presence of irrational subjectivity or outright bias in the evaluation of an applicant’s Character. The inherent complexity of this part of the judgment must have seemed irreducible, at least in the pre-digital age.

The credit score, a product of the era of digital financial record keeping, sought to finally reduce it all to numbers and then to reduce those numbers to a single number. Like the details of the formula for Coca-Cola, the formulas used to calculate the most widely used brand of credit score are trade secrets and jealously guarded. But the rough composition of that score is open-source knowledge.6

35 percent is payment history

30 percent is debt burden

15 percent is length of credit history

10 percent is types of credit used

10 percent is recent searches for credit (mainly “hard credit inquiries,” the kind that occur when a consumer applies for a credit card or loan)

Arguably, the commonly used credit scoring systems set out to “science the shit out of the complexity” involved in evaluating creditworthiness by simply ignoring the first C, Character. Based on what little we know about the rationale behind proprietary credit scoring systems, it is just as arguable that they do quantify everything that can be quantified concerning Character. Payment history, the length of credit history, and the types of credit used all speak in some way to Character. In any case, all credit scoring systems introduce into credit decisions a level of objectivity and transparency that was unavailable in the pre-digital Three Cs model. In emulating the credit score model, RedSeal has sought to introduce similar qualities into a network resilience score.

In the RedSeal system, the higher the score, the greater the likelihood that the networks of the business evaluated are sufficiently resilient to withstand a cyber incident and keep running. If you do things to your network that raise your score, you know that you are increasing its cyber resilience. If you do things that lower it, you know that you are making your networks less resilient. The RedSeal Digital Resilience Score is based on three broad components:

1.How well you know what your digital infrastructure looks like; for this, you need a dynamic model.

2.How well your network equipment is configured.

3.How vulnerable your computing devices are in the context of the network—that is, situational awareness. For example, the software evaluates the presence of known issues and their severity; the location of these issues, including the value of the asset associated with them; and whether the asset in question can be reached directly from the outside or from another vulnerable asset. At the end of the day, this allows you to prioritize the weaknesses in the resilience of the network and focus on them.

These three components contribute to a numerical score, which is intended to aid in evaluating network resilience by resolving complexity without distorting or disguising the salient issues. Using this score, we can harden the soft parts of a network and evaluate the impact of this hardening with a number rather than with speculative narrative analysis. Any transaction, no matter how inherently simple or complex, is made more complicated if more than one language is being spoken in negotiation. Finding a single, common, mutually intelligible language productively simplifies any transaction. In business, the lingua franca is money, which is expressed in numbers. A numerical resilience score does for network resilience what money does for business. It offers a common language by which resilience may be unambiguously evaluated.
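
To illustrate the idea only (the actual formulas behind commercial scores, RedSeal’s included, are proprietary), the Python sketch below rolls three component measurements into a single number on a credit-score-like scale. The weights and the 300-850 range are assumptions chosen for the example.

```python
# A sketch of rolling component measurements into one number. The weights and
# the 300-850 range (borrowed from consumer credit scores) are assumptions.
def resilience_score(model_completeness: float,   # 0-1: how much of the network is modeled
                     config_quality: float,       # 0-1: how well devices are configured
                     exposure_control: float):    # 0-1: how well vulnerabilities are contained
    weights = {"model": 0.3, "config": 0.3, "exposure": 0.4}
    composite = (weights["model"] * model_completeness +
                 weights["config"] * config_quality +
                 weights["exposure"] * exposure_control)
    return int(300 + composite * 550)              # map 0-1 onto a 300-850 scale

# What-if: score the network as it is, then score it again with proposed hardening modeled in.
print(resilience_score(0.6, 0.5, 0.4))   # today -> 569
print(resilience_score(0.8, 0.7, 0.7))   # after the proposed work -> 701
```

Run with two sets of inputs, the same function supports exactly the kind of what-if comparison described next: score the network as it is, then score it again with the proposed improvements modeled in, and compare the two numbers in the language every executive speaks.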

With a resilience score, the allocation of funds (input) and the cost-benefit analysis of their application (output) are made significantly more straightforward. For instance: Your company’s chief financial officer (CFO) sees a resilience score of 450. The chief information security officer (CISO) wants $100 million to raise the resilience score to 600. Based on the score, the CFO can evaluate this request by modeling the proposed 100 million dollars’ worth of changes to the network to see how these impact the resilience score. Maybe it will take just $50 million to move the needle to 600—or to 550, which is judged to be good enough for now. Maybe $100 million will fail to push that needle far enough. Whatever the outcome of the what-if modeling, the CFO can produce a judgment expressed in the language of business, in which all the denizens of the C-suite are fluent.

This is in stark contrast to the typical way in which a CISO makes his budget request—by calling for so many additional firewalls, X more antivirus packages, several additional intrusion detection systems, and so on. To nontechnical executives, an inventory of hardware and software is fairly meaningless. It is the CISO saying to them, Just take my word for it. We need it. Far more meaningful to most of the C-suite is to be told what outcome the requested funds will produce. Nontechnical executives need a practical way to talk about and evaluate cost versus benefit. It is not a case of technical people dumbing down the details for nontechnical people. It is a case in which everyone must find and speak a common language that tells the truth clearly and without evasion, distortion, or denial.

EVALUATE THE NETWORKS TO WHICH YOUR NETWORK CONNECTS

As mentioned earlier, it is not sufficient to gain knowledge of your network and only your network. You must also go outside of your own network to assess those to which you are connected. Based on this assessment, you may make changes to your network to harden it appropriately, you may demand that a prospective vendor (for instance) harden its network, or you may do both. As a lender would use some form of credit score to assess the creditworthiness of a prospective borrower before making a loan, so too can you use resilience scores to evaluate a prospective vendor or other key business partners before they connect. Similarly, if your organization is looking to acquire another company, a critical part of your due diligence must be an assessment of your two networks. Suppose your network has a resilience score of 650, and you score the network of the company marked for acquisition at just 475. The awful truth is that you are buying a network riskier than yours. Connect to that network, and the resulting total score will drop. By scoring the target company’s network, you can model the impact of that acquisition before you sign, and you can make your decisions accordingly. If you decide to proceed with the acquisition and the resulting connection, you have a basis for negotiating or renegotiating a fair price as well as liability escrow provisions and contingencies.

Knowing your network is a vital element in the due diligence phase of acquisition as well as in your own daily operations. On July 25, 2016, Verizon Communications announced its intention to acquire the Internet business of Yahoo for $4.83 billion. On the face of it, that seemed a bargain-basement price for a company that had been valued at $100 billion back in the late 2000s, when it was the most popular American website.7 But then came Thursday, September 22, 2016, the day Yahoo announced that “the account information of at least 500 million users” had been stolen by hackers—stolen two years earlier, in 2014.8 That was bad until it got worse. On December 14, 2016, the company announced that “more than 1 billion accounts” had been breached, not in 2014, but in 2013.9 The 2014 hack came to light only after 200 million Yahoo accounts were discovered for sale on a “darknet market” website called TheRealDeal.10 The 2013 breach came to light as a result of another darknet black market sales revelation.11

Most businesses would be keenly aware of losing 1.5 billion of anything, and probably within a fairly short time of the loss. In the case of the Yahoo breaches, however, the loss was discovered only after the purloined data was put up for sale. One thing was quite clear. Yahoo managers had considerably less than perfect knowledge of their network.

But if the compromise of 1.5 billion users came as a long-delayed surprise to Yahoo, it was an instant stunner for Verizon. That company was on the precipice of making Yahoo’s problems its own by purchasing the company for $4.83 billion; however, on December 15, 2016, Bloomberg Technology reported that Verizon was exploring revising the purchase price downward or simply walking away from Yahoo altogether. In the end, although significantly delayed by the fallout from disclosure of the two data breaches, the deal closed on June 8, 2017, for $4.48 billion.12

ACTION ITEM

 

Your network is only as secure and resilient as the networks with which you connect. Before establishing a working connection with another company—a vendor, a customer, a prospective partner or acquisition—thoroughly evaluate the security and resilience of the company’s networks. Merge with an insecure network, and the insecurity becomes yours.

Mergers and acquisitions have always involved risk—not just the risk that the investment might simply fail to pan out, but that instead of acquiring a valuable asset, you might find yourself saddled with a ruinous liability—say, a shopping mall built on a sinkhole or intellectual property that was not patented properly. M&A in the digital age offers opportunity and risk compounded. You don’t just acquire digital assets, you also risk acquiring a network of undetected digital liabilities. The breach revelations that came close to wrecking the Verizon-Yahoo merger may have been late in coming, but they were not too late. They came before the deal was closed. It takes no strategy whatsoever to dodge a bullet, just a lot of luck. And Verizon was indeed lucky, because neither it nor Yahoo seems to have had an effective strategy for assessing the resilience of the Yahoo network. Well, luck’s certainly a wonderful thing—as long as you are lucky enough to have it. The trouble is, of all the good things you can have, luck is the least resilient. You just cannot rely on it. One universal truth applies to all business and, indeed, to all human endeavor. There is no reward without risk. Get up out of bed, let your feet hit the floor, take a step, and there is a risk that you will fall, plant your face in the floor, and break your nose. Fail to take that step, however, and you forgo any opportunity the dawning day holds. Our intensively connected world offers an unprecedented array of potentially rewarding opportunities as well as an unprecedented array of risks. No connected enterprise can permanently avoid the risk of cyberattack, but it can take measures to manage the risk.

ACTION ITEMS

 

In terms of your networks, here are the key questions you and your risk officer need to ask in order to manage risk effectively:

1.What audit policies do we have in place? How effective are these policies? How can we confirm that these policies—which may be effective in theory—are actually in force as stated?

2.Do we understand our networks? That is, do we have a comprehensive capability of managing our digital enterprise from the point of view of risk?

3.What are we paying for our cyber insurance? Is it enough? Or is it too much?

The ability to present a resilience score as well as a model of your network should help you determine your needs and put you in a more advantageous position to negotiate the best possible premium. Demonstrating lower risk is, after all, a strong argument for securing a lower premium.

GIVE DIGITAL RESILIENCE A GOOD SEAT AT THE MANAGEMENT TABLE

In 2005, futurist Thomas L. Friedman published The World Is Flat: A Brief History of the Twenty-first Century.13 The book globally extended the vision of a trend that was already strong in many business organizations as early as the 1970s: the transformation of hierarchies and silos into flatter and flatter corporate organizations. Today, we see this most particularly and most dramatically in how technology has emerged from the back office to envelop not only the entire enterprise but to reach far beyond the walls of any organization. Technology has gone a long way toward making business seamless, borderless, with barely a separation between producer and consumer, and often with nary a brick rising from the flat earth.

This is today’s business landscape. Everything is brought forward. The network can no longer be regarded as the exclusive preserve of “IT.” Of course, we need the tech experts, but IT also needs a seat at the management table, and a CISO or other cybersecurity executive belongs in the boardroom too, just as boards today have committees for compensation, audit, and other key functions. The presence of IT and cybersecurity at the highest management levels requires those executives and the others to be fully fluent in the same language. The most natural language for business is the language of business, which is money. Revenues, expenses, profits, taxes, you name it, numbers are the actionable measure by which all business strategy, debate, and decisions begin, pull apart, come back together, and productively conclude.

TAKEAWAY

The universal “Catch-22” of business today is that connectivity, a business necessity, creates vulnerability. Fortunately, good, continuously updated cybersecurity reduces the rate at which cyberattacks are converted into actual breaches. Nevertheless, security alone prevents neither attacks nor breaches. Make no mistake, perimeter defenses are absolutely mandatory, but they are also insufficient against today’s threats. Think beyond digital security to create digital resilience. This begins with knowing your network as intimately as you know the layout of your office or home. Resilience also requires recognition that no network or network component is an island. You need to evaluate all networks with which you establish a working connection. Vulnerabilities in the network of a vendor, customer, partner, or acquisition become the vulnerabilities of your network. As for network components, change or reconfigure one device, and it can affect your entire network.

Employ policies and technologies that allow you to understand the impact of each change before you make the change.

Prioritize your data in terms of accessibility versus the need to restrict access.

Build (or rebuild) the structure of your network accordingly.
