2. Why Security Fails

Human nature is a funny thing. Take smoking, for instance. We know that it’s bad for us, but millions of people continue to smoke. Even in light of the facts that were uncovered during the lawsuits against “Big Tobacco” in the late 1990s, including the findings that they conspired against the public to hide the dangers of smoking, people still smoke. Yes, the number of smokers is down in the United States, but the rest of the world sees the profit in lighting up. Big Tobacco is even taking their message, and their marketing, to the rest of the world (specifically, China).1 Why? Because cigarettes make money—lots of it!2

So, that brings me to the discussion presented in this chapter. If cigarette companies can produce and successfully market a product that kills people, because of the potential for profit, why can’t we make the assumption that other vendors would operate in a similar vein? I’m not ascribing anything to the security vendors so venal as knowingly killing people; I’m just trying to point out that security vendors exist as a business. Making money and generating dividends is a powerful motivation to sell a product, even a flawed one.

For many, security is about business.

However, like the reasons that people smoke, our security failures are part of a complex situation, with many subtleties and subtexts at work to keep the solution obscure. In the following pages, I suggest that vendors play a role in our failures and that some basic human nature is at work that continues to perpetuate those failures.

I also suggest that security isn’t all about business.

Précis

This chapter covers vendors and their role in getting us buried under our own problems. Dealing with network security is a multi-billion-dollar-a-year business, and this chapter examines how vendors have driven the process and why this is part of our problem.

The discussion then turns to the growing list of malware, including viruses, Trojan horses, worms, and bots. We cover some of the big failures and how bots present a huge potential threat. Yes, this is a bit of fear, uncertainty, and doubt (FUD), but it’s important to understand how these failures have affected us in the past.

This chapter then examines why we have poor results and how much money we’re spending to get them. We continue to use the same tools, expecting different results, and we’re surprised when things go sideways. We can’t predict results, and this chapter discusses why. Something must be missing; we must be overlooking something pretty basic.

The chapter wraps up with a discussion of the key points regarding malware, vendors, and what may be missing from our solutions.

Special Points of Interest

In the Preface, I defined the difference between we (us) and they (them). I was trying to hammer home a point (but in a nonthreatening way). Well, this chapter is as much about the basic problem of us and them as it is about the threat. As you read through this chapter, keep in mind that basic engineering processes aren’t at work in the security industry. I also want you to understand that I think that the basic business processes that make successful vendors also mask our potential for failure. We continue to suffer failures, and we have no way of knowing when our security solutions are successful. Clearly, we are doing something wrong, and we need to take a step back, reevaluate our present solution, and make some decisions.

As engineers, when we approach a problem, one of the first things we do is explore the limits or set the boundaries. I’ve taken a stance, a somewhat harsh stance, at one end of the spectrum to help illustrate what the worst-case scenario is.

As you read through this chapter, keep the separate notions of business and engineering in the forefront of your thoughts. Also, keep in mind some questions that we should be asking, but aren’t.

Setting the Stage

Some people aren’t going to like this section because they believe that it implies that I think that people are stupid. So, let me clarify that I do not think that people are stupid. I think that we are sharp, smart, motivated, and driven—just like the folks in the marketing departments of our security vendors.

Marketing departments spend a great deal of time trying to figure out how to make their message compelling enough for you to spend your money on their product. The marketing people have strategy sessions to discuss how to get you to divert money that you have earmarked for other security technologies to the purchase of their product. I can’t even remember how many times I’ve heard marketing people mention “getting the IDS dollars” or consider “what argument would it take to get the customer to shift their spending patterns to include our products.” Marketing isn’t bad; it’s just a pragmatic approach endemic to any business. You have to compete for your customer’s money!

So, crafting a message is just good business practice. After all, vendors know that you only have so much money to spend, and they want you to spend it with them. Send a message that resonates with your customers. Help them understand how your product works. Help them with testing and evaluation and proof-of-concept deployments. Get them to trust you and your message. It might be good business, but it’s a terrible security practice. Why? Well, one reason is that it eliminates objectivity from the engineering process.

So, knowing this, what can we do?

Our lawmakers seem to think that they can legislate a solution. Let’s tell the consumer when their information becomes compromised and maybe, just maybe, they’ll get mad enough to do something. A lot of people think that HIPAA3 is supposed to make medical information more secure; its real intention, however, is to ensure that patients can take their medical information from one doctor to another without having to pay fees or redo tests. It was designed to help patients get out of a bad medical relationship without paying through the nose for more tests. Oh, and there were some security things added because some insightful legislator understood that the world was really digital.

Another interesting legislative “solution” was the Sarbanes-Oxley Act of 2002 (SOx), or as it’s known in the security world, the Financial Auditor Entitlement Act. SOx was intended to address the weaknesses in the financial accounting system that enabled some executives to game the system to their financial benefit. Through various accounting tricks, billions of dollars were funneled into a few unscrupulous executives’ pockets.

The reason it worked so well was that the financial auditors and the financial consultants understood the concept of “closing the loop.” The consultants advised their customers on how to move accounts, cash, and debt, and the auditors only looked for things that demonstrated compliance with financial “best practices.”

The term collusion comes to mind when examining how this arrangement worked. The first thing that happened after SOx was passed was that companies that provided financial advice could no longer provide audit services. A good first step. Break the chain of collusion and insert a new monitoring process to again close the loop, only this time the loop would be closed by the government through regulation. A bad second step. It did, however, do what it was intended to do: give the public the impression that something was being done to restore the trust in corporate America. People went to jail for their crimes, and those not complying with SOx would join them. You see, SOx made the CEO and CFO responsible and criminally liable for noncompliance. They could go to jail if they didn’t pass a SOx audit twice in a row. Both the CEO and CFO had to sign their name on a legal document that went to the Securities and Exchange Commission (SEC) attesting (remember this word) that their systems and process were SOx compliant.

This is a gross oversimplification, but this is not a book on SOx.

The attestation is where the security part of it crept in. SOx said that you had to demonstrate that only those requiring access to your financial systems could gain access. You also had to keep records of that access and be able to produce them. I’m not aware of any financial systems that aren’t handled by computers, so it stands to reason that whatever solution you put in place must address your enterprise network and the financial systems on them.

The security marketing people exploded. Now everything was about SOx. Instead of focusing on the real security issues, security vendors provided SOx templates for their products. Once again, an opportunity to look at why our security was failing was passed up. I’m not aware of any SOx template that ever stopped a worm. Granted, it was a great example of the “find a need and fill it” mentality, but it also gave many people the false impression that being SOx compliant meant being secure.

The point I’m making here is that even with all this legislation and vendor support, we still have security failures. Security vendors used the opportunity to remarket their products in a way that characterized legislation as part of the security solution, even when it wasn’t. Viruses still rip through networks, worms still turn, and endpoints still get compromised. It’s clear that a new approach to understanding why we’re failing is needed.

A year doesn’t go by without a hundred new security products being introduced into the mix, all claiming to be the silver bullet, the one product that will increase security and make our lives easier. Their salespeople give you the line that says no one product can solve your security problems, while their marketing message says that if you buy their product, your security problems will magically disappear. I’ve seen this happen with firewalls, antivirus, intrusion detection systems (IDSs), and virtually every other security product over the past 25 years.

And it’s getting worse.

Hundreds of security products earn millions of dollars a year, all claiming that they add security. However, I can take a traffic dump from any point on the Internet and see any version of every worm introduced over the past five years. Add to that the fact that new threats are being introduced all the time and you have the state of our networked world: We live in a constant state of cyberterror. At any time, a new threat can destroy all that we’ve worked for, and we won’t see it coming until after it has hit us.

How can this be? We have antivirus! We have firewalls! We have IDSs! We have authentication systems! We have HIPAA,4 SOx,5 and let’s not forget GLBA!6 With all this heavy artillery, how can the evil worms of war still manage to break through our defenses? Why do we have systems infected with bots? How can we have all this security and still have a polluted network? What the hell is going on here? Or, better yet, what the hell isn’t going on here?

Basic science.

That’s right, basic science. Before we get into that, however, let’s examine in detail how we got to our present situation. When we understand how we got here and what the real problem is, we can begin to craft a solution based on science.

Vendors Drive Process

In light of the previous rant at the beginning of this chapter, I want to say that we need vendors. Without them, we don’t get the new tools that we need to do our jobs. Yes, there are open source tools, but if there weren’t a commercial market for vendors, vendors wouldn’t exist. Just like a natural environment requires predators, prey, and scavengers, our work environment requires vendors. They are necessary to the balance of our environment. (I leave it up to you to decide where vendors, and we as users, fit into that analogy.)

Having said all that, we’ve been letting the wolf guard the henhouse. The very people who benefit most from this mayhem are the ones we’ve been expecting to provide an answer: the vendors. Let me ask you a simple question: If they did in fact have the one security product to solve our problems, how many would they sell over time? Not enough to support their revenue growth objectives quarter after quarter!

I’m not saying that it’s bad; I’m just saying that it’s business. Each vendor has their chart that shows how their product is about “best practices” and how you can’t accomplish the security cycle without their product. We’ve all seen the circular wheel of arrows pointing to each other: assess, report, prioritize, mitigate. This is the cycle that they use to push their business. For us, however, it’s still the business of pain when a point product inevitably fails. Andrew Jaquith calls this “the hamster wheel of pain.” We spin on the wheel like a plucky little hamster in a cage. Not because we’re stupid, but because we’re busy and we’re trying to solve a complex problem with a partner that’s not interested in solving the Big Problem. Besides, we’re not in a position not to trust the vendors. We just need to change some of the questions that we ask them.

Look at the present model that security companies use: They identify a pain point and create a product to address it, like SOx templates, and then explain how it fits into the wheel of best practices. Firewalls did that in the beginning. Evil people were getting onto our computers and that had to stop. How? We prevented access to the network by evil protocols such as TELNET and FTP.

Antivirus is another great example of a point solution to a problem. When on the network, computers could become infected, so a product was created to address that problem. If you look at all the products today, that’s the model that was used. Even the Security Information Manager (SIM) market started that way. The problem was too much information, so we created a SIM to manage it. The end result is that we’ve let the vendors treat our pain as a series of independent problems with no apparent intersection points.

So, how will science provide the answer if business hasn’t? We have to look at what hasn’t worked first.

Solutions Address the Past

Allow me to briefly recap: One vendor supplies the “problem,” and another vendor supplies the “solution.” It reminds me of a Saturday Night Live skit in which a man in a white suit convinces two rubes that he’ll jump into their septic pond for a mere dollar. He jumps in and then gets out of the septic pond smelling pretty bad. The rubes, who are anxious for the now very soiled salesman to move on, are informed that seeing him jump in only costs $1, but seeing him move on down the road will cost them $10.

The real criminal part of this whole process is that it’s all about addressing problems that have occurred in the past. Discover a class of vulnerabilities and generate a new product classification to cover it. We are in total reactive mode here, with the vendors on both sides of the problem. The problem of managing vulnerabilities and resulting patches has gotten so big that there are a number of companies whose business model is based on providing a solution to the complex problem of patch management in large enterprises. The same can be said for firewall, antispyware, antivirus, IDS, and worm-prevention vendors. You have to go out and purchase a third-party application from another vendor to address the underlying weaknesses, or just plain careless programming techniques, of our software vendors.

But, like any war zone, if you stick your head up, you will get noticed. I would like to point out that security vendors are the new targets for hackers, and vulnerabilities are being discovered in the very products designed to protect us. ZDNet reported in June 2005 that Symantec, F-Secure, and Check Point Software Technologies were among a list of vendors that had seen increases in discovered vulnerabilities.7

We’re Not Asking Vendors Hard Questions

I think we may believe that because we’re paying for a solution, it has an intrinsically better quality associated with it. The open source world disagrees. And, based on my knowledge of how things work in security companies, I disagree, too. We can no longer take the products at face value. The organization that produces the product has to have some clue about why security flaws exist and how they can be effectively eliminated early in the product development cycle. There has to be some quality in the development environment if there is to be quality in the final product.

So, to judge the quality of their solutions, I started asking two questions:

1. What type of systems development life cycle (SDLC) do you use?

2. What software analysis tools do you use to discover coding flaws in your software?

I bet you’ll be surprised at the answers you get to these questions. I was. I have been told by product vendors that their customers aren’t interested in their product’s capability to protect itself or the host operating system. As recently as early 2006, a vendor that was providing a database replication product in a large production environment made that very claim. However, when faced with these questions, and the prospect that we were going to discontinue the use of their product in favor of a clued-in vendor, they started taking the time to learn more about what an SDLC was, what a software assurance program was, and why they should be worried about their product being abused.

Viruses, Worms, Trojans, and Bots

In the first few years of the twenty-first century, we saw a steady increase in the global losses suffered by businesses from virus and worm attacks. In 2001, we lost $13 billion.8 In 2002, that loss amount jumped to $25 billion. The astounding estimated loss in 2003 was $55 billion.

ICSA Labs, a provider of security product certifications, sponsors an annual survey to track trends, and the results aren’t promising.9 In a report released in April 2005, ICSA concludes that all indicators are on the rise.10 According to its report, 2004 saw the trend continue with a 50 percent increase in incidents, implying that we’re looking at nearly $75 billion in losses. The FBI estimates that losses for 2005 topped $67 billion just in the United States.11 These numbers track pretty well.

On a more interesting note, of the ICSA respondents, more than 30 percent said that they suffered a virus disaster, where a disaster is defined as more than 25 PCs and servers infected at the same time, causing significant damage or monetary loss. The number of respondents saying that they suffered through such a disaster was up by 12 percent over the previous year.

The most interesting fact is that of all the respondents, none said that they thought things were getting better.

Then came the eleventh annual CSI/FBI report.12 In a surprising reversal of trend, the 2006 CSI/FBI survey reports that although we’re still suffering attacks, the average loss from attacks is down from $203,606 to $167,713 per incident. However, a more disturbing trend is that companies are less likely to discuss financial losses. Fewer than half the respondents in the CSI/FBI survey were willing to discuss financial figures. I’m sure that this masks the real numbers.

Another interesting fact is that the CSI/FBI survey and ICSA survey agree that virus attacks constitute the biggest financial drain. Unauthorized access placed second.

Today’s Malware: Big, Fast, and Dangerous

MessageLabs, a provider of messaging security services, reported that W32/Mydoom.A was propagating at a rate of almost 60,000 copies per hour.13 It was estimated that 1 in every 12 emails contained a copy of this prolific little devil. If you do the math, you come up with about 16 new infections per second. That’s an amazing rate, and not one that we can even think about trying to keep up with.
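The arithmetic is easy to check for yourself. A quick sketch, using only the figure from the MessageLabs report:

```python
# Back-of-the-envelope check of the Mydoom propagation rate quoted above.
copies_per_hour = 60_000                      # reported rate for W32/Mydoom.A
copies_per_second = copies_per_hour / 3600    # 3,600 seconds in an hour

print(f"{copies_per_second:.1f} new copies per second")  # ≈ 16.7
```

That’s the “about 16 new infections per second” in the text, and it’s why no manual response process can hope to keep pace.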

Being the anal-retentive engineer that I am, I decided to figure out just how much time I would have if something fairly virulent hit my network. I started by making some basic assumptions:

• I had a 10,000 node network.

• I had fairly good security, and only 20 percent of my systems were “vulnerable.”

• I was operating on a 100Mbps full-duplex network.

• The virus or worm had a rate of infection of 2 devices per second.

• The viral payload was 5KB.

• Time to repair (TTR) each node was 2 seconds.

Figure 2-1 shows the results. The graph in Figure 2-1 plots the percentage of network throughput as a function of the percentage of infected computers against time. As you can see from the graph, even at a modest two systems per second, your network has less than a minute to live. Mydoom was eight times more virulent.

Figure 2-1. As more systems are infected, the available network bandwidth drops even if you can fix systems. In this chart, ROI is rate of infection; TTR is time to repair.

image

Another interesting thing shown by this graph is that the network melts down long before all the available systems are infected. If you’re relying on your network to push out your remediation solution, you’re in serious trouble. And, while we’re on the subject of remediation, you’ll also notice that remediation gives you less than a 2-second increase in longevity. The lesson? It’s better to prevent than to try to repair.
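If you want to play with the numbers yourself, here’s a rough simulation built from the assumptions listed above. The exact spread equation behind Figure 2-1 isn’t shown, so this sketch assumes each infected host fires its two infection attempts per second and that every attempt puts a payload on the wire whether or not the target is still vulnerable; treat it as illustrative, not definitive.

```python
# Rough simulation of the infection model described above. The spread
# equation behind Figure 2-1 isn't given, so this sketch assumes each
# infected host attempts ROI new infections per second, and that every
# attempt consumes bandwidth, vulnerable target or not.

NODES = 10_000
VULNERABLE = int(NODES * 0.20)   # 20 percent of systems are "vulnerable"
LINK_BPS = 100e6                 # 100Mbps full duplex
ROI = 2                          # rate of infection, per infected host per second
PAYLOAD_BITS = 5 * 1024 * 8      # 5KB viral payload
TTR = 2                          # seconds to repair one node

infected = 1.0
t = 0
while True:
    # Offered load this second: every infected host fires ROI payloads.
    load = infected * ROI * PAYLOAD_BITS / LINK_BPS
    if load >= 1.0:
        break
    # Spread, minus one node repaired every TTR seconds.
    infected = min(infected * (1 + ROI) - 1 / TTR, VULNERABLE)
    t += 1

print(f"Link saturated after {t} seconds with "
      f"{int(infected)} of {VULNERABLE} vulnerable hosts infected")
```

Under these assumptions the link saturates in a handful of seconds, comfortably inside the “less than a minute” shown in the chart, and well before all 2,000 vulnerable hosts are infected. Dropping the repair term barely moves the result, which is the same lesson the graph teaches: prevention beats repair.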

High-Profile Failures

In June 2005, it was discovered that 13.9 million credit card accounts were compromised at CardSystems Solutions in Tucson, Arizona. The breach affected customers of MasterCard, Visa, and Discover, and is thought to be the single largest credit card theft so far.

In December 2000, 3.7 million credit card numbers were stolen from Egghead. After an investigation, CEO Jeff Sheahan sent a letter to customers explaining that fewer than two tenths of 1 percent, less than 7,500 cards, had displayed fraudulent activity. I suppose that it was good news for the other 99.8 percent who didn’t have fraudulent charges on their bills. But what about those who did? Have you ever had to deal with a fraudulent charge on your credit card bill? A casual poll of my friends indicates that all of them have had to deal with credit card fraud, and 100 percent of them said it was a pain.

These two incidents, although years apart, have one similarity: Both companies went out of business due to the breach. Egghead was gone within six months of their breach, and as of October 2005, CardSystems was acquired by Pay By Touch. In both cases, their customers had a significant role in their respective demises. As a retail outlet, Egghead had let their customers down, and nobody wanted to take a chance buying there again. With the substantial news coverage, you would have to have lived in a cave not to know about the breach. As for CardSystems, Visa dealt the first blow by revoking their certification to process transactions. MasterCard and Discover followed suit. Although CardSystems did pass a Payment Card Industry (PCI) data security audit in August, and they did supply a Report on Compliance to Visa, MasterCard, American Express, and Discover, the damage had already been done.

What Is Being Exploited?

With few exceptions—the Bank of America (BoA) tapes come to mind—all successful attacks have been against the endpoint. In the BoA incident, a number of tapes with confidential customer information on them “disappeared.” The reality of that situation was that you need the right machine to read them and the right software to extract the data. Neither is likely found in your typical hacker environment.

CardSystems was an attack against an endpoint. Egghead was the same thing.

Bots

Botnets are the ultimate expression of the exploited endpoint. Botnets are networks composed of compromised computers that have some variant of a Trojan horse program loaded on them. There are various types of Trojan horses. Some are designed to give their owner remote access to your system, and some are designed to be tattletales reporting various bits of information back to their master. Some are just remotely triggered attack drones. All are dangerous.

Trojans get loaded in numerous ways, but most rely on the user to do something (such as clicking Yes when asked). For example, W32/Bropia arrived via Microsoft Instant Messenger. The promise was that the user was downloading a sexy image, but he was actually downloading a variant of the W32.spybot.worm.

Instead of a sexy image, the user got the picture shown in Figure 2-2, a suntanned chicken with a nasty payload. Clearly, the focus of this attack was the male user and his endpoint.

Figure 2-2. Chat users thought that they were getting a sexy bikini-clad beauty but instead got this image and a spybot Trojan.

image

A study by Earthlink came up with some grim statistics.14 The worst is that they estimate that up to 90 percent of all Internet-connected computers have some form of unwanted program (spyware, worm, Trojan) loaded and running on them. Another interesting statistic is that, on average, each machine has about 25 installations of some sort of spyware on it.

Not all spyware is completely evil. Some of it is just partially evil. I had an interesting experience recently that I think is relevant to the discussion, because it proves that there are many ways that the endpoint can become infected. I will once again pick on the marketing folks; after all, they seem such easy targets.

On a sunny day, a marketing droid and I were trapped in a conference room going over a presentation that should have been done days before. It hadn’t been, however. The marketing people didn’t like the lack of specificity that security people like to hide behind. We don’t like to say that our product will prevent breaches,15 but we’re willing to say that we can greatly reduce the possibility that a breach will occur. “Greatly reduce” doesn’t sound as sexy as “prevent,” so the marketing folks were trying to wear me down to admit that we prevent evil stuff. While we were having this discussion, we were asked to move from our bright and sunny conference room to an internal conference room with all the accoutrements of a dungeon. For some reason, the buzzing of the fluorescent lights seemed especially annoying, so my attention wandered to the fact that the notebook we were using to work on the presentation was now painfully slow. Specifically, it was slow running the presentation. Hmm. The only difference between this venue and the other conference room was that in the other room we’d been connected to the network. So, I suggested that we reconnect and try again. Zip—a fast presentation.

My steel-trap mind realized that something in the presentation was being held up by a lack of connectivity to the network. Many Microsoft programs are network aware, but I suspected something more nefarious was at work. I suggested that we disconnect from the network and test again. With the network disconnected, the computer was dirt slow at slide 10.

A hub- and sniffer-equipped notebook later and we had an answer: a graphic object that the marketing person had downloaded from the Internet was phoning home. The offending object was a cool-looking spinning checkmark that was placed in front of each bullet on our Claims of Security slide. When the notebook was connected to the Internet, the checkmark scraped the IP address of the notebook and sent it back to the mothership. The “owners” of the checkmark could then track who was downloading their code and where they were presenting it. I’m sure that this is information that many competitors would love to have. Where is your competition making presentations, and are they successful in the sale? With a little bit of correlation, this information is easy to determine.

Our security software missed the checkmark, and the marketing person was now convinced that “greatly reduce” was really pretty good wording.

Let’s get back to the endpoints and the bots loaded on them. Specifically, let’s do some math on endpoints and connections. If you have a typical home connection to the Internet, you can probably manage at least 128Kbps upload speed. Let’s also assume that your business has a T1 line giving you a 1.544Mbps connection to the Internet. That means that 12 infected home computers equipped with evil bots aimed at your corporate network could cause a denial-of-service attack. Only 12!
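The math is simple enough to sketch, using the two bandwidth figures just given:

```python
# How many 128Kbps home uplinks does it take to fill a T1?
home_upload_bps = 128_000     # typical home broadband upload speed
t1_bps = 1_544_000            # T1 capacity

bots = 12
fill = bots * home_upload_bps / t1_bps
print(f"{bots} bots fill {fill:.1%} of the T1")  # ≈ 99.5%
```

Twelve bots offer 1.536Mbps of traffic against a 1.544Mbps pipe; with legitimate traffic already on the line, your connection to the world is effectively gone.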

Now, according to Pewinternet.org, 24 percent (or almost 50 million) of Americans have a high-speed connection such as Digital Subscriber Line (DSL) or cable.16 Add in the Earthlink statistics regarding spyware and bots, and the math says that there is a hugely powerful machine sitting out there waiting to get switched on.

Predictably Poor Results

When you are looking at results, the bottom line is actually the bottom line. We know that we’re spending more money on security because we can see it. We’re trying to determine ways to convince the bean counters that we know what we’re talking about by using convoluted return on investment calculators because we keep asking for ever more money and can’t seem to slow the tide. ROI (in this case, return on investment) calculators are a valuable tool; in the security world, however, they still rely on a bit of squinting, some smoke, and a dash of faith.

Spending More Than Ever

There are a number of ways to calculate how much money is spent, but the most popular seems to be the survey of security professionals, such as CSOs and CISOs. The numbers are then anonymized, normalized, and presented as fact in an “official” report, such as those produced by the CSI/FBI and the ICSA.

I have a problem with a process that relies on subjective answers converted into facts and presented as information, especially when that information is used to convince people that they’re not spending enough money. (And yes, I do realize that I used these very reports to make my point previously, but I did more research to verify my facts.)

So, how do we find out how much money is being spent on security? We follow the money. Most of the security products are supplied by a few industry heavyweights. When folks want to find out how many computers are sold, a simple first attempt at the answer is to count how many Windows installations were sold over a given time. I propose that we use that same process to determine how much “security” we’re buying. We look at the major security vendors and see how much money they made last year!

So, who are the major security vendors, and how much did they make? Infonetics Research reported that the 2005 market leaders are the following:17,18

Cisco: 35%

Check Point: 10%

Juniper: 8%

Together, these three represent 53 percent of the total $3.6 billion hardware and software security market. According to Infonetics, the rest of the market is split between Enterasys, ISS, McAfee, Nokia, Nortel, SonicWALL, and Symantec, each of which claims between 1 percent and 6 percent of the market.

These figures don’t even factor in the impact of security services on our spending habits. A few years ago, the term managed security services (MSS) snuck into our lexicon, and another industry was born. According to the Reseller Channel,19 the MSS market drew $1.5 billion in 2002 and is expected to grow to almost $4 billion in 2006. In 2004, the MSS folks rang the bell at the $2.3 billion mark, bringing the global security market forecast as of November 2004 to $12.9 billion.20

Infonetics also reports that as of the end of the third quarter of 2006,21 network security sales are up and are projected to top $5.48 billion by 2009. Add content security devices to the mix, and the total goes up to $7.57 billion just for those two security markets.

We Have No Way to Predict Success

No matter how much we spend, security people have a simple question to answer: How do you know when you’re done? How do we know when we’re as secure as we can be and still be able to do business? The answer is that we’re never completely done, but we can be “done enough” so that we can manage to keep our heads above water long enough to not get fired. The problem is that we spend our days in firefighting mode because as much as we think we understand the problem, we really don’t. Sure, we have documents that claim to be representative of best practices, but they’re based on different levels of what is considered secure. The bad news is that when something new is introduced, “best practices” can’t help us sort out the problem.

I have a simple premise: If you can describe it and you can measure it, you can understand it.

As proof of this claim, I can point to numerous working groups that are putting together various descriptions of metrics designed to tell us how secure our networks are. Okay, so what is secure? Secure as compared to what? As you can see, a basic element of the problem is at its heart very subjective in nature. What may be secure to me may not be secure to you.

So, we’ve failed the first test because we can’t describe secure. Setting aside our inability to describe secure, let’s take a crack at measuring it. Many people say that you should measure risk. The word risk, as defined by the dictionary, is the possibility of suffering harm or loss; danger.

Security people describe risk as the probability that your network could be exploited to your detriment. They take into account the number and types of vulnerabilities, the availability of an exploit that exercises the vulnerabilities, and the types of protections that surround the vulnerable system that prevent the successful use of the exploit. But, just looking at risk in such simple terms doesn’t even begin to get at the heart of the matter. Isn’t risk a little higher if the system we’re talking about is your finance server?
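To make the intuition concrete, here is a minimal sketch of that kind of risk calculation. All the names, weights, and scales are illustrative assumptions of mine, not a standard formula; the point is only that the same technical exposure yields a different risk once asset value enters the picture.

```python
# Hypothetical risk sketch: likelihood rises with vulnerabilities and
# public exploits, falls with surrounding protections; impact scales
# with the value of the asset. Weights are invented for illustration.
def risk_score(vuln_count, exploit_available, protection_level, asset_value):
    # Crude likelihood: each vulnerability adds exposure, capped at 1.0.
    likelihood = min(1.0, vuln_count * 0.1)
    # A public exploit makes exploitation far more likely.
    likelihood *= 1.0 if exploit_available else 0.3
    # protection_level in [0, 1]: firewalls, patching, monitoring.
    likelihood *= (1.0 - protection_level)
    # Risk as expected loss, in whatever units asset_value uses.
    return likelihood * asset_value

# Same vulnerabilities, same exploit, same protections -- but the
# finance server (high asset value) carries far more risk than a kiosk:
print(risk_score(3, True, 0.5, asset_value=100))  # 15.0
print(risk_score(3, True, 0.5, asset_value=5))    # 0.75
```

Even this toy model shows why “count the vulnerabilities” is not a measurement of risk: the answer changes completely depending on what sits behind the vulnerable system.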

The bottom line in this discussion is that because we don’t understand the problem well enough, we have no way to predict success; and the converse holds, too: we can’t predict failure, either.

We’re Still Being Surprised

The news media would have you believe that the security of our computing systems is a new problem—that only recently have hackers and thieves been working to steal information from computers. Not so. Even in the old days of the mainframe, nefarious individuals found ways to break through security. That’s the reason applications such as Resource Access Control Facility (RACF) and Access Control Facility (ACF2) were developed. But the problem, and the surprise that security failed, still persists.

We can be thankful, however, that many security people aren’t surprised. They’ve lived for years with the reality that our processes are broken and our tools aren’t sufficient. Their problem is that they’re spending most of their time treading water in an attempt to keep the evil out of their networks.

Why do viruses get through if we have antivirus? If it’s not working, why buy it?

Newspaper editors have a saying: “If it bleeds, it leads.” Things make the front page because they are spectacular, they affect a lot of people, or they profoundly affect us as a society. For some reason, it seems that war, murder, rape, and political scandal garner the most attention. Lately, however, we can add identity theft to that list of banner headlines. Why? Because millions of people are affected, and there was no warning. One day, you woke up and read in your local rag that your private information is probably the topic of discussion at hacker parties and is getting a better rate of return than your savings account.

Is Something Missing?

The scientific method says that when you want to learn about something, you study it, make an attempt to understand it, postulate the outcome of a test, perform that test, and compare your answer to your postulate. If your answer is correct, you move on. If you’re like most of us, however, there will be some difference between your postulated solution and the actual answer you get from your test. That difference is due in part to the fact that there are some things that you didn’t know about, and they affected the outcome of the test.

A classic example of this process took place more than 100 years ago at Kitty Hawk, North Carolina. The Wright brothers were using a set of tables provided by Otto Lilienthal, and they based the size and shape of their wing on the information in Lilienthal’s tables.22 When the wing didn’t perform as well as the tables said it would, the Wrights did what any good engineer would do: They questioned the accuracy of the information provided to them! They built a wind tunnel and, through the scientific method, refined the equations so that they could accurately predict how a wing would perform. It was hard, and it took a lot of time, but in the end, that single decision to use the scientific process made all the difference in the world.

What Are We Doing Wrong?

As much as I hate to say it, we, the security industry, are not using sound engineering or the scientific method to figure out what is wrong. Worse yet, we continue to make the same mistakes year after year. We rely on the vendors to tell us what the solution should be instead of turning the formulation of a solution into a science.

For some strange reason, we continue to use the same failing methods to draw our conclusions. Some people say that if we eliminate the vulnerabilities, we will be secure. But, eliminating the vulnerabilities only reduces the number of attack vectors, and it still relies on someone finding security flaws in our software. In short, it’s a reactive method that has proven that it doesn’t work. Besides, any good security person will tell you that although having vulnerabilities implies that you are less secure, not having vulnerabilities does not imply that you are secure.

Have We Missed Some Clues?

In my opinion, yes, we’ve missed a clue every time there’s been a successful attack against the Internet community. I’m not just talking about single hacks against specific targets, but rather those failures that seem to indicate an endemic and ubiquitous situation.

The clues we’re missing aren’t ones that point to vulnerabilities, but they’re the clues that point to the fact that we’re continuing to do the same wrong things. We’ve also failed to gain insight from other IT solutions that have turned what they do into a science. Take the network management folks, for instance. When networks began to spread, only a few talented individuals understood how they worked. So, they were the consultants called in when the network needed tuning. They came with their tools and sniffers and watched the traffic to see what was wrong. Then, after what always seemed like an eternity, the network wizard tweaked some parameter and the network started humming again. When networks started to get big, the tools were commoditized and sold to the network engineers who were responsible for keeping the network going. It wasn’t long before someone realized that a network sniffer was a powerful tool but not one that was easily scalable, so fledgling network management systems were born.

What this points to is that someone said, “Hey, this old stuff isn’t working, and our reliability has gone to hell. Maybe we should try something different.”

What’s happened over the years is that our networks have gotten, and continue to get, more complex. But, that complexity has been hidden behind management systems and cool GUIs so it can no longer be seen. More complex architectures, more complex operating systems, and more complex applications have predictably reduced overall reliability. To illustrate my point, take a look at Figure 2-3, while keeping in mind that there is no real data in the chart. Why no data? Because the data differs for each type of solution. As parts counts go up, the failure rates multiply, thus reducing the overall reliability of the entire system. As relationships multiply, the possibility that one failure will affect multiple systems multiplies.

Figure 2-3. As complexity increases, the trend is for reliability to decrease. Systems that demonstrate excessive complexity, such as software and networks, tend to have lower reliability.

image

It’s difficult to see in our networking system, but we do feel the trends, and we do see the results every day. More cars mean more roads, and more roads mean more choices, and more choices mean more accidents when people can’t manage the complexity of traffic decisions well enough or fast enough. We know that complexity affects the reliability of systems because we have redundancy solutions to deal with the eventual failures. There is math to support this in numerous places, but the trend is simple: More complexity means less reliability. To address this, we add more complexity in parallel with the knowledge that both systems failing at the same time is a remote possibility.
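The math behind that trend is simple enough to sketch. For components in series, reliabilities multiply, so every part you add drags the system total down; redundancy works because a parallel system fails only if every redundant path fails at once. The numbers below are illustrative, not measurements from any real network.

```python
# Series vs. parallel (redundant) reliability, with made-up component
# reliabilities for illustration.

def series_reliability(*parts):
    """All components must work: reliabilities multiply."""
    total = 1.0
    for r in parts:
        total *= r
    return total

def parallel_reliability(*paths):
    """System fails only if every redundant path fails simultaneously."""
    failure = 1.0
    for r in paths:
        failure *= (1.0 - r)
    return 1.0 - failure

# Three 99%-reliable parts in series: the whole is less reliable
# than any single part.
print(series_reliability(0.99, 0.99, 0.99))   # ~0.9703

# Two 97%-reliable paths in parallel: redundancy buys reliability
# back, at the cost of added complexity.
print(parallel_reliability(0.97, 0.97))       # 0.9991
```

This is exactly the trade the chapter describes: we answer complexity-driven failure by adding more complexity in parallel, betting that both systems failing at the same time is a remote possibility.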

A note to contemplate: Complexity does not mean large. Small things can be complex because of the relationships that exist within them.

Key Points

We—and when I say “we,” I’m indicting all of us in the security world—know that we’ve had a fairly systemic failure in our security solutions that has us constantly throwing money at the problem in the hope that something will work. We fail, yet we continue to use the same method of selecting our solution: We turn to the vendors.

Malware Continues

Despite the efforts to contain and remove malware, we still see it on our networks and on the Internet. We add products and technology, and although we might stop some immediate pain, we are never sure when the next “big one” is going to hit us. We’re still losing a huge amount of money through loss of productivity, fraud, and theft, and the bleeding just goes on and on.

Vendors Aren’t Helping

Vendors continue to produce products that enable hackers, spies, and thieves to take advantage of our networks for their own purposes. Operating systems have holes large enough to drive entire databases through them with little or no warning. Vendors are pumping out products, but they continue to use the “I have a hammer, so each problem must be a nail” approach to security. Security vendor marketing departments spend more time retooling the Web site to make sure that it addresses the issue of the day instead of doing real research that may help identify what products really need to be made.

We Need to Ask Harder Questions

Simply supplying a solution is no longer adequate. Vendors need to prove that they understand what the problem is by leading by example, and we, as buyers, need to push the point. We need to ask vendors what type of SDLC they’re using. We need to ask vendors what type of source code security testing tools they’re using. We need to ask vendors how they incorporate flaw detection and remediation into their product development cycle.

Are We Missing Something?

All the evidence points to the fact that we don’t fully understand the problem. Yes, networks are complex, but they’re not so complex that we mere human beings can’t understand them if we take the time. Each time we apply the scientific method to solving a problem, we begin to make progress. Perhaps it’s time to ask this question: What can we do differently?
