1
BASIC CONCEPTS


Before you dive into vulnerability management, you should first understand some basic information about vulnerabilities. You might already be familiar with vulnerabilities and their varying risk levels. If so, consider this chapter a refresher to prepare you for the more advanced topics to come. This chapter isn’t an exhaustive primer of information security concepts, but it should be enough to ensure that the rest of the book is comprehensible.

The CIA Triad and Vulnerabilities

The three main pillars of information security are confidentiality of information (who can access data), integrity of information (who can modify data), and availability of information (whether data is available to authorized users). These three factors are known as the CIA triad. Although it isn’t a perfect model, the terms aid in discussing and categorizing security vulnerabilities.

Software, firmware, and hardware have bugs, and although not all bugs are serious, many have security implications. If you can enter improper input into a program and cause it to crash, that’s not only a bug but also a vulnerability. But if entering improper input merely changes the onscreen text color (presuming the text is still visible), that bug isn’t a vulnerability. Well, it isn’t until someone clever figures out how to leverage that bug to cause security-related issues. In short, a vulnerability is a weakness in an information system that an attacker can leverage in a way that has security implications. Typically, vulnerabilities are due to bugs, but these weaknesses can also stem from flaws in code logic, poor software design, or implementation choices.

Because a bug must have implications for the confidentiality, integrity, or availability of data—or an entire information system—to be considered a vulnerability, the major vulnerability types map directly to the CIA triad. Denial-of-service (DoS) vulnerabilities impact the availability of data: if authorized users can’t access the system, they can’t access the data either. Information disclosure vulnerabilities impact data confidentiality: they permit unauthorized users to access data that they couldn’t otherwise access. Similarly, information modification vulnerabilities allow unauthorized users to modify data, so these vulnerabilities impact data integrity.

A fourth vulnerability category involves code execution and command execution. These vulnerabilities allow attackers to execute specific commands or arbitrary code on a system. The attacker has either limited or complete access to the system, depending on the user level at which this code executes, and can affect all three portions of the CIA triad. If an attacker can run commands, that person might be able to read or modify sensitive data or even shut down or reboot the system. Vulnerabilities in this category are the most severe.

Some vulnerabilities might fit into more than one category, and the categorization (and severity) could change as attackers begin to better understand the vulnerability and exploit it more thoroughly. Because the vulnerability landscape changes constantly, you need an effective vulnerability management program to keep abreast of developments.

What Is Vulnerability Management?

Vulnerability management is the practice of staying aware of known vulnerabilities in an environment and then resolving or mitigating these vulnerabilities to improve the environment’s overall security posture. Although this definition sounds simple, it entails a number of interdependent activities. I’ll discuss each of these activities in more detail in the following chapters. For now, let’s look at the vulnerability management life cycle’s major components (see Figure 1-1).


Figure 1-1: The vulnerability management life cycle

The first step is to understand the current vulnerability environment. To do so, you need to collect data about your systems to determine the vulnerabilities that exist on them. The next step is to analyze that collected data as well as security-related data from other sources.

Your data analysis results will help you make recommendations about the actions needed to improve your security posture. These recommendations might include installing patches or applying mitigations, such as firewall rules or system-hardening techniques. The next step is to implement recommendations. Once this is complete, the cycle begins again: you collect another round of systems data and the vulnerabilities that remain after analysis and mitigation, as well as new vulnerabilities that weren’t apparent in the previous cycle.

The management process is neither short nor simple. Finding vulnerabilities can be easy, but dealing with them and improving your security baseline will be ongoing. The process will also involve many different roles and business processes throughout the organization.

Let’s look at each step in more detail.

Collecting Data

You can split the collection component into two major categories: internal and external data collection. We’ll look at each in turn.

Internal data collection involves gathering information about your organizational environment. This data includes information about the hosts on your network—endpoints and network devices—and vulnerability information about each host. Host information can come from an exploratory scan using a network-mapping tool (like Nmap), an asset database tool, or a configuration management database (CMDB). A spreadsheet that contains data about your servers and workstations won’t be sufficient: for vulnerability management to be successful, you need accurate and complete data, and a spreadsheet you create and update manually won’t reflect the actual hosts and network information that live in your environment.
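As a concrete illustration of pulling host information from an Nmap scan, the following sketch parses Nmap’s XML output. This is a minimal example under stated assumptions: it presumes you saved a ping-sweep scan with `nmap -oX`, and the embedded XML is a hand-written stand-in for real scanner output, not a guaranteed reproduction of Nmap’s full schema.

```python
import xml.etree.ElementTree as ET

# Hand-written stand-in for output from: nmap -sn -oX scan.xml 10.0.0.0/24
SAMPLE_XML = """<?xml version="1.0"?>
<nmaprun>
  <host><status state="up"/>
    <address addr="10.0.0.5" addrtype="ipv4"/>
    <hostnames><hostname name="ws01.example.local"/></hostnames>
  </host>
  <host><status state="down"/>
    <address addr="10.0.0.6" addrtype="ipv4"/>
    <hostnames/>
  </host>
</nmaprun>"""

def live_hosts(xml_text):
    """Return (ip, hostname) pairs for every host Nmap reported as up."""
    hosts = []
    for host in ET.fromstring(xml_text).iter("host"):
        if host.find("status").get("state") != "up":
            continue
        ip = host.find("address").get("addr")
        name_el = host.find("hostnames/hostname")
        name = name_el.get("name") if name_el is not None else ""
        hosts.append((ip, name))
    return hosts

print(live_hosts(SAMPLE_XML))  # [('10.0.0.5', 'ws01.example.local')]
```

In practice you would feed the parsed hosts into an asset database rather than printing them, but the principle is the same: automated collection, not manual spreadsheet upkeep.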

Vulnerability data comes from one source: vulnerability scanners. These tools discover vulnerabilities by interacting with devices, either through network-based scans or host-based agents. Network scanners reach out to every IP address within a range, or a specific list of IPs, to determine which ports are open, which services are running on those ports, the operating system (OS) versions and relevant configurations, and the software packages running on each device. Host-based agents instead query the system directly to determine running services and version information. Both approaches have benefits and drawbacks, which I’ll discuss in more detail in Chapter 3.

The internal data you collect quickly becomes stale—this is especially true of vulnerability information—so you must gather it regularly. Even though you might not add or remove hosts frequently, vulnerability information changes daily: people install new software packages or perform updates, and new vulnerabilities are discovered and publicly disclosed. Regular scanning, combined with routine scanner updates to incorporate new vulnerability information, ensures that you have accurate and complete data about your current environment. On the downside, regular scanning can have negative side effects, such as added network load or disruption to fragile devices. You must balance this risk against the importance of having accurate vulnerability data. I’ll discuss this trade-off in Chapter 2.
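One simple way to act on the staleness problem is to timestamp every collection run and flag records older than your scan cadence. The sketch below assumes a weekly schedule and hypothetical record fields (`host`, `last_scanned`); it isn’t tied to any particular scanner’s output format.

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=7)  # assumed weekly scan cadence

def stale_records(records, now=None):
    """Return records whose last_scanned timestamp exceeds MAX_AGE."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["last_scanned"] > MAX_AGE]

now = datetime(2024, 6, 15, tzinfo=timezone.utc)
inventory = [
    {"host": "10.0.0.5", "last_scanned": datetime(2024, 6, 14, tzinfo=timezone.utc)},
    {"host": "10.0.0.9", "last_scanned": datetime(2024, 5, 1, tzinfo=timezone.utc)},
]
print([r["host"] for r in stale_records(inventory, now)])  # ['10.0.0.9']
```

A stale record doesn’t necessarily mean the host changed, only that you can no longer trust the data about it.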

Other advanced data sources, such as network configurations, are potentially useful in your analyses but outside the scope of this guide. The same warning applies to them, though: if the information isn’t recent and thorough, your entire analysis is less useful to you. Fresh data is good data.

External data collection encompasses the data sources that come from outside your organization. This information includes public vulnerability details, embodied by the constantly growing mass of Common Vulnerabilities and Exposures (CVE) data that NIST (the National Institute of Standards and Technology) provides through the National Vulnerability Database (NVD); public exploit information from the Exploit Database and Metasploit; additional vulnerability, mitigation, and exploit detail from open sources like CVE Details (https://cvedetails.com/); and any number of proprietary data sources, such as threat intelligence feeds.

Although this information comes from outside your organization, you can still remain up-to-date at all times by either querying online sources directly or keeping local data repositories. Unlike internal data collection, which might cause issues in your environment, collecting data from third-party sources is as easy as reaching out and getting it. So you have no reason—except perhaps to save data transfer costs—not to update these sources daily, or even to keep a live connection in the case of threat intelligence feeds.
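Once external CVE data lands in a local repository, you’ll typically reduce each record to the handful of fields your analysis needs. The sketch below works over a hand-written record shaped loosely like an NVD JSON entry; the field names are an approximation for illustration, not a guaranteed schema, so check the current NVD documentation before relying on them.

```python
# Hand-written record shaped loosely like an NVD JSON entry; field
# names are illustrative assumptions, not NVD's exact schema.
SAMPLE_CVE = {
    "id": "CVE-2021-0001",
    "published": "2021-03-01T00:00:00",
    "metrics": {"cvssV3": {"baseScore": 9.8, "baseSeverity": "CRITICAL"}},
    "descriptions": [{"lang": "en", "value": "Example remote code execution flaw."}],
}

def summarize(cve):
    """Pull out the fields a vulnerability analysis typically needs."""
    cvss = cve.get("metrics", {}).get("cvssV3", {})
    desc = next((d["value"] for d in cve.get("descriptions", [])
                 if d.get("lang") == "en"), "")
    return {"id": cve["id"], "score": cvss.get("baseScore"),
            "severity": cvss.get("baseSeverity"), "summary": desc}

print(summarize(SAMPLE_CVE))
```

Storing summaries like this alongside your internal host data is what makes the analysis phase possible.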

Analyzing Data

Once you’ve collected internal and external data, you need to analyze this data to gain useful vulnerability intelligence about your environment.

Vulnerability information alone, as anyone familiar with a scanner report can tell you, is overwhelming for any environment larger than a few devices. Scanners will find many vulnerabilities on nearly every device, and separating important vulnerabilities from the unimportant ones can be difficult. Worse, if all you have is a thousand-page scanner report, you’ll have a hard time deciding which remediation tasks to assign to an already overworked systems administrator.

You can approach this problem in two ways. One way is to reduce the list of vulnerabilities to a more manageable length, known as culling. The other is to rank the vulnerabilities in order of importance, known as ranking.

Culling is straightforward: it’s a binary yes-or-no decision you make on every vulnerability. The criterion for accepting a vulnerability might be, for example, that the vulnerability is newer than a certain date, that known exploits exist, or that it’s remotely exploitable. You could also combine any number of these binary filters to cull the list even further. Only if a vulnerability matches the criteria would you take the time to analyze it further.
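Combining binary filters like these takes only a few lines of Python. The record fields (`published`, `exploit_available`, `remote`) and the cutoff date are hypothetical names chosen for illustration, not a scanner’s actual schema.

```python
from datetime import date

# Each filter is a yes-or-no test; a vulnerability survives the cull
# only if it passes every filter in the list.
FILTERS = [
    lambda v: v["published"] >= date(2020, 1, 1),  # newer than a cutoff
    lambda v: v["exploit_available"],              # a known exploit exists
    lambda v: v["remote"],                         # remotely exploitable
]

def cull(vulns, filters=FILTERS):
    return [v for v in vulns if all(f(v) for f in filters)]

vulns = [
    {"cve": "CVE-2021-0001", "published": date(2021, 3, 1),
     "exploit_available": True, "remote": True},
    {"cve": "CVE-2018-0002", "published": date(2018, 5, 9),
     "exploit_available": True, "remote": True},
]
print([v["cve"] for v in cull(vulns)])  # ['CVE-2021-0001']
```

Adding or removing a lambda from the list tightens or loosens the cull without touching the rest of the logic.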

Ranking requires a criterion that uses some sort of scale. For instance, you could rank a set of vulnerabilities based on their effects on confidentiality, integrity, or availability. Or you could use the Common Vulnerability Scoring System (CVSS), a 0-to-10 scale that takes into account a vulnerability’s severity along all three of the CIA triad’s axes. If you have a strong understanding of your organization’s risk landscape, you might have your own scoring system that focuses on internally developed risk metrics.

Although these two methodologies have different focuses, you can convert between them. You can use a binary categorization, such as exploitability, to rank rather than to cull, resulting in a list that is split into two groups. Conversely, you can use a ranking metric to cull by setting a threshold. For example, you could set a culling threshold of a CVSS score of 5 and ignore any vulnerability with a lower score. Given a metric for categorizing vulnerabilities, you should then decide whether to use that metric for ranking, for culling, or for both.

Because culling results in a smaller dataset to analyze, whereas ranking is an analysis method in itself, consider using both. By first culling the vulnerability set, you can limit your subsequent analysis to vulnerabilities that you must address, which makes analysis faster and more relevant. Once you identify the most critical vulnerabilities, you can rank the remaining vulnerabilities to more easily determine their relative significance.

In this book’s scripts, I use a simple cull-rank profile, which you can modify or replace based on your organization’s needs. This profile uses the CVSS score and exploitability as metrics (see Figure 1-2).


Figure 1-2: A simple cull-rank profile to filter important vulnerabilities

You first cull vulnerabilities with a low CVSS score because they’re not severe enough to analyze further. Next, you rank the remaining vulnerabilities by exploitability and then by CVSS score, from high to low. You combine this list with the asset list. Then rank the resulting list first by the number of exploitable vulnerabilities per system and then by the total severity of vulnerabilities found on the system. The resulting list shows the systems with the highest risk at the top.
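In the spirit of this book’s scripts, here is a simplified Python sketch of that cull-rank profile. It is a stand-in, not the book’s actual code: the threshold, field names, and sample findings are all illustrative assumptions.

```python
CVSS_CULL_THRESHOLD = 5.0  # assumed cutoff: drop anything below this score

def cull_rank(findings):
    """findings: list of {host, cve, cvss, exploitable} dicts.
    Returns hosts ranked by exploitable-vulnerability count, then by
    total CVSS severity, highest risk first."""
    kept = [f for f in findings if f["cvss"] >= CVSS_CULL_THRESHOLD]
    # Rank the surviving vulnerabilities: exploitable first, then by CVSS.
    kept.sort(key=lambda f: (f["exploitable"], f["cvss"]), reverse=True)
    # Combine with the asset list: aggregate per host.
    hosts = {}
    for f in kept:
        h = hosts.setdefault(f["host"], {"exploitable": 0, "severity": 0.0})
        h["exploitable"] += int(f["exploitable"])
        h["severity"] += f["cvss"]
    # Rank hosts by exploitable count, then by total severity.
    return sorted(hosts.items(),
                  key=lambda kv: (kv[1]["exploitable"], kv[1]["severity"]),
                  reverse=True)

findings = [
    {"host": "10.0.0.5", "cve": "CVE-2021-0001", "cvss": 9.8, "exploitable": True},
    {"host": "10.0.0.5", "cve": "CVE-2020-0002", "cvss": 6.5, "exploitable": False},
    {"host": "10.0.0.9", "cve": "CVE-2019-0003", "cvss": 4.0, "exploitable": True},
    {"host": "10.0.0.7", "cve": "CVE-2020-0004", "cvss": 7.2, "exploitable": False},
]
print([host for host, _ in cull_rank(findings)])  # ['10.0.0.5', '10.0.0.7']
```

Note that the host at 10.0.0.9 drops out entirely at the cull stage, even though its one finding is exploitable; whether that trade-off is acceptable depends on your own risk profile.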

Applying Cull-Rank to a Real-World Example

Let’s look at an example of how the cull-rank analysis process might work in a real-world scenario. Say you just ran a vulnerability scan against your main end-user network segment: a Class C network with 256 total addresses, of which 254 are usable. You know the segment includes numerous Windows hosts as well as a handful of printers and miscellaneous devices. The scan results show approximately 2,000 total vulnerabilities spread across 84 devices.

You work through the list and cull vulnerabilities with a CVSS score less than 5, cutting your list to about 500 vulnerabilities on 63 devices. At this point, you have only 38 unique vulnerabilities—most of the vulnerabilities exist on multiple hosts—which means you only need to look at each of those 38 vulnerabilities once. By this measure, you’ve already cut the list of items to investigate by about 92 percent. To determine which of the remaining vulnerabilities you need to investigate, you’ll apply several rankings.

First, find out whether any of these 38 unique vulnerabilities have publicly known exploits. If they do, you need to address those vulnerabilities first. Second, establish what the CVSS severity of each vulnerability is. Higher severity means greater consequences of compromise, so you should focus on the more severe vulnerabilities.

Before you execute the third ranking methodology, look at what you have so far. Of your 38 unique vulnerabilities, 3 have known exploits, and the remaining 35 have been sorted in order of CVSS severity.

Now you can apply the final ranking: combine the list of vulnerabilities with the actual vulnerable hosts. For each host, determine how many vulnerabilities it has and the severity of those vulnerabilities. Once you’ve done this, you’ll have a clear picture of where you need to focus your remediation efforts.

In this example, among those 63 hosts with vulnerabilities, 48 have one or two vulnerabilities with a severity no higher than 7, whereas 11 have up to 15 vulnerabilities each, with one or two in the critical range (a CVSS score of 9.0 or higher). The last four hosts account for the rest of those 500 vulnerabilities: scores of vulnerabilities on each host, including all three exploitable vulnerabilities! Clearly these systems need heavy remediation, and you have a good argument for addressing the situation immediately.

Making Recommendations

Now that you have a list of hosts and vulnerabilities that is sorted by risk to your organization, the next step is to recommend actions to remediate the vulnerabilities. You’ll start with the highest risk and work your way down the list. If you’re working in a small environment, you might be responsible for this step; in a larger organization, this step might consist of a longer process that involves working with system and application owners as well as other stakeholders.

The two major types of remediation are patching and mitigation. Patching is simple: you apply the patch that resolves the vulnerability in question. Mitigation is more complex and is context dependent.

If a patch isn’t available, or if it’s infeasible to apply one, you need to look at other ways to address the risk. Perhaps changing a configuration will prevent a specific vulnerability from being exploited. Perhaps the vulnerable service isn’t needed outside specific IP ranges, so you can reduce its exposure with firewall rules or router access control lists (ACLs). Perhaps an existing intrusion detection system (IDS) or intrusion prevention system (IPS) needs additional rules to detect and block attempts to exploit that specific vulnerability. All of these are examples of vulnerability mitigation, and the correct response will depend on your environment.

Implementing Recommendations

With recommendations in hand, you can finally approach the system and application owners to suggest they implement the proposed remediation actions. If they were involved in the recommendation process, this step should be straightforward. If the recommendations are unexpected, you’ll need to explain the security risks and the reasons for the recommendations you’ve developed. I’ll discuss this process in Chapter 6. At this stage, you should all agree on a timeframe for the implementation.

Once those responsible have implemented the recommendations—via patching or mitigation—the final step is to verify that the changes have been made and are effective. Because mitigating controls vary widely, determining that they’re in place and effective is largely a manual process. But with patching, you can verify the changes by scanning again to see whether the vulnerabilities still exist. This returns you to the first phase—collecting data. The cycle starts over, and the new scans will validate remedial actions and discover new vulnerabilities.
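Verifying patching results by rescanning amounts to comparing two scan snapshots. A minimal sketch, assuming each scan is reduced to a set of (host, CVE) pairs (an assumed representation, not a scanner’s native format):

```python
def remediation_diff(before, after):
    """before/after: sets of (host, cve) pairs from successive scans.
    Returns what was fixed and what is newly discovered."""
    return {"fixed": before - after, "new": after - before}

before = {("10.0.0.5", "CVE-2021-0001"), ("10.0.0.7", "CVE-2020-0004")}
after = {("10.0.0.7", "CVE-2020-0004"), ("10.0.0.9", "CVE-2022-0009")}
diff = remediation_diff(before, after)
print(diff["fixed"])  # {('10.0.0.5', 'CVE-2021-0001')}
print(diff["new"])    # {('10.0.0.9', 'CVE-2022-0009')}
```

Anything in the "fixed" set confirms a successful remediation; anything in the "new" set feeds the next pass through the life cycle.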

Vulnerability Management and Risk Management

Vulnerability management is closely tied to the enterprise’s risk management goals. This technical guide doesn’t focus on information risk management as a whole, but it’s important to understand how vulnerability management fits into risk management. Without a functional vulnerability management program, the enterprise’s IT risk management goals will be difficult, if not impossible, to achieve.

The overall IT risk management framework is similar to vulnerability management. Generally, the IT risk management stages are to identify critical assets, identify and rank risks, identify controls, implement controls, and then monitor the controls’ effectiveness. Risk management is also a continual process rather than a one-time event with a defined endpoint. So where does vulnerability management fit into this process?

Different phases of vulnerability management map to different phases of the risk management process (see Table 1-1). For instance, identifying assets in the risk management framework is directly related to collecting asset and vulnerability data.

Table 1-1: Mapping Vulnerability Management to IT Risk Management

Vulnerability management       IT risk management
Collect data                   Identify critical assets
Analyze data                   Identify and rank risks
Make recommendations           Identify controls
Implement recommendations      Implement controls
(Collect data)                 Monitor controls

But these mappings are only part of the process. Vulnerability-related risks discovered through the vulnerability management process might lead an organization to consider controls that don’t directly resolve the vulnerabilities, such as implementing a protocol-aware firewall. Although a measure like that would be effective against certain exploits, it would also mitigate various other risk types. In addition, regular vulnerability management data collection is useful not only for identifying assets and risks but also for monitoring the controls’ effectiveness. For example, you implement a firewall as a control, but the next scan indicates that it’s misconfigured and not filtering the traffic it’s intended to block.

Because this guide isn’t an information risk management cookbook, we’ll leave this discussion here and continue to an in-depth exploration of vulnerability management. But if you’re interested in understanding information risk management methodology and procedures, I recommend looking into NIST 800-53 and ISO/IEC 27003, ISO/IEC 27004, and ISO/IEC 27005. You can find each with a Google search.

Summary

This chapter provided you with a crash course in vulnerability management and its place in the larger IT risk management framework. You learned about the general vulnerability management process that you’ll follow throughout the remainder of this book and previewed the steps to take once you have actionable vulnerability intelligence.

In the next chapter, we’ll look more closely at the vulnerability management process and get a step closer to implementing your own vulnerability management system.
