4
AUTOMATING VULNERABILITY MANAGEMENT

In this chapter, you’ll learn how to programmatically compile your data sources to provide vulnerability prioritization and validation. As a result, you’ll save time for more important work, such as improving your organization’s security, rather than going cross-eyed staring at huge vulnerability data dumps.

Understanding the Automation Process

Automating your vulnerability management program consists of correlating information from the three main data sources—asset, vulnerability, and exploit information—as well as any additional accessible data sources. For a refresher on these data sources, refer to Chapter 2.

Information is correlated through two shared fields: IP addresses, which asset and vulnerability data have in common, and CVE/BID IDs (BID is short for Bugtraq ID), which vulnerability and exploit data have in common. First, you use the IP address to correlate assets with vulnerabilities; then you use the CVE ID to correlate vulnerabilities with exploits. The result is a useful database that provides a list of exploits per host, hosts per exploit, and more. Figure 4-1 shows this step-by-step process.

Figure 4-1: Correlating information to produce a useful database for vulnerability analysis
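
To see these two correlation steps concretely, consider the following minimal sketch in Python. The sample records and field names (ip, cve, and so on) are invented for illustration; the real asset, vulnerability, and exploit data comes from the sources described in Chapter 2.

# A minimal sketch of the two correlation steps, using invented sample data.
# Real asset, vulnerability, and exploit records come from your scanners;
# the field names here are illustrative only.
assets = [{"ip": "10.0.0.5", "os": "Ubuntu 18.04"}]
vulns = [{"ip": "10.0.0.5", "cve": "CVE-2017-0144", "cvss": 8.1}]
exploits = [{"cve": "CVE-2017-0144", "module": "ms17_010_eternalblue"}]

# Step 1: correlate assets with vulnerabilities on the shared IP address field.
vulns_by_ip = {}
for vuln in vulns:
    vulns_by_ip.setdefault(vuln["ip"], []).append(vuln)

# Step 2: correlate vulnerabilities with exploits on the shared CVE ID field.
exploits_by_cve = {}
for exploit in exploits:
    exploits_by_cve.setdefault(exploit["cve"], []).append(exploit)

# Result: a per-host view listing each vulnerability and any known exploits.
for asset in assets:
    for vuln in vulns_by_ip.get(asset["ip"], []):
        matches = exploits_by_cve.get(vuln["cve"], [])
        print(asset["ip"], vuln["cve"], [m["module"] for m in matches])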

Each step, except for Results, breaks down into two substeps: collecting the data, and correlating and analyzing it. You'll need to get all the data into one place before you can start analyzing it. In this book, you'll use MongoDB, a document-based database that excels at speedy queries over large volumes of data. But you can also accomplish this process with more traditional SQL databases by replacing the Mongo-specific code in the upcoming scripts with SQL connections and queries.
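
As a rough idea of what the Mongo-specific pieces look like, the following sketch uses pymongo against a local MongoDB instance; the database, collection, and field names are hypothetical, and the scripts in Part II define their own schema.

# A sketch of the Mongo side, assuming a local mongod and hypothetical
# collection names; the scripts in Part II define their own schema.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["vulnmgmt"]

# Import one asset document and one vulnerability document.
db.hosts.insert_one({"ip": "10.0.0.5", "os": "Ubuntu 18.04"})
db.vulnerabilities.insert_one({"ip": "10.0.0.5",
                               "cve": "CVE-2017-0144",
                               "cvss": 8.1})

# Join the two collections on the shared IP field (the Mongo equivalent
# of a SQL JOIN) to list each host with its vulnerabilities.
pipeline = [
    {"$lookup": {
        "from": "vulnerabilities",
        "localField": "ip",
        "foreignField": "ip",
        "as": "vulns",
    }}
]
for host in db.hosts.aggregate(pipeline):
    print(host["ip"], [v["cve"] for v in host["vulns"]])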

In each step of the process, you’ll collect the relevant data, import it into your Mongo database, and perform appropriate analysis at that stage, before assimilating the next set of data. Once you’ve completed this process on your own, you’ll find that certain levels of analysis are more useful to you than others. You’ll then be able to streamline your process to highlight those analyses and downplay or set aside the rest.

Data Collection

In the first stage of the process, asset data analysis, you find assets, their network information, and the OS running on each asset.
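
For example, if you collect asset data with Nmap's XML output (nmap -oX), a small parser like the following sketch can turn it into asset records. The XML element names are standard Nmap output, but the resulting field names are illustrative rather than the exact schema used later in the book.

# A sketch of turning Nmap XML output into simple asset records.
import xml.etree.ElementTree as ET

def parse_nmap_xml(path):
    assets = []
    root = ET.parse(path).getroot()
    for host in root.findall("host"):
        addr = host.find("address[@addrtype='ipv4']")
        if addr is None:
            continue
        asset = {"ip": addr.get("addr"), "ports": [], "os": None}
        # Record each open TCP/UDP port reported for the host.
        for port in host.findall("./ports/port"):
            state = port.find("state")
            if state is not None and state.get("state") == "open":
                asset["ports"].append(int(port.get("portid")))
        # Keep the first OS match, if OS detection (-O) was enabled.
        osmatch = host.find("./os/osmatch")
        if osmatch is not None:
            asset["os"] = osmatch.get("name")
        assets.append(asset)
    return assets

# Example usage: assets = parse_nmap_xml("scan.xml")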

Once you add in vulnerability data, you’ll match vulnerabilities with specific assets and pinpoint hosts that are in the greatest need of vulnerability remediation. Important data points at this second stage include CVSS score, which describes the overall severity of the vulnerability; attack vectors—whether the exploit is local, remote, and so on; and specific consequences of exploitation, such as DoS or root code execution.

Next, you add exploit data to further prioritize among vulnerable hosts, highlighting hosts that are vulnerable to known exploits and hence at greater risk of exploitation by malicious actors. At each stage of the analysis process, you can generate reports with useful security-related information, which Table 4-1 summarizes.

Table 4-1: Data Sources and Their Potential Analyses

Asset data
    Asset summary: a report on assets, their OS, open ports, and networking information

Vulnerability data
    Vulnerability summary: discovered vulnerabilities on an asset or set of assets
    Vulnerability prioritization by CVSS, attack vectors, consequences: the same report as above but filtered to look for specific vulnerability types

Exploit data
    Exploit matching and further vulnerability prioritization: a report focusing on exploitable vulnerabilities or those with certain exploitability characteristics
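
For instance, the vulnerability prioritization row of Table 4-1 might translate into a query like the following sketch, which keeps only network-exploitable findings with a CVSS score of 7.0 or higher. The collection name, field names, and threshold are assumptions you'd adjust to your own criteria.

# A sketch of second-stage prioritization: keep only high-severity,
# remotely exploitable findings. Field names and the 7.0 cutoff are
# illustrative assumptions.
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["vulnmgmt"]

high_risk = db.vulnerabilities.find(
    {"cvss": {"$gte": 7.0}, "attack_vector": "NETWORK"}
).sort("cvss", -1)

for vuln in high_risk:
    print(vuln["ip"], vuln["cve"], vuln["cvss"])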

The two processes described in Chapter 1—culling and ranking—can take place at any of these stages, depending on the criteria you’re using. For instance, an IP-based cull could take place as soon as you have asset data. On the other hand, prioritizing based on CVSS can’t take place until you have vulnerability data.

By culling early, you can limit analysis work. But for simplicity of analysis, it’s easiest to do the culling and ranking steps in one place, once you have all the relevant data. That way, if you want to change your analysis priorities, you can change your criteria in one place rather than looking in multiple scripts from different phases of the vulnerability management process.

Once you’ve combined the datasets and applied prioritization rules, you have a finished product: a list of hosts with relevant vulnerabilities per host, sorted with the highest risk hosts/vulnerabilities at the top.
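
As a rough illustration of doing the culling and ranking in one place, the following sketch culls findings to an in-scope network and then ranks what remains, exploitable vulnerabilities first and then by descending CVSS score. The network range, field names, and sample records are illustrative only.

# A sketch of culling and ranking in one place. The in-scope network,
# field names, and ranking key are illustrative assumptions.
import ipaddress

findings = [
    {"ip": "10.0.0.5", "cve": "CVE-2017-0144", "cvss": 8.1, "exploits": 1},
    {"ip": "192.168.1.7", "cve": "CVE-2019-0708", "cvss": 9.8, "exploits": 1},
    {"ip": "10.0.0.9", "cve": "CVE-2020-1938", "cvss": 9.8, "exploits": 0},
]

# Cull: keep only hosts inside the in-scope network.
in_scope = ipaddress.ip_network("10.0.0.0/24")
culled = [f for f in findings
          if ipaddress.ip_address(f["ip"]) in in_scope]

# Rank: exploitable findings first, then by descending CVSS score.
ranked = sorted(culled,
                key=lambda f: (f["exploits"] > 0, f["cvss"]),
                reverse=True)

for f in ranked:
    print(f["ip"], f["cve"], f["cvss"],
          "exploit available" if f["exploits"] else "")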

Automating Scans and Updates

You can gather all the information discussed so far manually. For instance, you can run ad hoc Nmap and vulnerability scans and manually look up information about known exploits. But you experience the real power of a vulnerability management system when you automate these steps. You won’t need to remember to run scans when you set up the system to automatically start them at regular intervals. Most likely, you’ll scan during off hours when any additional load on the systems won’t cause performance issues. The scans will then generate updated reports, which are emailed or placed in a shared network location for perusal at your convenience.

By scheduling scans to run on a regular basis and then automatically importing the results into your database, you keep your vulnerability information up-to-date. This process lets you safely automate reporting, because the weekly generated reports use fresh data. Similarly, by periodically updating your other data sources, such as Metasploit and the cve-search database, you can be confident that the third-party data you draw upon in your reports is also current.

In the scripts in Part II of this book, you’ll leverage the standard Linux/Unix scheduling utility—the cron daemon—to automate the collection and the analysis of your vulnerability data. To coordinate all of the tasks, from data collection to report generation, you’ll use shell scripts to run your Python scripts in sequence. By doing so, you’ll prevent, for instance, the reporting script from running while the scanners are still collecting data about the environment. These scripts use a one-week interval, but your organization’s collection and reporting interval will depend on how often you need a fresh view of your organization’s vulnerability landscape.
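
Part II handles that sequencing with a shell wrapper script scheduled by cron; as a sketch of the same idea in Python, the following hypothetical runner executes each stage in order and stops if one fails, so reporting never runs against partial data. A cron entry would simply invoke this runner at your chosen weekly time.

# A sketch of the weekly run: execute each stage in order so reporting
# never starts while collection is still running. The script names are
# hypothetical; Part II uses a shell wrapper for the same job.
import subprocess
import sys

STAGES = [
    ["python3", "run-nmap-scan.py"],
    ["python3", "run-vuln-scan.py"],
    ["python3", "import-to-mongo.py"],
    ["python3", "generate-report.py"],
]

for stage in STAGES:
    result = subprocess.run(stage)
    if result.returncode != 0:
        # Abort the pipeline so later stages don't report on stale
        # or partial data.
        sys.exit(f"stage failed: {' '.join(stage)}")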

Exploiting Your System’s Vulnerabilities

At this point in your analysis, you have a regularly updated enterprise view that includes hosts, known vulnerabilities on those hosts, and any related known exploits that you could use against those hosts. From here, you can provide prioritized vulnerability information to system and application owners. You can also go one step further and attempt to exploit these vulnerabilities.

The first option is already a successful outcome of the vulnerability management process. The second option looks at the exploitable vulnerabilities list and runs a penetration test against affected hosts to determine whether they’re exploitable. If successful, this option provides an additional level of prioritization to the results: not only is a system in principle exploitable, but it has been exploited.

There are two ways to attempt to exploit vulnerabilities. First, you can use a human penetration tester, either a security analyst with penetration testing skills or an outside auditor. Second, you can extend your automation by bringing Metasploit back into the process. Now, instead of just getting a list of exploits from it, you’ll automate it to exploit those potentially exploitable hosts. This might seem like an excellent option or it might seem very frightening, depending on your perspective. Both perspectives are valid.

For those security analysts who have already seen the value of automating the vulnerability process, attempting to exploit your system might seem like a logical next step. You have a list of exploits and a list of vulnerable hosts, so why not check them out?

For more cautious analysts, exploiting their systems looks like a recipe for disaster. Running live exploits in a production environment is even more unpredictable than running scans: hosts could be taken down, networks could be clogged, and with only an automated system to blame, real heads might roll.

As with the rest of your security program, your decision depends on what you’re trying to accomplish and your organization’s risk tolerance. If your organization would rather incur a DoS attack than get attacked by way of an unpatched exploitable vulnerability, perhaps automated exploitation attempts are an option. On the other hand, if you’re in a more risk-averse environment, tread very carefully: be sure to have full buy-in and acknowledgment of the risks from your CIO or the equivalent executive.

I’ll briefly discuss how to integrate Metasploit into your vulnerability management program in this fashion in Chapter 14. But the actual process—particularly automation—will be left to you as an exercise. Automation is a powerful tool, but you must temper it with skill and extreme caution.

Summary

In this chapter, you learned how to take the raw vulnerability information from your scanners and shape it into usable intelligence. By combining data from your scanners with information about your network, additional information sources, and exploitability information, you can prioritize the vulnerabilities and focus on remediating the most severe issues.

In the next chapter, you’ll learn how to remediate by patching and mitigating vulnerabilities as well as effecting systemic change to improve your organization’s security posture.
