14
ADVANCED TOPICS


You now have a fully functional and automated vulnerability management system. But projects like building this system are never actually finished. This chapter contains several ideas to enhance your system, including a simple integration API, automated penetration testing of known vulnerabilities, and moving the system into cloud environments. Only the first script is a hands-on exercise: the rest discuss options and possibilities but leave the implementation details to you.

Building a Simple REST API

To get data from your vulnerability management system into another tool or to integrate your system into a third-party automation or orchestration product, you could do periodic database dumps, output reports in a format those tools can ingest, or write an API. If the destination tool supports API integration, using an API is a good solution. In this section, we’ll look at building a simple representational state transfer (REST) API from scratch. But first, let’s look at what a REST API is.

An Introduction to APIs and REST

Programmatic interfaces (shared boundaries between system components that are accessed using programs) provide a consistent method for programs to interact with each other and with the host OS. When you use an API, you don’t need to understand the inner workings of the application you’re communicating with; you just need to know that if your program writes this message to that location, the receiving system will understand and respond with a response of that type. Abstracting the inner workings behind an interface that remains consistent, no matter how the infrastructure behind that interface might change, greatly simplifies software development and interoperation. The reason is that programs can evolve independently while retaining a common communication language.

REST defines a class of APIs that communicate over the internet by reading from and writing to an unknown database (or other arbitrary storage system). A full REST API supports all database operations: creating, reading, updating, and deleting records (commonly called CRUD). The HTTP methods POST, GET, PUT (sometimes PATCH), and DELETE implement their respective CRUD operations, as shown in Table 14-1.

Table 14-1: HTTP Methods Mapped to CRUD Actions

Method

Action

GET

Get (read) the contents of a record (or information about multiple records)

POST

Create a new record

PUT/PATCH

Update an existing record or create one if it doesn’t yet exist

DELETE

Delete an existing record

To use the API, the client sends an HTTP request using the appropriate method to a URL (technically, a uniform resource identifier, or URI) that specifies the record or records to act on. For example, I send a GET request to http://rest-server/names/ to tell the REST API to send back a list of names (commonly in XML or JSON). A GET request to http://rest-server/names/andrew-magnusson/ returns more information about the name record for “Andrew Magnusson.” A DELETE request to that same address tells the remote system to delete my name record.

The address in a URI, unlike one in a standard web URL, doesn’t point to a consistent web location. Instead, it points to an API endpoint: an interface for a program running on the server side that the REST client uses to send the appropriate HTTP method to perform CRUD actions.
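To make this mapping concrete, here's a sketch of a Python client that builds CRUD requests against the hypothetical rest-server and names collection from the example above, using only the standard library. The server name and URL layout are assumptions for illustration, not a real service:

```python
import json
import urllib.request

BASE = 'http://rest-server'  # hypothetical server from the example above

def build_request(method, collection, record=None, body=None):
    """Build one HTTP request for a CRUD action on a REST collection."""
    url = f'{BASE}/{collection}/' + (f'{record}/' if record else '')
    data = json.dumps(body).encode() if body is not None else None
    return urllib.request.Request(url, data=data, method=method)

# Read the whole collection, read one record, then delete that record:
list_req = build_request('GET', 'names')
detail_req = build_request('GET', 'names', 'andrew-magnusson')
delete_req = build_request('DELETE', 'names', 'andrew-magnusson')

# Actually sending a request is one more line, for example:
# with urllib.request.urlopen(detail_req) as resp:
#     record = json.loads(resp.read())
```

Note that the client code never changes when the server's storage backend does; only the method and URI carry meaning across the interface.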

Designing the API Structure

Think about what you need your vulnerability management API to do. How many of the CRUD actions will you support? In simple-api.py, I implement only GET (read existing records), the simplest and safest method; all a client can do is request data that’s already in the database. Our vulnerability management system updates itself internally, so there’s no need for external systems to make changes to the database. If you want external systems (particularly automation or orchestration routines) to modify the vulnerability database, you can implement POST, PUT/PATCH, and DELETE methods.

You also need to consider what data the API clients should have access to. Your vulnerability management database contains a list of hosts with associated details, a list of discovered vulnerabilities with their own details, and a mirror of the CVE database provided by cve-search. We don’t need to provide the CVE database contents with our API because it’s publicly available. If other tools need this information, there are easier ways for them to get it than by querying your API. But it makes sense to expose host and vulnerability information that is specific to your network and most likely can’t be found anywhere but the vulnerability management system.

The simple-api.py script implements four endpoints for the hosts and vulnerabilities collections, accessible only via the GET method. Table 14-2 lists the details of each endpoint.

Table 14-2: API Endpoints and Their Function

Endpoint

Action

/hosts/

Returns a JSON-formatted list of IP addresses in the database

/hosts/<ip address>

Returns JSON-formatted host details for the provided IP address, including a list of CVEs it’s vulnerable to

/vulnerabilities/

Returns a JSON-formatted list of CVE IDs in the vulnerabilities database; that is, CVEs that currently affect hosts in the system

/vulnerabilities/<CVE ID>

Returns JSON-formatted details for the provided CVE ID, including a list of IP addresses that are vulnerable

If any other URI paths are requested from the server where you host your API, the script returns a JSON document containing a key-value pair in the form {'error': 'error message'} and an HTTP status code. An HTTP status code of 2xx indicates success, and the 4xx series refers to a variety of errors (for example, 404 “page not found”). For purely whimsical reasons, I decided to make all the errors return code 418, which in HTTP unofficially means (I’m not making this up) “I’m a teapot.” Feel free to use a different error code in your script.

Implementing the API

Instead of building the entire API in a single main() function, we’ll split the script into logical functions:

main() Starts the server instance and tells Python to handle all requests via SimpleRequestHandler.

SimpleRequestHandler A custom class that inherits from the http.server.BaseHTTPRequestHandler class and overrides the do_GET method that parses the request URI for GET requests. It either returns an error or passes control to the database lookup functions that handle requesting and parsing data from Mongo. Other HTTP methods, such as POST and PUT, have no do_POST or do_PUT handlers defined, so the base class automatically returns an error for them because they’re not supported.

Database lookup functions There are four of these, one for each endpoint. Each one performs Mongo queries and returns the data to SimpleRequestHandler in a JSON document, as well as a response code in the case of errors.

We’ll look at each section in order, starting with the Python headers and the main() function in Listing 14-1.

#!/usr/bin/env python3

import http.server, socketserver, json, re, ipaddress
from bson.json_util import dumps
from pymongo import MongoClient
from io import BytesIO

client = MongoClient('mongodb://localhost:27017')
db = client['vulnmgt']
PORT = 8000
ERRORCODE = 418 # I'm a teapot

--functions and object definitions are in Listings 14-2 and 14-3--

def main():
    Handler = SimpleRequestHandler
    with socketserver.TCPServer(("", PORT), Handler) as httpd:
        httpd.serve_forever()

main()

Listing 14-1: Script listing for simple-api.py (part 1)

We import http.server and socketserver for basic HTTP server functionality, bson.json_util for a BSON-dumping utility to turn Mongo responses into clean JSON, and BytesIO to build the server response, which must be in a byte format rather than simple ASCII. The global variables PORT and ERRORCODE define the listening port for the server and the standard error code to return, respectively.

When the script starts, we instantiate a TCPServer, listening at the configured port. It delegates its handling to SimpleRequestHandler and, because it’s invoked with serve_forever, will continue serving requests until the process is killed.

When a request comes in via GET, the do_GET method of SimpleRequestHandler in Listing 14-2 kicks into action.

class SimpleRequestHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        response = BytesIO()
        splitPath = self.path.split('/')
        if (splitPath[1] == 'vulnerabilities'):
            if (len(splitPath) == 2 or (len(splitPath) == 3 and splitPath[2] == '')):
                self.send_response(200)
                response.write(listVulns().encode())
            elif (len(splitPath) == 3):
                code, details = getVulnDetails(splitPath[2])
                self.send_response(code)
                response.write(details.encode())
            else:
                self.send_response(ERRORCODE)
                response.write(json.dumps([{'error': 'did you mean '
                'vulnerabilities/?'}]).encode())
        elif (splitPath[1] == 'hosts'):
            if (len(splitPath) == 2 or (len(splitPath) == 3 and splitPath[2] == '')):
                self.send_response(200)
                response.write(listHosts().encode())
            elif (len(splitPath) == 3):
                code, details = getHostDetails(splitPath[2])
                self.send_response(code)
                response.write(details.encode())
            else:
                self.send_response(ERRORCODE)
                response.write(json.dumps([{'error': 'did you mean '
                'hosts/?'}]).encode())
        else:
            self.send_response(ERRORCODE)
            response.write(json.dumps([{'error': 'unrecognized path '
            + self.path}]).encode())
        self.end_headers()
        self.wfile.write(response.getvalue())

Listing 14-2: Script listing for simple-api.py (part 2)

To determine whether the request path is one of the four supported endpoints, the requested URI is first split into its component pieces. To handle this parsing, we split the path into an array using / (forward slash) as the delimiter. The first value in that array is blank (the empty string prior to the first slash), so the second and third values point to the appropriate database lookup function, and the return values of those functions are used as the response body. If no function is matched, an error is returned as the response. Building the response in http.server requires three steps (four if any errors are generated):

  1. Send headers (implicitly handled by sending a response code with send_response).
  2. End headers with end_headers().
  3. Generate errors as needed using ERRORCODE.
  4. Send actual response data with wfile.write, which takes the bytestream from the response variable. This variable is instantiated as a BytesIO object and is built by adding data to it via response.write, which automatically puts it into the proper byte format.
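The path-splitting behavior the handler relies on is easiest to see in isolation. This quick sketch shows what do_GET receives for a few request paths:

```python
# How do_GET sees a request path once it's split on '/':
parts = '/hosts/10.0.0.1'.split('/')
# The leading slash produces an empty first element, so the collection
# name lands in parts[1] and the record ID in parts[2].
print(parts)  # ['', 'hosts', '10.0.0.1']

# A trailing slash also produces an empty trailing element, which is why
# the handler accepts both len == 2 and len == 3 with an empty third piece:
print('/hosts/'.split('/'))  # ['', 'hosts', '']
print('/hosts'.split('/'))   # ['', 'hosts']
```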

Additionally, there are four database functions: listHosts, listVulns, getHostDetails, and getVulnDetails, as shown in Listing 14-3.

def listHosts():
    results = db.hosts.distinct('ip')
    count = len(results)
    response = [{'count': count, 'iplist': results}]
    return json.dumps(response)

def listVulns():
    results = db.vulnerabilities.distinct('cve')
    if 'NOCVE' in results:
        results.remove('NOCVE') # we don't care about these
    count = len(results)
    response = [{'count': count, 'cvelist': results}]
    return json.dumps(response)

def getHostDetails(hostid):
    code = 200
    try:
        ipaddress.ip_address(hostid)
        response = db.hosts.find_one({'ip': hostid})
        if response:
            cveList = []
            oids = db.hosts.distinct('oids.oid', {'ip': hostid})
            for oid in oids:
                oidInfo = db.vulnerabilities.find_one({'oid': oid})
                if 'cve' in oidInfo.keys():
                    cveList += oidInfo['cve']
            cveList = sorted(set(cveList)) # sort, remove dupes
            if 'NOCVE' in cveList:
                cveList.remove('NOCVE') # remove NOCVE
            response['cves'] = cveList
        else:
            response = [{'error': 'IP ' + hostid + ' not found'}]
            code = ERRORCODE
    except ValueError as e:
        response = [{'error': str(e)}]
        code = ERRORCODE
    return code, dumps(response)

def getVulnDetails(cveid):
    code = 200
    if (re.fullmatch(r'CVE-\d{4}-\d{4,}', cveid)):
        response = db.vulnerabilities.find_one({'cve': cveid})
        if response: # there's a cve in there
            oid = response['oid']
            result = db.hosts.distinct('ip', {'oids.oid': oid})
            response['affectedhosts'] = result
        else:
            response = [{'error': 'no hosts affected by ' + cveid}]
            code = ERRORCODE
    else:
        response = [{'error': cveid + ' is not a valid CVE ID'}]
        code = ERRORCODE
    return code, dumps(response)

Listing 14-3: Script listing for simple-api.py (part 3)

The first two database functions query Mongo for a complete and deduplicated list of IP addresses (listHosts) or CVE IDs (listVulns) and send it back as a JSON structure.

The details functions first validate whether the input value is a legitimate IP address or CVE ID and send back an error if not. Next, they pull out the details for a specific host or vulnerability. Then they run a second query to get the list of associated hosts (for a vulnerability) or vulnerabilities (for a host). This data, once collected, is inserted into a JSON structure that is returned to SimpleRequestHandler and then the client.

Getting the API Running

Once the simple-api.py script is complete and tested, set it up on your server to run all the time. The process for doing this depends on the service management system that your OS uses: common ones for Linux are systemd, SysV-style init, and upstart. These instructions apply to systemd.

Create a service file called simple-api.service in the systemd scripts location (/lib/systemd/system on Ubuntu) to add a new systemd service. Listing 14-4 shows the contents of the service file.

[Unit]
Description=systemd script for simple-api.py
DefaultDependencies=no
Wants=network-pre.target

[Service]
Type=simple
RemainAfterExit=false
ExecStart=/path/to/scripts/simple-api.py
ExecStop=/usr/bin/killall simple-api
TimeoutStopSec=30s

[Install]
WantedBy=multi-user.target

Listing 14-4: Service configuration for simple-api.py

Now make simple-api.py executable using chmod +x and run the commands in Listing 14-5 as root to start the service and ensure that it’s running.

# systemctl enable simple-api.service
Created symlink /etc/systemd/system/multi-user.target.wants/simple-api.service
→ /lib/systemd/system/simple-api.service.
# systemctl daemon-reload
# systemctl start simple-api
# systemctl status simple-api
  simple-api.service - systemd script for simple-api.py
   Loaded: loaded (/lib/systemd/system/simple-api.service; enabled; vendor
   preset: enabled)
   Active: active (running) since Sun 2020-04-26 16:54:07 UTC; 1s ago
 Main PID: 1554 (python3)
    Tasks: 3 (limit: 4633)
   CGroup: /system.slice/simple-api.service
             1554 python3 /path/to/scripts/simple-api.py

Apr 26 16:54:07 practicalvm systemd[1]: Started systemd script for 
simple-api.py.

Listing 14-5: Starting the service

First, systemctl enable adds simple-api.service into the systemd configuration. Next, systemctl daemon-reload and systemctl start simple-api start the service. Then systemctl status simple-api outputs the response you see in Listing 14-5 if the service is successfully running. At this point, the API will be up and listening on the port you’ve configured within the script.
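With the service up, you can exercise the API from any Python client. This sketch (assuming the default port of 8000 and the endpoint layout from Table 14-2) fetches the hosts affected by a CVE; the network call is isolated in its own helper so the parsing logic stands alone:

```python
import json
import urllib.request

API = 'http://localhost:8000'  # PORT as configured in simple-api.py

def get_json(path):
    """Fetch a path from the API and decode the JSON body."""
    with urllib.request.urlopen(API + path) as resp:
        return json.loads(resp.read())

def vulnerable_hosts(cveid, fetch=get_json):
    """Return the list of IPs affected by a CVE, or [] if the lookup errored.

    Error documents come back as a list in the form [{'error': ...}],
    while successful lookups are a single JSON object.
    """
    details = fetch('/vulnerabilities/' + cveid)
    if isinstance(details, list):
        return []
    return details.get('affectedhosts', [])
```

A caller might then run `vulnerable_hosts('CVE-2012-2019')` and act on each returned IP address.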

Customize It

Python’s http.server library minimizes external dependencies and makes it very clear how the code functions. But it doesn’t provide API-specific functionality and only supports basic HTTP authentication (the Python authors strongly recommend that you not use it in a production environment). If you want to significantly expand the API, you can use a REST framework, such as Flask or Falcon, to simplify coding and maintain the API.

The simple-api.py script doesn’t even implement basic HTTP authentication. So it’s very important to either heavily restrict access to the web server or add authentication to the script before using it in production.

The script returns a simple list of hosts or vulnerability IDs from the /hosts/ and /vulnerabilities/ endpoints. You can return more information about every host/vulnerability, similar to the advanced reports in Chapter 13.

If you expect clients to use your API by requesting large batches of data, you can make this easier and more efficient by adding the option to include paging information in the query. For example, a request to http://api-server/hosts/?start=20&count=20 would return records 20 through 39, and a client could iterate through the total host listing a batch at a time.
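A minimal paging sketch might parse start and count from the query string and slice the result set; the parameter names and defaults here are assumptions, not part of the script as written:

```python
from urllib.parse import parse_qs, urlparse

def page_params(path, default_count=50):
    """Extract (start, count) paging parameters from a request path."""
    query = parse_qs(urlparse(path).query)
    start = int(query.get('start', ['0'])[0])
    count = int(query.get('count', [str(default_count)])[0])
    return start, count

def page(records, start, count):
    """Return one page of results plus the total, so clients can iterate."""
    return {'total': len(records), 'start': start,
            'results': records[start:start + count]}
```

Where your Mongo query returns a cursor rather than a list, it's more efficient to push the paging into the query itself with .skip(start).limit(count) instead of slicing in Python.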

As the script and systemd service are written now, the log messages from http.server are printed to STDERR, which may not be captured by the systemd logger, journald. You can modify the script or the service definition to retain logs so you can keep an eye on who is using your API.
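One way to retain those logs, sketched below, is to override log_message (the hook http.server calls for each request) so access lines are appended to a file; the log location and line format are assumptions:

```python
import http.server

LOGFILE = '/var/log/simple-api.log'  # hypothetical location, adjust to taste

def format_log_line(client_ip, timestamp, message):
    """Common-log-style line, matching what http.server prints to stderr."""
    return '%s - - [%s] %s\n' % (client_ip, timestamp, message)

class LoggedRequestHandler(http.server.BaseHTTPRequestHandler):
    def log_message(self, format, *args):
        # Append access logs to a file instead of stderr so they survive
        # however systemd captures the service's output.
        with open(LOGFILE, 'a') as logf:
            logf.write(format_log_line(self.address_string(),
                                       self.log_date_time_string(),
                                       format % args))
```

A do_GET method like the one in Listing 14-2 would move into this subclass unchanged; only the logging behavior differs.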

Descriptive error messages let an attacker probe your API to see what information is available. You can harden the API by replacing all the errors with a generic message that doesn’t provide hints to the correct endpoint formats.

Automating Vulnerability Exploitation

Once you have information about systems containing vulnerabilities with known exploits (see Listing 13-9), you can determine whether those vulnerabilities are exploitable. If they are, you might prioritize fixing or mitigating those vulnerabilities. If they’re not, either it’s a false positive result or existing mitigations protect the host from successful exploitation.

But this process is slow and tedious: you have to find the exploit, set up your system to run it, attempt exploitation, and then record the results. You’ve already automated most of your process, so why not automate this final step as well? Tools like Metasploit are scriptable via the command line, so is there any reason not to automatically attempt exploitation?

Pros and Cons

Actually, there are several very good reasons not to automate vulnerability exploitation. Even the process of vulnerability scanning isn’t without risks. It’s always possible to cause glitches or even crash a system with aggressive scans or fragile targets. Running exploits is more dangerous yet: they could crash a production system, damage important data, or even (in rare cases) damage the underlying hardware. Exploit code that you don’t thoroughly understand might have backdoor functions or unexpected side effects. Even if you could ensure that the exploits you’re running do nothing but verify that exploitation is possible, you could still damage the system you’re testing.

For many organizations, the risk isn’t worth the reward of knowing which systems in the environment are vulnerable to which exploits. So they perform vulnerability exploitation manually, or at least partially manually. It’s best to have an experienced penetration tester attempt exploitation in a controlled environment. The tester uses an exploitation framework like Metasploit to automate tedious steps, such as running tests repeatedly with different inputs or trying different exploits until they find one that works. But there’s always a human monitoring its effectiveness and ready to stop the test if something goes wrong.

Some organizations have a large set of assets and a threat model where the exploitation risk is significantly higher than the risk of occasionally crashing a critical service. If manual exploitation of all the critical vulnerabilities isn’t feasible, the additional information might be worth the risk. But this isn’t a decision you should make lightly or in a vacuum. You’ll need organizational support before implementing automatic vulnerability exploitation (see “Gaining Support” on page 39).

Automating Metasploit

Once you’ve identified which exploits exist for vulnerabilities in your environment, you need to run the identified exploits against the vulnerable host. With the Exploit Database, there’s no easy way to script “run this exploit on host X”: exploits are written in various languages, some might need to be compiled before running, and they have received varying levels of vetting for effectiveness and safety. As a unified penetration-testing framework, Metasploit solves these issues. All Metasploit-compatible exploits are implemented in Ruby, tested extensively, and run in a consistent manner via the Metasploit Framework. Better still, you can script Metasploit from the command line and encapsulate it in a Python (or similar) script. This section describes how to write such a script, but I’ll leave the implementation as an exercise for the motivated reader.

NOTE

You can modify the exploitable-vulns.py script in Listing 13-9 to use Metasploit’s internal vulnerability-to-exploit mapping and be confident that any systems thereby marked as exploitable do in fact have automatable Metasploit modules. Access to this data and parsing it to find those mappings is another exercise I’ll leave to the advanced reader.

Listing 14-6 shows the overall structure of a possible automated exploitation script in pseudocode.

Query database for list of hosts with vulnerabilities
Map vulnerabilities against list of exploits (Exploit-DB, Metasploit, other)
Result: list of hosts and exploitable vulnerabilities on each host
For each host in this list:
    For each vulnerability on that host:
        Determine Metasploit module for specified vulnerability
        Kick off Metasploit module against specified host
        Record success/failure in host record in database

Listing 14-6: Pseudocode for automated exploitation with Metasploit

Getting a list of exploitable vulnerabilities on each host by mapping them against an existing list of exploits should be familiar from working with exploitable-vulns.py. The loop in Listing 14-6 goes through each exploitable vulnerability on each host and starts a Metasploit session to attempt to exploit the vulnerability with its associated Metasploit module.

Because Metasploit modules are referred to by name rather than by CVE ID, you’ll need to connect the CVE you’re attempting to exploit with the correct module. If you’re not getting exploit information from Metasploit, you can correlate CVE IDs with Metasploit module names by manually parsing Metasploit searches, as in Listing 14-7.

$ msfconsole -qx 'search cve:CVE-2012-2019;quit'

Matching Modules
================

   #  Name  Disclosure Date  Rank  Check  Description
   -  ----  ---------------  ----  -----  -----------
   1  exploit/windows/misc/hp_operations_agent_coda_34  2012-07-09  normal  
  Yes     HP Operations Agent Opcode coda.exe 0x34 Buffer Overflow

Listing 14-7: Searching Metasploit modules using the Metasploit command line

This process takes quite some time, mostly because starting msfconsole can take tens of seconds. You can split this listing into two scripts: one to start msfconsole and the other to submit requests to the running console process via a simple API.

Once you have the module name, the remaining step is to attempt exploitation. Run msfconsole -qx 'command1;command2;commandX;quit' to run a sequence of exploit-related commands and then close Metasploit. Many modules require additional parameters for the best operation: you might decide to run every module with its default configuration or store parameters for some of the more popular modules separately. To determine whether exploitation was successful, you can rely on the Metasploit output. Or, if you’ve configured Metasploit to use a database, you can pull success/failure information from the database after the exploit was attempted.
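A sketch of how that two-step flow might look in Python, using subprocess to drive msfconsole and a regular expression to pull module names out of the search output. The parsing pattern is an assumption based on the output format in Listing 14-7, and running every module with default options (as run_module does) is exactly the simplification discussed above:

```python
import re
import subprocess

def search_output(cveid):
    """Run msfconsole's search for a CVE and return the console output."""
    cmd = ['msfconsole', '-qx', f'search cve:{cveid};quit']
    return subprocess.run(cmd, capture_output=True, text=True).stdout

def module_names(output):
    """Pull module paths like exploit/windows/misc/... out of search output."""
    return re.findall(r'\b((?:exploit|auxiliary)/[\w/-]+)', output)

def run_module(module, target_ip):
    """Attempt exploitation with a module's default options (sketch only).

    RHOSTS is the target option on current Metasploit; some older
    modules expect RHOST instead, so real code needs to handle both.
    """
    commands = f'use {module};set RHOSTS {target_ip};run;quit'
    return subprocess.run(['msfconsole', '-qx', commands],
                          capture_output=True, text=True).stdout
```

Determining success or failure from run_module's returned output (or from Metasploit's database, as noted above) is the part you'd still need to build.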

At this point, you can test automatic exploitation. But before you do so, consider the following:

  • Is automatic exploitation testing necessary?
  • Can I run this script against a test environment configured to replicate the live environment rather than against production systems?
  • Is this testing really necessary?

If you’re still convinced, good luck, and have at it!

Bringing the System into the Cloud

This book focuses on small organizations with on-premise workstations and servers. But businesses are increasingly adding cloud-based operations or even moving their entire production environment into the cloud. Many new organizations are forgoing local infrastructure entirely, opting to place their entire business infrastructure in a cloud environment. In this section, we’ll look at some considerations for adding your cloud environment into your existing vulnerability management system.

Cloud Architecture

If your infrastructure is entirely in the cloud, it makes sense to deploy your vulnerability-scanning system entirely in the same cloud environment. Doing so will minimize latency and let you allow access to your various cloud network segments from a scanner that’s already in the same environment.

But if your environment is a mix of cloud and on-premise infrastructure, you might need to consider a few different options. You could set up your cloud environment to permit your scanning tools access into the cloud. Or, you could set up separate scanners within the cloud environment that deliver their results to your centralized Mongo database. Scanning the cloud environment from a local scanner introduces latency (especially if you’re geographically distant from your cloud network) and intervening security devices. You’ll have to allow your scanner unlimited egress from your local network and permit its public IP address unlimited access to the cloud environment. Alternatively, you could provide this access via a virtual private network (VPN) configuration, which would let you securely tunnel traffic between your local and cloud environments.

If you set up multiple scanners for the cloud or a heavily segmented local network, you’ll need to ensure they coordinate their database insertions to avoid overwriting each other’s data. You also must make sure that database reporting and deletion happen only from one location to guarantee the data remains consistent.

Cloud and Network Ranges

Unlike an on-premise network, where you know that all the IP addresses in a range are part of your network, cloud hosts or services often have multiple IP addresses: at a minimum, one for private access from within the same network and one for public internet access. In the private address space, cloud network separation ensures that you can’t target hosts belonging to another cloud environment. But with public addresses, there is no such guarantee: your cloud’s public IP addresses are adjacent to many other addresses.

If you scan only your cloud environment’s private IP addresses, you can specify an entire network range with confidence that you can’t access hosts outside your cloud. To address ranges within the cloud’s private network, you’ll need either a scanner within that range or a remote connection, such as a VPN.

If you scan your cloud services’ public-facing addresses, you’ll need to address your hosts individually rather than by network range to ensure you don’t accidentally start an unauthorized scan (in other words, an attack) of another organization’s hosts. Even though you can more safely scan hosts via their internal addresses, external-facing scans in concert with internal scans help you understand your public-facing vulnerabilities. A vulnerability that only exists on internal-facing services might be less severe than the same vulnerability on a port that’s open to the internet at large. Getting both views of your environment will give you a better understanding of your overall security posture.

If you perform internal and external scans, you’ll have to make some decisions about the structure of your host data in your database. The scanning and reporting scripts in this book uniquely identify each host by IP address. If a host has more than one IP address, you’ll need to account for this by choosing a different unique host identifier. Or, you can treat the external and internal views on the same cloud system as separate hosts. Whichever you choose, adjust your scripts and database to compensate.
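One possible restructuring, sketched below with hypothetical field names, keys each host document on a stable identifier and stores all of its addresses in a list, so internal and external scans update the same record:

```python
# A host document keyed on a stable identifier rather than a single IP
# (field names are illustrative, not the schema used elsewhere in the book):
host = {
    'hostid': 'web-01',                    # stable unique identifier
    'addresses': [
        {'ip': '10.0.1.15',   'view': 'internal'},
        {'ip': '203.0.113.7', 'view': 'external'},
    ],
    'oids': [],                            # scan findings, as before
}

def find_hostid(hosts, ip):
    """Resolve any of a host's addresses back to its unique identifier."""
    for h in hosts:
        if any(a['ip'] == ip for a in h['addresses']):
            return h['hostid']
    return None
```

Scan-import scripts would call something like find_hostid to decide whether an IP belongs to an existing host record or warrants a new one.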

Other Implementation Considerations

You’ll need a complete understanding of your cloud environment(s) for complete scanning and reporting coverage. Consider the following questions: is your cloud environment largely located in the same place, or is it distributed? Do you have multiple private cloud environments or just one? Is there internal segmentation providing limited access into certain subnets? This section discusses aspects of your cloud environment that you’ll need to keep in mind while designing your cloud-scanning system.

Cloud Environment Distribution

Many organizations have multiple cloud environments, possibly spread across several cloud providers, such as Amazon, Google, or Microsoft. Even a “simple” multi-cloud environment might easily include a development cloud environment, a testing cloud, the production cloud environment where the actual business-critical services reside, and a management cloud that controls access into the other three.

Underlying peering connections might link the disparate clouds, or they might be restricted to communication over the public internet. In multiple cloud environments hosted by a single cloud provider, a peering arrangement might allow services in one environment to communicate with another directly. Place your scanners where it’s easiest to ensure full coverage of your multiple cloud environments.

Virtual Machines and Services

You can think of a cloud environment much like a traditional data center except all the physical services are replaced by virtual machines. But cloud environments are a lot more flexible. All the major cloud vendors now provide, in addition to custom virtual machines, software-as-a-service (SaaS). In SaaS environments, you can register, say, a PostgreSQL server without having to think about or even be aware of the underlying OS and support software. For the purposes of your business and vulnerability management system, the only thing that exists is PostgreSQL, and the cloud provider handles the patching, configuration, and underlying OS.

Many modern cloud environments have a blend of full virtual machines, SaaS services, and a containerized environment, which I discuss in the next section. You’ll need to be aware of this blend and choose your networking settings accordingly to ensure that your scanner can access all open ports across your environment.

Containerized Services

Organizations are increasingly turning to container-based deployments for new services, using systems like Docker and Kubernetes. A full introduction to containers is beyond the scope of this book, but you can think of them as extremely stripped-down virtual machines that expose only specific ports/services to the outside world, if at all. In some cases, especially in Kubernetes environments, you might have multiple microservices that speak only to each other and to the Kubernetes management system; hence, they’re nearly invisible from an external scanner’s perspective.

Like SaaS systems, containerized environments raise questions of just how much responsibility you have for vulnerability awareness and scanning in these environments. Unlike with SaaS, your organization is still responsible for the containerized environment, even if the environment only externally exposes a very limited set of services. So you need to ensure that the individual containers are not running vulnerable or outdated services. The vulnerability management system we’ve built in this book is not well suited to managing a containerized environment, but the principles you’ve learned will serve you well in designing policies to keep these deployments fully up-to-date.

Scanner Access Requirements

To accurately catalog vulnerabilities in your cloud environment, your scanner needs network access to all of your virtual machines and services in the cloud environment. In networking terms, this means that the scanner, wherever it’s located, must be allowed to connect to its target IP range on the full range of TCP ports. But what about a SaaS PostgreSQL database? Which ports need to be opened to ensure the scanner can get as much information as possible about that system?

You could allow the scanner access to all ports, 0 through 65535. But considering the database only provides access on port 5432, you might allow the scanner access to only that port on the SaaS host system to save time and effort. On the other hand, what if you don’t entirely trust your cloud provider to expose only the PostgreSQL service? The best way to find out what other services are open might be via a comprehensive port scan.

Summary

In this chapter, you learned ways to expand your vulnerability management system. You built a simple REST API to remotely query the vulnerability database to integrate this system with other security or orchestration tools in your environment. You considered the pros and cons of automated exploitation of known vulnerabilities in your environment. You also considered how to extend your vulnerability management capability into the cloud.

Security is always a process, never a goal, and your vulnerability management system is no different. In the next (and final) chapter, we’ll look back at what you’ve accomplished. Then we’ll explore some of the topics you might want to tackle next. For example, you might want to investigate the vulnerability management implications of coming trends, such as the zero-trust network, or you may someday want to find commercial replacements for some of your homebrew tools.
