6 Testing Packer

In the last few chapters we’ve seen how to build images with Packer. Part of that process is provisioning images with the configuration you need. When you’re running scripts, or even using configuration management tools, there’s some risk that the outcomes of your build won’t be what you requested. We can ensure that we get what we’ve asked for by using software-based tests.

Software tests validate that your software does what it is supposed to do. They are the bread-and-butter validation tools of developers. Loosely, they’re a combination of quality measures and correctness measures. We’re going to apply some of the principles of software testing to our infrastructure. A software unit test, at its heart, confirms that an isolated unit of code performs as required. Inputs to the unit of code are applied, the code is run, and the outputs are confirmed as valid.

We’re going to combine Packer with a testing framework called Serverspec. Serverspec is a testing framework and harness that allows you to write RSpec tests for infrastructure. It sits on top of the RSpec testing framework, shares its DSL, and can leverage all of its capabilities and tooling.

6.1 Our test scenario

We’re going to build an AMI image using Puppet modules and write Serverspec tests to validate that configuration. As a basis for our image we’re going to use the Puppet standalone configuration we created in Chapter 4, with some additional modules that we’re also going to install and test. We’ll create and upload some Serverspec tests onto our remote host as part of that build process and then execute them to validate our configuration.

Let’s quickly revisit that configuration and create a directory structure to hold our template and related configuration:
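
A minimal sketch, assuming we call the parent directory serverspec:

$ mkdir serverspec
$ cd serverspec
$ mkdir hieradata manifests tests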

Here we’ve created a directory structure for our build, including a hieradata directory to hold our Hiera configuration, a manifests directory to hold our Puppet code, and a tests directory to hold our new Serverspec tests.

Now let’s create a template. We’ll call it serverspec.json.
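
We can start with an empty file:

$ touch serverspec.json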

Next we’ll populate that template. Our template is a copy of the puppet_standalone.json template we created in Chapter 4. We’ve added an additional step in the provisioners block to handle our Serverspec installation, so we’ll again focus on the provisioners block.
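
We won’t reproduce the whole template here, but a sketch of the provisioners block, based on the steps we walk through below, might look something like this. The script names, paths, and the /opt/puppetlabs/puppet/bin location of the bundled Ruby tools are assumptions for this example.

"provisioners": [
  {
    "type": "shell",
    "script": "install-puppet.sh"
  },
  {
    "type": "file",
    "source": "hieradata",
    "destination": "/tmp"
  },
  {
    "type": "puppet-masterless",
    "manifest_file": "manifests/site.pp",
    "hiera_config_path": "hiera.yaml",
    "module_paths": ["modules"]
  },
  {
    "type": "file",
    "source": "tests",
    "destination": "/tmp"
  },
  {
    "type": "shell",
    "inline": [
      "sudo /opt/puppetlabs/puppet/bin/gem install serverspec",
      "cd /tmp/tests && sudo /opt/puppetlabs/puppet/bin/rake spec"
    ]
  }
]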

Note You can find all the code for these examples on GitHub.

We’ve chained together a series of provisioners of three types: shell, file, and puppet-masterless.

Our first steps match what we did in Chapter 4: installing Puppet, uploading the Hiera data, and running the Puppet agent to install the modules.

Following that, we’ve added some final inline shell scripts to install Serverspec and run our tests.

Let’s look quickly at each of these steps to ensure we understand what’s happening.

6.1.1 Installing Puppet

Our shell provisioner executes a single script called install-puppet.sh. The script adds the Puppet APT repo, then refreshes the package database and installs the puppet-agent. The puppet-masterless provisioner will later use this agent to execute any Puppet code and configure our host.
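
The script itself isn’t reproduced here, but a minimal version might look like the following. It assumes an Ubuntu 16.04 (Xenial) base image and the PC1 Puppet Collection repository; adjust the release name for your image.

#!/usr/bin/env bash
set -e

# Add the Puppet Collection APT repository (assumes Ubuntu Xenial).
wget -q https://apt.puppetlabs.com/puppetlabs-release-pc1-xenial.deb
sudo dpkg -i puppetlabs-release-pc1-xenial.deb

# Refresh the package database and install the Puppet agent.
sudo apt-get update
sudo apt-get -y install puppet-agent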

6.1.2 Hiera and manifest data

The next provisioner is the file provisioner. This loads a Hiera data directory onto the host.
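
In the template, that fragment looks something like this, matching the sketch above:

{
  "type": "file",
  "source": "hieradata",
  "destination": "/tmp"
}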

Note Remember from Chapter 4 that we need to upload this directory due to a nuance in the way Puppet is run. All of our other configuration will remain local.

Hiera configuration is contained in YAML files. We’ve created two files. The first, hiera.yaml, tells Puppet about Hiera’s configuration and structure. It’s identical to our configuration in Chapter 4.
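
A Hiera 3-style configuration pointing at the data directory we upload might look like this; the exact layout is an assumption, but the datadir needs to match the /tmp/hieradata path used on the host.

---
:backends:
  - yaml
:hierarchy:
  - common
:yaml:
  :datadir: /tmp/hieradata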

Remember the hiera.yaml file also tells Puppet what configuration to install using the second file: common.yaml. We’ve changed the common.yaml file from Chapter 4 to add some new configuration.
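
Here’s a sketch of what the updated common.yaml might contain. The classes array is driven by the modules we discuss next; the parameter name assumes a specific Forge module (for example saz/timezone), so adjust it to match the modules you choose.

---
classes:
  - locales
  - motd
  - ntp
  - ssh
  - timezone

timezone::timezone: 'UTC'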

Notice that we’ve added four new modules: locales, motd, timezone, and ntp. These additional modules will be loaded together with our existing ssh module. We’ve also specified some additional configuration to set the time zone of the host and enable NTP to ensure instances launched from our image have the correct time.

We’ve also maintained the site.pp manifest in the manifests directory to consume this data.
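
The manifest is essentially a one-liner, something like:

# Look up the classes array in Hiera and include each listed module.
hiera_include('classes')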

This will look up the classes array in our common.yaml file and load the resulting modules into Puppet to be processed.

Next, our file provisioner uploads the local hieradata directory to the /tmp directory on the remote host. Our puppet-masterless provisioner will look for it here thanks to our Hiera configuration.

Let’s go get the new modules and install them.

6.1.3 Installing modules

To install the modules we’re again going to use the librarian-puppet tool to manage them. If you don’t already have Puppet and librarian-puppet installed locally, install them first.
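
One way to do that is via RubyGems:

$ gem install puppet librarian-puppet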

Note We’ve assumed you have Ruby and Rubygems installed to do this.
Note We’ve also added a Gemfile in the directory so if you have Bundler installed you can just bundle install.

Remember that Librarian-Puppet uses a Puppetfile file, much like a Ruby Gemfile. Let’s update ours to add our new modules.
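
Ours might look something like this; the module names are illustrative, so substitute the Forge modules you prefer for each class.

forge 'https://forge.puppet.com'

mod 'saz/ssh'
mod 'saz/locales'
mod 'saz/motd'
mod 'saz/timezone'
mod 'puppetlabs/ntp'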

The Puppetfile specifies the location of the Puppet Forge from which we’ll get the modules. It also lists the specific modules.

We can then run the librarian-puppet command to install the modules.
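
From the directory containing the Puppetfile:

$ librarian-puppet install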

You’ll see that a Puppetfile.lock lock file will be created, as will the modules directory containing the downloaded modules and any supporting modules.

6.1.4 The Puppet standalone provisioner

The next provisioner is the puppet-masterless provisioner. It takes the configuration and pieces we downloaded and constructed earlier and combines them to execute on the remote host. Its configuration matches what we saw in Chapter 4.
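
As a reminder, the relevant fragment of our template looks something like this, with paths that assume the directory layout we created earlier:

{
  "type": "puppet-masterless",
  "manifest_file": "manifests/site.pp",
  "hiera_config_path": "hiera.yaml",
  "module_paths": ["modules"]
}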

6.1.5 Uploading our tests

Next we need to upload our tests. We need to do this so the tests can be executed on the image host.
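
Another file provisioner handles this, for example:

{
  "type": "file",
  "source": "tests",
  "destination": "/tmp"
}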

This uploads the tests directory to /tmp on the host. Inside this directory are our tests themselves (which we’ll see shortly) as well as a Rakefile that provides an execution wrapper for them, allowing us to run them via a Rake task.

Tip Serverspec also supports running tests remotely. We could make use of the shell-local provisioner to run Serverspec in its SSH mode, which connects via SSH and executes the tests. This would save us uploading and installing anything on the image. This blog post discusses running Packer and Serverspec in this mode. Or you can see an example of the configuration in this chapter adapted for SSH in this Gist.

6.1.6 Running Serverspec

Our last provisioning step is to install Serverspec itself, via the serverspec gem. We’ll make use of the Ruby, Rake, and Rubygem binaries provided for us when we installed Puppet. This ensures we only install what is required for Serverspec, rather than polluting the host with additional packages.
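
In the template this is the final shell provisioner, sketched here on the assumption that the puppet-agent package has installed its bundled Ruby, gem, and rake binaries under /opt/puppetlabs/puppet/bin:

{
  "type": "shell",
  "inline": [
    "sudo /opt/puppetlabs/puppet/bin/gem install serverspec",
    "cd /tmp/tests && sudo /opt/puppetlabs/puppet/bin/rake spec"
  ]
}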

Note If we wanted to tidy up after running our tests we could also uninstall the serverspec gem.

The second inline script actually runs the Serverspec tests. We change into the /tmp/tests directory we uploaded earlier, then run a Rake task that executes our tests.

Now that we’ve seen the process that will be followed when provisioning our image, let’s look at the Serverspec tests we’re going to run.

6.2 Serverspec tests

Serverspec tests your infrastructure’s state by executing commands against that infrastructure. For example, since our Packer template builds an image with SSH configured and running, the tests should check that SSH is correctly configured by confirming:

  1. That an SSH daemon is running.
  2. That TCP port 22 is open.

In our case we have installed a series of modules with Puppet. We want to validate that each of those has been successfully applied. Let’s see how we might write tests to validate those assertions.

Tip There are alternatives to Serverspec, like InSpec, Goss, or TestInfra that might also meet your testing needs.

6.2.1 Setting up Serverspec

Serverspec uses the same DSL as RSpec. To write tests we define a set of expectations inside a specific context or related collection of tests, usually in an individual file for each item we’re testing.

Let’s create a spec directory inside our tests directory to hold the tests themselves.
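
Assuming we’re in the root of our build directory:

$ mkdir tests/spec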

RSpec test file names usually end in _spec, which tells us they contain the tests relevant to a specific context.

6.2.2 Creating our first Serverspec tests

Let’s create a test file to hold our first tests. We’ll create the SSH tests first, and call the file ssh_spec.rb.

Tip There’s also the useful serverspec-init command which initializes a set of new tests.

Let’s populate that test file now.
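
Here’s roughly what ssh_spec.rb looks like; the test descriptions are our own wording.

require 'spec_helper'

describe 'ssh' do
  it 'has a running ssh service' do
    expect(service('ssh')).to be_running
  end

  it 'is listening on TCP port 22' do
    expect(port(22)).to be_listening.with('tcp')
  end
end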

We’ve now got some tests defined, and we’re requiring a spec_helper. This helper loads useful configuration for each test and is contained in the spec directory in the spec_helper.rb file. Let’s see it now.
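
The helper is only a couple of lines:

require 'serverspec'

set :backend, :exec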

Our helper loads the serverspec gem and sets the backend configuration. Serverspec has two modes of operation: the exec mode we’re using now, which runs all tests locally, and an SSH mode, which, as we mentioned earlier, allows us to run the tests remotely. Since we’re executing locally, we set the backend to exec.

Back to our SSH tests. We generally want to set a context for our tests; this groups all of the relevant tests together. To do this we use a describe block. This sets a hierarchy for our tests, in this case a single layer that groups all of the SSH test assertions together.

Inside our describe block we define a series of individual assertions about our SSH daemon’s configuration.

Each assertion articulates an expectation, hence the expect inside the assertion. Each assertion is wrapped in an it ... end block, and inside that block we use the expect syntax to specify the details of our assertion.

Our first assertion says that a service called ssh should be running.

expect(service('ssh')).to be_running

Serverspec supplements the existing RSpec DSL with infrastructure-centric resource types, such as services or ports, which have matchers that support those resources. Here we’re using a resource called service which allows us to test the state of services on our host. That service resource has a matcher called be_running. Our assertion asks:

“Serverspec expects that the SSH service will be running”

If this assertion isn’t true, the test will fail and Packer will abort.

Note Serverspec automatically detects the operating system of the host on which it is being run. This allows it to know what service management tools, package management tools, or the like need to be queried to satisfy a resource. For example, on Ubuntu, Serverspec knows to use APT to query a package’s state.

Our second test asks Serverspec to confirm that the network configuration of our SSH daemon is correct.

expect(port(22)).to be_listening.with('tcp')

Here we’re using a new resource, port, to specify the standard SSH port of 22, and its be_listening matcher, which we’ve configured to check for a listening TCP port. This test will validate that TCP port 22 is open on the host; if it is not, the test will fail.

Tip Check out Better Specs for some tips and tricks for writing better RSpec tests.

Let’s look at another set of tests, this time for our NTP module.

6.2.3 The Serverspec NTP tests

Let’s create a new test file for our NTP tests.

Now let’s populate that file with some new tests.
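
Something like the following; again the test descriptions are our own wording.

require 'spec_helper'

describe 'ntp' do
  it 'has the ntp package installed' do
    expect(package('ntp')).to be_installed
  end

  it 'has the ntp service enabled' do
    expect(service('ntp')).to be_enabled
  end

  it 'has the ntp service running' do
    expect(service('ntp')).to be_running
  end
end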

Note that we’ve again specified the spec_helper. We’ve also created a new describe block for our NTP tests; inside it we have three tests. The first uses the package resource with the be_installed matcher to test that the ntp package is installed. The remaining two tests confirm that the ntp service is enabled and running.

Tip You can find the full list of available resources in the Serverspec documentation.

We can then create tests for our time zone, motd, and locales configuration.

Note You can find those additional tests in the example code on GitHub. There are also a lot of test examples available on GitHub with a simple search.

6.2.4 The Rakefile

Now that we have our tests, let’s create and populate a Rakefile so we can run our tests as a Rake task. We’ll create it now in the tests directory.

Now let’s populate our Rakefile.
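
A minimal version looks something like this:

require 'rake'
require 'rspec/core/rake_task'

# Run every *_spec.rb file in the spec directory as an RSpec test.
RSpec::Core::RakeTask.new(:spec) do |t|
  t.pattern = 'spec/*_spec.rb'
  t.fail_on_error = true
end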

Our Rakefile requires rake and the rspec Rake tasks and then creates a new Rake task. That task executes any files ending in _spec in the spec directory as RSpec tests. It also ensures that if any of the tests fail, the task returns a non-zero exit code.

Now that our Serverspec configuration and tests are complete, let’s run them and see what happens.

6.3 Running our tests

Our tests will run as part of our normal Packer build when we execute the serverspec.json template.
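
We kick off the build in the usual way:

$ packer build serverspec.json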

Here we’ve run the serverspec.json template and grabbed only the test run output. We can see our tests are uploaded, Serverspec is installed, and then our Rake task is executed with rake spec. Each dot:

.......

Indicates a test that has passed. Serverspec has reported at the end that seven examples were run and none failed. Our tests have passed—now our image will be created!

If, alternatively, something wasn’t right and a test failed, we’d see that in our output. Let’s run this again, this time assuming something has gone wrong with our configuration.

This time it appears something has gone wrong and the ntp service is not running. This means our assertion about the ntp service has failed—hence the Packer build has failed. We’ll need to work out what has gone wrong, fix the issue, and then run the build again.

Tip When testing like this, it’s useful to run Packer with the -debug flag enabled, which stops between steps and allows you to debug the server if any issues emerge.

6.4 Summary

In this chapter, we’ve seen how to add tests to a Packer build process. We’ve learned about Serverspec, an RSpec-based framework, and how to write tests for it.

We reused some of our Puppet standalone configuration from Chapter 4 and added some additional Puppet modules. We’ve written some basic tests to demonstrate how to validate that the configuration has been successfully applied.

In the next chapter, we’ll look at building pipelines for multi-platform images with Packer.
