5 Docker and Packer

In the last chapter we saw how to use provisioners to customize our images during the build process. In this chapter, we’re going to continue to explore building and provisioning images with Packer. And we’re going to look at one of Packer’s most interesting and complex use cases: building Docker images.

To build Docker images, Packer uses the Docker daemon to run a container, runs provisioners on that container, and then commits the result as an image locally or pushes it up to the Docker Hub. Interestingly, though, Packer doesn’t use Dockerfiles to build images. Instead it uses the same provisioners we saw in the last chapter. This allows us to maintain a consistent mental model for all our images, across all platforms.

We’re going to learn how to build and push Docker images with Packer.

Tip If you want to learn more about Docker, you can look at the Docker documentation or my book on Docker, originally titled The Docker Book.

5.1 Getting started with Docker and Packer

When building Docker images, Packer and the Docker builder need to run on a host that has Docker installed. Installing Docker is relatively simple, and we’re not going to show you how to do it in this book. There are, however, a lot of resources available online to help you.

Tip Use a recent version of Docker for this chapter: Docker 17.05.0-ce or later.

You can confirm you have Docker available on your host by running the docker binary.
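For example, you might run:

    $ docker --version
    $ docker info

If docker info returns details of the daemon rather than an error, Docker is available to Packer.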

5.2 A basic Docker build

The Docker builder is just like any other Packer builder: it uses resources, in this case a local Docker daemon, to build an image. Let’s create a template for a basic Docker build:
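We’ll assume the template file is called docker_basic.json; the name is our choice, but it matches the docker_basic.tar artifact we’ll export shortly.

    $ touch docker_basic.json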

And now populate that template:
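Here’s a minimal sketch of that template; the export_path value is explained below.

    {
      "builders": [
        {
          "type": "docker",
          "image": "ubuntu",
          "export_path": "docker_basic.tar"
        }
      ]
    }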

The template is much like our template from Chapter 3: simple and not very practical. Currently it just creates an image from the stock ubuntu image and exports a tarball of it. Let’s explore each key.

The type of builder we’ve specified is docker. Using the image key, we’ve specified a base image for the builder to work from; this is much like the FROM instruction in a Dockerfile.

The type, as always, and the image are required keys for the Docker builder. You must also specify what to do with the container that the Docker builder builds.

The Docker builder has three possible output actions. You must specify one:

  • Export - Export an image from the container as a tarball, as above with the export_path key.
  • Discard - Throw away the container after the build, using the discard key.
  • Commit - Commit the container as an image available to the Docker daemon that built it, using the commit key.

Let’s build our template now and see what happens.
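We run the build like any other Packer build:

    $ packer build docker_basic.json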

We can see that Packer has pulled down the base Docker image, ubuntu, run a new container from it, and then exported the container as docker_basic.tar. You could now use the docker import command to import that image from the tarball.
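For example, assuming the tarball above, something like this would load the image into Docker (the ubuntu:imported name is just an illustration):

    $ docker import docker_basic.tar ubuntu:imported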

Note We’ll see other actions we can take with the final image later in this chapter.

Let’s do something a bit more complex in our next build.

5.3 Provisioning a Docker image

Let’s create a new template, docker_prov.json, that will combine a Docker build with provisioning of a new image. Rather than export the new image we’re going to create, we’re going to commit the image to our local Docker daemon. Let’s take a look at our template.
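Here’s a sketch of docker_prov.json, assuming the commit key and install.sh script described next.

    {
      "builders": [
        {
          "type": "docker",
          "image": "ubuntu",
          "commit": true
        }
      ],
      "provisioners": [
        {
          "type": "shell",
          "script": "install.sh"
        }
      ]
    }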

In our new template, we’ve replaced the export_path key with the commit key, which is set to true. We’ve also added a provisioners block and specified a single script called install.sh. Let’s look at that script now.
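A minimal version of that script might look like this:

    #!/bin/sh
    # Update the APT package cache and install the apache2 package
    # inside the container being built.
    apt-get update
    apt-get -y install apache2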

Our script updates APT and then installs the apache2 package.

When we run packer build on this template, it’ll create a new container from the ubuntu image, run the install.sh script using the shell provisioner, and then commit a new image.

Let’s see that now.
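We run the build against the new template:

    $ packer build docker_prov.json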

Here we can see that our Docker image has been built from the ubuntu image and then provisioned using our install.sh script, which installed the Apache web server. The resulting image has been committed to our local Docker daemon.

5.4 Instructions and changes

Sometimes a provisioner isn’t quite sufficient and you need to take some additional actions to make a container fully functional. The docker builder comes with a key called changes that allows you to specify some Dockerfile instructions.

Note The changes key behaves in much the same way as the docker commit --change command-line option.

We can use the changes key to supplement our existing template:
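Here’s a sketch of the supplemented builder; the user, directory, and port are illustrative values you’d adjust for your own image.

    "builders": [
      {
        "type": "docker",
        "image": "ubuntu",
        "commit": true,
        "changes": [
          "USER www-data",
          "WORKDIR /var/www",
          "EXPOSE 80"
        ]
      }
    ]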

Here we’ve added three instructions: USER, which sets the default user; WORKDIR, which sets the working directory; and EXPOSE, which exposes a network port. These instructions will be applied to the image being built and committed to Docker.

You can’t change all Dockerfile instructions, but you can change the CMD, ENTRYPOINT, ENV, EXPOSE, MAINTAINER, USER, VOLUME, and WORKDIR instructions.

This is still only a partial life cycle, and we most often want to do something with the artifact generated by our build. This is where post-processors come in.

5.5 Post-processing Docker images

Post-processors take actions on the artifacts, usually images, created by Packer. They allow us to store, distribute, or otherwise process those artifacts. The Docker workflow is ideal for demonstrating their capabilities. We’re going to examine two Docker-centric post-processors:

  • docker-tag - Tags Docker images.
  • docker-push - Pushes Docker images to an image store, like the Docker Hub.

Post-processors are defined in another template block: post-processors. Let’s add some post-processing to a new template, docker_postproc.json.
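Here’s a sketch of docker_postproc.json. It reuses the build and provisioning from the last section, and assumes the repository name and tag we push later in this chapter.

    {
      "builders": [
        {
          "type": "docker",
          "image": "ubuntu",
          "commit": true
        }
      ],
      "provisioners": [
        {
          "type": "shell",
          "script": "install.sh"
        }
      ],
      "post-processors": [
        [
          {
            "type": "docker-tag",
            "repository": "jamtur01/docker_postproc",
            "tag": "0.1"
          },
          "docker-push"
        ]
      ]
    }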

Note that we’ve added a post-processors block with an array of post-processors defined. Packer will take the result of any builder action and send it through the post-processors. You can also control which post-processors run for which build; we’ll see more of that in Chapter 7.

Tip Also in Chapter 7, we’ll see how multiple builders operate and how to control which post-processors execute.

This means that if you have one post-processor and two builders defined in a template, the post-processor will run twice, once for each builder, by default.

There are three ways to define post-processors: simple, detailed, and in sequence. A simple post-processor definition is just the name of a post-processor listed in an array.
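For example, here’s a sketch of a simple definition naming only the docker-push post-processor:

    "post-processors": ["docker-push"]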

A simple definition assumes you don’t need to specify any configuration for the post-processor. A more detailed definition is much like a builder definition and allows you to configure the post-processor.
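Here’s a sketch of a detailed definition, using the docker-save post-processor and an assumed output path of docker_save.tar:

    "post-processors": [
      {
        "type": "docker-save",
        "path": "docker_save.tar"
      }
    ]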

As with a builder or provisioner definition, we specify the type of the post-processor and then any options. In our case we use the docker-save post-processor, which saves the Docker image to a file.

The last type of post-processor definition is a sequence. This is the most powerful use of post-processors, chained in sequence to perform multiple actions. It can contain simple and detailed post-processor definitions, listed in the order in which you wish to execute them.

You can see our post-processors are inside the post-processors array and further nested within an array of their own. This links the post-processors together, meaning their actions are chained and executed in sequence. Any artifact a post-processor generates is fed into the next post-processor in the sequence.

Note You can only nest post-processor sequences one layer deep.

Our first post-processor is docker-tag. You can specify a repository and an optional tag for your image. This is the equivalent of running the docker tag command.
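As a rough illustration, the equivalent docker tag invocation, with a placeholder image ID, would be:

    $ docker tag <image_id> jamtur01/docker_postproc:0.1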

This tags our image with a repository name and a tag that makes it possible to use the second post-processor: docker-push.

The docker-push post-processor pushes Docker images to a Docker registry, like the Docker Hub, a local private registry, or even Amazon ECR. You can provide login credentials for the push, or the post-processor can make use of existing credentials such as your local Docker Hub or AWS credentials.

Tip You can also see a simple definition of a post-processor, in this case the docker-push post-processor, in a sequence.

Let’s try to post-process our artifact now.
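Again, we run the build:

    $ packer build docker_postproc.json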

Tip You can tag and send an image to multiple repositories by specifying the docker-tag and docker-push post-processors multiple times.

We’ve cut out a lot of log entries, but you can see our Docker image being tagged and then pushed to my Docker Hub account, jamtur01. The image has been pushed to the docker_postproc repository with a tag of 0.1. This assumes we’ve got local credentials for the Docker Hub. If you need to specify specific credentials you can add them to the template like so:
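Here’s a sketch, assuming variable names of hub_username and hub_password. We first add a variables block to the top of the template:

    "variables": {
      "hub_username": "",
      "hub_password": ""
    },

Then we replace the simple docker-push definition in our sequence with a detailed one:

    {
      "type": "docker-push",
      "login": true,
      "login_username": "{{user `hub_username`}}",
      "login_password": "{{user `hub_password`}}"
    }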

Here we’ve specified some variables to hold our Docker Hub username and password. This is more secure than hard-coding them into the template.

Tip We could also use environment variables.

We’ve used the user function to reference them in the post-processor. We’ve also specified the login key and set it to true to ensure the docker-push post-processor logs in prior to pushing the image.

We can then run our template and specify the variables on the command line:
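The credential values here are placeholders; substitute your own:

    $ packer build \
        -var 'hub_username=jamtur01' \
        -var 'hub_password=password' \
        docker_postproc.json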

Tip We can also do the same with the Amazon ECR container repository.

5.6 Summary

In this chapter we’ve seen how to combine Packer and Docker to build Docker images. We’ve seen how we can combine multiple stages of the build process:

  1. Starting with a Docker image.
  2. Adding some Dockerfile instructions.
  3. Provisioning to build what we need into the image.
  4. Post-processing to work with the committed image, potentially uploading it to a container repository like the Docker Hub.

There are also other post-processors that might interest you. You can find a full list in the Packer documentation.

In the next chapter, we’re going to see how we can add tests to our Packer build process to ensure that our provisioning and image are correct.
