Chapter 8. Security, Challenges, and Conclusions

In this, our final chapter, we will look at all of the tools we have covered in this book and answer the following questions:

  • How can the tools affect the security of your Docker installation?
  • How can they work together, and when should they be used?
  • What problems and challenges can the tools be used to resolve?

Securing your containers

So far, we have quite happily been pulling images from the Docker Hub without much thought as to who created them or what is actually installed. This hasn't been too much of a worry as we have been creating ad-hoc environments to launch the containers in.

As we move towards production and resolving the "it worked in dev" problem, it becomes important to know exactly what it is that you are installing.

Throughout the previous chapters, we have been using the following container images:

All three of these images are classified as official images; not only have they been built to a documented standard, they are also peer reviewed at each pull request.

There are then the three images from my own Docker Hub account:

Before we look at the official images, let's take a look at the Consul image from my own Docker Hub account and why it is safe to trust it.

Docker Hub

Here, we are going to look at the three types of images that can be downloaded from the Docker Hub.

I have chosen to concentrate on the Docker Hub rather than private registries, as the tools we have been looking at in the previous chapters all pull from the Docker Hub, and it is also more likely that you or your end users will use the Docker Hub as the primary resource for your image files.

Dockerfile

The Consul container image is built using a Dockerfile, which is publicly accessible on my GitHub account. Unlike images that are pushed (more on this later in the chapter), this means that you can see exactly what actions have been taken to build the image.

Firstly, we are using the russmckendrick/base image as our starting point. Again, the Dockerfile for this image is publicly available, so let's look at this now:

### Dockerfile
#
#   See https://github.com/russmckendrick/docker
#
FROM alpine:latest
MAINTAINER Russ McKendrick <[email protected]>
RUN apk update && apk upgrade && \
    apk add ca-certificates bash && \
    rm -rf /var/cache/apk/*

As you can see, all this does is:

  • Uses the latest version of the official Alpine Linux image
  • Runs an apk update and then apk upgrade to ensure that all the packages are updated
  • Installs the ca-certificates and bash packages
  • Cleans up any artifacts left over from the upgrade and installation of the packages
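Because the Dockerfile is public, nothing stops you from reproducing the build yourself. Here is a minimal sketch; the `docker/base` folder name is an assumption, so check the repository layout before running it:

```shell
# Clone the repository that holds the Dockerfile and build the base image
git clone https://github.com/russmckendrick/docker.git
cd docker/base  # folder name is an assumption; check the repository layout

# Build and tag a local copy of the base image
docker build -t local/base .

# Spot-check that bash and the CA bundle were installed by the RUN instruction
docker run --rm local/base bash -c "ls /etc/ssl/certs/"
```

Building the image yourself like this is the simplest way to confirm that the published Dockerfile really does produce the image you end up running.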

So, now that we know what the base image looks like, let's move onto the Dockerfile for the Consul container:

### Dockerfile
#
#   See https://github.com/russmckendrick/docker
#
FROM russmckendrick/base:latest
MAINTAINER Russ McKendrick <[email protected]>
ENV CONSUL_VERSION 0.6.4
ENV CONSUL_SHA256 abdf0e1856292468e2c9971420d73b805e93888e006c76324ae39416edcf0627
ENV CONSUL_UI_SHA256 5f8841b51e0e3e2eb1f1dc66a47310ae42b0448e77df14c83bb49e0e0d5fa4b7
RUN  apk add --update wget \
  && wget -O consul.zip https://releases.hashicorp.com/consul/${CONSUL_VERSION}/consul_${CONSUL_VERSION}_linux_amd64.zip \
  && echo "$CONSUL_SHA256 *consul.zip" | sha256sum -c - \
  && unzip consul.zip \
  && mv consul /bin/ \
  && rm -rf consul.zip \
  && cd /tmp \
  && wget -O ui.zip https://releases.hashicorp.com/consul/${CONSUL_VERSION}/consul_${CONSUL_VERSION}_web_ui.zip \
  && echo "$CONSUL_UI_SHA256 *ui.zip" | sha256sum -c - \
  && unzip ui.zip \
  && mkdir -p /ui \
  && mv * /ui \
  && rm -rf /tmp/* /var/cache/apk/*
EXPOSE 8300 8301 8301/udp 8302 8302/udp 8400 8500 8600 8600/udp
VOLUME [ "/data" ]
ENTRYPOINT [ "/bin/consul" ]
CMD [ "agent", "-data-dir", "/data", "-server", "-bootstrap-expect", "1", "-ui-dir", "/ui", "-client=0.0.0.0"]

As you can see, there is a little more going on in this Dockerfile:

  1. We define that we are using the latest version of russmckendrick/base as our base image.
  2. Then, we set three environment variables: the version of Consul we want to download, and the checksums for the two files, which we grab from a third-party website.
  3. We then install the wget binary using the APK package manager.
  4. Next up, we download the Consul binary from the HashiCorp website. Notice that we download over HTTPS and run sha256sum against the downloaded file to check whether it has been tampered with. If the file doesn't pass this test, the build fails.
  5. Once the zip file is confirmed to be the correct one, we uncompress it and copy the binary into place.
  6. We then repeat the same actions for the Consul web interface.
  7. Finally, we configure the default behaviour for when the container is launched by exposing the correct ports and setting the volume, entry point, and default command.

All of this means that you can see exactly what is installed and how the image is configured before you make the decision to download a container using the image.
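The download-and-verify pattern from the Dockerfile is worth reusing in your own scripts. Here is a standalone sketch using a local file in place of the real download (file names and contents are just examples):

```shell
set -e

# Stand in for the downloaded artifact and record its checksum
printf 'consul-binary-contents' > consul.zip
GOOD_SHA256=$(sha256sum consul.zip | awk '{print $1}')

# A matching checksum exits 0, so a build using `set -e` carries on
echo "${GOOD_SHA256} *consul.zip" | sha256sum -c -

# Tamper with the file; the same check now exits non-zero, which would
# abort the build before the suspect file is ever used
printf 'tampered' >> consul.zip
if ! echo "${GOOD_SHA256} *consul.zip" | sha256sum -c - >/dev/null 2>&1; then
  echo "checksum mismatch detected"
fi
```

This is exactly why the Consul build fails fast if a release file has been modified in transit.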

Official images

There are just over 100 images that are flagged as official. You can view these on the Docker Hub at https://hub.docker.com/explore/. Official images are easy to spot, as they are not preceded by a username. For example, the following are the docker pull lines for the official NGINX image and also my own:

docker pull nginx
docker pull russmckendrick/nginx

As you can see, the top one is the official image.

A lot of the official images are maintained by the upstream providers; for example, the CentOS, Debian, and Jenkins images are maintained by members of the respective projects.

Also, there is a review process for each pull request submitted. This helps ensure that each official image is both consistent and built with security in mind.

The other important thing to note about official images is that no official image can be derived from, or depend on, non-official images. This means that there should be no way a non-official image's content can find its way into an official image.

A full, detailed explanation of the build standards for official images, as well as details of what is expected of an official image maintainer, can be found on the Docker Library GitHub page at https://github.com/docker-library/official-images/.

The downside of Docker Hub is that it can sometimes be slow, and I mean really slow. The situation has improved over the past 12 months, but there have been times when Docker's build system has had a big backlog, meaning that your build is queued.

This is only a problem if you need to trigger a build and want it immediately available, which could be the case if you need to quickly fix an application bug before anyone notices.

Pushed images

Finally, there is an elephant in the room: complete images that have been pushed by a user to their Docker Hub account.

Personally, I try to avoid pushing complete images to my Docker Hub account; they are something I would typically not recommend using, so why would I expect other users to use them?

As these images are not built from a published Dockerfile, it is difficult to get an idea of the standard they have been built to and exactly what they contain.
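Given that limitation, the closest you can get to auditing a pushed image is to inspect its layers after pulling it; a rough sketch:

```shell
# Pull an image and list the layers it is made from; for images built from a
# Dockerfile, each layer shows the instruction that created it, while pushed
# images typically show only opaque layer IDs with little CREATED BY detail
docker pull russmckendrick/consul
docker history russmckendrick/consul
```

This tells you roughly how an image was assembled, but it is no substitute for a published Dockerfile.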

Docker has tried to address this by introducing content trust to the Docker Hub. What this does is sign the image with the publisher's private key before it is pushed to the Docker Hub. When you download the image, the Docker Engine uses the publisher's public key to verify that the content of the image is exactly as the publisher intended it to be.

This helps to ensure that the image has not been tampered with at any point of the image's journey from the publisher to you running the container.

More information on Content Trust can be found at https://docs.docker.com/engine/security/trust/content_trust/.
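On the client side, content trust is opt-in; a minimal sketch of enabling it for a pull:

```shell
# With content trust enabled, the Docker client refuses to pull image tags
# that are not signed, and verifies the signatures of those that are
export DOCKER_CONTENT_TRUST=1
docker pull nginx:latest

# Unset the variable again to return to the default behaviour
unset DOCKER_CONTENT_TRUST
```

Setting the variable per-command (for example, in CI) keeps verification on only where you need it.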

This is useful if you are using the Docker Hub to publish private images that contain proprietary applications or code bases you do not want to be publicly available.

However, for publicly available images, I would always question why the image had to be pushed to the Docker Hub rather than being built with a Dockerfile.

Docker Cloud

Since I started writing this book, Docker has introduced a commercial service called Docker Cloud, which is described by Docker as a hosted service for Docker container management and deployment.

You can find details of the service at the following URLs:

So, why mention this service when we are talking about security? Well, in May 2016, Docker announced that they were adding a Security Scanning feature, which, at the time of writing this book, is free of charge.

This feature works with your Private Repositories hosted on the Docker Hub, meaning that any images you have pushed can be scanned.

The service performs a static analysis on your images, looking for known vulnerabilities in the binaries you have installed.

For example, in Chapter 6, Extending Your Infrastructure, we created an image using Packer. I still had an old build of this image on my local machine, so I pushed it to a private Docker Hub repository and took advantage of the free trial of both Docker Cloud and Docker Security Scanning.

As you can see from the following result, the service has found three critical vulnerabilities in the image:

Docker Cloud

This means that it is time to update my base image and the version of NodeJS being used.

More details on the service and how it works can be found in the following announcement blog post:

https://blog.docker.com/2016/05/docker-security-scanning/

There are a few alternatives to this service, such as:

However, the newly launched Docker service is the simplest one to get started with, as it already has a deep level of integration with other Docker services.

Private registries

Remember that it is possible to use a private registry to distribute your Docker images. I would recommend taking this approach if you have to bundle your application's code within an image.

A private registry is a resource that allows you to push and pull images; typically, it is only available to trusted hosts within your network and is not publicly available.

Private registries do not allow you to host automated builds, and they do not currently support content trust, which is why they are typically deployed on private or locked-down networks.

More information on hosting your own private registry can be found at the official documentation at https://docs.docker.com/registry/.
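To give a feel for how little is involved, here is a minimal sketch of standing up a local registry and pushing an image to it, using the official registry:2 image on its default port:

```shell
# Start a throwaway registry listening on localhost:5000
docker run -d -p 5000:5000 --name registry registry:2

# Retag an existing image so its name points at the private registry,
# then push and pull it back to prove the round trip works
docker pull alpine:latest
docker tag alpine:latest localhost:5000/alpine:latest
docker push localhost:5000/alpine:latest
docker pull localhost:5000/alpine:latest
```

In production, you would put this behind TLS and authentication rather than exposing it on a plain local port.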
