This chapter covers
Each topic in this chapter is specific to security and to making OpenShift a secure platform for your applications. This chapter isn’t a comprehensive summary of OpenShift’s security features—that would take 100 pages or more and is a great idea for another OpenShift book. What we’ll do in this chapter is walk through the fundamentals of OpenShift security. We want to give you examples of what we think are the most crucial concepts, and we’ll do our best to point you in the right direction for the topics we don’t have room to cover.
We began discussing important security concepts and making OpenShift secure not long after page 1 of this book:
We may be using a broad definition of security here, but every chapter in this book contributes to your understanding of OpenShift and how to deploy it in an automated and secure fashion. Automation and security go hand in hand, because humans aren’t good at repetitive tasks. The more you can automate tasks for your applications, the more secure you can make those applications. Even though we’ve already covered a lot of ground regarding security, we still need to devote this entire chapter to security-specific concepts.
OpenShift has layers of security, from the Linux kernel on each application node through the routing layer that delivers applications to end users. We’ll begin this discussion with the Linux kernel and work our way up through the application stack. For containers and OpenShift, security begins in the Linux kernel with SELinux.
SELinux is a Linux kernel module that’s used to enforce mandatory access control (MAC). MAC is a set of access levels that are assigned to users by the system. Only users with root-level privileges can alter them. For typical users, including the automated user accounts in OpenShift that deploy applications, the SELinux configuration specified for a deployment is an immutable fact.
MAC is in contrast to discretionary access control (DAC) in Linux. DAC is the system of users and file ownership/access modes that we all use every day on Linux hosts. If only DAC were in effect in your OpenShift cluster, users could allow full access to their container’s resources by changing the ownership or the access mode for the container process or storage resources. One of the key security features of OpenShift is that SELinux automatically enforces MAC policies that can’t be changed by unprivileged users for pods and other resources, even if they deployed the application.
We need to take a few pages to discuss some fundamental information that we’ll use throughout the chapter. As with security in general, this won’t be a full SELinux introduction. Entire books have been written on that topic, including an SELinux coloring book available at https://github.com/mairin/selinux-coloring-book. But the following information will help you understand how OpenShift uses SELinux to create a secure platform. We’ll focus on the following SELinux concepts:
Let’s begin by taking a more detailed look at how SELinux labels are designed.
SELinux labels are applied to all objects on your OpenShift servers as they’re created. An SELinux label dictates how an object on a Linux server interacts with the SELinux kernel module. We’re defining an object in this context as anything a user or process can create or interact with on a server, such as the following:
Each object’s SELinux label has four sections, separated by colons:
Figure 11.1 shows an example of a full SELinux label for the socket interface used by Open vSwitch for communication on your OpenShift nodes at /var/run/openvswitch/db.sock. To view this label, run the following ls command, using the -Z option to include SELinux information in its output:
# ls -alZ /var/run/openvswitch/db.sock
srwxr-x---. root root system_u:object_r:openvswitch_var_run_t:s0 /var/run/openvswitch/db.sock
In addition to the standard POSIX attributes of mode, owner, and group ownership, the output also includes the SELinux label for /var/run/openvswitch/db.sock.
Next, let’s examine how SELinux labels are applied to files and other objects when they’re created.
Many common command-line tools, including ls, ps, and netstat, accept a -Z option that includes SELinux information in their output.
Because objects are presented in the Linux operating system as files, their SELinux labels are stored in their filesystem extended attributes. You can view these attributes directly for the Open vSwitch socket using the following getfattr command:
# getfattr -d -m - /var/run/openvswitch/db.sock
getfattr: Removing leading '/' from absolute path names
# file: var/run/openvswitch/db.sock
security.selinux="system_u:object_r:openvswitch_var_run_t:s0"
If you’re looking for full SELinux documentation, a great place to start is the Red Hat Enterprise Linux 7 SELinux Guide at http://mng.bz/G5t5.
Labels are applied to files using SELinux contexts: rules that are used to apply labels to objects on a Linux system. Contexts use regular expressions to apply labels depending on where the object exists in the filesystem.
One of the worst things a sysadmin can hear is a developer telling them that SELinux “breaks” their application. In reality, their application is almost certainly creating objects on the Linux server that don’t have a defined SELinux context.
If SELinux doesn’t know how to apply the correct label, it doesn’t know how to treat the application’s objects. This often results in SELinux policy denials that lead to frantic calls and requests to disable SELinux because it’s breaking an application.
To query the contexts for a system, use the semanage command and filter it using grep. You can use semanage to search for contexts that apply to any label related to any file or directory, including the Open vSwitch socket. A search for openvswitch in the semanage output shows that the context system_u:object_r:openvswitch_var_run_t:s0 is applied to any object created in the /var/run/openvswitch/ directory:
# semanage fcontext -l | grep openvswitch
/etc/openvswitch(/.*)?                        all files     system_u:object_r:openvswitch_rw_t:s0
/var/lib/openvswitch(/.*)?                    all files     system_u:object_r:openvswitch_var_lib_t:s0
/var/log/openvswitch(/.*)?                    all files     system_u:object_r:openvswitch_log_t:s0
/var/run/openvswitch(/.*)?                    all files     system_u:object_r:openvswitch_var_run_t:s0
/usr/lib/systemd/system/openvswitch.service   regular file  system_u:object_r:openvswitch_unit_file_t:s0
/usr/bin/ovs-vsctl                            regular file  system_u:object_r:openvswitch_exec_t:s0
...
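The first column in the semanage output is an ordinary extended regular expression. You can test what one of these patterns matches with grep -E (an illustration only; anchoring the pattern with ^ and $ mimics how SELinux matches it against full paths):

```shell
# The file-context pattern for the Open vSwitch runtime directory,
# anchored so it must match the whole path
pattern='^/var/run/openvswitch(/.*)?$'

# The directory itself and anything beneath it match...
echo '/var/run/openvswitch' | grep -E "$pattern"
echo '/var/run/openvswitch/db.sock' | grep -E "$pattern"

# ...but a similarly named path elsewhere does not
echo '/var/run/openvswitch-old/db.sock' | grep -E "$pattern" || echo 'no match'
```

This is why a context rule like `/var/run/openvswitch(/.*)?` covers both the directory and every object created inside it.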
Properly applied, SELinux labels create policies that control how objects with different labels can interact with each other. Let’s discuss those next.
SELinux policies are complex things. They’re heavily optimized and compiled so they can be interpreted quickly by the Linux kernel. Creating one or looking at the code that creates one is outside the scope of this book, but let’s look at a basic example of what an SELinux policy would do. For this, we’ll use an example that most people are familiar with: the Apache web server. You won’t find the Apache web server on your master node—the OpenShift API and user interfaces are served by a custom web application. But Apache is common everywhere and has long-established SELinux policies that we can use as an example.
The executable file for the Apache web server is /usr/sbin/httpd. This httpd executable has an SELinux label of system_u:object_r:httpd_exec_t:s0. On CentOS and Red Hat systems, the default Apache web content directory is /var/www/html. This directory has an SELinux label of system_u:object_r:httpd_sys_content_t:s0. The default cgi-script directory for Apache is /var/www/cgi-bin, and it has an SELinux label of system_u:object_r:httpd_sys_script_exec_t:s0. There’s also an http_port_t label for the following TCP port numbers:
80, 81, 443, 488, 8008, 8009, 8443, 9000
An SELinux policy enforces the following rules using these labels for the httpd_exec_t object type:
This means that even if Apache is somehow compromised by a remote user, it can only read content from /var/www/html and only run scripts from /var/www/cgi-bin. It also can't write to /var/www/cgi-bin. All of this is enforced by the Linux kernel, regardless of the ownership or permissions of these directories and which user owns the httpd process (see figure 11.2).
The default SELinux policy loaded on a Linux system is the targeted policy. Rules in the targeted policy are applied only to objects that have matching contexts. Every object on a server is assigned a label based on the SELinux context it matches. If an object doesn't match any context, it's assigned the unconfined_t type in its SELinux label. The unconfined_t type has no contexts or policies associated with it. Interactions between objects that aren't covered by a policy in targeted SELinux are allowed to run with no interference.
For CentOS and Red Hat Enterprise Linux, the default policies use type enforcement. Type enforcement uses the type value from SELinux labels to enforce the interactions between objects.
Let’s review what we’ve discussed up to this point about SELinux:
This SELinux configuration is standard for any CentOS or Red Hat system running with SELinux in enforcing mode. And just like the Apache web server process we've been discussing, a container is essentially a process. Each container's process is assigned an SELinux label when it's created, and that label dictates the policies that affect the container. To confirm the SELinux label that's used for containers in OpenShift, get the container's PID from docker and use the ps command with the -Z parameter, searching for that PID with grep:
# docker inspect -f '{{ .State.Pid }}' 1aa4208f4b80
2534
# ps -axZ | grep 2534
system_u:system_r:svirt_lxc_net_t:s0:c7,c8 2534 ? Ss 0:01 httpd -D FOREGROUND
OpenShift hosts operate with SELinux in enforcing mode. Enforcing mode means the policy engine that controls how objects can interact is fully activated. If an object attempts to do something that’s against an SELinux policy present on the system, the action isn’t allowed, and the attempt is logged by the kernel. To confirm that SELinux is in enforcing mode, run the following getenforce command:
# getenforce
Enforcing
In OpenShift, SELinux is taken care of automatically, and you don’t need to worry about it. There’s no reason to disable it.
In other servers, tools like virus scanners can cause issues with SELinux. A virus scanner is designed to analyze files on a server that are created and managed by other services. That makes writing an effective SELinux policy for a virus scanner a significant challenge. Another typical issue is when applications and their data are placed in locations on the filesystem that don’t match their corresponding SELinux contexts. If the Apache web server is trying to access content from /data on a server, it will be denied by SELinux because /data doesn’t match any SELinux contexts associated with Apache. These sorts of issues lead to some people deciding to disable SELinux.
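Rather than disabling SELinux, the right fix for the /data scenario is to teach SELinux about the new location. A sketch of what that looks like, assuming the stock Apache policy's httpd_sys_content_t type (these commands require root on a host with the semanage tooling installed):

```shell
# Add a context rule so /data and everything under it
# gets the Apache web-content label
semanage fcontext -a -t httpd_sys_content_t '/data(/.*)?'

# Relabel existing files to match the new rule
restorecon -Rv /data

# Verify the label was applied
ls -dZ /data
```

With the new context in place, the kernel allows httpd to read from /data without weakening any other policy on the host.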
The user and role portions of the label aren’t used for type-enforcement policies. The svirt_lxc_net_t type is used in SELinux policies that control which resources on the system containers can interact with. We haven’t discussed the fourth part of the SELinux label: the MCS level, which isolates pods in OpenShift. Let’s examine how that works next.
The original purpose of MCS levels was to implement multilevel security (MLS) standards (https://selinuxproject.org/page/NB_MLS) on Linux servers. These standards control data access for different security levels on the same server. For example, secret and top-secret data could exist on the same server. A top-secret-level process can access secret-level data, a concept called data dominance; but secret-level processes can't access top-secret data, because that data has a higher sensitivity level. This is the security feature you can use to prevent a pod from accessing data it's not authorized to access on the host.
You may have noticed that the SELinux type for the app-cli container is svirt_lxc_net_t. SVirt (https://selinuxproject.org/page/SVirt) has been used for several years to isolate kernel-based virtual machines (KVMs) using the same MCS technology. VMs and containers aren't similar technologies, but they both use SVirt to provide security for their platforms.
OpenShift uses the MCS level for each container’s process to enforce security as part of the pod’s security context. A pod’s security context is all the information that describes how it’s secured on its application node. Let’s look at the security context for the app-cli pod.
Each pod’s security context contains information about its security posture. You can find full documentation for the possible fields that can be defined at http://mng.bz/phit. In OpenShift, the following parameters are configured by default:
You can view the security context for a pod in the GUI by choosing Applications > Pods, selecting the pod you want information about, and then choosing Actions > Edit YAML. From the command line, the same output is available using the oc export command:
$ oc export pod app-cli-2-4lg8j
apiVersion: v1
kind: Pod
...
spec:
  ...
    securityContext:
      capabilities:
        drop:
        - KILL
        - MKNOD
        - SETGID
        - SETUID
        - SYS_CHROOT
      privileged: false
      runAsUser: 1000070000
      seLinuxOptions:
        level: s0:c8,c7
...
Looking at the output, two security contexts are defined. The one displayed here is the MCS level for the container; there’s a similar security context for the pod, as well. The MCS level for the container and the pod should always be equal.
We’ve been discussing SELinux for a while. Let’s bring the topic to a close by walking through how OpenShift uses a pod’s MCS level to enhance security.
The structure of the MCS level consists of a sensitivity level (s0) and two categories (c8 and c7), as shown in the following output from the previous command:
seLinuxOptions:
  level: s0:c8,c7
You may have noticed that the order of the categories is reversed in the oc output compared with the ps output. This makes no difference in how the Linux kernel reads and acts on the MCS level.
A detailed discussion of how different MCS levels can interact is out of scope for this book. If you’re looking for that depth of information, the SELinux Guide for Red Hat Enterprise Linux at http://mng.bz/G5t5 is a great place to start. Here, we’ll focus on how OpenShift uses MCS levels to isolate pods in each project.
OpenShift assumes that applications deployed in the same project will need to interact with each other. With that in mind, the pods in a project have the same MCS level. Sharing an MCS level lets applications share resources easily and simplifies the security configurations you need to make for your cluster.
Let’s examine the SELinux configurations for pods in different projects. You already know the MCS level for app-cli is s0:c8,c7. Because app-cli and app-gui are in the same project, they should have the same MCS level. To get the MCS level for the app-gui pod, use the same oc export command:
$ oc export pod app-gui-1-cwm7t | grep -A 1 seLinuxOptions
    seLinuxOptions:
      level: s0:c8,c7
--
    seLinuxOptions:
      level: s0:c8,c7
This confirms what we stated earlier: the MCS levels for app-gui and app-cli are the same because they’re deployed in the same project.
Next, let’s compare the same values for a pod deployed in another project. Use the wildfly-app application you deployed in chapter 8. To get the name of the deployed pod, run the following oc get pods command, specifying the stateful-apps project using the -n option:
$ oc get pods -n stateful-apps
NAME                  READY     STATUS    RESTARTS   AGE
wildfly-app-1-zsfr8   1/1       Running   0          6d
After you have the pod name, run the same oc export command, searching for seLinuxOptions and specifying the stateful-apps project using the -n option:
$ oc export pod wildfly-app-1-zsfr8 -n stateful-apps | grep -A 1 seLinuxOptions
    seLinuxOptions:
      level: s0:c10,c0
--
    seLinuxOptions:
      level: s0:c10,c0
Each project uses a unique MCS level for deployed applications. This MCS level permits each project’s applications to communicate only with resources in the same project. Let’s continue looking at pod security-context components with pod capabilities.
The capabilities listed in the app-cli security context are Linux capabilities that have been removed from the container process. Linux capabilities are permissions assigned to or removed from processes by the Linux kernel:
securityContext:
  capabilities:
    drop:
    - KILL
    - MKNOD
    - SETGID
    - SETUID
    - SYS_CHROOT
Capabilities allow a process to perform administrative tasks on the system. The root user on a Linux server can run commands with all Linux capabilities by default. That’s why the root user can perform tasks like opening TCP ports below 1024, which is provided by the CAP_NET_BIND_SERVICE capability, and loading modules into the Linux kernel, which is provided by the CAP_SYS_MODULE capability.
You can add capabilities to a pod if it needs to be able to perform a specific type of task. Add them to the capabilities.add list in the pod’s security context. (A full listing of capabilities and their functions is detailed at http://mng.bz/Qy03.) To remove default capabilities from pods, add the capabilities you want to remove to the drop list. This is the default action in OpenShift. The goal is to assign the fewest possible capabilities for a pod to fully function. This least-privileged model ensures that pods can’t perform tasks on the system that aren’t related to their application’s proper function.
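For example, if a pod legitimately needs to bind a port below 1024, you could request that single capability instead of running the pod fully privileged. A sketch of the relevant security-context fragment (the SCC assigned to the pod must also permit the added capability):

```yaml
securityContext:
  capabilities:
    add:
    - NET_BIND_SERVICE    # permit binding TCP/UDP ports below 1024
    drop:                 # keep dropping everything OpenShift drops by default
    - KILL
    - MKNOD
    - SETGID
    - SETUID
    - SYS_CHROOT
```

Requesting one capability keeps the pod much closer to the least-privileged model than flipping on privileged mode.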
The default value for the privileged option is False; setting the privileged option to True is the same as giving the pod the capabilities of the root user on the system. Although doing so shouldn’t be common practice, privileged pods can be useful under certain circumstances. A great example is the HAProxy pod we discussed in chapter 10. It runs as a privileged container so it can bind to port 80 on its node to handle incoming application requests. When an application needs access to host resources that can’t be easily provided to the pod, running a privileged container may help. Just remember, as the comic book says, with great power comes great responsibility.
The last value in the security context that we need to look at controls the user ID that the pod is run with: the runAsUser parameter.
In OpenShift, by default, each project deploys pods using a random UID. Just like the MCS level, the UID is common for all pods in a project, to allow easier interactions between pods when needed. The UID for each pod is listed in the security context in the runAsUser parameter:
runAsUser: 1000070000
By default, OpenShift doesn't allow applications to be deployed using UID 0, the UID of the system's root user. There aren't any known ways for a process running as UID 0 to break out of a container, but running as UID 0 in a container means you must be incredibly careful about removing capabilities and ensuring proper file ownership on the system. Disallowing UID 0 is an ounce of prevention that saves a pound of cure down the road.
Many containers available publicly on registries like Docker Hub (https://hub.docker.com) run as UID 0 by default. You can learn more about editing these images and dockerfiles, along with some best practices around OpenShift and building container images, at http://mng.bz/L4G5.
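If you're adapting one of those UID-0 images yourself, a common OpenShift-friendly pattern is to make writable paths group-owned by GID 0, because the random UID OpenShift assigns to a pod always runs with GID 0. A hypothetical Dockerfile fragment illustrating the idea (the paths and base image are examples, not from the chapter):

```dockerfile
FROM centos:7                     # hypothetical base image
RUN mkdir -p /opt/app/data && \
    chgrp -R 0 /opt/app && \
    chmod -R g=u /opt/app         # give group 0 the same rights as the owner
USER 1001                         # any non-root UID; OpenShift substitutes its own
```

Because permissions come from the group, the image keeps working no matter which random UID the project assigns.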
The components in a pod’s or container’s security context are controlled by the security context constraint (SCC) assigned to the pod when it’s deployed. An SCC is a configuration applied to pods that outlines the security context components it will operate with. We’ll discuss SCCs in more depth in the next section, when you deploy an application in your cluster that needs a more privileged security context than the default provides. This application is a container image-scanning utility that looks for security issues in container images in your OpenShift registry. Let’s get started.
OpenShift is only as secure as the containers it deploys. Even if your container images are built using proven, vetted base images supplied by vendors or created using your own secure workflows, you still need a process to ensure that the images you’re using don’t have security issues as they age in your cluster. The most straightforward solution for this challenge is to scan your container images.
You’re going to scan a single container image on demand in this chapter. In a production environment, image scanning should be an integral component in your application deployment workflows. Companies like Black Duck Software (www.blackducksoftware.com) and Twistlock (www.twistlock.com) have image-scanning and -compliance tools that integrate with OpenShift.
You must be able to trust what’s running in your containers and quickly fix issues when they’re found. An entire industry has sprung up in the past few years that provides container image-scanning products to help make this an everyday reality.
To run this scan in OpenShift, you need a container image that includes the scanning engine. An OpenShift image scanner called Image Inspector is available at https://github.com/openshift/image-inspector. An image built using this source code is available on Docker Hub at https://hub.docker.com/r/openshift/image-inspector. We’ve created an OpenShift template that uses Image Inspector, making its needed inputs parameters that are easy to use in OpenShift.
Follow these steps:
1. Create a new project named image-scan using the following oc command:
$ oc new-project image-scan
2. Import the image-inspector template into the image-scan project. Creating the template in the image-scan project means it won’t be visible to users who don’t have access to the image-scan project.
Tip
By default, all templates available in the service catalog are located in the openshift project. You can learn more about how templates work in OpenShift at http://mng.bz/8DF6.
3. To import the template, run the following oc command:
$ oc create -f https://raw.githubusercontent.com/OpenShiftInAction/chapter11/master/image-scanner/templates/image-scanner.yaml -n image-scan
template "image-scan-template" created
Once it completes, the image-scanner template will be available for use in the image-scan project.
The image you’ll scan is wildfly-app, which you deployed when you were working with stateful applications in chapter 8. The image-scan-template template has parameters defined, as shown in the following listing; these are used to specify the image that’s being scanned.
parameters:
- name: APPLICATION_NAME
  displayName: Application Name
  description: The name assigned to all of the frontend objects defined in this template.
  value: image-inspector
  required: true
- name: IMAGE_INSPECTOR_URL
  displayName: Container Image that is doing the scans
  description: The image inspector image, defaults to CentOS, for RHEL use
    registry.access.redhat.com/openshift3/image-inspector:latest
  value: docker.io/openshift/image-inspector:latest
  required: true
- name: IMAGE_TO_SCAN_URL
  displayName: Image URL to scan with OpenSCAP
  description: The image getting scanned with OpenSCAP
  value: registry.access.redhat.com/rhel7:7.0-21
  required: true
- name: SCAN_TYPE
  displayName: Scan Type
  description: Type of scan you want image-inspect to run
  value: openscap
  required: true
- name: DOCKERCFG_SECRET
  displayName: dockercfg Secret
  description: This is the name of a pre-existing dockercfg secret with credentials
    to access the registry
  required: true
- name: SERVICE_ACCOUNT
  displayName: Service Account
  description: The Service Account to run the pod as
  value: default
  required: true
The default value for the IMAGE_TO_SCAN_URL parameter is registry.access.redhat.com/rhel7:7.0-21, the publicly available Red Hat Enterprise Linux 7.0 container image. You need to supply the full container-image URL for the image you want to scan as the IMAGE_TO_SCAN_URL value. To get the URL for the image used to deploy wildfly-app, run the following oc describe command:
# oc describe dc/wildfly-app -n stateful-apps | grep Image:
    Image: docker-registry.default.svc:5000/stateful-apps/wildfly-app@sha256:e324ae4a9c44daf552e6d3ee3de8d949e26b5c5bfd933144f5555b9ed0bf3c84
OpenShift doesn’t use image tags to specify an image to use when deploying an application, because tags can be changed on images in a registry. Container image tags are mutable objects. Instead, OpenShift uses an immutable SHA256 digest to identify the exact image to deploy a specific version of your application. This is another security safeguard that’s used in OpenShift by default. You can cryptographically prove that the image in your registry is the image you’re using to deploy applications on your host. Pulling images by digest is defined and explained in more depth in the docker engine documentation at http://mng.bz/81H4.
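A digest behaves like a checksum of the image content: change anything, and the digest changes completely. You can see the same property with sha256sum on an ordinary file (an illustration only; registries actually hash the image manifest, not a tarball on disk):

```shell
# Hash some stand-in "image" content
echo 'layer-content-v1' > image.tar
sha256sum image.tar

# Change the content and the digest is completely different
echo 'layer-content-v2' > image.tar
sha256sum image.tar
```

Because a tag is just a movable pointer while the digest is derived from the bytes themselves, pulling by digest guarantees you run exactly the image you tested.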
To download a copy of this image to scan, the image-scanning application needs to be able to download images from the OpenShift image registry. Permission to download images from the registry is controlled using a secret, similar to those you created in chapter 6. The dockercfg secret is the JSON data used to log in to a docker registry (http://mng.bz/O0sm) encoded as a base-64 string. It’s one of several secrets created and used by OpenShift:
# oc get secrets
NAME                       TYPE                                  DATA      AGE
builder-dockercfg-24q2h    kubernetes.io/dockercfg               1         39d
builder-token-dslpv        kubernetes.io/service-account-token   4         39d
builder-token-rdv3n        kubernetes.io/service-account-token   4         39d
default-dockercfg-dvklh    kubernetes.io/dockercfg               1         39d
default-token-b8dq2        kubernetes.io/service-account-token   4         39d
default-token-g9b4p        kubernetes.io/service-account-token   4         39d
deployer-dockercfg-w8jg2   kubernetes.io/dockercfg               1         39d
deployer-token-b761w       kubernetes.io/service-account-token   4         39d
deployer-token-zphcs       kubernetes.io/service-account-token   4         39d
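The dockercfg payload itself is ordinary base64-encoded JSON. A sketch of the round trip with made-up credentials (on a real cluster you'd base64-decode the secret's data field instead of encoding your own):

```shell
# A made-up registry login, encoded the way OpenShift stores it in the secret
cfg='{"docker-registry.default.svc:5000":{"username":"serviceaccount","password":"<token>","email":"sa@example.com"}}'
encoded=$(printf '%s' "$cfg" | base64 -w0)

# Decoding the stored value recovers the registry-login JSON
printf '%s' "$encoded" | base64 -d
```

Anything that can read the secret can therefore log in to the registry, which is why access to secrets is scoped per project.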
To deploy the Image Inspector application and have it scan the wildfly-app image, use the following oc new-app command. Supply the wildfly-app URL, and parse the secret output to supply the name of the dockercfg secret as parameters:
$ oc new-app --template=image-scan/image-scan-template \
> -p DOCKERCFG_SECRET=$(oc get secrets -o jsonpath='{ .items[*].metadata.name}' | xargs -n1 | grep 'default-dockercfg*') \
> -p IMAGE_TO_SCAN_URL=docker-registry.default.svc:5000/stateful-apps/wildfly-app@sha256:e324ae4a9c44daf552e6d3ee3de8d949e26b5c5bfd933144f5555b9ed0bf3c84
...
--> Creating resources ...
    deploymentconfig "image-inspector" created
--> Success
    Run 'oc status' to view your app.
We’ll use the data generated by this new image-scanner application to examine the scan results for the WildFly image.
Running oc new-app deploys a deployment config. The deployment config downloads the container image for wildfly-app so it can be scanned for vulnerabilities. But something isn’t right: if you wait for a few minutes, the deployment pod is running, but the application pod never gets created. To figure out what’s happening, let’s examine the events recorded by OpenShift for the image-scan project, including the error deploying the Image Inspector application:
$ oc get events -n image-scan
...
39s   3m   16   image-inspector-1   ReplicationController   Warning   FailedCreate
replication-controller   Error creating: pods "image-inspector-1-" is forbidden:
unable to validate against any security context constraint:
[spec.volumes[1]: Invalid value: "hostPath": hostPath volumes are not allowed to be used
provider restricted: .spec.containers[0].securityContext.privileged:
Invalid value: true: Privileged containers are not allowed]
The security context for each pod is configured based on the security context constraint (SCC) assigned to the pod when it’s created. The default SCC for an application is the restricted SCC. The restricted SCC creates a security context that matches what you saw earlier in this chapter for the app-cli deployment:
The error listed in the events tells you that the image-inspector pod is attempting to define the security context with privileged mode enabled, and the restricted security context prevents that configuration from deploying. To run the Image Inspector application, you need to change the SCC used to deploy the pod.
OpenShift is configured with several SCCs that provide different levels of access for pods, including the default restricted SCC. The privileged SCC lets a pod deploy as any UID, with all Linux capabilities, with any MCS level, and with privileged mode enabled:
$ oc get scc
NAME         PRIV    CAPS   SELINUX     RUNASUSER        FSGROUP     SUPGROUP   PRIORITY   READONLYROOTFS   VOLUMES
...
privileged   true    [*]    RunAsAny    RunAsAny         RunAsAny    RunAsAny   <none>     false            [*]
restricted   false   []     MustRunAs   MustRunAsRange   MustRunAs   RunAsAny   <none>     false            [configMap downwardAPI emptyDir persistentVolumeClaim projected secret]
The privileged SCC fulfills the image-inspector pod’s request for privileged mode to be enabled. To change the SCC for the image-inspector pod, you need to change the default SCC for the service account that’s used to run pods in the image-scan project.
A service account is used in OpenShift when one component is interacting with another as part of a workflow. When a project is created in OpenShift, the following three service accounts are created by default:
You can create additional service accounts to fit your specific needs. The process and more details are documented at http://mng.bz/8M4n.
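For instance, rather than widening the default service account (as we do next for simplicity), you could create a dedicated service account for the scanner and grant the SCC only to it. A sketch, where the image-scanner name is hypothetical:

```shell
# Create a purpose-built service account in the project
oc create serviceaccount image-scanner -n image-scan

# Grant the privileged SCC to that account only
oc adm policy add-scc-to-user privileged -z image-scanner -n image-scan
```

The template's SERVICE_ACCOUNT parameter could then be set to image-scanner, leaving every other pod in the project running under the restricted SCC.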
To view the service accounts for a project, you can run the following oc get command:
$ oc get serviceaccount -n image-scan
NAME       SECRETS   AGE
builder    2         5d
default    2         5d
deployer   2         5d
To deploy Image Inspector, you need to add the privileged SCC to the default service account for the image-scan project. To do that, run the following oc adm command:
$ oc adm policy add-scc-to-user privileged -z default -n image-scan

$ oc import-image registry.access.redhat.com/rhel7:7.0-21 --confirm
The import completed successfully.

Name:             rhel7
Namespace:        image-scan
Created:          Less than a second ago
Labels:           <none>
Annotations:      openshift.io/image.dockerRepositoryCheck=2017-12-10T04:37:14Z
Docker Pull Spec: docker-registry.default.svc:5000/image-scan/rhel7
Image Lookup:     local=false
Unique Images:    1
Tags:             1

7.0-21
  tagged from registry.access.redhat.com/rhel7:7.0-21

  * registry.access.redhat.com/rhel7@sha256:141c69dc6ae89c73339b6ddd68b6ec6eeeb75ad7b4d68bcb7c25e8d05d9f5e60
      Less than a second ago

Image Name:    rhel7:7.0-21
Docker Image:  registry.access.redhat.com/rhel7@sha256:141c69dc6ae89c73339b6ddd68b6ec6eeeb75ad7b4d68bcb7c25e8d05d9f5e60
Name:          sha256:141c69dc6ae89c73339b6ddd68b6ec6eeeb75ad7b4d68bcb7c25e8d05d9f5e60
Created:       Less than a second ago
Image Size:    50.37 MB
Image Created: 3 years ago
Author:        <none>
Arch:          amd64
With this change made, you’re ready to deploy the Image Inspector application using the privileged SCC. Before you do that, however, you need to remove the previous, failed deployment using the following oc delete command:
# oc delete dc/image-inspector
deploymentconfig "image-inspector" deleted
After the previous deployment is deleted, rerun the oc new-app command to deploy Image Inspector:
$ oc new-app --template=image-scan/image-scan-template \
> -p DOCKERCFG_SECRET=$(oc get secrets -o jsonpath='{ .items[*].metadata.name}' | xargs -n1 | grep 'default-dockercfg*') \
> -p IMAGE_TO_SCAN_URL=docker-registry.default.svc:5000/stateful-apps/wildfly-app@sha256:e324ae4a9c44daf552e6d3ee3de8d949e26b5c5bfd933144f5555b9ed0bf3c84
Downloading the image and the security scanner content into the build pod will take a minute or two, depending on your internet connection speed. During this time, your pod will be in ContainerCreating status:
# oc get pods
NAME                       READY     STATUS              RESTARTS   AGE
image-inspector-1-deploy   1/1       Running             0          16s
image-inspector-1-xmlkb    0/1       ContainerCreating   0          13s
After the content downloads, the image-inspector pod will be in a Running state, like any other pod you’ve worked with so far. At this point, the pod has run its scan on the container image, and the results are ready for you to view and act on.
The image scanner in the pod uses OpenSCAP (www.open-scap.org) to scan and generate a report on the wildfly-app container image.
This scanning method relies on the RPM metadata in Red Hat base images to run properly, so it may not work on images based on other Linux distributions, including CentOS.
This report is stored in the pod at /tmp/image-content/results.html. To transfer the HTML report to your local workstation, use the following oc rsync command:
oc rsync image-inspector-1-xmlkb:/tmp/image-content/results.html .
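Once the file is on your workstation, a quick sanity check confirms it transferred and looks like an OpenSCAP report before you open it. This is a minimal sketch: we create a stand-in results.html because the real one comes from the pod, and the 'OpenSCAP' marker string is an assumption about the report's contents:

```shell
# Stand-in for the report copied out of the pod
# (in practice results.html comes from the oc rsync above).
printf '<html><head><title>OpenSCAP Evaluation Report</title></head></html>' > results.html

# A non-empty file that mentions OpenSCAP is ready to open in a browser.
test -s results.html && grep -q 'OpenSCAP' results.html && echo "report looks valid"
```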
Open the scan results in your web browser, and you'll see a full report of how close to compliance your wildfly-app container image is, along with any errata it may be missing. Figure 11.3 shows that our results were close but included three high-severity security issues.
You don’t want to deploy applications when their images have potentially dangerous security issues. In the next section, you’ll add an annotation to the wildfly-app image to prevent it from being run.
OpenShift is configured with image policies that control which images are allowed to run on your cluster. The full documentation for image policies is available at http://mng.bz/o1Po. Image policies are enforced based on annotations in the image metadata, and you can add these annotations manually. The deny-execution policy prevents an image from running on the cluster under any conditions. To apply this policy to the wildfly-app image, run the following oc annotate command:
oc annotate image sha256:e324ae4a... images.openshift.io/deny-execution=true
image "sha256:e324ae4a9c44daf552e6d3ee3de8d949e26b5c5bfd933144f5555b9ed0bf3c84" annotated
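You can double-check that the annotation landed by inspecting the image object's metadata. The check amounts to looking for the deny-execution key; here it's sketched against an abridged, hypothetical JSON fragment of the kind oc get image -o json would return:

```shell
# Abridged, hypothetical fragment of 'oc get image <sha> -o json' output.
json='{"metadata":{"annotations":{"images.openshift.io/deny-execution":"true"}}}'

# If the deny-execution annotation is set to true, the image is blocked by policy.
if echo "$json" | grep -q '"images.openshift.io/deny-execution":"true"'; then
  echo "image is blocked by policy"
fi
```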
Image policies don’t affect running pods, but they prevent an image with the deny-execution annotation from being used for new deployments. To see this in action, delete the active pod for your wildfly-app deployment with oc delete pod. Normally, the replication controller for the wildfly-app deployment would automatically deploy a replacement pod from the same image, but no new pod appears. Looking at the events for the stateful-apps project, you can see that OpenShift's image policy is reading the annotation you added to the image and preventing a new pod from being deployed:
$ oc get events -n stateful-apps
...
16s   24s   14   wildfly-app-1   ReplicationController   Warning   FailedCreate
  replication-controller   Error creating: Pod "" is invalid: spec.containers[0].image:
  Forbidden: this image is prohibited by policy
In this process, you manually scanned a container image and added an annotation to it because security issues were found. The annotation is read by the OpenShift image-policy engine and prevents any new pods from being deployed using that image. Automated solutions like Black Duck and Twistlock handle this dynamically, adding annotations that record the security findings and information about the scan. These annotations can be used for security reporting and to ensure that the most secure applications are deployed in OpenShift at all times.
You started this chapter with SELinux and worked your way up to the security contexts that define how pods are assigned security permissions in OpenShift. You used the privileged SCC to give the Image Inspector image scanner the permissions it needed to run. You then deployed the Image Inspector application to scan an existing container image and generate a report on any security findings. Finally, you used image policies to prevent the scanned image from being deployed because you found security issues in its scan results. That sounds like a good place to end this security chapter.
As we said at the start of the chapter, this isn’t a comprehensive list or a complete security workflow. Our goal has been to introduce you to what we think are the most important security concepts in OpenShift and give you enough information to begin to use and customize them as you gain experience using OpenShift.
This also seems like a good place to wrap up OpenShift in Action. As was the case for this chapter, we never intended this book to be a comprehensive OpenShift manual. To be honest, OpenShift and its components are growing and changing too quickly for a truly comprehensive manual to ever be put in print. Our goal has been to focus on the fundamental knowledge that will help you implement OpenShift, even as newer versions are released and the technology evolves. We hope we’ve done that.
We also hope you have a fully functional cluster up and running that you can use for your ongoing learning around containers and OpenShift. We’ll continue to update the code and helper applications at www.manning.com/books/openshift-in-action and https://github.com/OpenShiftInAction. We’ll also continue to be active in the Manning book forum at https://forums.manning.com/forums/openshift-in-action. If you have questions or ideas for improvement, or just want to say hi, you can find us at either of those locations online. Thank you—and we hope you’ve enjoyed OpenShift in Action.