Chapter 26

Managing Containers

The following RHCSA exam objectives are covered in this chapter:

  • Find and retrieve container images from a remote registry

  • Inspect container images

  • Perform container management using commands such as podman and skopeo

  • Build a container from a Containerfile

  • Perform basic container management such as running, starting, stopping, and listing running containers

  • Run a service inside a container

  • Configure a container to start automatically as a systemd service

  • Attach persistent storage to a container

Containers have revolutionized datacenter IT. Where services not so long ago were running directly on top of the server operating system, nowadays services are often offered as containers. Red Hat Enterprise Linux 9 includes a complete platform to run containers. In this chapter you learn how to work with them.

“Do I Know This Already?” Quiz

The “Do I Know This Already?” quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 26-1 lists the major headings in this chapter and their corresponding “Do I Know This Already?” quiz questions. You can find the answers in Appendix A, “Answers to the ‘Do I Know This Already?’ Quizzes and Review Questions.”

Table 26-1 “Do I Know This Already?” Section-to-Question Mapping

Foundation Topics Section

Questions

Understanding Containers

1, 2

Running a Container

3, 4

Working with Container Images

5, 6

Managing Containers

7

Managing Container Storage

8, 9

Running Containers as Systemd Services

10

1. The success of containers depends on different Linux features. Which of the following is not one of them?

  1. Cgroups

  2. Semaphores

  3. Namespaces

  4. SELinux

2. What is the name of the Red Hat solution to add enterprise features such as scalability and availability to containers?

  1. OpenStack

  2. OpenShift

  3. Kubernetes

  4. JBoss

3. How do you detach from a running container without shutting it down?

  1. exit

  2. quit

  3. detach

  4. Ctrl-P, Ctrl-Q

4. Which command will run an application container in the background?

  1. podman run nginx

  2. podman run -d nginx

  3. podman run --background nginx

  4. podman run -it nginx

5. Which command do you use to inspect images that have not yet been pulled to your local system?

  1. podman inspect

  2. buildah inspect

  3. skopeo inspect

  4. docker inspect

6. Which command do you use for an overview of the registries currently in use?

  1. podman info

  2. podman status

  3. podman search

  4. podman registries

7. There are many ways to figure out whether a container needs any environment variables. Which of the following can you use?

  1. Use podman inspect to inspect the image that you want to run. Within the image, you’ll often find usage information.

  2. Use podman run to run the container. If environment variables are required, it will fail. You can next use podman logs to inspect messages that have been logged to STDOUT.

  3. Read the documentation provided in the container registry.

  4. All of the above.

8. Which SELinux context type must be set on host directories that you want to expose as persistent storage in the container using bind mounts?

  1. container_t

  2. container_file_t

  3. container_storage_t

  4. public_content_rw_t

9. Which of the following commands shows correct syntax to automatically set the correct SELinux context type on a host directory that should be exposed as persistent storage inside a container?

  1. podman run --name mynginx -v /opt/nginx:/var/lib/nginx nginx

  2. podman run --name mynginx --bind /opt/nginx:/var/lib/nginx nginx

  3. podman run --name mynginx -v /opt/nginx:/var/lib/nginx:Z nginx

  4. podman run --name mynginx --bind /opt/nginx:/var/lib/nginx:Z nginx

10. What is needed to ensure that a container that user anna has created can be started as a systemd service at system start, not just when user anna is logging in?

  1. Configure the container as a systemd service.

  2. Use loginctl enable-linger anna to enable the linger feature for user anna.

  3. Use systemctl enable-linger anna to enable the linger feature for user anna.

  4. Just use systemctl --user enable to enable the container.

Foundation Topics

Understanding Containers

In the past decade, containers have revolutionized the way services are offered. Where not so long ago physical or virtual servers were installed to offer application access, this is now done by using containers. But what exactly is a container? Let’s start with an easy conceptual description: a container is just a fancy way to run an application based on a container image that contains all dependencies required to run that application.

To install a noncontainerized application on a server, the server administrator must make sure that not only the application is installed but also all the other software dependencies required by the application. This includes, for instance, the right (supported) version of the underlying operating system. This makes it difficult for application developers, who need to provide many versions of their applications to support all current operating systems.

A container is a complete package that runs on top of the container engine, an integrated part of the host operating system. A container is comparable to an application on your smartphone: you get the complete application package from the smartphone’s app store and install it on your phone.

To use a container, you run the container from the container image. This container image is found in the container registry, which can be compared to the app store that hosts smartphone applications. The result is the container, which is the runnable instance of the container image.

To run containers, you need a host operating system that includes a container engine, as well as some tools to manage the containers. On versions of RHEL prior to RHEL 8, this was provided by Docker, which delivered the container engine as well as the tools to manage the containers. In RHEL 8, Red Hat replaced Docker with its own solution, which is still used on RHEL 9: CRI-O is the container engine, and Red Hat offers three main tools to manage the containers:

  • podman: The main tool, used to start, stop, and manage containers

  • buildah: A specialized tool that helps you create custom images

  • skopeo: A tool that is used for managing and testing container images
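All three tools are delivered through the container-tools package. The following quick check is a sketch (not part of the exam objectives) that shows which of them are present on a host:

```shell
# Check which of the RHEL container tools are available on this host.
for tool in podman buildah skopeo; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: installed"
  else
    echo "$tool: not installed (provided by the container-tools package)"
  fi
done
```

If any of the tools is missing, sudo dnf install container-tools installs the complete set.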

Container Host Requirements

Sometimes it is said that containers are Linux, and that is true. This is because containers rely heavily on features that are offered by the Linux kernel, including the following:

  • Namespaces for isolation between processes

  • Control groups for resource management

  • SELinux for security

Let’s explore each of these features. To start with, containers need namespaces. A namespace provides isolation for system resources. To best understand what namespaces are like, let’s look at the chroot jail, a feature that was introduced in the 1990s. A chroot jail is a security feature that presents the contents of a directory as if it is the root directory of your system, so the process that runs in a chroot jail can’t see anything but the contents of that directory.

Chroot jails are important for security. When a process is restricted to just the contents of a chroot jail, there is no risk of it accessing other parts of the operating system. However, to make sure this works, all the dependencies required to run the process must be present in the chroot jail.

Chroot jails still exist, but their functionality has been absorbed into what is now called the mount namespace. Here’s an overview of it and some of the other namespaces (note that new namespaces may be added in the future as well):

  • Mount: The mount namespace is the equivalent of the chroot jail. The contents of a directory are presented in such a way that no other directories can be accessed.

  • Process: A process namespace makes sure that processes running in this namespace cannot reach or connect to processes in other namespaces.

  • Network: Network namespaces can be compared to VLANs. Nodes connected to a specific network namespace cannot see what is happening in other network namespaces, and contact with other namespaces is possible only through routers.

  • User: The user namespace can be used to separate user IDs and group IDs between namespaces. As a result, user accounts are specific to each namespace, and a user who is available in one namespace may not be available in another namespace.

  • Interprocess communication (ipc): Interprocess communication is what processes use to connect to one another, and these namespaces ensure that connection can be made only to processes in the same namespace.

In containers, almost all of these namespaces are implemented to ensure that the container is a perfectly isolated environment. Only the network namespace is not enabled by default, so that communication between containers is not restricted.
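You can see these namespaces in action on any Linux host: every process exposes the namespaces it belongs to under /proc. A minimal illustration that requires no container tools:

```shell
# List the namespaces the current shell process ($$) belongs to.
# Entries such as mnt, pid, net, ipc, user, and uts correspond to the
# namespace types described above; two processes share a namespace
# when the corresponding symlink targets in /proc/<pid>/ns are identical.
ls /proc/$$/ns
```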

The second important Linux component that is required for running containers is the control group, or cgroup. Cgroups are a kernel feature that enables resource access limitation. By default, there is no restriction to the amount of memory or the number of CPU cycles a process can access. Cgroups make it possible to create that limitation in such a way that each container has strictly limited access to available resources.

The last important pillar of containers is implemented on RHEL by using SELinux. As you’ve learned elsewhere in this book, SELinux secures access by using resource labels. On RHEL, a specific context label is added to ensure that containers can access only the resources they need access to and nothing else.

Containers on RHEL 9

Since its launch in 2014, Docker has been the leading solution for running containers. Up to RHEL 7, Docker was the default container stack used on Red Hat Enterprise Linux. As previously mentioned, with the release of RHEL 8, Red Hat decided to discontinue Docker support and offer its own stack. This stack is based on the CRI-O container runtime and uses Podman as the main tool to run containers. The new solution offers a few advantages over the Docker solution:

  • In Podman, containers can be started by ordinary users who do not need any elevated privileges. Such containers are called rootless containers.

  • When users start containers, the containers run in a user namespace where they are strictly isolated and not accessible to other users.

  • Podman containers run on top of the lightweight CRI-O container runtime, without needing any daemon to do their work.

An important benefit of using Podman is the rootless container. On RHEL 8 and 9, rootless containers are started by non-root users and don’t require root privileges. This makes running containers much more secure, but it also comes with some challenges. Rootless containers cannot access any components on the host operating system that require root access. For example, a rootless container does not have its own IP address (because allocating an IP address requires root privileges) and can bind only to a nonprivileged TCP or UDP port. Also, if the rootless container needs access to host-based storage, the user who runs the container must be the owner of the directory that provides the storage.
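Because a rootless container cannot bind to a privileged host port, you typically publish the container port on a high host port instead. A hedged sketch (the name web-rootless and port 8080 are arbitrary examples; the commands are skipped when podman is not installed):

```shell
# Rootless sketch: publish nginx's port 80 on unprivileged host port 8080.
# Binding to host port 80 directly would fail for a non-root user.
if command -v podman >/dev/null 2>&1; then
  podman run -d --name web-rootless -p 8080:80 docker.io/library/nginx
  podman port web-rootless
else
  echo "podman not installed; skipping"
fi
```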

Container Orchestration

The solutions for running containers that are discussed in this chapter are all about running standalone containers on top of a single host. If that host goes down, no running containers are left. When containers are used to run mission-critical services, additional features are needed. They include the following:

  • Easy connection to a wide range of external storage types

  • Secure access to sensitive data

  • Decoupling, such that site-specific data is strictly separated from the code inside the container environment

  • Scalability, such that when the workload increases, additional instances can easily be added

  • Availability, ensuring that the outage of a container host doesn’t result in container unavailability

To implement these features, Kubernetes has established itself as the industry standard. Kubernetes is open source and currently is the only solution that matters for adding enterprise features to containers. Red Hat has its own Kubernetes distribution, called OpenShift. For building a scalable, flexible, and reliable infrastructure based on containers, you should investigate the options offered by either Kubernetes or OpenShift. These topics are outside the scope of the RHCSA exam and for that reason are not discussed further here.

Running a Container

To get familiar with containers, let’s start by running some. To get full access to all the tools that RHEL offers for running containers, you should start by installing the appropriate software, using sudo dnf install container-tools. After installing this software, you can start running your first container by using podman run, which does not require any root privileges. You can use this command with many arguments; the only argument that is really required, however, is the name of the image that you want to run. As we discuss later, the image is fetched from one of the container registries that is configured by default. To run your first container, use the command podman run nginx. This will try to start the nginx image from one of the known registries. You can see the result of running this command in Example 26-1.

Example 26-1 Podman May Prompt Which Registry You Want to Use

[root@server1 ~]# podman run nginx
? Please select an image:
    registry.fedoraproject.org/nginx:latest
    registry.access.redhat.com/nginx:latest
    registry.centos.org/nginx:latest
    quay.io/nginx:latest
    docker.io/library/nginx:latest

When you use podman run, it may not be clear from which registry the image should be fetched. If that is the case, the podman command prompts you to choose one of the available registries, as you can see in Example 26-1. You can avoid this by including the complete registry name of the image: if you use podman run docker.io/library/nginx, Podman knows it needs to fetch the image from the docker.io registry. Example 26-2 shows how this works.

Example 26-2 Running Your First Container with podman run nginx

[root@server1 ~]# podman run docker.io/library/nginx
Resolved "nginx" as an alias (/var/cache/containers/short-name-
  aliases.conf)
Trying to pull docker.io/library/nginx:latest...
Getting image source signatures
Copying blob eef26ceb3309 done
Copying blob 71689475aec2 done
Copying blob 8e3ed6a9e43a done
Copying blob f88a23025338 done
Copying blob 0df440342e26 done
Copying blob e9995326b091 done
Copying config 76c69feac3 done
Writing manifest to image destination
Storing signatures
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will
  attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /
  docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-
  ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/
  nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/
  nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-
  templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-
  processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2022/10/31 07:27:27 [notice] 1#1: using the "epoll" event method
2022/10/31 07:27:27 [notice] 1#1: nginx/1.23.2
2022/10/31 07:27:27 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian
  10.2.1-6)
2022/10/31 07:27:27 [notice] 1#1: OS: Linux 5.14.0-70.13.1.el9_0.x86_64
2022/10/31 07:27:27 [notice] 1#1: getrlimit(RLIMIT_NOFILE):
  1048576:1048576
2022/10/31 07:27:27 [notice] 1#1: start worker processes
2022/10/31 07:27:27 [notice] 1#1: start worker process 24
2022/10/31 07:27:27 [notice] 1#1: start worker process 25

As you can see in Example 26-2, when running the container, Podman starts by fetching the container image from the registry you want to use. Container images typically consist of multiple layers, which is why you can see that different blobs are copied. When the image file is available on your local server, the nginx container is started. As you will also notice, the container runs in the foreground. Use Ctrl-C to terminate the container.

You typically want to run containers in detached mode (which runs the container in the background) or in a mode where you have access to the container console. You can run a container in detached mode by using podman run -d nginx. Notice that all options that modify the podman command (podman run in this case) must be placed directly after the podman run command, not after the name of the image.
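A short sketch of this option-ordering rule (web1 is an arbitrary container name; the commands are skipped when podman is absent):

```shell
# Options such as -d and --name modify 'podman run' and therefore go
# between 'run' and the image name, never after the image name.
if command -v podman >/dev/null 2>&1; then
  podman run -d --name web1 docker.io/library/nginx
  podman ps --filter name=web1
else
  echo "podman not installed; skipping"
fi
```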

When you run a container in detached mode, it really runs like a daemon in the background. Alternatively, you can run the container in interactive TTY mode. In this mode, you get access to the container TTY and from there can work within the container. However, this makes sense only if the container is configured to start a shell as its default command. If it is not, you can append /bin/sh to the podman run command so that the container starts a shell instead of its default command.

Let’s have a look at how this works:

Step 1. To start the nginx image in interactive TTY mode, use the command podman run -it nginx.

Step 2. You are now connected to a TTY in which you only have access to the nginx process output. That doesn’t make sense, so use Ctrl-C to get out.

Step 3. Now start the container using podman run -it nginx /bin/sh. This will start the /bin/sh command, instead of the container default command, which will give you access to a shell. After starting the container in this way, you have access to the TTY, and all the commands that you enter are entered in the container and not on the host operating system.

Tip

Container images are normally created as minimal environments, and for that reason you may not be able to run a bash shell. That’s why in the previous example we used /bin/sh. This is a minimal shell, and no matter which container image you use, it will always be there.

When you’re running in interactive mode, there are two ways to get out of it:

  • Use exit to exit the TTY mode. If you started the container using podman run -it nginx /bin/sh, this will stop the container. That’s because the exit command stops the primary container command, and once that is stopped the container has no reason to be around anymore.

  • Use Ctrl-P, Ctrl-Q to detach. This approach ensures that in all cases the container keeps on running in the background in detached mode. That may not always be very useful though. If like in the previous example you’ve started the nginx image with /bin/sh as the default command (instead of the nginx service), keeping it around might not make much sense because it isn’t providing any functionality anyway.

To get an overview of currently running containers, you can use the podman ps command. This will show you only containers that are currently running. If a container has been started but has already been stopped, you won’t see it. If you also want to see containers that have been running but are now inactive, use podman ps -a. In Example 26-3 you can see the output of the podman ps -a command.

Example 26-3 podman ps -a Output

[student@podman ~]$ podman ps -a
CONTAINER ID  IMAGE                             COMMAND               CREATED         STATUS                     PORTS  NAMES
1f6426109d3f  docker.io/library/busybox:latest  sh                    6 minutes ago   Exited (0) 6 minutes ago          adoring_feynman
0fa670dc56fe  docker.io/library/nginx:latest    nginx -g daemon o...  8 minutes ago   Up 8 minutes ago                  web1
15520f225787  docker.io/library/nginx:latest    nginx -g daemon o...  32 minutes ago  Exited (0) 32 minutes ago         peaceful_visvesvaraya

Notice the various columns in the output of the podman ps command. Table 26-2 summarizes what these columns are used for.

Table 26-2 podman ps Output Columns Overview

Column

Use

CONTAINER ID

The automatically generated container ID; often used in names of files created for this container.

IMAGE

The complete registry reference to the image used for this container.

COMMAND

The command that was started as the default command with this container.

CREATED

An indication of how long ago the container was created.

STATUS

Current status.

PORTS

If applicable, ports configured or forwarded for this container.

NAMES

The name of this container. If no name was specified, a name will be automatically generated.
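If you need just a few of these columns, podman ps also accepts a Go-template format string; a hedged sketch (skipped when podman is absent):

```shell
# Print only the ID, name, and status columns from Table 26-2.
if command -v podman >/dev/null 2>&1; then
  podman ps -a --format "{{.ID}}  {{.Names}}  {{.Status}}"
else
  echo "podman not installed; skipping"
fi
```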

In Exercise 26-1 you can practice running containers and basic container management.

Exercise 26-1 Running Containers with podman

  1. Use sudo dnf install container-tools to install the container software.

  2. Type podman ps -a to get an overview of currently existing containers. Observe the STATUS field, where you can see whether the container currently is active.

  3. Type podman run -d nginx. This command starts an nginx container in detached mode.

  4. Type podman ps and observe the output. In the CONTAINER ID field, you’ll see the unique ID that has been generated. Also observe the NAME field, where you’ll see a name that has automatically been generated.

  5. Type podman run -it busybox. This command runs the busybox cloud image, a minimized Linux distribution that is often used as the foundation for building custom containers.

  6. Because the busybox container image was configured to run a shell as the default command, you’ll get access to the shell that it is running. Type ps aux to see the processes running in this container namespace. Notice that the ps command works, which is not the case for all container images you may be using.

  7. Type exit to close the busybox shell.

  8. Type podman ps. You won’t see the busybox container anymore because in the previous step you exited it.

  9. Type podman run -it busybox once more, and when you have access to its interactive shell, press Ctrl-P, Ctrl-Q to detach.

  10. Use podman ps. You’ll notice the busybox container is still running. Look at the NAME column to find the name for the container that was automatically generated.

  11. Use podman attach <name>, where <name> should be replaced with the name you found in the preceding step. This will reconnect you to the shell that is still waiting on the busybox container.

  12. Use Ctrl-P, Ctrl-Q again to detach.

  13. Type podman stop <name>. This will stop the busybox container.

Tip

When you run non-root containers, the container files are copied to the ~/.local/share/containers/storage directory. Make sure you have enough storage space in the user home directory. With an average file size of about 60 MB for each container, disk space will be used fast!

Working with Container Images

The foundation of every container is the container image. The container is a running instance of the image; while the container runs, a writable layer is added on top of the image to store any changes made in the container. To work with images successfully, you need to know how to access container registries and how to find the appropriate image in these registries. Container images are created in the Docker format, which has become an important standard for defining container images; that is why you can run container images in Docker format without any problem on RHEL.

Using Registries

Container images are typically fetched from container registries, which are specified in the /etc/containers/registries.conf configuration file. A user who runs a rootless container can create a file ~/.config/containers/registries.conf. In case of conflict, settings in the user-specific file will override settings in the generic file.

In the registries.conf file, different registries are in use by default. Don’t worry too much about the exact names of these registries, as they tend to change between different versions of RHEL. Among the registries, you’ll find Red Hat registries that give access to licensed software. You need to enter your Red Hat credentials to access these registries. Also, the Docker registry is used. Docker hosts the biggest container registry currently available, containing more than 10,000,000 images, and adding the Docker registry as the last registry will increase your chances of finding the desired container image.

In the registries.conf file, all container registries are listed as unqualified-search-registries. This is because Red Hat recommends using the complete image name (including the registry you want to pull it from) to avoid ambiguity. So instead of using podman run -d nginx, use podman run -d docker.io/library/nginx.
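For reference, a user-specific override might look like the following sketch (the registry list is an example; the actual defaults differ between RHEL releases):

```toml
# ~/.config/containers/registries.conf — overrides /etc/containers/registries.conf
# Registries listed here are searched when an unqualified image name is used.
unqualified-search-registries = ["registry.access.redhat.com", "docker.io"]
```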

To see which registries are currently used, you can use the podman info command. Apart from information about the registries that are used, this command also shows other useful information about your current environment. Example 26-4 shows what the output of this command might look like.

Example 26-4 Using podman info to Find Which Registries Are Used

[student@server1 ~]$ podman info | grep -A 10 registries
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - quay.io
  - docker.io
store:
  configFile: /home/student/.config/containers/storage.conf
  containerStore:
    number: 0
    ...
version:
  OsArch: linux/amd64
  Version: 4.0.2

NOTE

Much of what is happening in containerized environments is standardized in the Open Containers Initiative (OCI). All companies involved in containers are currently making a huge effort to make their containers OCI compliant. Because of this, you can use Docker images without any issues in a podman environment.

Finding Images

To find available images, you can use the podman search command. If you need to access images from one of the subscriber-only Red Hat registries as well, you need to log in to the registry first because the Red Hat registries are accessible only to users who have a valid Red Hat account. Use podman login to enter your current Red Hat username and password, which will give you access to these registries. To log in to a registry, you have to specify the name of the registry you want to log in to. For instance, use podman login registry.access.redhat.com to log in to that specific registry.

After enabling access to the Red Hat registries that you want to use, use podman search to find the images you need. Example 26-5 shows the partial result of the podman search mariadb command output.

Example 26-5 podman search mariadb Partial Result

INDEX      NAME                               DESCRIPTION                              STARS  OFFICIAL  AUTOMATED
docker.io  docker.io/panubo/mariadb-galera    MariaDB Galera Cluster                   23               [OK]
docker.io  docker.io/demyx/mariadb            Non-root Docker image running            0
                                              Alpine Linux a...
docker.io  docker.io/toughiq/mariadb-cluster  Dockerized Automated MariaDB             41               [OK]
                                              Galera Cluster ...
docker.io  docker.io/bianjp/mariadb-alpine    Lightweight MariaDB docker               15               [OK]
                                              image with Alpine...
docker.io  docker.io/clearlinux/mariadb       MariaDB relational database              2                [OK]
                                              management syste...
docker.io  docker.io/jonbaldie/mariadb        Fast, simple, and lightweight            2                [OK]
                                              MariaDB Docker...
docker.io  docker.io/tiredofit/mariadb        Docker MariaDB server w/                 1                [OK]
                                              S6 Overlay, Zabbix ...

In the output of podman search, different fields are used to describe the images that were found. Table 26-3 gives an overview.

Table 26-3 podman search Output Fields

Field

Use

INDEX

The name of the registry where this image was found.

NAME

The full name of the image.

DESCRIPTION

A more verbose description. Use --no-trunc to see the complete description.

STARS

A community appreciation, expressed in stars.

OFFICIAL

Indicates whether this image was provided by the software vendor.

AUTOMATED

Indicates whether this image is automatically built.

You might notice that in some cases the podman search command gives a lot of results. To filter the results down a bit, you can use the --filter option. Use podman search --filter is-official=true alpine to see only alpine images that are created by the application vendor, for instance, or podman search --filter stars=5 alpine to show only alpine images that have been appreciated with at least five stars. Alpine is commonly used as a base image because it is really small.

Tip

While you’re looking for images, search for the UBI images in the Red Hat registries. UBI stands for Universal Base Image, and it’s the image that is used as the foundation for all of the Red Hat products.

Inspecting Images

Because images are provided by the open source community, it is important to get more information before you start using them. This allows you to investigate what exactly the image is doing. The best way to do so is to use the skopeo inspect command. The advantage of using skopeo to inspect images is that the inspection happens directly from the registry without any need to first pull the image.

Alternatively, you can inspect local images. To do so, use podman inspect. This command works only on images that are available on your local system but gives more detailed output than skopeo inspect. Use podman images for a list of images that are locally available, and use podman pull to pull an image first. Example 26-6 shows a partial result of the podman inspect command.

Example 26-6 Using podman inspect to Verify Image Contents

[student@podman ~]$ podman inspect busybox
[
    {
        "Id":
"6858809bf669cc5da7cb6af83d0fae838284d12e1be0182f92f6bd96559873e3",
        "Digest": "sha256:d366a4665ab44f0648d7a00ae3fae139d55e32f9712c
  67accd604bb55df9d05a",
        "RepoTags": [
            "docker.io/library/busybox:latest"
        ],
        "RepoDigests": [
"docker.io/library/busybox@sha256:2ca5e69e244d2da7368f7088ea3ad0653c3ce
  7aaccd0b8823d11b0d5de956002",
"docker.io/library/busybox@sha256:d366a4665ab44f0648d7a00ae3fae139d55e3
  2f9712c67accd604bb55df9d05a"
        ],
       "Parent": "",
        "Comment": "",
        "Created": "2020-09-09T01:38:02.334927351Z",
        "Config": {
            "Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
            ],
            "Cmd": [
                "sh"
            ]
        },
        "Version": "18.09.7",
        "Author": "",
        "Architecture": "amd64",
        "Os": "linux",
        "Size": 1454611,
        "VirtualSize": 1454611,
        "GraphDriver": {
            "Name": "overlay",
            "Data": {
                "UpperDir": "/home/student/.
local/share/containers/storage/overlay/
be8b8b42328a15af9dd6af4cba85821aad30adde28d249d1ea03c74690530d1c/diff",
               "WorkDir": "/home/student/.
local/share/containers/storage/overlay/
be8b8b42328a15af9dd6af4cba85821aad30adde28d249d1ea03c74690530d1c/work"
            }
        },
        "RootFS": {
            "Type": "layers",
            "Layers": [
"sha256:be8b8b42328a15af9dd6af4cba85821aad30adde28d249d1ea03c74690530
  d1c"
            ]
        },
        "Labels": null,
        "Annotations": {},
        "ManifestType": "application/vnd.docker.distribution.manifest.
  v2+json",
        "User": "",
        "History": [
            {
                "created": "2020-09-09T01:38:02.18459328Z",
                "created_by": "/bin/sh -c #(nop) ADD file:72be520892
  d0a903df801c6425de761264d7c1bc7984d5cf285d778147826586 in / "
            },
            {
                "created": "2020-09-09T01:38:02.334927351Z",
                "created_by": "/bin/sh -c #(nop)  CMD ["sh"]",
                "empty_layer": true
            }
        ]
    }
]

When you use podman inspect, the most interesting information that you should be looking for is the command (Cmd). This is the command that the image runs by default when it is started as a container. Remember: a container is just a fancy way to start an application, and the Cmd line will tell you which application that is.

Tip

To run a container, you can use podman run. This command first pulls the image, stores it on your local system, and then runs the container. You can also use podman pull first to store the image without running it, and after pulling it, you can still run it. This second method is more secure because it allows you to inspect the contents of the image before running it.

Performing Image Housekeeping

For every container that you have ever started, an image is downloaded and stored locally. To prevent your system from filling up, you might want to do a bit of housekeeping every now and then. To remove container images, use the podman rmi command. Notice that this command works only if the container is no longer in use. If podman rmi gives an error message, ensure that the container has been stopped and removed first. Exercise 26-2 shows how to manage your container images.

Exercise 26-2 Managing Container Images

  1. Type podman info | grep -A 10 registries to check which registries are currently used.

  2. Use podman login registry.access.redhat.com and enter your Red Hat account credentials to ensure full access to the Red Hat registries.

  3. Use podman search registry.access.redhat.com/ubi to search only in registry.access.redhat.com for all the UBI images.

  4. Use skopeo inspect docker://registry.access.redhat.com/ubi9 to show information about the container image. Do you see which command is started by default by this image? (Notice that skopeo inspect does not reveal this information.)

  5. Now use podman pull registry.access.redhat.com/ubi9 to pull the image.

  6. Type podman images to verify the image is now locally available.

  7. Type podman inspect registry.access.redhat.com/ubi9 and look for the command that is started by default by this image. You used skopeo inspect in step 4, whereas now you’re using podman inspect, which shows more details.

Building Images from a Containerfile

Container images provide an easy way to distribute applications. While using containers, application developers no longer have to provide an installer file that runs on all common operating systems. They just have to build a container image, which will run on any OCI-compliant container engine, no matter if that is Docker or Podman.

To build container images, generic system images are commonly used, to which specific applications are added. To make building images easy, Docker introduced the Dockerfile, which in Podman is standardized as the Containerfile. In a Containerfile, different instructions can be used to build custom images, using the podman build command. In Example 26-7 you’ll find a simple example of Containerfile contents.

Example 26-7 Example Containerfile Contents

FROM registry.access.redhat.com/ubi8/ubi:latest
RUN dnf install -y nmap
CMD ["/usr/sbin/nmap", "-sn", "192.168.29.0/24"]

In a Containerfile you may have different lines defining exactly what needs to be done. Table 26-4 outlines the common Containerfile directives.

Table 26-4 Common Containerfile Directives

Directive

Use

FROM

Identifies the base image to use

RUN

Specifies commands to run in the base image while building the custom image

CMD

Identifies the default command that should be started by the custom image

Tip

On the RHCSA exam, you’ll only need to work with an existing Containerfile; you won’t have to create one yourself.

To build a custom container image based on a Containerfile, you use the podman build -t imagename:tag . command. In this command the dot at the end refers to the current directory. Replace it with the name of any other directory that contains the Containerfile you want to use. The -t option is used to specify an image tag. The image tag consists of two parts: the name of the image, followed by a specific tag. This specific tag may be used to provide version information. To build a custom image based on the Containerfile in Example 26-7, you could, for instance, use the command podman build -t mymap:1.0 . (note the trailing dot). After building the custom image, use the podman images command to verify that it has been added. In Exercise 26-3 you can practice working with a Containerfile.

Exercise 26-3 Building Custom Images with a Containerfile

  1. Use mkdir exercise263; cd exercise263 to ensure that your Containerfile is going to be created in a custom directory.

  2. Use an editor to create a Containerfile with the following contents:

    FROM docker.io/library/alpine
    RUN apk add nmap
    CMD ["nmap", "-sn", "172.16.0.0/24"]
  3. Type podman build -t alpmap:1.0 . (make sure to include the dot at the end, which refers to the current directory).

  4. Verify the image builds successfully. Once completed, use podman images to verify the image has been added.

  5. Use podman run alpmap:1.0 to run the image you’ve just created. If the nmap command gets stuck, use Ctrl-C to interrupt it.

In Exercise 26-3 you’ve created your own custom image based on the alpine image. Alpine is a commonly used cloud image, popular because it is really small. Even if you’re running your containerized applications on top of Red Hat, you don’t have to use the UBI image, which is provided by Red Hat as a universal base image. If you want your image to be small and efficient, Alpine is a good alternative.

Managing Containers

While working with containers, you need to be aware of a few operational management tasks:

  • Managing container status

  • Running commands in a container

  • Managing container ports

  • Managing container environment variables

In this section you learn how to perform these tasks.

Managing Container Status

You have already learned how podman ps shows a list of currently running containers and how you can extend this list to show containers that have been stopped by using podman ps -a. But let’s talk about what brings a container to a stopped status.

To understand containers, you need to understand that they are just a fancy way to run an application. Containers run applications, including all of the application dependencies, but in the end, the purpose of a container is to run an application. In some cases, the application is a process that is meant to be running all the time. In other cases, the application is just a shell, or another command that runs, produces its result, and then exits, as you have seen in Exercise 26-3. Containers in the latter category are started, run the command, and then just stop because the command has been executed successfully, and there is nothing wrong with that.

Before we continue, let me explain where the potential confusion about the stopped status of containers comes from. Sometimes, a container is considered to be something like a virtual machine. If you start an Ubuntu virtual machine, for instance, it starts and will keep on running until somebody comes and decides to stop it. Containers are not virtual machines. Every container image is configured with a default command, and as just discussed, the container runs the default command and then exits, as it’s done after running the command. Some containers, however, run services, which keep running all the time.

For those containers that do keep on running after starting them, you can use a few commands to stop and start them:

An icon reads, Key Topic.
  • podman stop sends a SIGTERM signal to the container. If that doesn’t give any result after 10 seconds, a SIGKILL signal is sent.

  • podman kill immediately sends a SIGKILL signal. In most cases, that’s not necessary because podman stop will send a SIGKILL after 10 seconds anyway.

  • podman restart restarts a container that is currently running.

Also, don’t forget that after stopping a container, it is still available on your local system. That availability is convenient because it allows you to easily restart a container and maintain access to modifications that have previously been applied and stored in the writable layer that has been added to the container image while running the container. If, however, you’ve been starting and stopping containers a lot and don’t need to keep the container files around, use podman rm to remove those container files. Alternatively, use podman run --rm to run your container. This command ensures that after it is run, the container files are automatically cleaned up.
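The stop behavior just described (SIGTERM first, SIGKILL after a grace period) can be emulated in plain shell, without podman. The graceful_stop helper below is purely illustrative of that sequence, not podman's actual implementation; podman's default grace period is 10 seconds, while here it is a parameter.

```shell
#!/bin/sh
# Sketch of `podman stop` semantics (illustrative only; podman does this
# internally): send SIGTERM, poll during a grace period, then SIGKILL.
graceful_stop() {
  pid="$1"; grace="$2"            # grace period in seconds
  kill -TERM "$pid" 2>/dev/null   # polite request to terminate
  i=0
  while [ "$i" -lt "$grace" ]; do
    # kill -0 sends no signal; it only checks whether the process exists
    kill -0 "$pid" 2>/dev/null || { echo "terminated"; return 0; }
    sleep 1
    i=$((i + 1))
  done
  kill -KILL "$pid" 2>/dev/null   # SIGKILL cannot be caught or ignored
  echo "killed"
}
```

A process that ignores SIGTERM (like a misbehaving container entrypoint) survives the grace period and ends up killed, which is exactly why podman stop keeps SIGKILL as a fallback.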

Running Commands in a Container

When a container starts, it executes the container entrypoint command. This is the default command that is specified to be started in the container image. In some cases, you may have to run other commands inside the container as well. To do so, you can use the podman exec command. This allows you to run a second command inside a container that has already been started, provided that this other command is available in the namespaced container file system (which often is a small file system that contains only essential utilities).

If a command is not available in a container image, you can install it, using the image operating system package installer. However, this doesn’t make sense in many cases. Installing additional commands will only make the container image significantly bigger and, for that reason, slower. So, you’re better off trying to use default facilities that are provided in the container image.

While running a command, you can run it as a one-shot command. In that case, the command output is written to STDOUT. You can also use podman exec in interactive TTY mode to run several commands inside the container.

For example, you can use podman exec mycontainer uname -r to run the command and write its output to STDOUT, or podman exec -it mycontainer /bin/bash to open a Bash shell in the container and run several commands from there. In Exercise 26-4 you practice running commands in a container.

Exercise 26-4 Running Commands in a Container

  1. Use podman run -d --rm --name=web2 docker.io/library/nginx

  2. Type podman ps to verify that the web2 container is available.

  3. Use podman exec -it web2 /bin/bash to open a Bash shell in the container.

  4. Within the container shell, type ps aux. You will see that there is no ps command in the nginx container; the reason is that many containers come without even fundamental standard tools.

  5. Type ls /proc, and notice that a few directories have a numeric name. These are the PID directories, and if you don’t have access to the ps command, this is how you can find process information.

  6. Each /proc/<PID> directory has a file with the name cmdline. Type cat /proc/1/cmdline to find that the nginx process has been started as PID 1 within the container.

  7. Type exit to close the Bash shell you just opened on the container.

  8. Type podman ps to confirm that the web2 container is still running. It should be running because the exit command you used in the preceding step only exited the Bash shell, not the primary command running inside the container.

  9. On the container host, type uname -r to confirm the current kernel version. The el9 part of the kernel name indicates this is an Enterprise Linux kernel, which you’ll see only on RHEL, CentOS, and related distributions.

  10. Type podman run -it docker.io/library/ubuntu. This will run the latest Ubuntu image from the Docker registry and give access to a shell. Because the image has the shell set as the entrypoint command (the default command it should start), you don’t need to specify the name of the shell as well.

  11. Type cat /etc/os-release to confirm this really is an Ubuntu container.

  12. Type uname -r to see the Enterprise Linux kernel that you saw previously in step 9. The reason is that containers really are all running on the same kernel, no matter which Linux distribution you’re running in the container.

  13. Type exit to close the interactive TTY. Does that command shut down the container?

  14. Use podman ps to verify the Ubuntu container is no longer active. By using exit in step 13, you exited the entrypoint command running in the container, so there was nothing left for the container to run.

Managing Container Ports

Rootless containers in Podman run without a network address because a rootless container has insufficient privileges to allocate a network address. Root containers do get a dedicated IP address, but that’s an IP address on an isolated network that cannot be accessed directly from external networks. In either case, to make the service running in the container accessible from the outside, you need to configure port forwarding, where a port on the container host is used to access a port in the container application. Notice that if you are running a rootless container, you can map only nonprivileged host ports: ports below 1024 can be bound by the root user only.
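The nonprivileged-port rule can be expressed as a tiny check. The helper below is purely illustrative (podman performs this validation itself) and assumes the default privileged-port boundary of 1024; on modern kernels this boundary is tunable through the net.ipv4.ip_unprivileged_port_start sysctl.

```shell
#!/bin/sh
# Illustrative check, not podman's own code: a host port below 1024 is
# privileged and cannot be bound by a rootless (non-root) container.
host_port_ok() {
  uid="$1" port="$2"
  if [ "$port" -lt 1024 ] && [ "$uid" -ne 0 ]; then
    echo "privileged"
    return 1
  fi
  echo "ok"
}
```

For example, host_port_ok "$(id -u)" 8080 succeeds for any user, while host_port_ok 1000 80 flags the port as privileged for a rootless user.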

Tip

If you do want to run a container that has an IP address and can bind to a privileged port, you need to run a root container. Use sudo podman run ... to run root containers. If you run a root container, you also need to use sudo podman ps to verify that it is running. The root container is running in the root user namespace and therefore is not accessible or visible by ordinary users. The opposite is also true: if you type sudo podman ps, you’ll only see root containers, not the rootless containers that have been started by users.

To run a container with port forwarding, you add the -p option to the podman run command. Use podman run --name nginxport -d -p 8080:80 nginx to run the nginx image as a container and make the nginx process accessible on host port 8080, which will be forwarded to the standard http port 80 on which nginx is offering its services. Don’t forget to use sudo firewall-cmd --add-port 8080/tcp --permanent; sudo firewall-cmd --reload to open the port in the firewall as well afterward! After exposing a web server container on a host port, you can use curl localhost:8080 to verify access. Exercise 26-5 guides you through this procedure.

Exercise 26-5 Managing Container Port Mappings

  1. Type podman run --name nginxport -d -p 8080:80 nginx to run an nginx container and expose it on host port 8080.

  2. Type podman ps to verify that the container has been started successfully with port forwarding enabled.

  3. Use sudo firewall-cmd --add-port 8080/tcp --permanent; sudo firewall-cmd --reload to open this port in the firewall on the host operating system.

  4. Type curl localhost:8080 to verify that you get access to the default nginx welcome page.

Managing Container Environment Variables

Many containers can be started without providing any additional information. Some containers need further specification of how to do their work. This information is typically passed using environment variables. A well-known example where you have to pass environment variables to be able to run the container successfully is mariadb, the database service that needs at least to know the password for the root user that it is going to use.

If a container needs environment variables to do its work, there are a few ways to figure this out:

An icon reads, Key Topic.
  • Just run the container without any environment variables. It will immediately stop, and the main application will generate an error message. Use podman logs on your container to read the log for information on what went wrong.

  • Use podman inspect to see whether there is a usage line in the container image that tells you how to run the container. This may not always work, as it depends on whether or not the image creator has included a usage line in the container image.

After you’ve found out how to run the container, run it, specifying the environment variables with the -e option. To run a mariadb instance, for example, you can use podman run -d -e MYSQL_ROOT_PASSWORD=password -e MYSQL_USER=anna -e MYSQL_PASSWORD=password -e MYSQL_DATABASE=mydb -p 3306:3306 mariadb. Exercise 26-6 guides you through the procedure of running a container using environment variables.

Exercise 26-6 Managing Container Environment Variables

  1. Use podman run docker.io/library/mariadb. It will fail (and you will see an error message on the STDOUT).

  2. Use podman ps -a to see the automatically generated name for the failing mariadb container.

  3. Use podman logs container_name to see the Entrypoint application error log. Make sure to replace container_name with the name you found in step 2.

  4. Use podman inspect mariadb and look for a usage line. You won’t see any.

  5. Use podman search registry.redhat.io/rhel9/mariadb to find the exact version number of the mariadb image in the RHEL registry.

  6. Use podman login registry.redhat.io and provide valid credentials to log in.

  7. Use podman run registry.redhat.io/rhel9/mariadb-nnn (make sure to replace nnn with the version number you found in step 5). It will also fail but will show much more usage detail on the STDOUT. The reason is that the Red Hat mariadb image is not the same as the image that was fetched from the Docker registry in the first step of this procedure.

  8. Use podman inspect registry.redhat.io/rhel9/mariadb-nnn and in the command output search for the usage line. It will tell you exactly how to run the mariadb image.

  9. According to the instructions that you found here, type podman run -d -e MYSQL_USER=bob -e MYSQL_PASSWORD=password -e MYSQL_DATABASE=mydb -e MYSQL_ROOT_PASSWORD=password -p 3306:3306 registry.redhat.io/rhel9/mariadb-105. (By the time you read this, the version number may be different, so make sure to check the version number of the image if you’re experiencing a failure in running this command.)

  10. Use podman ps. You will see the mariadb container has now been started successfully.
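The variable handling from this procedure can be made concrete with a small sketch that assembles the podman run command line and refuses to continue when the root password is missing. The function name mariadb_run_cmd is made up for illustration; the MYSQL_* variable names are the ones used by the mariadb image as shown above.

```shell
#!/bin/sh
# Illustrative sketch (hypothetical helper, not part of podman): build a
# `podman run` command line for mariadb, requiring the root password that
# the image refuses to start without.
mariadb_run_cmd() {
  root_pw="$1" user="$2" user_pw="$3" db="$4"
  if [ -z "$root_pw" ]; then
    echo "error: MYSQL_ROOT_PASSWORD is required" >&2
    return 1
  fi
  printf 'podman run -d -e MYSQL_ROOT_PASSWORD=%s -e MYSQL_USER=%s -e MYSQL_PASSWORD=%s -e MYSQL_DATABASE=%s -p 3306:3306 mariadb\n' \
    "$root_pw" "$user" "$user_pw" "$db"
}
```

Calling it without a root password reproduces, in miniature, the failure you saw in step 1 of the exercise: the required variable is simply missing.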

Managing Container Storage

When a container is started from an image, a writable layer is added to the container. The writable layer is ephemeral: modifications made to the container image are written to the writable layer, but when the container is removed, all modifications that have been applied in the container are removed also. So if you run an application in a container and want to make sure that modifications are stored persistently, you need to add persistent storage.

To add persistent storage to Podman containers, you bind-mount a directory on the host operating system into the container. A bind-mount is a specific type of mount, where a directory is mounted instead of a block device. Doing so ensures that the contents of the directory on the host operating system are accessible within the container. So, when files are written within the container to the bind-mounted directory, they are committed to the host operating system as well, which ensures that data will be available beyond the lifetime of the container. For more advanced storage, you should use an orchestration solution. When you use OpenShift or Kubernetes, it’s easy to expose different types of cloud and datacenter storage to the containers.

To access a host directory from a container, it needs to be prepared:

An icon reads, Key Topic.
  • The host directory must be writable for the user account that runs the container.

  • The appropriate SELinux context label must be set to container_file_t.

The container_file_t context label can be set manually by a user who has administrator privileges, using semanage fcontext -a -t container_file_t "/hostdir(/.*)?", followed by restorecon -Rv /hostdir. It can also be set automatically, but that works only if the user who runs the container is the owner of the directory. It is not enough if the user has write permissions on the directory! For an easy way to apply the right SELinux context, you should focus on the automatic solution.

To mount the volume, you use the -v host_dir:container_dir option with podman run. If the user running the container is the owner of the host directory, or the container is a root container, you can append :Z (as in -v host_dir:container_dir:Z) to set the SELinux context automatically. So, to make sure that a mariadb database is started in a way that database files are stored on the host operating system, you use podman run -d --name mydb -v /home/$(id -un)/dbfiles:/var/lib/mysql:Z -e MYSQL_USER=user -e MYSQL_PASSWORD=password -e MYSQL_DATABASE=mydatabase registry.redhat.io/rhel9/mariadb-105. In Exercise 26-7 you can practice running containers with storage attached.

Exercise 26-7 Attaching Storage to Containers

  1. Use sudo mkdir /opt/dbfiles; sudo chmod o+w /opt/dbfiles to create a directory on the host operating system.

  2. Use podman login registry.redhat.io and provide valid credentials to log in.

  3. Use podman run -d --name mydbase -v /opt/dbfiles:/var/lib/mysql:Z -e MYSQL_USER=user -e MYSQL_PASSWORD=password -e MYSQL_DATABASE=mydbase registry.redhat.io/rhel9/mariadb-105. The output of this command shows “operation not permitted.”

  4. Type podman ps -a. You’ll see that starting the container has failed.

  5. Use podman logs mydbase to investigate why it has failed. Because the error was not related to the container application, the logs don’t show you anything; the problem is related to Linux permissions.

  6. Remove the failed container by using podman rm mydbase.

  7. Type sudo chown $(id -un) /opt/dbfiles.

  8. Run the command shown in step 3 again. It will now be successful.

  9. Use ls -ldZ /opt/dbfiles. You’ll see that the container_file_t SELinux context has automatically been set.
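The bind-mount syntax used in this exercise can be sketched as a small helper that appends the :Z relabel flag and rejects relative host paths (podman requires an absolute path on the host side of a bind mount). volume_arg is a hypothetical name, used only for this illustration.

```shell
#!/bin/sh
# Illustrative helper (not podman's own validation): build the -v argument
# with the :Z SELinux relabel flag, rejecting relative host paths.
volume_arg() {
  host="$1" ctr="$2"
  case "$host" in
    /*) printf '%s\n' "-v ${host}:${ctr}:Z" ;;
    *)  echo "error: host path must be absolute" >&2; return 1 ;;
  esac
}
```

With the directories from Exercise 26-7, volume_arg /opt/dbfiles /var/lib/mysql produces the exact -v argument used in step 3.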

To understand what is really happening while running rootless containers, it makes sense to investigate a bit more. Rootless containers are launched in a namespace. For each user, a namespace is created in which all containers are started. The namespace provides isolation, which allows the container inside the namespace to run as the root user, where this root-level access does not exist outside of the namespace. To make this work, inside the container namespace different UIDs are used than those used outside of the namespace.

To ensure that access is working correctly, UIDs are mapped between the namespace and the host OS. This UID mapping allows any UID inside the container namespace to be mapped to a valid UID on the container host. The podman unshare command can be used to run commands inside the container namespace, which in some cases is necessary to make sure the container is started the right way. To start with, as a non-root user, type podman unshare cat /proc/self/uid_map. This shows that the root user (UID 0) maps to the current user ID, which in Example 26-8 is shown as UID 1000.

Example 26-8 Using podman unshare to Show UID Mappings

[student@server1 ~]$ podman unshare cat /proc/self/uid_map
          0        1000          1
          1      100000      65536
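The mapping in Example 26-8 can be applied by hand: each uid_map line lists the first UID inside the namespace, the first corresponding UID on the host, and the length of the range. The following sketch (plain text processing; podman itself is not needed) resolves a container UID to its host UID:

```shell
#!/bin/sh
# Sketch: translate a UID inside a rootless container namespace to the
# matching host UID, reading /proc/self/uid_map-style input on stdin.
# Each line: <first UID inside> <first UID on host> <range length>.
map_to_host_uid() {
  awk -v uid="$1" '
    uid >= $1 && uid < $1 + $3 { print $2 + (uid - $1); found = 1; exit }
    END { if (!found) { print "unmapped"; exit 1 } }'
}
```

With the mapping from Example 26-8, container UID 0 resolves to host UID 1000, and UID 27 (the mysql user from Exercise 26-8) resolves to 100026, which is what ls -ld shows on the host after podman unshare chown.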

If you want to set appropriate directory ownership on bind-mounted directories for rootless containers, additional work is required:

Step 1. Find the UID of the user that runs the container main application. In many cases podman inspect imagename will show this.

Step 2. Use podman unshare chown nn:nn directoryname to set the container UID as the owner of the directory on the host. Notice that this directory must be in the rootless user home directory, as otherwise it wouldn’t be a part of the user namespace.

Step 3. Use podman unshare cat /proc/self/uid_map to verify the user ID mapping.

Step 4. Verify that the mapped user is owner on the host by using ls -ld ~/directoryname.

In Exercise 26-8 you’ll practice bind-mounting in rootless containers.

Exercise 26-8 Bind Mounting in Rootless Containers

  1. Make sure you’re in a non-root shell.

  2. Use podman search mariadb | grep quay. The images in quay.io are optimized for use in Red Hat environments, and most of them are rootless by nature.

  3. Type podman run -d --name mydb -e MYSQL_ROOT_PASSWORD=password quay.io/centos7/mariadb-103-centos7

  4. Use podman exec mydb grep mysql /etc/passwd to verify the UID of the mysql user, which is set to 27.

  5. Use podman stop mydb; podman rm mydb, as you’ll now have to set up the storage environment with the right permissions before starting the container again.

  6. Type mkdir ~/mydb

  7. Use podman unshare chown 27:27 mydb to set appropriate permissions inside the user namespace.

  8. Check the UID mapping by typing podman unshare cat /proc/self/uid_map

  9. Use ls -ld mydb to verify the directory owner UID that is used in the host OS. At this point the UIDs are set correctly.

  10. Type podman run -d --name mydb -e MYSQL_ROOT_PASSWORD=password -v /home/student/mydb:/var/lib/mysql:Z quay.io/centos7/mariadb-103-centos7 to start the rootless mariadb container.

  11. Use ls -Z mydb to verify the database files have been created successfully.

Running Containers as Systemd Services

As containers are becoming increasingly common as the way to start services, a way is needed to start them automatically. When you’re using Kubernetes or OpenShift to orchestrate container usage, this is easy: the orchestration platform ensures that the container is started automatically, unless you decide this is not desired behavior. On a standalone platform where rootless containers are used, systemd is needed to autostart containers.

In systemd, services are easily started and enabled with root permissions using commands like systemctl enable --now myservice.service. If no root permissions are available, you need to use systemctl --user. The --user option allows users to run the common systemd commands, but in user space only. This works for any service that can run without root permissions; for instance, use systemctl --user start myservice.service to start the myservice service.

By default, when systemctl --user is used, services can be automatically started only when a user session is started. To define an exception to that, you can use loginctl, the systemd session manager, to enable linger for a specific user account. If you use loginctl enable-linger myuser, you enable this for the user myuser. When linger is enabled, systemd services that are enabled for that specific user will be started on system start, not only when the user is logging in.

The next step is to generate a systemd unit file to start containers. Obviously, you can write these files yourself, but a much easier way is to use podman generate systemd --name mycontainer --files to do so. Note that this unit file must be generated in the ~/.config/systemd/user/ directory, so you have to create that directory and change to it before running the podman generate command.

The podman generate systemd command assumes that a container with the name mycontainer has already been created and will result in a unit file that can be enabled using systemctl --user enable container-mycontainer.service. In Example 26-9 you can see what such a unit file looks like.

Example 26-9 Podman Autogenerated Container Service File

[student@server1 ~]$ cat container-wbe2.service
# container-wbe2.service
# autogenerated by Podman 4.0.2
# Mon Oct 31 10:35:47 CET 2022

[Unit]
Description=Podman container-wbe2.service
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=/run/user/1000/containers

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStart=/usr/bin/podman start wbe2
ExecStop=/usr/bin/podman stop -t 10 wbe2
ExecStopPost=/usr/bin/podman stop -t 10 wbe2
PIDFile=/run/user/1000/containers/overlay-containers/2a7fe7b225bdbbfd3b
  3deb6488b9c57400530b2e77310fd3294b6d08b8dc630b/userdata/conmon.pid
Type=forking

[Install]
WantedBy=default.target

In Exercise 26-9 you can practice working with Podman autogenerated systemd unit files.

Exercise 26-9 Running Containers as Systemd Services

  1. Use sudo useradd linda to create user linda.

  2. Use sudo passwd linda to set the password for user linda.

  3. Type sudo loginctl enable-linger linda to enable the linger feature for user linda.

  4. Use ssh linda@localhost to log in. The procedure doesn’t work from a su or sudo environment.

  5. Type mkdir -p ~/.config/systemd/user; cd ~/.config/systemd/user to create and activate the directory where the systemd user files will be created.

  6. Use podman run -d --name mynginx -p 8081:80 nginx to start an nginx container.

  7. Type podman ps to verify the nginx container has been started.

  8. Create the systemd user files using podman generate systemd --name mynginx --files.

  9. A systemd unit file with the name container-mynginx.service is created.

  10. Type systemctl --user daemon-reload to ensure that systemd picks up the changes.

  11. Use systemctl --user enable container-mynginx.service to enable the systemd user service. (Do not try to start it, because it has already been started!)

  12. Type systemctl --user status container-mynginx.service to verify the service has the state of enabled.

  13. Reboot your server, and after rebooting, open a shell as your regular non-root user.

  14. Type ps faux | grep -A3 -B3 mynginx to show that the mynginx container has successfully been started and is running as user linda.

Summary

In this chapter you learned about containers. First, you learned how containers really come forth from the Linux operating system and then learned all that is needed to run containers. This includes managing images, managing containers and container storage, as well as running containers as systemd services.

Exam Preparation Tasks

As mentioned in the section “How to Use This Book” in the Introduction, you have several choices for exam preparation: the end-of-chapter labs; the memory tables in Appendix C; Chapter 27, “Final Preparation”; and the practice exams.

Review All Key Topics

Review the most important topics in the chapter, noted with the Key Topic icon in the margin of the page. Table 26-5 lists a reference for these key topics and the page numbers on which each is found.

An icon reads, Key Topic.

Table 26-5 Key Topics for Chapter 26

Key Topic Element

Description

Page

List

Three main tools to manage containers

542

List

Essential Linux kernel features for containers

543

List

Commands to manage container state

559

List

Finding information about variables to use

562

List

Preparing host storage

563

Complete Tables and Lists from Memory

There are no memory tables or lists in this chapter.

Define Key Terms

Define the following key terms from this chapter and check your answers in the glossary:

container

container engine

registry

CRI-o

namespace

Docker

Kubernetes

OpenShift

image

orchestration

linger

Review Questions

The questions that follow are meant to help you test your knowledge of concepts and terminology and the breadth of your knowledge. You can find the answers to these questions in Appendix A.

1. What is the name of the tool that Red Hat includes with RHEL 9 to work with container images without having to download them from the registry first?

2. What are the three Linux features that are needed in container environments?

3. What is the name of the container engine on RHEL 9?

4. Which file defines the registries that are currently used?

5. After you start a container, using podman run ubuntu, executing podman ps doesn’t show it as running. What is happening?

6. What do you need to do to start a rootless container that bind-mounts a directory in the home directory of the current user account?

7. How can you find the default command that a container image will use when started?

8. How do you start an Ubuntu-based container that prints the contents of /etc/os-release and then exits?

9. What do you need to do to run a podman nginx container in such a way that host port 82 forwards traffic to container port 80?

10. Which command do you use to generate a systemd unit file for the container with the name nginx?

End-of-Chapter Lab

At this point you should be familiar with running containers in a RHEL environment. You can now complete the end-of-chapter lab to reinforce these newly acquired skills.

Lab 26.1

  1. Ensure that you have logged in to get access to the Red Hat container registries.

  2. Download the mariadb container image to the local computer.

  3. Start the mariadb container, meeting the following requirements:

    • The container must be accessible at port 3206.

    • The MYSQL_ROOT_PASSWORD must be set to “password”

    • A database with the name mydb is created.

    • A bind-mounted directory is accessible: the directory /opt/mariadb on the host must be mapped to /var/lib/mysql in the container.

  4. Configure systemd to automatically start the container as a user systemd unit upon (re)start of the computer.
