Chapter 4: Managing Running Containers

In the previous chapter, we learned how to set up the environment to run containers with Podman, covering binary installation for the major distributions, system configuration files, and a first example container run to verify that our setup was correct. This chapter will offer a more detailed overview of container execution, how to manage and inspect running containers, and how to group containers in pods. This chapter is important for gaining the right knowledge and expertise to start our experience as a system administrator for container technologies.

In this chapter, we're going to cover the following main topics:

  • Managing container images
  • Operations with running containers
  • Inspecting container information
  • Capturing logs from containers
  • Executing processes in a running container
  • Running containers in pods

Technical requirements

Before proceeding with this chapter and its exercises, a machine with a working Podman instance is required. As stated in Chapter 3, Running the First Container, all the examples in the book are executed on a Fedora 34 system, but can be reproduced on an operating system (OS) of your choice.

Finally, a good understanding of the topics covered in the previous chapters is useful to easily grasp concepts regarding Open Container Initiative (OCI) images and container execution.

Managing container images

In this section, we will see how to find and pull (download) an image in the local system, as well as inspect its contents. When a container is created and run for the first time, Podman takes care of pulling the related image automatically. However, being able to pull and inspect images in advance gives some valuable advantages, the first being that a container executes faster when images are already available in the machine's local store.

As we stated in the previous chapters, containers are a way to isolate processes in a sandboxed environment with separate namespaces and resource allocation.

The filesystem mounted in the container is provided by the OCI image described in Chapter 2, Comparing Podman and Docker.

OCI images are stored and distributed by specialized services called container registries. A container registry stores images and metadata and exposes simple REpresentational State Transfer (REST) application programming interface (API) services to enable users to push and pull images.

There are essentially two types of registries: public and private. A public registry is accessible as a public service (with or without authentication). The main public registries such as docker.io, gcr.io, or quay.io are also used as the image repositories of larger open source projects.

Private registries are deployed and managed inside an organization and can be more focused on security and content filtering. The main container registry projects nowadays are graduated under the Cloud Native Computing Foundation (CNCF) (https://landscape.cncf.io/card-mode?category=container-registry&grouping=category) and offer advanced enterprise features to manage multitenancy, authentication, and role-based access control (RBAC), as well as image vulnerability scanning and image signing.

In Chapter 9, Pushing Images to a Container Registry, we will provide more details and examples of interaction with container registries.

Most public and private registries expose the Docker Registry HTTP API V2 (https://docs.docker.com/registry/spec/api/). Being a HyperText Transfer Protocol (HTTP)-based REST API, it allows users to interact with the registry with a simple curl command or to design their own custom clients.
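As a quick, hedged illustration, the tag list endpoint of the V2 API can be queried directly with curl. The registry host and repository name below are placeholders, and many public registries also require an authentication token before answering:

$ curl -s https://registry.example.com/v2/myapp/tags/list

{"name":"myapp","tags":["1.0","1.1","latest"]}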

Podman offers a command-line interface (CLI) to interact with public and private container registries, manage logins when registry authentication is required, search for image repositories by passing a string pattern, and handle locally cached images.

Searching for images

The first command we will learn to use to search images across multiple registries is the podman search command. The following example shows how to search an nginx image:

# podman search nginx

The preceding command will produce an output with many entries from all the whitelisted registries (see the Preparing your environment | Customizing container registries' search lists section of Chapter 3, Running the First Container). The output will be a little clumsy, with many entries from unknown and unreliable repositories.

In general, the podman search command accepts the following pattern:

podman search [options] TERM

Here, TERM is the search argument. The resulting output of a search has the following fields:

  • INDEX: The registry indexing the image
  • NAME: The full name of the image, including the registry name and associated namespaces
  • DESCRIPTION: A short description of the image role
  • STARS: The number of stars given by users (available only on registries supporting this feature, such as docker.io)
  • OFFICIAL: A Boolean for specifying whether the image is official
  • AUTOMATED: A field set to [OK] if the image repository uses automated builds (again, populated only by registries supporting this feature)

    Important Note

    Never trust unknown repositories and always prefer official images. When pulling images from a niche project, try to understand the content of the image before running it. Remember that an attacker could hide malicious code that could be executed inside containers.

    Even trusted repositories can be compromised in some cases. In enterprise scenarios, implement image signature verification to avoid image tampering.

It is possible to apply filters to the search and refine the output. For example, to refine the search and print only official images, we can add the following filtering option that only prints out images with the is-official flag:

# podman search nginx --filter=is-official

This command will print one line pointing to docker.io/library/nginx:latest. This official image is maintained by the nginx community and can be used more confidently.

Users can refine the output format of the command. The following example shows how to print only the image registry and the image name:

# podman search fedora \
  --filter is-official \
  --format "table {{.Index}} {{.Name}}"

INDEX       NAME

docker.io   docker.io/library/fedora

The output image name has a standard naming pattern that deserves a detailed description. The standard format is shown here:

<registry>[:<port>]/[<namespace>/]<name>:<tag>

Let's describe the preceding fields in detail, as follows:

  • registry: This contains the registry the image is stored in. The nginx image in our example is stored in the docker.io public registry. Optionally, it is possible to specify a custom port number for the registry. By default, registries expose the 5000 Transmission Control Protocol (TCP) port.
  • namespace: This field provides a hierarchy structure that is useful for distinguishing the image context from the provider. The namespace could represent the parent organization, the username of the owner of the repository, or the image role.
  • name: This contains the name of the private/public image repository where all the tags are stored. It is often referred to as the application name (that is, nginx).
  • tag: Every image stored in the repository is identified by a tag, which maps to a Secure Hash Algorithm 256 (SHA256) digest. When the tag is omitted from the image name, the generic :latest tag is assumed.
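Putting it all together, the quay.io/prometheus/prometheus:v2.7.1 reference used later in this chapter resolves to the quay.io registry, the prometheus namespace, the prometheus repository name, and the v2.7.1 tag.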

The generic search hides the image tags by default. To show all available tags for a given repository, we can add the --list-tags option to a given image name, as follows:

# podman search quay.io/prometheus/prometheus --list-tags

NAME                           TAG

quay.io/prometheus/prometheus  v2.5.0

quay.io/prometheus/prometheus  v2.6.0-rc.0

quay.io/prometheus/prometheus  v2.6.0-rc.1

quay.io/prometheus/prometheus  v2.6.0

quay.io/prometheus/prometheus  v2.6.1

quay.io/prometheus/prometheus  v2.7.0-rc.0

quay.io/prometheus/prometheus  v2.7.0-rc.1

quay.io/prometheus/prometheus  v2.7.0-rc.2

quay.io/prometheus/prometheus  v2.7.0

quay.io/prometheus/prometheus  v2.7.1

[...output omitted...]

This option is really useful for finding a specific image tag in the registry, often associated with a release version of the application/runtime.

Important Note

Using the :latest tag can lead to image versioning issues since it is not a descriptive tag. It is usually expected to point to the most recent image version but, unfortunately, this is not always true: an older image could still hold the latest tag while the most recently pushed image carries a different tag. It is up to the repository maintainer to apply tags correctly. If the repository uses semantic versioning, the best option is to pull the most recent version tag.

Pulling and viewing images

Once we have found our desired image, it can be downloaded using the podman pull command, as follows:

# podman pull docker.io/library/nginx:latest

Notice the # prompt, which indicates that the Podman command is run as root. In this case, we are pulling the image as root, and its layers and metadata are stored under the /var/lib/containers/storage path.

We can run the same command as a standard user by executing the command in a standard user's shell, like this:

$ podman pull docker.io/library/nginx:latest

In this case, the image will be downloaded in the user home directory under $HOME/.local/share/containers/storage/ and will be available to run rootless containers.
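If in doubt about which storage root is in use, it can be printed with podman info and a Go template. The following is a minimal sketch; the exact output path will differ from system to system:

$ podman info --format "{{ .Store.GraphRoot }}"

/home/<username>/.local/share/containers/storage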

Users can inspect all locally cached images with the podman images command, as illustrated here:

# podman images

REPOSITORY                  TAG         IMAGE ID      CREATED        SIZE

docker.io/library/nginx     latest      ad4c705f24d3  2 weeks ago    138 MB

docker.io/library/fedora    latest      dce66322d647  2 months ago   184 MB

[...omitted output...]

The output shows the image repository name, its tag, the image identifier (ID), the creation date, and the image size. It is very useful to keep an updated view of the images available in the local store and understand which ones are obsolete.

The podman images command also supports many options (a complete list is available by executing the man podman-images command). One of the more interesting options is --sort, which can be used to sort images by size, date, ID, repository, or tag. For example, we could print images sorted by creation date to find out the most obsolete ones, as follows:

# podman images --sort=created

Another couple of very useful options are the --all (or -a) and --quiet (or -q) options. Combined, they print only the image IDs of all the locally stored images, including intermediate image layers. The command will print output similar to the following example:

# podman images -qa

ad4c705f24d3

a56f85702a94

b5c5125e3fee

4d7fc5917f3e

625707533167

f881f1aa4d65

96ab2a326180

Listing and showing the images already pulled on a system is not the most interesting part of the job! Let's discover how to inspect images with their configuration and contents in the next section.

Inspecting images' configurations and contents

To inspect the configuration of a pulled image, the podman image inspect (or the shorter podman inspect) command comes to help us, as illustrated here:

# podman inspect docker.io/library/nginx:latest

The printed output will be a JavaScript Object Notation (JSON)-formatted object containing the image config, architecture, layers, labels, annotation, and the image build history.

The image history shows the creation history of every layer and is very useful for understanding how the image was built when the Dockerfile or the Containerfile is not available.

Since the output is a JSON object, we can extract single fields to collect specific data or use them as input parameters for other commands.

The following example prints out the command executed when a container is created upon this image:

# podman inspect docker.io/library/nginx:latest \
  --format "{{ .Config.Cmd }}"

[nginx -g daemon off;]

Notice that the formatted output is managed as a Go template.
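Any other key of the JSON document can be extracted in the same way. As a further sketch, the following prints the image OS and architecture, assuming the field names exposed by podman inspect on our test system:

# podman inspect docker.io/library/nginx:latest --format "{{ .Os }}/{{ .Architecture }}"

linux/amd64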

Sometimes, the inspection of an image must go further than a simple configuration check. On occasions, we need to inspect the filesystem content of an image. To achieve this result, Podman offers the useful podman image mount command.

The following example mounts the image and prints its mount path:

# podman image mount docker.io/library/nginx

/var/lib/containers/storage/overlay/ba9d21492c3939befbecd5ec32f6f1b9d564ccf8b1b279e0fb5c186e8b7967f2/merged

If we run a simple ls command in the provided path, we will see the image filesystem, composed from its various merged layers, as follows:

# ls -al /var/lib/containers/storage/overlay/ba9d21492c3939befbecd5ec32f6f1b9d564ccf8b1b279e0fb5c186e8b7967f2/merged

total 92

dr-xr-xr-x. 1 root root 4096 Sep 25 22:30 .

drwx------. 5 root root 4096 Sep 25 22:53 ..

drwxr-xr-x. 2 root root 4096 Sep  2 02:00 bin

drwxr-xr-x. 2 root root 4096 Jun 13 12:30 boot

drwxr-xr-x. 2 root root 4096 Sep  2 02:00 dev

drwxr-xr-x. 1 root root 4096 Sep  9 20:26 docker-entrypoint.d

-rwxrwxr-x. 1 root root 1202 Sep  9 20:25 docker-entrypoint.sh

drwxr-xr-x. 1 root root 4096 Sep  9 20:26 etc

drwxr-xr-x. 2 root root 4096 Jun 13 12:30 home

drwxr-xr-x. 1 root root 4096 Sep  9 20:26 lib

drwxr-xr-x. 2 root root 4096 Sep  2 02:00 lib64

drwxr-xr-x. 2 root root 4096 Sep  2 02:00 media

drwxr-xr-x. 2 root root 4096 Sep  2 02:00 mnt

drwxr-xr-x. 2 root root 4096 Sep  2 02:00 opt

drwxr-xr-x. 2 root root 4096 Jun 13 12:30 proc

drwx------. 2 root root 4096 Sep  2 02:00 root

drwxr-xr-x. 3 root root 4096 Sep  2 02:00 run

drwxr-xr-x. 2 root root 4096 Sep  2 02:00 sbin

drwxr-xr-x. 2 root root 4096 Sep  2 02:00 srv

drwxr-xr-x. 2 root root 4096 Jun 13 12:30 sys

drwxrwxrwt. 1 root root 4096 Sep  9 20:26 tmp

drwxr-xr-x. 1 root root 4096 Sep  2 02:00 usr

drwxr-xr-x. 1 root root 4096 Sep  2 02:00 var

To unmount the image, simply run the podman image unmount command, as follows:

# podman image unmount docker.io/library/nginx

Mounting images in rootless mode is a bit different since this execution mode only supports manual mounting with the Virtual File System (VFS) storage driver. Since we are working with the default OverlayFS storage driver, the mount/unmount commands would not work directly. A workaround is to run the podman unshare command first. It executes a new shell process inside a new user namespace where the current user ID (UID) and group ID (GID) are mapped to UID 0 and GID 0, respectively. From within this namespace, we have the privileges needed to run the podman image mount command. Let's see an example here:

$ podman unshare

# podman image mount docker.io/library/nginx:latest

/home/<username>/.local/share/containers/storage/overlay/ba9d21492c3939befbecd5ec32f6f1b9d564ccf8b1b279e0fb5c186e8b7967f2/merged

Notice that the mount point is now in the <username> home directory.

To unmount, simply run the podman unmount command, as follows:

# podman image unmount docker.io/library/nginx:latest

ad4c705f24d392b982b2f0747704b1c5162e45674294d5640cca7076eba2865d

# exit

The exit command is necessary to exit the temporary unshared namespace.

Deleting images

To delete a local store image, we can use the podman rmi command. The following example deletes the nginx image pulled before:

# podman rmi docker.io/library/nginx:latest

Untagged: docker.io/library/nginx:latest

Deleted: ad4c705f24d392b982b2f0747704b1c5162e45674294d5640cca7076eba2865d

The same command works in rootless mode when executed by a standard user against their home local store.

To remove all the cached images, use the following example, which relies on shell command expansion to get a full list of image IDs:

# podman rmi $(podman images -qa)

Notice the hash (#) symbol at the beginning of the line, which tells us that the command is executed as root.

The next command removes all images in a regular user local cache (notice the dollar symbol at the beginning of the line):

$ podman rmi $(podman images -qa)

Important Note

The podman rmi command fails to remove images that are currently in use from a running container. First, stop the containers using the blocked images and then run the command again.

Podman also offers a simpler way to clean up dangling or unused images—the podman image prune command. It does not delete images that are in use, so if you have running or stopped containers, the corresponding container images will not be deleted.

The following example deletes all unused images without asking for confirmation:

$ sudo podman image prune -af

The same command applies in rootless mode, deleting only images in the user home local store, as illustrated in the following code snippet:

$ podman image prune -af

With this, we have learned how to manage container images on our machine. Let's now learn how to handle and check running containers.

Operations with running containers

In Chapter 2, Comparing Podman and Docker, we learned in the Running your first container section how to run a container with basic examples, involving the execution of a Bash process inside a Fedora container and an httpd server that was also helpful for learning how to expose containers externally.

We will now explore a set of commands used to monitor and check our running containers and gain insights into their behavior.

Viewing and handling container status

Let's start by running a simple container and exposing it on port 8080 to make it accessible externally, as follows:

$ podman run -d -p 8080:80 docker.io/library/nginx

The preceding example is run in rootless mode, but the same can be done as the root user by prepending the sudo command. In this case, root privileges were simply not necessary.

Important Note

Rootless containers give an extra security advantage. If a malicious process breaks the container isolation, maybe leveraging a vulnerability on the host, it will gain, at most, the privileges of the standard user who started the rootless container.

Now that our container is up and running and ready to serve, we can test it by running a curl command on the localhost, which should produce a HyperText Markup Language (HTML) default output like this:

$ curl localhost:8080

<!DOCTYPE html>

<html>

<head>

<title>Welcome to nginx!</title>

<style>

html { color-scheme: light dark; }

body { width: 35em; margin: 0 auto;

font-family: Tahoma, Verdana, Arial, sans-serif; }

</style>

</head>

<body>

<h1>Welcome to nginx!</h1>

<p>If you see this page, the nginx web server is successfully installed and

working. Further configuration is required.</p>

<p>For online documentation and support please refer to

<a href="http://nginx.org/">nginx.org</a>.<br/>

Commercial support is available at

<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>

</body>

</html>

Obviously, an empty nginx server with no content to serve is not very useful, but we will learn how to serve custom content by using volumes or building custom images in the next chapters.

The first command we can use to check our container is podman ps. This simply prints out useful information from the running containers, with the option of customizing and sorting the output. Let's run the command in our host and see what is printed, as follows:

$ podman ps

CONTAINER ID  IMAGE                           COMMAND               CREATED         STATUS             PORTS                 NAMES

d8bbd5da64d0  docker.io/library/nginx:latest  nginx -g daemon o...  13 minutes ago  Up 13 minutes ago  0.0.0.0:8080->80/tcp  unruffled_saha

The output produces some interesting information about running containers, as detailed here:

  • CONTAINER ID: Every new container gets a unique hexadecimal ID. The full ID has a length of 64 characters, and a shortened portion of 12 characters is printed in the podman ps output.
  • IMAGE: The image used by the container.
  • COMMAND: The command executed inside the container.
  • CREATED: The creation date of the container.
  • STATUS: The current container status.
  • PORTS: The network ports opened in the container. When a port mapping is applied, we can see one or more host ip:port pairs mapped to the container ports with an arrow sign. For example, the 0.0.0.0:8080->80/tcp string means that the 8080/tcp host port is exposed on all the listening interfaces and is mapped to the 80/tcp container port.
  • NAMES: The container name. This can be assigned by the user or be randomly generated by the container engine.

    Tip

    Notice the randomly generated name in the last column of the output. Podman continues the Docker tradition of generating random names, using adjectives for the left part of the name and notable scientists and hackers for the right part. Indeed, Podman still uses the same github.com/docker/docker/namesgenerator Docker package, included in the vendor directory of the project.

To get a full list of both running and stopped containers, we can add the -a option to the command. To demonstrate this, we first introduce the podman stop command. This changes the container status to stopped by sending a SIGTERM signal to the processes running inside the container; if they do not terminate, a SIGKILL signal is sent after a default timeout of 10 seconds.
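The timeout can be tuned with the --time (or -t) option of podman stop. For example, the following sketch waits only 2 seconds before escalating to SIGKILL:

$ podman stop --time 2 <Container_ID_or_Name>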

Let's try to stop the previous container and check its state by executing the following code:

$ podman stop d8bbd5da64d0  

$ podman ps

This time, podman ps produced an empty output. This is because the container state is stopped. To get a full list of both running and stopped containers, run the following command:

$ podman ps -a

CONTAINER ID  IMAGE  COMMAND  CREATED   STATUS   PORT   NAMES

d8bbd5da64d0  docker.io/library/nginx:latest  nginx -g daemon o...  About a minute ago  Exited (0) About a minute ago  0.0.0.0:8080->80/tcp  unruffled_saha

Notice the status of the container, which states that the container has exited with a 0 exit code.

The stopped container can be resumed by running the podman start command, as follows:

$ podman start d8bbd5da64d0  

This command simply starts the previously stopped container again.

If we now check the container status again, we will see it is up and running, as indicated here:

$ podman ps

CONTAINER ID  IMAGE  COMMAND  CREATED   STATUS   PORT   NAMES

d8bbd5da64d0  docker.io/library/nginx:latest  nginx -g daemon o...  8 minutes ago  Up 1 second ago  0.0.0.0:8080->80/tcp  unruffled_saha

Podman keeps the container configuration, storage, and metadata as long as it is in a stopped state. However, when we resume the container, a new process is started inside it.

For more options, see the related manual (man) page (man podman-start).

If we simply need to restart a running container, we can use the podman restart command, as follows:

$ podman restart <Container_ID_or_Name>

This command has the effect of immediately restarting the processes inside the container with a new process ID (PID).

The podman start command can also be used to start containers that have been previously created but not run. To create a container without starting it, use the podman create command. The following example creates a container but does not start it:

$ podman create -p 8080:80 docker.io/library/nginx

To start it, run podman start on the created container ID or name, as follows:

$ podman start <Container_ID_or_Name>

This command is very useful for preparing an environment without running it or for mounting a container filesystem, as in the following example:

$ podman unshare

$ podman container mount <Container_ID_or_Name>

/home/<username>/.local/share/containers/storage/overlay/bf9d8df299436d80dece200a23e1b8b957f987a254a656ef94cdc56669823b5c/merged

Let's now introduce a very frequently used command: podman rm. As the name indicates, it is used to remove containers from the host. By default, it removes stopped containers, but it can be forced to remove running containers with the -f option.

Using the container from the previous example, if we stop it again and issue the podman rm command, as illustrated in the following code snippet, all the container storage, configs, and metadata will be discarded:

$ podman stop d8bbd5da64d0

$ podman rm d8bbd5da64d0

If we now run a podman ps command again, even with the -a option, we will get an empty list, as illustrated here:

$ podman ps -a

CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES

For more details, please inspect the command man page (man podman-rm).

Sometimes, it is useful—just as with images—to print only the container ID with the -q option. This one, combined with the -a option, can print a list of all stopped and running containers in the host. Let's try another example here:

$ for i in {1..5}; do podman run -d docker.io/library/nginx; done

It's interesting to notice that we have used a shell loop to start five identical containers, this time without any port mapping—just plain nginx containers. We can inspect their IDs with the following command:

$ podman ps -qa

b38ebfed5921

6204efc6d6b2

762967d87657

269f1affb699

1161072ec559

How can we stop and remove all our running containers quickly? We can use shell expansion to combine commands and reach the desired result. Shell expansion is a powerful tool: the command inside $( ) is executed first and its output is passed as arguments to the outer command, as illustrated in the following code snippet:

$ podman stop $(podman ps -qa)

$ podman rm $(podman ps -qa)

The two commands stopped all the running containers, identified by their IDs, and removed them from the host.

The podman ps command enables users to refine their output by applying specific filters. A full list of all applicable filters is available on the podman-ps man page. A simple but useful application is the status filter, which enables users to print only containers in a specific condition. Possible statuses are created, exited, paused, running, and unknown.

The following example only prints containers in an exited status:

$ podman ps --filter status=exited

Again, we can leverage the power of shell expansion to remove nothing but the exited containers, as follows:

$ podman rm $(podman ps -qa --filter status=exited)

A similar result can be achieved with the simpler-to-remember podman container prune command shown here, which removes (prunes) all stopped containers from the host:

$ podman container prune

Sorting is another useful option for producing ordered output when listing containers. The following example shows how to sort by container ID:

$ podman ps -q --sort id

The podman ps command supports formatting using a Go template to produce custom output. The next example prints only the container IDs and the commands executed inside them:

$ podman ps -a --format "{{.ID}}  {{.Command}}" --no-trunc

Also, notice the --no-trunc option is added to avoid truncating the command output. This is not mandatory but is useful when we have long commands executed inside the containers.

If we simply wish to extract the host PID of the process running inside the running containers, we can run the following example:

$ podman ps --format "{{ .Pid }}"

Instead, if we need to also find out information about the isolated namespaces, podman ps can print details about the cloned namespaces of the running containers. This is a useful starting point for advanced troubleshooting and inspection. You can see the command being run here:

$ podman ps --namespace

CONTAINER ID  NAMES                 PID         CGROUPNS    IPC         MNT         NET         PIDNS       USERNS      UTS

f2666ed4a46a  unruffled_hofstadter  437764      4026533088  4026533086  4026533083  4026532948  4026533087  4026532973  4026533085

This subsection covered many common operations to control and view the status of containers. In the next section, we will learn how to pause and resume running containers.

Pausing and unpausing containers

This short section covers the podman pause and podman unpause commands. Despite being a section related to container status handling, it is interesting to understand how Podman and the container runtime leverage control groups (cgroups) to achieve specific purposes.

Simply put, the pause and unpause commands have the purpose of pausing and resuming the processes of a running container. At this point, the reader could legitimately wonder about the difference between the pause and stop commands in Podman.

While the podman stop command simply sends a SIGTERM/SIGKILL signal to the parent process in the container, the podman pause command uses cgroups to pause the process without terminating it. When the container is unpaused, the same process is resumed transparently.

Tip

The pause/unpause low-level logic is implemented in the container runtime—for the most curious, this was the implementation in crun at the time of writing:

https://github.com/containers/crun/blob/7ef74c9330033cb884507c28fd8c267861486633/src/libcrun/cgroup.c#L1894-L1936

The following example demonstrates the podman pause and unpause commands. First, let's start a Fedora container that prints a date and time string every 2 seconds in an endless loop, as follows:

$ podman run --name timer docker.io/library/fedora bash -c 'while true; do echo $(date); sleep 2; done'

We intentionally leave the container running in a window and open a new window/tab to manage its status. Before issuing the pause command, let's inspect the PID by executing the following code:

$ podman ps --format "{{ .Pid }}" --filter name=timer

816807

Now, let's pause the running container with the following command:

$ podman pause timer

If we go back to the timer container, we see that the output just paused but the container has not exited. The unpause action seen here will bring it back to life:

$ podman unpause timer

After the unpause action, the timer container will start printing date outputs again. Looking at the PID here, nothing has changed, as expected:

$ podman ps --format "{{ .Pid }}" --filter name=timer

816807

We can check the cgroups status of the paused/unpaused container. In a third tab, open a terminal with a root shell and access the cgroupfs controller hierarchy after replacing the correct container ID, as follows:

$ sudo -i

# cd /sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/user.slice/libpod-<CONTAINER_ID>.scope/container

Now, look at the cgroup.freeze file content. This file holds a Boolean value and its state changes as we pause/unpause the container from 0 to 1 and vice versa. Try to pause and unpause the container again to test the changes.
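For example, while the container is paused we would expect something like the following (a sketch of the expected content; the exact path depends on your user and container IDs):

# cat cgroup.freeze

1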

Cleanup Tip

Since the echo loop was issued with a bash -c command, we need to send a SIGKILL signal to the process. To do this, we can stop the container and wait for the 10-second timeout, or simply run a podman kill command, as follows:

$ podman kill timer

In this subsection, we covered in detail the most common commands for watching and modifying a container's status. We can now move on to inspect the processes running inside the running containers.

Inspecting processes inside containers

When a container is running, processes inside it are isolated at the namespace level, but users still own total control of the processes running and can inspect their behavior. There are many levels of complexity in process inspection, but Podman offers tools that can speed up this task.

Let's start with the podman top command: this provides a full view of the processes running inside a container. The following example shows the processes running inside an nginx container:

$ podman top  f2666ed4a46a

USER        PID         PPID        %CPU        ELAPSED          TTY         TIME        COMMAND

root        1           0           0.000       3m26.540290427s  ?           0s          nginx: master process nginx -g daemon off;

nginx       26          1           0.000       3m26.540547429s  ?           0s          nginx: worker process

nginx       27          1           0.000       3m26.540788803s  ?           0s          nginx: worker process

nginx       28          1           0.000       3m26.540914386s  ?           0s          nginx: worker process

nginx       29          1           0.000       3m26.541040023s  ?           0s          nginx: worker process

nginx       30          1           0.000       3m26.541161213s  ?           0s          nginx: worker process

nginx       31          1           0.000       3m26.541297546s  ?           0s          nginx: worker process

nginx       32          1           0.000       3m26.54141773s   ?           0s          nginx: worker process

nginx       33          1           0.000       3m26.541564289s  ?           0s          nginx: worker process

nginx       34          1           0.000       3m26.541685475s  ?           0s          nginx: worker process

nginx       35          1           0.000       3m26.541808977s  ?           0s          nginx: worker process

nginx       36          1           0.000       3m26.541932099s  ?           0s          nginx: worker process

nginx       37          1           0.000       3m26.54205111s   ?           0s          nginx: worker process

The result is similar to the output of the ps command rather than the interactive view produced by the Linux top command.

It is possible to apply custom formatting to the output. The following example only prints PIDs, commands, and arguments:

$ podman top f2666ed4a46a pid comm args

PID         COMMAND     COMMAND

1           nginx       nginx: master process nginx -g daemon off;

26          nginx       nginx: worker process

27          nginx       nginx: worker process

28          nginx       nginx: worker process

29          nginx       nginx: worker process

30          nginx       nginx: worker process

31          nginx       nginx: worker process

32          nginx       nginx: worker process

33          nginx       nginx: worker process

34          nginx       nginx: worker process

35          nginx       nginx: worker process

36          nginx       nginx: worker process

37          nginx       nginx: worker process

We may need to inspect container processes in greater detail. As we discussed earlier in Chapter 1, Introduction to Container Technology, once a brand-new container is started, its processes are assigned PIDs starting from number 1 inside the new PID namespace, while under the hood, the container engine maps these container PIDs to the real ones on the host. So, we can use the output of the podman ps --namespace command to extract the process's original PID on the host for a given container. With that information, we can conduct advanced analysis. The following example shows how to attach the strace command, used to inspect a process's system calls (syscalls), to the process running inside the container:

$ sudo strace -p <PID>

Details about the usage of the strace command are beyond the scope of this book. See man strace for more advanced examples and a more in-depth explanation of the command options.

Another useful command that can be easily applied to processes running inside a container is pidstat. Once we have obtained the PID, we can inspect the resource usage in this way:

$ pidstat -p <PID> [<interval> <count>]

The integers applied at the end represent, respectively, the execution interval of the command and the number of times it must print the usage stats. See man pidstat for more usage options.

When a process in a container becomes unresponsive, it is possible to handle its abrupt termination with the podman kill command. By default, it sends a SIGKILL signal to the process inside the container. The following example creates an httpd container and then kills it:

$ podman run --name custom-webserver -d docker.io/library/httpd

$ podman kill custom-webserver

We can optionally send custom signals (such as SIGTERM or SIGHUP) with the --signal option. Notice that a killed container is not removed from the host: it continues to exist in a stopped, exited status.
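For example, assuming the custom-webserver container is still running, a SIGHUP could be delivered with a command like this:

$ podman kill --signal SIGHUP custom-webserver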

In Chapter 10, Troubleshooting and Monitoring Containers, we will again deal with container troubleshooting and learn how to use advanced tools such as nsenter to inspect container processes. We now move on to basic container statistics commands that can be useful for monitoring the overall resource usage by all containers running in a system.

Monitoring container stats

When multiple containers are running in the same host, it is crucial to monitor the amount of central processing unit (CPU), memory, disk, and network resources they are consuming in a given interval of time. The first, simpler command that an administrator can use is the podman stats command, shown here:

$ podman stats

Without any options, the command will open a top-like, self-refreshing window with the stats of all the running containers. The default printed values are listed here:

  • ID: The running container ID
  • NAME: The running container name
  • CPU %: The total CPU usage as a percentage
  • MEM USAGE / LIMIT: Memory usage against a given limit (dictated by system capabilities or by cgroups-driven limits)
  • MEM %: The total memory usage as a percentage
  • NET IO: Network input/output (I/O) operations
  • BLOCK IO: Disk I/O operations
  • PIDS: The number of PIDs inside the container
  • CPU TIME: Total consumed CPU time
  • AVG CPU %: Average CPU usage as a percentage

In case a redirect is needed, it is possible to avoid streaming a self-refreshing output with the --no-stream option, as follows:

$ podman stats --no-stream

However, a static output of this type is not very useful for parsing or ingestion. A better approach is to apply a JSON or Go template formatter. The following example prints out stats in a JSON format:

$ podman stats --format=json

[

{

  "id": "e263f68bbb83",

  "name": "infallible_sinoussi",

  "cpu_time": "33.518ms",

  "cpu_percent": "2.05%",

  "avg_cpu": "2.05%",

  "mem_usage": "19.3MB / 33.38GB",

  "mem_percent": "0.06%",

  "net_io": "-- / --",

  "block_io": "-- / --",

  "pids": "13"

}

]

In a similar way, it is possible to customize the output fields using a Go template. The following example only prints out the container ID, CPU percentage usage, total memory usage in bytes, and PIDs:

$ podman stats -a --no-stream --format "{{ .ID }} {{ .CPUPerc }} {{ .MemUsageBytes }} {{ .PIDs }}"

In this section, we have learned how to monitor running containers and their isolated processes. The next section shows how to inspect container configurations for analysis and troubleshooting.

Inspecting container information

A running container exposes a set of configuration data and metadata ready to be consumed. Podman implements the podman inspect command to print all the container configurations and runtime information. In its simplest form, we can simply pass the container ID or name, like this:

$ podman inspect <Container_ID_or_Name>

This command prints a JSON output with all the container configurations. For the sake of space, we will list some of the most notable fields here:

  • Path: The container entry point path. We will dig deeper into entry points later when we analyze Dockerfiles.
  • Args: The arguments passed to the entry point.
  • State: The container's current state, including crucial information such as the PID of the executed process, the conmon PID, the OCI version, and the health check status.
  • Image: The ID of the image used to run the container.
  • Name: The container name.
  • MountLabel: Container mount label for Security-Enhanced Linux (SELinux).
  • ProcessLabel: Container process label for SELinux.
  • EffectiveCaps: Effective capabilities applied to the container.
  • GraphDriver: The type of storage driver (default is overlayfs) and a list of overlay upper, lower, and merged directories.
  • Mounts: The actual bind mounts in the container.
  • NetworkSettings: The overall container network settings, including its internal Internet Protocol (IP) address, exposed ports, and port mappings.
  • Config: Container runtime configuration, including environment variables, hostname, command, working directory, labels, and annotations.
  • HostConfig: Host configuration, including cgroups' quotas, network mode, and capabilities.

This is a huge amount of information that most of the time is too much for our needs. When we need to extract specific fields, we can use the --format option to print only selected ones. The following example prints only the host-bound PID of the process executed inside the container:

$ podman inspect <ID or Name> --format "{{ .State.Pid }}"

The --format option accepts a Go template, which allows the flexibility to customize the output string as we desire.

The podman inspect command is also useful for understanding the behavior of the container engine and for gaining useful information during troubleshooting tasks.

For example, when a container is launched, we learn that the resolv.conf file is mounted inside the container from a path that is defined in the {{ .ResolvConfPath }} key. The target path is /run/user/<UID>/containers/overlay-containers/<Container_ID>/userdata/resolv.conf when the container is executed in rootless mode, and /var/run/containers/storage/overlay-containers/<Container_ID>/userdata/resolv.conf when in rootful mode.
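As a quick check, the actual path on your system can be printed with the same formatting technique; the output shown here is just an example for a rootless container:

$ podman inspect <Container_ID_or_Name> --format "{{ .ResolvConfPath }}"

/run/user/1000/containers/overlay-containers/<Container_ID>/userdata/resolv.conf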

Another interesting piece of information is the list of all the layers merged by overlayfs. Let's try to run a new container, this time in rootful mode, and find out information about the merged layers, as follows:

# podman run --name logger -d docker.io/library/fedora bash -c "while true; do echo test >> /tmp/test.log; sleep 5; done"

This container runs a simple loop that writes a string on a text file every 5 seconds. Now, let's run a podman inspect command to find out information about MergedDir, which is the directory where all layers are merged by overlayfs. The code is illustrated in the following snippet:

# podman inspect logger --format "{{ .GraphDriver.Data.MergedDir }}"

/var/lib/containers/storage/overlay/27d89046485db7c775b108a80072eafdf9aa63d14ee1205946d74623fc195314/merged

Inside this directory, we can find the /tmp/test.log file, as indicated here:

# cat /var/lib/containers/storage/overlay/27d89046485db7c775b108a80072eafdf9aa63d14ee1205946d74623fc195314/merged/tmp/test.log

test

test

test

test

test

[...]

We can dig deeper—the LowerDir directory holds a list of the base image layers, as shown in the following code snippet:

# podman inspect logger \
  --format "{{ .GraphDriver.Data.LowerDir}}"

/var/lib/containers/storage/overlay/4c85102d65a59c6d478bfe6bc0bf32e8c79d9772689f62451c7196380675d4af/diff

In this example, the base image is made up of only one layer. Are we going to find the log file here? Let's have a look:

# cat /var/lib/containers/storage/overlay/4c85102d65a59c6d478bfe6bc0bf32e8c79d9772689f62451c7196380675d4af/diff/tmp/test.log

cat: /var/lib/containers/storage/overlay/4c85102d65a59c6d478bfe6bc0bf32e8c79d9772689f62451c7196380675d4af/diff/tmp/test.log: No such file or directory

We are missing the log file in this layer. This is because the LowerDir directory is never written to and represents the read-only image layers. It is merged with an UpperDir directory, which is the read-write layer of the container. With podman inspect, we can find out where it resides, as illustrated here:

# podman inspect logger --format "{{ .GraphDriver.Data.UpperDir }}"

/var/lib/containers/storage/overlay/27d89046485db7c775b108a80072eafdf9aa63d14ee1205946d74623fc195314/diff

The output directory will contain only a bunch of files and directories, written since the container startup, including the /tmp/test.log file, as illustrated in the following code snippet:

# cat /var/lib/containers/storage/overlay/27d89046485db7c775b108a80072eafdf9aa63d14ee1205946d74623fc195314/diff/tmp/test.log

test

test

test

test

test

[...]

We can now stop and remove the logger container by running the following command:

# podman stop logger && podman rm logger

This example was in anticipation of the container storage topic that will be covered in Chapter 5, Implementing Storage for the Container's Data. The overlayfs mechanisms, with the lower, upper, and merged directory concepts, will be analyzed in more detail.

In this section, we learned how to inspect running containers and collect runtime information and configurations. The next section is going to cover best practices for capturing logs from containers.

Capturing logs from containers

As described earlier in this chapter, containers are made of one or more processes that can fail, printing errors and describing their current state in a log file. But where are these logs stored?

Well, of course, a process in a container could write its log messages inside a file somewhere in a temporary filesystem that the container engine has made available to it (if any). But what about a read-only filesystem or any permission constraints in the running container?

The best practice for exposing relevant logs outside the container's boundary actually leverages the use of standard streams: standard output (STDOUT) and standard error (STDERR).

Good to Know

Standard streams are communication channels interconnected to a running process in an OS. When a program is run through an interactive shell, these streams are then directly connected to the user's running terminal to let input, output, and error flow between the terminal and the process, and vice versa.

Depending on the options we use for running a brand-new container, Podman will handle the STDIN, STDOUT, and STDERR standard streams appropriately, storing the container's output in a local log file.

In Chapter 3, Running the First Container, we saw how to run a container in the background, detaching from a running container. We used the -d option to start a container in detached mode through the podman run command, as illustrated here:

$ podman run -d -i -t registry.fedoraproject.org/f29/httpd

With the previous command, we are instructing Podman to start a container in detached mode (-d), with a pseudo-terminal attached to the STDIN stream (-t) keeping the standard input stream open even if there is no terminal attached yet (-i).

The standard Podman behavior is to attach to STDOUT and STDERR streams and store any container's published data in a log file on the host filesystem.

If we are working with Podman as a root user, we can take a look at the log file available on the host system, executing the following steps:

  1. First, we need to start our container and take note of the ID returned by Podman, or ask Podman for a list of containers and take note of their ID. The code to accomplish this is shown in the following snippet:

    # podman run -d -i -t registry.fedoraproject.org/f29/httpd

    c6afe22eac7c22c35a303d5fed45bc1b6442a4cec4a9060f392362bc4cecb25d

    # podman ps

    CONTAINER ID                                                      IMAGE                                        COMMAND             CREATED         STATUS             PORTS       NAMES

    c6afe22eac7c22c35a303d5fed45bc1b6442a4cec4a9060f392362bc4cecb25d  registry.fedoraproject.org/f29/httpd:latest  /usr/bin/run-httpd  27 minutes ago  Up 27 minutes ago              gifted_allen

  2. After that, we can take a look under the /var/lib/containers/storage/overlay-containers/ directory and search for a folder with a name that matches our container's ID, as follows:

    # cd /var/lib/containers/storage/overlay-containers/c6afe22eac7c22c35a303d5fed45bc1b6442a4cec4a9060f392362bc4cecb25d/

  3. Finally, we can check the logs of our running container by taking a look at the file named ctr.log in the userdata directory, as follows:

    # cat userdata/ctr.log

    2021-09-27T15:42:46.925288013+00:00 stdout P => sourcing 10-set-mpm.sh ...

    2021-09-27T15:42:46.925604590+00:00 stdout F

    2021-09-27T15:42:46.926882725+00:00 stdout P => sourcing 20-copy-config.sh ...

    2021-09-27T15:42:46.926920142+00:00 stdout F

    2021-09-27T15:42:46.929405654+00:00 stdout P => sourcing 40-ssl-certs.sh ...

    2021-09-27T15:42:46.929460531+00:00 stdout F

    2021-09-27T15:42:46.987174441+00:00 stdout P AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.88.0.9. Set the 'ServerName' directive globally to suppress this message

    2021-09-27T15:42:46.987242961+00:00 stdout F

    2021-09-27T15:42:46.996989350+00:00 stdout F [Mon Sep 27 15:42:46.996748 2021] [ssl:warn] [pid 1:tid 139708367605120] AH01882: Init: this version of mod_ssl was compiled against a newer library (OpenSSL 1.1.1b FIPS  26 Feb 2019, version currently loaded is OpenSSL 1.1.1 FIPS  11 Sep 2018) - may result in undefined or erroneous behavior

    ...

    2021-09-27T15:42:47.101066096+00:00 stdout F [Mon Sep 27 15:42:47.099445 2021] [core:notice] [pid 1:tid 139708367605120] AH00094: Command line: 'httpd -D FOREGROUND'

We just discovered the secret place where Podman saves all logs of our containers!

Please note that the procedure we just introduced works only if the log_driver field in the containers.conf file is set to the k8s-file value. For example, starting from Fedora Linux 35, the maintainers decided to switch the default from k8s-file to journald. In this case, you can look up the logs directly using the journalctl command-line utility.
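On a system where the journald driver is active, a hedged example of retrieving a container's logs directly from the journal could look like the following; the CONTAINER_NAME field is written by the driver, and gifted_allen is simply the container name from our earlier example:

# journalctl CONTAINER_NAME=gifted_allen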

If you want to check the default log_driver value, you can grep it from the system-wide configuration file, as follows:

# grep log_driver /usr/share/containers/containers.conf

Does this mean that we need to perform this entire complex procedure every time we need to analyze the logs of our containers? Of course not!

Podman has a podman logs built-in command that can easily discover, grab, and print the latest container logs for us. Considering the previous example, we can easily check the logs of our running container by executing the following command:

# podman logs c6afe22eac7c22c35a303d5fed45bc1b6442a4cec4a9060f392362bc4cecb25d

=> sourcing 10-set-mpm.sh ...

=> sourcing 20-copy-config.sh ...

=> sourcing 40-ssl-certs.sh ...

AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.88.0.9. Set the 'ServerName' directive globally to suppress this message

[Mon Sep 27 15:42:46.996748 2021] [ssl:warn] [pid 1:tid 139708367605120] AH01882: Init: this version of mod_ssl was compiled against a newer library (OpenSSL 1.1.1b FIPS  26 Feb 2019, version currently loaded is OpenSSL 1.1.1 FIPS  11 Sep 2018) - may result in undefined or erroneous behavior

...

[Mon Sep 27 15:42:47.099445 2021] [core:notice] [pid 1:tid 139708367605120] AH00094: Command line: 'httpd -D FOREGROUND'

We can also get the short ID for our running container and pass this ID to the podman logs command, as follows:

# podman ps

CONTAINER ID  IMAGE                                        COMMAND               CREATED         STATUS             PORTS       NAMES

c6afe22eac7c  registry.fedoraproject.org/f29/httpd:latest  /usr/bin/run-http...  40 minutes ago  Up 40 minutes ago              gifted_allen

# podman logs --tail 2 c6afe22eac7c

[Mon Sep 27 15:42:47.099403 2021] [mpm_event:notice] [pid 1:tid 139708367605120] AH00489: Apache/2.4.39 (Fedora) OpenSSL/1.1.1 configured -- resuming normal operations

[Mon Sep 27 15:42:47.099445 2021] [core:notice] [pid 1:tid 139708367605120] AH00094: Command line: 'httpd -D FOREGROUND'

In the previous command, we also used a nice option of the podman logs command: the --tail option, which lets us output only the latest needed rows of the container's log. In our case, we requested the latest two.

As we saw earlier in this section, Podman saves the container logs into the host filesystem. These files are not limited in size by default, so for long-living containers that produce a lot of logs, they can grow very large.

For this reason, an important configuration parameter that helps limit the log files' size is available in the Podman global configuration file, located at /etc/containers/containers.conf.

If this configuration file is missing, you can easily create a new one, inserting the following rows to apply the configuration:

# vim /etc/containers/containers.conf

[containers]

log_size_max=10000000

Through the previous configuration, we are limiting every log file for our future running containers to 10 megabytes (MB). If you have some running containers, you have to restart them to apply this new configuration.

We are now ready to move to the next section, where we will discover another useful command.

Executing processes in a running container

In the Podman daemonless architecture section of Chapter 2, Comparing Podman and Docker, we talked about the fact that Podman, as with any other container engine, leverages the Linux namespace functionality to correctly isolate running containers from each other and from the OS host as well.

Since Podman creates brand-new namespaces for every running container, it should not be a surprise that we can attach to the Linux namespaces of a running container and execute other processes inside them, just as in a full operating environment.

Podman gives us the ability to execute a process in a running container through the podman exec command.

Once executed, this command internally finds the right Linux namespaces to which the target running container is attached. Having found them, Podman executes the respective process, passed as an argument to the podman exec command, attaching it to those namespaces. The new process ends up in the same environment as the container's original processes and is able to interact with them.

To understand how this works in practice, we can consider the following example whereby we will first run a container and then execute a process beside the existing processes:

# podman run -d -i -t registry.fedoraproject.org/f29/httpd

47fae73e4811a56d799f258c85bc50262901bec2f9a9cab19c01af89713a1248

# podman exec -ti 47fae73e4811a56d799f258c85bc50262901bec2f9a9cab19c01af89713a1248 /bin/bash

bash-4.4$ ps aux

USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START    TIME COMMAND

default        1  0.6  0.6  20292 13664 pts/0    Ss+  13:37    0:00 httpd -D FOREGROUND

...

As you can see from the previous commands, we grabbed the container ID provided by Podman once the container was started and we passed it to the podman exec command as an argument.

The podman exec command could be really useful for troubleshooting, testing, and working with an existing container. In the preceding example, we attached an interactive terminal running the Bash console, and we launched the ps command for inspecting the running processes available in the current Linux namespace assigned to the container.
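Beyond interactive shells, podman exec can also run one-off, non-interactive commands. For example, the following sketch prints the OS release file of the running container started earlier:

# podman exec 47fae73e4811 cat /etc/os-release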

The podman exec command has many options available, similar to the ones provided by the podman run command. As you saw from the previous example, we used the option for getting a pseudo-terminal attached to the STDIN stream (-t), keeping the standard input stream open even if there is no terminal attached yet (-i).

For more details on the available options, we can check the manual with the respective command, as illustrated here:

# man podman-exec

We are moving forward in our journey to the container management world, and in the next section, we will also take a look at some of the capabilities that Podman offers to enable containerized workloads in the Kubernetes container orchestration world.

Running containers in pods

As we mentioned in the Docker versus Podman main differences section of Chapter 2, Comparing Podman and Docker, Podman offers capabilities to easily start adopting some basic concepts of the de facto container orchestrator named Kubernetes (also sometimes referred to as k8s).

The pod concept was introduced with Kubernetes and represents the smallest execution unit in a Kubernetes cluster. With Podman, users can create empty pods and then run containers inside them easily.

Grouping two or more containers inside a single pod can have many benefits, such as the following:

  • Sharing the same network namespace, IP address included
  • Sharing the same storage volumes for storing persistent data
  • Sharing the same configurations

In addition, placing two or more containers in the same pod will actually enable them to share the same inter-process communication (IPC) Linux namespace. This could be really useful for applications that need to communicate with each other using shared memory.

The simplest way to create a pod and start working with it is to use this command:

# podman pod create --name myhttp

3950703adb04c6bca7f83619ea28c650f9db37fd0060c1e263cf7ea34dbc8dad

# podman pod ps

POD ID        NAME        STATUS      CREATED        INFRA ID       # OF CONTAINERS

3950703adb04  myhttp      Created     6 seconds ago  1bdc82e77ba2   1

As shown in the previous example, we create a new pod named myhttp and then check the status of the pod on our host system: there is just one pod in a created state.

We can now start the pod as follows and check what will happen:

# podman pod start myhttp

3950703adb04c6bca7f83619ea28c650f9db37fd0060c1e263cf7ea34dbc8dad

# podman pod ps

POD ID        NAME        STATUS      CREATED             INFRA ID      # OF CONTAINERS

3950703adb04  myhttp      Running     About a minute ago  1bdc82e77ba2  1

The pod is now running, but what is Podman actually running? We created an empty pod without containers inside! Let's take a look at the running container by executing the podman ps command, as follows:

# podman ps

CONTAINER ID  IMAGE                 COMMAND     CREATED             STATUS            PORTS       NAMES

1bdc82e77ba2  k8s.gcr.io/pause:3.5              About a minute ago  Up 6 seconds ago              3950703adb04-infra

The podman ps command is showing a running container with an image named pause. This container is run by Podman by default as an infra container. This kind of container does nothing: it just holds the pod's namespaces open and lets the container engine attach any other container running inside the pod to them.

Having demystified the role of this special container inside our pods, we can now take a brief look at the steps required to start a multi-container pod.

First of all, let's start by running a new container inside the existing pod we created in the previous example, as follows:

# podman run --pod myhttp -d -i -t registry.fedoraproject.org/f29/httpd

cb75e65f10f6dc37c799a3150c1b9675e74d66d8e298a8d19eadfa125dffdc53

Then, we can check whether the existing pod has updated the number of containers it contains, as illustrated in the following code snippet:

# podman pod ps

POD ID        NAME        STATUS      CREATED         INFRA ID       # OF CONTAINERS

3950703adb04  myhttp      Running     21 minutes ago  1bdc82e77ba2  2

Finally, we can ask Podman for a list of running containers with the associated pod name, as follows:

# podman ps -p

CONTAINER ID  IMAGE                                        COMMAND               CREATED         STATUS             PORTS       NAMES                POD ID        PODNAME

1bdc82e77ba2  k8s.gcr.io/pause:3.5                                               22 minutes ago  Up 20 minutes ago              3950703adb04-infra   3950703adb04  myhttp

cb75e65f10f6  registry.fedoraproject.org/f29/httpd:latest  /usr/bin/run-http...  4 minutes ago   Up 4 minutes ago               determined_driscoll  3950703adb04  myhttp

As you can see, the two containers running are both associated with the pod named myhttp!

Important Note

Please consider periodically cleaning up the lab environment after completing all the examples contained in this chapter. This could help you save resources and avoid any errors when moving to the next chapter's examples. For this reason, you can refer to the code provided in the AdditionalMaterial folder in the book's GitHub repository: https://github.com/PacktPublishing/Podman-for-DevOps/tree/main/AdditionalMaterial.
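For instance, the pod and containers created in this section can be stopped and removed with the pod-level commands shown in the following sketch; if any containers inside the pod are still running, adding the -f option to podman pod rm forces their removal as well:

# podman pod stop myhttp

# podman pod rm myhttp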

With the same approach, we can add more and more containers to the same pod, letting them share all the data we described before.

Please note that placing containers in the same pod can be beneficial in some cases, but it can also become an anti-pattern. In fact, as mentioned before, Kubernetes considers the pod the smallest computing unit to be scheduled on the distributed nodes of a cluster. This means that once you group two or more containers under the same pod, they will always be executed together on the same node, and the orchestrator cannot balance or distribute their workload across multiple machines.

We will explore more about Podman's features that can enable you to enter the container orchestration world through Kubernetes in the next chapters!

Summary

In this chapter, we started developing experience in managing containers, starting with container images, and then working with running containers. Once our containers were running, we also explored the various commands available in Podman to inspect and check the logs and troubleshoot our containers. The operations needed to monitor and look after running containers are really important for any container administrator. Finally, we also took a brief look at the Kubernetes concepts available in Podman that let us group two or more containers under the same Linux namespace. All the concepts and the examples we just went through will help us start our experience as a system administrator for container technologies.

We are now ready to explore another important topic in the next chapter: managing storage for our containers!
