8

Following Kubernetes Best Practices

We have finally reached the last chapter of the Kubernetes part! Congrats on making it here—you’re now more than halfway through becoming Kubernetes and Cloud Native Associate (KCNA) certified!

In this chapter, we are going to discuss some of the best practices for operating Kubernetes and some of the security gaps and ways to address those.

We’ll learn about Kubernetes networking and network policies for traffic control, restricting access with role-based access control (RBAC), using Helm as a K8s package manager, and more. As before, we’ll need the minikube setup from the previous chapters to perform a few hands-on exercises.

The topics of this chapter include the following:

  • Kubernetes networking essentials
  • RBAC
  • Helm—the package manager for K8s
  • Kubernetes best practices

Let’s get started!

Kubernetes networking essentials

Without exaggeration, K8s networking is probably the hardest part to understand, and even for very experienced engineers and operators, it might be tough. As you hopefully remember from Chapter 4, Kubernetes implements the Container Networking Interface (CNI), which allows us to use different overlay network plugins for container networking. Yet there are so many CNI providers out there (Flannel, Calico, Cilium, Weave, and Canal, to name a few) that it is easy to get confused. Those providers rely on different technologies such as Border Gateway Protocol (BGP) or Virtual Extensible LAN (VXLAN) to deliver different levels of overlay network performance and offer different features.

But don’t worry – for the scope of KCNA, you are not required to know many details. For now, we will cover Kubernetes networking essentials.

Have a look at the following diagram:

Figure 8.1 – Kubernetes networking model

As Figure 8.1 suggests, there are three types of communication happening in a Kubernetes cluster:

  • Container to container—Within a pod, all containers can easily communicate with each other via localhost because they are colocated as one unit.
  • Pod to pod—Communication on the level of the overlay network (sometimes called the pod network) spanning all nodes in the cluster. The overlay network makes it possible for a pod on one node to talk with other pods on any nodes in a cluster. This kind of communication is often called East-West traffic.
  • The outside world (for example, the internet or other networks)—Communication that requires a Service resource of either a NodePort or a LoadBalancer type to expose a pod or a group of pods with the same application outside of the cluster. Communication with the outside world is also known as North-South traffic.

In practice, when a pod needs to communicate with other pods, this will also involve Kubernetes’ service discovery mechanism. Since every new pod started in Kubernetes automatically gets an IP address in the flat overlay network, it is almost impossible to refer to IP addresses in any configuration as addresses change all the time. Instead, we will use the ClusterIP Service, which automatically tracks all changes to the list of endpoints when a new pod comes up or an old pod is terminated (refer to Chapter 6 for a detailed explanation). Kubernetes also allows the use of IP Address Management (IPAM) plugins to control how pod IP addresses are allocated. By default, a single IP pool is used for all pods in a cluster. Using IPAM plugins, it is possible to subdivide the overlay network IP pool into smaller blocks and allocate pod IP addresses based on annotations or the worker node where a pod is started.
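As a quick refresher from Chapter 6, a ClusterIP Service spec can be as minimal as the following sketch; the name, namespace, label, and ports here are made up for illustration:

apiVersion: v1
kind: Service
metadata:
  name: backend-svc
  namespace: kcna
spec:
  type: ClusterIP    # the default Service type; can be omitted
  selector:
    app: backend     # pods with this label become the Service endpoints
  ports:
  - port: 80         # port the Service listens on
    targetPort: 8080 # port the container accepts traffic on

Other pods can then reach the application via the stable backend-svc name resolved by cluster DNS, no matter how often the pod IP addresses behind it change.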

Moving on, it is important to understand that all pods in the cluster pod network can talk to each other without any restriction by default.

Note

Kubernetes namespaces do not provide network isolation. Pods in namespace A can reach pods in namespace B by their IP address in the pod network and the other way around unless restricted by a NetworkPolicy resource.

NetworkPolicy is a resource allowing us to control network traffic flow in Kubernetes in an application-centric way. NetworkPolicy allows us to define how a pod can communicate with other pods (selected via label selectors), pods in other namespaces (selected via namespace selector), or IP block ranges.

Network policies are essentially a pod-level firewall in Kubernetes that allows us to specify which traffic is allowed to and from pods that match selectors. A simple example might be when you have one application per Kubernetes namespace consisting of many microservices. You might want to disallow communication of pods between the namespaces in such a scenario for better isolation. Another example scenario: you might want to restrict access to a database running in Kubernetes to only pods that need to access it because allowing every pod in the cluster to reach the database imposes a security risk.
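To make the database scenario concrete, here is a sketch of a NetworkPolicy that admits only backend pods to the database pods; the names, labels, and port are hypothetical:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-backend-only
  namespace: kcna
spec:
  podSelector:          # the policy applies to pods with this label
    matchLabels:
      app: database
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:      # only pods with this label may connect
        matchLabels:
          app: backend
    ports:
    - protocol: TCP
      port: 5432        # assuming a PostgreSQL-style database port

With such a policy in place (and a CNI that enforces it), any pod without the app: backend label is denied access to the database pods.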

But why, exactly, do we need to apply network policies in Kubernetes?

As applications shifted from monolithic to microservice architectures, a lot of network-based communication was added. A monolithic application keeps most communication inside itself, as one big executable program, while microservices rely on message buses and web protocols to exchange data, causing an increased amount of East-West network traffic that also needs to be secured.

Under the hood, network policies are implemented by the CNI provider, so to use them, the provider must support them. For example, Kindnet—the CNI used by default with minikube-provisioned Kubernetes—does not support network policies. Therefore, if we create any NetworkPolicy definition in our minikube Kubernetes, it will have no effect on the traffic in the cluster. Nevertheless, feel free to check the Further reading section if you’d like to learn more about K8s networking and network policies.

Coming up next, we will explore RBAC and see how it helps in securing a Kubernetes cluster.

RBAC

You’ve probably noticed that in our minikube cluster, we have unlimited access and control over all resources and namespaces. While this is fine for learning purposes, when it comes to running and operating production systems, you’ll most likely need to restrict the access. This is where Kubernetes RBAC becomes very helpful.

Kubernetes RBAC

This is the main security mechanism in Kubernetes to ensure that users only have access to resources according to their assigned roles.

A few examples of what can be done with K8s RBAC:

  • Restricting access to a specific namespace (for example, a production namespace or a namespace for a certain application) to a limited group of people (such as those with an administrator role)
  • Restricting access to be read-only for certain resources
  • Restricting access to a certain group of resources (such as Pod, Service, Deployment, Secret, or anything else)
  • Restricting access to an application that interacts with the Kubernetes API

Kubernetes RBAC is a very powerful mechanism, and it allows us to implement the least privilege principle, which is considered the best practice for access management.

Least privilege principle

This is when each user or account receives only the minimum privileges required to fulfill their job or process.

As for the scope of the KCNA exam, this is pretty much all you need to know about restricting access in Kubernetes. The intention of this book, however, is to take you one step further and closer to the real-world scenarios of operating a Kubernetes cluster, so we’ll dig a little deeper.

Let’s see what happens when you execute kubectl apply or kubectl create with some resource specification:

  1. kubectl reads the Kubernetes configuration from the file referenced by the KUBECONFIG environment variable.
  2. kubectl discovers the available Kubernetes APIs.
  3. kubectl validates the provided specification (for example, for malformed YAML).
  4. kubectl sends the request to kube-apiserver with the spec in the payload.
  5. kube-apiserver receives the request and verifies its authenticity (that is, who made the request).
  6. If the user making the request was authenticated in the previous step, an authorization check is performed (that is, is this user allowed to create/apply the requested changes?).

This is the point where RBAC kicks in and helps the API server decide if the request should be permitted or not. In Kubernetes, several RBAC concepts are used to define access rules:

  • Role—Contains rules that represent a set of permissions within a particular namespace. Permissions are purely additive—there are only ALLOW rules and no DENY rules—and anything not explicitly allowed by a role is denied. Role is a namespaced resource and requires a namespace to be specified when created.
  • ClusterRole—Same as Role, but a non-namespaced resource. For cluster-wide permissions such as granting access to all resources in all namespaces at once or granting access to cluster-scoped resources such as nodes.
  • ServiceAccount—A resource to give identity to an application running inside the Pod. It is essentially the same as a normal User but used specifically for non-human identities that need to interact with the Kubernetes API. Every pod in Kubernetes always has an association with a service account.
  • RoleBinding—This is an entity to apply and grant the permissions defined in a Role or in a ClusterRole to a User, a Group of users, or a ServiceAccount within a specific namespace.
  • ClusterRoleBinding—Like RoleBinding but works only for ClusterRole to apply the rules to all namespaces at once.

Figure 8.2 demonstrates the RoleBinding of a Role A and a ClusterRole B to a Group D of users and a ServiceAccount C within Namespace E. The rules are additive, meaning that everything that is allowed by merging of ClusterRole B and Role A rules will be allowed:

Figure 8.2 – Application of Role and ClusterRole rules via RoleBinding

While Kubernetes RBAC might seem complex at first, the moment you start applying it in practice, it gets much easier and clearer. You’ll see that RBAC mechanisms are very flexible and granular and allow us to cover all possible scenarios, including the case when an application inside a pod needs to access the Kubernetes API.

Let’s check the following simple pod-reader Role definition:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: kcna
  name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"] # the actions allowed on resources

It can be used to grant read-only access to pod resources in the kcna namespace using RoleBinding, such as in the following code snippet:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: kcna
subjects:
# subjects can be multiple users, groups or service accounts
- kind: User
  name: jack # name is case sensitive
  apiGroup: rbac.authorization.k8s.io # the standard API group for all RBAC resources
roleRef:
  # roleRef specifies the binding to a Role or ClusterRole
  kind: Role # either a Role or ClusterRole
  name: pod-reader # name of the Role or ClusterRole to bind to
  apiGroup: rbac.authorization.k8s.io

Save the two preceding definitions as role.yaml and rolebinding.yaml, then create first the Role and then the RoleBinding resource in our minikube playground:

$ minikube kubectl -- create -f role.yaml -n kcna
role.rbac.authorization.k8s.io/pod-reader created
$ minikube kubectl -- create -f rolebinding.yaml -n kcna
rolebinding.rbac.authorization.k8s.io/read-pods created

The RoleBinding references the user jack as its only subject, but a single RoleBinding can also reference any number of users, groups, and service accounts.
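For illustration, a RoleBinding with several subjects might look like the following sketch; the group and service account names are made up:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods-many
  namespace: kcna
subjects:
- kind: User
  name: jack
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: developers              # hypothetical group name
  apiGroup: rbac.authorization.k8s.io
- kind: ServiceAccount
  name: monitoring-agent        # hypothetical service account
  namespace: kcna               # ServiceAccount subjects require a namespace
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io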

Now, when it comes to testing permissions, Kubernetes has a very neat feature that allows us to check permissions without the actual user credentials (which can be an x509 client certificate). The kubectl auth can-i command allows us to verify what is allowed and what is not for a certain user, group, or service account. Try the following:

$ minikube kubectl -- auth can-i get pods --as=jack
no

But hey, didn’t we just allow get in our preceding pod-reader Role definition for the user named jack? We did, but only in the kcna namespace! Let’s try again by specifying the namespace:

$ minikube kubectl -- auth can-i get pods -n kcna --as=jack
yes

Looks much better now. How about the creation or deletion of pods? Let’s try the following:

$ minikube kubectl -- auth can-i create pods -n kcna --as=jack
no
$ minikube kubectl -- auth can-i delete pods -n kcna --as=jack
no

As expected, this is not allowed, and nothing can be done with resources other than pods in the kcna namespace, according to the Role and RoleBinding we’ve created. You’ve probably noticed that the verbs in the role definition are very precise—we’ve specified get, watch, and list, and they are not the same:

  • watch is a verb that allows us to see updates to resources in real time
  • list allows us to only list resources, but not to get further details about a particular object
  • get allows us to retrieve information about a resource, but you need to know the name of the resource (to find this out, you’ll need the list verb)

And of course, there are write permission verbs such as create, update, patch, and delete, which can be a part of a role definition spec.
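As a sketch of how the write verbs, a ClusterRole, and a ServiceAccount can come together, consider the following hypothetical definitions (all names are illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: deployment-editor    # no namespace: ClusterRole is cluster-scoped
rules:
- apiGroups: ["apps"]        # Deployments live in the apps API group
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: deployment-editor-binding
subjects:
- kind: ServiceAccount
  name: ci-deployer          # hypothetical service account for a CI tool
  namespace: kcna
roleRef:
  kind: ClusterRole
  name: deployment-editor
  apiGroup: rbac.authorization.k8s.io

With this pair in place, the ci-deployer service account would be able to manage Deployments in every namespace of the cluster, while still having no access to any other resource type.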

If you’d like to learn more about RBAC, feel free to explore on your own and check the materials in the Further reading section at the end of the chapter. Moving forward, we’re going to learn about the Kubernetes package manager in the next section.

Helm – the package manager for K8s

A package manager for Kubernetes—that might sound confusing at first. We are building images with system packages and pushing those to the image registry with Docker or another tool. Why do we need a package manager?

Note

This section is not a prerequisite for passing the KCNA exam; however, it is strongly recommended reading as it might help you to avoid mistakes when using Kubernetes in real-world, practical setups.

Imagine the following scenario—you are operating a few Kubernetes clusters for a small enterprise. Those Kubernetes clusters are similar in size and configuration and run exactly the same applications, but for different environments such as development, testing, and production. The dev team was pushing for microservices architecture, and now there are about 50 microservices that run on Kubernetes working together as a part of bigger applications.

The naive way to manage the Kubernetes specifications for all those would be the creation of individual spec files for each microservice and each environment. The number of YAML files to maintain might easily grow to over 100, and they will likely include a bunch of duplicated code and settings that are even harder to manage in the long run. There must be a better way, and using a package manager such as Helm is one possible solution.

Let’s clarify that in more detail. Helm is not for building container images and packaging application executables inside. Helm is used for the standardized management of Kubernetes specifications that represent the payload we want to run in Kubernetes clusters.

Helm

This is a tool for automating the creation, packaging, deployment, and configuration of Kubernetes applications. It helps to define, install, and update applications on Kubernetes.

Coming back to the previous example with 50 microservices and 3 environments, instead of writing duplicated spec files, with Helm you can create reusable templates once and simply apply configuration values that are different based on the environment where the application should be deployed.

Next, you realize that 20 out of those 50 microservices you run rely on individual Redis instances, and instead of duplicating the same Redis deployment specification with different names 20 times, you create a single one that is templated, reusable, and can be simply added as a requirement for other applications that need it.

In order to understand Helm a little better, let’s talk about its three main concepts:

  • Helm chart—This is a package that contains all K8s resource definitions (specs) required to run an application in Kubernetes. Think of it as the Kubernetes equivalent of a Linux DEB package, an RPM package, or a Homebrew formula.
  • Helm repository—This is a place where charts are collected and shared; it could be thought of as a Kubernetes equivalent to the Python Package Index (PyPI) or the Comprehensive Perl Archive Network (CPAN) for Perl. Charts can be downloaded from and uploaded to the repository.
  • Helm release—This is an instance of a chart running in a Kubernetes cluster. One chart can be installed many times into the same cluster, and on each installation, a new release is created. For the previous example with Redis, we can have one Redis chart that we install 20 times into the same cluster, where each installation gets its own release and release name.

In a nutshell, Helm installs charts onto Kubernetes, creating a new release on each installation. Using Helm repositories, it is very easy to find and reuse ready charts for common software to be run on Kubernetes. It is also easy to install multiple charts together that need to work as one application by specifying dependencies between the charts.

Helm comes with a CLI tool that is also called helm. Using the helm CLI tool, we can search chart repositories, package charts, install, update, and delete releases, and do pretty much anything else that Helm allows. Helm uses the same Kubernetes config file that kubectl is using and interacts directly with the Kubernetes API, as shown in Figure 8.3:

Figure 8.3 – Helm v3 architecture
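To get a feeling for the workflow, a typical helm session might look like the following sketch; the repository and chart names assume the public Bitnami repository and a kcna namespace:

$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm repo update
$ helm search repo redis                      # find Redis charts in added repos
$ helm install cache-a bitnami/redis -n kcna  # first release of the chart
$ helm install cache-b bitnami/redis -n kcna  # second release of the same chart
$ helm list -n kcna                           # list releases in the namespace

Here, the same Redis chart is installed twice, producing two independent releases (cache-a and cache-b), exactly as described previously.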

Helm also makes updates and rollbacks of applications easier. If something goes wrong with the changes introduced by a release, one simple command—helm rollback—can bring you back to the previous release version in a matter of seconds or minutes. Rollbacks with Helm are similar to the Kubernetes Deployment rollbacks that we tried in Chapter 6, but the difference is that Helm can roll back any chart spec changes. For example, say you have modified a Secret spec file that is part of a Helm chart and triggered helm upgrade to roll out the changes. A few moments later, you realize that the change broke the chart’s application, and you need to get back to the previous version quickly. You execute helm rollback with the release name and, optionally, a release revision, and get back to the working revision.
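A sketch of that upgrade-and-rollback cycle might look as follows (the release name, chart path, and revision are hypothetical):

$ helm upgrade my-app ./my-app-chart -n kcna   # roll out the changed specs
$ helm history my-app -n kcna                  # inspect the release revisions
$ helm rollback my-app 1 -n kcna               # return to revision 1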

At this time, we are not going to dive deeper into Helm and do any hands-on assignments because, again, Helm is not a part of the KCNA exam. The goal of this section is to give you a quick introduction to Helm—a tool that significantly simplifies the management of applications on Kubernetes. Helm is a graduated Cloud Native Computing Foundation (CNCF) project and comes with a powerful templating engine that allows the definition of custom functions and flexible control actions (if/else/with/range, and so on).

You can also consider other tools such as Kustomize and YTT that serve the same purpose yet follow a different approach. Neither is a part of KCNA, but as usual, the Further reading section will include resources about those if you’d like to go the extra mile.

Kubernetes best practices

While KCNA is not a security-focused certification, you are expected to know a few basics and best practices about Kubernetes and Cloud Native, and now is the time to talk about those.

Kubernetes’ documentation suggests the 4Cs of Cloud Native security: Cloud, Clusters, Containers, and Code—an approach with four layers for in-depth defense:

Figure 8.4 – 4Cs of Cloud Native security

In this approach, the security of each inner circle builds upon the next outermost layer. This way, the Code layer is protected by the foundations of the Container, Cluster, and Cloud layers. You cannot compensate for poor security standards and practices in the base layers by addressing security only at the Code level, just as you cannot disregard the need to secure the innermost circle even when the outer layers are very strong. Let’s look at what each layer of the 4Cs means in more detail.

Starting with the base, the cloud or other infrastructure (such as a corporate datacenter or co-located servers) acts as a trusted base for the Kubernetes cluster. If the Cloud layer is vulnerable or misconfigured, there is no guarantee that the components built on top of it are secure.

At the beginning of the book, we discussed what the shared responsibility model means in the cloud, where both the cloud provider and the users must take action in order to keep workloads safe and secure. Therefore, always refer to and follow the security documentation from your cloud provider.

When it comes to the Cluster layer, there are multiple best practices for Kubernetes—specifically, things such as etcd encryption, RBAC configuration, limiting access to nodes, restricting API server access, keeping the Kubernetes version up to date, and more. But don’t worry—you are not required to memorize any of those to pass the KCNA exam.

Next is the Container layer. As you might remember from Chapter 4, there are Namespaced, Sandboxed, and Virtualized containers, and they all have their pros and cons: Virtualized are the most secure yet heavyweight, while Namespaced are the most lightweight but share the host kernel and thus provide a lower level of isolation. Which one to run depends on the workload and other requirements you might have. Also, avoid running applications in containers as the root user: if such a container is compromised, there is a high chance that the whole node, with all other containers on it, will be compromised too.

And reaching the middle, at the core is the Code layer. You should not run sources that you don’t trust—for example, if you don’t know the origin of the code or exactly what it does. We also discussed that aspect in detail in Chapter 4. Container images that you’ve found somewhere might package malicious code inside, and running those in your environment can open a backdoor for an attacker. At a minimum, build and test the code you execute yourself and automate vulnerability scanning as a part of the container image build process.

Should you be running Kubernetes clusters over unsecured or public networks, consider implementing a service mesh to encrypt all pod traffic. Otherwise, by default, Kubernetes’ overlay network transports all data unencrypted, although a few CNI providers support Transport Layer Security (TLS) too. Consider using network policies to isolate and further protect your workloads. The right way to do it is to deny all communication between pods by default and put tailored allow rules for each application and microservice in place. And yes, you can have both a service mesh and network policies in one cluster, and their usage is not exclusive.
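A minimal default-deny policy is short: it selects all pods in its namespace and, because it defines no allow rules, denies all ingress and egress traffic. Here is a sketch (one such policy would be applied per namespace):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: kcna
spec:
  podSelector: {}    # an empty selector matches all pods in the namespace
  policyTypes:
  - Ingress
  - Egress           # no rules are defined, so all traffic is denied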

Finally, here are a few basic good practices when dealing with Kubernetes. Some might be a repetition of what we have learned, but it is better to repeat twice than to learn the hard way later on; a combined example follows the list:

  • Use controllers to create pods

A plain pod specification does not provide fault tolerance or any additional functions such as rolling updates. Use Deployment, StatefulSet, DaemonSet, or Job controllers to create pods.

  • Use namespaces to organize workloads

Deploying everything into one default namespace will quickly make a mess. Create multiple namespaces for better workload organization and ease of operation. Namespaces are also great for RBAC configuration and restricting traffic with network policies.

  • Use resource requests and limits

These are required for Kubernetes to make the best scheduling decisions and protect clusters against misbehaving applications utilizing all resources and causing nodes to crash.

  • Use readiness and liveness probes

These ensure that requests reach pods only when they are ready to process them. If we don’t define a readinessProbe and the application takes too long to start, then all requests forwarded to that pod will fail or time out. A livenessProbe is just as important because it will restart the container in case its process is caught in a deadlock or stuck.

  • Use small container images when possible

Avoid installing optional packages into the container images you’re building and try to get rid of all unnecessary packages. Large images take longer to download (and thus the pod takes longer to start the first time) and consume more disk space. Specialized, minimal images such as Alpine can be only 5-10 MB in size.

  • Use labels and annotations

Add metadata to Kubernetes resources to organize your cluster workloads. This is helpful for operations and for tracking how different applications interact with each other. The K8s documentation recommends including name, instance, version, component, part-of, and other labels. Where labels are used to identify resources, annotations are used to store additional information about K8s resources (last-updated, managed-by, and so on).

  • Use multiple nodes and topology awareness

Use an odd number of control plane nodes (such as 3 or 5) to avoid split-brain situations, and use multiple worker nodes spread across multiple failure domains (such as availability zones, or AZs) where possible. Apply pod topology spread constraints or anti-affinity rules to ensure that all replicas of a microservice are not running on the same node.
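To tie several of these practices together, here is a hedged sketch of a Deployment that combines a controller, resource requests and limits, probes, the recommended labels, and a topology spread constraint; all names, images, and values are illustrative:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  namespace: kcna
  labels:
    app.kubernetes.io/name: backend       # recommended K8s labels
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/part-of: shop
  annotations:
    example.com/last-updated: "2023-01-01" # hypothetical annotation
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: backend
  template:
    metadata:
      labels:
        app.kubernetes.io/name: backend
    spec:
      topologySpreadConstraints:          # spread replicas across AZs
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            app.kubernetes.io/name: backend
      containers:
      - name: backend
        image: registry.example.com/backend:1.0.0 # hypothetical self-built image
        ports:
        - containerPort: 8080
        resources:
          requests:                       # used for scheduling decisions
            cpu: 100m
            memory: 128Mi
          limits:                         # protect the node from runaway usage
            cpu: 500m
            memory: 256Mi
        readinessProbe:                   # route traffic only when ready
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
        livenessProbe:                    # restart the container if stuck
          httpGet:
            path: /healthz
            port: 8080
          periodSeconds: 10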

The list can be extended with many further points, but this should be enough to let you continue in the right direction with Kubernetes. Monitoring and observability topics will be discussed additionally in the upcoming chapters.

Summary

With that, we’ve reached the end of the Kubernetes part – well done!

Remember – the more hands-on you get, the faster you’ll learn and understand Kubernetes and its concepts. If some points still feel a bit blurry, that is fine. You can always go back and read some parts again and check the Further reading sections at the end of each chapter. Refer to the official Kubernetes documentation at https://kubernetes.io/docs/home/ if you have any questions.

This chapter discussed the three types of network communication that happen in a Kubernetes cluster and that, by default, nothing restricts communication between two pods in the cluster. Therefore, it is a good idea to use network policies to allow only the required communication and deny the rest for security reasons. Not all CNI providers support network policies, so make sure to check that when planning a Kubernetes installation.

Every new pod in the cluster automatically gets an IP address in the overlay network, and Kubernetes also takes care of cleaning it up when a pod is terminated. However, using pod IP addresses in any configuration is not practical, and we should use Kubernetes Services for both East-West and North-South communication.

Next, we learned about the RBAC features of Kubernetes and how they allow restricting access to the API. It is strongly recommended to implement RBAC rules for any cluster that is accessed by more than one person or if an application running in Kubernetes talks with the K8s API.

Managing a large number of microservices and environments might be challenging, and a package manager tool can become very handy. Helm is a powerful tool for packaging, configuring, and deploying Kubernetes applications. We’ve seen that Helm introduces additional concepts of charts, repositories, and releases.

When it comes to security, Kubernetes suggests a 4Cs layered approach: Cloud, Clusters, Containers, and Code. Each layer requires its own practices and actions to be taken, and only together do they make infrastructure and workloads secure. Depending on the security requirements and the K8s cluster setup, it might be necessary to use virtualized containers instead of namespaced containers and have a service mesh integrated to encrypt pod traffic.

Finally, we collected seven basic Kubernetes practices based on materials from this and previous chapters that should help to get you moving in the right direction. In the upcoming chapter, we will continue exploring the world of Cloud Native and learn about Cloud Native architectures.

Questions

As we conclude, here is a list of questions for you to test your knowledge regarding this chapter’s material. You will find the answers in the Assessments section of the Appendix:

  1. Which of the following is another name for pod-to-pod network traffic?
    1. East-South
    2. North-East
    3. East-West
    4. North-South
  2. What can be applied to restrict pod-to-pod traffic?
    1. PodPolicy
    2. PodSecurityPolicy
    3. TrafficPolicy
    4. NetworkPolicy
  3. Which layers are part of the 4Cs of Cloud Native security?
    1. Cloud, Collocations, Clusters, Code
    2. Cloud, Clusters, Containers, Code
    3. Cloud, Collocations, Containers, Code
    4. Code, Controllers, Clusters, Cloud
  4. Pod A is running in Namespace A and pod B is running in Namespace B. Can they communicate via their IP addresses?
    1. No, because different namespaces are isolated with a firewall
    2. Yes, but only if they are running on the same worker node
    3. Yes, if not restricted with NetworkPolicy
    4. No, because different namespaces have different IP Classless Inter-Domain Routing (CIDR) blocks
  5. How do two containers in the same pod communicate?
    1. Via a network policy
    2. Via localhost
    3. Via the NodeIP Service
    4. Via the ClusterIP Service
  6. Which of the following service types is typically used for internal pod-to-pod communication?
    1. InternalIP
    2. LoadBalancer
    3. ClusterIP
    4. NodePort
  7. What can be used to encrypt pod-to-pod communication in a cluster?
    1. NetworkPolicy
    2. Service mesh
    3. EncryptionPolicy
    4. Security Service
  8. Which of the following container types provides maximum isolation?
    1. Virtualized
    2. Namespaced
    3. Isolated
    4. Sandboxed
  9. What can be used to restrict access to the Kubernetes API?
    1. Service mesh
    2. Helm
    3. Network policies
    4. RBAC
  10. Why is it important to build your own container images?
    1. Newly built images are often smaller in size
    2. Due to copyrights and license restrictions
    3. Newly built images always include the newest packages
    4. Images found on the internet might include malware
  11. Which of the following can be used to provide fault tolerance for pods (pick multiple)?
    1. Service
    2. Deployment
    3. Ingress
    4. StatefulSet
  12. Why is it better to have three and not four control plane nodes?
    1. Because four nodes consume too many resources; three is enough
    2. An odd number of nodes helps prevent split-brain situations
    3. More nodes make the overlay pod network slower
    4. More nodes introduce more operational burden for version upgrades
  13. Why is it not recommended to use pod IP addresses in ConfigMap configurations?
    1. Because pods are ephemeral
    2. Because the pod IP is not reachable from the internet
    3. Because pods are using an old IPv4 protocol
    4. Because it is hard to remember IP addresses
  14. What could be the reasons why a request forwarded to a running pod ends up in a timeout error (pick multiple)?
    1. The Kubernetes API is overloaded, affecting all pods
    2. Network policy rules add additional network latency
    3. A process in a pod is stuck and no livenessProbe is set
    4. A process in a pod is still starting and no readinessProbe is set
  15. Which RBAC entity is used to give an identity to an application?
    1. Role
    2. ServiceAccount
    3. RoleBinding
    4. ServiceIdentity

Further reading

To learn more about the topics that were covered in this chapter, take a look at the following resources:
