In previous chapters, we looked at the architecture of a Kubernetes cluster. A Kubernetes cluster consists of master components—including kube-apiserver, etcd, kube-scheduler, CoreDNS, kube-controller-manager, and cloud-controller-manager—and node components, including kubelet, kube-proxy, and the container runtime. The master components are responsible for cluster management and form the control plane of the cluster. The node components, on the other hand, are responsible for the functioning of pods and containers on the node.
In Chapter 3, Threat Modeling, we briefly discussed that components in a Kubernetes cluster need to be configured securely to ensure the security of the cluster. A compromise of any cluster component can cause a data breach; indeed, misconfiguration of environments is one of the primary causes of data breaches in both traditional and microservices environments. It is therefore important for cluster administrators to understand the configurations for each component and how each setting can open up a new attack surface.
In this chapter, we look in detail at how to secure each component in a cluster. In many cases, it will not be possible to follow all security best practices, but it is important to highlight the risks and have a mitigation strategy in place if an attacker tries to exploit a vulnerable configuration.
For each master and node component with security-relevant configuration, we briefly discuss its function in a Kubernetes cluster and then look in detail at each of its configurations. We examine the possible values for these settings and highlight the recommended practices. Finally, we introduce kube-bench and walk through how it can be used to evaluate the security posture of your cluster.
In this chapter, we will cover the following topics:
kube-apiserver is the gateway to your cluster. It implements a representational state transfer (REST) application programming interface (API) to authorize and validate requests for objects. It is the central gateway that communicates and manages other components within the Kubernetes cluster. It performs three main functions:
A request to the API server goes through the following steps before being processed:
kube-apiserver is the brain of the cluster: a compromise of the API server is a compromise of the entire cluster, so it is essential that the API server is secured. Kubernetes provides a myriad of settings for configuring the API server. Let's look at some of the security-relevant ones next.
To secure the API server, you should do the following:
On Minikube, the kube-apiserver configuration looks like this:
$ ps aux | grep kube-api
root 4016 6.1 17.2 495148 342896 ? Ssl 01:03 0:16 kube-apiserver --advertise-address=192.168.99.100 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/var/lib/minikube/certs/ca.crt --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota --enable-bootstrap-token-auth=true --etcd-cafile=/var/lib/minikube/certs/etcd/ca.crt --etcd-certfile=/var/lib/minikube/certs/apiserver-etcd-client.crt --etcd-keyfile=/var/lib/minikube/certs/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/var/lib/minikube/certs/apiserver-kubelet-client.crt --kubelet-client-key=/var/lib/minikube/certs/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/var/lib/minikube/certs/front-proxy-client.crt --proxy-client-key-file=/var/lib/minikube/certs/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=8443 --service-account-key-file=/var/lib/minikube/certs/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/var/lib/minikube/certs/apiserver.crt --tls-private-key-file=/var/lib/minikube/certs/apiserver.key
As you can see, by default on Minikube, kube-apiserver does not follow all security best practices. For example, the PodSecurityPolicy admission plugin is not enabled, and strong cipher suites (--tls-cipher-suites) and a minimum TLS version (--tls-min-version) are not set. It is the responsibility of the cluster administrator to ensure that the API server is securely configured.
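As an illustration, the fragment below sketches the kind of flags that address these gaps. It assumes a 1.17-era API server (matching the Minikube output above); flag names and supported cipher suites should be verified against your Kubernetes version, and the certificate and etcd flags from the output above still apply:

```shell
# Illustrative hardening flags for kube-apiserver (a sketch, not a
# complete command line -- combine with the certificate/etcd flags above).
kube-apiserver \
  --anonymous-auth=false \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --enable-admission-plugins=NodeRestriction,PodSecurityPolicy \
  --tls-min-version=VersionTLS12 \
  --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 \
  --audit-log-path=/var/log/kube-audit.log \
  --audit-log-maxage=30
```

Note that enabling PodSecurityPolicy without any policies defined will prevent pods from being scheduled, so define permissive policies first and tighten them gradually.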
kubelet is the node agent for Kubernetes. It manages the life cycle of objects within the Kubernetes cluster and ensures that the objects are in a healthy state on the node.
To secure kubelet, you should do the following:
On Minikube, the kubelet configuration looks like this:
root 4286 2.6 4.6 1345544 92420 ? Ssl 01:03 0:18 /var/lib/minikube/binaries/v1.17.3/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.99.100 --pod-manifest-path=/etc/kubernetes/manifests
Similar to the API server, not all secure configurations are used by default on the kubelet—for example, the read-only port is not disabled. Next, we talk about how cluster administrators can secure etcd.
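As a sketch, kubelet flags along these lines close the gaps mentioned above (1.17-era flag names; several of these can equivalently be set in the file passed via --config):

```shell
# Illustrative kubelet hardening flags (sketch; verify against your version).
kubelet \
  --anonymous-auth=false \
  --authorization-mode=Webhook \
  --client-ca-file=/var/lib/minikube/certs/ca.crt \
  --read-only-port=0 \
  --protect-kernel-defaults=true \
  --rotate-certificates=true
```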
etcd is a key-value store that is used by Kubernetes for data storage. It stores the state, configuration, and secrets of the Kubernetes cluster. Only kube-apiserver should have access to etcd. Compromise of etcd can lead to a cluster compromise.
To secure etcd, you should do the following:
On Minikube, the etcd configuration looks like this:
$ ps aux | grep etcd
root 3992 1.9 2.4 10612080 48680 ? Ssl 01:03 0:18 etcd --advertise-client-urls=https://192.168.99.100:2379 --cert-file=/var/lib/minikube/certs/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/minikube/etcd --initial-advertise-peer-urls=https://192.168.99.100:2380 --initial-cluster=minikube=https://192.168.99.100:2380 --key-file=/var/lib/minikube/certs/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://192.168.99.100:2379 --listen-metrics-urls=http://127.0.0.1:2381 --listen-peer-urls=https://192.168.99.100:2380 --name=minikube --peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/var/lib/minikube/certs/etcd/peer.key --peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt
etcd stores sensitive data of a Kubernetes cluster, such as private keys and secrets. A compromise of etcd is effectively a compromise of the API server, since the API server trusts the state stored there. Cluster administrators should pay special attention when setting up etcd.
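Beyond the TLS flags shown in the Minikube output, sensitive data in etcd can also be encrypted at rest. This is configured on kube-apiserver rather than on etcd itself, since only the API server talks to etcd. The following is a sketch; the key is a placeholder you must generate, and the file must ultimately live at a path the API server can read:

```shell
# Sketch: encryption-at-rest for secrets stored in etcd.
# The key below is a placeholder -- generate a real one with:
#   head -c 32 /dev/urandom | base64
cat <<'EOF' > encryption-config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>
      - identity: {}
EOF
# Move the file somewhere the API server can read it, then start
# kube-apiserver with:
#   --encryption-provider-config=/path/to/encryption-config.yaml
```

After enabling this, existing secrets remain unencrypted until they are rewritten, so a rolling update of all secrets is typically needed.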
Next, we look at kube-scheduler. As we have already discussed in Chapter 1, Kubernetes Architecture, kube-scheduler is responsible for assigning a node to a pod. Once the pod is assigned to a node, the kubelet executes the pod. kube-scheduler first filters the set of nodes on which the pod can run, then, based on the scoring of each node, it assigns the pod to the filtered node with the highest score. Compromise of the kube-scheduler component impacts the performance and availability of the pods in the cluster.
To secure kube-scheduler, you should do the following:
On Minikube, the kube-scheduler configuration looks like this:
$ ps aux | grep kube-scheduler
root 3939 0.5 2.0 144308 41640 ? Ssl 01:03 0:02 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf --authorization-kubeconfig=/etc/kubernetes/scheduler.conf --bind-address=0.0.0.0 --kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=true
Similar to kube-apiserver, the scheduler also does not follow all security best practices by default—for example, profiling is not disabled.
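For example, a hardened invocation might add flags like the following (a sketch; check the flag names and defaults for your Kubernetes version):

```shell
# Illustrative kube-scheduler hardening flags (sketch).
kube-scheduler \
  --profiling=false \
  --bind-address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/scheduler.conf
```

Binding to 127.0.0.1 instead of 0.0.0.0 keeps the scheduler's health and metrics endpoints off the network.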
kube-controller-manager manages the control loop for the cluster. It monitors the cluster for changes through the API server and aims to move the cluster from the current state to the desired state. Multiple controller managers are shipped by default with kube-controller-manager, such as a replication controller and a namespace controller. Compromise of kube-controller-manager can result in updates to the cluster being rejected.
To secure kube-controller-manager, you should set --use-service-account-credentials, which, when used with RBAC, ensures that control loops run with minimum privileges.
On Minikube, the kube-controller-manager configuration looks like this:
$ ps aux | grep kube-controller-manager
root 3927 1.8 4.5 209520 90072 ? Ssl 01:03 0:11 kube-controller-manager --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=0.0.0.0 --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-signing-cert-file=/var/lib/minikube/certs/ca.crt --cluster-signing-key-file=/var/lib/minikube/certs/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=true --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt --root-ca-file=/var/lib/minikube/certs/ca.crt --service-account-private-key-file=/var/lib/minikube/certs/sa.key --use-service-account-credentials=true
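Beyond --use-service-account-credentials, which Minikube already sets, a hardened configuration might look like this sketch (flag names per the 1.17-era controller manager; verify against your version):

```shell
# Illustrative kube-controller-manager hardening flags (sketch).
kube-controller-manager \
  --use-service-account-credentials=true \
  --profiling=false \
  --bind-address=127.0.0.1 \
  --terminated-pod-gc-threshold=1000
```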
Next, let's talk about securing CoreDNS.
kube-dns was the default Domain Name System (DNS) server for a Kubernetes cluster. The DNS server helps internal objects such as services, pods, and containers locate each other. kube-dns comprises three containers, detailed as follows:
kube-dns has been superseded by CoreDNS since Kubernetes 1.11 because of security vulnerabilities in dnsmasq and performance issues in SkyDNS. CoreDNS is a single container that provides all the functionality of kube-dns.
To edit the configuration file for CoreDNS, you can use kubectl, like this:
$ kubectl -n kube-system edit configmap coredns
By default, the CoreDNS config file on Minikube looks like this:
# Please edit the object below. Lines beginning with a '#'
# will be ignored, and an empty file will abort the edit.
# If an error occurs while saving this file will be
# reopened with the relevant failures.
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
To secure CoreDNS, do the following:
global:53 {
    errors
    proxy . {cluster IP of this istio-core-dns service}
}
Now that we have looked at different configurations of cluster components, it is important to realize that as the components become more sophisticated, more configuration parameters will be added. It's not possible for a cluster administrator to remember these configurations. So, next, we talk about a tool that helps cluster administrators monitor the security posture of cluster components.
The Center for Internet Security (CIS) released a Kubernetes benchmark that cluster administrators can use to ensure that the cluster follows the recommended security configuration. The published Kubernetes benchmark is more than 200 pages long.
kube-bench is an automated tool, written in Go and published by Aqua Security, that runs the tests documented in the CIS benchmark. The tests are written in YAML Ain't Markup Language (YAML), making them easy to evolve.
kube-bench can be run on a node directly using the kube-bench binary, as follows:
$ kube-bench node --benchmark cis-1.4
For clusters hosted on GKE, EKS, and AKS, kube-bench is run as a pod. Once the pod finishes running, you can look at its logs to see the results, as illustrated in the following code block:
$ kubectl apply -f job-gke.yaml
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
kube-bench-2plpm 0/1 Completed 0 5m20s
$ kubectl logs kube-bench-2plpm
[INFO] 4 Worker Node Security Configuration
[INFO] 4.1 Worker Node Configuration Files
[WARN] 4.1.1 Ensure that the kubelet service file permissions are set to 644 or more restrictive (Not Scored)
[WARN] 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Not Scored)
[PASS] 4.1.3 Ensure that the proxy kubeconfig file permissions are set to 644 or more restrictive (Scored)
[PASS] 4.1.4 Ensure that the proxy kubeconfig file ownership is set to root:root (Scored)
[WARN] 4.1.5 Ensure that the kubelet.conf file permissions are set to 644 or more restrictive (Not Scored)
[WARN] 4.1.6 Ensure that the kubelet.conf file ownership is set to root:root (Not Scored)
[WARN] 4.1.7 Ensure that the certificate authorities file permissions are set to 644 or more restrictive (Not Scored)
......
== Summary ==
0 checks PASS
0 checks FAIL
37 checks WARN
0 checks INFO
It is important to investigate the checks that have a FAIL status. You should aim to have zero checks that fail. If this is not possible for any reason, you should have a risk mitigation plan in place for the failed check.
kube-bench is a helpful tool for checking whether cluster components follow security best practices. It is recommended to add or modify kube-bench rules to suit your environment. Most developers run kube-bench when starting a new cluster, but it's important to run it regularly to verify that the cluster components remain secure.
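One way to run it regularly is to schedule it with a CronJob rather than a one-off Job. The manifest below is a minimal sketch: the image tag and schedule are assumptions, and Aqua's published job manifests additionally mount host directories such as /etc/kubernetes so that the file-permission checks can run.

```shell
# Sketch: run kube-bench nightly via a CronJob (batch/v1beta1 matches
# the 1.17-era clusters used in this chapter).
cat <<'EOF' > kube-bench-cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: kube-bench
spec:
  schedule: "0 2 * * *"            # every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          hostPID: true            # kube-bench inspects host processes
          restartPolicy: Never
          containers:
            - name: kube-bench
              image: aquasec/kube-bench:latest
              command: ["kube-bench"]
EOF
# kubectl apply -f kube-bench-cronjob.yaml
```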
In this chapter, we looked at different security-sensitive configurations for each master and node component: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, CoreDNS, and etcd. We learned how each component can be secured. By default, components might not follow all the security best practices, so it is the responsibility of the cluster administrators to ensure that the components are secure. Finally, we looked at kube-bench, which can be used to understand the security baseline for your running cluster.
It is important to understand these configurations and ensure that the components follow these checklists to reduce the chance of a compromise.
In the next chapter, we'll look at authentication and authorization mechanisms in Kubernetes. We briefly talked about some admission controllers in this chapter; we'll dive deeper into different admission controllers and, finally, talk about how they can be leveraged to provide finer-grained access control.
You can refer to the following links for more information on the topics covered in this chapter: