Placing pods with constraints

Most of the time, we don't really care which nodes our pods run on, as we just want Kubernetes to allocate adequate computing resources to them automatically. Nevertheless, Kubernetes isn't aware of factors such as the geographical location of a node, its availability zone, or its machine type when scheduling a pod. This lack of awareness of the environment makes it hard to handle situations in which pods need to be bound to nodes under certain conditions, such as deploying test builds in an isolated instance group, putting I/O-intensive tasks on nodes with SSD disks, or arranging pods to be as close to each other as possible. As such, Kubernetes provides different levels of affinity that allow us to actively assign pods to certain nodes based on labels and selectors.

When we type kubectl describe node, we can see the labels attached to nodes:

$ kubectl describe node
Name: gke-mycluster-default-pool-25761d35-p9ds
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/fluentd-ds-ready=true
beta.kubernetes.io/instance-type=f1-micro
beta.kubernetes.io/kube-proxy-ds-ready=true
beta.kubernetes.io/os=linux
cloud.google.com/gke-nodepool=default-pool
cloud.google.com/gke-os-distribution=cos
failure-domain.beta.kubernetes.io/region=europe-west1
failure-domain.beta.kubernetes.io/zone=europe-west1-b
kubernetes.io/hostname=gke-mycluster-default-pool-25761d35-p9ds
...

kubectl get nodes --show-labels shows just the labels of the nodes rather than their full descriptions.

These labels reveal some basic information about a node, as well as its environment. For convenience, there are also well-known labels provided on most Kubernetes platforms:

  • kubernetes.io/hostname
  • failure-domain.beta.kubernetes.io/zone
  • failure-domain.beta.kubernetes.io/region
  • beta.kubernetes.io/instance-type
  • beta.kubernetes.io/os
  • beta.kubernetes.io/arch

The value of these labels might differ from provider to provider. For instance, failure-domain.beta.kubernetes.io/zone will be the availability zone name in AWS, such as eu-west-1b, or the zone name in GCP, such as europe-west1-b. Also, some specialized platforms, such as minikube, don't have all of these labels:

$ kubectl get node minikube -o go-template \
  --template='{{range $k,$v:=.metadata.labels}}{{printf "%s: %s " $k $v}}{{end}}'

beta.kubernetes.io/arch: amd64
beta.kubernetes.io/os: linux
kubernetes.io/hostname: minikube
node-role.kubernetes.io/master:
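
To inspect just one or two of these labels across all nodes, we can ask kubectl to print them as extra columns with the -L (--label-columns) flag. For instance, the following command lists every node together with its zone label (the output shown here is only illustrative and depends on your cluster):

$ kubectl get nodes -L failure-domain.beta.kubernetes.io/zone
NAME                                       STATUS  ROLES   AGE  VERSION  ZONE
gke-mycluster-default-pool-25761d35-p9ds   Ready   <none>  12d  v1.11.2  europe-west1-b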

Additionally, if you're working with a self-hosted cluster, you can use the --node-labels flag of kubelet to attach labels on a node when joining a cluster. As for other managed Kubernetes clusters, there are usually ways to customize labels, such as the label field in NodeConfig on GKE.
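
For instance, if we run kubelet ourselves, we could pass the labels as a comma-separated list of key/value pairs at start-up. This is only a sketch; the remaining kubelet flags depend entirely on how the cluster is provisioned:

$ kubelet --node-labels=purpose=sandbox,owner=alpha ...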

Aside from these labels pre-attached by kubelet, we can tag our nodes manually, either by updating the node's manifest or by using the shortcut command kubectl label. The following example attaches two labels, purpose=sandbox and owner=alpha, to one of our nodes:

## display only labels on the node:
$ kubectl get node gke-mycluster-default-pool-25761d35-p9ds -o go-template --template='{{range $k,$v:=.metadata.labels}}{{printf "%s: %s " $k $v}}{{end}}'
beta.kubernetes.io/arch: amd64
beta.kubernetes.io/fluentd-ds-ready: true
beta.kubernetes.io/instance-type: f1-micro
beta.kubernetes.io/kube-proxy-ds-ready: true
beta.kubernetes.io/os: linux
cloud.google.com/gke-nodepool: default-pool
cloud.google.com/gke-os-distribution: cos
failure-domain.beta.kubernetes.io/region: europe-west1
failure-domain.beta.kubernetes.io/zone: europe-west1-b
kubernetes.io/hostname: gke-mycluster-default-pool-25761d35-p9ds

## attach label
$ kubectl label node gke-mycluster-default-pool-25761d35-p9ds \
    purpose=sandbox owner=alpha
node/gke-mycluster-default-pool-25761d35-p9ds labeled

## check labels again
$ kubectl get node gke-mycluster-default-pool-25761d35-p9ds -o go-template --template='{{range $k,$v:=.metadata.labels}}{{printf "%s: %s " $k $v}}{{end}}'
...
kubernetes.io/hostname: gke-mycluster-default-pool-25761d35-p9ds
owner: alpha
purpose: sandbox

With these node labels, we can describe various placement requirements. For example, we can specify that a certain group of pods should only be put on nodes that are in the same availability zone, that is, nodes carrying the same failure-domain.beta.kubernetes.io/zone label. Currently, there are two ways to express such conditions on a pod: nodeSelector and pod/node affinity.
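
As a quick sketch of the first option, a pod that must land on the node we labelled earlier could declare those labels under nodeSelector in its spec (the pod name and image here are arbitrary examples):

apiVersion: v1
kind: Pod
metadata:
  name: sandbox-pod
spec:
  containers:
  - name: main
    image: nginx
  nodeSelector:
    purpose: sandbox
    owner: alpha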
