Working with labels and selectors

Labels are key/value pairs attached to object metadata. We can use labels to select, organize, and group objects, such as pods, replication controllers, and services. Labels are not necessarily unique: multiple objects can carry the same set of labels.

Label selectors are used to query objects via their labels. The currently supported selector types are:

  • Equality-based label selector
  • Set-based label selector
  • Empty label selector
  • Null label selector

An equality-based label selector is a set of equality requirements, which filter labels with equal or not-equal operations. A set-based label selector filters labels against a set of values, and currently supports the in and notin operators: in returns objects whose label value matches one of the listed values, while notin returns objects whose label value matches none of them. An empty label selector selects all objects, and a null label selector selects no objects. Selectors are combinable; Kubernetes returns the objects that match all the requirements in the selector.
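As a sketch of how these selector types map onto the kubectl command line (assuming a configured cluster with labeled pods), the -l flag accepts both styles:

```shell
# Equality-based selectors: = (or ==) and !=
kubectl get pods -l environment=production
kubectl get pods -l 'environment!=staging'

# Set-based selectors: in and notin
kubectl get pods -l 'environment in (staging, production)'
kubectl get pods -l 'tier notin (backend)'

# Combined requirements are ANDed together
kubectl get pods -l 'project=pilot,environment in (production)'
```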

Getting ready

Before you set labels on objects, you should review the valid naming conventions for keys and values.

A valid key should follow these rules:

  • A key consists of a name with an optional prefix, separated by a slash (/).
  • The prefix, if present, must be a DNS subdomain: a series of DNS labels separated by dots, no longer than 253 characters.
  • The name must be no more than 63 characters, using the characters [a-z0-9A-Z] plus dashes, underscores, and dots. Note that these symbols are not allowed at the beginning or end.

A valid value should follow this rule:

  • A value must be no more than 63 characters, using the characters [a-z0-9A-Z] plus dashes, underscores, and dots. Note that these symbols are not allowed at the beginning or end.
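The rules above can be sketched as a small shell check. Note that is_valid_label_value is a hypothetical helper written for illustration only; it is not part of kubectl:

```shell
# Hypothetical helper: checks a label value against the documented rules --
# no more than 63 characters, starting and ending with an alphanumeric
# character, with dashes, underscores, and dots allowed in between.
is_valid_label_value() {
  local v="$1"
  [ "${#v}" -le 63 ] || return 1
  printf '%s' "$v" | grep -Eq '^[A-Za-z0-9]([A-Za-z0-9._-]*[A-Za-z0-9])?$'
}

is_valid_label_value "frontend"  && echo "frontend: valid"
is_valid_label_value "-frontend" || echo "-frontend: invalid"
```

The same pattern applies to the name portion of a key; only the optional DNS-subdomain prefix follows different rules.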

You should also consider the purpose of the labels. For example, suppose we have a service in the pilot project that spans multiple tiers and runs under different development environments. We could then define our labels as:

  • project: pilot
  • environment: development, environment: staging, environment: production
  • tier: frontend, tier: backend

How to do it…

Let's try to create an nginx pod with the preceding labels in both the staging and production environments:

  1. We will create a pod for staging via a configuration file, and later a replication controller (RC) for production:
    # cat staging-nginx.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
      labels:
        project: pilot
        environment: staging
        tier: frontend
    spec:
      containers:
        -
          image: nginx
          imagePullPolicy: IfNotPresent
          name: nginx
          ports:
          - containerPort: 80
    
    // create the pod via configuration file
    # kubectl create -f staging-nginx.yaml
    pod "nginx" created
    
  2. Let's see the details of the pod:
    # kubectl describe pod nginx
    Name:        nginx
    Namespace:      default
    Image(s):      nginx
    Node:        ip-10-96-219-231/
    Start Time:      Sun, 27 Dec 2015 18:12:31 +0000
    Labels:        environment=staging,project=pilot,tier=frontend
    Status:        Running
    ...
    

    We could then see the labels in the pod description as environment=staging,project=pilot,tier=frontend.

    Good. We have a staging pod now.

  3. Now, get on with creating the RC for a production environment by using the command line:
    $ kubectl run nginx-prod --image=nginx --replicas=2 --port=80 --labels="environment=production,project=pilot,tier=frontend"
    

    This will then create an RC named nginx-prod with two replicas, an opened port 80, and with the labels environment=production,project=pilot,tier=frontend.

  4. We can see that we currently have a total of three pods. One pod was created for staging; the other two are for production:
    # kubectl get pods 
    NAME               READY     STATUS    RESTARTS   AGE
    nginx              1/1       Running   0          8s
    nginx-prod-50345   1/1       Running   0          19s
    nginx-prod-pilb4   1/1       Running   0          19s
    
  5. Let's apply some filters to select pods. For example, to select the production pods in the pilot project:
    # kubectl get pods -l "project=pilot,environment=production"
    NAME               READY     STATUS    RESTARTS   AGE
    nginx-prod-50345   1/1       Running   0          9m
    nginx-prod-pilb4   1/1       Running   0          9m
    

    By adding -l followed by key/value pairs as filter requirements, we could see the desired pods.
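Labels can also be added, changed, or removed after an object is created, without editing its configuration file. As a sketch (assuming the nginx pod above is running):

```shell
# Add a new label to the running pod
kubectl label pods nginx owner=qa

# Change an existing label (requires --overwrite)
kubectl label pods nginx environment=production --overwrite

# Remove a label by suffixing the key with a dash
kubectl label pods nginx owner-
```

Keep in mind that changing labels on a running pod can move it in or out of the pod sets matched by RC and service selectors.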

Linking service with a replication controller by using label selectors

A service in Kubernetes is used to expose ports and to load-balance across pods:

  1. In some cases, you'll need to add a service in front of the replication controller in order to expose the port to the outside world or balance the load. We will use the configuration file to create services for the staging pod and command line for production pods in the following example:
    // example of exposing staging pod
    # cat staging-nginx-service.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
      labels:
        project: pilot
        environment: staging
        tier: frontend
    spec:
      ports:
        -
          protocol: TCP
          port: 80
          targetPort: 80
      selector:
        project: pilot
        environment: staging
        tier: frontend
      type: LoadBalancer
    // create the service by configuration file
    # kubectl create -f staging-nginx-service.yaml
    service "nginx" created
    
  2. Use kubectl describe to check the details of the service:
    // describe service
    # kubectl describe service nginx
    Name:      nginx
    Namespace:    default
    Labels:      environment=staging,project=pilot,tier=frontend
    Selector:    environment=staging,project=pilot,tier=frontend
    Type:      LoadBalancer
    IP:	    192.168.167.68
    Port:      <unnamed>  80/TCP
    Endpoints:    192.168.80.28:80
    Session Affinity:  None
    No events.
    

    Using curl against the cluster IP (192.168.167.68) returns the welcome page of nginx.

  3. Next, let's add a service for the RC by using label selectors:
    // add service for nginx-prod RC
    # kubectl expose rc nginx-prod --port=80 --type=LoadBalancer --selector="project=pilot,environment=production,tier=frontend"
    
  4. Use kubectl describe to check the details of the service:
    # kubectl describe service nginx-prod
    Name:      nginx-prod
    Namespace:    default
    Labels:      environment=production,project=pilot,tier=frontend
    Selector:    environment=production,project=pilot,tier=frontend
    Type:      LoadBalancer
    IP:      192.168.200.173
    Port:      <unnamed>  80/TCP
    NodePort:    <unnamed>  32336/TCP
    Endpoints:    192.168.80.31:80,192.168.80.32:80
    Session Affinity:  None
    No events.
    

    When we use curl 192.168.200.173, we can see the welcome page of nginx just like the staging one.

    Note

    It will return a Connection reset by peer error if the selector matches an empty set of pods.

There's more…

In some cases, we might want to tag resources with values that are only for reference by programs or tools. For such non-identifying tags, we can use annotations instead, which can hold structured or unstructured data. Unlike labels, annotations are not meant for querying and selecting. The following example shows how to add annotations to a pod and how to access them inside the container via the downward API:

# cat annotation-sample.yaml
apiVersion: v1
kind: Pod
metadata:
  name: annotation-sample
  labels:
    project: pilot
    environment: staging
  annotations:
    git: 6328af0064b3db8b913bc613876a97187afe8e19
    build: "20"
spec:
  containers:
    -
      image: busybox
      imagePullPolicy: IfNotPresent
      name: busybox
      command: ["sleep", "3600"]

You can then use the downward API, which we discussed in the Working with volumes recipe, to access annotations inside containers:

# cat annotation-sample-downward.yaml
apiVersion: v1
kind: Pod
metadata:
  name: annotation-sample
  labels:
    project: pilot
    environment: staging
  annotations:
    git: 6328af0064b3db8b913bc613876a97187afe8e19
    build: "20"
spec:
  containers:
    -
      image: busybox
      imagePullPolicy: IfNotPresent
      name: busybox
      command: ["sh", "-c", "while true; do if [[ -e /etc/annotations ]]; then cat /etc/annotations; fi; sleep 5; done"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: "annotations"
            fieldRef:
              fieldPath: metadata.annotations

In this way, metadata.annotations will be exposed in the container as a file under /etc/annotations. We can also check the pod logs, where the command prints the file content to stdout:

// check the logs we print in command section
# kubectl logs -f annotation-sample
build="20"
git="6328af0064b3db8b913bc613876a97187afe8e19"
kubernetes.io/config.seen="2015-12-28T12:23:33.154911056Z"
kubernetes.io/config.source="api"
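Annotations can also be read from outside the container. As a sketch (assuming the annotation-sample pod is running), a jsonpath query retrieves a single annotation directly:

```shell
# Read one annotation without entering the container
kubectl get pod annotation-sample -o jsonpath='{.metadata.annotations.git}'
```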

See also

You can practice labels and selectors through the following recipes:

  • Working with pods
  • Working with a replication controller
  • Working with services
  • Working with volumes
  • The Working with configuration files recipe in Chapter 3, Playing with Containers