ReplicaSet

A pod isn't self-healing: if it encounters a failure, it won't recover on its own. This is where the ReplicaSet (RS) comes into play. A ReplicaSet ensures that the specified number of replica pods is always up and running in the cluster. If a pod crashes for any reason, the ReplicaSet will send a request to spin up a new one.

A ReplicaSet is similar to the ReplicationController (RC), which was used in older versions of Kubernetes. Unlike a ReplicaSet, which supports set-based selector requirements, a ReplicationController only supported equality-based selector requirements. The ReplicationController has now been completely replaced by the ReplicaSet.
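To make the difference concrete, the following sketch contrasts the two selector styles (the label keys and values here are illustrative, borrowed from the example later in this section):

// equality-based selector (ReplicationController): a plain key-value map
selector:
project: chapter3

// set-based selector requirements (ReplicaSet): matchLabels and/or
// matchExpressions with operators such as In, NotIn, Exists, DoesNotExist
selector:
matchLabels:
project: chapter3
matchExpressions:
- {key: version, operator: In, values: ["0.1", "0.2"]}

The set-based form is strictly more expressive: a `matchLabels` entry is equivalent to a `matchExpressions` entry with the `In` operator and a single value.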

Let's see how ReplicaSet works:

ReplicaSet with a desired count of 2

Let's say that we want to create a ReplicaSet object with a desired count of 2. This means that we'll always have two pods in the service. Before we write the spec for the ReplicaSet, we'll have to decide on the pod template first. This is similar to the spec of a pod. In a ReplicaSet, labels are required in the metadata section of the pod template. A ReplicaSet uses a pod selector to select the pods it manages. Labels allow the ReplicaSet to track whether all of the pods matching the selector are on track.

In this example, we'll create two pods, each with the labels project, service, and version, as shown in the preceding diagram:

// an example for RS spec
# cat 3-2-2_rs.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      project: chapter3
    matchExpressions:
      - {key: version, operator: In, values: ["0.1", "0.2"]}
  template:
    metadata:
      name: nginx
      labels:
        project: chapter3
        service: web
        version: "0.1"
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80

// create the RS
# kubectl create -f 3-2-2_rs.yaml
replicaset.apps/nginx created

Then, we can use kubectl to get the current RS status:

// get current RSs
# kubectl get rs
NAME    DESIRED   CURRENT   READY   AGE
nginx   2         2         2       29s

This shows that we desire two pods, we currently have two pods, and two pods are ready. How many pods do we have now? Let's check it out via the kubectl command:

// get current running pod
# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
nginx-l5mdn   1/1     Running   0          11s
nginx-pjjw9   1/1     Running   0          11s

This shows we have two pods up and running. As described previously, ReplicaSet manages all of the pods matching the selector. If we create a pod with the same label manually, in theory, it should match the pod selector of the RS we just created. Let's try it out:

// manually create a pod with same labels
# cat 3-2-2_rs_self_created_pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: our-nginx
  labels:
    project: chapter3
    service: web
    version: "0.1"
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
// create a pod with same labels manually
# kubectl create -f 3-2-2_rs_self_created_pod.yaml
pod "our-nginx" created

Let's see if it's up and running:

// get pod status
# kubectl get pods
NAME          READY   STATUS        RESTARTS   AGE
nginx-l5mdn   1/1     Running       0          4m
nginx-pjjw9   1/1     Running       0          4m
our-nginx     0/1     Terminating   0          4s

It gets scheduled, and the ReplicaSet detects it. The number of pods becomes three, which exceeds our desired count, so the extra pod is eventually killed:

// get pod status
# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
nginx-l5mdn   1/1     Running   0          5m
nginx-pjjw9   1/1     Running   0          5m

The following diagram illustrates how our self-created pod was evicted. Its labels matched the ReplicaSet's selector, but the desired count is 2; therefore, the additional pod was evicted:

ReplicaSet makes sure pods are in the desired state

If we want to scale on demand, we can simply use kubectl edit <resource> <resource_name> to update the spec. Here, we'll change the replica count from 2 to 5:

// change replica count from 2 to 5; the default system editor will pop up. Change the `replicas` number
# kubectl edit rs nginx
replicaset.extensions/nginx edited
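If we'd rather not open an editor, the same change can be made non-interactively with kubectl's scale subcommand (run here against the same RS; the replica count is the one we want):

// scale the RS without opening an editor
# kubectl scale rs nginx --replicas=5

This is handy in scripts, where an interactive editor session isn't an option.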

Let's check the RS information:

// get RS information
# kubectl get rs
NAME    DESIRED   CURRENT   READY   AGE
nginx   5         5         5       5m

We now have five pods. Let's check how RS works:

// describe RS resource `nginx`
# kubectl describe rs nginx
Name:         nginx
Namespace:    default
Selector:     project=chapter3,version in (0.1,0.2)
Labels:       project=chapter3
              service=web
              version=0.1
Annotations:  <none>
Replicas:     5 current / 5 desired
Pods Status:  5 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  project=chapter3
           service=web
           version=0.1
  Containers:
   nginx:
    Image:        nginx
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age    From                   Message
  ----    ------            ----   ----                   -------
  Normal  SuccessfulCreate  3m34s  replicaset-controller  Created pod: nginx-l5mdn
  Normal  SuccessfulCreate  3m34s  replicaset-controller  Created pod: nginx-pjjw9
  Normal  SuccessfulDelete  102s   replicaset-controller  Deleted pod: our-nginx
  Normal  SuccessfulCreate  37s    replicaset-controller  Created pod: nginx-v9trs
  Normal  SuccessfulCreate  37s    replicaset-controller  Created pod: nginx-n95mv
  Normal  SuccessfulCreate  37s    replicaset-controller  Created pod: nginx-xgdhq

From the output of the describe command, we can learn both the spec of the RS and its events. When we created the nginx RS, it launched two pods according to the spec. Then, we manually created another pod, named our-nginx, from a separate spec. The RS detected that this pod matched its pod selector; since the total then exceeded our desired count, it evicted the pod. Finally, when we scaled the replicas out to five, the RS detected that it didn't fulfill our desired state and launched three pods to fill the gap.

If we want to delete an RS, we simply use the kubectl command kubectl delete <resource> <resource_name>. Since we have the configuration file on hand, we could also use kubectl delete -f <configuration_file> to delete the resources listed in the file:

// delete an rs
# kubectl delete rs nginx
replicaset.extensions/nginx deleted

// get pod status
# kubectl get pods
NAME          READY   STATUS        RESTARTS   AGE
nginx-pjjw9   0/1     Terminating   0          29m
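Since the spec file is still on hand, the same deletion could also be done through the file, as mentioned previously:

// alternatively, delete the resources listed in the file
# kubectl delete -f 3-2-2_rs.yaml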