Containers make it easy to publish a new version of a program: run the latest image and skip the headache of environment setup. But what about updating a program that is already running in containers? With native Docker commands, we would have to stop the running containers before booting up new ones with the latest image and the same configuration. Kubernetes offers a simple and efficient zero-downtime alternative called rolling-update, which we demonstrate in this recipe.
Rolling-update operates on replication controllers. It creates new pods one by one to replace the old ones. The new pods in the target replication controller carry the original labels, so any service that exposes the replication controller picks up the newly created pods automatically.
For the demonstration that follows, we are going to roll out a new nginx image. To make sure every node can pull your customized image, push it to Docker Hub, the public Docker registry, or to a private registry.
For example, you can create the image by writing your own Dockerfile:
$ cat Dockerfile
FROM nginx
RUN echo "Happy Programming!" > /usr/share/nginx/html/index.html
In this Docker image, we changed the content of the default index.html page. You can then build the image and push it with the following commands:
// Push to Docker Hub
$ docker build -t <DOCKERHUB_ACCOUNT>/common-nginx . && docker push <DOCKERHUB_ACCOUNT>/common-nginx
// Or push to your private Docker registry
$ docker build -t <REGISTRY_NAME>/common-nginx . && docker push <REGISTRY_NAME>/common-nginx
To grant nodes access to a private Docker registry, refer to the Working with the private Docker registry recipe in Chapter 5, Building a Continuous Delivery Pipeline.
First, create a pair of a replication controller and a service. The replication controller runs five replicas and exposes port 80 on the container, while the Kubernetes service maps it to port 8080 on the internal network:

// Create a replication controller named nginx-rc
# kubectl run nginx-rc --image=nginx --replicas=5 --port=80 --labels="User=Amy,App=Web,State=Testing"
replicationcontroller "nginx-rc" created
// Create a service supporting nginx-rc
# kubectl expose rc nginx-rc --port=8080 --target-port=80 --name="nginx-service"
service "nginx-service" exposed
# kubectl get service nginx-service
NAME            CLUSTER_IP       EXTERNAL_IP   PORT(S)    SELECTOR                         AGE
nginx-service   192.168.163.46   <none>        8080/TCP   App=Web,State=Testing,User=Amy   35s
You can verify that the components work by querying <POD_IP>:80 and <CLUSTER_IP>:8080.
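As a quick check, you can curl both endpoints. The addresses below are hypothetical; substitute a pod IP reported by kubectl get pods -o wide and the cluster IP shown by kubectl get service:

```shell
# Hypothetical addresses; replace with the values from your own cluster.
POD_IP=192.168.15.5           # one of the nginx-rc pods
CLUSTER_IP=192.168.163.46     # nginx-service cluster IP
curl http://${POD_IP}:80        # hits a pod directly
curl http://${CLUSTER_IP}:8080  # hits the same pods through the service
```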
rolling-update helps keep a live replication controller up to date. In the following command, you specify the name of the replication controller and the new image; here, we use the image we just uploaded to Docker Hub:

# kubectl rolling-update nginx-rc --image=<DOCKERHUB_ACCOUNT>/common-nginx
Created nginx-rc-b6610813702bab5ad49d4aadd2e5b375
Scaling up nginx-rc-b6610813702bab5ad49d4aadd2e5b375 from 0 to 5, scaling down nginx-rc from 5 to 0 (keep 5 pods available, don't exceed 6 pods)
Scaling nginx-rc-b6610813702bab5ad49d4aadd2e5b375 up to 1
rolling-update starts a single new pod at a time and then waits for a period (one minute by default) before stopping an old pod and creating the next new pod. As a result, during the update there is always one more pod in service than the desired state of the replication controller; in this case, six pods. While the replication controller is updating, open another terminal to observe the process:

# kubectl get rc
CONTROLLER                                  CONTAINER(S)   IMAGE(S)                           SELECTOR                                                                      REPLICAS   AGE
nginx-rc                                    nginx-rc       nginx                              App=Web,State=Testing,User=Amy,deployment=313da350dea9227b89b4f0340699a388   5          1m
nginx-rc-b6610813702bab5ad49d4aadd2e5b375   nginx-rc       <DOCKERHUB_ACCOUNT>/common-nginx   App=Web,State=Testing,User=Amy,deployment=b6610813702bab5ad49d4aadd2e5b375    1          16s
A deployment label is added to both replication controllers to tell them apart, while the new nginx-rc keeps the other original labels. The service therefore covers the new pods at the same time:

// Check service nginx-service while updating
# kubectl describe service nginx-service
Name:             nginx-service
Namespace:        default
Labels:           App=Web,State=Testing,User=Amy
Selector:         App=Web,State=Testing,User=Amy
Type:             ClusterIP
IP:               192.168.163.46
Port:             <unnamed>  8080/TCP
Endpoints:        192.168.15.5:80,192.168.15.6:80,192.168.15.7:80 + 3 more...
Session Affinity: None
No events.
There are six pod endpoints covered by nginx-service, which matches the behavior of rolling-update described above.
Created nginx-rc-b6610813702bab5ad49d4aadd2e5b375
Scaling up nginx-rc-b6610813702bab5ad49d4aadd2e5b375 from 0 to 5, scaling down nginx-rc from 5 to 0 (keep 5 pods available, don't exceed 6 pods)
Scaling nginx-rc-b6610813702bab5ad49d4aadd2e5b375 up to 1
Scaling nginx-rc down to 4
Scaling nginx-rc-b6610813702bab5ad49d4aadd2e5b375 up to 2
Scaling nginx-rc down to 3
Scaling nginx-rc-b6610813702bab5ad49d4aadd2e5b375 up to 3
Scaling nginx-rc down to 2
Scaling nginx-rc-b6610813702bab5ad49d4aadd2e5b375 up to 4
Scaling nginx-rc down to 1
Scaling nginx-rc-b6610813702bab5ad49d4aadd2e5b375 up to 5
Scaling nginx-rc down to 0
Update succeeded. Deleting old controller: nginx-rc
Renaming nginx-rc-b6610813702bab5ad49d4aadd2e5b375 to nginx-rc
replicationcontroller "nginx-rc" rolling updated
The old nginx-rc is gradually taken out of service by scaling down.
// Take a look at the current replication controller
// The new label "deployment" remains after the update
# kubectl get rc nginx-rc
CONTROLLER   CONTAINER(S)   IMAGE(S)                           SELECTOR                                                                     REPLICAS   AGE
nginx-rc     nginx-rc       <DOCKERHUB_ACCOUNT>/common-nginx   App=Web,State=Testing,User=Amy,deployment=b6610813702bab5ad49d4aadd2e5b375   5          40s
# curl 192.168.163.46:8080
Happy Programming!
You can adjust how long rolling-update waits between pods with the flag --update-period. The valid time units are ns, us, ms, s, m, and h; for example, --update-period=1m0s:

// Try this one!
# kubectl rolling-update <REPLICATION_CONTROLLER_NAME> --image=<IMAGE_NAME> --update-period=10s
In this section, we discuss rolling-update in more detail. What happens when we renew a replication controller with N seconds as the update period? See the following image:
The previous image shows each step of the updating procedure, from which we can draw some important observations about rolling-update:
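The scaling transcript shown earlier follows a fixed pattern: bring one new pod up, then take one old pod down, so the total never exceeds the desired count plus one. A minimal shell sketch of that bookkeeping (plain arithmetic, no cluster required) looks like this:

```shell
#!/bin/sh
# Simulate the scaling bookkeeping of rolling-update for 5 replicas,
# keeping 5 pods available and never exceeding 6 pods in total.
DESIRED=5
NEW=0
OLD=$DESIRED
while [ "$NEW" -lt "$DESIRED" ]; do
  NEW=$((NEW + 1))              # start one pod from the new controller
  echo "Scaling new controller up to $NEW"
  # at this instant the total is OLD + NEW = DESIRED + 1 (the surge pod)
  OLD=$((OLD - 1))              # then retire one pod from the old controller
  echo "Scaling old controller down to $OLD"
done
echo "Update finished: new=$NEW old=$OLD"
```

Running it prints the same up/down cadence as the kubectl output, ending with five new pods and zero old ones.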
While doing a rolling-update, we may specify the image for the new replication controller. But sometimes the new image is not picked up successfully. This is because of the container's image pull policy.
When updating to a specific image, it is best to provide a tag, so that the exact version of the image to be pulled is clear and unambiguous. Most of the time, however, users ask for the latest version, and a locally cached image tagged latest is regarded as the same image, since it carries the same name. For example, the image <DOCKERHUB_ACCOUNT>/common-nginx:latest is what the following update resolves to:
# kubectl rolling-update nginx-rc --image=<DOCKERHUB_ACCOUNT>/common-nginx --update-period=10s
Still, a node will skip pulling the latest version of common-nginx if it finds a local image with the same tag. For this reason, we have to make sure that the specified image is always pulled from the registry.
To change this configuration, the subcommand edit can help:
# kubectl edit rc <REPLICATION_CONTROLLER_NAME>
Then, you can edit the configuration of the replication controller in the YAML format. The policy of image pulling could be found in the following class structure:
apiVersion: v1
kind: ReplicationController
spec:
  template:
    spec:
      containers:
      - name: <CONTAINER_NAME>
        image: <IMAGE_TAG>
        imagePullPolicy: IfNotPresent
        :
The value IfNotPresent tells the node to pull the image only if it is not present on the local disk. Changing the policy to Always avoids this update failure: with this key-value item set in the configuration, the specified image is guaranteed to be the one pulled from the image registry.
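Instead of editing the resource interactively, you can change this one field non-interactively with kubectl patch, assuming your kubectl version supports strategic merge patches. The container name nginx-rc below is the one created by kubectl run earlier in this recipe:

```shell
# Set imagePullPolicy to Always on the pod template in one shot.
# Containers are matched by name, so "nginx-rc" must match the
# container name inside the replication controller's template.
kubectl patch rc nginx-rc -p \
  '{"spec":{"template":{"spec":{"containers":[{"name":"nginx-rc","imagePullPolicy":"Always"}]}}}}'
```

Note that the changed template only applies to pods created afterwards; existing pods keep their original policy until they are replaced.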
A pod is the basic computing unit in the Kubernetes system. You can learn how to use pods even more effectively through the following recipes: