Managing rollouts

Once a rollout is triggered, Kubernetes completes all the tasks silently in the background. Let's try some hands-on experiments. Remember that a rolling update is only triggered when the associated pod's specification changes; modifications made with the commands mentioned earlier won't start one otherwise. The example we've prepared is a simple script that responds to any request with its hostname and the Alpine version it runs on. First, we create the deployment and check its response in another terminal:

$ kubectl apply -f ex-deployment.yml
deployment.apps/my-app created
service/my-app-svc created
$ kubectl proxy &
[1] 48334
Starting to serve on 127.0.0.1:8001
## switch to another terminal, #2
$ while :; do curl http://localhost:8001/api/v1/namespaces/default/services/my-app-svc:80/proxy/; sleep 1; done
my-app-5fbdb69f94-5s44q-v-3.7.1 is running...
my-app-5fbdb69f94-g7k7t-v-3.7.1 is running...
...
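The contents of ex-deployment.yml aren't shown above; a minimal sketch of what such a manifest might look like is given below. The replica count, port numbers, and the netcat-based echo script are assumptions inferred from the outputs in this section, not the book's actual file:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 5
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: alpine:3.7
        command: ["/bin/sh", "-c"]
        # reply to every connection with the pod's hostname and Alpine version
        args:
        - |
          while :; do
            echo -e "HTTP/1.1 200 OK\n\n$(hostname)-v-$(cat /etc/alpine-release) is running..." \
              | nc -l -p 5000
          done
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 5000
```

Anything matching the label selector `app: my-app` is picked up by the Service, which is why the curl loop sees responses from both old and new pods during the update.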

Now, we change its image to another version and see what the responses are:

## go back to terminal#1
$ kubectl set image deployment.apps my-app app=alpine:3.8
deployment.apps/my-app image updated

## switch to terminal#2
...
my-app-5fbdb69f94-7fz6p-v-3.7.1 is running...
my-app-6965c8f887-mbld5-v-3.8.1 is running...

...

Messages from versions 3.7 and 3.8 are interleaved until the update process ends. To determine the status of the update immediately from Kubernetes, rather than polling the service endpoint, we can use kubectl rollout to manage the rolling update process, including inspecting the progress of an ongoing update. Let's watch the active rollout with the status sub-command:

## if the previous rollout has finished,
## you can make some changes to my-app again:

$ kubectl rollout status deployment my-app
Waiting for deployment "my-app" rollout to finish: 3 out of 5 new replicas have been updated...
...
Waiting for deployment "my-app" rollout to finish: 3 out of 5 new replicas have been updated...
Waiting for deployment "my-app" rollout to finish: 3 of 5 updated replicas are available...
Waiting for deployment "my-app" rollout to finish: 3 of 5 updated replicas are available...
Waiting for deployment "my-app" rollout to finish: 3 of 5 updated replicas are available...
deployment "my-app" successfully rolled out
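The pace that status reports on is governed by the Deployment's update strategy. A sketch of the relevant fields follows; the values shown are illustrative assumptions, not necessarily what ex-deployment.yml uses:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most one extra pod above the desired replica count
      maxUnavailable: 1  # at most one pod may be unavailable during the update
```

With five replicas and these settings, Kubernetes brings up at most one new pod at a time while taking down at most one old pod, which matches the gradual "3 out of 5" progression seen above.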

At this moment, the output at terminal#2 should be from version 3.8 only. The history sub-command allows us to review previous changes to the deployment:

$ kubectl rollout history deployment.apps my-app
deployment.apps/my-app
REVISION CHANGE-CAUSE
1 <none>
2 <none>

However, the CHANGE-CAUSE field doesn't show any useful information that would help us see the details of a revision. To benefit from the rollout history feature, add a --record flag after each command that leads to a change, such as apply or patch; kubectl create also supports the --record flag.

Let's make some changes to the deployment, such as modifying the DEMO environment variable on pods in my-app. As this causes a change in the pod's specification, rollout will start right away. This sort of behavior allows us to trigger an update without building a new image. For simplicity, we use patch to modify the variable:

$ kubectl patch deployment.apps my-app -p '{"spec":{"template":{"spec":{"containers":[{"name":"app","env":[{"name":"DEMO","value":"1"}]}]}}}}' --record
deployment.apps/my-app patched
$ kubectl rollout history deployment.apps my-app
deployment.apps/my-app
REVISION CHANGE-CAUSE
1 <none>
2 <none>
3 kubectl patch deployment.apps my-app --patch={"spec":{"template":{"spec":{"containers":[{"name":"app","env":[{"name":"DEMO","value":"1"}]}]}}}} --record=true

The CHANGE-CAUSE of REVISION 3 clearly records the command that was committed. Note that only the command itself is recorded, which means that any modification made inside edit/apply/replace won't be marked down explicitly. If we want to get the manifest of a former revision, we can retrieve the saved configuration, as long as our changes were made with apply.

The CHANGE-CAUSE field is actually stored in the kubernetes.io/change-cause annotation of an object.
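Because CHANGE-CAUSE is just an annotation, we can also set it explicitly with kubectl annotate rather than relying on --record. The message text here is our own example:

```shell
$ kubectl annotate deployment.apps my-app \
    kubernetes.io/change-cause="bump image to alpine:3.8" --overwrite
```

The annotated cause then appears in the CHANGE-CAUSE column of kubectl rollout history for the current revision.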

For various reasons, we sometimes want to roll back our application, even if the rollout has been partially or fully successful. This can be achieved with the undo sub-command:

$ kubectl rollout undo deployment my-app

The whole process is basically identical to updating; that is, applying the previous manifest and performing a rolling update. We can also use the --to-revision=<REVISION#> flag to roll back to a specific version, but only retained revisions can be rolled back to. Kubernetes decides how many revisions to keep according to the revisionHistoryLimit parameter in the Deployment object.
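For instance, to roll back to the first recorded revision, assuming it's still retained, and to raise the retention limit for future rollbacks:

```shell
## roll back to a specific revision from the history
$ kubectl rollout undo deployment my-app --to-revision=1

## keep more revisions around so older versions stay available
$ kubectl patch deployment.apps my-app -p '{"spec":{"revisionHistoryLimit":10}}'
```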

The progress of an update is controlled by kubectl rollout pause and kubectl rollout resume. As their names indicate, they should be used in pairs. Pausing a deployment not only stops an ongoing rollout, but also freezes any triggering of updates, even if the specification is modified, until the deployment is resumed.
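A paused deployment accumulates changes without acting on them, which also lets us batch several modifications into a single rollout. The specific set image and set resources changes below are illustrative:

```shell
$ kubectl rollout pause deployment my-app

## these changes are recorded but don't trigger a rollout yet
$ kubectl set image deployment my-app app=alpine:3.8
$ kubectl set resources deployment my-app -c app --limits=cpu=200m

## a single rolling update covering both changes starts now
$ kubectl rollout resume deployment my-app
```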
