Image pull errors

In this section, we are going to introduce image pull errors by setting the image tag to a non-existent value.

Run the following command on Azure Cloud Shell:

kubectl edit deployment/frontend

Next, in the editor that opens, change the image tag from v3 to v_non_existent. That is, change this line:

image: gcr.io/google-samples/gb-frontend:v3

to the following:

image: gcr.io/google-samples/gb-frontend:v_non_existent

Save and close the editor to apply the change.
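If you prefer a non-interactive alternative, the same change can be made in a single command with kubectl set image. This is a sketch; it assumes the container inside the frontend Deployment is named php-redis, as in the upstream guestbook example, so check the container name in your Deployment first:

kubectl set image deployment/frontend php-redis=gcr.io/google-samples/gb-frontend:v_non_existent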

Running the following command lists all the pods in the current namespace:

kubectl get pods

The preceding command should show errors as shown here:

NAME                        READY   STATUS         RESTARTS   AGE
frontend-5489947457-hvtq2   0/1     ErrImagePull   0          4s
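Depending on timing, the status may alternate between ErrImagePull and ImagePullBackOff as the kubelet retries the pull and then backs off. To watch the status transitions live, you can add the --watch flag:

kubectl get pods --watch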

Run the following command to get all the errors:

kubectl describe pods/frontend-5489947457-<random chars>
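If you'd rather not copy the pod's random suffix, you can describe the pod by label instead. This assumes the frontend pods carry the tier=frontend label from the upstream guestbook manifests; verify the labels with kubectl get pods --show-labels:

kubectl describe pods -l tier=frontend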

A sample of the error output, which should be similar to yours, is shown here. The Warning Failed events contain the key error:

Events:
  Type     Reason     Age              From                               Message
  ----     ------     ----             ----                               -------
  Normal   Scheduled  2m               default-scheduler                  Successfully assigned default/frontend-5489947457-hvtq2 to aks-agentpool-26533852-0
  Normal   Pulling    1m (x4 over 2m)  kubelet, aks-agentpool-26533852-0  pulling image "gcr.io/google-samples/gb-frontend:v_non_existent"
  Warning  Failed     1m (x4 over 2m)  kubelet, aks-agentpool-26533852-0  Failed to pull image "gcr.io/google-samples/gb-frontend:v_non_existent": rpc error: code = Unknown desc = Error response from daemon: manifest for gcr.io/google-samples/gb-frontend:v_non_existent not found
  Warning  Failed     1m (x4 over 2m)  kubelet, aks-agentpool-26533852-0  Error: ErrImagePull
  Normal   BackOff    1m (x6 over 2m)  kubelet, aks-agentpool-26533852-0  Back-off pulling image "gcr.io/google-samples/gb-frontend:v_non_existent"
  Warning  Failed     1m (x7 over 2m)  kubelet, aks-agentpool-26533852-0  Error: ImagePullBackOff

The events clearly show that the image does not exist. Other image pull failures, such as supplying invalid credentials for a private registry, will show up in these events in the same way.
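For reference, pulling from a private registry typically involves creating a docker-registry Secret and referencing it from the pod specification. The following is a minimal sketch with placeholder values; the secret name regcred, the registry address, and the credentials are all hypothetical:

kubectl create secret docker-registry regcred --docker-server=<your-registry> --docker-username=<user> --docker-password=<password>

The Secret is then referenced in the Deployment's pod template:

spec:
  imagePullSecrets:
  - name: regcred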

Let's fix the error by setting the image tag back to v3:

kubectl edit deployment/frontend

In the editor, change this line:

image: gcr.io/google-samples/gb-frontend:v_non_existent

back to:

image: gcr.io/google-samples/gb-frontend:v3

Save and close the editor to apply the change.
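Alternatively, because the bad tag was introduced as a new rollout, you could revert without editing the Deployment at all by undoing the last rollout:

kubectl rollout undo deployment/frontend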

Either way, the Deployment rolls out the corrected image automatically. You can verify this by getting the events for the pods again.
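To confirm from the Deployment's side, you can also wait for the rollout to report success:

kubectl rollout status deployment/frontend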

Because the Deployment performed a rolling update, the frontend remained continuously available with zero downtime: the pods from the old ReplicaSet kept serving traffic while the new pod failed to start. Kubernetes recognized the problem with the new specification and automatically stopped rolling out the change any further.
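You can see this mechanism by listing the ReplicaSets behind the Deployment: during the failed rollout, the old ReplicaSet kept its healthy pods while the new one never became ready:

kubectl get replicasets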