We can use the same trick of changing the yaml file as we did earlier, as follows:
kc get -o yaml deploy/frontend > frontend.yaml
...
This time, we are going to download the yaml file and modify it as follows:
curl -O -L https://raw.githubusercontent.com/kubernetes/examples/master/guestbook/all-in-one/guestbook-all-in-one.yaml
Find the cpu value under the resources section for both the redis-slave and frontend deployments, and replace 100m with 10m:
cpu: 10m
Remember that the advantage of using Deployments over plain ReplicationControllers is the ability to roll out an upgrade. We can use that capability to let Kubernetes make the required changes in a declarative fashion.
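That declarative flow can be sketched as follows. The file and snippet below are stand-ins so the edit itself can be demonstrated locally; in the real workflow the file is the downloaded guestbook-all-in-one.yaml, and the final (commented) step is the kubectl apply that triggers the rollout:

```shell
# Stand-in for the resources snippet found in guestbook-all-in-one.yaml
cat > demo-manifest.yaml <<'EOF'
resources:
  requests:
    cpu: 100m
    memory: 100Mi
EOF
# Lower the CPU value from 100m to 10m, as described in the text
sed -i 's/cpu: 100m/cpu: 10m/' demo-manifest.yaml
# Show the edited line
grep 'cpu:' demo-manifest.yaml
# kubectl apply -f guestbook-all-in-one.yaml   # would roll out the change
```

Because the manifest describes the desired state, re-applying the edited file is all that is needed; the Deployment controller performs the rolling update itself.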
In our case, we get this new error from the kubectl get events command:
1s 18s 4 redis-slave-b6566c98-gq5cw.15753462c1fbce76 Pod Warning FailedScheduling default-scheduler 0/2 nodes are available: 1 Insufficient memory, 1 node(s) were not ready, 1 node(s) were out of disk space.
To fix the error shown in the preceding output, let's lower the memory requirement as well. This time, rather than editing the downloaded file, we will edit the live deployment with the following command:
kubectl edit deploy/redis-slave
Change the memory requirement to the following:
memory: 10Mi
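After both edits, the container's resources block should look roughly like this (in the upstream guestbook manifest these values sit under requests; the exact nesting may differ slightly between versions of the file):

```yaml
resources:
  requests:
    cpu: 10m
    memory: 10Mi
```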
- Press i to enter insert mode and change the text
- Press Esc to leave insert mode
- Type :wq to write the file out and quit
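As a non-interactive alternative to kubectl edit, the same change could be made with a JSON patch. The deployment name and container index below are assumptions based on the guestbook manifest; the block only validates the patch document locally, with the actual kubectl patch call shown as a comment:

```shell
# Hypothetical JSON patch targeting the first container's memory request
PATCH='[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/requests/memory", "value": "10Mi"}]'
# Validate the patch JSON locally before sending it to the cluster
echo "$PATCH" | python3 -m json.tool
# kubectl patch deploy/redis-slave --type=json -p="$PATCH"
```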
Kubernetes then makes the required changes to reach the desired state. Adjust the replicas and resource settings until every pod reaches the Running state:
ab443838-9b3e-4811-b287-74e417a9@Azure:~$ kc get pods |grep Running
frontend-84d8dff7c4-98pph 1/1 Running 0 1h
redis-master-6b464554c8-f5p7f 1/1 Running 1 23h
redis-slave-787d9ffb96-wsf62 1/1 Running 0 1m
When we enter the external IP address in the browser, the guestbook appears. Since the entries are present, we can confirm that the application is working properly.