Upgrading worker nodes

After upgrading the Kubernetes master, you can start upgrading the worker nodes. However, again, there is no AWS CLI support for this yet, so upgrading the worker nodes requires some manual steps:

  1. Create new worker nodes with CloudFormation, using the same steps as earlier. This time, however, specify the AMI ID for the new version, such as ami-0b4eb1d8782fc3aea. You can find the list of AMI IDs in the AWS documentation at https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html.
  2. Update the security groups of both the old and new worker nodes to allow network traffic between them. You can find the security group IDs via the AWS CLI or the AWS Web Console. For more details, see the AWS documentation: https://docs.aws.amazon.com/eks/latest/userguide/migrate-stack.html. Both this step and the AMI lookup in step 1 can also be scripted with the AWS CLI; see the sketch after step 3's output.
  3. Update the aws-auth ConfigMap to add (not replace) the new worker nodes' NodeInstanceRole ARN, as in the following example:
$ vi aws-auth-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    #
    # new version of Worker Nodes
    #
    - rolearn: arn:aws:iam::xxxxxxxxxxxx:role/chap10-v11-NodeInstanceRole-10YYF3AILTJOS
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    #
    # old version of Worker Nodes
    #
    - rolearn: arn:aws:iam::xxxxxxxxxxxx:role/chap10-worker-NodeInstanceRole-8AFV8TB4IOXA
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes

// apply the updated ConfigMap
$ kubectl apply -f aws-auth-cm.yaml

// now you can see both the 1.10 and 1.11 nodes

$ kubectl get nodes
NAME                         STATUS    ROLES     AGE       VERSION
ip-10-0-2-122.ec2.internal   Ready     <none>    1m        v1.11.5
ip-10-0-2-218.ec2.internal   Ready     <none>    6h        v1.10.3
...
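As mentioned in steps 1 and 2, the AMI lookup and the security group update can also be done from the AWS CLI. The following is a minimal sketch, not the only way to do it: the SSM parameter path is the one AWS documents for EKS-optimized AMIs, while sg-OLDNODES and sg-NEWNODES are placeholder security group IDs that you must replace with your own. Check the linked migrate-stack documentation for the exact rules your cluster needs:

// look up the recommended EKS-optimized AMI ID for Kubernetes 1.11
$ aws ssm get-parameter \
    --name /aws/service/eks/optimized-ami/1.11/amazon-linux-2/recommended/image_id \
    --query "Parameter.Value" --output text

// allow all traffic between the old and new worker node security groups (placeholder IDs)
$ aws ec2 authorize-security-group-ingress --group-id sg-OLDNODES \
    --source-group sg-NEWNODES --protocol -1
$ aws ec2 authorize-security-group-ingress --group-id sg-NEWNODES \
    --source-group sg-OLDNODES --protocol -1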
  4. Taint and drain the old nodes so that their pods move to the new nodes:
// prevent new pods from being scheduled on the older nodes
$ kubectl taint nodes ip-10-0-2-218.ec2.internal key=value:NoSchedule
$ kubectl taint nodes ip-10-0-4-74.ec2.internal key=value:NoSchedule

// evict pods from the older nodes so they are rescheduled onto the newer ones
$ kubectl drain ip-10-0-2-218.ec2.internal --ignore-daemonsets --delete-local-data
$ kubectl drain ip-10-0-4-74.ec2.internal --ignore-daemonsets --delete-local-data

// the old worker nodes are now marked SchedulingDisabled
$ kubectl get nodes
NAME                         STATUS                      ROLES     AGE       VERSION
ip-10-0-2-122.ec2.internal   Ready                       <none>    7m        v1.11.5
ip-10-0-2-218.ec2.internal   Ready,SchedulingDisabled    <none>    7h        v1.10.3
ip-10-0-4-74.ec2.internal    Ready,SchedulingDisabled    <none>    7h        v1.10.3
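Before removing anything, it is worth confirming that the evicted pods have actually been rescheduled onto the new 1.11 nodes. A minimal check using the node names from this example:

// verify that the pods now run on the new nodes (NODE column)
$ kubectl get pods --all-namespaces -o wide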
  5. Remove the old nodes from the cluster and update the ConfigMap again:
$ kubectl delete node ip-10-0-2-218.ec2.internal
node "ip-10-0-2-218.ec2.internal" deleted

$ kubectl delete node ip-10-0-4-74.ec2.internal
node "ip-10-0-4-74.ec2.internal" deleted

$ kubectl edit configmap aws-auth -n kube-system
configmap "aws-auth" edited

$ kubectl get nodes
NAME                         STATUS    ROLES     AGE       VERSION
ip-10-0-2-122.ec2.internal   Ready     <none>    15m       v1.11.5
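Once the old nodes are deleted and the old mapRoles entry is removed from aws-auth, you can delete the old worker node stack so you are no longer billed for the unused instances. A minimal sketch, assuming the old CloudFormation stack is named chap10-worker (the name is inferred from the role ARN above; replace it with your actual stack name):

// delete the old worker node stack and wait for the deletion to finish
$ aws cloudformation delete-stack --stack-name chap10-worker
$ aws cloudformation wait stack-delete-complete --stack-name chap10-worker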

Upgrading the Kubernetes version is a tiresome task for Kubernetes administrators, because of Kubernetes' release cycle (a new minor version roughly every three months) and the compatibility testing that each upgrade requires.

The EKS upgrade procedure requires some AWS knowledge and understanding. It consists of many steps and a fair amount of manual work, but none of it is too difficult. Because EKS is still a relatively new AWS service, it will keep improving and should offer easier upgrade options in the future.
