Adding worker nodes

As discussed, the AWS CLI cannot set up EKS worker nodes directly; you use CloudFormation instead. The CloudFormation template creates the AWS components that worker nodes need, such as security groups, an Auto Scaling group, and an IAM instance role. Furthermore, the Kubernetes master needs to know the worker nodes' IAM instance role in order to let them join the cluster. It's therefore highly recommended to use the CloudFormation template to launch worker nodes.

The CloudFormation execution steps are simple and follow the AWS EKS documentation (https://docs.aws.amazon.com/eks/latest/userguide/launch-workers.html). Use the S3 template URL, https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2018-12-10/amazon-eks-nodegroup.yaml, and then specify the parameters as in the following example:

Parameter                          Value
Stack name                         chap10-worker
ClusterName                        chap10 (must match the EKS control plane name)
ClusterControlPlaneSecurityGroup   sg-0fbac0a39bf64ba10 (eks-control-plane)
NodeGroupName                      chap10 EKS worker node (any name)
NodeImageId                        ami-027792c3cc6de7b5b (Kubernetes version 1.10.x)
KeyName                            my-key
VpcId                              vpc-0ca37d4650963adbb
Subnets                            • subnet-04b78ed9b5f96d76e (10.0.2.0/24)
                                   • subnet-08e16157c15cefcbc (10.0.4.0/24)

Note: use only private subnets for the worker nodes.
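
If you prefer to drive CloudFormation from the AWS CLI rather than the console, the same stack can be launched with aws cloudformation create-stack. The following is only a sketch that reuses the parameter values from the preceding table; parameters not shown (such as the node instance type) either fall back to the template defaults or may need to be supplied as well, and --capabilities CAPABILITY_IAM is required because the template creates an IAM instance role:

$ aws cloudformation create-stack \
    --stack-name chap10-worker \
    --template-url https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2018-12-10/amazon-eks-nodegroup.yaml \
    --capabilities CAPABILITY_IAM \
    --parameters \
      ParameterKey=ClusterName,ParameterValue=chap10 \
      ParameterKey=ClusterControlPlaneSecurityGroup,ParameterValue=sg-0fbac0a39bf64ba10 \
      ParameterKey=NodeGroupName,ParameterValue='chap10 EKS worker node' \
      ParameterKey=NodeImageId,ParameterValue=ami-027792c3cc6de7b5b \
      ParameterKey=KeyName,ParameterValue=my-key \
      ParameterKey=VpcId,ParameterValue=vpc-0ca37d4650963adbb \
      ParameterKey=Subnets,ParameterValue='subnet-04b78ed9b5f96d76e\,subnet-08e16157c15cefcbc'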

CloudFormation execution takes around five minutes to complete. Once it is done, you need to get the NodeInstanceRole value from the stack's Outputs tab in the CloudFormation console.
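
Alternatively, the same value can be queried from the stack's outputs with the AWS CLI; this is a sketch assuming the chap10-worker stack name used earlier:

$ aws cloudformation describe-stacks --stack-name chap10-worker \
    --query "Stacks[0].Outputs[?OutputKey=='NodeInstanceRole'].OutputValue" \
    --output text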

Finally, you can add these nodes to your Kubernetes cluster by creating a ConfigMap. You can download a ConfigMap template from https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2018-12-10/aws-auth-cm.yaml and then fill in the Instance Role ARN, as in this example:

$ cat aws-auth-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::xxxxxxxxxxxx:role/chap10-worker-NodeInstanceRole-8AFV8TB4IOXA
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes


$ kubectl create -f aws-auth-cm.yaml
configmap "aws-auth" created

After a few minutes, the worker nodes will be registered to your Kubernetes master, as follows:

$ kubectl get nodes
NAME                         STATUS    ROLES     AGE       VERSION
ip-10-0-2-218.ec2.internal   Ready     <none>    3m        v1.10.3
ip-10-0-4-74.ec2.internal    Ready     <none>    3m        v1.10.3
...

Now you can start to use your own Kubernetes cluster on AWS. Deploy your application and take a look. Note that, based on the preceding instructions, we deployed the worker nodes in private subnets, so if you want to deploy an internet-facing Kubernetes Service, you need to use type: LoadBalancer. We'll explore this in the next section.
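
As a quick smoke test, you can launch a throwaway Deployment and check that its Pods are scheduled on the new worker nodes. The my-nginx name is just an example; with this kubectl version, kubectl run creates a Deployment by default:

$ kubectl run my-nginx --image=nginx --replicas=2 --port=80
$ kubectl get pods -o wide

The NODE column of the second command should show the ip-10-0-2-x and ip-10-0-4-x worker nodes registered previously.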
