The EKS control plane is a managed Kubernetes master; you only need to use the AWS CLI to specify an IAM role, subnets, and a security group. This example also specifies the Kubernetes version as 1.10:
# Note: specify all 4 subnets
$ aws eks create-cluster --name chap10 \
    --role-arn arn:aws:iam::xxxxxxxxxxxx:role/eksServiceRole \
    --resources-vpc-config subnetIds=subnet-09f8f7f06c27cb0a0,subnet-04b78ed9b5f96d76e,subnet-026058e32f09c28af,subnet-08e16157c15cefcbc,securityGroupIds=sg-0fbac0a39bf64ba10 \
    --kubernetes-version 1.10
This takes around 10 minutes to complete. You can check the status by typing aws eks describe-cluster --name chap10. Once your control plane status is ACTIVE, you can start to set up kubeconfig to access your Kubernetes API server.
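Rather than re-running describe-cluster by hand, you can poll the status in a small loop. This is just a sketch; the wait_for_active helper name and the 10-second interval are my own choices, not part of the AWS CLI:

```shell
# Wait until a status command prints ACTIVE.
# $1 is a command that prints the current cluster status.
wait_for_active() {
  while [ "$($1)" != "ACTIVE" ]; do
    echo "still creating..."
    sleep 10
  done
  echo "cluster is ACTIVE"
}

# Real usage against EKS (--query extracts just the status string):
# wait_for_active "aws eks describe-cluster --name chap10 --query cluster.status --output text"
```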
However, AWS integrates Kubernetes API access control with AWS IAM credentials. So, you need to use aws-iam-authenticator (https://github.com/kubernetes-sigs/aws-iam-authenticator) to generate a token when you run the kubectl command.
Simply download an aws-iam-authenticator binary and install it to a directory on your command search path (for example, /usr/local/bin), then verify whether aws-iam-authenticator works or not, using the following command:
$ aws-iam-authenticator token -i chap10
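If the authenticator is working, it prints an ExecCredential JSON document to stdout. The shape looks roughly like the following (the token value is elided here, and the exact apiVersion depends on your authenticator version):

```json
{
  "kind": "ExecCredential",
  "apiVersion": "client.authentication.k8s.io/v1alpha1",
  "spec": {},
  "status": {
    "token": "k8s-aws-v1...."
  }
}
```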
If you see the authenticator token, run the AWS CLI to generate kubeconfig, as follows:
$ aws eks update-kubeconfig --name chap10
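Under the hood, update-kubeconfig writes a user entry into kubeconfig that shells out to the authenticator whenever kubectl needs a credential. The generated entry looks roughly like this (a sketch: the user name, region placeholder, and apiVersion vary by CLI version, and newer CLI releases invoke aws eks get-token instead):

```yaml
users:
- name: arn:aws:eks:<region>:xxxxxxxxxxxx:cluster/chap10
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
      - token
      - -i
      - chap10
```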
If kubeconfig was created successfully, you can check whether you can access the Kubernetes master using the kubectl command, as follows:
$ kubectl cluster-info
$ kubectl get svc
At this moment, you won't see any Kubernetes nodes (kubectl get nodes returns an empty list). So, you need one more step to add worker nodes (Kubernetes nodes).