Node pool

When launching the Kubernetes cluster, you can specify the number of nodes using the --num-nodes option. GKE manages a group of Kubernetes nodes as a node pool. This means you can manage one or more node pools that are attached to your Kubernetes cluster.
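
For reference, a three-node cluster like the one used in this section could be launched as follows; the machine type and zone shown here are assumptions inferred from the later output:

//create a cluster with three f1-micro nodes (assumed settings)
$ gcloud container clusters create my-k8s-cluster --machine-type f1-micro --num-nodes 3 --zone asia-northeast1-a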

What if you need to add or delete nodes? GKE lets you resize the node pool. Run the following command to change the number of Kubernetes nodes from three to five:

//run resize command to change number of nodes to 5
$ gcloud container clusters resize my-k8s-cluster --size 5 --zone asia-northeast1-a

//after a few minutes, you may see the additional nodes
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-my-k8s-cluster-default-pool-bcae4a66-j8zz Ready <none> 32s v1.10.9-gke.5
gke-my-k8s-cluster-default-pool-bcae4a66-jnnw Ready <none> 32s v1.10.9-gke.5
gke-my-k8s-cluster-default-pool-bcae4a66-mlhw Ready <none> 4m v1.10.9-gke.5
gke-my-k8s-cluster-default-pool-bcae4a66-tn74 Ready <none> 4m v1.10.9-gke.5
gke-my-k8s-cluster-default-pool-bcae4a66-w5l6 Ready <none> 4m v1.10.9-gke.5

Increasing the number of nodes helps if you need to scale out your node capacity. However, in this scenario, the cluster still uses the smallest instance type (f1-micro, which has only 0.6 GB of memory). That won't help if a single container needs more than 0.6 GB of memory. In this case, you need to scale up, which means adding a larger VM instance type.
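
To see why scaling out does not help here, consider a pod that requests more memory than an f1-micro node can allocate. The manifest below is only an illustration (the pod name, file name, and 1Gi request are made up for this sketch); such a pod would stay in the Pending state on an f1-micro-only cluster:

//illustrative pod requesting 1 GiB of memory, more than an f1-micro node offers
$ cat big-mem-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: big-mem
spec:
  containers:
  - name: big-mem
    image: nginx
    resources:
      requests:
        memory: 1Gi

//the scheduler cannot place this pod on any 0.6 GB node, so it remains Pending
$ kubectl create -f big-mem-pod.yml
$ kubectl get pod big-mem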

To scale up, you have to add another node pool to your cluster. This is because all VM instances within a node pool share the same configuration, so you can't change the instance type of an existing node pool.

Add a new node pool with two g1-small (1.7 GB memory) VM instances to the cluster. This expands your Kubernetes nodes with a different hardware configuration.

By default, some quotas limit the number of VM instances you can run within one region (for example, up to eight CPU cores in us-west1). If you wish to increase this quota, you must upgrade to a paid account and then request a quota change from GCP. For more details, please read the online documentation at https://cloud.google.com/compute/quotas and https://cloud.google.com/free/docs/frequently-asked-questions#how-to-upgrade.
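
If you want to check your current limits before adding nodes, gcloud can show the per-region quotas; the region name below is just an example:

//describe a region; the output contains a quotas: section with metrics such as CPUS
$ gcloud compute regions describe asia-northeast1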

Run the following command to add a node pool with two g1-small instances:

//create and add a node pool named "large-mem-pool"
$ gcloud container node-pools create large-mem-pool --cluster my-k8s-cluster --machine-type g1-small --num-nodes 2 --tags private --zone asia-northeast1-a
//after a few minutes, the large-mem-pool instances have been added
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-my-k8s-cluster-default-pool-bcae4a66-j8zz Ready <none> 5m v1.10.9-gke.5
gke-my-k8s-cluster-default-pool-bcae4a66-jnnw Ready <none> 5m v1.10.9-gke.5
gke-my-k8s-cluster-default-pool-bcae4a66-mlhw Ready <none> 9m v1.10.9-gke.5
gke-my-k8s-cluster-default-pool-bcae4a66-tn74 Ready <none> 9m v1.10.9-gke.5
gke-my-k8s-cluster-default-pool-bcae4a66-w5l6 Ready <none> 9m v1.10.9-gke.5
gke-my-k8s-cluster-large-mem-pool-66e3a44a-jtdn Ready <none> 46s v1.10.9-gke.5
gke-my-k8s-cluster-large-mem-pool-66e3a44a-qpbr Ready <none> 44s v1.10.9-gke.5

Now your cluster has more capacity: a total of seven CPU cores and 6.4 GB of memory. However, due to the larger hardware type, the Kubernetes scheduler will probably assign pods to large-mem-pool first, because it has more free memory.

However, you may want to preserve the large-mem-pool nodes for big applications that need a large amount of memory (for example, a Java application). Therefore, you may want to differentiate between default-pool and large-mem-pool.

In this case, the Kubernetes label beta.kubernetes.io/instance-type helps to distinguish the instance type of a node, so you can use nodeSelector to specify the desired nodes for a pod. For example, the following nodeSelector parameter forces the use of an f1-micro node for the nginx application:

//nodeSelector specifies f1-micro
$ cat nginx-pod-selector.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    beta.kubernetes.io/instance-type: f1-micro

//deploy pod
$ kubectl create -f nginx-pod-selector.yml
pod "nginx" created

//check the pod's location; it uses a default-pool node
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx 0/1 ContainerCreating 0 10s <none> gke-my-k8s-cluster-default-pool-bcae4a66-jnnw
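
To double-check which instance type each node carries, you can display the label as an extra column with kubectl's -L option:

//show the instance-type label next to each node
$ kubectl get nodes -L beta.kubernetes.io/instance-type
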
If you want to use a particular label instead of beta.kubernetes.io/instance-type, use the --node-labels option when creating a node pool. It assigns your desired labels to the nodes in that pool. For more details, please read the following online document: https://cloud.google.com/sdk/gcloud/reference/container/node-pools/create.
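
For instance, a node pool carrying a custom label could be created along the following lines; the pool name and the pool-type=high-mem label are hypothetical, and a pod would then select those nodes with a nodeSelector of pool-type: high-mem:

//create a node pool with a custom node label (pool name and label are hypothetical)
$ gcloud container node-pools create high-mem-pool --cluster my-k8s-cluster --machine-type g1-small --num-nodes 2 --node-labels=pool-type=high-mem --zone asia-northeast1-a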

Of course, you can remove a node pool if you no longer need it. To do that, run the following command to delete default-pool (f1-micro x 5 instances). If any pods are running on default-pool, this operation automatically migrates them (terminating the pods on default-pool and relaunching them on large-mem-pool):

//list Node Pool
$ gcloud container node-pools list --cluster my-k8s-cluster --zone asia-northeast1-a
NAME MACHINE_TYPE DISK_SIZE_GB NODE_VERSION
default-pool f1-micro 100 1.10.9-gke.5
large-mem-pool g1-small 100 1.10.9-gke.5


//delete default-pool
$ gcloud container node-pools delete default-pool --cluster my-k8s-cluster --zone asia-northeast1-a

//after a few minutes, the five default-pool nodes have been deleted
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-my-k8s-cluster-large-mem-pool-66e3a44a-jtdn Ready <none> 9m v1.10.9-gke.5
gke-my-k8s-cluster-large-mem-pool-66e3a44a-qpbr Ready <none> 9m v1.10.9-gke.5
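
If you want to confirm where the surviving pods ended up, list them with -o wide. Note that a pod pinned to f1-micro via nodeSelector, such as the earlier nginx example, would stay Pending now, because no f1-micro node is left to satisfy its selector:

//check which nodes the remaining pods are running on
$ kubectl get pods -o wide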

You may have noticed that all of the preceding operations happened in a single zone (asia-northeast1-a). Therefore, if the asia-northeast1-a zone gets an outage, your cluster will be down. In order to avoid zone failure, you may consider setting up a multi-zone cluster.
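
As a preview, gcloud can spread a cluster's node pool across several zones at creation time; the cluster name and zone list below are only an example, and note that --num-nodes is applied per zone:

//create a cluster whose nodes span three zones (names are examples)
$ gcloud container clusters create my-multi-zone-cluster --zone asia-northeast1-a --node-locations asia-northeast1-a,asia-northeast1-b,asia-northeast1-c --num-nodes 2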
