We've learned the basic network concepts in GCP. Let's launch our first GKE cluster using the following parameters:
| Parameter | Description | Value in example |
| --- | --- | --- |
| `--cluster-version` | Supported cluster version (refer to https://cloud.google.com/kubernetes-engine/release-notes) | `1.9.2-gke.1` |
| `--machine-type` | Instance type of the nodes (refer to https://cloud.google.com/compute/docs/machine-types) | `f1-micro` |
| `--num-nodes` | Number of nodes in the cluster | `3` |
| `--network` | Target VPC network | `k8s-network` (the one we just created) |
| `--zone` | Target zone | `us-central1-a` (you're free to use any zone) |
| `--tags` | Network tags to be attached to the nodes | `private` |
| `--service-account` or `--scopes` | Node identity (refer to https://cloud.google.com/sdk/gcloud/reference/container/clusters/create for more scope values) | `storage-rw,compute-ro` |
Referring to the preceding parameters, let's launch a three-node cluster with the gcloud command:
// create GKE cluster
$ gcloud container clusters create my-k8s-cluster --cluster-version 1.9.2-gke.1 --machine-type f1-micro --num-nodes 3 --network k8s-network --zone us-central1-a --tags private --scopes=storage-rw,compute-ro
WARNING: The behavior of --scopes will change in a future gcloud release: service-control and service-management scopes will no longer be added to what is specified in --scopes. To use these scopes, add them explicitly to --scopes. To use the new behavior, set container/new_scopes_behavior property (gcloud config set container/new_scopes_behavior true).
WARNING: Starting in Kubernetes v1.10, new clusters will no longer get compute-rw and storage-ro scopes added to what is specified in --scopes (though the latter will remain included in the default --scopes). To use these scopes, add them explicitly to --scopes. To use the new behavior, set container/new_scopes_behavior property (gcloud config set container/new_scopes_behavior true).
Creating cluster my-k8s-cluster...done.
Created [https://container.googleapis.com/v1/projects/kubernetes-cookbook/zones/us-central1-a/clusters/my-k8s-cluster].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/us-central1-a/my-k8s-cluster?project=kubernetes-cookbook
kubeconfig entry generated for my-k8s-cluster.
NAME LOCATION MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS
my-k8s-cluster us-central1-a 1.9.2-gke.1 35.225.24.4 f1-micro 1.9.2-gke.1 3 RUNNING
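The create command blocks until the cluster reaches RUNNING, but if you ever create clusters asynchronously or from a script, you can poll the status yourself. A minimal sketch, assuming an installed and authenticated gcloud CLI; the `wait_for_cluster` helper name is ours:

```shell
# Poll the cluster status until GKE reports RUNNING.
# Hypothetical helper; assumes gcloud is installed and authenticated.
wait_for_cluster() {
  local name=$1 zone=$2
  while true; do
    local status
    status=$(gcloud container clusters describe "$name" --zone "$zone" \
      --format='value(status)')
    echo "cluster status: $status"
    [ "$status" = "RUNNING" ] && return 0
    sleep 15
  done
}

# wait_for_cluster my-k8s-cluster us-central1-a
```

The `--format='value(status)'` flag strips the output down to the bare status field, which keeps the comparison simple.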
After the cluster is up and running, we can start to connect to it by configuring kubectl:
# gcloud container clusters get-credentials my-k8s-cluster --zone us-central1-a --project kubernetes-cookbook
Fetching cluster endpoint and auth data.
kubeconfig entry generated for my-k8s-cluster.
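The kubeconfig entry that get-credentials generates is named after the project, zone, and cluster (`gke_<project>_<zone>_<cluster>`), so a quick sanity check is to compare that name against kubectl's current context. A sketch using this recipe's values:

```shell
# Build the context name GKE's get-credentials generates.
# Values are from this recipe; the naming scheme is gke_<project>_<zone>_<cluster>.
project=kubernetes-cookbook
zone=us-central1-a
cluster=my-k8s-cluster
expected_context="gke_${project}_${zone}_${cluster}"
echo "$expected_context"

# Compare with the live context (requires kubectl):
#   kubectl config current-context
```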
Let's see if the cluster is healthy:
// list cluster components
# kubectl get componentstatuses
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
And we can check the nodes inside the cluster:
// list the nodes in cluster
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-my-k8s-cluster-default-pool-7d0359ed-0rl8 Ready <none> 21m v1.9.2-gke.1
gke-my-k8s-cluster-default-pool-7d0359ed-1s2v Ready <none> 21m v1.9.2-gke.1
gke-my-k8s-cluster-default-pool-7d0359ed-61px Ready <none> 21m v1.9.2-gke.1
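If you want to script this check, you can count the nodes reporting `Ready` and compare against the `--num-nodes` value used at creation time. A small sketch, assuming kubectl access; the `count_ready_nodes` helper name is ours:

```shell
# Count nodes whose STATUS column is exactly "Ready"
# (excludes NotReady and other status variants).
# Hypothetical helper; assumes kubectl access to the cluster.
count_ready_nodes() {
  kubectl get nodes --no-headers | awk '$2 == "Ready"' | wc -l | tr -d ' '
}

# [ "$(count_ready_nodes)" -eq 3 ] && echo "all 3 nodes are Ready"
```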
We can also use kubectl to check cluster info:
// list cluster info
# kubectl cluster-info
Kubernetes master is running at https://35.225.24.4
GLBCDefaultBackend is running at https://35.225.24.4/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
Heapster is running at https://35.225.24.4/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://35.225.24.4/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubernetes-dashboard is running at https://35.225.24.4/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
Metrics-server is running at https://35.225.24.4/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
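The master address shown by cluster-info can also be read from the kubeconfig entry, which is easier to consume in scripts than the human-oriented cluster-info output. A sketch, assuming kubectl access; the `master_url` helper name is ours:

```shell
# Read the API server endpoint for the current context from kubeconfig.
# --minify limits the view to the current context; assumes kubectl access.
master_url() {
  kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
}

# master_url   # should print the master address for this cluster
```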