Congratulations! You've built your own Kubernetes cluster in the previous sections. Now, let's move on to running your very first container, nginx (http://nginx.org/), an open source reverse proxy server, load balancer, and web server.
Before we run the first container in Kubernetes, it's better to check that every component works as expected. Please follow these steps on the master to check whether the environment is ready to use:
# check component statuses are all healthy
$ kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   nil
scheduler            Healthy   ok                   nil
etcd-0               Healthy   {"health": "true"}   nil
# check master is running
$ kubectl cluster-info
Kubernetes master is running at http://localhost:8080
# check nodes are all Ready
$ kubectl get nodes
NAME        LABELS                             STATUS
kub-node1   kubernetes.io/hostname=kub-node1   Ready
kub-node2   kubernetes.io/hostname=kub-node2   Ready
Before we go to the next section, make sure the nodes can access the Docker registry. We will use the nginx image from Docker Hub (https://hub.docker.com/) as an example. If you want to run your own application, be sure to dockerize it first! What you need to do for your custom application is to write a Dockerfile (https://docs.docker.com/v1.8/reference/builder), build the image, and push it into a public/private Docker registry.
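A minimal sketch of what dockerizing a custom application could look like; the directory layout, image name, and registry address below are hypothetical examples:

```shell
# Write a Dockerfile for a static site served by nginx
# (./html is an assumed directory holding your content):
cat <<'EOF' > Dockerfile
FROM nginx
COPY ./html /usr/share/nginx/html
EOF
# Then build and push it to your own registry, for example:
#   docker build -t myregistry.example.com/my-app:1.0 .
#   docker push myregistry.example.com/my-app:1.0
```

Once pushed, the image name can be used with kubectl just like the public nginx image.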
Test your node connectivity with the public/private Docker registry
On your node, try docker pull nginx to test whether you can pull the image from Docker Hub. If you're behind a proxy, add HTTP_PROXY to your Docker configuration file (normally /etc/sysconfig/docker). If you want to run an image from a private repository in Docker Hub, use docker login on the node to place your credentials in ~/.docker/config.json, then copy the credentials into /var/lib/kubelet/.dockercfg in JSON format and restart Docker:
# put the credential of docker registry
$ cat /var/lib/kubelet/.dockercfg
{
  "<docker registry endpoint>": {
    "auth": "SAMPLEAUTH=",
    "email": "[email protected]"
  }
}
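The auth field is the Base64 encoding of the username:password pair for the registry. You can generate it yourself; the credentials below are hypothetical examples:

```shell
# base64-encode "username:password" for the .dockercfg auth field
# (myuser/mypassword are placeholder credentials):
printf 'myuser:mypassword' | base64
# → bXl1c2VyOm15cGFzc3dvcmQ=
```

Note that printf is used instead of echo to avoid encoding a trailing newline into the value.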
If you're using your own private registry, specify INSECURE_REGISTRY
in the Docker configuration file.
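For illustration, entries in the Docker configuration file could look like the following; this is a sketch, and the proxy and registry addresses are hypothetical, so adjust them to match your environment:

```shell
# Hypothetical additions to /etc/sysconfig/docker:
HTTP_PROXY="http://proxy.example.com:3128"
INSECURE_REGISTRY="--insecure-registry myregistry.example.com:5000"
```

Remember to restart the Docker daemon after changing this file so the settings take effect.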
We will use the official Docker image of nginx as an example. The image is prebuilt in Docker Hub (https://hub.docker.com/_/nginx/).
Many official and public images are available on Docker Hub, so you do not need to build them from scratch. Just pull an image and add your own custom settings on top of it.
We could use kubectl run to create a certain number of containers. The Kubernetes master will then schedule the pods onto the nodes to run:

$ kubectl run <replication controller name> --image=<image name> --replicas=<number of replicas> [--port=<exposing port>]
Let's create a replication controller named my-first-nginx from the nginx image and expose port 80. We could deploy one or more containers in what is referred to as a pod; in this case, we will deploy one container per pod. Just like normal Docker behavior, if the nginx image doesn't exist locally, it will be pulled from Docker Hub by default:

# Pull the nginx image and run with 2 replicas, and expose the container port 80
$ kubectl run my-first-nginx --image=nginx --replicas=2 --port=80
CONTROLLER       CONTAINER(S)     IMAGE(S)   SELECTOR             REPLICAS
my-first-nginx   my-first-nginx   nginx      run=my-first-nginx   2
The name of the replication controller <my-first-nginx> cannot be duplicated
Resources (pods, services, replication controllers, and so on) in one Kubernetes namespace cannot have duplicate names. If you run the preceding command twice, the following error will pop up:
Error from server: replicationControllers "my-first-nginx" already exists
Let's check the current status of all pods using kubectl get pods. Normally, the status of the pods will stay Pending for a while, since it takes some time for the nodes to pull the image from Docker Hub:

# get all pods
$ kubectl get pods
NAME                   READY     STATUS    RESTARTS   AGE
my-first-nginx-nzygc   1/1       Running   0          1m
my-first-nginx-yd84h   1/1       Running   0          1m
If the pod status is not Running for a long time
You could always use kubectl get pods to check the current status of the pods and kubectl describe pods $pod_name to check the detailed information of a pod. If you make a typo in the image name, you might get the Image not found error message; if you are pulling the images from a private repository or registry without the proper credentials set, you might get the Authentication error message. If you get the Pending status for a long time, check the node capacity and make sure you aren't running more replicas than the node capacity described in the Preparing your environment section allows. If there are other unexpected error messages, you could stop the pods or the entire replication controller to force the master to schedule the tasks again.
We can also check the replication controller to make sure all replicas are deployed and in the Running status:

# get replication controllers
$ kubectl get rc
CONTROLLER       CONTAINER(S)     IMAGE(S)   SELECTOR             REPLICAS
my-first-nginx   my-first-nginx   nginx      run=my-first-nginx   2
We might also want to create an external IP address for the nginx replication controller. On cloud providers that support an external load balancer (such as Google Compute Engine), using the LoadBalancer type will provision a load balancer for external access. On the other hand, you can still expose the port by creating a Kubernetes service as follows, even though you're not running on a platform that supports an external load balancer. We'll describe how to access this externally later:
# expose port 80 for replication controller named my-first-nginx
$ kubectl expose rc my-first-nginx --port=80 --type=LoadBalancer
NAME             LABELS               SELECTOR             IP(S)   PORT(S)
my-first-nginx   run=my-first-nginx   run=my-first-nginx           80/TCP
We can check the status of the service we just created:
# get all services
$ kubectl get service
NAME             LABELS               SELECTOR             IP(S)            PORT(S)
my-first-nginx   run=my-first-nginx   run=my-first-nginx   192.168.61.150   80/TCP
Congratulations! You just ran your first container with a Kubernetes pod and exposed port 80
with the Kubernetes service.
We could stop the application by stopping the replication controller and the service. Before doing this, we suggest you read through the following introduction first to understand more about how it works:
# stop replication controller named my-first-nginx
$ kubectl stop rc my-first-nginx
replicationcontrollers/my-first-nginx

# stop service named my-first-nginx
$ kubectl stop service my-first-nginx
services/my-first-nginx
Let's take a look at the details of the service using describe in the kubectl command. We created one Kubernetes service with the type LoadBalancer, which dispatches the traffic to the two endpoints 192.168.50.4 and 192.168.50.5 on port 80:
$ kubectl describe service my-first-nginx
Name:                my-first-nginx
Namespace:           default
Labels:              run=my-first-nginx
Selector:            run=my-first-nginx
Type:                LoadBalancer
IP:                  192.168.61.150
Port:                <unnamed>   80/TCP
NodePort:            <unnamed>   32697/TCP
Endpoints:           192.168.50.4:80,192.168.50.5:80
Session Affinity:    None
No events.
Port here is an abstract service port, which allows any other resources to access the service within the cluster. nodePort indicates the external port that allows external access. targetPort is the port the container allows traffic into; by default, it is the same as Port. The illustration is as follows: external access reaches the service via nodePort; the service acts as a load balancer, dispatching the traffic to the pods using Port 80; each pod then passes the traffic into the corresponding container using targetPort 80:
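The same relationship between the three ports can be seen in a declarative service definition. The following is a sketch of a YAML manifest roughly equivalent to what kubectl expose generated above (the filename is a hypothetical example):

```shell
# Write a service manifest showing how the three ports relate:
cat <<'EOF' > my-first-nginx-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-first-nginx
spec:
  type: LoadBalancer
  selector:
    run: my-first-nginx
  ports:
  - port: 80        # abstract service port, reachable inside the cluster
    targetPort: 80  # container port the traffic is forwarded to
    # nodePort is assigned automatically (32697 here) unless specified
EOF
```

Running kubectl create -f my-first-nginx-svc.yaml would create an equivalent service from this file.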
On any node or the master (if your master has flannel installed), you should be able to access the nginx service using the ClusterIP 192.168.61.150 on port 80:
# curl from service IP
$ curl 192.168.61.150:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
We get the same result if we curl the target port of a pod directly:
# curl from endpoint
$ curl 192.168.50.4:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
If you'd like to try out external access, use your browser to access the external IP address. Please note that the external IP address depends on which environment you're running in.
In Google Compute Engine, you could access it via the ClusterIP with the proper firewall rules set:
$ curl http://<clusterIP>
In a custom environment, such as an on-premises datacenter, you could access it through the IP address of the nodes:
$ curl http://<nodeIP>:<nodePort>
You should be able to see the following page using a web browser:
We have run our very first container in this section.