The master node of Kubernetes works as the control center of containers. Its duties include serving as a portal for end users, assigning tasks to nodes, and gathering information. In this recipe, we will see how to set up a Kubernetes master. There are three daemon processes on the master: kube-apiserver, kube-scheduler, and kube-controller-manager.
We can either start them using the wrapper command, hyperkube, or start them individually as daemons. Both solutions are covered in this section.
Before deploying the master node, make sure you have the etcd endpoint ready, which acts as the datastore of Kubernetes. You have to check that it is accessible and also configured with the overlay network's Classless Inter-Domain Routing (CIDR, https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) range. You can check this using the following command line:
// Check both etcd connection and CIDR setting
$ curl -L <etcd endpoint URL>/v2/keys/coreos.com/network/config
If the connection is successful, but the etcd configuration has no expected CIDR value, you can push the value through curl as well:
$ curl -L <etcd endpoint URL>/v2/keys/coreos.com/network/config -XPUT -d value='{ "Network": "<CIDR of overlay network>" }'
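The check-then-set decision above can be sketched as a small script. This is only a sketch: it keys off the presence of a Network value in the etcd v2 response, and the sample payloads below are assumed shapes, not output captured from a live etcd.

```shell
#!/bin/sh
# Succeeds when an etcd v2 response for /coreos.com/network/config
# already carries a "Network" (CIDR) value. Sample payloads below are
# illustrative, not captured from a real etcd.
has_cidr() {
    echo "$1" | grep -q 'Network'
}

# Assumed response when the key exists with a CIDR configured:
configured='{"action":"get","node":{"key":"/coreos.com/network/config","value":"{ \"Network\": \"10.1.0.0/16\" }"}}'
# Assumed response when the key is absent:
missing='{"errorCode":100,"message":"Key not found","cause":"/coreos.com/network/config"}'

if has_cidr "$configured"; then
    echo "CIDR already set"
fi
if ! has_cidr "$missing"; then
    echo "CIDR missing - push it with the curl -XPUT command above"
fi
```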
In order to build up a master, we propose the following steps: install the packages or binaries, start the daemons, and then do the verification. Follow the procedure and you will eventually get a working master.
Here, we offer two kinds of installation procedures: package installation through yum, with the daemons managed by systemd, and binary installation, with the daemons managed by an init script.

First, install the Kubernetes master packages using the yum command:

// install Kubernetes master package
# yum install kubernetes-master kubernetes-client
The kubernetes-master package contains the master daemons, while kubernetes-client installs a tool called kubectl, which is the command-line interface for communicating with the Kubernetes system. Since the master node serves as an endpoint for requests, with kubectl installed, users can easily control container applications and the environment through commands.
CentOS 7's RPM of Kubernetes
There are five Kubernetes RPMs (the .rpm files, https://en.wikipedia.org/wiki/RPM_Package_Manager) for different functionalities: kubernetes, kubernetes-master, kubernetes-client, kubernetes-node, and kubernetes-unit-test.
The first one, kubernetes, acts like a meta-package for the following three items: installing it pulls in kubernetes-master, kubernetes-client, and kubernetes-node at once. The one named kubernetes-node is for node installation. And the last one, kubernetes-unit-test, contains not only testing scripts, but also Kubernetes template examples.
The following configuration files and systemd unit files are generated by the yum install:

// profiles as environment variables for services
# ls /etc/kubernetes/
apiserver  config  controller-manager  scheduler

// systemd files
# ls /usr/lib/systemd/system/kube-*
/usr/lib/systemd/system/kube-apiserver.service
/usr/lib/systemd/system/kube-controller-manager.service
/usr/lib/systemd/system/kube-scheduler.service
Next, we will leave the systemd files as they are and modify the values in the configuration files under the directory /etc/kubernetes to build a connection with etcd. The file named config is a shared environment file for several Kubernetes daemon processes. For basic settings, simply change the items in apiserver:

# cat /etc/kubernetes/apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--insecure-port=8080"

# Port nodes listen on
# KUBELET_PORT="--kubelet_port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd_servers=<etcd endpoint URL>:<etcd exposed port>"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=<CIDR of overlay network>"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

# Add your own!
KUBE_API_ARGS="--cluster_name=<your cluster name>"
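To see how these profile entries reach the daemon, note that each KUBE_* variable simply holds a command-line flag, and the packaged unit file expands the variables onto the kube-apiserver invocation via EnvironmentFile. The following dry run imitates that expansion with an illustrative profile (the temp file and its values are assumptions, not your real /etc/kubernetes/apiserver):

```shell
#!/bin/sh
# Imitate how the unit file turns profile variables into kube-apiserver
# flags. The profile content here is illustrative; on a real master it
# lives in /etc/kubernetes/apiserver.
profile=$(mktemp)
cat > "$profile" <<'EOF'
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_API_PORT="--insecure-port=8080"
KUBE_ETCD_SERVERS="--etcd_servers=http://10.0.0.1:2379"
EOF

# Source the profile, as EnvironmentFile= effectively does,
# then expand the variables in order, as ExecStart does:
. "$profile"
args="$KUBE_API_ADDRESS $KUBE_API_PORT $KUBE_ETCD_SERVERS"
echo "would run: kube-apiserver $args"
rm -f "$profile"
```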
Then, start the daemons kube-apiserver, kube-scheduler, and kube-controller-manager one by one; the command systemctl can help with management. Be aware that kube-apiserver should always start first, since kube-scheduler and kube-controller-manager connect to the Kubernetes API server when they start running:

// start services
# systemctl start kube-apiserver
# systemctl start kube-scheduler
# systemctl start kube-controller-manager

// enable services so that they start automatically when the server boots up
# systemctl enable kube-apiserver
# systemctl enable kube-scheduler
# systemctl enable kube-controller-manager
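The six commands above can also be expressed as a loop that preserves the required order, with kube-apiserver first. The sketch below defaults to a dry run (it only echoes the systemctl commands); invoke it with RUN= (empty) on a real master to actually execute them:

```shell
#!/bin/sh
# Start and enable the master daemons in order; kube-apiserver comes
# first because the other two connect to it. RUN defaults to echo, so
# this is a dry run; set RUN= (empty) to really call systemctl.
RUN="${RUN-echo}"
for svc in kube-apiserver kube-scheduler kube-controller-manager; do
    $RUN systemctl start "$svc"
    $RUN systemctl enable "$svc"
done
```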
Although systemd does not return error messages when the API server is not running, both kube-scheduler and kube-controller-manager get connection errors and do not provide regular services:

$ sudo systemctl status kube-scheduler -l --output=cat
kube-scheduler.service - Kubernetes Scheduler Plugin
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled)
   Active: active (running) since Thu 2015-11-19 07:21:57 UTC; 5min ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 2984 (kube-scheduler)
   CGroup: /system.slice/kube-scheduler.service
           └─2984 /usr/bin/kube-scheduler --logtostderr=true --v=0 --master=127.0.0.1:8080
E1119 07:27:05.471102    2984 reflector.go:136] Failed to list *api.Node: Get http://127.0.0.1:8080/api/v1/nodes?fieldSelector=spec.unschedulable%3Dfalse: dial tcp 127.0.0.1:8080: connection refused
To guarantee the starting order, you can check the section systemd.unit in /usr/lib/systemd/system/kube-scheduler.service and /usr/lib/systemd/system/kube-controller-manager.service:

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=kube-apiserver.service
Wants=kube-apiserver.service
With the preceding settings, we can make sure kube-apiserver is the first daemon to be started.
Furthermore, if you expect that, when kube-apiserver is stopped, kube-scheduler and kube-controller-manager will be stopped as well, you can change the systemd.unit item Wants to Requires, as follows:

Requires=kube-apiserver.service
Requires carries stricter restrictions: in case the daemon kube-apiserver crashes, kube-scheduler and kube-controller-manager would also be stopped. On the other hand, a configuration with Requires makes debugging the master installation harder. It is recommended that you enable this parameter once you are sure every setting is correct.
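If you would rather not edit the unit files under /usr/lib/systemd/system directly (a package update may overwrite them), the same dependency can be declared in a systemd drop-in file; the path below follows the standard drop-in convention, while the file name itself is illustrative:

```ini
# /etc/systemd/system/kube-scheduler.service.d/10-apiserver-dep.conf
[Unit]
After=kube-apiserver.service
Requires=kube-apiserver.service
```

After creating the drop-in, run systemctl daemon-reload so that systemd picks up the change.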
It is also possible to download a tarball of the binary files for installation. The official website for the latest release is https://github.com/kubernetes/kubernetes/releases:
First, download the Kubernetes package, extract it, and copy the binary files, including hyperkube, to a system directory:

// download Kubernetes package
# curl -L -O https://github.com/GoogleCloudPlatform/kubernetes/releases/download/v1.1.2/kubernetes.tar.gz

// extract the tarball to a specific location; here we put it under /opt, so KUBE_HOME would be /opt/kubernetes
# tar zxvf kubernetes.tar.gz -C /opt/

// copy all binary files to a system directory
# cp /opt/kubernetes/server/bin/* /usr/local/bin/
Next, prepare an init script for the master daemons:

# cat /etc/init.d/kubernetes-master
#!/bin/bash
#
# This shell script takes care of starting and stopping kubernetes master

# Source function library.
. /etc/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

prog=/usr/local/bin/hyperkube
lockfile=/var/lock/subsys/`basename $prog`
hostname=`hostname`
logfile=/var/log/kubernetes.log

CLUSTER_NAME="<your cluster name>"
ETCD_SERVERS="<etcd endpoint URL>:<etcd exposed port>"
CLUSTER_IP_RANGE="<CIDR of overlay network>"
MASTER="127.0.0.1:8080"
Then, add the start and stop functions to the init script. Please double-check the etcd URL and overlay network CIDR to confirm that they are the same as in your previous installation:

start() {
    # Start daemon.
    echo $"Starting apiserver: "
    daemon $prog apiserver \
        --service-cluster-ip-range=${CLUSTER_IP_RANGE} \
        --port=8080 \
        --address=0.0.0.0 \
        --etcd_servers=${ETCD_SERVERS} \
        --cluster_name=${CLUSTER_NAME} \
        > ${logfile}_apiserver 2>&1 &

    echo $"Starting controller-manager: "
    daemon $prog controller-manager \
        --master=${MASTER} \
        > ${logfile}_controller-manager 2>&1 &

    echo $"Starting scheduler: "
    daemon $prog scheduler \
        --master=${MASTER} \
        > ${logfile}_scheduler 2>&1 &

    RETVAL=$?
    [ $RETVAL -eq 0 ] && touch $lockfile
    return $RETVAL
}

stop() {
    [ "$EUID" != "0" ] && exit 4
    echo -n $"Shutting down $prog: "
    killproc $prog
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && rm -f $lockfile
    return $RETVAL
}
# See how we were called.
case "$1" in
  start)
    start
    ;;
  stop)
    stop
    ;;
  status)
    status $prog
    ;;
  restart|force-reload)
    stop
    start
    ;;
  try-restart|condrestart)
    if status $prog > /dev/null; then
      stop
      start
    fi
    ;;
  reload)
    exit 3
    ;;
  *)
    echo $"Usage: $0 {start|stop|status|restart|try-restart|force-reload}"
    exit 2
esac
Now, you can start the service named kubernetes-master:

$ sudo service kubernetes-master start
For the master installed through the package manager, the daemons managed by systemd are able to report their status and logs:

# systemctl status <service name>
More detailed logs can be retrieved through journalctl:

# journalctl -u <service name> --no-pager --full
Once you find a line showing Started... in the output, you can confirm that the service setup has passed the verification.
Now, the command-line tool, kubectl, can begin the operation:

// check Kubernetes version
# kubectl version
Client Version: version.Info{Major:"1", Minor:"0.3", GitVersion:"v1.0.3.34+b9a88a7d0e357b", GitCommit:"b9a88a7d0e357be2174011dd2b127038c6ea8929", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"0.3", GitVersion:"v1.0.3.34+b9a88a7d0e357b", GitCommit:"b9a88a7d0e357be2174011dd2b127038c6ea8929", GitTreeState:"clean"}
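As a final sanity check, the client and server lines should report the same GitVersion. Below is a small parsing sketch that uses hard-coded sample lines in the shape shown above, rather than live kubectl output:

```shell
#!/bin/sh
# Extract GitVersion from kubectl-version-style lines and compare the
# client and server values. The two sample lines are illustrative.
client='Client Version: version.Info{Major:"1", Minor:"0.3", GitVersion:"v1.0.3.34+b9a88a7d0e357b", GitTreeState:"clean"}'
server='Server Version: version.Info{Major:"1", Minor:"0.3", GitVersion:"v1.0.3.34+b9a88a7d0e357b", GitTreeState:"clean"}'

gitversion() {
    echo "$1" | sed -n 's/.*GitVersion:"\([^"]*\)".*/\1/p'
}

if [ "$(gitversion "$client")" = "$(gitversion "$server")" ]; then
    echo "client and server GitVersion match: $(gitversion "$client")"
fi
# prints: client and server GitVersion match: v1.0.3.34+b9a88a7d0e357b
```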
From this recipe, you know how to create your own Kubernetes master. You can also check out the following recipes: