Gathering data from Kubernetes

With Prometheus, the steps for implementing the monitoring layers discussed previously are now quite clear:

  1. Install the exporters
  2. Annotate them with appropriate tags
  3. Collect them on auto-discovered endpoints
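The annotation step can be sketched as follows. Note that the `prometheus.io/*` keys shown here are a widely used convention, not a Kubernetes built-in; the exact keys must match whatever your Prometheus scrape configuration looks for:

```yaml
# Sketch of a pod template using the common prometheus.io annotations.
# The key names are a convention and must agree with your scrape config.
metadata:
  annotations:
    prometheus.io/scrape: "true"   # mark this pod as a scrape target
    prometheus.io/port: "9100"     # port where metrics are exposed
    prometheus.io/path: "/metrics" # metrics path (the usual default)
```

Pods carrying these annotations are then picked up automatically by the discovery rules, with no per-exporter scrape job required.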

The host layer monitoring in Prometheus is done by the node exporter (https://github.com/prometheus/node_exporter). Its Kubernetes template can be found under the examples for this chapter, and it contains one DaemonSet with a scrape annotation. Install it as follows:

$ kubectl apply -f exporters/prom-node-exporter.yml

Its corresponding target in Prometheus will be discovered and created by the pod discovery role if using the example configuration.
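For reference, a pod discovery job along the lines of the example configuration can be sketched like this (a minimal version, assuming the `prometheus.io/*` annotation convention shown earlier):

```yaml
# Sketch of a pod-role scrape job: keep only annotated pods, then
# rewrite the metrics path and port from the annotations.
- job_name: 'kubernetes-pods'
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: "true"
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      target_label: __address__
```

The `keep` action drops every pod that is not explicitly annotated, which keeps the target list free of noise.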

The container layer collector should be kubelet. Consequently, discovering it with the node discovery role is all we need to do.
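A node-role job for the kubelet's cAdvisor metrics can be sketched as follows. This is one common variant that scrapes kubelets directly over HTTPS using the in-cluster service account; some clusters instead route through the API server proxy, so adjust to your environment:

```yaml
# Sketch: scrape container metrics from each kubelet's cAdvisor endpoint,
# assuming in-cluster service account credentials are available.
- job_name: 'kubernetes-cadvisor'
  kubernetes_sd_configs:
    - role: node
  scheme: https
  metrics_path: /metrics/cadvisor
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    insecure_skip_verify: true   # kubelet certs are often self-signed
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  relabel_configs:
    - action: labelmap           # carry node labels onto the metrics
      regex: __meta_kubernetes_node_label_(.+)
```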

Monitoring at the Kubernetes layer is done by kube-state-metrics, which was also introduced previously. Because it ships with Prometheus annotations, no further configuration is needed.

At this point, we've already set up a strong monitoring stack based on Prometheus. With respect to the application and the external resource monitoring, there are extensive exporters in the Prometheus ecosystem to support the monitoring of various components inside our system. For instance, if we need statistics on our MySQL database, we could just install MySQL Server Exporter (https://github.com/prometheus/mysqld_exporter), which offers comprehensive and useful metrics.
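As an illustration, a deployment of mysqld_exporter might be sketched as below. The image tag, secret name, and connection-string format are illustrative assumptions; consult the exporter's README for the options supported by your version:

```yaml
# Hedged sketch: run mysqld_exporter as a Deployment with scrape
# annotations. Secret name and DSN format are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysqld-exporter
spec:
  replicas: 1
  selector:
    matchLabels: { app: mysqld-exporter }
  template:
    metadata:
      labels: { app: mysqld-exporter }
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9104"  # mysqld_exporter's default port
    spec:
      containers:
        - name: exporter
          image: prom/mysqld_exporter
          env:
            - name: DATA_SOURCE_NAME  # connection string for the target MySQL
              valueFrom:
                secretKeyRef: { name: mysql-exporter-dsn, key: dsn }
          ports:
            - containerPort: 9104
```

Because the pod carries the scrape annotations, it is picked up by the same pod discovery rule as the other exporters.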

In addition to the metrics that we have already described, there are some other useful metrics from Kubernetes components that play an important role:

  • Kubernetes API server: The API server exposes its stats at /metrics, and this target is enabled by default.
  • kube-controller-manager: This component exposes metrics on port 10252, but it's invisible on some managed Kubernetes services such as GKE. If you're on a self-hosted cluster, applying kubernetes/self/kube-controller-manager-metrics-svc.yml creates endpoints for Prometheus.
  • kube-scheduler: This uses port 10251, and it's also not visible on clusters managed by GKE. kubernetes/self/kube-scheduler-metrics-svc.yml is the template for creating a target for Prometheus.
  • kube-dns: DNS in Kubernetes is managed by CoreDNS, which exposes its stats at port 9153. The corresponding template is kubernetes/self/core-dns-metrics-svc.yml.
  • etcd: The etcd cluster also has a Prometheus metrics endpoint on port 2379. If your etcd cluster is self-hosted and managed by Kubernetes, you can use kubernetes/self/etcd-server.yml as a reference.
  • Nginx ingress controller: The nginx controller publishes metrics at port 10254, and will give you rich information about the state of nginx, as well as the duration, size, method, and status code of traffic routed by nginx. A full guide can be found here: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/monitoring.md.
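The kubernetes/self/*.yml templates mentioned above follow a common pattern: a headless Service in kube-system that resolves to the control-plane pods. A sketch for kube-scheduler might look like this; the label selector is an assumption that depends on how your control-plane pods are labeled:

```yaml
# Hedged sketch of exposing kube-scheduler metrics to Prometheus.
# The "component: kube-scheduler" selector is cluster-specific.
apiVersion: v1
kind: Service
metadata:
  name: kube-scheduler-metrics
  namespace: kube-system
  annotations:
    prometheus.io/scrape: "true"
spec:
  clusterIP: None   # headless: endpoints point straight at the pods
  selector:
    component: kube-scheduler
  ports:
    - name: metrics
      port: 10251
```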
If DNS in your cluster is instead served by the older skydns-based kube-dns addon, metrics are also exposed on its containers. The typical kube-dns pod in this setup has two containers, dnsmasq and sky-dns, whose metrics ports are 10054 and 10055 respectively. The corresponding template is kubernetes/self/skydns-metrics-svc.yml if we need it.