Incorporating custom metrics

Although scaling pods based on CPU and memory usage is quite intuitive, it is sometimes inadequate to cover situations such as scaling on network connections, disk IOPS, or database transactions. As a consequence, the custom metrics API and the external metrics API were introduced so that Kubernetes components can access metrics that aren't supported out of the box. We've mentioned that, aside from Resource, there are also Pods, Object, and External metric types in an HPA.

The Pods and Object metric types refer to metrics produced by objects inside Kubernetes. When an HPA queries a metric, the related metadata, such as pod names, namespaces, and labels, is sent to the custom metrics API. On the other hand, External metrics refer to things outside the cluster, such as the metrics of a database service from the cloud provider, and they are fetched from the external metrics API by metric name only. Their relationship is illustrated as follows:

We know that the metrics server is a program that runs inside the cluster, but what exactly are the custom metrics and external metrics API services? Kubernetes can't know about every monitoring system and external service, so it provides API interfaces for integrating those components instead. If our monitoring system supports these interfaces, we can register it as the provider of the metrics APIs; otherwise, we'll need an adapter to translate the metadata from Kubernetes into objects in our monitoring system. In the same manner, we'll need an implementation of the external metrics API interface in order to use External metrics.

In Chapter 7, Monitoring and Logging, we built a monitoring system with Prometheus, but it supports neither the custom nor the external metrics API. We'll need an adapter to bridge the HPA and Prometheus, such as the Prometheus adapter (https://github.com/DirectXMan12/k8s-prometheus-adapter).

For other monitoring solutions, there is a list of adapters for different monitoring providers: https://github.com/kubernetes/metrics/blob/master/IMPLEMENTATIONS.md#custom-metrics-api.

If none of the listed implementations support your monitoring system, there's still an API service template for building your own adapter for both custom and external metrics: https://github.com/kubernetes-incubator/custom-metrics-apiserver.

To make a service available to Kubernetes, it has to be registered as an API service under the aggregation layer. We can find out which service is the backend of each metrics API by describing the related apiservices objects:

  • v1beta1.metrics.k8s.io
  • v1beta1.custom.metrics.k8s.io
  • v1beta1.external.metrics.k8s.io

We can see that the metrics-server service in kube-system is serving as the source of the Resource metrics:

$ kubectl describe apiservices v1beta1.metrics.k8s.io
Name:         v1beta1.metrics.k8s.io
...
Spec:
  Group:                    metrics.k8s.io
  Group Priority Minimum:   100
  Insecure Skip TLS Verify: true
  Service:
    Name:                   metrics-server
    Namespace:              kube-system
  Version:                  v1beta1
...

The example templates based on the deployment instructions of the Prometheus adapter are available in our repository (chapter8/8-2_scaling/prometheus-k8s-adapter) and are configured with the Prometheus service that we deployed in Chapter 7, Monitoring and Logging. You can deploy them in the following order:

$ kubectl apply -f custom-metrics-ns.yml
$ kubectl apply -f gen-secrets.yml
$ kubectl apply -f configmap.yml

$ kubectl apply -f adapter.yml

Only default metric translation rules are configured in the example. If you want to make your own metrics available to Kubernetes, you'll have to customize the configuration to your needs by following the project's instructions (https://github.com/DirectXMan12/k8s-prometheus-adapter/blob/master/docs/config.md).
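
As a rough illustration of what such a rule looks like, the following sketch assumes a counter series named fs_read_total with namespace and pod labels in Prometheus (both the metric and its labels are hypothetical; consult the config.md document above for the authoritative syntax):

```yaml
rules:
# Expose the hypothetical fs_read_total counter as the custom metric "fs_read"
- seriesQuery: 'fs_read_total{namespace!="",pod!=""}'
  resources:
    # Map Prometheus labels to Kubernetes resources
    overrides:
      namespace: {resource: "namespace"}
      pod: {resource: "pod"}
  name:
    matches: "^(.*)_total$"
    as: "${1}"
  # Convert the raw counter into a per-second rate before serving it
  metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'
```

The metricsQuery template is what the adapter executes against Prometheus whenever the HPA asks for the metric.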

To verify the installation, we can query the following path and see whether any metrics are returned from our monitoring backend (jq is used only to format the result):

$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/" | jq '.resources[].name'
"namespaces/network_udp_usage"
"pods/memory_usage_bytes"
"namespaces/spec_cpu_period"
...

Back to the HPA, the configuration of non-resource metrics is quite similar to resource metrics. The Pods type specification snippet is as follows:

...
metrics:
- type: Pods
  pods:
    metric:
      name: <metrics-name>
      selector: <optional, a LabelSelector object>
    target:
      type: AverageValue or Value
      averageValue or value: <quantity>
...
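
Putting the snippet into context, a complete HPA using a Pods metric might look like the following (the fs_read metric name and the worker Deployment are assumptions for illustration; the metric has to be exposed through your custom metrics backend):

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: worker-scaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: worker
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: fs_read        # served by the custom metrics API
      target:
        type: AverageValue
        averageValue: "100"  # desired average per pod
```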

The definition of an Object metric is as follows:

...
metrics:
- type: Object
  object:
    metric:
      name: <metrics-name>
      selector: <optional, a LabelSelector object>
    describedObject:
      apiVersion: <api version>
      kind: <kind>
      name: <object name>
    target:
      type: AverageValue or Value
      averageValue or value: <quantity>
...

The syntax for the External metrics is almost identical to the Pods metrics, except for the following part:

- type: External
  external:
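
Expanded, an External metric block could look as follows (the queue_size metric and its selector are hypothetical, standing in for a metric from an external system such as a hosted message queue):

```yaml
metrics:
- type: External
  external:
    metric:
      name: queue_size       # hypothetical metric from outside the cluster
      selector:
        matchLabels:
          queue: tasks
    target:
      type: AverageValue
      averageValue: "30"
```

Since no in-cluster object metadata is attached to External queries, the metric selector is the only way to narrow down which series the backend should return.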

Let's say that we specify a Pods metric with the metric name fs_read, and the associated target controller, a Deployment, selects pods with app=worker. In that case, the HPA would query the custom metrics API with the following information:

  • Namespace: the HPA's namespace
  • Metric name: fs_read
  • labelSelector: app=worker

Furthermore, if we have the optional metric selector <type>.metric.selector configured, it would be passed to the backend as well. A query for the previous example, plus a metric selector of app=myapp, could look like this:

/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/fs_read?labelSelector=app=worker&metricLabelSelector=app=myapp
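
To make the path layout concrete, here's a small Python sketch that assembles such a query path from the HPA metadata (this is a hypothetical helper for illustration only; the HPA controller builds these paths internally):

```python
def pods_metric_path(namespace, metric_name, label_selector, metric_selector=None):
    """Assemble the custom metrics API path for a Pods metric query.

    The pod selector comes from the target controller; the optional
    metric selector comes from <type>.metric.selector in the HPA spec.
    """
    path = ("/apis/custom.metrics.k8s.io/v1beta1"
            f"/namespaces/{namespace}/pods/*/{metric_name}"
            f"?labelSelector={label_selector}")
    if metric_selector:
        path += f"&metricLabelSelector={metric_selector}"
    return path

# Reproduces the fs_read query shown above
print(pods_metric_path("default", "fs_read", "app=worker", "app=myapp"))
```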

After the HPA gets the values for a metric, it aggregates them with either AverageValue or the raw Value to decide whether to scale the target. Bear in mind that the Utilization target type isn't supported here.
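
The scaling decision follows the standard HPA proportional formula, desiredReplicas = ceil(currentReplicas * currentValue / targetValue). A rough Python sketch of that calculation (the real controller additionally applies tolerances and stabilization windows):

```python
import math

def desired_replicas(current_replicas, metric_values, target, target_type):
    """Approximate the HPA scaling decision for non-resource metrics."""
    if target_type == "AverageValue":
        # Average the metric across the pods backing the target
        current = sum(metric_values) / current_replicas
    else:  # "Value": compare the raw metric directly, e.g. an Object metric
        current = sum(metric_values)
    return math.ceil(current_replicas * (current / target))

# 3 pods each reading 100 ops/s against a 50 ops/s average target -> 6 replicas
print(desired_replicas(3, [100, 100, 100], 50, "AverageValue"))
```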

For an Object metric, the only difference is that the HPA attaches information about the referenced object to the query. For example, suppose we have the following configuration in the default namespace:

spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: gateway
  ...
  metrics:
  - type: Object
    object:
      metric:
        name: rps
        selector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - gwapp
      describedObject:
        apiVersion: extensions/v1beta1
        kind: Ingress
        name: cluster-ingress
      ...

The query to the monitoring backend would then be as follows:

/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/ingresses.extensions/cluster-ingress/rps?metricLabelSelector=app+in+(gwapp)

Notice that no information about the target controller is passed in this case, and that we can't reference objects in other namespaces.
